Computational Methods and Experimental Measurements XV
WIT Transactions on Modelling and Simulation
WITeLibrary
Home of the Transactions of the Wessex Institute.
Papers presented at Computational Methods and Experimental Measurements XV
are archived in the WIT eLibrary in volume 51 of WIT Transactions on
Modelling and Simulation (ISSN 1743-355X). The WIT eLibrary provides the
international scientific community with immediate and permanent access to individual
papers presented at WIT conferences.
Visit the WIT eLibrary at www.witpress.com.
CMEM XV
CONFERENCE CHAIRMEN
G. M. Carlomagno
University of Naples Federico II, Italy
C.A. Brebbia
Wessex Institute of Technology, UK
Organised by
Wessex Institute of Technology, UK
University of Naples Federico II, Italy
Sponsored by
WIT Transactions on Modelling and Simulation
WIT Transactions
Transactions Editor
Carlos Brebbia
Wessex Institute of Technology
Ashurst Lodge, Ashurst
Southampton SO40 7AA, UK
Email: carlos@wessex.ac.uk
Editorial Board
B Abersek University of Maribor, Slovenia
Y N Abousleiman University of Oklahoma,
USA
J M Hale University of Newcastle, UK
K Hameyer Katholieke Universiteit Leuven,
Belgium
C Hanke Danish Technical University,
Denmark
K Hayami University of Tokyo, Japan
Y Hayashi Nagoya University, Japan
L Haydock Newage International Limited, UK
A H Hendrickx Free University of Brussels,
Belgium
C Herman Johns Hopkins University, USA
S Heslop University of Bristol, UK
I Hideaki Nagoya University, Japan
D A Hills University of Oxford, UK
W F Huebner Southwest Research Institute,
USA
J A C Humphrey Bucknell University, USA
M Y Hussaini Florida State University, USA
W Hutchinson Edith Cowan University,
Australia
T H Hyde University of Nottingham, UK
M Iguchi Science University of Tokyo, Japan
D B Ingham University of Leeds, UK
L Int Panis VITO Expertisecentrum IMS,
Belgium
N Ishikawa National Defence Academy, Japan
J Jaafar UiTM, Malaysia
W Jager Technical University of Dresden,
Germany
Y Jaluria Rutgers University, USA
C M Jefferson University of the West of
England, UK
P R Johnston Griffith University, Australia
D R H Jones University of Cambridge, UK
N Jones University of Liverpool, UK
D Kaliampakos National Technical
University of Athens, Greece
N Kamiya Nagoya University, Japan
D L Karabalis University of Patras, Greece
Computational Methods
and Experimental
Measurements XV
EDITORS
G.M. Carlomagno
University of Naples Federico II, Italy
C.A. Brebbia
Wessex Institute of Technology, UK
Published by
WIT Press
Ashurst Lodge, Ashurst, Southampton, SO40 7AA, UK
Tel: 44 (0) 238 029 3223; Fax: 44 (0) 238 029 2853
E-Mail: witpress@witpress.com
http://www.witpress.com
For USA, Canada and Mexico
Computational Mechanics Inc
25 Bridge Street, Billerica, MA 01821, USA
Tel: 978 667 5841; Fax: 978 667 7582
E-Mail: info@compmech.com
http://www.witpress.com
British Library Cataloguing-in-Publication Data
A Catalogue record for this book is available
from the British Library
ISBN: 978-1-84564-540-3
ISSN: 1746-4064 (print)
ISSN: 1743-355X (on-line)
The texts of the papers in this volume were set
individually by the authors or under their supervision.
Only minor corrections to the text may have been carried
out by the publisher.
No responsibility is assumed by the Publisher, the Editors and Authors for any injury and/or
damage to persons or property as a matter of products liability, negligence or otherwise, or
from any use or operation of any methods, products, instructions or ideas contained in the
material herein. The Publisher does not necessarily endorse the ideas held, or views expressed
by the Editors or Authors of the material contained in its publications.
© WIT Press 2011
Printed in Great Britain by Martins the Printers.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval
system, or transmitted in any form or by any means, electronic, mechanical, photocopying,
recording, or otherwise, without the prior written permission of the Publisher.
Preface
This volume contains most of the papers presented at the 15th International
Conference on Computational Methods and Experimental Measurements
(CMEM 11), held at the Wessex Institute of Technology in the New Forest, UK.
The Conference, reconvened every two years, started in Washington DC in 1981.
Since then, it has taken place in many different locations around the world.
The key objective of the Conference is to offer the scientific and technical
community an international forum to analyse the interaction between computational
methods and experimental measurements and all associated topics with special
consideration to their mutual benefits.
The constant development of numerical procedures and of computer efficiency,
coupled with their decreasing costs, has generated ever-increasing growth in
computational methods, which are now exploited in a continually expanding
variety of science and technology fields, as well as in our daily life.
As these procedures grow in size and complexity, it is essential to
ensure their reliability, which can only be achieved by performing dedicated and
accurate experiments. At the same time, current experimental techniques have
become more complex and elaborate, and require the use of computers for running
tests and processing the resulting data.
This book presents a significant number of excellent scientific papers dealing with
contemporary research topics in the field. The contributions have been grouped
into the following sections:
The Editors are very grateful to all the authors for their valuable contributions and
to the Members of the International Scientific Advisory Committee, as well as
other colleagues, for their help in reviewing the papers in this volume.
The Editors
Wessex Institute Campus
The New Forest, UK, 2011
Contents
Section 1: Computational and experimental methods
The exponentially weighted moving average applied to the control
and monitoring of varying sample sizes
J. E. Everett ......................................................................................................... 3
Experimental and analytical study on
high-speed fracture phenomena and mechanism of glass
H. Sakamoto, S. Kawabe, Y. Ohbuchi & S. Itoh ................................................ 15
Multiscale multifunctional progressive fracture of composite structures
C. C. Chamis & L. Minnetyan ........................................................................... 23
Enhancing simulation in complex systems
R. M. Alqirem..................................................................................................... 35
Computational aerodynamic analysis of flatback airfoils by
coupling N-S equations and transition prediction codes
L. Deng, Y. W. Gao & J. T. Xiong ..................................................................... 45
Numerical investigation of dynamic stall phenomenon on a plunging airfoil
F. Ajalli & M. Mani ........................................................................................... 55
Design optimization of a bioreactor for ethanol production
using CFD simulation and genetic algorithms
E. R. C. Góis & P. Seleghim Jr.......................................................................... 67
Design of pipelines for high operating pressure by numerical simulations
and experimental validation
Y. Theiner, H. Lehar & G. Hofstetter ................................................................ 75
Mechanical behaviour of high metakaolin lightweight aggregate concrete
A. Al-Sibahy & R. Edwards ............................................................................... 85
Section 9: Ballistics
Coupled numerical-experimental study of an armour perforation
by the armour-piercing projectiles
B. Zduniak, A. Morka & T. Niezgoda .............................................................. 615
Numerical analysis of missile impact being shot by
rocket propelled grenades with rod armour
T. Niezgoda, R. Panowicz, K. Sybilski & W. Barnat........................................ 625
On the truss-type structures backing the ceramic tiles in the
ballistic panels subjected to the impact of hard steel projectiles
A. Morka & P. Dziewulski ............................................................................... 635
The influence of conical composite filling on energy absorption
during the progressive fracture process
W. Barnat, T. Niezgoda, R. Panowicz & K. Sybilski........................................ 645
Section 1
Computational and
experimental methods
Abstract
The exponentially weighted moving average (EWMA) can be used to report the
smoothed history of a production process, and has some considerable advantages
over a simple moving average (MA). Discussion of these advantages includes
comparison of the filter characteristics of the EWMA and MA in the frequency
domain. It is shown that the EWMA provides a much smoother filter than does
the MA, and the corresponding implications of this difference are examined in
the time domain. In smoothing a production process, the successive entities
being smoothed commonly have varying weights, where the weights may be
such quantities as tonnage, value or time interval. Standard textbook treatments
of moving averages and exponential smoothing are generally confined to equal
spaced data of equal weight. Adapting the average to cope with items of varying
weight is shown to be trivial for the case of MA, but is not so obvious for the
EWMA. This paper shows how the exponential smoothing constant has to be
adapted to provide a consistent EWMA. Applications of the EWMA in process
control are discussed, with particular reference to quality control in the mining
industry.
Keywords: quality control, forecasting, exponential smoothing, sample size.
1 Introduction
It is common to consider a series of observations, xn, where each observation is
equivalently spaced in time or distance or some other relevant dimension.
For forecasting and for system control purposes, it is useful to have some
summary of the performance up to the nth observation.
WIT Transactions on Modelling and Simulation, Vol 51, 2011 WIT Press
www.witpress.com, ISSN 1743-355X (on-line)
doi:10.2495/CMEM110011
Mn = (xn + xn-1 + ... + xn-k+1)/k    (1)
(2)
Figure 1 shows the uniform weights of 1/k that are applied to the past k
observations.
Figure 1: [uniform weights Wt = 1/k applied to the past k observations, back to observation n-k+1]
The moving average has the disadvantage that each observation is treated as
being of equal importance for k intervals, but is then suddenly disregarded as
soon as it falls off the end of the data being averaged. This discontinuity has
several disadvantages that will be discussed more fully in a later section.
Sn = (1-α)Sn-1 + αxn    (3)
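As a minimal illustrative sketch (not from the paper), the recursion of equation (3) can be coded directly; the series and the smoothing constant `alpha` here are invented values:

```python
def ewma(xs, alpha, s0=None):
    """Exponentially weighted moving average: S_n = (1-alpha)*S_{n-1} + alpha*x_n."""
    s = xs[0] if s0 is None else s0   # seed with the first observation by default
    out = []
    for x in xs:
        s = (1 - alpha) * s + alpha * x
        out.append(s)
    return out

# A constant series remains constant under smoothing.
print(ewma([5.0, 5.0, 5.0], alpha=0.3))
```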
Figure 2 shows how the weights applied to earlier data die off exponentially
as we go back through the data history.
Figure 2: [weights Wt decaying exponentially for earlier observations]
Figure 3: [labels: MA, EWMA]
Figure 4: [label: k]
Figure 5: [amplitude response (0 to 1.0) plotted against frequency = 1/wavelength (0 to 0.5)]
Figure 6: [amplitude response (0 to 1.0) plotted against frequency = 1/wavelength (0 to 0.5)]
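Amplitude responses like those shown in figures 5 and 6 can be sketched from the standard gain formulas for the two filters; this is an illustrative reconstruction, not code from the paper, and the values of k and alpha are examples:

```python
import math

def ma_amplitude(f, k):
    """Gain of a k-point moving average at frequency f (cycles per sample)."""
    if f == 0:
        return 1.0
    return abs(math.sin(math.pi * f * k) / (k * math.sin(math.pi * f)))

def ewma_amplitude(f, alpha):
    """Gain of the EWMA filter S_n = (1-alpha)S_{n-1} + alpha*x_n at frequency f."""
    r = 1 - alpha
    return alpha / math.sqrt(1 - 2 * r * math.cos(2 * math.pi * f) + r * r)

# Compare a 10-point MA with the comparable EWMA (alpha = 2/k) over 0 <= f <= 0.5.
for f in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]:
    print(f, round(ma_amplitude(f, 10), 3), round(ewma_amplitude(f, 0.2), 3))
```

The MA gain has nulls and side-lobes (it even touches zero at f = 1/k), whereas the EWMA gain falls off monotonically, which is the smoother filter behaviour the text describes.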
(4)
For a Moving Average, the length k[n] over which the average is taken will
therefore have to be varied so that it encompasses the same tonnage (or as nearly
as possible, the same tonnage).
4.2 Exponential smoothing (EWMA)
The treatment for exponentially smoothing over observations with varying
tonnages is not so immediately obvious.
It is clear that the appropriate alpha value is a function of the tonnage: if the
tonnage w increases we should use a larger α[w], so that a larger tonnage has
more influence on the smoothed grade.
Consider two scenarios. Under the first scenario, two successive shifts have
identical grade x and equal tonnage w.
Under the second scenario a single shift delivers ore of twice the tonnage, 2w,
but again with the same grade x.
If we start with a smoothed grade SO, it is clear that under either scenario we
should end up with the same grade, which we shall call SF.
Under the first scenario, where each of the two shifts has grade x and
tonnage w:
SF = (1-α[w])((1-α[w])SO + α[w]x) + α[w]x
   = (1-α[w])²SO + α[w](2-α[w])x    (5)
Under the second scenario, the single shift has grade x and tonnage 2w:
SF = (1-α[2w])SO + α[2w]x    (6)
Equating (5) and (6), and matching the coefficients of SO and of x, gives the
two conditions:
(1-α[2w]) = (1-α[w])²    (7)
α[2w] = α[w](2-α[w])    (8)
We see that these two conditions are in fact identical, both being equivalent
to:
α[2w] = 1-(1-α[w])²    (9)
Applying this repeatedly generalises to samples of any weight W:
1-α[W] = (1-α[1])^W    (10)
α[W] = 1-(1-α[1])^W    (11)
Equation (11) has the satisfactory properties that α[0] is zero, and also that
α[W] tends to 1 as W becomes very large.
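The two-scenario consistency argument can be checked numerically with a small sketch, assuming the α[W] = 1-(1-α[1])^W form of equation (11); the grades, tonnages and α[1] used here are illustrative:

```python
def alpha_w(alpha1, w):
    """Smoothing constant for a sample of weight w, per alpha[W] = 1-(1-alpha[1])^W."""
    return 1 - (1 - alpha1) ** w

def smooth(s, x, a):
    """One EWMA update step with smoothing constant a."""
    return (1 - a) * s + a * x

alpha1 = 0.1
a1, a2 = alpha_w(alpha1, 1.0), alpha_w(alpha1, 2.0)

# Scenario 1: two shifts of weight w, grade x; Scenario 2: one shift of weight 2w.
s0, x = 3.0, 7.0
sf_two = smooth(smooth(s0, x, a1), x, a1)
sf_one = smooth(s0, x, a2)
print(sf_two, sf_one)   # the two routes give the same smoothed grade
```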
The mean age of the data included in a moving average of length k is
(k-1)/2    (12)
while the mean age of the data in an EWMA is
(1-α[1])/α[1]    (13)
Equating these mean ages gives α[1] = 2/(k+1), or approximately 2/k    (14)
Figure 7: [weights Wt applied by the EWMA (α = 2/k) and by the MA (uniform 1/k back to observation n-k+1)]
6 Conclusions
By considering both the time (or distance) domain and the frequency domain,
this paper has shown that Exponential Smoothing (EWMA) has considerable
advantages over Moving Averages (MA).
The problem of varying sample sizes has been considered, and we have
shown that the appropriate exponential smoothing factor for a sample of weight W is
given by equation (11), α[W] = 1 - (1-α[1])^W, where α[1] is the exponential
smoothing factor to be applied to samples of unit weight.
We have further shown, in equation (14), that α[1] should be approximately
2/T, where T is the comparable MA tonnage, or the blending tonnage in a
production process.
References
[1] Diebold, F.X., Elements of Forecasting, 4th ed. Mason, OH: South-Western, 2008.
[2] Box, G. & Jenkins, G., Time Series Analysis: Forecasting and Control. San
Francisco, CA: Holden-Day, 1970.
[3] Ramjee, R., Crato, N. & Ray, B.K., A note on moving average forecasts of
long memory processes with an application to quality control. International
Journal of Forecasting, 18, pp. 291–297, 2002.
[4] Everett, J.E., Computer aids for production systems management in iron ore
mining. International Journal of Production Economics, 110/1, pp. 213–223,
2007.
[5] Marks, R.J., Handbook of Fourier Analysis and Its Applications. Oxford
University Press, 2009.
[6] Cooley, J.W. & Tukey, J.W., An algorithm for the machine calculation of
complex Fourier series. Mathematics of Computation, 19, pp. 297–301,
1965.
Abstract
A new high-speed crushing technique for glass bottles is proposed for
recycling. The proposed system uses the underwater shock-wave generated by
explosive energy. Compared with the conventional mechanical method, this
method is superior in crushing time, crushing efficiency, collection ratio of
glass cullet, and simplicity of the crushing apparatus. In this study, using
commercial beer bottles, the behaviour of the underwater shock-wave generated by
explosive energy and the high-speed fracture phenomena of the glass bottle were
clarified by experiment and analysis.
Keywords: glass cullet, underwater shockwave, explosive energy, high-speed
fracture, recycle.
1 Introduction
Glass bottles are widely used as containers of drinks, food and medicine due to
their characteristics of sealing, transparency and storage stability. Many of
these glass bottles are, after use, reused as returnable bottles or recycled as the raw
material of glass containers; this material, called cullet, consists of bottles
crushed into small fragments [1, 2]. The authors paid attention to this
raw-material recycling process of generating cullet. The conventional cullet
generation method is mechanical crushing [3]. In order to recycle bottles by this
method, they need to be washed inside before melting. As bottle
shapes vary greatly, this washing of the bottle interiors takes a lot of time and it is
doi:10.2495/CMEM110021
2 Experiment
2.1 Experimental method
The scheme of the experimental apparatus is shown in Fig. 1. The experiment was
done in an explosion-proof steel container, which had been filled with water. The
glass bottles were subjected to underwater shockwaves from two kinds of explosive,
string type (1) and ball type (2), shown in Fig. 1. The glass fragments, called
cullet, were all collected and the weight of each fragment size was measured after
the explosive experiment.
Figure 1: Scheme of experimental apparatus [experimental unit: beer bottle and explosive ((1) string type, (2) ball type) at distance l, in a water-filled safety box; (b) detail A].
Figure 2:
PETN (detonation velocity: 6308 m/s) was used as the explosive, with an
electric detonator for ignition.
2.3 Observation of high-speed fracture process
In order to visualize the behaviour of the underwater shock-wave and the bottle
fracture process, a high-speed photographic system was used, consisting of a
high-speed camera (IMECOM 468) and two Xenon flashlights (H1/20/50 type).
The optical observation system is shown in Fig. 3.
Figure 3: [optical observation system: high-speed camera (IMECOM 468) and Xenon flashlights viewing the specimen in a PMMA tank through a window in the explosion-room wall, with trigger input].
3 Experimental results
3.1 Cullet distribution
The fragments of the glass bottle were all collected, classified by three kinds
of sieves (4.75 mm, 2 mm, 1 mm), and the weight of each size fraction was
measured. The weight ratio in the case of the string-type explosive is shown in
fig. 4. From fig. 4, it is found that the weight ratio of cullet of grain size
1 mm or less increases as the distance decreases. Here, it is interesting that
the weight ratio of the 1–2 mm cullet size is almost constant regardless of the
distance l.
Figure 4: [cullet weight ratio versus distance (mm)].
Figure 5: [high-speed photographs of shock-wave propagation at (a) 80 μs, (b) 100 μs, (c) 120 μs; explosive direction indicated].
Figure 6: [high-speed photographs of shock-wave propagation at (a) 80 μs, (b) 100 μs, (c) 120 μs; explosive direction indicated].
Figure 7: [shock waves and beer bottle positions relative to the explosive].
Figure 8: [explosion gas behaviour at 320 μs, 480 μs, 640 μs and 800 μs].
4 FEM simulation
The fracture behaviour of bottles subjected to the underwater shock-wave was
simulated by FEM analysis (FEM code: LS-DYNA) [4].
In this simulation, a 2-D model (shock-wave propagation analysis) and a 3-D
model (shock-wave propagation and fracture analysis) were employed.
The 2-D and 3-D analysis models are shown in fig. 9 (a) and (b), respectively.
Figure 9: [analysis models, (a) 2-D and (b) 3-D, showing the air, water, explosive and glass bottle regions].
Figure 10: [simulated shock-wave and fractured parts].
Figure 11: [simulated shock-wave propagation].
5 Conclusions
The relation between the strength of the underwater shockwave and the fractured
cullet grain size, and the behaviour of the shock-wave and of the fracture
process it causes, were discussed. The results obtained are summarized as follows:
1) The weight ratio of the small cullet grain sizes increases as the distance
decreases, and the weight ratio of the 1–2 mm cullet size is almost constant
regardless of the distance l.
2) The behaviour of the shock-wave generated by explosive energy and the
high-speed fracture process were clarified by using the high-speed photographic
method and FEM simulation.
References
[1] Sakka, S., The Dictionary of Glass, Asakura Press Ltd., 1998.
[2] Kobayashi, A.S., Experimental Techniques in Fracture Mechanics, Society
for Experimental Stress Analysis, 1973.
[3] Sakamoto, H., et al., WIT Transactions on Modelling and Simulation, Vol. 41,
pp. 497–503, 2005.
[4] Shim, V.P.W., Tanimura, S. & Lim, C.T., Impact Response of Materials &
Structures, Oxford University Press, 1999.
Abstract
A new approach is described for evaluating fracture in composite structures. This
approach is independent of classical fracture mechanics parameters like fracture
toughness. It relies on computational simulation and is programmed in a standalone integrated computer code. It is multiscale, multifunctional because it
includes composite mechanics for the composite behavior and finite element
analysis for predicting the structural response. It contains seven modules:
layered composite mechanics (micro, macro, laminate), finite element analysis,
an updating scheme, local fracture, global fracture, stress-based failure modes,
and fracture progression. The computer code is called CODSTRAN (Composite Durability
Structural ANalysis). It is used in the present paper to evaluate the global
fracture of four composite shell problems and one composite built-up structure.
Results show that the composite shells' global fracture is enhanced when
internal pressure is combined with shear loads.
Keywords: micro mechanics, laminate theory, thin shells, thick shells, built-up
structures, non-linearities.
1 Introduction
The global fracture behavior of fiber composite structures has become of
increasing interest in recent years, because of the multitude of benefits that
composites offer in practical engineering applications such as lightweight
airframes, engine structures, space structures, marine and other transportation
structures, high-precision machinery, and structural members in robotic
manipulators. Composite structures lend themselves to tailoring to achieve
desirable characteristics such as a high strength to weight ratio, dimensional
doi:10.2495/CMEM110031
2 Fundamental concept
It is instructive to briefly describe the fundamental concepts behind the origin of
CODSTRAN. The most obvious one is that classical
Figure 1:
Figure 2:
Once the finite element solution of the first load increment, with internal
forces/displacements at each node, has been obtained, the downward
decomposition starts. It is noted that the finite element solution requires nodal
information because this is computationally more expedient for the composite
decomposition [8]. The decomposition is then repeated using
the ply information stored in the synthesis process. The individual ply stresses/strains
are then evaluated using the schematic in fig. 3, where the local failures are
identified. If any failures occur at this level, the respective stiffness and
fractured region are eliminated for the second simulation. The process continues
until local incremental convergence has occurred. At this point the load is
increased by the second increment. Loading increments are progressively larger
at the beginning, until local fracture is detected. The load increment is then
reverted to the last increment and progressively halved until convergence
is achieved, after which the next load increment is applied with a value equal to the
previous one. Fig. 4 illustrates this concept. The solution is therefore
incremental, from the micromechanics scale to the structural local/global
convergent scale. The structural dynamics equations solved by the finite element
module in CODSTRAN, which have global variable convergence criteria, are
summarized in the chart depicted in fig. 5.
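The revert-and-halve load-increment logic described above can be sketched as follows; everything here (the `check_converges` callback, the step values) is hypothetical and only illustrates the idea, not CODSTRAN's actual implementation:

```python
def advance_load(check_converges, load, step, min_step=1e-4):
    """One cycle of an adaptive load-increment scheme: try the current step;
    on failure to converge, revert to the last load and halve the step."""
    while not check_converges(load + step):
        step *= 0.5            # revert to last converged load, halve the increment
        if step < min_step:
            raise RuntimeError("no converged increment found")
    return load + step, step   # the next increment reuses the converged step size

# Toy model: increments converge only while the total load stays below 1.0.
next_load, _ = advance_load(lambda L: L < 1.0, load=0.0, step=0.8)
print(next_load)   # → 0.8
```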
These equations are solved at each load increment. There is another level of
ply failure criteria. This is a stress failure criterion with a combined stress failure
Figure 3: [ply micro-stresses decomposition].
Figure 4: [progressive stress decomposition through the composite].
Figure 5:
Figure 6:
laminate configuration of the shell are also shown in the title of fig. 7. Additional
details are described in [8]. The environmental conditions are noted in the small
table inserted to the right of the shell. The results are plotted as pressure versus
damage percent in part (b), top right; third vibration frequency versus pressure in
part (c), bottom left; and third vibration frequency versus damage percent in part (d),
bottom right.
Each plot has six curves, one for each set of environmental effects. The
very top curve is with no environmental effects. The second from the top
is room temperature and one-percent moisture content by volume. The
third from the top is for a temperature of 93.3°C (200°F) only. The fourth
from the top represents the combined temperature-moisture effects,
93.3°C (200°F) with one-percent moisture by volume. The fifth from the top is
for a temperature of 148.9°C (300°F) only. The last curve is for the combined
environmental effects, 148.9°C (300°F) with one-percent moisture by volume.
Note that the 148.9°C (300°F) temperature-only curve shows the second greatest
failure pressure. The reason is that this shell has the lowest residual stress, which
counteracts the temperature degradation effects. The important point to observe
in these results is that the environmental effects cause substantial structural
integrity degradation. The curves plotted in fig. 7(c) show the significant
degradation of the third vibration frequency. The structural degradation effects
are also significant when the third vibration frequency is plotted versus damage
percent.
Figure 7:
Figure 8:
Figure 9:
Figure 10:
Figure 11:
The fourth illustrative example is another thick shell under external pressure, as
shown in fig. 12, where the shell composite system, laminate configuration and
finite element model are also shown. This shell was analyzed for degradation in the
frequency and the buckling load as the damage propagated along the longitudinal
direction, as shown in fig. 13. The buckling load did not degrade until the damage
length was about 42 cm (28 in.) long. After that, the buckling load degradation
was relatively great, with global collapse at about 0.55 MPa (80 psi), down from
2.3 MPa (340 psi), a degradation of about 1.8 MPa (260 psi). This is a very interesting
result because it indicates that the buckling of a composite thick shell has a relatively
large damage tolerance with respect to buckling resistance. Though frequency
degradations are not shown here, they degrade more slowly than the buckling load.
Additional results are described in [11].
Figure 12:
Figure 13:
4 Concluding remarks
Updated computational simulation is one direct approach to evaluate fracture in
composite structures. CODSTRAN is a stand-alone, general-purpose, integrated
multiscale/multifunctional computer code which consists of several modules,
from micromechanics through structural analysis. It is applicable to general
classes of composite structures. The composite shell fractures investigated herein
included defect-free shells and shells with defects. The simulation results
presented span from the microscale to global structural fracture. A built-up
composite structure subjected to combined loads was evaluated from
micromechanics fracture to global fracture. Results from all of the above
problems indicate that shear load combined with tension or compression stabilizes
the solution, as shown by the greater damage sustained at global structural
fracture. Embedded defects have no influence on the global shell fracture when
the shell is subjected to internal pressure.
Acknowledgement
The authors express their sincere appreciation to Dr. Subodh Mital whose review
comments improved the readability of the article.
References
[1] Chamis, C. C. and Sinclair, J. H., Dynamic Response of Damaged
Angleplied Fiber Composites. NASA TM-79281, 1979.
[2] Minnetyan, L., Chamis, C. C. and Murthy, P. L. N., Structural Behavior of
Composites with Progressive Fracture, Journal of Reinforced Plastics and
Composites, Vol. 11, No. 4, April 1992, pp. 413–442.
[3] Chamis, C. C. and Smith, G. T., Composite Durability Structural
Analysis. NASA TM-79070, 1978.
Abstract
Simulation is an important tool in understanding and designing physical systems,
engineering systems and social systems. Because of its importance and broad
range, it has been the subject of numerous research studies and books.
Simulation is about techniques for using computers to imitate (simulate) the
operations of various kinds of real world complex systems. It has been an
accepted tool for the improvement of decision making through learning how to
deal with the complexity of the real world. The complexity slows the learning
loop and reduces the learning gained on each cycle.
This paper illustrates the importance of systems thinking in enhancing the
simulation process and providing the ability to see the world as a complex
system, where "you cannot just do one thing" and "everything is connected
to everything else". It is a holistic worldview that enables people to act in
consonance with the best interests of the system as a whole and thus enhances the
learning loop through various systems thinking tools. The case study in this paper
illustrates the use of a system dynamics simulator that allows the financial manager
in a firm to test different accounts receivable scenarios and the strategies to
control these accounts. We found that this simulator helped the manager to
gain a deeper insight into the effects of their decisions and the different interrelated
variables involved in setting a strategy to control accounts receivable.
Keywords: systems thinking, complex systems, systems dynamics, simulation.
1 Introduction
As the world becomes more complex, many people and organisations find
themselves bombarded with lots of problems to solve, less time to solve them,
and very few chances to learn from their mistakes. Managers will need to deal
with this complexity and with these changes. They also need to develop their new
doi:10.2495/CMEM110041
2 Systems thinking
The systems thinking concept arose as the systems community found a need to
shift from the more linear, analytic way of thinking that many of us are used
to towards non-linear, dynamic and holistic thinking.
Moving to this new paradigm in analysing complex problems enables
managers and analysts to understand the dynamic relationships and complexity
that influence the behaviour of a system, as shown below.
2.1 Features of systems thinking
2.1.1 Dynamic and non-linear thinking
As discussed before, static thinking assumes that causality runs only one way and that the factors in a system are independent, which is quite primitive. Dynamic thinking offers an effective alternative way to see and understand systems and problems: it views the world as a set of ongoing, interdependent relations and dynamic processes. In dynamic thinking, each cause is linked in a circular process both to its effect and to the other causes. These circular processes are feedback loops, which enable us to better understand what is going on in a system; such loops are the essence of non-linear, dynamic thinking (Richmond [2]).
With this type of thinking, the analyst or manager can understand a problem better: tracing the feedback processes inside a firm clarifies its dynamic relations, reveals causes, effects, and their interconnections, and allows the behaviour to be observed over time.
For example, if a firm decreases its product price, sales increase, but the firm's profits fall below their usual level; this in turn affects the firm's pricing policy and pushes the firm to increase prices again.
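This price-sales-profit feedback loop can be sketched as a small discrete-time simulation. The following is a minimal illustration only, not part of the original case study; all parameter values (unit cost, demand elasticity, target profit, adjustment rate) are invented for the example:

```python
# Illustrative sketch of the price -> sales -> profit -> price feedback loop.
# All numbers are hypothetical; the point is the circular causal structure.

def simulate_pricing_loop(steps=12, price=10.0, unit_cost=7.0,
                          base_demand=100.0, elasticity=8.0,
                          target_profit=250.0, adjust=0.002):
    history = []
    for _ in range(steps):
        sales = max(base_demand - elasticity * price, 0.0)  # lower price -> more sales
        profit = (price - unit_cost) * sales                # but a thinner margin
        price += adjust * (target_profit - profit)          # feedback: low profit pushes price up
        history.append((round(price, 2), round(sales, 1), round(profit, 1)))
    return history

for step, (p, s, pr) in enumerate(simulate_pricing_loop(), 1):
    print(step, p, s, pr)
```

Running the loop shows the circular, non-linear behaviour described above: each decision feeds back into the conditions for the next one.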
2.1.2 Holistic thinking
Holistic thinking is one of the most significant features of systems thinking, as it allows us to see the big picture. Instead of examining each part of the system in isolation, the whole system is examined. Whatever problem we are experiencing and whatever its source, we must always widen our focus to include the bigger system. Dealing with wholes rather than parts is a very effective idea in system analysis. No part or department of a firm is isolated from the others, so when trying to solve a problem in one process we must first look at the whole firm and the interconnections inside it to understand the nature and causes of the problem.
This research illustrates how systems thinking tools provide managers and analysts with a creative holism.
2.1.3 Systemic thinking
In recent years, systems thinking has provided effective new methods for tackling issues in a systemic rather than a reductionist way. Systems thinking allows us to look for patterns of behaviour and to seek the underlying systemic interrelationships that are responsible for these behaviours and events. Bartlett [3] defines systemic thinking as a technique that provides deeper insight into complex situations very quickly; it combines analytical thinking (breaking things apart) with synthetical thinking (putting things together), as Figure 1 shows. This provides a more effective holistic and systemic analysis of the system.
Figure 1:
Figure 2:
A lot of user-friendly system dynamics software is now available that allows causal loop diagrams and stock-and-flow diagrams to be converted into sophisticated computer simulations of the problems or issues being investigated. Examples include DYNAMO, STELLA, ITHINK, VENSIM, and POWERSIM (the last of these is used in this study).
Initial values are identified for the stocks, values are assigned to the variables, and the structural relationships between the variables are determined using constants, graphical relationships, and mathematical functions where appropriate. The computer simulation software also facilitates the creation of microworlds (or management flight simulators), which are kinds of system dynamics simulators [7, 8], as shown in the case study.
3 Case study
In this section, a simplified generic system dynamics model of a small firm has been built. The model can be used to analyse a firm that sells products or services to its customers and to control the money owed to the firm by its customers, shown in its accounts as an asset called accounts receivable.
Figure 3 shows the accounts receivable structure, which represents the total money owed to the firm by its customers on credit sales made to them. It depends mainly on the credit policy offered to customers to encourage them to increase their purchases from the firm.
Sales revenue collected from customers, new loans, and the owners' investment are the main income of the firm; they increase the cash, as shown in figure 4. Cash is the most liquid asset of the firm, always ready to be used to pay bills, pay suppliers, repay bank loans, and cover many other expected and unexpected outlays.
Figure 3:
Figure 4:
The manager has the opportunity to change the credit term the firm allows its customers. This shows the analyst how the firm behaves when the credit terms change, and the effect on the monthly loan repayments, the interest, and the line of credit approved by the bank.
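A minimal stock-and-flow sketch of this receivables structure can make the mechanics concrete. To be clear, this is not the POWERSIM model from the case study, and the sales, expense, and starting-cash figures are illustrative; the stock of receivables is filled by credit sales and drained by collections, whose speed depends on the credit term:

```python
import math

# Hedged sketch of an accounts receivable stock-and-flow model.
# Stocks: receivables and cash; flows: credit sales, collections, expenses.
# All monetary values are invented for illustration.

def simulate_receivables(months=12, credit_days=60, monthly_sales=100_000.0,
                         monthly_expenses=80_000.0, cash=100_000.0):
    receivables = 0.0
    delay = max(credit_days, 1) / 30.0  # average collection delay in months
    rows = []
    for m in range(1, months + 1):
        # fraction of the outstanding stock collected this month
        collections = receivables * (1.0 - math.exp(-1.0 / delay))
        receivables += monthly_sales - collections   # inflow: credit sales
        cash += collections - monthly_expenses       # outflow: operating expenses
        rows.append((m, round(receivables), round(cash)))
    return rows
```

Comparing runs with credit_days=0 and credit_days=60 reproduces the qualitative effect the simulator lets the manager explore: the longer the credit term, the larger the receivables stock grows and the more the cash position is strained before collections catch up.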
Figure 5:
Figure 6:
Figure 7:
Two of the main ratios that changed significantly when simulating the scenario are shown in figure 7, which shows the changes over one year when the number of days allowed for customer payment is 0, while figure 8 depicts the changes in these ratios when the firm expands its credit policy and, as assumed before, offers customers 60 days to pay for their purchases.
Figure 8: Current ratio.
4 Conclusion
It is clear now that systems thinking tools, especially system dynamics, have an advantage in that they can easily deal with non-linearities and with time, which are not considered in a static analysis.
By applying system dynamics, one can enhance the usefulness of a model to address and analyse problems in complex situations and provide more significant, rational, and pertinent policy recommendations.
In summary, the process is to observe and identify problematic behaviour of a system over time and to create a valid diagrammatic representation of the system, capable of reproducing the existing system behaviour by computer simulation and of facilitating the design of improved behaviour. Existing business simulators are designed to allow users to play a realistic role in management decision making: users make decisions and receive outcome feedback about their performance. By rehearsing strategies and observing results, the managers in the case study were able to discover how to make better decisions, improve their performance, reduce the risk of losing money, and thus increase the firm's financial performance, as illustrated.
References
[1] Sterman, J., Business Dynamics: Systems Thinking and Modeling for a Complex World, USA, 2000.
[2] Richmond, B., Systems thinking: critical thinking skills for the 1990s and beyond. System Dynamics Review, 9(2), pp. 113-133, 1993.
[3] Bartlett, G., Systemic thinking: a simple thinking technique for gaining systemic focus. The International Conference on Thinking, Breakthroughs, USA, 2001.
[4] Checkland, P., Systems Thinking, Systems Practice, John Wiley & Sons, 1981.
Abstract
Flatback (blunt trailing edge) airfoils are adopted for the inboard region of large wind turbine blades due to their structural and aerodynamic performance advantages. Very limited experimental data exist at high Reynolds numbers, which makes it difficult for wind turbine designers to design and use these section shapes, because wind tunnel experiments are limited by the attainable Reynolds number and by solid blockage. In this study, a 2-D Reynolds-averaged Navier-Stokes solver coupled with a transition prediction based on the eN method is used for CFD computation of blunt trailing edge airfoils. A new coupling structure with a time-accurate transition prediction model is developed, taking into account the unsteady flow that results from the bluff-body vortex shedding. The computational grid is a C-grid generated with Gridgen, and the vertical angle at the blunt trailing edge is smoothed slightly to increase the grid quality. The DU97-Flat airfoil, a modification of the DU97-W-300 airfoil for wind turbine application, is calculated, and the effect of the number of grid points is investigated. The aerodynamic performance of DU97-W-300 is calculated and compared with results from the literature and with wind tunnel experimental data; the results show that the present method can obtain the aerodynamic performance with far fewer grid points while agreeing better with the wind tunnel data than the literature results. One issue that requires attention is the prediction of maximum lift and the failure to accurately capture stall behaviour by the various computational techniques used in this study.
Keywords: wind turbine, airfoil, flatback airfoil, coupling, transition prediction.
doi:10.2495/CMEM110051
1 Introduction
In aerodynamic performance prediction and geometry design of horizontal axis wind turbines (HAWT), the lift and drag coefficient data for the different airfoils applied along the span play a significant role. As a result, the designer spends a lot of up-front time preparing reliable airfoil aerodynamic data. It is believed that errors in airfoil data tables are the single largest source of error in most rotor load and performance predictions [13].
Recently, blunt trailing edge, or flatback, airfoils have been proposed for the inboard region of large wind turbine blades [4-6]. Flatback airfoils provide several structural and aerodynamic performance advantages. Structurally, the flatback increases the sectional area and section moment of inertia for a given airfoil maximum thickness. Aerodynamically, the flatback increases the section maximum lift coefficient and lift curve slope and reduces the well-documented sensitivity of the lift characteristics of thick airfoils to surface soiling [7]. However, the flow separation and bluff-body vortex shedding in the flatback region also increase the drag. One of the problems with wind tunnel testing of thick airfoils is that these models tend to create a significant amount of solid blockage and wake blockage, thereby affecting the measurements and the flow development in the wind tunnel test section. Solid blockage is typically kept at 5% or less, but this value limits the model chord length, which in turn limits the attainable Reynolds numbers. The Reynolds numbers are also restricted by the load limitations of the wind tunnel pyramidal balance. As a result, the published experimental results on flatback airfoils are obtained at low Reynolds numbers, or are for limited trailing edge bluntness. This lack of experimental data motivates the analysis of flatback airfoils using computational fluid dynamics (CFD). In recent years, CFD analyses of flatback airfoils have tended to use ever larger numbers of grid points to capture the vortex structure, but the transition positions are generally obtained from a transition position prediction code and then used as fixed transition positions in the N-S solver, which does not account for the unsteady nature of the flow field caused by the blunt trailing edge [8-12]. In the present study, several computational techniques are applied, including an N-S solver coupled with a transition position prediction code and a transition flow region model, with the transition positions predicted while accounting for the unsteady nature of the flow.
2 Flow solver
The aerodynamic performance characteristics of the flatback airfoils are calculated using a CFD code based on the Reynolds-averaged Navier-Stokes (RANS) equations. The code solves the compressible, two-dimensional RANS equations. The governing equations are central differenced in standard second-order form, and second- and fourth-order artificial dissipation terms are added for numerical stability. The code employs local time stepping, implicit residual averaging, and a multigrid technique to accelerate convergence.
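The blended second- and fourth-difference dissipation can be illustrated with a one-dimensional sketch in the spirit of Jameson-type central schemes. This is not the paper's solver; the sensor constants k2 and k4 are typical textbook values, not values from the study:

```python
import numpy as np

# Illustrative 1-D sketch of blended second/fourth-difference artificial
# dissipation on a periodic grid. A pressure-based sensor switches on the
# second-difference term near sharp gradients; the fourth-difference term
# damps odd-even point oscillations in smooth regions.

def artificial_dissipation(u, p, k2=0.5, k4=1.0 / 32.0):
    # pressure sensor nu_i = |p_{i+1} - 2 p_i + p_{i-1}| / (p_{i+1} + 2 p_i + p_{i-1})
    nu = np.abs(np.roll(p, -1) - 2 * p + np.roll(p, 1)) / (
        np.roll(p, -1) + 2 * p + np.roll(p, 1))
    eps2 = k2 * np.maximum(np.roll(nu, -1), nu)   # coefficient at face i+1/2
    eps4 = np.maximum(0.0, k4 - eps2)             # switched off where eps2 is large
    d1 = np.roll(u, -1) - u                                        # 1st difference at i+1/2
    d3 = np.roll(u, -2) - 3 * np.roll(u, -1) + 3 * u - np.roll(u, 1)  # 3rd difference at i+1/2
    flux = eps2 * d1 - eps4 * d3                  # dissipative flux at each face
    return flux - np.roll(flux, 1)                # net contribution at node i
```

Because the contribution at each node is a difference of face fluxes, the scheme is conservative: on a periodic domain the added dissipation sums to zero.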
Figure 1: Section shapes of the DU97-Flat and DU97-W-300 airfoils (Y/C versus X/C).
In a previous study, in order to keep cell skewness minimal and the cell angles close to 90 deg, a C-grid worked well for sharp trailing edges. When the trailing edge grows thicker, the cell skewness increases and grid shocks appear at the blunt base; in that case an O-grid, which does not generate grid shocks, works well for blunt trailing edges [11]. In the present study, however, in order to compare with the sharp trailing edge case, a C-grid is generated using the grid generation tool Gridgen. In generating the grid, the vertical angle at the blunt trailing edge is smoothed slightly to increase the grid quality, which is common practice for this kind of airfoil [13]. In this study, the RANS equations are solved coupled with a transition position prediction code, which requires the pressure distribution of the boundary layer. Since the accuracy of the boundary-layer parameters influences the accuracy of the transition position prediction, the initial wall spacing is 1.0×10^-6 and the first 60 points are equally spaced. Table 1 shows the series of grid sizes. Fig. 2 shows the 352×96 C-grid that was used and a close-up of the trailing edge.
Table 1: Grid sizes (number of points).

Grid      wall    wake  vertical  far field/chord
256×80    96×2    32    80        12
352×96    128×2   48    96        20
400×104   152×2   48    104       20
448×112   160×2   64    112       20
496×112   184×2   64    112       30
544×128   192×2   80    128       30
Figure 2: The 352×96 C-grid and a close-up of the trailing edge.
Fig. 3 shows the calculated lift coefficient versus the number of grid points. As the number of grid points increases, the lift coefficient converges toward a constant value. To obtain sufficiently accurate aerodynamic performance we should increase the number of grid points, but that increases the computational cost, so a trade-off must be found between them. In the following calculations, the 448×112 grid is selected.
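One common way to quantify such a grid-convergence study, offered here as a hedged aside rather than the paper's own procedure, is Richardson extrapolation: from the lift coefficient on three successively refined grids one can estimate the observed order of convergence and a grid-independent value. The Cl numbers in the example are illustrative, not the paper's data:

```python
import math

# Richardson extrapolation sketch for a grid-convergence study.
# cl_coarse, cl_medium, cl_fine are lift coefficients on successively
# refined grids; r is the (approximately constant) refinement ratio.

def richardson(cl_coarse, cl_medium, cl_fine, r=1.3):
    # observed order of convergence from the ratio of successive differences
    p = math.log(abs((cl_medium - cl_coarse) / (cl_fine - cl_medium))) / math.log(r)
    # extrapolated (grid-independent) estimate
    cl_extrap = cl_fine + (cl_fine - cl_medium) / (r ** p - 1.0)
    return p, cl_extrap

p, cl_inf = richardson(0.340, 0.332, 0.329)   # illustrative Cl values
```

The extrapolated value gives a target against which the chosen working grid can be judged, making the "trade-off" above quantitative.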
Figure 3: Lift coefficient of the DU97-Flat airfoil versus number of grid points (Re = 3.0×10^6).
4.1 eN method
The eN method, based on linear stability analysis, is used as the transition criterion. For a set of specified dimensional frequencies, the amplification factor N is computed as

N = ∫ from x0 to x of (-αi) dx,  (1)

where -αi is the local spatial amplification rate and x0 is the position where the amplification rate is zero. A transition flow region model is used downstream of the predicted onset, with the extent of the transition region given by

Re_X = 5.2 Re_XT^(3/4),  (2)

Re_X = 10.4 Re_XT^(3/4),  (3)

where Re_XT is the Reynolds number at the transition onset location. This model is applicable to flow situations where transition is predicted well upstream of laminar separation, but numerical experiments show that in cases with small separation bubbles the model works as well.
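The integration in eqn (1) can be sketched numerically. The following is a toy illustration with made-up amplification rates, not the solver's implementation; N_crit = 9 is a commonly used critical value:

```python
# Illustrative eN sketch: for one disturbance frequency, integrate the local
# spatial amplification rate -alpha_i downstream of the neutral point and flag
# transition where the N factor first reaches a critical value.

def n_factor(x, alpha_i, n_crit=9.0):
    n, x_tr = 0.0, None
    for i in range(1, len(x)):
        # trapezoidal rule; only the amplified (positive-rate) part contributes
        rate = max(-0.5 * (alpha_i[i] + alpha_i[i - 1]), 0.0)
        n += rate * (x[i] - x[i - 1])          # N = integral of -alpha_i dx
        if x_tr is None and n >= n_crit:
            x_tr = x[i]                        # predicted transition location
    return n, x_tr
```

In the real method this envelope is taken over a set of frequencies; here a single made-up rate distribution suffices to show the mechanism.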
An iterative coupling procedure between the RANS solver and the transition prediction code is used to update the predicted transition location x_T (eqns (4) and (5)). As convergence criterion, the change in the transition location between successive coupling iterations is used: when the criterion is satisfied the iteration is finished; otherwise the algorithm repeats with the updated transition location.
Figure 4: Predicted transition locations X_tru and X_trl on the upper and lower surfaces of the DU97-Flat airfoil versus number of coupling iterations (Re = 3.0×10^6, α = 0).
Figure 5: Lift and drag coefficients of the DU97-Flat airfoil versus number of coupling iterations (Re = 3.0×10^6, α = 0).
Figure 6: Transition locations X_tru and X_trl on the upper and lower surfaces of the DU97-W-300 airfoil (Re = 3.0×10^6): Xfoil calculation compared with the coupled calculation.
Figure 7: Lift, moment, and drag characteristics of the DU97-W-300 airfoil (Ma = 0.165, Re = 3.0×10^6): RANS S-A and SACCARA S-A computations compared with experimental data.
7 Conclusion
Flatback airfoils are considered for the inboard regions of large wind turbine blades. The concept of the blunt trailing edge is not new and has previously been investigated by CFD methods. However, previous studies are typically based on a RANS solver using fixed transition locations computed from a transition prediction code, which does not take into account the unsteady nature of the flow field caused by the vortex shedding at the trailing edge. From the investigation of the effect of the number of grid points, a trade-off should be made between the growth of the number of grid points and the computational time. A RANS solver is coupled with a transition prediction code through a time-accurate transition model. The results show that the transition locations and the computed aerodynamic coefficients converge toward constant values within 10 iteration steps. The DU97-W-300 wind turbine airfoil is computed by the coupled method, and the results are compared with experimental results and with results from the literature. The comparisons show that the computed results agree better with the experimental data than the literature results while using fewer grid points. One issue that requires attention is the prediction of maximum lift and the failure to accurately capture stall behaviour by the various computational techniques used in this study.
References
[1] Patrick J., AeroDyn Theory Manual, NREL/EL-500-36881, 2005.
[2] Simms, D., Schreck, S., Hand, M. & Fingersh, L.J., NREL Unsteady Aerodynamics Experiment in the NASA-Ames Wind Tunnel: A Comparison of Predictions to Measurements, NREL/TP-500-29494, 2001.
[3] Tangler, J.L., The Nebulous Art of Using Wind-Tunnel Airfoil Data for Predicting Rotor Performance, NREL/CP-500-31243, 2002.
[4] TPI Composites, Innovative Design Approaches for Large Wind Turbine Blades: Final Report, SAND2004-0074, 2004.
[5] Standish, K.J. & van Dam, C.P., Aerodynamic Analysis of Blunt Trailing Edge Airfoils, Journal of Solar Energy Engineering, 125(4), pp. 479-487, 2003.
[6] Jackson, K., Zuteck, M., van Dam, C.P., Standish, K.J. & Berry, D., Innovative Design Approaches for Large Wind Turbine Blades, Wind Energy, 8(2), pp. 141-171, 2005.
Abstract
The unsteady separated turbulent flow around an airfoil plunging in a sinusoidal pattern in the low Reynolds number regime is investigated numerically, employing the URANS approach with an advanced turbulence model, the k-ω SST transitional model. A comparison with experimental data shows that the transition SST model is capable of predicting the flow characteristics during the increasing-incidence part of the cycle, while the main difficulty lies in accurately modelling the complicated separated flows during the decreasing stroke. The flow development of the dynamic stall is also discussed.
Keywords: dynamic stall, plunging airfoil, k-ω SST transitional model.
1 Introduction
Dynamic stall has been widely known to significantly affect the performance of a
large variety of fluid machinery, such as helicopters, highly maneuverable
fighters, gas turbines, and wind turbines. It is well recognized that the dynamic
stall process can be categorized into four key stages, i.e. attached flow at low
angles of attack, development of the leading edge vortex (LEV), the shedding of
the LEV from the suction surface of the blade and the reattachment of the flow
[1]. Numerous experimental and computational investigations [25] have shown
that the unsteady flow can be separating or reattaching over a large portion of the
upper surface of the oscillating airfoil and that the predominant feature of the
dynamic stall is the formation and rapid convection over the upper surface of the
doi:10.2495/CMEM110061
2 Numerical simulations
2.1 Case studied
The aerofoil employed in the numerical calculations is an Eppler 361 airfoil with a chord length of c = 0.15 m and a maximum thickness of 12%c, which in this case executes the sinusoidal plunging motion h = 8 sin(ωt) cm with reduced frequency k = ωc/2U∞ = 0.14. The free stream velocity is U∞ = 10 m/s with a turbulence intensity of 0.2%, which corresponds to a chord Reynolds number of Re_c = 1×10^5. The mean angle of oscillation was set to 12 deg, within the static stall angle. The numerical setup is based on the experimental tests in order to compare the numerical results with the experimental data. A more comprehensive description of the experimental setup is given in [7].
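The stated kinematics can be cross-checked with a short calculation. This is a hedged sketch: the plunge amplitude, chord, speed, and mean angle come from the text, while the air kinematic viscosity of 1.5×10^-5 m^2/s is an assumed standard value:

```python
import math

# Plunge kinematics: h(t) = h0 sin(omega t), with the reduced frequency
# defined as k = omega*c/(2*U). The equivalent (induced) angle of attack of
# a pure plunge is alpha_eq = alpha_mean + arctan(h_dot / U).

c, U, k, h0 = 0.15, 10.0, 0.14, 0.08          # chord [m], speed [m/s], reduced freq., amplitude [m]
alpha_mean = math.radians(12.0)               # mean angle of attack
omega = 2.0 * U * k / c                       # angular frequency [rad/s]

def alpha_eq(t):
    h_dot = h0 * omega * math.cos(omega * t)  # plunge velocity
    return math.degrees(alpha_mean + math.atan2(h_dot, U))

# chord Reynolds number with assumed air viscosity nu = 1.5e-5 m^2/s
Re_c = U * c / 1.5e-5                         # about 1e5, matching the stated Re_c
```

The resulting equivalent angle of attack swings roughly between 3.5 and 20.5 deg over the cycle, broadly consistent with the α_eq values labelled in figures 5 and 6.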
2.2 Numerical techniques
Firstly, for the static flow field investigations, the RANS approach with an advanced turbulence model, namely the k-ω SST transitional model, was used; furthermore, the k-ε RNG and the low-Reynolds k-ω SST models were employed as the
Figure 1:
Figure 2: (a) α = 8 deg, (b) α = 10 deg, (c) α = 12 deg.
gradient amplified the disturbance in the separation zone, prompting transition, and the bubble shrank in size. According to this model, at an angle of attack of 10 deg the separation position is at around the 6% chord position and transition occurred at the 14.2% chord position. It is notable that at this angle of attack the airfoil was close to stall, so a large portion of the flow over the aerofoil was separated and the separated vortex formed near the trailing edge. Because of the better predictions of the k-ω SST transitional model, this model was applied to the dynamic cases.
Figure 3: (a), (b).
Figure 4:
Figure 5: a) α_eq = 11.76 deg, b) α_eq = 12.26 deg, c) α_eq = 14.3 deg, d) α_eq = 17.55 deg, e) α_eq = 19 deg, f) α_eq = 20.17 deg, g) α_eq = 20.75 deg, h) α_eq = 21 deg, i) α_eq = 21.15 deg.
Figure 6: a) α_eq = 20.82 deg, b) α_eq = 20.06 deg, c) α_eq = 19.2 deg, d) α_eq = 18.15 deg, e) α_eq = 15.54 deg, f) α_eq = 14.07 deg, g) α_eq = 12.02 deg, h) α_eq = 10 deg, i) α_eq = 8.1 deg, j) α_eq = 6.27 deg.
Figure 7: a) 5%, b) 10%, c) 20%, d) 50%.
Figure 8:
4 Conclusions
In this paper, the static and dynamic flow fields on an Eppler 361 airfoil at Re = 10^5 were investigated. For the static airfoil, different turbulence models were analysed
References
[1] Wernert, P., Geissler, W., Raffel, M. & Kompenhans, J., Experimental and numerical investigations of dynamic stall on a pitching airfoil, AIAA Journal, 34, pp. 982-989, 1996.
[2] McCroskey, W.J., McAlister, K.W., Carr, L.W., Pucci, S.L., Lambert, O. & Indergrand, R.F., Dynamic Stall on Advanced Airfoil Sections, Journal of the American Helicopter Society, 26, pp. 40-50, July 1981.
[3] McCroskey, W.J., Unsteady Airfoils, Annual Review of Fluid Mechanics, 14, pp. 285-311, 1982.
[4] Ericsson, L.E. & Reding, J.P., Fluid Mechanics of Dynamic Stall. Part I. Unsteady Flow Concepts, Journal of Fluids and Structures, 2, pp. 1-33, Jan. 1988.
[5] Lee, T. & Basu, S., Measurement of Unsteady Boundary Layer Developed on an Oscillating Airfoil Using Multiple Hot-Film Sensors, Experiments in Fluids, 25(2), pp. 108-117, 1998.
[6] Wang, S., Ma, L., Ingham, D., Pourkashanian, M. & Tao, Z., Numerical Investigations on Dynamic Stall Associated with Low Reynolds Number Flows over Airfoils, The 2010 International Conference on Mechanical and Aerospace Engineering (CMAE 2010), Chengdu, China, 2010.
[7] Ajalli, F., Mani, M. & Soltani, M., An Experimental Investigation of Pressure Distribution around a Heaving Airfoil, The 5th International Conference on Heat Transfer, Fluid Mechanics and Thermodynamics, South Africa, 2007.
[8] Fluent 6.3 Documentation (Theory Guide).
[9] Mani, M., Ajalli, F. & Soltani, M.R., An experimental investigation of the reduced frequency effects into pressure coefficients of a plunging airfoil, Advances in Fluid Mechanics, 2008.
[10] McAlister, K.W., Carr, L.W. & McCroskey, W.J., Dynamic stall experiments on the NACA 0012 airfoil, NASA TP 1100, 1978.
[11] Lee, T. & Gerontakos, P., Investigation of Flow over an Oscillating Airfoil, Journal of Fluid Mechanics, 512, pp. 313-341, 2004.
[12] Lee, T. & Gerontakos, P., Investigation of flow over an oscillating airfoil, Journal of Fluid Mechanics, 512, pp. 313-341, 2004.
Abstract
The increasing use of biofuels such as bioethanol has presented new engineering challenges. The optimization of bioreactors is crucial to improve bioethanol extraction and to avoid stagnant zones in the flow that can compromise the chemical reactions involved in the process. This paper presents a solution using Computational Fluid Dynamics tools coupled with a Genetic Algorithm to find an improved reactor structure. The preliminary results show that the outlet tube height alone can completely modify the flow pattern inside the reactor, improving its efficiency.
Keywords: bioethanol, bioreactor, CFD, genetic algorithms, optimization.
1 Introduction
The search for new ways to provide fuel for society has been constantly increasing. One of the great challenges for scientists and academic researchers is to provide fuels without jeopardizing the environment. An interesting alternative is bioethanol production from sugar cane or other kinds of biomass, such as corn, beet, etc.
The production of sugar cane ethanol is economically feasible only at large scale, such as 45 thousand liters per hour. At these scales, any efficiency improvement can result in a very significant overall optimization in terms of production rates and environmental impacts as well. Due to the difficulty of experimental analyses, the expense of the materials used, and time limitations, Computational Fluid Dynamics (CFD) tools are a useful alternative to study the flow inside reactors and bioreactors, and to better define
doi:10.2495/CMEM110071
2 Methodology
The ANSYS CFX 13.0 commercial software was used to simulate the flow inside the bioreactor; the working fluid was initially water. The modelling equations were the Navier-Stokes equations, given by eqn. (1) and eqn. (2):
∂ρ/∂t + ∇·(ρU) = 0  (1)

∂(ρU)/∂t + ∇·(ρU⊗U) = −∇p + ∇·τ + S_N  (2)

τ = μ(∇U + (∇U)^T − (2/3)δ∇·U)  (3)

where U is the velocity vector, p the pressure, t the time, ρ the density, μ the dynamic viscosity, τ the stress tensor, δ the identity tensor, and S_N the forcing terms.
Due to the computational effort required, some simplifications of the flow and the mesh were assumed; these simplifications do not affect the objective of this study. A steady flow simulation was performed, with the flow isothermal and incompressible. Turbulence effects were taken into account with the k-ε model, and the Navier-Stokes equations were solved using the finite volume method. The advection terms were treated with the high resolution scheme.
To perform the first simulation, an initial structure was used. Figure 1 shows this structure, composed of one tube for fluid entrance, one tube for fluid exit, and a cylindrical reservoir. The tube diameters for flow entrance and exit are both 5 cm, and the reservoir volume is 887 liters. These dimensions were chosen according to the requirements for sugar cane ethanol production. The reservoir height was 1.5 m and the initial outlet tube height was 1 m.
A tetrahedral mesh with about 2787 nodes and 13963 cells was used in this simulation, as shown in Fig. 2. The inlet velocity applied was 0.1 m/s, the outlet pressure was defined as 0 Pa, and a steady analysis was performed.
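As a quick sanity check on the turbulence modelling choice, offered as an aside rather than a step from the paper, the inlet Reynolds number can be estimated from the stated geometry and velocity, assuming standard water properties at about 25 °C:

```python
# Inlet Reynolds number estimate for the bioreactor entrance tube.
# Water properties are assumed values, not stated in the paper.

rho = 997.0   # density [kg/m^3]
mu = 8.9e-4   # dynamic viscosity [Pa s]

D = 0.05      # inlet tube diameter [m], from the geometry above
U_in = 0.1    # inlet velocity [m/s]

Re = rho * U_in * D / mu   # about 5.6e3, above the ~2300 pipe-flow
                           # transition value, so modelling turbulence is warranted
print(round(Re))
```

A value a few times above the pipe-flow transition threshold supports treating the inlet stream as turbulent in the simulation.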
Figure 1:
Figure 2: Domain discretization.
Besides the fluid analysis, the aim of this study was to optimize the bioreactor structure, improving bioreactor efficiency by controlling the velocity field. As discussed previously, genetic algorithms are a useful tool in the optimization process, allowing multi-objective optimization.
The results presented in this paper were obtained by applying the Multi-Objective Genetic Algorithm MOGA-II [9]. MOGA-II is an improved version of MOGA (Fonseca and Fleming [10]) and uses five different operators for
3 Results
MOGA-II was employed to solve the optimization problem with the parameters shown in Table 1.
Forty-six outlet tube heights were evaluated. All the outlet tube height possibilities and their feasibilities are shown in Fig. 3.
Table 1: MOGA-II parameters.

Parameter                               Value
Number of generations                   100
Probability of directional cross-over   0.5
Probability of selection                0.05
Probability of mutation                 0.1

Figure 3:
Figure 4:
4 Conclusions
Proposing new ways to obtain bioethanol from sugar cane and improving existing ones is a great challenge for engineering applications. Considering a bioreactor applied in bioethanol production, this paper presents the results of an investigation into the influence of structural parameters on the flow and, consequently, on the chemical reactions in bioethanol production. Forty-six different values of the outlet tube height were tested and the best value was determined. To select the best parameters, a genetic algorithm, MOGA-II, was applied through the modeFRONTIER 4.3 commercial software coupled with ANSYS CFX 13. Although the initial tests were limited in computational effort, owing to the intention of coupling the CFD analysis with the optimization process, a good performance was obtained.
Knowledge of the optimized structure provides an improvement in bioreactor yield, material economy, and a reduction in environmental impact. Future work will address validation and verification through the fabrication of an experimental prototype and improvements to the CFD modelling.
References
[1] Patwardhan, A.W., Joshi, J.B., Fotedar, S. & Mathew, T., Optimization of gas-liquid reactor using computational fluid dynamics. Chemical Engineering Science, 60, pp. 3081-3089, 2005.
[2] Xia, J., Wang, S., Zhang, S. & Zhong, J., Computational investigation of fluid dynamics in a recently developed centrifugal impeller bioreactor. Biochemical Engineering Journal, 38, pp. 406-413, 2008.
[3] Micale, G., Rizutti, L. & Brucato, A., CFD simulation of particle suspension height in stirred vessels. Chemical Engineering Research and Design, 82, pp. 1204-1213, 2004.
[4] Shao, X., Lynd, L., Bakker, A., LaRoche, R. & Wyman, C., Reactor scale-up for biological conversion of cellulosic biomass to ethanol. Bioprocess and Biosystems Engineering, 33, pp. 485-493, 2010.
[5] Rezende, M.C.A.F., Costa, C.B.B., Costa, A.C., Maciel, M.R.W. & Maciel, R.F., Optimization of a large scale industrial reactor by genetic algorithms. Chemical Engineering Science, 63, pp. 330-341, 2008.
[6] Mokeddem, D. & Khellaf, A., Optimal feeding profile in fed-batch bioreactors using a genetic algorithm. International Journal of Production Research, 48, pp. 6125-6135, 2010.
[7] Kordabadi, H. & Jahanmiri, A., Optimization of methanol synthesis reactor using genetic algorithm. Chemical Engineering Journal, 108, pp. 249-255, 2005.
[8] Dasgupta, D. & Michalewicz, Z., (eds). Evolutionary Algorithms in Engineering Applications, Springer: New York, 1997.
[9] Poloni, C., Giurgevich, A., Onesti, L. & Pediroda, V., Hybridization of a multi-objective genetic algorithm, a neural network and a classical optimizer for a complex design problem in fluid dynamics. Computer Methods in Applied Mechanics and Engineering, 186, pp. 403-420, 2000.
[10] Fonseca, C.M. & Fleming, P.J., Multiobjective Genetic Algorithms. Proc. of the IEE Colloquium on Genetic Algorithms for Control Systems Engineering, IEE: London, UK, pp. 1-6, 1993.
Abstract
For pipelines laid in difficult ground, as frequently encountered in Alpine
regions, a pipe system consisting of individual pipes, made of ductile cast
iron, which are connected at joints at the construction site, is a favourable
type of construction. The paper deals with the development of such pipelines
for high operating pressures. Because the joints have to sustain both the high
operating pressure and high axial forces while assuring water tightness, they
are a critical part of such pipelines. In this contribution the synthesis of
numerical simulations and experimental validation is presented as an efficient
approach for developing such pipes. The dimensions of prototypes are determined
on the basis of the results of fully three-dimensional FE-simulations. These
prototypes are then used to check the design by ultimate load tests and to
compare the numerical prediction with the measured response.
Keywords: penstock, pipelines, high operating pressure, ductile cast iron, sleeve
joints, numerical simulation, material and geometric nonlinear behaviour,
contact behaviour, load carrying behaviour, experimental validation.
1 Introduction
In Alpine regions, frequently difficult ground conditions are encountered for the
laying of pipelines. A suitable construction type for difficult terrain is a pipe
system, consisting of individual pipes made of ductile cast iron, which are
connected at joints at the construction site.
doi:10.2495/CMEM110081
Figure 1:
Figure 2:
2 Numerical model
The joints are the critical regions of such pipelines. Hence, it is important to
properly reflect the behaviour of the connection of two adjacent pipes in a
numerical model. In order to comply with this requirement, two adjacent pipes,
denoted as spigot pipe and socket pipe in Figure 2, together with the locking bars
and the rubber sealing are discretized by three-dimensional finite elements with
quadratic shape functions. Because of two-fold symmetry only a quarter of the
joint is discretized (Figure 3) and the respective symmetry conditions are applied
as boundary conditions at the vertical faces of the FE-mesh.
The employed ductile cast iron is modelled as an elastic-plastic material of
the von Mises type with isotropic strain hardening. The Young's modulus is
given as 154050 N/mm2, the yield stress at 0.2% permanent strain and the
tensile strength amount to 300 N/mm2 and 620 N/mm2, respectively, and the
strain at rupture is about 7%. The given material properties refer to the
respective test conducted on the joint DN 200 K13, which is described in the
next section. In particular, the measured tensile strength exceeds the
respective code requirement [5].
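As a side note, the elastic/plastic distinction invoked above can be checked with the von Mises criterion. In the following sketch only the 300 N/mm2 proof stress comes from the text; the stress states are invented for illustration.

```python
import math

YIELD_STRESS = 300.0  # N/mm^2, the 0.2% proof stress quoted in the text

def von_mises_stress(s11, s22, s33, s12, s23, s13):
    """Equivalent (von Mises) stress from the six stress components."""
    return math.sqrt(0.5 * ((s11 - s22) ** 2 + (s22 - s33) ** 2 + (s33 - s11) ** 2)
                     + 3.0 * (s12 ** 2 + s23 ** 2 + s13 ** 2))

def is_plastic(stress_components, sigma_y=YIELD_STRESS):
    """True if the stress state lies on or outside the yield surface."""
    return von_mises_stress(*stress_components) >= sigma_y

# uniaxial tension: 250 N/mm^2 stays elastic, 320 N/mm^2 exceeds the proof stress
print(is_plastic((250.0, 0.0, 0.0, 0.0, 0.0, 0.0)))  # False
print(is_plastic((320.0, 0.0, 0.0, 0.0, 0.0, 0.0)))  # True
```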
For describing the material behaviour of the rubber sealing the constitutive
model by Mooney-Rivlin is employed. The latter is sufficiently accurate,
because the strains in the rubber sealing do not exceed the limitations of this
constitutive model. Contact between the pipes and the locking bars as well as
between the pipes and the rubber sealing is taken into account by Coulomb-type
friction laws. The complete 3D FE-model for the prototype DN 200 K13 consists
of about 1.2 million degrees of freedom.
Figure 3:
In the first step of the FE-analysis the installation of the rubber sealing is
simulated. In this context, contact between the rubber sealing and the pipes has
to be taken into account. The deformed configuration of the rubber sealing after
installation, which was obtained from a simpler FE-simulation assuming axial
symmetry, was integrated into the 3D FE-model.
In the second step the installation of the locking bars is simulated. In this
context, contact between the locking bars and the pipes has to be taken into
account.
Subsequently, the internal pressure is increased step by step. Figure 4 shows
the predicted deformations of the joint (magnifying the displacements by a factor
of 10) for pressure levels of 185 bar and 225 bar. According to Figure 4, the
locking bar is clamped between the welding bed of the spigot pipe and the socket
pipe. Failure of convergence of the numerical simulation occurs at an internal
pressure exceeding 225 bar. Hence, the internal pressure of 225 bar indicates the
ultimate load of the joint. At this pressure level, large domains of the joint are
characterized by plastic material behaviour, in particular, the region in the
vicinity of the contact area between the locking bar and the socket pipe and the
region in the vicinity of the contact area between the welding bed of the spigot
pipe and the locking bar.
Figure 4:
3 Experimental investigation
The test set-up for the experimental investigation, shown in Figure 5, was
selected complying with the respective code requirements of the European
standard ÖNORM EN 545 [5]. Two connected pipe segments of 1 m length each
were supported by saddles. The latter were mounted on trolleys for reducing
restraint effects due to relative displacements of the pipe segments at the joint,
caused by the increasing internal pressure during the test.
Figure 5:
Figure 6:
Test set-up.
Figure 7:
Figure 8:
5 Summary
This paper focused on the development of pipe systems for high operating
pressures, consisting of individual pipes, made of ductile cast iron, which are
connected at the construction site by joints, assuring water tightness by a rubber
sealing. Since the joints are critical parts of such pipelines, a fully three-dimensional FE-model of a joint was developed for assessing the preliminary
design of a joint by numerical simulations. It was employed for studying the load
carrying behaviour of a joint up to failure. The numerical results, and, thus, the
preliminary design, were evaluated by an experimental investigation on a
prototype of a joint. In the test the structural behaviour was monitored by strain
gauges and displacement transducers. Comparison of the computed and
measured results and further numerical investigation of the impact of design
modifications on the structural behaviour led to the final design of a joint.
Acknowledgements
Financial support of this applied research project by the Austrian Research
Promotion Agency (FFG) is gratefully acknowledged. Furthermore, the work
was supported by the Austrian Ministry of Science (BMWF) as part of the
Uni-Infrastrukturprogramm of the Forschungsplattform Scientific Computing at
LFU Innsbruck.
References
[1] Titze, E., Duktile Gussrohre für Beschneiungsanlagen, Gussrohrtechnik, 37,
pp. 13-17, 2003.
[2] Titze, E., Extreme Belastungen: Planung und Bau einer Turbinenrohrleitung
aus duktilem Gusseisen unter Berücksichtigung bruchmechanischer
Bemessungsverfahren, Gussrohrtechnik, 32, pp. 58 ff., 1997.
[3] Hofstetter, G., Lehar, H., Niederwanger, G., Design of pile-supported
buried pipelines by a synthesis of FE ultimate load analyses and experimental
investigations, Finite Elements in Analysis and Design, 32, pp. 97-111, 1999.
[4] Lehar, H., Niederwanger, G., Hofstetter, G., FE ultimate load analyses of
pile-supported pipelines tackling uncertainty in a real design problem, in
Analyzing Uncertainty in Civil Engineering, Eds.: Fellin, W., Lessmann, H.,
Oberguggenberger, M., Vieider, R., Springer-Verlag: Berlin, pp. 129-163, 2005.
[5] ÖNORM EN 545, Ductile iron pipes, fittings, accessories and their joints
for water pipelines. Requirements and test methods, European Committee for
Standardization (CEN), B-1050 Brüssel, 2007.
Abstract
The work described in this paper forms part of a much larger investigation of
the behaviour of a newly developed type of lightweight aggregate concrete which
would be suitable for use as load-bearing concrete masonry units. The
experimental work investigated the effect of a high metakaolin (MK) content on
the mechanical behaviour of the newly modified lightweight aggregate concrete.
15% metakaolin and waste glass were used as partial replacements for ordinary
Portland cement and natural sand, respectively. A medium grade expanded clay,
type Techni Clay, was used as the coarse aggregate in the concrete mixes. Equal
amounts of waste glass with particle sizes of 0.5-1 and 1-2 mm were used
throughout this study. Unit weight and compressive and splitting tensile
strengths were measured at various ages in accordance with the relevant
British/EN standards. Fresh concrete properties were observed to assess the
workability. An assessment was carried out to identify the pozzolanic activity
of the metakaolin material. The test results were compared with the previously
obtained results for the control and lower-metakaolin-content concretes. The
test results showed that the metakaolin material had an explicit role in
improving the strength and unit weight of the modified lightweight concrete
mixes. Compressive and splitting tensile strengths increase with an increase in
the metakaolin content, while the opposite behaviour was recorded for the unit
weight. The metakaolin material showed a higher pozzolanic activity, which
overcame the reduction of compressive strength due to the negative effect of
the glass aggregate. However, the workability of the concrete mixes degraded at
higher metakaolin inclusion.
Keywords: lightweight aggregate concrete, metakaolin, waste glass, mechanical
behaviour.
doi:10.2495/CMEM110091
1 Introduction
In recent years, much research has been undertaken with the aim of improving
the mechanical properties of lightweight concrete. Improvements to the strength
and unit weight are the major keys to evolution of the lightweight concrete
behaviour. The strength characteristic represents the load bearing capacity of
concrete to support the applied load. Unit weight is an indicator for its lightness
and capability to afford thermal insulation when used within the internal and
external building elements. This orientation is also associated with
environmental aspects of the construction process, and the idea of
environmentally friendly solutions has emerged over the last few decades.
Due to the huge amounts of waste glass produced in the UK, using crushed or
ground glass aggregate is one of the most effective environmental treatments to
mitigate this waste. The features of glass aggregates are a granular particle
shape, a smooth surface texture and a very low tendency to absorb the mixing
water. These characteristics produce a dry consistency and a lower strength
concrete mix. There is a slight increase in the alkali-silica reaction (ASR) of
glass aggregate concrete compared with normal aggregate concrete. However, this
reaction can be reduced by using mineral by-product materials.
In the concrete mixes, the crushed and ground glass aggregates are usually used
as a partial replacement for the coarse or fine aggregate and Portland cement,
respectively [1].
Metakaolin is a versatile mineral by-product material which can be used to
improve the strength and durability of concrete mixes. It has a high pozzolan
content, and its use as a construction material has sharply increased, with
consumption growing every year on a global basis. Metakaolin is produced by
calcination of kaolinitic clay at 650-800°C. Its surface area is about 15 times
larger than that of Portland cement, with a specific gravity of about
2.4 g/cm3. It is usually used as a partial replacement for Portland cement.
Since the use of metakaolin reduces the emissions of CO2 during cement
manufacture, it has been considered in industry for the production of precast
concrete [2-4].
Expanded clay is an artificial lightweight aggregate which is considered an
essential approach to reducing the demand for natural aggregate. Concrete mixes
produced by this approach provide several advantages: (1) reduction of the dead
load of the building, which will reduce the dimensions of structural members,
giving a reduction in the quantity of reinforcement; (2) lighter and smaller
pre-cast elements, which lead to less expensive casting, handling and
transportation operations; (3) larger usable space due to reductions in the
sizes of columns, beams and slabs; (4) high thermal insulation and increased
fire resistance [5].
A newly modified lightweight concrete containing a combination of expanded
clay, waste glass and metakaolin was investigated and, herein, the mechanical
behaviour at a high metakaolin ratio is of concern.
2 Experimental program
The experimental program of this research aims to investigate the mechanical
behaviour of high metakaolin lightweight concrete which would be suitable for
use in load-bearing concrete walls.
A medium grade expanded clay, type Techni Clay, was used as the coarse
aggregate in the concrete mixes. This type of expanded clay was produced by the
Plasmor Concrete Product Company. It has a typical moisture content and
particle density of 20% w/w and 550 kg/m3, respectively. Figure 1 shows the
grading of expanded clay.
Natural sand for building purposes was used as the fine aggregate to produce
the concrete mixes. The grading of the sand is shown in Figure 1.
Figure 1: Grading curves (% passing versus sieve size, mm) of the expanded
clay and the natural sand.
Waste glass with particle sizes of 0.5-1 and 1-2 mm was used as a partial
replacement for natural sand at a ratio of 15% by volume. The waste glass was
provided by the Specialist Aggregate Ltd Company and has a specific gravity
of 2.52.
Metakaolin was used as a partial replacement for ordinary Portland cement at a
ratio of 15% by weight.
The mix proportions were 1:0.76:1.5 by volume, which was equivalent to
1:1.27:0.63 by weight, with a 50 mm slump. The cement content and W/C ratio of
the control concrete (0% glass + 0% metakaolin) were 392 kg/m3 and 0.45,
respectively. Figure 2 shows the materials which were used in producing the
lightweight concrete mixes.
The mixing operation was carried out according to BS EN 12390-2 [6] using a
0.1 m3 vertical portable mixer. The fresh concrete was cast in moulds.
Figure 2:
After casting, the samples were kept in laboratory conditions and covered with
nylon sheet to ensure humid air around the specimens. After 24 hours, the
samples were demoulded, marked and immersed in a basin of water at a
temperature of 20 ± 2°C until the date of the test.
Before performing the experimental tests, the apparent moisture was removed
from the specimens, and the surfaces were cleaned of any loose grit or other
materials that could be in contact with the loading plate.
The mechanical behaviour tests included unit weight and compressive and
splitting tensile strengths. All these tests were conducted according to the
relevant British/EN standards [7-11].
An average value of three specimens was adopted for each test result. Short and
long term behaviours were investigated. The test results were compared with the
previously obtained results for the control and lower-metakaolin-content
concrete samples.
Figure 3: Density (kg/m3) versus metakaolin ratio (%, 0-20) at 7, 28, 90 and
180 days.
Figure 4: Density (kg/m3) versus age (days, 0-200) for the 0%, 5%, 10% and
15% MK mixes.
Figure 5: Compressive strength (MPa) versus metakaolin ratio (%, 0-20) at 7,
28, 90 and 180 days.
Figure 6: Compressive strength (MPa) versus age (days, 0-200) for the 0%, 5%,
10% and 15% MK mixes.
(3)
where s depends on the type of cement and the curing temperature; it lies in
the range 0.2-0.38. The s value fitted for the 15% metakaolin concrete mix was
0.29, with an R-squared of 80%.
The pozzolanic reactivity of the metakaolin material, which is described in
BS EN 196-5 [16], was assessed from the compressive strength values as in [17].
The specific strength ratio R, which is an indicator of the contribution of the
mineral admixture to strength, is defined as

R = f_c / C,  (4)

where f_c is the compressive strength and C is the percentage of cement in the
binder (C = 100 for the control mix).
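Assuming a strength-development law of the familiar form f_c(t) = f_c(28) · exp[s(1 − √(28/t))], with the fitted value s = 0.29 quoted for the 15% metakaolin mix, the age dependence can be sketched as follows. The exact form of the law is an assumption; only s = 0.29 and the 28-day strength come from the text and Table 1.

```python
import math

def strength_at_age(fc28, t_days, s=0.29):
    """Assumed CEB-FIP-type strength development:
    f_c(t) = f_c(28) * exp(s * (1 - sqrt(28 / t)));
    s = 0.29 is the fitted value quoted in the text for the 15% MK mix."""
    return fc28 * math.exp(s * (1.0 - math.sqrt(28.0 / t_days)))

fc28 = 22.52  # MPa, 28-day strength of the 15% G + 15% MK mix from Table 1
print(strength_at_age(fc28, 28))    # == fc28: the curve passes through f_c(28)
print(strength_at_age(fc28, 180))   # late-age estimate; grows toward fc28*exp(s)
```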
Table 1: Calculated values of R, Rp, K and P for the control and modified
concrete mixes (f_c = compressive strength).

Mix             Age (days)  f_c (MPa)  R      Rp     K      P (%)
Control         7           16.06      0.160  0.000  1.000  0.000
Control         28          18.53      0.185  0.000  1.000  0.000
Control         90          19.93      0.199  0.000  1.000  0.000
Control         180         20.8       0.208  0.000  1.000  0.000
15% G + 5% MK   7           15.1       0.158  0.001  0.940  -1.03
15% G + 5% MK   28          19.53      0.205  0.020  1.053  9.86
15% G + 5% MK   90          20.82      0.219  0.019  1.044  9.04
15% G + 5% MK   180         23.04      0.242  0.034  1.107  14.23
15% G + 10% MK  7           15.45      0.171  0.011  0.962  6.446
15% G + 10% MK  28          19.93      0.221  0.036  1.075  16.32
15% G + 10% MK  90          22.07      0.245  0.045  1.107  18.70
15% G + 10% MK  180         22.75      0.252  0.044  1.093  17.71
15% G + 15% MK  7           17.64      0.207  0.046  1.098  22.61
15% G + 15% MK  28          22.52      0.264  0.079  1.215  30.05
15% G + 15% MK  90          24.24      0.285  0.085  1.215  30.09
15% G + 15% MK  180         25.09      0.295  0.087  1.206  29.53
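The K and P values in Table 1 can be reproduced from the compressive strengths alone, under the assumption (consistent with the tabulated numbers to rounding) that R = f_c/C with C the cement percentage of the binder, K = f_mix/f_control, and P = 100·(R − R_control)/R. This sketch uses the tabulated strengths of the 15% G + 15% MK mix:

```python
# Reproducing the K and P columns of Table 1 from the compressive strengths.
# Assumed definitions (they match the tabulated numbers to rounding):
#   R = f_c / C          with C = cement percentage of the binder
#   K = f_mix / f_control
#   P = 100 * (R - R_control) / R
AGES = (7, 28, 90, 180)
F_CONTROL = {7: 16.06, 28: 18.53, 90: 19.93, 180: 20.8}   # MPa, control mix
F_MK15 = {7: 17.64, 28: 22.52, 90: 24.24, 180: 25.09}     # MPa, 15% G + 15% MK

def k_and_p(f_mix, mk_percent):
    c = 100.0 - mk_percent            # cement percentage after MK replacement
    out = {}
    for age in AGES:
        r = f_mix[age] / c
        r_ctrl = F_CONTROL[age] / 100.0
        k = f_mix[age] / F_CONTROL[age]
        p = 100.0 * (r - r_ctrl) / r
        out[age] = (round(k, 3), round(p, 2))
    return out

results = k_and_p(F_MK15, 15)
print(results[28])   # (1.215, 30.06); Table 1 lists (1.215, 30.05)
```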
Figure 7: Splitting tensile strength (MPa, 1.5-2.7) versus metakaolin ratio
(0-20%) at 120 and 180 days.
4 Conclusions
The experimental measurements of the high metakaolin lightweight aggregate
concrete were reported, discussed and then compared with the mechanical
behaviour at lower metakaolin contents. The main results can be summarized as
follows:
- The high metakaolin lightweight aggregate mix exhibited inferior workability
due to its role in accelerating the hydration of the cement, while an adequate
workability was recorded for the lower metakaolin contents.
- A slight decrease in the density values of the modified concrete mixes
compared with the control mix was observed, and the largest reduction was at
10% metakaolin content for all test ages.
Acknowledgements
The authors would like to express their gratitude to the Plasmor Concrete
Product Company, staff of the Civil Engineering Laboratory of the University of
Manchester and the Ministry of Higher Education and Scientific Research in
Iraq.
References
[1] Miao, L., Incorporating ground glass in self-compacting concrete,
Construction and Building Materials, 25, 919-925, 2011.
[2] Eva, V., et al., High performance concrete with Czech metakaolin:
Experimental analysis of strength, toughness and durability characteristics,
Construction and Building Materials, 24, 1404-1411, 2010.
[3] Parande, A., et al., Metakaolin: a versatile material to enhance the
durability of concrete: an overview, Institute of Civil Engineering, doi:
10.1680/stco, 2009.
[4] Wild, S., et al., Relative strength, pozzolanic activity and cement
hydration in superplasticised metakaolin concrete, Cement and Concrete
Research, 26, No. 10, 1537-1544, 1996.
[5] Kayali, O., Fly ash lightweight aggregates in high performance concrete,
Construction and Building Materials, 22 (12): 2393-2399, 2008.
[6] BS EN 12390-1, Making and curing specimens for strength test, British
Standards: 1-12, 2009.
[7] BS EN 206-1, Concrete: Specification, performance, production and
Conformity, British Standards: 1-74, 2000.
[8] BS EN 12390-7, Testing hardened concrete, Part 7: Density of hardened
concrete, British Standards: 1-14, 2009.
[9] BS EN 12390-3, Testing hardened concrete, Part 3: Compressive strength of
test specimens, British Standards: 1-22, 2009.
[10] BS EN 12390-6, Testing hardened concrete, Part 6: Splitting tensile
strength of test specimens, British Standards: 1-14, 2009.
[11] BS ISO 1920-10, Testing of concrete, Part 10: Determination of static
modulus of elasticity, British Standards: 1-12, 2009.
Abstract
A new methodology of physical and FE modelling and simulation of bridge-track-moving train (BTT) systems has been developed with the use of commercial CAE
systems. The methodology is related to composite (steel-concrete) bridges,
ballasted tracks and high-speed trains. In the methodology, the Altair
HyperMesh, LS-DYNA, LS-PrePost and HyperView software was applied. The
methodology is based on homogenization of the reinforced concrete (RC) platform
slab, the RAIL_TRACK and RAIL_TRAIN modules of LS-DYNA for simulating the
moving train-track interaction, non-linear modelling of rail fastenings and
crushed stone ballast, and the application of cylindrical and revolute
constrained joints and discrete springs and dampers for modelling suspensions
in rail vehicles. For experimental validation of the numerical modelling and
simulation of BTT systems, the KNI 140070 composite viaduct and the EuroCity
EC 114 train moving at 160 km/h were selected. The experimental setup contained
a Keyence LK-G 157 system (CCD laser displacement sensors), a PULSE system
(acceleration sensors), and a PHANTOM v12 high-speed camera. According to the
experiment plan, selected vertical displacements and vertical and horizontal
accelerations vs. time were measured. The simulated time-histories of
displacements and accelerations have been compared with the respective
experimental diagrams. The results have proved that the validation is positive.
Keywords: railway bridge, ballasted track, high-speed train, numerical
modelling, simulation, experimental tests, validation.
doi:10.2495/CMEM110101
1 Introduction
Nowadays, serious problems with the durability protection of bridge
superstructures, tracks and approach zones loaded by high-speed trains are
observed. First of all, this results from the complexity of bridge-track-moving
train (BTT) systems, whose nonlinear models are described by a huge number of
parameters. Many of these parameters, describing fasteners, ballast, subsoil
layers, rail-vehicle suspensions, track irregularities, settlements etc., are
only estimated and difficult to identify. Producers and research institutions
involved in modern high-speed trains do not bring to light structural details,
values of parameters or their research results. These circumstances make the
exact prediction of the dynamic response of bridges to moving trains very
difficult.
In the second half of the 20th century scientists mostly developed
analytical-numerical methods in the dynamics of railway bridges, summarized in
monographs (Klasztorny [1, 2]). Simple problem-oriented computer codes were
created and used for simulations. At present, one may observe various numerical
approaches to the dynamics of railway bridges, but commercial CAE systems based
on FEM are, in general, not used in this field (Klasztorny [2], Yang et al.
[3], Cheng et al. [4], Au et al. [5], Zhang et al. [6], Song and Choi [7]).
Summing up, vibrations of BTT systems may be considered 3D but symmetric with
respect to the vertical longitudinal plane of symmetry. Applications of
advanced CAE systems in the dynamics of bridges are at an early stage of
development.
Figure 1:
St3M steel. The bottom flanges have been strengthened with additional cover plates. The
thickness of a new RC platform ranges from 0.29 m in the track axis to 0.25 m at
the side wall. The platform is made of C35 concrete reinforced with AII/18G2-b
steel rebars. The side wall is made of C30 concrete and has vertical dilatations at
, , and of the span length. The RESTON rolling bearings (on the left
support) can shift up to 50 mm in the longitudinal direction. Bearings under the
left inside main beam are unmovable in the lateral direction; the remaining
bearings can displace in the lateral direction up to 20 mm.
A scheme of the longitudinal section of the KNI 140070 viaduct is depicted in
fig. 2 where all elements taken in the FE modelling are marked, i.e. the homogenized
platform (the slab and the walls), the main beams set, the vertical ribs welded to
webs of the main beams, the horizontal bearing plates welded to the bottom
flanges of the main beams (over the bearings).
Figure 2: Scheme of the longitudinal section of the KNI 140070 viaduct on
Track No. 1 between the PSARY and GORA WLODOWSKA stations (dimensions in mm).
Figure 3: The ballasted track in the KNI 140070 bridge zone: the longitudinal
section (a); the cross-section in the approach zone (b); the cross-section over
the bridge span (c) (dimensions in mm).
The EC 114 PRAHA EuroCity train, moving at 160 km/h over the bridge, has been
taken into consideration. The trainset consists of 6 units. Car lengths, centre
pin distances and wheel set distances are reflected in fig. 4. All units are
equipped with two independent two-axle bogies.
Figure 4: Car lengths, centre pin distances and wheel set distances of the
EC 114 trainset (dimensions in mm).
Figure 5: The original and symmetrized cross-sections of the RC platform with
reinforcement details (rebar diameters and spacings; dimensions in mm).
Figure 6: Longitudinal scheme showing the dilatations and the bearing axis
(dimensions in mm).
The following assumptions have been made in the physical modelling of the
track. The rail-line axis is rectilinear, and in the unloaded state the rails
are rectilinear. No rail surface irregularities appear. Vibrations of the track
are small and symmetric with respect to the vertical xz plane. The rails are
prismatic beams deformable in flexure and shear, made of linearly viscoelastic
material. Layers of the embankment are considered as a linearly viscoelastic
material continuum.
Rail fasteners were simulated using massless one-dimensional discrete nonlinear
spring and damper elements oriented vertically. The embankment has been
reflected approximately by a rectangular prism with unmovable side and bottom
boundary surfaces and meshed using 8-node, 24-DOF solid elements. Sleepers are
modelled as rigid beams vibrating only vertically, using finite beam elements
and respective constraints. The ballast layer has been divided into cuboid
columns in coincidence with the FE mesh of the parts under the ballast
(9 ballast columns under each sleeper). Each ballast column was reflected by a
vertical set of nonlinear spring and damper elements. The lumped mass
distribution of the ballast has been put into the bottom set of nodal points
contacting the platform slab and the top subsoil layers.
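To make the spring-and-mass idealization above concrete, here is a minimal sketch of the vertical eigenfrequencies of one sleeper-ballast column unit (sleeper mass on a ballast-column spring, lumped ballast mass on a subsoil spring to a rigid base). All stiffness and mass values are invented for illustration; they are not the paper's identified parameters.

```python
import math

# Illustrative two-DOF unit: sleeper mass ms on ballast spring kbs, lumped
# ballast mass mbs on subsoil spring kss. ALL values are assumptions.
ms, mbs = 140.0, 300.0      # kg: sleeper share per column, ballast lumped mass
kbs, kss = 1.8e8, 2.5e8     # N/m: ballast column and subsoil stiffnesses

# Undamped eigenproblem det(K - w^2 M) = 0 with
#   M = diag(ms, mbs),  K = [[kbs, -kbs], [-kbs, kbs + kss]]
# expands to the quadratic a*(w^2)^2 + b*(w^2) + c = 0:
a = ms * mbs
b = -(ms * (kbs + kss) + mbs * kbs)
c = kbs * kss
disc = math.sqrt(b * b - 4.0 * a * c)
w2_low, w2_high = (-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)
f_low = math.sqrt(w2_low) / (2.0 * math.pi)
f_high = math.sqrt(w2_high) / (2.0 * math.pi)
print(f_low, f_high)   # the two vertical natural frequencies in Hz
```

In the actual model each column carries nonlinear springs and dampers, so a linear eigenanalysis like this only indicates the order of magnitude of the vertical frequencies involved.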
Values of geometrical and mechanical parameters of the ballasted track parts
are extracted from Klasztorny [1, 2], Niemierko et al. [11], and refs. [10, 12, 13].
The RAIL_TRACK and RAIL_TRAIN modules available in LS-DYNA [15] were applied
for approximate modelling of the train-track interaction (without simulation of
wheel rotation). The wheel-rail contact stiffness amounts to 2 MN/mm, as
suggested in ref. [15]. Hughes-Liu beam elements (2-node elements with 12 DOFs
[15]) were used for FE modelling of the rails bent in the vertical planes. In
order to declare a set of integration points for the rail cross-section, the
INTEGRATION card has been applied. For each rail a substitute asymmetric
double-tee cross-section was assumed, denoted in ref. [15] as Type 10: I-Shape
1. The actual values of the centre-of-gravity location, the area and the
geometrical moment of inertia with respect to the horizontal principal axis of
the cross-section have been preserved.
The FE numerical model of the ballasted track has been created in the
HyperMesh and LS-PrePost software. The main dimensions of the track, the
abutments and the embankment are depicted in fig. 3, whereas one of the FE mesh
schemes of the track is partly reflected in fig. 7.
Figure 7: The side view of the physical and FE model of the track in the left
abutment zone of the KNI 140070 viaduct, showing the repeating fastener units
(kf, cf), sleeper masses ms, and ballast units (kbs, cbs, mbs) at 600 mm
spacing (dimensions in mm).
The total length of the track section modelled numerically was equal to 810 m
and contains the following sections: the initial train position (with zero
static wheel pressures), the zone of increasing the static wheel pressures up
to the full values, the train-track vibration stabilization zone (lasting 1 s),
the 60 m long main zone (including the approach zones and the bridge), the zone
of bridge free vibrations (lasting 1 s), and the final train position zone. In
total, the FE track model contains ~141,800 beam, shell, brick and discrete
elements and ~21,700 point mass elements.
Modelling of the EC 114 trainset has been performed in LS-PrePost software
[15] under the assumption that vibrations of the train units are symmetric with
respect to the main longitudinal vertical plane of symmetry. A numerical model of
the EC 114 trainset consists of the following components: carbodies, bogie frames,
wheelsets, and vertical massless discrete linear viscoelastic elements reflecting the
primary and secondary suspension systems. All mass components were modelled
using shell and beam elements treated as rigid bodies. Wheelsets have been reflected
Figure 8: Carbody, bogie frame and wheelset of the FE vehicle model, connected by translational joints (CONSTRAINED_JOINT_CYLINDRICAL) and rotational joints (CONSTRAINED_JOINT_REVOLUTE).
The FE models of carbodies, bogie frames and wheelsets were created in full
conformity with the actual vehicles with respect to their masses and principal
mass moments of inertia. In total, the FE model of the 6-unit EC 114 train
contains ~950 beam, shell and discrete finite elements and ~50 point masses. In
the simulations, the DYNAMIC_RELAXATION option [15] has been replaced
with loading the system by a set of vertical forces applied at the moving
vehicle–rail contact points according to the formula:
P(t) = (P0/2) [1 − cos(πt/t0)],   (1)

where P0 is the static pressure of a single wheel on the rail head and t0 = 2 s is
the time over which the static pressures increase up to their full values
(0 ≤ t ≤ 2 s).
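The half-cosine ramp of eqn (1) can be sketched numerically. The function below is an illustration of the loading law only, not LS-DYNA input; the name `wheel_load` is ours.

```python
import math

def wheel_load(t, p0, t0=2.0):
    """Quasi-static load ramp P(t) = (P0/2) * (1 - cos(pi*t/t0)).
    After t0 the full static wheel pressure P0 is held constant."""
    if t >= t0:
        return p0
    return 0.5 * p0 * (1.0 - math.cos(math.pi * t / t0))
```

The cosine shape gives zero load rate at t = 0 and t = t0, which is why such ramps avoid exciting spurious transients before the stabilization zone.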
A constant service velocity of the vehicle FE model was declared in two
steps, with the options INITIAL_VELOCITY for t = 0 and PRESCRIBED_MOTION_
RIGID for t > 0, applied to all carbody and bogie FE models [15].
Selected output quantities were registered using the HISTORY_NODE_SET and
HISTORY_NODE_SHELL options [15]. The sampling frequency amounts to 20 Hz
in the zone of increasing static wheel pressures, 200 Hz in the train–track
vibration stabilization zone and 1,000 Hz in the 60 m long main zone. The
computations have been made using the 120-P supercomputer. At the service
velocity of 160 km/h the simulated real time equals 14.4 s, while the CPU time
amounts to ~90 h. Selected time histories of displacements and accelerations
were created using the LS-PrePost software.
Figure 9: The bridge superstructure (beams No. 1–No. 4; spans L1 and L2; main dimensions in mm: 3600, 3600, 14,400) with measurement points A1–A7 in cross-sections A–A, B–B and C–C.
Figure 10: Time history over t = 2.0–7.5 s.
Figure 11: Time history over t = 2.0–7.5 s.
Table 1: Comparison of experimental and simulated values.
Experiment: 2.23, 2.25
FE analysis: 2.18, 2.23
Figure 12: Time history over t = 3.0–7.5 s (vertical axis from −10 to 10).
Figure 13: Time history over t = 3.5–7.5 s.
6 Conclusions
Based on the results of the validation test, the following main conclusions have
been formulated. The experimental validation of the numerical modelling and
simulation of the BTT systems, examined on the KNI 140070 composite viaduct
located on the Polish Central Main Line and loaded by the EC 114 PRAHA express
train moving at 160 km/h, is positive. The simulated (numerical) and experimental
dynamic responses in displacements of the bridge superstructure are in good
conformity, both qualitatively and quantitatively. The simplifications assumed in
the nonlinear physical modelling of the BTT systems are acceptable for one-track
simply-supported bridge spans loaded by high-speed passenger trains. The
simulated and experimental dynamic responses in accelerations indicate that fully
symmetric bridge superstructures vibrate longer than the slightly asymmetric real
superstructures. The examined modernized composite viaduct is dynamically
insensitive at the 160 km/h service speed of the EC 114 train.
Note that the viaduct has been redesigned in order to adapt it to service speeds
of 300–350 km/h. Further investigations should be aimed at control and
validation tests at higher train service velocities.
Acknowledgements
This paper is a part of research project No. N N509 2923 35, carried out by the
Military University of Technology, Poland, in the period 2008–2011. The financial
support of the Ministry of Science and Higher Education, Poland, is gratefully
acknowledged.
References
[1] Klasztorny, M., Vibrations of single-track railway bridges under high-speed
trains [in Polish], Wroclaw University of Technology Press: Wroclaw, 1987.
[2] Klasztorny, M., Dynamics of beam bridges loaded by high-speed trains
[in Polish], WNT Press: Warsaw, 2005.
[3] Yang, Y.-B., et al., Vibrations of simple beams due to trains moving at high
speeds, Eng. Struct., 19(11), pp. 936–944, 1997.
[4] Cheng, Y.S., et al., Vibration of railway bridges under a moving train by
using bridge-track-vehicle element, Eng. Struct., 23(12), pp. 1597–1606,
2001.
[5] Au, F.T.K., et al., Impact study of cable-stayed bridge under railway traffic
using various models, J. Sound & Vibration, 240(3), pp. 447–465, 2001.
[6] Zhang, Q.-L., et al., Numerical simulation of train-bridge interaction
dynamics, Computers & Structures, 79, pp. 1059–1075, 2001.
Abstract
This paper deals with the estimation of the moisture diffusivity, together with
other thermophysical properties, as well as the heat and mass transfer
coefficients of a convective drying body, on the basis of single thermocouple
temperature measurements by using an inverse approach. Potato and apple slices
have been chosen as representative drying bodies with significant shrinkage
effects. A mathematical model of the drying process of shrinking bodies has
been applied. The Levenberg-Marquardt method and a hybrid optimization
method of minimization of a resulting least-squares norm were used to solve the
present inverse problem. The experiments have been conducted on the
experimental setup that is designed to simulate an industrial convective dryer.
An analysis of the influence of the drying air speed, temperature and relative
humidity, the drying body dimensions, and the drying time on the estimation of
the unknown parameters enabled the design of appropriate experiments, which
have been conducted as well. The estimated moisture diffusivities are compared with
the results published by other authors. The experimental transient temperature
and moisture content changes during the drying are compared with numerical
solutions.
Keywords: inverse approach, thermophysical properties, drying.
doi:10.2495/CMEM110111
1 Introduction
In the last few decades, the inverse approach to parameter estimation has become
widely used in various scientific disciplines. In this paper, the application of inverse
concepts in drying is analyzed. A mathematical model of the drying process of
shrinking bodies has been applied where the moisture content and temperature
field in the drying body are expressed by a system of two coupled partial
differential equations. The system of equations incorporates several coefficients
that are functions of temperature and moisture content and must be determined
experimentally. All the coefficients, except for the moisture diffusivity, can be
relatively easily determined by experiments. The main problem in the moisture
diffusivity determination by classical or inverse methods is the difficulty of
moisture content measurements. We have recently analyzed a method for
moisture diffusivity estimation by the temperature response of a drying body [1–6].
The main idea of this method is to make use of the interrelation between the
heat and mass transport processes within the convective drying body and from its
surface to the surroundings. Then, the moisture diffusivity, together with other
thermophysical properties of the body, as well as the heat and mass transfer
coefficients can be estimated on the basis of an accurate and easy to perform
single thermocouple temperature measurement by using an inverse approach.
Figure 1: Scheme of the drying body (an infinite flat plate of thickness 2L; coordinate x; drying air velocity va and temperature Ta).
In the case of an infinite flat plate the unsteady temperature, T(x, t), and
moisture content, X(x, t), fields in the drying body are expressed by the following
system of coupled nonlinear partial differential equations for energy and
moisture transport
c ρs ∂T/∂t = ∂/∂x (k ∂T/∂x) + ΔH ∂(ρs X)/∂t   (1)

∂(ρs X)/∂t = ∂/∂x (D ρs ∂X/∂x)   (2)
vs = 1/ρs = V/ms = (1 + β′X)/ρb0   (3)

where ms is the mass of the dry material of the drying body, V is the volume of
the drying body, ρb0 is the density of the fully dried body and β′ is the shrinkage
coefficient.
The problem of the moving boundaries due to the changes of the dimensions
of the body during the drying was resolved by introducing the dimensionless
coordinate

ξ = x / L(t)   (4)
Substituting the above expression for ρs (= 1/vs) and ξ into eqns. (1) and (2)
and rearranging, the resulting system of equations for the temperature and
moisture content prediction becomes

∂T/∂t = (k/(ρs c)) (1/L²) ∂²T/∂ξ² + (ξ/L)(dL/dt) ∂T/∂ξ + (ΔH ρs/(c ρb0)) [∂X/∂t − (ξ/L)(dL/dt) ∂X/∂ξ]   (5)

∂X/∂t = (D ρb0/ρs) (1/L²) ∂²X/∂ξ² + (ρb0/(ρs² L²)) ∂(D ρs)/∂ξ · ∂X/∂ξ + (ξ/L)(dL/dt) ∂X/∂ξ   (6)
t = 0: T(ξ, 0) = T0, X(ξ, 0) = X0.   (7)
−(k/L) ∂T/∂ξ|ξ=1 + jq − ΔH(1 − ε) jm = 0   (8)

−(D ρs/L) ∂X/∂ξ|ξ=1 + jm = 0   (9)
The convective heat flux, jq(t), and mass flux, jm(t), on these surfaces are

jm = hD (Cξ=1 − Ca)   (10)

jq = h (Ta − Tξ=1)   (11)

where h is the heat transfer coefficient, hD is the mass transfer coefficient, Ta
is the temperature of the drying air, and Tξ=1 is the temperature on the surfaces of
the drying body.
The water vapour concentration in the drying air, Ca, is calculated from

Ca = φ ps(Ta) / (Rw Tk,a)   (12)

where φ is the relative humidity of the drying air and ps is the saturation
pressure. The water vapour concentration of the air in equilibrium with the surface
of the body exposed to convection is calculated from
Cξ=1 = a(Tξ=1, Xξ=1) ps(Tξ=1) / (Rw Tk,ξ=1)   (13)
The water activity, a, or the equilibrium relative humidity of the air in contact
with the convection surface at temperature Tξ=1 and moisture content Xξ=1 is
calculated from experimental water sorption isotherms.
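Eqn (12) can be evaluated directly once a saturation-pressure correlation is chosen. The Magnus-type correlation below is an assumption for illustration; the paper does not state which ps(T) correlation was used.

```python
import math

RW = 461.5  # specific gas constant of water vapour, J/(kg K)

def saturation_pressure(T_c):
    """Magnus-type approximation of the saturation pressure p_s [Pa]
    over water, with T_c in degrees Celsius (an assumed correlation)."""
    return 611.2 * math.exp(17.62 * T_c / (243.12 + T_c))

def vapour_concentration(phi, T_c):
    """Eqn (12): C_a = phi * p_s(T_a) / (R_w * T_k,a), in kg/m^3;
    phi is the relative humidity (0..1), T_c the air temperature in C."""
    return phi * saturation_pressure(T_c) / (RW * (T_c + 273.15))
```

The surface concentration of eqn (13) follows the same pattern, with the water activity a(T, X) from the sorption isotherm in place of φ.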
The boundary conditions on the mid-plane of the drying slice are

∂T/∂ξ|ξ=0 = 0,  ∂X/∂ξ|ξ=0 = 0.   (14)
3 Inverse approach
For the inverse problem of interest here, the thermophysical properties and the
boundary condition parameters of a drying body are regarded as unknown
parameters.
The estimation methodology used is based on the minimization of the
ordinary least-squares norm

E(P) = [Y − T(P)]^T [Y − T(P)]   (15)
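The minimization of norm (15) can be illustrated with a basic Levenberg-Marquardt loop. The exponential "temperature response" used as the model below is a stand-in for the actual coupled drying model, and all names and numbers are illustrative, not the paper's implementation.

```python
import numpy as np

def levenberg_marquardt(model, p0, x, y, n_iter=100, lam=1e-3):
    """Minimize E(P) = [Y - T(P)]^T [Y - T(P)] with a basic
    Levenberg-Marquardt loop; the Jacobian is approximated by
    forward finite differences."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - model(x, p)                                # residual Y - T(P)
        J = np.empty((x.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-7 * max(1.0, abs(p[j]))
            J[:, j] = (model(x, p + dp) - model(x, p)) / dp[j]
        JTJ = J.T @ J
        step = np.linalg.solve(JTJ + lam * np.diag(np.diag(JTJ)), J.T @ r)
        p_new = p + step
        if np.sum((y - model(x, p_new)) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5                      # accept, relax damping
        else:
            lam *= 2.0                                     # reject, increase damping
    return p

# stand-in "temperature response": exponential relaxation toward air temperature
def temp_model(t, p):
    T0, Ta, k = p
    return Ta + (T0 - Ta) * np.exp(-k * t)

t = np.linspace(0.0, 10.0, 50)
true_p = np.array([17.5, 58.0, 0.4])
y = temp_model(t, true_p)                                  # noiseless synthetic data
est = levenberg_marquardt(temp_model, [10.0, 50.0, 0.1], t, y)
```

In the paper the forward model T(P) is the numerical solution of eqns (5)-(6), so each Jacobian column costs one extra solve of the drying problem.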
4 Experimental
Real experiments have been conducted to investigate the applicability of the
method to food processing involving the drying of thin flat samples. The
experiments have been conducted on the experimental setup that is designed to
simulate an industrial convective dryer.
Drying of approximately three-millimetre-thick potato or apple slices has
been examined. The slices have been in contact with the drying air at the top
and the bottom surfaces. Two shelves, fig. 2, each holding three moist slices
Figure 2: Side view and top view of the sample shelf in the drying air stream (air temperature Ta).
drying air. All other quantities appearing in the direct problem formulation were
taken from published data of other authors.
Moisture diffusivity of foods is very often considered as an Arrhenius-type
temperature function [9, 10]
D = D0 exp[-E0/(R Tk)]
(17)
with constant values of the Arrhenius factor, D0, and the activation energy for
moisture diffusion, E0.
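Eqn (17) can be evaluated directly; the D0 and E0 magnitudes below are placeholders, not the values estimated in the paper.

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def arrhenius_diffusivity(T_k, D0, E0):
    """Moisture diffusivity D = D0 * exp(-E0 / (R * T_k)), eqn (17).
    D0 [m^2/s] and E0 [J/mol] are material constants; T_k in kelvin."""
    return D0 * math.exp(-E0 / (R * T_k))
```

Plotting log D against 1/Tk gives a straight line of slope -E0/(R ln 10), which is why the comparison in fig. 3 is drawn on those axes.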
Table 1 shows the computationally obtained parameters and the RMS error for
potato, for experiment P1: Ta = 58.13 °C, 2L0 = 3.14 mm, X0 = 4.80 kg/kg and
T0 = 17.53 °C, and for apple, for experiment A1: Ta = 60.17 °C, 2L0 = 2.96 mm,
X0 = 6.46 kg/kg and T0 = 16.91 °C.
In fig. 3 the estimated moisture diffusivities are compared with the results
published by other authors [11].
In fig. 4 the experimental transient temperature reading, Tx=0, and the
experimental volume-averaged moisture content change during drying of potato
are compared with numerical solutions for the estimated parameters. A similar,
very good agreement was obtained for apple slice as well.
Table 1: Estimated parameters and RMS error.
Apple: 63.905; 50.215; 24.137; 2.70; 0.1011; 0.64
Figure 3: Estimated moisture diffusivities (our results) for potato and apple compared with literature data: D [m²/s] versus 1/Tk [K⁻¹].
Figure 4: Experimental (Texp, Xexp) and numerical (Tnum, Xnum) transient temperature T [°C] and volume-averaged moisture content X [kg/kg db] during drying of potato; drying air temperature Ta; time in minutes.
6 Conclusions
It can be concluded that in the convective drying experiments of apples and
potatoes it is possible, based on a single thermocouple temperature response, to
estimate simultaneously the moisture diffusivity, the convection heat and mass
transfer coefficients, and the relative humidity of the drying air.
Estimated moisture diffusivities compare well with the values obtained by
other authors who utilized different methods.
Very good agreement between the experimental and numerical temperature
and volume-averaged moisture content changes during drying has been obtained.
References
[1] Kanevce, G.H., Kanevce, L.P. & Dulikravich, G.S., Moisture diffusivity
estimation by temperature response of a drying body, Proc. of the 2nd Int.
Conf. on Inverse Problems in Engineering Mechanics, eds. M. Tanaka &
G.S. Dulikravich, Elsevier: Amsterdam, pp. 43-52, 2000.
[2] Kanevce, G.H., Kanevce, L.P. & Dulikravich, G.S., An inverse method for
drying at high mass transfer Biot number, Proc. of the HT03 ASME
Summer Heat Transfer Conference, Las Vegas, Nevada, USA, ASME
paper HT2003-40146, 2003.
[3] Kanevce, G.H., Kanevce, L.P., Dulikravich, G.S. & Orlande, H.R.B.,
Estimation of thermophysical properties of moist materials under different
drying conditions, Inverse Problems in Science and Engineering, 13(4),
pp. 341-354, 2005.
[4] Kanevce, G.H., Kanevce, L.P. & Dulikravich, G.S., Application of inverse
concepts to drying, Thermal Science, 9(2), pp. 31-44, 2005.
[5] Kanevce, G.H., Kanevce, L.P., Mitrevski, V.B., Dulikravich, G.S. &
Orlande, H.R.B., Inverse approaches to drying with and without shrinkage,
Proc. of the 15th Int. Drying Symposium (IDS 2006), ed. I. Farkas,
Budapest, Hungary, Vol. A, p. 576, 2006.
[6] Kanevce, G.H., Kanevce, L.P., Mitrevski, V.B., Dulikravich, G.S. &
Orlande, H.R.B., Inverse approaches to drying of thin bodies with
significant shrinkage effects, Journal of Heat Transfer, 129(3), pp. 379-386,
2007.
[7] Dulikravich, G.S., Martin, T.J., Dennis, B.H. & Foster, N.F.,
Multidisciplinary hybrid constrained GA optimization, Evolutionary
Algorithms in Engineering and Computer Science: Recent Advances and
Industrial Applications (EUROGEN'99), eds. K. Miettinen, M.M. Makela,
P. Neittaanmaki & J. Periaux, John Wiley & Sons: Jyvaskyla, Finland,
pp. 231-260, 1999.
[8] Marquardt, D.W., An algorithm for least squares estimation of nonlinear
parameters, J. Soc. Ind. Appl. Math., 11, pp. 431-441, 1963.
[9] Rovedo, C., Suarez, C. & Viollaz, P., Analysis of moisture profiles, mass
Biot number and driving forces during drying of potato slabs, J. of Food
Engineering, 36, pp. 211-231, 1998.
[10] Zogzas, N.P. & Maroulis, Z.B., Effective moisture diffusivity estimation
from drying data: A comparison between various methods of analysis,
Drying Technology, 14(7&8), pp. 1543-1573, 1996.
[11] Mitrevski, V.B., Investigation of the drying processes by inverse methods,
PhD Thesis, University of Bitola, Macedonia, 2005.
Abstract
Particle size reduction of dry material by milling is a key unit operation for the
pharmaceutical, agricultural, food and paper industries. Knowledge of particle
flow and size reduction in a hammer mill is thus critical to optimize the design
and operation of such equipment. Milling experiments are performed using
lactose non-pareils in a laboratory-scale hammer mill. The size and shape of the
resultant progeny of particles are analyzed by sieve/light-scattering and
microscope/image-analysis techniques, respectively. Discrete Element Method
(DEM) based computational methods are developed to perform a quantitative
examination of granular flow, fracture and subsequent fragmentation patterns
for the same hammer mill. A parametric study was performed to understand the
effect of hammer (rotational) speed, feed rate and hammer–wall tolerance on the
size reduction process. Simulations were carried out to study the effect of mill
speed on the kinetic energy of particles.
Keywords: discrete element method, granular flow, fragmentation, hammer mill.
1 Introduction
Particle size reduction of dry granular material by mechanical means, also known
as milling or comminution, is undoubtedly a very important unit operation in
pharmaceutical, agricultural, food, mineral and paper industries. For example,
particle size reduction has a significant impact on pharmaceutical product
performance and stability as it affects the solubility and bioavailability of many
poorly soluble BCS Class II drugs [1]. The most commonly used mills are the
rotary cutter, hammer mill, roller mill, ball mills and fluid energy mills, used in
various stages of manufacturing. Size reduction is generally achieved by particle
doi:10.2495/CMEM110121
2 Experimental
The milling equipment (Thomas Wiley Mill, Thomas Scientific, Swedesboro,
NJ) used in our study is a variable-speed, digitally controlled, direct-drive mill
that provides continuous variation of cutting speeds from 650 to 1140 rpm with
constant torque maintained throughout the speed range. Once the material is
loaded through the feed hopper in to the grinding chamber, it drops by gravity
and fragments after colliding with the rotating hammers. Parametric studies are
conducted to study the effect of speed, load, and impeller wall distance and feed
rate on particle size reduction. The experiments were performed using lactose
non-pareils. In our milling experiments the material is fed at the top center,
thrown out centrifugally at first impact with the hammers and ground by impact
against the wall, or cut due to the presence of hammers at the periphery. The
material is retained until it is small enough to fall through the screen that forms
the lower portion of the casing. Particles fine enough to pass through the screen
are discharged into a glass container almost as fast as they are formed. To obtain
rigorous information for the parametric studies, samples are collected in the glass
container at regular intervals: for example, 5 seconds for the first four samples,
10 seconds for the next four samples and 20 seconds for the subsequent samples.
All experiments are conducted at least three times at a room temperature of 28 °C
and relative humidity of 35% to ensure repeatability. The effect of each
experimental factor was studied at two to three levels. Average particle sizes for
the entire sample were determined by sieve analysis using a Ro-Tap sieve
analyzer. Particle shape analysis was performed with an optical microscope-camera
(Olympus SZ61) and image analysis software (Image Pro Plus). The conditions
examined using the Wiley Mill are listed in Table 1.
Table 1: Conditions examined for the milling experiments with lactose.
Parameter: Range
Speed: 600–1140 rpm
Clearance: 2.9–3.7 mm
Feed rate: 60–100 g/min
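The mass-weighted mean from a sieve analysis (as produced by a Ro-Tap stack) can be sketched as follows. The representative-size convention (geometric mean of adjacent openings) and all numbers are illustrative assumptions, not data from the paper.

```python
def average_particle_size(sieve_openings_um, retained_mass_g):
    """Mass-weighted average particle size from a sieve stack.
    Each retained fraction is represented by the geometric mean of the
    two adjacent sieve openings it lies between (a common convention)."""
    sizes = [(sieve_openings_um[i] * sieve_openings_um[i + 1]) ** 0.5
             for i in range(len(sieve_openings_um) - 1)]
    total = sum(retained_mass_g)
    return sum(s * m for s, m in zip(sizes, retained_mass_g)) / total

# hypothetical sieve stack: openings in microns (coarse to fine) and the
# mass retained between each pair of adjacent sieves
openings = [1000, 710, 500, 355, 250]
masses = [2.0, 5.0, 8.0, 4.0]
avg = average_particle_size(openings, masses)
```

Repeating this for each timed sample gives the size-versus-time curves shown in figures 1 and 2.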
3 Computational method
Numerical simulations were performed using the Discrete Element Method
(DEM) originally developed by Cundall and Strack [13, 14]. In this model the
granular material is considered as a collection of frictional inelastic spherical
particles. Each particle interacts with its neighbors or with the boundary only at
contact points through normal and tangential forces. The forces and torques
acting on each of the particles are calculated in the following way:

Fi = mi g + FN + FT + Fcohes   (1)

Ti = ri × FT   (2)
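The per-particle force and torque assembly of eqns (1)-(2) can be sketched for a linear-spring contact. The normal and tangential force laws below are generic DEM choices for illustration (particle rotation and the cohesive term are omitted), not necessarily those used in the paper.

```python
import numpy as np

def contact_forces(xi, xj, vi, vj, radius, kn=600.0, eta_t=0.5):
    """Linear-spring contact between two equal spheres: returns the
    normal and tangential forces acting on particle i."""
    rij = xi - xj
    dist = np.linalg.norm(rij)
    overlap = 2.0 * radius - dist
    if overlap <= 0.0:
        return np.zeros(3), np.zeros(3)        # spheres not in contact
    n = rij / dist                             # unit normal pointing j -> i
    fn = kn * overlap * n                      # repulsive normal force F_N
    vrel = vi - vj
    vt = vrel - np.dot(vrel, n) * n            # tangential relative velocity
    ft = -eta_t * vt                           # viscous tangential force F_T
    return fn, ft

def particle_force_torque(m, g, fn, ft, r_contact):
    """Eqns (1)-(2): F_i = m_i g + F_N + F_T (+ F_cohes, omitted here),
    T_i = r_i x F_T."""
    F = m * g + fn + ft
    T = np.cross(r_contact, ft)
    return F, T
```

Summing these contributions over all contacts of a particle and integrating in time (e.g. with a velocity-Verlet step of ~2 × 10⁻⁵ s) advances the granular flow.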
Material / K: Avicel 101, Regular lactose; K levels 45, 60, 75.

D = [4.472 KIc / (ρp Vs ε̇r)]^(2/3)   (4)
Values: 4000; 5 mm; 72.83 degrees; 600 N/m; 600 N/m; 0.7; 0.3; 2.0 × 10⁻⁵ seconds; 22.13/9.8 GPa; 0.7; 0.5.
Figure 1: (a) Effect of feed rate (60 and 100 g/min) on particle size reduction at hammer tip speed of 600 rpm. (b) Effect of feed rate on particle size reduction at hammer tip speed of 1140 rpm. (Average particle size in microns versus time in seconds.)
Figure 2: Change of average particle size (microns) with time (seconds) as a function of impeller speed (100 g/min at 650, 870 and 1140 rpm).
Figure 3: (a), (b). (Average particle size in microns versus time in seconds at hammer–wall clearances of 3.7 mm and 2.9 mm.)
irrespective of hammer speeds. This is because much of the particle shape was
retained at a hammer tolerance of 3.7 mm, whereas fragmentation was observed
at a hammer–wall tolerance of 2.9 mm. As seen in figures 4a and 4b, increasing
the clearance decreased the rate of size reduction of the mill. This resulted in
accumulation of particles in the chamber. As a consequence, the particle size
distribution turned coarser and wider.
Figure 4: (a), (b): Average particle size (microns) versus time (seconds) at hammer–wall clearances of 3.7 mm and 2.9 mm.
Figure 5: Kinetic energy (N·m) of particles versus time (seconds) at 650, 870 and 1140 rpm.
Figure 6: (a) Velocity profile of particles at 650 rpm (snapshot at 2.5 s). (b) Velocity profile of particles at 1140 rpm (snapshot at 2.5 s).
4.2.2 Effect of material properties
The effect of granular cohesion on fragmentation is examined while keeping the
operational conditions (650 rpm, 2.9 mm clearance) constant. As described in
section 3, materials with different levels of cohesion were simulated by varying
the K value. Figures 7a and 7b show the time series of axial snapshots from the
simulation of fragmentation of Avicel-101 (MCC) and Regular Lactose granules.
WIT Transactions on Modelling and Simulation, Vol 51, 2011 WIT Press
www.witpress.com, ISSN 1743-355X (on-line)
E(t) = Σj=1..n (1/2) m vj²   (5)
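The total kinetic energy of eqn (5) is a direct sum over particles; a minimal sketch (assuming equal particle masses, as the formula's single m suggests):

```python
import numpy as np

def total_kinetic_energy(masses, velocities):
    """E = sum_j (1/2) m_j |v_j|^2 over all particles, as in eqn (5)."""
    v = np.asarray(velocities, dtype=float)
    m = np.asarray(masses, dtype=float)
    return 0.5 * float(np.sum(m * np.sum(v * v, axis=1)))

# one particle of 2.0 kg moving at (3, 4, 0) m/s -> 25.0 J
e = total_kinetic_energy([2.0], [[3.0, 4.0, 0.0]])
```

Evaluating this at each output step gives the kinetic-energy histories compared across mill speeds above.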
Figure 7: Time series of axial snapshots from the simulation of fragmentation of Avicel-101 (MCC) and Regular Lactose granules.
Figure 8: (a) Milling energy (MJ) versus time (s) for Fast-Flo Lactose, Regular Lactose and Avicel 101. (b) Number of new particles versus time (s) for Avicel 101 and Regular Lactose.
5 Conclusion
DEM simulations and experiment-based parametric studies were performed to
study the effect of different operating conditions on size reduction in a hammer
mill. In experiments, greater size reduction is observed at higher speeds and
lower feed rates, owing to the greater centrifugal force experienced by the
particles and the longer mean free path lengths, respectively. Particle shape
analysis reveals fragmentation as the dominant process of size reduction in the
hammer mill under the investigated conditions. In simulation, we find that the
kinetic energy of particles increases with impeller speed, contributing to greater
fragmentation. Materials with higher cohesion show lower fragmentation.
References
[1] Jounela, A.J., Pentikainen, P.J. & Sothmann, A., Effect of particle size on
the bioavailability of digoxin, European Journal of Clinical Pharmacology,
8, pp. 365-370, 1975.
[2] Cleary, P.W., Predicting charge motion, power draw, segregation, wear
and particle breakage in ball mills using discrete element method, Miner.
Eng., 11, pp. 1061-1080, 1998.
[3] Watanabe, H., Critical rotation speed for ball-milling, Powder Technology,
104, pp. 95-99, 1999.
[4] Mishra, B.K. & Rajamani, R.K., Simulation of charge motion in ball mills,
Part 2: numerical simulations, Int. J. Miner. Processing, 40, pp. 187-197,
1994.
[5] Hlungwani, O., Rikhotso, J., Dong, H. & Moys, H., Further validation of
DEM modeling of milling: effects of liner profile and mill speed,
Minerals Engineering, 16, pp. 993-998, 2003.
[6] Austin, L.G. & Luckie, P.T., A simulation model for an air-swept ball mill
grinding coal, Powder Technology, 38, pp. 255-266, 1984.
[7] Kwan, C.C., Mio, H., Chen, Y.C., Ding, Y.L., Saito, F., Papadopoulos, D.G.,
Benthem, A.C. & Ghadiri, M., Analysis of the milling rate of
pharmaceutical powders using distinct element method, Chemical
Engineering Science, 60, pp. 1441-1448, 2005.
[8] Campbell, G.M., Bunn, P.J., Webb, C. & Hook, S.C.W., On predicting roller
mill performance, Part II: The breakage function, Powder Technology,
115, pp. 243-255, 2001.
[9] Austin, L., A preliminary simulation model for fine grinding in high speed
hammer mills, Powder Technology, 143-144, pp. 240-252, 2004.
[10] Gotsis, C. & Austin, L.G., Batch grinding kinetics in the presence of a dead
space as in a hammer mill, Powder Technology, 41, pp. 91-98, 1985.
[11] Vogel, L. & Peukert, W., From single particle impact behavior to modeling
of impact mills, Chemical Engineering Science, 60, pp. 5164-5176, 2005.
[12] Djordjevic, N., Shi, F.N. & Morrison, R.D., Applying discrete element
modeling to vertical and horizontal shaft impact crushers, Minerals
Engineering, 16, pp. 983-991, 2003.
[13] Cundall, P.A., A computer model for simulating progressive large-scale
movements in blocky rock systems, Proceedings of Symposium
International Society of Rock Mechanics, 2, p. 129, 1971.
[14] Cundall, P.A. & Strack, O.D.L., A discrete numerical model for granular
assemblies, Geotechnique, 29, pp. 47-65, 1979.
[15] Walton, O.R., Numerical simulation of inclined chute flows of
monodisperse, inelastic, frictional spheres, Mechanics of Materials, 16,
pp. 239-247, 1993.
[16] Chaudhuri, B., Alexander, A.W., Faqih, A., Muzzio, F.J., Davies, C. &
Tomassone, M.S., Avalanching flow of cohesive powders, Powder
Technology, 164, pp. 13-21, 2006.
[17] Morrison, R.D., Shi, F. & Whyte, R., Modeling of incremental rock
breakage by impact for use in DEM models, Minerals Engineering, 20,
pp. 303-309, 2007.
[18] Grady, D.E., Fragmentation under impulsive stress loading, Fragmentation
by Blasting, eds. W.L. Fourney et al., Society for Experimental Mechanics:
Connecticut, USA, pp. 63-72, 1985.
[19] Poschel, T. & Schwager, T., Computational Granular Dynamics: Models
and Algorithms, Springer-Verlag: Berlin, Germany, 2005.
Abstract
In this study, the effect of asphaltenes on the viscosity of Nigerian light crude
oils at different temperatures was studied. Different weight per cent of bitumen
containing 14 per cent asphaltenes and deasphalted bitumen samples from
Alberta, Canada were mixed with Nigerian crude oils. The mixtures were
prepared from five light crude oil samples and two bitumen samples. The
experimental viscosity data obtained at different temperatures and weight per
cent of the bitumen samples were correlated with the viscosity equation of Singh
et al. The viscosity of pure crude oils (for instance, Forcados crude oil)
containing asphalted and deasphalted bitumen (Rush Lake and Plover Lake
bitumen) increased by as much as 160% and 60%, respectively.
However, the results obtained from correlating the viscosity data with the
equation indicated that the discrepancies in the measured data could be mainly
attributed to the presence of asphaltenes, while the percent errors between
measured and predicted viscosity were consistent across all the samples. The
results identified the light crude oils as better alternative diluents for the
production and transportation of heavy crude petroleum.
Keywords: Nigerian crude oil, asphaltenes, bitumen, hydrocarbon condensates,
solvents, asphaltenes solubility, light crude oil, viscosity modeling.
1 Introduction
Pipeline transportation of heavy oil has been among the major challenges faced
by the upstream petroleum industry. The highly viscous nature of heavy oil,
doi:10.2495/CMEM110131
2 Experimental methods
The light crude oil samples used as diluents were obtained from oil wells at the
following locations in the Niger Delta region of Nigeria: Umutu Flow Station,
Beryboye Flow Station, Beniboye Flow Station, Forcados Field, Warri Terminal
and Kaduna Refinery. Two bitumen samples, Rush Lake and Plover Lake (from
Alberta, Canada), were supplied by the Alberta Research Council. The solvent
n-pentane is an analytical-grade chemical (Sigma-Aldrich, HPLC grade, 99.9%),
commercially available, and was used as received.
Asphaltene precipitation was carried out using a 1:40 ml bitumen : n-pentane
ratio. The mixture was agitated using a G10 gyrotary shaker for 24 hours, left to
stand for two hours and filtered using a 7.0 cm Whatman filter paper, as described
in ASTM 863-63. The procedure used to recover the deasphalted bitumen has
been documented in our previous publication [8]. To prepare the mixtures of
light crude oil containing 5 wt% to 40 wt% bitumen, a known amount of the
bitumen was added quantitatively to the light crude oil in increments of 5 wt%.
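The incremental blending described above reduces to a simple mass balance; the function below is an illustration (the name and sample numbers are ours, not from the paper).

```python
def bitumen_mass_for_wt_percent(oil_mass_g, wt_percent):
    """Mass of bitumen to add to `oil_mass_g` of light crude so the blend
    contains `wt_percent` bitumen by mass:
    w = m_b / (m_b + m_oil)  =>  m_b = m_oil * w / (1 - w)."""
    w = wt_percent / 100.0
    return oil_mass_g * w / (1.0 - w)

# e.g. a 5 wt% blend on 100 g of crude needs about 5.26 g of bitumen
m_b = bitumen_mass_for_wt_percent(100.0, 5.0)
```

Note that the required bitumen mass grows faster than linearly with the target wt%: a 50 wt% blend needs as much bitumen as crude.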
The mixtures, contained in tightly closed 200 ml sample bottles, were then placed
in the shaker for 7 days. This was to ensure complete dispersion of the
bitumen and the homogeneity of the mixtures. The kinematic viscosity of the
samples was determined with a Cannon-Fenske opaque reverse-flow capillary
viscometer in accordance with the procedure in ASTM D-445-83 (1986).
3 Discussions of results
Figures 1 and 2 illustrate the viscosity-temperature relationship for the pure light
crude oils, and the mixtures that contained different wt% of pure and deasphalted
Rush Lake and Plover Lake bitumen. As expected, the results clearly show that
the presence of asphaltenes, even in a small amount, in crude oil has a significant
impact on the viscosity of the oil. The results are consistent with those obtained in our previous work with a calorimetric method [8]. Figures 1 and 2 give a true
Figure 1: Viscosity (cSt) vs. temperature (°C) for pure Forcados crude oil and Forcados + 5% bitumen.
Figure 2: Viscosity (cSt) vs. temperature (°C).
representation of the results observed with all the crude oil samples examined in
this study. The light crude oils used contained no asphaltenes, and after blending
with asphalted bitumen the mixtures remained homogeneous throughout the
experiment. Mixtures examined contained as much as 40wt% asphalted bitumen
and 25wt% deasphalted bitumen. For the light crude oil samples used in this
study, the viscosity increases exponentially with higher weight percent of
asphalted bitumen. A similar upward trend was observed with deasphalted bitumen, but with a significant reduction in the percent increment, as shown in Figures 3 and 4. The impact of asphaltenes on viscosity is clearly evident in Figure 3. With a 20 wt% addition of asphalted bitumen, the oil viscosity at 30 °C increased by approximately 160%, but by only 60% for the same amount of deasphalted bitumen. This trend was observed over the studied temperature range for all the mixtures, as shown in Figure 4.
Figure 3: Viscosity increase (%) vs. amount of bitumen (%), with bitumen at 30 °C.
Figure 4: Viscosity increase (%) vs. temperature (°C).
log(ν_T / ν_37.78) = b (1/T − 1/310.93)   (1)

where ν_T is the kinematic viscosity at absolute temperature T (K), ν_37.78 is the viscosity at 37.78 °C (310.93 K) and b is a fitted constant.
Figure 5: Measured and predicted viscosities (cSt) vs. temperature (°C) for the Beryboye, Forcados, Warri and Kaduna light crude oils.
Figure 6: Correlation error (%) vs. temperature (°C) for pure Forcados, Forcados + 5% bitumen and Forcados + 5% deasphalted bitumen.
light oil (Figure 8). Similar results were obtained for all the pure light oil samples used in this study. The percent average deviations for the samples are given in Table 1. The results show that the light oil viscosity characteristics were not affected by the addition of asphalted and deasphalted bitumen up to 40 wt%. As mentioned earlier, the addition of bitumen increased the viscosity of the light oils (Figures 1 and 2), with asphalted bitumen producing nearly three times the viscosity increase obtained with deasphalted bitumen.
Besides this phenomenon, the overall viscosity behavior of the mixtures
remained similar to that of the pure light crude oils even at high temperatures.
The percent errors obtained with the viscosity equation (Eq.(1)) for the mixtures
illustrate that the mixtures examined were homogeneous, as the predicted
viscosity closely matched the measured viscosity. In many cases, a better match between measured and predicted viscosities was obtained for the mixtures than for the pure light crude oil samples. The percent average deviations for these
Figure 7: Correlation error (%) vs. temperature (°C) for pure Forcados, Forcados + 15% bitumen and Forcados + 15% deasphalted bitumen.
samples are compared in Table 1. The results illustrate that light oil is arguably a good solvent for bitumen, as the asphaltenes and other solids (resins, maltenes, heavy saturates) appeared to be soluble in the oils. It is worth noting that the light crude oils used in this study contain no asphaltenes and are composed mainly of low molecular weight paraffins. Aromatic solvents such as toluene and xylene have long been identified as the best solvents for reducing the viscosity of heavy oil/bitumen because of their ability to keep asphaltene molecules in solution. For the light crude oils used here, molecular diffusivity, intermolecular attraction and low hydrogen bonding, rather than molecular polarity, might be the crucial factors that make the oil a good solvent for reducing the viscosity of bitumen.
Figure 8: Correlation error (%) vs. temperature (°C) for pure Warri light crude, Warri + 15% Plover Lake and Warri + 35% Plover Lake.
4 Conclusions
Given the relatively large reserves of non-asphaltic light crude oil and the untapped bitumen and heavy oil resources in Nigeria, the most important outcome of this study is the demonstration that non-asphaltic light crude oils can be used successfully in the production and pipeline transportation of bitumen. The results show that for up to 40 wt% bitumen the viscosity characteristics of the mixtures remain identical to those of the light crude oil, an indication that the asphaltenes and other resinous solids remain soluble in the mixtures. The viscosity correlations for both the pure light oil samples and the mixtures gave similar percent deviations, with an overall absolute deviation of 2.6%, making light crude oils a good diluent for bitumen.
Table 1: Percent average deviations of the viscosity correlation. Each of the 16 samples comprised 5 data points (85 points in total); the individual mean deviations were 4.28, 2.25, 2.93, 2.98, 1.35, 2.60, 1.74, 1.20, 2.17, 3.31, 1.95, 2.85, 1.82, 3.13, 3.90 and 2.86, with an overall mean of 2.60.
Acknowledgements
This work was funded in part by NSERC, Canada, and PTF, Nigeria. The support of the American University of Nigeria (AUN) and the NNPC Research Lab (Port Harcourt) is highly appreciated. Our thanks to Harrigan A. Pepple (NNPC, Warri), MacDonald I. Bara-Hart (NNPC, Benin) and Samuel O. Robinson (AGIP Oil, Umoku) for their various contributions to the acquisition of field samples and related data.
References
[1] Urquhart, R.D., Heavy oil transportation: present and future. J. Can. Petrol. Technol., 25(2), pp. 68–71, 1986.
[2] Chang, C., Nguyen, Q.D. & Ronningsen, H.P., Isothermal start-up of pipeline transporting waxy crude oils. J. Non-Newt. Fluid Mech., 87, pp. 127–154, 1999.
Abstract
Early diagnosis and management of foot arch abnormalities would reduce future complications. Conventional clinical examinations are mainly non-objective: they are not evidence based, rely on expert opinion and suffer from high inter-rater differences. In the new pedscope we made, the Optic Ped Scan (OPS), the patient stands on a 5–10 mm resin (Plexiglass) sheet while the image of the whole plantar surface is digitally scanned, showing the pressure sites in different colours based on a ratio-metric scale. Any off-line measurement or relative pressure ratio can then be studied easily. The outcome of the OPS is an image file produced as the subject's body weight empties the capillaries of the plantar skin, which causes the colours to change. These physiological changes of plantar colour can be amplified when passing through the clear, barely elastic form of Plexiglass (acrylic, methyl methacrylate resin, or poly(methyl 2-methylpropenoate), PMMA) that we prepared at Arak Petrochemical Company. As a pilot study of the technique, we studied 2007 school-age students in Arak schools and measured foot length, axis length, heel expanse, mid-foot expanse, forefoot expanse, arch length (medial), arch depth, hallux distance and the relative pressures of 10 defined zones.
Students had 28.15% flat foot, 1.54% pes cavus, 11.01% hallux valgus, 0.64% hallux varus, 0.04% convexus and 0.04% complex/various deformities. OPS worked properly for both diagnosis and measurements. The new technique can be several times cheaper than other techniques.
Keywords: foot arches, foot pressure sites, optic ped scan, foot deformity.
doi:10.2495/CMEM110141
1 Introduction
In 1976, Sharrard identified flat foot, knock knee and intoeing (metatarsus varus) as the three most common causes of parental concern when a child starts to walk [1]. Accurate diagnosis and proper management of foot arch abnormalities may reduce future musculoskeletal complications. Diagnosis, interventional treatment and estimation of the prevalence of foot deformities in screening studies need quantitative techniques more reliable than a physician's observation. Conventional clinical examinations are mainly non-objective: they are not evidence based, rely on expert opinion and suffer from high inter-rater differences.
In past decades, arch exams were performed by means of a pedscope chair, a wooden chair with a thick glass top that reflects an image of the plantar surface of the subject's foot to a mirror set at an oblique (45°) angle, so that the image could be seen from outside. With this technique, the examiner could not measure the foot arch parameters, estimate the pressure at any particular site or record the findings for later comparison, but it was still helpful for showing the plantar contact and weight-bearing surface area of the foot, with brighter sites implying higher contact pressures. The traditional solution for recording such findings, as still remembered by many middle-aged colleagues, was to dust the patient's bare feet with talc powder before asking him to stand on black cardboard, giving a printout, or less durable evidence, of the study.
On the other hand, newer techniques such as lasers, 3D scanners, multi-camera systems and pressure platforms or sensitive mats (pedobarographs) are much more accurate and advanced, but they are mostly expensive and still not easily accessible in a physician's office.
This study was designed to test the hypothesis that the conventional clinical examination of the foot, performed just by looking at a child's feet as they stand up, stand on tiptoe or dangle the foot in the air while sitting on an exam table, is insufficient for diagnosing foot deformities.
The Optic Ped Scan (OPS), or a similar quantitative measurement, can give a much better view of the plantar surface of the foot and its actual pressure-bearing areas than the conventional approach of judging by the soft tissue mass of the medial arch.
In this paper, OPS is introduced as an alternative technique and the results of
its first pilot application to detect foot deformities are presented.
2 Methods
In the new pedscope we made (OPS), the patient stands on a 5–10 mm resin (Plexiglass) sheet while the image of the whole plantar surface is digitally scanned by an ordinary office-type colour scanner (HP 3770) fixed beneath the Plexiglass sheet and connected to a computer. A scanning resolution of 300 dpi gave reasonably sharp images, showing the arches and pressure sites of the sole of the subject's foot in different colours based on a ratio-metric scale, with bright yellow indicating highly pressured areas and dark red non-pressured areas.
Figure 1: Examples of OPS layouts. (a) Pes cavus and (b) flat foot.
The outcome of the OPS is an image file (JPG/TIFF/BMP) produced as the subject's body weight empties the capillaries of the plantar skin, which turns the pressured sites a yellowish colour while the other areas, with lower weight-bearing ratios, appear pink-reddish.
These physiological changes of plantar colour can be amplified when passing through the clear, barely elastic form of Plexiglass (acrylic, methyl methacrylate resin, or poly(methyl 2-methylpropenoate), PMMA), which was prepared after several pilot studies (Figures 1 and 2). We optimized the optical quality of this thermoplastic PMMA by reducing its degree of rubber toughening at the Arak Petrochemical Co. R&D Unit (www.arpc.ir) in Arak, Iran.
The colour density of any particular area of the image can serve as a rational representative of the pressure applied, or as a percentage of the subject's body weight (100% when standing on one foot). In this way, all three anatomic and functional arches of the foot [2] could be scanned and sent to a computer. Any off-line measurement, or calculation of the relative pressure ratio of each part of the obtained footprint, could then be performed easily.
To examine the application of the new technique and to optimize the proper thickness of the PMMA sheet for a range of subject weights, we studied 2007 students, aged 12–13 years, in Level 5 of primary schools in Arak; 874 girls and 1133 boys participated in this pilot study of the technique. Each subject was scanned three times: standing on the right foot alone, on the left foot alone, and in normal double-foot standing. All scanned images of each subject were saved for later analysis.
Figure 2:
Figure 3:
metatarsal heads, fourth and fifth metatarsal heads and their corresponding toes
in forefoot. The other areas of loading were the heel with two medial and lateral
parts, the mid-foot, and the area beneath the navicular bone as the origin of the
medial arch first ray [3, 4].
Almost all graphics software can measure a grey scale in its edit/make-colour toolbar (RGB indicator flyout), which scales a grey level between black (0) and white (255). To measure the relative pressure of the 10 defined zones, each footprint image was first converted from colour to grey scale (B&W); the mean relative whiteness (load) of each zone was then measured, taking the total borne weight as 100% (Figure 4).
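The zone-pressure measurement described above can be sketched as follows, assuming each of the ten zones is supplied as a boolean mask over the grey-scale footprint image (the function name and toy image are ours):

```python
import numpy as np

def relative_zone_pressures(gray, zone_masks):
    """Mean whiteness (0-255) of each zone of a grey-scale footprint,
    normalised so that all zones together sum to 100 % of the borne weight."""
    loads = [float(gray[m].mean()) for m in zone_masks]
    total = sum(loads)
    return [100.0 * v / total for v in loads]

# toy 4x4 "footprint" with two zones, one twice as bright as the other
img = np.zeros((4, 4))
img[:2] = 200.0   # heavily loaded zone
img[2:] = 100.0   # lightly loaded zone
masks = [np.zeros((4, 4), bool), np.zeros((4, 4), bool)]
masks[0][:2] = True
masks[1][2:] = True
pressures = relative_zone_pressures(img, masks)  # ≈ [66.7, 33.3]
```

Because the result is normalised per subject, these values compare zones within one footprint, which matches the paper's caveat that the relative pressures should be studied in each subject separately.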
The relative pressure measurements, even for each particular zone, could not
be pooled to calculate normative data. These measurements could vary greatly
because of multiple variables and the relative pressures should be studied in each
subject separately.
3 Results
28.15% of students (34.8% of boys and 22.9% of girls) had flat feet based on clinical observation, while based on the OPS results and the Denis classification only 19.8% did (21.7% of boys and 17.5% of girls). For this type of foot deformity, clinical observation thus overestimated the actual prevalence by 8.3 percentage points. The OPS files also showed 1.54% pes cavus (1.05% of boys and 2.17% of girls), 11.01% hallux valgus (5.82% of boys and 17.73% of girls), 0.64% hallux varus (0.52% of boys and 0.8% of girls),
Figure 4: (a) OPS of the right foot of a 12-year-old female student with bow legs, showing an abnormal relative pressure distribution on the lateral border acting as a second heel; (b) the relative pressures of the ten plantar zones, shown as schematic bars as percentages of the 100% borne weight when standing on her right foot.
Table 1: Foot parameters (n = 2007; 874 girls, 1133 boys), mean ± SD in cm.

Foot Parameter     | Right Foot    | Left Foot
Foot Length        | 22.56 ± 1.02  | 22.46 ± 1.06
Axis Length        | 21.85 ± 1.13  | 21.76 ± 1.03
Heel Expanse       | 5.30 ± 0.09   | 5.29 ± 0.07
Mid-foot Expanse   | 6.66 ± 0.10   | 6.66 ± 0.08
Forefoot Expanse   | 8.13 ± 1.01   | 8.15 ± 0.93
Arch Length        | 5.85 ± 0.89   | 5.85 ± 0.96
Arch Depth         | 2.42 ± 0.66   | 2.80 ± 1.03
Hallux Distance    | 3.24 ± 0.99   | 3.28 ± 1.12
4 Discussion
An OPS footprint with flat foot can be classified according to Denis [5] into three grades of severity: grade 1, in which the support of the lateral edge of the foot is half that of the metatarsal contact area (roughly mid-foot expanse = 1/2 forefoot expanse); grade 2, in which the weight-bearing areas of the central zone and forefoot are equal; and grade 3, in which the support in the mid-foot (central) zone is greater than the width of the metatarsal support area. This classification can be expressed directly in terms of OPS measurements.
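Read this way, the Denis grading can be sketched from the two support widths; the thresholds and tolerance below are our own reading of the description, not values from [5]:

```python
def denis_grade(midfoot_width, forefoot_width, tol=0.1):
    """Grade an OPS footprint per the Denis description above.
    Widths in cm; tol is an assumed tolerance, not part of the source."""
    ratio = midfoot_width / forefoot_width
    if ratio >= 1.0 + tol:
        return 3          # central support wider than metatarsal support
    if ratio >= 1.0 - tol:
        return 2          # weight-bearing areas roughly equal
    if ratio >= 0.5 - tol:
        return 1          # mid-foot support about half the forefoot width
    return 0              # arch present: no flat-foot grade
```

A tolerance is needed in practice because the expanse measurements carry standard deviations of up to about 1 cm (Table 1).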
Pfeiffer et al. [6] reported a prevalence of flat foot of 44% in a group of 835 children (411 girls and 424 boys) aged 3 to 6 years (52% in boys and 36% in girls), using a three-dimensional laser surface scanner with the subjects in a standing position. García-Rodríguez et al. [9] measured the prevalence of flat feet in 1181 school-age children of Málaga, Spain as 2.7%, which was the prevalence of stages 2 and 3 of the Denis classification; these 2.7% of students were diagnosed with flat feet on the basis of plantar footprints.
Children with first-degree flat foot could be considered abnormal on clinical observation, although their footprints usually show no enlargement of the contact area at the central zone. These incomplete arches can be considered evolutionary foot features without pathological significance [7, 8]. The prevalence of flat foot in the OPS results for our subjects was closer to 12.3%. Although a similar figure was mentioned by Denis, others reported prevalences several times higher [9, 10].
The critical age for development of the plantar arch is 6 years [11]. Consequently, if the sampled population of such a study covers this age, the prevalence of flat feet will overestimate the problem. Infants are born with flat feet, and the main medial arch develops gradually through weight bearing during the first decade of life [12, 13]. Most flat foot conditions correct themselves
5 Conclusions
We conclude that OPS is adequate for screening studies, diagnosis and measurement of the plantar foot parameters and relative pressure sites. However, given the different foot types, sizes and contact areas, different body weights, reciprocal weight distribution and the complex structure of the foot, it is difficult to draw the line between normal and abnormal feet. The pattern of pressure distribution in a particular area may be studied before and after different interventions, or compared for the clinical diagnosis of abnormal cases, but it is not possible to establish normative data, even as a percentage of borne weight. The actual prevalence of foot deformities in the region is higher than appears in the literature, and the society and school authorities involved with child health care should attend to the plantar foot deformities of students before they start their more active future lives.
References
[1] Sharrard, W.J.W., Intoeing and flat foot. British Medical Journal, 1, pp. 888–889, 1976.
[2] Soames, R.W., Appendicular skeleton. Gray's Anatomy: The Anatomical Basis of Medicine and Surgery, 38th ed., ed. Williams, P.L., Bannister, L.H., Berry, M.M., Collins, P., Dyson, M., Dussek, J.E. et al., Churchill Livingstone: New York, pp. 733–734, 1995.
[3] Kanatli, U., Yetkin, H., Simsek, A., Ozturk, A.M., Esen, E. & Besli, K., Pressure distribution patterns under the metatarsal heads in healthy individuals. Acta Orthop Traumatol Turc, 42(1), pp. 26–30, 2008.
[4] Myerson, M.S., The diagnosis and treatment of injury to the tarsometatarsal joint complex. J. Bone Joint Surg [Br], 81, pp. 756–763, 1999.
Abstract
Fireline intensity is one of the most relevant quantities in forest fire science. It helps to evaluate the effects of fuel treatment on fire behaviour and to establish limits for prescribed burning, and it is also used as a quantitative basis to support fire suppression activities. However, its measurement at field scale for actual fires remains a challenge; hence it has been little used as a key quantity to test the new generation of fire spread models developed over the last ten years. An inverse method to obtain fireline intensity is through observation of the flame length. This geometrical information is measured using a stereovision system placed laterally relative to the direction of fire spread. Algorithms were developed to automatically segment the fire area in the images, estimate the 3D coordinates of salient fire points and then the flame length. The three-dimensional information allows the flame length to be obtained in metric units. In the present work, we directly measure the fireline intensity at laboratory scale by oxygen consumption calorimetry. The results are then used to establish a relationship between fireline intensity and the flame length obtained by the stereovision system.
Keywords: stereovision, 3D coordinates, fireline intensity, flame length, oxygen
consumption calorimetry.
1 Introduction
Fires regularly devastate forests and shrublands, as well as populated areas, all over the world. Foresters and fire fighters are faced with problems such as the management of wildland/urban interfaces and the establishment of safety zones and suppression strategies. An important concept, helpful in fire mitigation and suppression, is to scale fires as a function of their potential threat. This scale is based on the
doi:10.2495/CMEM110151
I_B = H w r   (1)
where IB (kW/m) is the fireline intensity, H (kJ/kg) is the heat yield of the fuel, w
(kg/m2) is the weight of fuel consumed in the active flame front and r (m/s) is the
rate of spread of the fire. The fireline intensity is a widely used measure in forest
fire applications: it helps to evaluate the effects of fuel treatment on fire
behaviour [2], to establish limits for prescribed burning [3], to assess fire impacts
on ecosystems [4]. It is also used as an indicator for the classification of fires in
terms of risk [1] or as a quantitative basis to support fire suppression activities
[5]. The measurement of the fireline intensity according to eqn (1) is extremely difficult because it requires determining the fuel consumption in the fire front.
An alternative method consists in measuring the geometrical properties of the flame and using a correlation to derive the fire intensity. The flame length is
defined as the distance from the base of the flame to the highest point of the
flame [6]. For instance Byram [1] proposed the following relationships between
flame length and intensity:
L_B = 0.0775 I_B^0.46,   I_B = 259 L_B^2.17   (2)
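Byram's pair of power laws in eqn (2) can be transcribed directly (a sketch; the function names are ours, with I_B in kW/m and L_B in m as defined above):

```python
def flame_length_byram(intensity):
    """Byram (1959): flame length L_B (m) from fireline intensity I_B (kW/m)."""
    return 0.0775 * intensity ** 0.46

def intensity_byram(flame_length):
    """Inverse form of eqn (2): I_B (kW/m) from flame length L_B (m)."""
    return 259.0 * flame_length ** 2.17

L = flame_length_byram(100.0)   # ~0.64 m for a 100 kW/m fire
```

The two forms are near inverses of each other (0.46 ≈ 1/2.17), so converting intensity to flame length and back recovers the starting value to within about a percent.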
2 Stereoscopic methodology
A stereovision system is composed of two cameras, fig. 1. Each camera, considered as a pinhole, is defined by its optical centre O_l (for the left one), its optical axis (O_l z_l) perpendicular to the image plane, and its focal length. Each camera has intrinsic parameters (pixel coordinates of the projection centre, vertical and horizontal distortions) and extrinsic parameters characterizing the position and orientation of the camera relative to an object (three rotation and three translation parameters). Let P be a point in 3D space; P_l and P_r are the images of P in the left and right image planes respectively. The line (e_l P_l) is the epipolar line, i.e. the set of left-image points that can correspond to P_r [13–16].
Figure 1:
(e_l P_l) = F P_r   (3)

where F is the fundamental matrix.
Figure 2:
Triangulation method.
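The triangulation step of fig. 2 can be illustrated with a generic rectified-pinhole sketch (an assumption of ours, not the Triclops SDK implementation; parameter names are hypothetical):

```python
def triangulate(xl, xr, y, f, B):
    """Depth from a rectified stereo pair by triangulation.
    xl, xr: horizontal image coordinates (pixels), y: vertical coordinate
    (pixels), f: focal length (pixels), B: baseline (m).
    Returns (X, Y, Z) in metres in the left-camera frame."""
    d = xl - xr                # disparity; larger d means a closer point
    Z = f * B / d
    return xl * Z / f, y * Z / f, Z

X, Y, Z = triangulate(xl=10.0, xr=8.0, y=4.0, f=400.0, B=0.24)  # Z = 48.0 m
```

Depth is inversely proportional to disparity, which is why a wider baseline (24 cm on the camera used here) improves depth resolution for distant points.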
Feature points are salient points of the image. The Harris detector is the most commonly used operator for corner point extraction; it is based on the local auto-correlation of the signal [17, 18] and measures local changes of grey levels. In some cases, detection of regions of interest is performed in a given colour system prior to salient point extraction. This segmentation permits the extraction of connected parts of an image based on colour criteria [19]. A contour detection then follows, carried out for example using the Harris edge detector or a corner detector based on global and local curvature properties [19–21]. Feature points are then searched along the contours.
When feature points are located in the right and left images, a matching
algorithm is used in order to find corresponding points in the two images.
Matching algorithms can be classified as correlation-based and feature-based
methods [17]. In correlation-based methods, used in this work, the elements to
match are image windows of fixed size, and the similarity criterion is a measure
of the correlation between windows in the two stereo images.
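A minimal version of such a correlation-based matcher, assuming rectified grey-scale images so that corresponding points share the same row (function names are ours), might look like:

```python
import numpy as np

def ncc(w1, w2):
    """Normalized cross-correlation between two equal-size windows."""
    a = w1 - w1.mean()
    b = w2 - w2.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

def best_match(left, p, right, half=2):
    """Scan the same row of the right image for the window most similar to
    the (2*half+1)-pixel window around feature point p = (row, col) in the
    left image; returns (best column, best score)."""
    r, c = p
    w1 = left[r - half:r + half + 1, c - half:c + half + 1]
    best_score, best_col = -2.0, None
    for cc in range(half, right.shape[1] - half):
        s = ncc(w1, right[r - half:r + half + 1, cc - half:cc + half + 1])
        if s > best_score:
            best_score, best_col = s, cc
    return best_col, best_score
```

Because NCC subtracts the window means and normalizes by the window energies, the match score is insensitive to brightness differences between the two cameras, which matters for a self-luminous, flickering fire front.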
3 Experiments
3.1 Experimental procedure to measure the flame characteristics
The measurement of the flame length was carried out using a stereovision system placed laterally relative to the direction of fire spread (circled system in fig. 3). A pre-calibrated Point Grey Bumblebee XB3 camera was used [22, 23]. This camera is a trinocular multi-baseline stereo camera with an extended baseline of 24 cm. It has a focal length of 3.8 mm with a 66° HFOV. The image sensor is a 1/3" Sony ICX445AQ CCD. The system is pre-calibrated for stereo processing. The image resolution is 1280×960 with a frame rate of 16 FPS.
Figure 3:
Image acquisition was developed using C++ with the Point Grey Triclops SDK. This system permits the simultaneous acquisition of a pair of images of the scene. The captured images are stored for further processing using algorithms developed with Matlab. The experiments were carried out with a frame rate of 0.25 Hz. Only the images within the steady phase were treated. The three-dimensional fire points were obtained from the stereoscopic images. The steps involved in this approach are [24]:
1. segmentation of the fire regions (the Kos segmentation algorithm is used in this work [25]);
2. extraction of salient feature points;
3. feature selection refinement using a normalized cross-correlation matching strategy;
4. computation of three-dimensional fire points using stereo correspondence.
Figure 4 shows the fire area segmented in the image with the matched points
detected on the entire contour of the fire. Figure 5 shows the 3D points of the fire
obtained from the corresponding points marked in Figure 4. The X-axis
corresponds to the depth of the fire front; the Y-axis corresponds to its height,
and the Z-axis to its width. The reference frame has its origin in the left image
center point of the XB3 camera. In order to visualize the correspondence
between the 2D and the 3D information, four points are numbered in fig. 4 and
fig. 5. With 2D data, it is impossible to distinguish points that are on the ground
from points located above whereas it is possible with 3D information. For
example, the Y coordinate of the point number 1 shows that this point is not on
the ground.
The length of the flame is defined as the distance between the ground and the highest 3D points. It is thus necessary to estimate the position of the ground. A method was developed to determine the ground automatically, even if the plane is inclined. The lowest 3D points of the back part of the smouldering fire
Figure 4:
Figure 5:
3D fire points.
front are identified for all the images, and a plane is estimated from these points using a least-squares method. Then, through a homography, the coordinates of the 3D points are transformed so that the ground forms the base plane Y = 0. The selection of the lowest 3D points is carried out as follows: the 3D points are divided into sets along the Z axis. In each set, we denote by Ymax and Zmax the maximum Y and Z coordinates of the selected points, and by Ymin and Zmin the minimum Y and Z coordinates. We then estimate Δy = Ymax − Ymin. Owing to the possible inclination of the ground, two conditions are used to select the points of the ground: Y ≤ Ymin + Δy and Z ≤ Zmin + Δz. For each 3D set of fire points obtained at a given time, the Y coordinate of the highest point corresponds to the flame length, L. This
procedure is carried out for each image of the set acquired during the steady phase. From all the flame length values, the mean L̄ and the standard deviation σ_L are computed. The values of L within the interval [L̄ − σ_L, L̄ + σ_L] are then averaged to give the estimated flame length for the whole steady phase.
q̇ = E φ ρ₀ (W_O2 / W_air) (1 − X°_H2O) X°_O2 V̇_s,298   (4)

V̇_s,298 = 22.4 (k_t / k_p) A √(Δp / T_s)   (5)

φ = [X°_O2 (1 − X^a_CO2) − X^a_O2 (1 − X°_CO2)] / [(1 − X^a_O2 − X^a_CO2) X°_O2]   (6)
where q̇ is the HRR, E is Huggett's constant [27], ρ₀ is the density of dry air at 298 K and 1 atm, W_O2 is the molecular weight of oxygen, W_air is the molecular weight of air, X denotes a molar fraction, φ is the oxygen depletion factor and V̇_s,298 is the standard flow rate in the exhaust duct. The superscript ° refers to the incoming air and a to the analyzers. A is the cross-sectional area of the duct, k_t is a constant determined by calibration with a propane burner, k_p = 1.108 for a bi-directional probe, Δp is the pressure drop across the bi-directional probe and T_s is the gas temperature in the duct.
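A sketch of eqns (4)–(6) as reconstructed above; the constants E (Huggett) and ρ₀ are standard literature values, and all function and variable names are our own:

```python
import math

E_HUGGETT = 13100.0   # kJ released per kg of O2 consumed (Huggett's constant)
RHO_AIR = 1.184       # dry air density at 298 K, 1 atm (kg/m^3)
W_O2, W_AIR = 32.0, 28.96   # molecular weights (g/mol)

def depletion_factor(xo2_0, xco2_0, xo2_a, xco2_a):
    """Oxygen depletion factor phi, eqn (6); ° = incoming air, a = analyzers."""
    num = xo2_0 * (1.0 - xco2_a) - xo2_a * (1.0 - xco2_0)
    return num / ((1.0 - xo2_a - xco2_a) * xo2_0)

def duct_flow(A, kt, kp, dp, Ts):
    """Standard volumetric flow rate in the exhaust duct, eqn (5)."""
    return 22.4 * (kt / kp) * A * math.sqrt(dp / Ts)

def hrr(phi, xh2o_0, xo2_0, vs298):
    """Heat release rate q (kW), eqn (4)."""
    return E_HUGGETT * phi * RHO_AIR * (W_O2 / W_AIR) * (1.0 - xh2o_0) * xo2_0 * vs298
```

A useful sanity check is that φ vanishes when the analyzers read the ambient composition, so the computed HRR is zero when nothing is burning.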
Figure 6:
Table: Fuel properties of the burned species.

Species                 | H_c,net (kJ/kg) | (m⁻¹) | (kg/m³) | p_M (%) | w (kg/m²) | (cm)
AF                      | 17091           | 2394  | 287     | 4–7     | 0.6       | —
Genista salzmannii (GS) | 20645           | 3100  | 967     | —       | 0.9       | 3.5
Pinus pinaster (PP)     | 20411           | 3057  | 511     | 3–5     | 0.6–1.2   | 3–7
I_OC = q̇ / W   (7)

where W (m) is the width of the fire front.
Figure 7: Fireline intensity (HRR by OC calorimetry and Byram's intensity) and mass loss over time for fire spread across a fuel bed of PP with a load of 1.2 kg/m²; the quasi-steady state is indicated.
Figure 8: Fireline intensity measured by OC calorimetry vs. fireline intensity by Byram (kW/m) for AF (0.6 kg/m²), GS (0.9 kg/m²) and PP (0.6 and 1.2 kg/m²); linear fit y = 0.843x, R² = 0.949.
Figure 9: Flame length vs. fireline intensity I_B (kW/m): calorimeter data compared with the correlations of Byram (1959), Thomas (1963), Newman (1974) and Nelson and Adkins (1986).
We observe that all the correlation models overestimate the fireline intensity measured by OC. It should be noted that the Byram fireline intensity computed from the measured flame length with eqn (2) matches the one computed using eqn (1) (the 'Byram' marker in figure 9), showing that the flame length is well measured. In order to take the good values into account (circles in figure 9), a new correlation between the intensity and the flame length is proposed:
L = 0.0646 I^0.5281   (8)

I = 179 L^1.8936 ≈ 180 L^1.9   (9)
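The proposed correlation can be compared numerically with Byram's (a sketch; the function names are ours):

```python
def intensity_new(L):
    """Proposed correlation, eqn (9): I (kW/m) from flame length L (m)."""
    return 179.0 * L ** 1.8936

def intensity_byram(L):
    """Byram's correlation, eqn (2), for comparison."""
    return 259.0 * L ** 2.17

# for flame lengths above ~0.3 m Byram's fit gives the higher intensity,
# consistent with the overestimation noted above
pairs = [(L, intensity_new(L), intensity_byram(L)) for L in (0.4, 0.6, 0.8)]
```

The smaller prefactor and exponent of eqn (9) pull the predicted intensity down toward the calorimeter measurements over the laboratory range of flame lengths.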
5 Conclusion
Combining stereovision and oxygen consumption calorimetry allowed a new correlation for assessing fireline intensity to be established. This first step is promising. In future work, this correlation will be tested at field scale, and the link between the geometrical flame properties and the radiant heat flux ahead of a fire front will be investigated.
Acknowledgements
This work was carried out within the scope of project PROTERINA-C, supported by the EU under Thematic 3 of the Operational Programme Italia/France Maritime 2007–2013, contract G25I08000120007.
References
[1] Byram G.M., Combustion of forest fuels (Chapter 3). Forest Fire: Control and Use, ed. K.P. Davis, McGraw-Hill: New York, pp. 61–89, 1959.
[2] Fites J.A. & Henson C., Real-time evaluation of effects of fuel-treatments and other previous land management activities on fire behavior during wildfires. Report of the Joint Fire Science rapid response project, US Forest Service, pp. 1–13, 2004.
[3] McArthur A.G., Control Burning in Eucalypt Forests, Forestry and Timber Bureau Leaflet No. 80, Australia, 1962.
[4] Hammill K.A. & Bradstock R.A., Remote sensing of fire severity in the Blue Mountains: influence of vegetation type and inferring fire intensity. International Journal of Wildland Fire, 15, pp. 213–226, 2006.
[5] Palheiro P.M., Fernandes P. & Cruz M.G., A fire behaviour-based fire danger classification for maritime pine stands: comparison of two approaches. Proc. of the Vth International Conference on Forest Fire Research, ed. D.X. Viegas, CD-ROM, 2006.
[6] Albini F.A., Wildland fires. American Scientist, 72, pp. 590–597, 1984.
[7] Newman M., Toward a common language for aerial delivery mechanics. Fire Management Notes, 35, pp. 18–19, 1974.
[8] Thomas H., The size of flames from natural fires. Proc. of the 9th Symposium on Combustion, The Combustion Institute: Pittsburgh, pp. 844–859, 1963.
Abstract
A finite element analysis and a photoelastic stress analysis are conducted in
order to determine the stress field developed in the pin-on-plane contact problem.
Although this problem is relatively easy to study experimentally, the purpose
here is to show the possibilities of the finite element method; after validation of
the numerical procedure, problems with complicated geometry and boundary
conditions can then be solved numerically. Isochromatic and isoclinic fringes,
similar to the ones obtained experimentally by the photoelastic method, are
obtained numerically over the whole model. The details of the finite element
solution are fully given in the paper. Many studies have been carried out in order
to separate the principal stresses and obtain their orientations (integration of the
equilibrium equations) so as to compare them with the simulated results.
However, this requires high-precision measurements. Here, a whole-field
comparison of the experimental and numerical photoelastic fringes and a local
analysis using the principal stresses difference allowed us to validate the
numerical approach. Relatively good agreement was obtained. A numerical
solution for a three dimensional contact problem is also developed for a rigid
parallelepiped on a deformable cylinder. The mesh was refined in the
neighborhood of the contact zone in order to achieve better approximation of
stresses. The loading is given by the limit conditions that are simply the imposed
displacement. The calculated photoelastic fringes are obtained for various
sections inside the model. These simulated fringes can be compared to the
experimental ones, which can be obtained by slicing the model and analyzing it in
a plane polariscope. The program developed allows us to calculate stresses on any
doi:10.2495/CMEM110161
1 Introduction
Several studies have shown that failure of mechanical parts occurs generally in
the neighborhood of the contact zones [1–4]. Stress initiation is mainly
controlled by shear stress mechanisms, particularly for metallic materials,
through displacement of the dislocations on the crystallographic planes of higher
density. It is therefore very important to determine the type and the amplitude of
the imposed mechanical loadings.
Theoretical and numerical studies of contact stresses are in some cases
very complex. Several methods have been used to analyze this type of problem.
In this work, two methods have been used in order to determine the stresses
developed in the model: the photoelasticity method and the finite element method.
The photoelastic fringes obtained experimentally with plane polarized light
are used to determine the values of the principal stresses difference over the
whole model. To obtain the individual values of the stresses, that is to separate
the principal stresses, several studies have been conducted by integrating the
equilibrium equations (Zenina et al. [5] and Plouzennec [6]). However, a high
precision is required in the unwrapping of the isochromatic and the isoclinic
fringes obtained on the analyzer to determine respectively the difference and the
direction of the principal stresses.
As already done in previous papers (Bilek et al. [7, 8]), it is sufficient to make
a comparison between experimental and simulated fringes. Another comparison,
which is more accurate, is made between experimental and simulated values of
the principal stresses difference along the vertical axis of symmetry.
2 Experimental analysis
The model, made of epoxy resin (PLM-4R) mixed with a hardener, is mounted
on a loading frame (figure 1) equipped with two dynamometers. The model is
loaded via a steel pin of rectangular cross section (12 × 12 mm); the load is set to
F = 1300 N. The loading frame with the model is then positioned on the
polariscope for analysis.
Plane polarized light is used to observe the photoelastic fringes. The
isochromatic and the isoclinic fringes obtained on the analyzer are used to
determine the values of the principal stresses difference and the principal stresses
directions, particularly in the neighborhood of the contact zone.
The model, in the shape of a parallelepiped (67 × 58 × 10 mm), is cut in the
birefringent material. Poisson's ratio and Young's modulus, which are necessary
to implement the numerical solution, are measured with the help of electrical
strain gages mounted respectively on a tensile specimen and a cantilever beam.
Figure 1: Loading frame with the model.
Figure 2: Principle of the photoelastic method.
Strains measured on the surface of the models allowed us to obtain these
necessary values easily: ν = 0.37 and E = 2435 MPa.
Figure 2 shows the well-known photoelastic method based on the birefringence
phenomenon; the refractive indices n1 and n2, which depend on the stresses in the
two principal directions, induce a retardation angle φ. The light intensity obtained
on the analyzer after traveling through the polarizer, the model and the analyzer
has two terms, sin²(2α) and sin²(φ/2), which give respectively the isoclinic fringes
and the isochromatic fringes (eq. (1)).
$I = a^2 \sin^2(2\alpha)\, \sin^2(\varphi/2)$
(1)
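For illustration, eq. (1) can be evaluated numerically; a minimal sketch in which the amplitude and angles are arbitrary test values, not values from the experiment:

```python
import math

def light_intensity(a, alpha, phi):
    """Light intensity on the analyzer of a plane polariscope, eq. (1):
    I = a^2 * sin^2(2*alpha) * sin^2(phi/2),
    with alpha the isoclinic angle and phi the retardation angle (radians)."""
    return a**2 * math.sin(2*alpha)**2 * math.sin(phi/2)**2

# Extinction when the principal directions align with the polarizer (isoclinic fringe)
print(light_intensity(1.0, 0.0, math.pi))          # 0.0
# Extinction when the retardation is a whole number of wavelengths (isochromatic fringe)
print(light_intensity(1.0, math.pi/4, 2*math.pi))  # ~0.0
```

The two zero cases reproduce the two families of dark fringes described in the text.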
The isochromatic fringes allow us to obtain the values of the principal stresses
difference on the model by using the well-known relation σ1 − σ2 = Nf/e
(eq. (2)). This can only be done once the values of the fringe order N have been
completely determined. The values of the fringe order N are determined either by the
The ratio f = λ/C, called the fringe constant, depends on the light wavelength used
and the model material. Several solutions are available to obtain this value easily.
Here, we subjected a beam (40 mm × 10 mm cross section) to a constant bending
moment (15000 N·mm) over a portion of its length (figure 3); the light wavelength
used is λ = 546 nm. We can see that the fringes are parallel to the horizontal axis of
symmetry, as one would expect; stresses at the same distance from the neutral axis
are identical. We recall that isochromatics are loci of points having the same
principal stresses difference. The isochromatic fringes are, therefore, parallel to
the neutral axis of symmetry. Knowing the fringe orders and using the fact that the
stress σ2 is equal to zero (no load is applied in that direction), the value of the
fringe constant can then be easily deduced by using eq. (2):
f = 11.65 N/mm/fringe.
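The calibration above can be cross-checked with elementary beam theory; a sketch under the assumption (not stated explicitly in the text) that the 40 mm dimension is the beam depth and the 10 mm dimension the thickness crossed by the light:

```python
# Cross-check of the fringe-constant calibration using beam bending theory.
# In pure bending sigma2 = 0, so sigma1 - sigma2 = N * f / e (eq. (2)).
M = 15000.0        # constant bending moment, N*mm
h, b = 40.0, 10.0  # assumed beam depth and thickness, mm
e = b              # model thickness crossed by the light, mm
f = 11.65          # fringe constant, N/mm/fringe

I_z = b * h**3 / 12.0            # second moment of area, mm^4
sigma_max = M * (h / 2.0) / I_z  # bending stress at the outer fibre, MPa
N_edge = sigma_max * e / f       # fringe order expected at the beam edge
print(sigma_max, N_edge)         # ~5.6 MPa, ~4.8 fringes
```

A fringe order near 4.8 at the outer fibre is consistent with the reported constant; the exact fringe count is not given in the text, so this is only an order-of-magnitude check.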
Figure 3: Isochromatic fringes in the beam under constant bending moment, used to determine the fringe constant.
Figure 4:
In this paper we are interested mainly in validating the finite element solution;
it is therefore sufficient to compare the experimental and the calculated fringes.
Another comparison, which is more accurate, is made along the vertical axis of
symmetry of the model, between stresses obtained experimentally by analyzing
the experimental isochromatic fringes and stresses obtained directly from the
finite element simulation.
Figure 5:
3 Numerical analysis
In the finite element calculations, we considered that the material behaves
everywhere as a purely elastic isotropic material. The Young's moduli
(E1 = 210000 MPa, E2 = 2435 MPa) and Poisson's ratios (ν1 = 0.3, ν2 = 0.37) of
the two bodies in contact were introduced in the program. The mesh was refined
in the neighborhood of the contact zone (figure 6) in order to achieve a better
simulation of stresses.
To achieve a better simulation of the applied load, an imposed displacement
is applied to the model at the contact surface between the pin and the plane. The
equivalent applied load is then calculated as the sum of the elementary vertical
Figure 6: Finite element mesh, refined in the neighborhood of the contact zone.
load components at the nodes located at the lower surface of the model which is
in contact with the loading frame.
Iterations on displacements at the contact nodes are stopped when the
calculated corresponding load is equal to the value of the applied load within an
acceptable error (0.1%) set in the program. The isochromatic fringes, represented
by sin²(φ/2) (eq. (1)), are then easily calculated over the whole model. The details
of the calculation are shown hereafter.
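The load-matching iteration described above can be sketched as follows. The contact law and stiffness below are toy placeholders (in the paper the reaction comes from the finite element solution); only the 0.1% stopping criterion is taken from the text:

```python
def match_load(F_target, k=2.0e4, tol=1e-3):
    """Scale the imposed displacement until the computed reaction
    equals the applied load within the set tolerance (0.1%)."""
    u = 0.1  # initial imposed displacement, mm
    for _ in range(100):
        F = k * u**1.5           # toy nonlinear contact reaction, N
        if abs(F - F_target) / F_target < tol:
            return u, F
        u *= F_target / F        # simple proportional update of the displacement
    raise RuntimeError("did not converge")

u, F = match_load(1300.0)        # applied load from the experiment, N
print(u, F)
```

With a nonlinear contact law the proportional update converges geometrically; for a linear reaction it would match the target in a single step.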
3.1 Numerical calculation of the isochromatic fringes
The following relation (eq. (3)), which can be obtained readily from Mohr's circle
for stresses, allows us to calculate the principal stresses difference at any point of
a stressed model:
$\sigma_1 - \sigma_2 = \left((\sigma_x - \sigma_y)^2 + 4\tau_{xy}^2\right)^{1/2} = Nf/e$
(3)
The different values of the retardation angle φ (eq. (4)) can be calculated at any
point on the model using the following relation:
$\varphi = 2\pi N = \frac{2\pi e}{f}\left((\sigma_x - \sigma_y)^2 + 4\tau_{xy}^2\right)^{1/2}$
(4)
Figure 7:
Figure 8:
The term sin²(2α) represents the isoclinic fringes, which are loci of points
where the principal stress directions are parallel to the polarizer and the
analyzer. In the simulation program, the different values of the isoclinic
parameter α can be calculated with the following relation (eq. (5)), which can be
obtained readily from Mohr's circle for stresses:
$\alpha = \frac{1}{2}\arctan\left(\frac{2\tau_{xy}}{\sigma_x - \sigma_y}\right)$
(5)
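Equations (3)–(5) can be implemented directly; a minimal sketch with arbitrary test stresses:

```python
import math

def principal_stress_difference(sx, sy, txy):
    """Eq. (3): sigma1 - sigma2 from the Cartesian stress components."""
    return math.sqrt((sx - sy)**2 + 4.0 * txy**2)

def isoclinic_angle(sx, sy, txy):
    """Eq. (5): orientation of the principal directions (radians).
    atan2 handles the sx == sy case that a plain arctan cannot."""
    return 0.5 * math.atan2(2.0 * txy, sx - sy)

def retardation(sx, sy, txy, e, f):
    """Eq. (4): retardation angle phi = 2*pi*N = (2*pi*e/f)*(sigma1 - sigma2)."""
    return 2.0 * math.pi * e / f * principal_stress_difference(sx, sy, txy)

print(principal_stress_difference(3.0, 1.0, 0.0))    # 2.0
print(math.degrees(isoclinic_angle(0.0, 0.0, 1.0)))  # 45.0 (pure shear)
```

Sweeping these functions over the nodal stresses of the finite element solution yields the simulated sin²(2α) and sin²(φ/2) fringe images discussed in the text.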
The program calculates the different values of the parameter α. The image
corresponding to the isoclinic fringes (sin²(2α)) can then be calculated and
displayed (figure 9). A comparison is then possible with the experimental
isoclinic fringes, which are the dark fringes obtained experimentally (figure 5).
Experimentally it is, of course, not possible to observe the isoclinics alone,
whereas for the finite element solution this is possible. Good agreement was
obtained between the experimental and the simulated isoclinic fringes.
Figure 9:
Figure 10: Experimental model.
As in the previous two dimensional case, displacements are imposed on the
upper surface of the model at the nodes that will come into contact after the load
is actually applied. The imposed displacement is calculated separately at each
node; it starts from a set value at the first point of contact and decreases as we
move away from this point. The corresponding applied load is then calculated as
the sum of the elementary vertical loads on the nodes at the lower surface of the
model which is in contact with the loading frame.
Figure 11: Simulated isochromatic fringes for various slices along the cylinder.
In the finite element solution, it is necessary to select the thickness of the slice
to be isolated. The slice thickness should be small enough so that stresses remain
relatively constant across the thickness. Here, we chose 10 mm, which
corresponds approximately to the thickness generally used for a two dimensional
model. This process is repeated along the length of the cylinder in order to
determine the variation of stresses in the whole volume. Figure 11 shows the
simulated isochromatic fringes obtained with the software package Castem.
Figure 12:
For a given slice (image 1, figure 11), we can see that stresses decrease as we
move away from the contact zone. Also, as we move along the cylinder (images 1
through 8) we see fewer isochromatic fringes, which shows clearly that stresses
decrease. The stresses at the lower surface of the cylinder remain relatively
constant; the load is uniformly distributed over the surface of contact with the
loading frame.
The principal stresses difference along the vertical axis of symmetry increases
to a maximum value of about 0.72 MPa and then decreases as we move away
from the contact zone. The value of the principal stresses difference increases
again, as we move close to the contact zone of the cylinder with the loading
frame, to reach a value of 0.27 MPa. This graph can be obtained along the
vertical axis for any plane along the length of the cylinder.
5 Conclusion
We have shown through the study of a two dimensional model that the
simulation of stresses developed on a plane loaded with a pin gives relatively
good agreement with the experimental results. The isochromatic and the isoclinic
fringes are comparable to the photoelastic fringes obtained on a regular
polariscope. A solution for a three dimensional problem is developed. The
isochromatic fringes are obtained for various sections along the cylinder. The
principal stresses difference can be easily calculated in the volume of the
cylinder. This allows us to locate the zones of stress concentration which is of
great importance in the design of mechanical components. An experimental
solution either by the stress freezing method or the optical slicing method can be
used for comparison purposes.
References
[1] Mihailidis, A., Bakolas, V. & Drivakos, N., Subsurface stress field of a
dry line contact. Wear, 249, pp. 546–556, 2001.
[2] Burguete, R.L. & Patterson, E.A., A photoelastic study of contact between
a cylinder and a half-space. Experimental Mechanics, V. 7, No. 3, 1997.
[3] Kogut, L. & Etsion, I., Elastic-plastic contact analysis of a sphere and a rigid
flat. Journal of Applied Mechanics, 69, pp. 657–662, 2002.
[4] Bilek, A., Dupré, J.C., Ouibrahim, A. & Brémand, F., 3D photoelasticity
and numerical analysis of a cylinder/half-space contact problem. Computer
Methods and Experimental Measurements for Surface Effects and Contact
Mechanics VII, pp. 173–182, 2000.
[5] Zenina, A., Dupré, J.C. & Lagarde, A., Separation of isochromatic and
isoclinic patterns of a slice optically isolated in a 3-D photoelastic medium.
Eur. J. Mech. A/Solids, 18, pp. 633–640, 1999.
[6] Plouzennec, N., Développement de processus d'analyse d'images en
photoélasticimétrie par un feuillet plan obtenu par découpage mécanique et
optique. Thèse de l'Université de Poitiers, 1996.
[7] Bilek, A., Dupré, J.C., Brémand, F. & Ouibrahim, A., Studies of contact
problems by 3D photoelasticity, comparison with finite element analysis.
International Conference on Experimental Mechanics, Italy, 2004.
[8] Bilek, A., Ouibrahim, A., Brémand, F. & Dupré, J.C., Experimental and
numerical analysis of a cylinder/cylinder contact problem. ETDCM8,
Experimental Techniques and Design in Composite Materials, Italy, 2007.
Section 2
Fluid flow
Abstract
Transition from laminar to turbulent flow in separated-reattached flow occurs
frequently and plays a very important role in engineering. Hence, accurately
predicting transition is crucial, since the transition location has a significant
impact on aerodynamic performance, and a thorough understanding of the
transition process can greatly help to control it, e.g. to delay the turbulent phase
where laminar flow characteristics are desirable (low friction drag) or to
accelerate it where high mixing of turbulent flow is of interest (in a combustor).
However, it is very difficult to predict transition using the conventional
Reynolds-Averaged Navier-Stokes (RANS) approach, and the transition process
is not fully understood. Nevertheless, significant progress has been made with
simulation tools such as Large Eddy Simulation (LES), which has shown improved
predictive capabilities over RANS and can predict the transition process accurately.
This paper briefly presents the LES formalism, followed by its applications to
predicting and understanding the transition process and the unsteady behaviour of
the free shear layer in separated-reattached flow.
Keywords: transition, separated-reattached flow, LES, RANS, shear layer,
unsteady, turbulence.
1 Introduction
Separated flows are common and play an important role in many engineering
applications, from the cooling of small electronic devices to airfoil and
turbomachinery design. If a separated flow reattaches downstream, a separation
bubble is formed, and its characteristics are a crucial aspect of the engineering
design process. Three types of separation bubble are possible depending on the state of
doi:10.2495/CMEM110171
possible future trends in several important areas in LES and transitional bubble
study.
2 Mathematical formulation
2.1 LES governing equations
The governing equations for any fluid flow, called the Navier-Stokes equations,
are derived according to the fundamental conservation laws for mass, momentum
and energy. In LES, only large eddies (large scale motions) are computed directly,
and hence a low-pass spatial filter is applied to the instantaneous conservation
equations to formulate the 3D unsteady governing LES equations. When the
finite volume method is employed to solve the LES equations numerically, the
equations are integrated over control volumes, which is equivalent to convolution
with a top-hat filter; there is therefore no need to apply a filter to the instantaneous
equations explicitly, and in this case it is called implicit filtering.
The filtered equation expressing conservation of mass and momentum in a
Newtonian incompressible flow can be written in conservative form as:
$\partial_i \bar{u}_i = 0$
(1)
$\partial_t \bar{u}_i + \partial_j(\bar{u}_i \bar{u}_j) = -\partial_i \bar{p} + 2\nu\,\partial_j \bar{S}_{ij} - \partial_j \tau_{ij}$
(2)
where the bar over the variables denotes the filtered, or resolved scale quantity
and:
$\bar{S}_{ij} = \frac{1}{2}\left(\partial_i \bar{u}_j + \partial_j \bar{u}_i\right)$
(3)
$\tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j$
(4)
$\bar{S}_{ij}$ is the resolved scale strain rate tensor and $\tau_{ij}$ is the unknown sub-grid scale
or residual stress tensor, representing the effects of the sub-grid scale motions on
the resolved fields of the LES, which must be modelled or approximated using a
so-called sub-grid scale model.
2.2 Sub-grid scale modelling
Many different kinds of sub-grid scale models have been developed [3–5] and
most of them make an eddy-viscosity assumption (Boussinesq's hypothesis) to
model the sub-grid scale stress tensor as follows:
$\tau_{ij} = -2\nu_t \bar{S}_{ij} + \frac{1}{3}\delta_{ij}\tau_{ll}$
(5)
$\nu_t$ is called the sub-grid scale eddy viscosity, and eqn. (2) then becomes:
$\partial_t \bar{u}_i + \partial_j(\bar{u}_i \bar{u}_j) = -\partial_i \bar{P} + 2\,\partial_j\left[(\nu + \nu_t)\bar{S}_{ij}\right]$
(6)
In the Smagorinsky model [2], the eddy viscosity is given by:
$\nu_t = (C_S \Delta)^2 |\bar{S}|, \quad |\bar{S}| = (2\bar{S}_{ij}\bar{S}_{ij})^{1/2}, \quad \Delta = (\Delta x\,\Delta y\,\Delta z)^{1/3}$
(7)
$C_S$ is the so-called Smagorinsky constant, and a typical value used for it is 0.1.
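Eqn. (7) can be evaluated for a single cell; a minimal NumPy sketch in which the velocity-gradient tensor and cell size are arbitrary test values, not values from the text:

```python
import numpy as np

def smagorinsky_nu_t(grad_u, dx, dy, dz, Cs=0.1):
    """Sub-grid eddy viscosity, eqn. (7): nu_t = (Cs*Delta)^2 * |S|,
    with |S| = sqrt(2*Sij*Sij) and Delta = (dx*dy*dz)**(1/3)."""
    S = 0.5 * (grad_u + grad_u.T)          # resolved strain rate tensor, eqn. (3)
    S_mag = np.sqrt(2.0 * np.sum(S * S))   # characteristic strain rate |S|
    delta = (dx * dy * dz) ** (1.0 / 3.0)  # filter width
    return (Cs * delta) ** 2 * S_mag

# Simple shear du/dy = 1 s^-1 on a 1 cm cubic cell
grad_u = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
print(smagorinsky_nu_t(grad_u, 0.01, 0.01, 0.01))  # ~1e-6 m^2/s
```

For the simple shear case |S| = 1 s^-1, so the result is just (0.1 × 0.01 m)², illustrating how the eddy viscosity scales with the square of the filter width.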
Despite increasing interest in developing more advanced sub-grid scale
models, this very simple model has been used widely and has proved surprisingly
successful, although it has clear shortcomings: it is too dissipative
(not good for transition simulation) and the Smagorinsky constant needs to be
adjusted for different flows. An improvement on this simple SGS model was
suggested by Germano et al. [6]: a dynamic sub-grid scale model, which
allows the model constant $C_S$ to be determined locally in space and in time
during the simulation.
2.3 Numerical methods
The finite volume method is the most popular numerical method used in fluid
flow simulation, and most LES studies have been carried out using this
method. A brief discussion of several important numerical issues is presented in
this section.
2.3.1 Filtering
When the finite volume method is used there is no need to explicitly filter the
instantaneous Navier-Stokes equations, since the governing equations can be
regarded as implicitly filtered, as mentioned in section 2.1. The velocity
components at the corresponding grid points are interpreted as volume
averages. Any small scale motions (smaller than the mesh or control volume) are
averaged out and have to be accounted for by a sub-grid scale model. However,
note that it is impossible in this case to discuss the convergence properties (grid
independent solution) of the LES equations, because with every mesh refinement
more small scale eddies are resolved, and strict convergence is only achieved in
the limit of the so-called Direct Numerical Simulation (DNS).
2.3.2 Spatial and temporal discretization
The most popular spatial discretization scheme used in LES is second-order
central differencing, due to its non-dissipative and conservative properties (not
only mass and momentum but also kinetic energy conserving), which are
WIT Transactions on Modelling and Simulation, Vol 51, 2011 WIT Press
www.witpress.com, ISSN 1743-355X (on-line)
185
essential for LES. This is the reason why first- and second-order upwind
schemes, or any upwind-biased schemes, are usually not used in LES, since they
produce too much numerical dissipation. While higher-order numerical schemes
are, generally speaking, desirable and can be applied fairly easily in simple
geometries, their use in complex configurations is rather difficult. In addition, it
is difficult, at least for incompressible flows, to construct high-order energy
conserving schemes. Hence it is likely that, with increasing application of LES
to flows of engineering interest in complex geometries, the second-order central
differencing scheme will remain the most popular choice.
As for the temporal discretization (time advancement), implicit schemes
allow larger time steps to be used. However, they are more expensive, because at
each time step non-linear equations have to be solved. Furthermore, large time
steps are unlikely to be used in LES, in order to resolve the time scales needed for
accurate simulations of turbulence. Hence, explicit schemes seem to be more
suitable for LES than implicit schemes, and most researchers in LES use explicit
schemes such as the second-order Adams-Bashforth scheme. Since the time
steps are usually small in LES, it is not essential to use higher-order
schemes either.
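The second-order Adams-Bashforth scheme mentioned above advances the solution as u^{n+1} = u^n + Δt(3/2 f^n − 1/2 f^{n−1}); a minimal sketch on a model equation:

```python
import math

def adams_bashforth2(f, u0, dt, n_steps):
    """Explicit second-order Adams-Bashforth time advancement:
    u_{n+1} = u_n + dt * (1.5*f(u_n) - 0.5*f(u_{n-1})).
    The first step is bootstrapped with forward Euler."""
    u_prev = u0
    u = u0 + dt * f(u0)  # Euler start-up step
    for _ in range(n_steps - 1):
        u, u_prev = u + dt * (1.5 * f(u) - 0.5 * f(u_prev)), u
    return u

# Model problem du/dt = -u, exact solution exp(-t)
u_end = adams_bashforth2(lambda u: -u, 1.0, 0.01, 100)
print(u_end, math.exp(-1.0))  # close agreement
```

In an actual LES code f would be the discretized right-hand side of eqn. (6) rather than this scalar model problem.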
2.3.3 Inflow boundary conditions
Most boundary conditions used in LES are fairly standard and similar to those
used in the RANS approach, but specifying inflow boundary conditions
accurately for LES proves to be very difficult. This is because, unlike RANS
computations, where only time-averaged information is required and can usually
be specified according to experimental data, in LES of turbulent flow the three
components of the instantaneous velocity need to be specified at the inflow
boundary at each time step, and these are almost impossible to obtain from
experimental data. Hence inflow boundary conditions in LES normally have to
be generated numerically, and they usually lack physical flow properties. For
example, the simplest way is to specify the mean flow velocity profile (usually
obtained experimentally) plus some random perturbations. However, random
disturbances are nothing like real turbulence, since they have no correlations,
neither in space nor in time. Therefore, they decay rapidly, and it usually takes a
long distance downstream from the inflow boundary for the desired realistic
turbulence to develop; in some cases the use of random noise at the inlet does not
develop turbulence at all. On the other hand, one can use the so-called precursor
simulation technique, which is basically to perform another simulation and store
the data as the input for the required simulation. This can generate the most
realistic turbulence information at the inflow boundary, but it is far too expensive.
Many efforts have been made to develop a method which can generate
numerically three instantaneous inflow velocity components in such a way that
they have all the desired turbulence properties. However, so far the methods
developed can generate inflow turbulence with certain properties, but no method
is available yet to generate inflow turbulence with all the desired characteristics,
such as intensity, shear stresses, length scales and power spectrum [7].
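The simplest approach described above (mean profile plus random perturbations) can be sketched as follows; the 1/7th power-law profile and the 5% intensity are arbitrary illustrative choices, and, as the text notes, such uncorrelated noise decays quickly downstream:

```python
import numpy as np

def random_inflow(y, U_mean, intensity=0.05, rng=None):
    """Inflow plane: mean streamwise profile plus uncorrelated random
    perturbations on all three velocity components. The noise has no
    spatial or temporal correlations, so it is not real turbulence."""
    rng = np.random.default_rng() if rng is None else rng
    n = y.size
    u = U_mean(y) + intensity * rng.standard_normal(n)
    v = intensity * rng.standard_normal(n)
    w = intensity * rng.standard_normal(n)
    return u, v, w

y = np.linspace(0.0, 1.0, 64)
u, v, w = random_inflow(y, lambda y: y ** (1.0 / 7.0))  # assumed power-law profile
```

Calling this once per time step supplies the three instantaneous components the text refers to; the more elaborate digital-filter approaches of [7] add the missing correlations.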
Many studies have revealed that in the absence of any finite magnitude
environmental disturbances, transition in the separated shear layer of a separation
bubble is dominantly initiated through the inviscid Kelvin-Helmholtz (KH)
instability mechanism. This mode of instability closely resembles that of the
planar free-shear layer in mixing layers and jets [9]. The LES study of Yang and
Voke [10] revealed a primary 2D instability of a separated shear layer (induced
by a smooth leading edge) via the KH mechanism. A similar mechanism was
also observed by Abdalla and Yang [11] in their LES studies of a separation
bubble over a sharp leading edge. The LES study by Roberts and Yaras [12]
demonstrated that transition of a separated shear layer through the KH instability
does not eliminate the existence of a so called Tollmien-Schlichting (TS)
instability(a viscous instability typically associated with attached flow boundary
layer transition) in the inner part of the flow where the roll up of shear layer into
vortical structures occurred at the dominant TS frequency. They emphasized the
possibility of an interaction between the TS and KH instability modes. Several
other studies have shown that KH instability plays a dominant role in the
transition process of separation bubbles. A number of experimental studies have
also suggested that the TS instability mechanism plays a significant role in a
transitional separation bubble [13–15].
The next stage of the transition process after the dominant primary KH
instability is less well understood. In planar free shear layers, the primary
spanwise vortices generated by the KH instability are known to undergo an
instability leading to the vortex pairing phenomenon [9, 16, 17]. This pairing of
vortices is regarded as the governing secondary mechanism associated with
growth of planar free shear layers. A similar vortex pairing phenomenon has also
been reported in separated shear layer studies [18] but Abdalla and Yang [11]
Figure 1:
Figure 2:
that this was associated with large shrinkage of the bubble caused by a big vortex
shedding at a lower frequency as shown in Fig. 3. However, this low frequency
peak was not observed in some other separated boundary layer transition studies
[11, 19]. Abdalla and Yang [11], in their LES of a transitional bubble over a flat
plate with a sharp leading edge, showed a characteristic frequency in the range
0.7-0.875 U0/l along with some less dominant modes between 0.3-0.6 U0/l. They
inferred that this slightly lower frequency content may be related to pairing of
vortices as a similar range of frequency had been reported for the pairing
phenomenon behind a backward facing step but no low frequency peak as
mentioned above was observed. Yang and Abdalla [19] studied the same
problem with 2% free-stream turbulence and reported a peak frequency band at
about 0.80.9 U0/l, in close agreement with the characteristic frequencies already
measured in previous studies but again no low frequency peak was observed.
Those results indicate that this low frequency mode in separatedreattached
flows may only appear in the case of turbulent separation as suggested earlier by
Cherry et al. [21] but further study is needed to clarify this.
Figure 3:
4 Conclusions
This paper has briefly presented the LES formalism and reviewed some of its
applications to the study of the transition process in separated-reattached flow,
focusing on the current understanding of the physics of the transition process.
Several important issues associated with LES have been discussed. Although
significant progress has been made towards a better understanding of the
transition process in separated-reattached flows, our current understanding is far
from complete, and there are still many areas where further investigation is
needed. According to the author, the following issues/areas are particularly
important, and future research should be focused on:
References
[1] Langtry, R.B. & Menter, F.R., Transition modelling for general CFD
applications in aeronautics. AIAA 2005-522, Reno, Nevada, 2005.
[2] Smagorinsky, J., General circulation experiments with the primitive
equations: I. The basic experiment. Monthly Weather Review, 91,
pp. 99–164, 1963.
[3] Lesieur, M. & Metais, O., New trends in large eddy simulations of
turbulence. Annual Review of Fluid Mechanics, 28, pp. 45–82, 1996.
[4] Sagaut, P., Large Eddy Simulation for Incompressible Flows, an
Introduction, Springer, 2nd edition, 2003.
[5] Kajishima, T. & Nomachi, T., One-equation sub-grid scale model using
dynamic procedure for the energy production. Transactions of ASME, 73,
pp. 368–373, 2006.
[6] Germano, M., Piomelli, U., Moin, P. & Cabot, W.H., A dynamic sub-grid
scale eddy viscosity model. Physics of Fluids, 3(7), pp. 1760–1765, 1991.
[7] Veloudis, I., Yang, Z., McGuirk, J.J., Page, G.J. & Spencer, A., Novel
implementation and assessment of a digital filter based approach for the
generation of large-eddy simulation inlet conditions. Journal of Flow,
Turbulence and Combustion, 79, pp. 1–24, 2007.
[8] Piomelli, U. & Balaras, E., Wall-layer models for large-eddy simulation.
Annual Review of Fluid Mechanics, 34, pp. 349–374, 2002.
[9] Ho, C.M. & Huerre, P., Perturbed free shear layers. Annual Review of Fluid
Mechanics, 16, pp. 365–424, 1984.
[10] Yang, Z. & Voke, P.R., Large-eddy simulation of boundary layer
separation and transition at a change of surface curvature. J. Fluid Mech.,
439, pp. 305–333, 2001.
[11] Abdalla, I.E. & Yang, Z., Numerical study of the instability mechanism in
transitional separating-reattaching flow. International Journal of Heat and
Fluid Flow, 25, pp. 593–605, 2004.
[12] Roberts, S.K. & Yaras, M.I., Large-eddy simulation of transition in a
separation bubble. ASME J. Fluids Eng., 128, pp. 232–238, 2006.
[13] Lang, M., Rist, U. & Wagner, S., Investigations on controlled transition
development in a laminar separation bubble by means of LDA and PIV.
Exp. Fluids, 36, pp. 43–52, 2004.
[14] Roberts, S.K. & Yaras, M.I., Effects of periodic unsteadiness, free-stream
turbulence and flow Reynolds number on separation-bubble transition.
ASME GT2003-38626, 2003.
Abstract
In this research, an investigation into the aerodynamic characteristics of a body
with deployable drag surfaces for a recovery system has been carried out using
computational fluid dynamics. Two models of the body, with the drag surfaces in
the retracted and in the deployed position, have been considered in order to study
the influence of the drag surfaces on the flow structure and aerodynamic
forces. For this purpose, force measurement and flow visualization for each case
have been carried out at Mach numbers 0.4 and 1.5. The results have been
validated by comparing the aerodynamic coefficients with the results of a
semi-experimental method. A general study of the main aerodynamic coefficients
shows that, at all angles of attack, the coefficient of lift decreases and the
coefficient of drag increases when the drag surfaces are deployed. Visualization
of the flow structure shows a region of separated flow upstream and a dead flow
region with large vortices downstream of the drag surfaces.
Keywords: numerical simulation, aerodynamic characteristics, recovery system,
drag coefficient, pressure distribution, shock wave, vortex.
1 Introduction
Several methods have been employed for the recovery of flying objects, with
various degrees of success, but the most prominent method, especially for
heavier bodies, is parachute recovery [1], in which a number of parachutes are
deployed in a predefined sequence to reduce the body's rate of descent. The
physics of supersonic flow around parachutes has its own complexities, such as
area oscillation and shock wave formation [2]. The controlled deceleration of a
flying body is a vital part of many aerospace missions and involves significant
technological challenges and creative engineering solutions [3]. For many flying objects,
doi:10.2495/CMEM110181
supersonic and subsonic turbulent flows around the body were numerically
investigated. The computation of the flow field around such a complex
configuration, with drag surfaces adjacent to fins or wings, is of considerable
interest for an accurate calculation and prediction of the shock structures,
because the shock waves of the drag surfaces interact with the fins or wings.
Accurate determination of the forces is required for the trajectory calculation
and structural analysis of the body in design and operation.
In these simulations the main focus has been on the aerodynamic characteristics.
To demonstrate the efficiency and accuracy of the present methodology, the
aerodynamic characteristics of this configuration, including the drag and lift
forces, obtained by a commercial CFD code for subsonic and supersonic
free-stream Mach numbers were compared with the semi-experimental
predictions at various angles of attack.
2 Computational methodologies
This study uses the Fluent software as a tool to predict the flow field around the
body. The Navier-Stokes equations are solved using a density-based method.
The flow considered here is three-dimensional, stationary, viscous, and
turbulent, and the RNG k-epsilon model has been used for turbulence
modelling. To support accurate turbulence modelling, standard wall functions
are selected. A first-order accurate scheme is used to establish the flow, and a
second-order accurate scheme is then used to achieve convergence of the solution.
The verification of the CFD method included quantitative comparisons with
predictions of Missile DATCOM as well as a qualitative study of the flow
structure. Missile DATCOM is a semi-experimental aerodynamic prediction
code that calculates aerodynamic forces, moments and stability derivatives as
functions of angle of attack and Mach number for a variety of axisymmetric and
non-axisymmetric missile configurations.
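Codes of the DATCOM family typically report body-axis axial and normal force coefficients (CA, CN), while the comparisons below use wind-axis lift and drag coefficients (Cl, Cd). The two are related by a standard axis rotation; a minimal sketch (the function name and sample values are illustrative, not taken from the paper):

```python
import math

def wind_axis_coeffs(ca: float, cn: float, alpha_deg: float):
    """Rotate body-axis coefficients (CA axial, CN normal) into
    wind-axis lift and drag coefficients at angle of attack alpha."""
    a = math.radians(alpha_deg)
    cl = cn * math.cos(a) - ca * math.sin(a)  # lift coefficient
    cd = ca * math.cos(a) + cn * math.sin(a)  # drag coefficient
    return cl, cd

# At zero angle of attack the drag coefficient equals the axial
# force coefficient and the lift coefficient vanishes.
cl0, cd0 = wind_axis_coeffs(ca=1.2, cn=0.0, alpha_deg=0.0)  # cl0 = 0.0, cd0 = 1.2
```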
2.1 Geometry and boundary conditions
In this study, two cases have been considered to find out the influence of the
drag surfaces on the aerodynamic characteristics of the body. Figure 1 shows
both models.
In the first case, a body-fins configuration is examined, 2350 mm in length and
338 mm in diameter. The configuration has an ogive nose with four tail fins
arranged in a cruciform pattern. The fins have a supersonic airfoil cross section.
The set of fins increases the lift force and stability of the body.
In the second case, four drag surfaces are modelled, one deployed between each
pair of fins; placed across the centerline of the vehicle, they partially block the
flow. The drag surfaces conform to the outer surface of the body when retracted;
in other words, the deployable drag surfaces are folded onto the body. When the
deployment command is issued by the flight computer, the drag
When the deployment command is issued by the flight computer, the drag
Figure 1: The model with (a) retracted and (b) deployed drag surfaces.
surfaces are released and travel from the retracted position (figure 1a) to the
deployed position (figure 1b). This unconventional method has been applied to
enhance the drag force. Because the flow and geometry are symmetric, half of
the model has been considered in order to reduce the calculation time. Three
types of boundary conditions were used: wall, pressure far-field, and symmetry.
2.2 Grid generation
The accuracy of the CFD results depends greatly on how accurately the
configuration is modelled and meshed. The entire solution domain is discretized
by a structured grid. Several grid studies were performed to ensure that the grid
was fine enough to capture the physical characteristics of the flow. The
calculated CA was seen to approach a constant value as the grid was refined.
The limitation in computer memory is a major factor preventing a further
increase in grid size.
The total number of cells is about 4,000,000 for the body-fins configuration and
4,500,000 for the body-fins configuration with deployable drag surfaces.
Figure 2 shows the quality of the structured grid adjacent to the surfaces of the
body, nose, fins and drag surfaces. To evaluate the quality of the structured grid
adjacent to the body surface, the values of y+ at the first wall nodes were
calculated; most of them have values greater than 30, except for a negligible
number of cells. A trade-off between computation time and quality of results led
to a grid with a finer mesh near the solid surfaces and a coarser mesh adjacent to
the far-field boundaries.
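The y+ check described above can be estimated before meshing from a flat-plate skin-friction correlation; a rough sketch under the usual power-law assumptions (the correlation choice and the sample flow values are illustrative, not data from the paper):

```python
import math

def first_cell_y_plus(rho, u_inf, mu, length, wall_distance):
    """Estimate y+ at the first grid node using the flat-plate
    turbulent skin-friction correlation Cf = 0.026 / Re_x^(1/7)."""
    re_x = rho * u_inf * length / mu          # Reynolds number at x = length
    cf = 0.026 / re_x ** (1.0 / 7.0)          # skin-friction coefficient
    tau_w = 0.5 * cf * rho * u_inf ** 2       # wall shear stress
    u_tau = math.sqrt(tau_w / rho)            # friction velocity
    return rho * u_tau * wall_distance / mu   # y+ = rho * u_tau * y / mu

# Illustrative sea-level flow at Mach 0.4 (~136 m/s) over a 2.35 m body,
# first node 0.5 mm from the wall: well above the y+ > 30 wall-function limit.
yp = first_cell_y_plus(rho=1.225, u_inf=136.0, mu=1.8e-5,
                       length=2.35, wall_distance=5e-4)
```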
Figure 2: The quality of the structured grid (a) adjacent to the surface of the
nose and (b) adjacent to the surface of the body, fins and drag surfaces.
Figure 3: Cd versus angle of attack (deg.).
Figure 4: Ca versus angle of attack (deg.).
Figure 5:
of the drag surfaces, which was not seen on the body before releasing the drag
surfaces. The region of influence of the separation is seen to encompass a larger
area of the body in supersonic flow compared with subsonic flow in figure 6.
Downstream of the drag surfaces, counter-rotating vortices are predicted, as
shown in figure 7 for Mach number 0.4 and an angle of attack of 0 degrees.
Since the base area of the body is increased by releasing the drag surfaces, the
base pressure drag is expected to increase significantly. Eventually, the
impingement upon the surface leads to changes in the pressure distribution on
the body surface, which is shown in figure 8. This change in pressure
distribution results in different forces and moments on the fins and body.
Upstream of the drag surfaces there is a pressure increase, which indicates the
existence of shock waves.
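The pressure rise ahead of the drag surfaces is consistent with shock formation. For orientation, the static pressure jump across a normal shock at the study's supersonic Mach number follows the standard Rankine-Hugoniot relation (a textbook result, not a value computed in the paper):

```python
def normal_shock_pressure_ratio(mach: float, gamma: float = 1.4) -> float:
    """Static pressure ratio p2/p1 across a normal shock:
    p2/p1 = 1 + 2*gamma/(gamma + 1) * (M^2 - 1)."""
    return 1.0 + 2.0 * gamma / (gamma + 1.0) * (mach ** 2 - 1.0)

# At the free-stream Mach number 1.5 used in the study:
ratio = normal_shock_pressure_ratio(1.5)  # ~2.46
```

Oblique shocks ahead of the plates would give a smaller jump, so this value is an upper bound on the local pressure rise.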
Figure 6:
Figure 7:
Figure 8:
Figure 9:
In figure 9, the surface static pressure contours on one side of a typical fin and
the drag surfaces are presented at Mach number 1.5 and 8 degrees angle of
attack. As can be seen, the different flow patterns and different pressure
distributions on the opposing fin and drag surfaces produce a different load
distribution.
Figure 10: Cd versus angle of attack (deg.) (series labelled "Before Releasing
the Drag Plate").
Figure 11: Cd versus angle of attack (deg.), before and after releasing the drag
plate.
Figure 12: Cl versus angle of attack (deg.).
Figure 13: Cl versus angle of attack (deg.).
4 Conclusion
An investigation into the aerodynamic characteristics of a body with deployable
drag surfaces has been carried out. A series of numerical simulations was
performed for a range of angles of attack at Mach numbers 0.4 and 1.5 to cover
both the subsonic and supersonic flow regimes. The study included two models
of the body, with the drag surfaces in the retracted and deployed positions. The
drag and lift force coefficients were calculated from the flow field solutions.
The results for the retracted and deployed models were compared with each
other in order to study the influence of the drag surfaces on the aerodynamic
forces on the body.
A general study of the main aerodynamic coefficients shows that, at all angles
of attack, the coefficient of lift decreases and the coefficient of drag increases.
This is due to the disturbing effect of the plates on the flow structure around the
nearby fins, which in turn decreases the performance of the set of fins
significantly. The presence of the flat plate as a local protuberance produces
more drag force at different angles of attack, as expected. This additional drag
force brings the body velocity and rate of descent down to a level of speed
acceptable for parachute deployment.
References
[1] Available online: www.info-central.org/recovery_techniques.shtml
[2] Sengupta, A. et al., Supersonic Performance of Disk-Gap-Band Parachutes
Constrained to a 0-Degree Trim Angle, Journal of Spacecraft and Rockets,
Vol. 46, No. 6, November-December 2009.
[3] Benney, R. et al., Aerodynamic Decelerator Systems - Diverse Challenges
and Recent Advances, Journal of Aircraft, Vol. 38, No. 5, September-
October 2001.
[4] Smith, B.P. et al., A Historical Review of Inflatable Aerodynamic
Decelerator Technology Development, IEEE Aerospace Conference, March
2010.
[5] Lahouti, A.N. et al., Design, Development and Testing of a Rigid
Aerodynamic Decelerator System for Recovery of a High-Altitude
Sounding Rocket Payload, The 1st International ARA Days: Atmospheric
Re-entry Systems, Missions and Vehicles, France, 2006.
[6] Anon., Sounding Rocket Program Handbook, NASA Goddard Space Flight
Center, Wallops Flight Facility, Wallops Island, Virginia, USA, June 2005.
[7] SRP-4 Design Team, SRP-4 Design Document, University of Alaska
Fairbanks, Alaska, USA, 2001.
[8] Washington, W.D. et al., Experimental Investigation of Grid Fin
Aerodynamics, Symposium on Missile Aerodynamics, Italy, 1998.
[9] Bell, M. et al., A Numerical Study into a Local Protuberance Interaction
with a Fin on a Supersonic Projectile, 47th AIAA Aerospace Sciences
Meeting Including the New Horizons Forum and Aerospace Exposition,
Florida, 2009.
Abstract
In the present work, a complete three-dimensional fluid dynamics simulation of
reacting flow in the post-combustion chamber of an electric arc furnace is
presented. The chemical composition of the simulated off-gases represents a
typical crucial load. The size of the gap (where oxygen enters to combust
hydrogen and carbon monoxide) was the independent variable. An optimal gap
size is desirable: if the gap size is too large, the thermal efficiency diminishes;
on the other hand, if the chosen gap size is too small, oxygen deficiency occurs,
which leads to incomplete combustion of carbon monoxide and hydrogen. It is
established herein that, by means of CFD calculation, the optimal gap size can
be evaluated for a particular steelmaking furnace.
Keywords: steelmaking, post-combustion, CFD.
1 Introduction
Steel production from scrap metal by electric arc furnace (EAF) is a widely
used technique. During the production of steel, considerable amounts of
combustible gases, such as carbon monoxide and hydrogen, are formed and are
extracted directly through the fourth hole on the water-cooled furnace roof. As a
result, the furnace interior is depressurized, which helps to minimize harmful
gas emissions, and air enters the furnace from the factory's ambient. The flow
rate of the penetrating air (usually called false air) is defined by the suction
flow rate of the direct evacuation system (DES), the design of the furnace
interior and the amount of gases generated in the furnace. Consequently, the
operation of the DES influences the mass and energy balance of the EAF.
doi:10.2495/CMEM110191
Table 1: Chamber specification.
Total volume:          388.7 m3
Inlet area:            2.9 m2
Outlet area:           6.6 m2
Water-cooled surface:  430.0 m2
Figure 1:
Consequently, the selected operating parameters define the mixing ratio, i.e.
the ratio between the airflow through the gap (Vgap) and the off-gas flow from
the EAF (VEAF).
The geometry was discretized by tetrahedral elements (Figure 3). The
computational mesh was unstructured and conformal. Local mesh refinements
were applied in regions where steep gradients of the dependent variables
(temperature, velocity, pressure) were expected.
The mesh was also refined in the reacting zone (where carbon monoxide,
hydrogen and oxygen mix), where high concentration gradients are expected. A
steep temperature gradient was likewise expected in the reacting zone due to the
exothermic reactions. The number of cells was typically between 500,000 and
650,000, depending on the gap size. Since the present investigation is a
preliminary study, a mesh-independence study is not presented herein; it is
currently in progress.
Figure 2: The gap size controls the mixing ratio (see text for definition).
The convergence criteria of CFD calculations were achieved and the adopted
residuals for each equation are listed in Table 2.
2.2 Kinetic mechanism
The following skeleton reactions were taken into account within the reacting
flow:

CO + 1/2 O2 -> CO2   (1)

H2 + 1/2 O2 -> H2O   (2)
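For the inlet composition considered below (30% CO and 10% H2 by volume, Table 4), reactions (1) and (2) fix the minimum oxygen demand, and hence a lower bound on the air-to-off-gas mixing ratio; a small sketch (the function name and sample values are illustrative, not results from the paper):

```python
def stoich_air_ratio(y_co: float, y_h2: float, y_o2_air: float = 0.21) -> float:
    """Minimum moles of air per mole of off-gas for complete combustion,
    from CO + 1/2 O2 -> CO2 and H2 + 1/2 O2 -> H2O."""
    o2_demand = 0.5 * y_co + 0.5 * y_h2   # mol O2 per mol off-gas
    return o2_demand / y_o2_air           # mol air per mol off-gas

# Typical melting-period load: 30% CO and 10% H2 by volume.
r = stoich_air_ratio(0.30, 0.10)  # ~0.95
```

Mixing ratios above this stoichiometric bound correspond to oxygen excess and complete conversion; values below it imply the oxygen deficiency described later for the smallest gap.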
Figure 3:
Table 2: Adopted convergence residuals.
Continuity:                      1E-05
Velocity (x, y, z):              1E-06
Energy:                          1E-06
Turbulent kinetic energy (k):    3E-06
Turbulent dissipation rate (ε):  5E-06
DO intensity:                    1E-06
Species:                         1E-06
Table 3: Boundary conditions.
Inlet:   pressure -10 Pa; temperature 1523 K; chemical composition as in Table 4.
Outlet:  velocity 23.6 m/s.
Gap:     pressure -10 Pa; temperature 473 K; chemical composition as in Table 4.
Walls:   temperature 353 K.
The selected concentrations of the gas entering at the fourth hole and at the gap
are listed in Table 4. These values represent a typical load during the melting
period, which is the most crucial period from the post-combustion point of view.
Table 4: Input data for chemical compositions. At the gap, pure air was
assumed.
Material   Inlet (%)   Gap (%)
CO         30          0
CO2        0           0
H2         10          0
H2O        0           1
O2         0           21
N2         60          78
The reactions occur only where the combustible gases and oxygen mix.
Figure 4, showing the mass fraction of carbon monoxide, proves that the
reactions take place at the very beginning of the post-combustion chamber.
Consequently, the mixing can be assumed to be very efficient. The calculated
conversion factors are listed in Table 5.
It can be noted that complete combustion of carbon monoxide and hydrogen
occurred only in the case of the 40 cm gap size. When the gap size is 30 cm, the
conversion is almost satisfactory (Table 5). Nevertheless, in the case of the
20 cm gap size, there is not enough oxygen for the complete reaction. Therefore, the
Table 5: Calculated conversion factors and outlet concentrations for the three
gap sizes.
Gap size   H2 conversion   CO (ppm)   H2 (ppm)
20 cm      0.72            34300      6920
30 cm      0.99            1475       163
40 cm      1.00            64         0.34
Figure 4:
conversion rate of H2 and CO remains low and their concentration at the exit is
quite high.
In Table 6, the concentrations of the exiting gases are listed. The compositions
of the leaving gases were compared with literature data [1] and experimental
measurements provided by the Stg Group Company; the calculated data match
the available data. The effect of the gap size on the oxygen content can easily
be seen: a small gap size causes an oxygen deficit and incomplete combustion.
Table 6: Concentrations of the exiting gases for the three gap sizes.
Gap size   CO (%)     CO2 (%)   H2 (%)     O2 (%)     H2O (%)
20 cm      3.43       6.01      6.92E-01   2.90E-05   16.2
30 cm      0.15       7.80      1.63E-02   1.18       15.9
40 cm      6.47E-03   7.30      3.41E-05   2.67       14.8
The thermal efficiency is lower when the gap size is large and an oxygen excess
is present (Table 7). Moreover, a large gap gives a lower temperature at the
outlet (Figure 5), which is desirable to avoid damage to the polymer filters.
Table 7: Mixing ratio and thermal efficiency for the three gap sizes.
Gap size   Mixing ratio   Thermal efficiency (η)
20 cm      1.34           0.65
30 cm      2.09           0.62
40 cm      2.32           0.60
Figure 5:
Our results show that increasing the gap size reduces the emission of harmful
gases, owing to complete oxidation, while the thermal efficiency remains
reasonably high.
From the fluid dynamics point of view, the selected geometry of the
post-combustion chamber poses conflicting observations. On the one hand, the
calculated path lines demonstrate a turbulent flow pattern, with tortuous
streamlines and recirculation zones. This is clearly due to the presence of
sharp-edged corners, abrupt changes of direction and stagnant zones. These
factors do not help the smoothness of the flow, although they do help the
settling of fine particles that may be present in the off-gas. On the other hand,
this does not adversely affect the overall performance of the chamber, since
mixing is enhanced.
4 Conclusions
In the present study, we applied CFD calculation to the simulation of a
post-combustion chamber. First, we found that the mixing in the chamber is
efficient, owing to the reacting zone being located at the beginning of the
reactor. In particular, we focused on the importance of the gap size. When the
gap size was set to 30 or 40 cm, thanks to the sufficient oxygen supply, the
conversion of hydrogen and carbon monoxide was high and the oxidation was
complete. In addition, we found that the thermal efficiency decreases as the gap
size increases; however, the reduction is not significant. Therefore, a larger gap
size is desirable in practice to prevent carbon monoxide and hydrogen from
escaping to the ambient.
References
[1] M. Kirschen, V. Velikorodov, H. Pfeifer, Mathematical modeling of heat
transfer in dedusting plants and comparison to off-gas measurements at
electric arc furnaces, Energy, 31, pp. 2926-2939, 2006.
[2] Yun Li, Richard J. Fruehan, Computational Fluid-Dynamics Simulation of
Post combustion in the Electric-Arc Furnace, Metallurgical and Materials
Transactions B, 34B, pp. 333-343, 2003.
[3] P. Gittler, R. Kickinger, S. Pirker, E. Fuhrmann, J. Lehner, J. Steins,
Application of computational fluid dynamics in the development and
improvement of steelmaking processes, Scandinavian Journal of
Metallurgy, 29, pp. 166-176, 2000.
[4] K. Chattopadhyay, M. Isac, R. I. L. Guthrie, Application of Computational
Fluid Dynamics (CFD) in iron- and steel making: Part 1, Iron making and
Steel making, 37(8), pp. 554-561, 2010.
[5] K. Chattopadhyay, M. Isac, R. I. L. Guthrie, Application of Computational
Fluid Dynamics (CFD) in iron- and steel making: Part 2, Iron making and
Steel making, 37(8), pp. 562-569, 2010.
[6] D. Mazudmar, J. W. Evans, Modelling of steelmaking processes, Boca
Raton, FL, CRC Press, 2009.
[7] A. Habibi, B. Merci, G. J. Heynderickx, Impact of radiation models in CFD
simulations of steam cracking furnaces, Computers and Chemical
Engineering, 31, pp. 1389-1406, 2007.
[8] A. Cuoci, A. Frassoldati, G. Buzzi Ferraris, T. Faravelli, E. Ranzi, The
ignition, combustion and flame structure of carbon monoxide/hydrogen
mixtures, Note2: Fluid dynamics and kinetic aspects of syngas combustion,
International Journal of Hydrogen Energy, 32, pp. 3486-3500, 2007.
[9] Magnussen, B.F., On the structure of turbulence and a generalized eddy
dissipation concept for chemical reactions in turbulent flows. 19th AIAA
Aerospace Sciences Meeting, St. Louis, Missouri, 1981.
[10] Y. R. Sivathanu & G. M. Faeth. Generalized State Relationships for Scalar
Properties in Non-premixed Hydrocarbon/Air Flames, Combustion and
Flame, 82, pp. 211-230, 1990.
[11] L. Labiscsak, G. Straffelini, F. Trivellato, M. Bodino, C. Corbetta,
Computational fluid dynamics simulations of post combustion
chambers, 33rd AIM National Congress, Brescia, Italy, 2010.
[12] G. Krishnamoorthy, A new weighted-sum-of-gray-gases model for
CO2-H2O gas mixtures, International Communications in Heat and Mass
Transfer, 37, pp. 1182-1186, 2010.
Abstract
The flow structure in a compound channel is complicated due to the transfer of
momentum between the deep main channel section and the adjoining shallow
floodplains. The boundary shear stress distribution in the main channel and on
the floodplain greatly affects this momentum transfer. In the present work, the
shear stress distributions across an assumed interface plane originating from the
junction between the main channel and floodplain, following the Divided
Channel Method (DCM), are analyzed and tested for different compound
channels and flow conditions using global data. An improved equation to
predict the boundary shear distribution in compound channels for different
width ratios is derived, which gives better results than previously proposed
models. Analyses are also carried out to choose an appropriate interface plane
for evaluating the stage-discharge relationship of compound channels having
equal roughness on the channel beds and walls. The effectiveness of predicting
the stage-discharge relationship using the apparent shear stress equation and the
boundary shear distribution models is discussed.
Keywords: apparent shear, main channel, floodplain, compound channel,
discharge estimation, interface planes.
1 Introduction
During floods, a part of the river discharge is carried by the main channel and
the rest by the floodplains. Momentum transfer between the flow in the deep
main channel and that on the shallow floodplain takes place, making discharge
prediction in a compound channel more difficult. In the laboratory, the
mechanism of momentum transfer between the channel section and floodplain
was first investigated and demonstrated by Zheleznyakov [30] and Sellin [24].
doi:10.2495/CMEM110201
2 Experimental analyses
In the present work, a compound channel is fabricated using Perspex sheets
inside a tilting flume in the Hydraulic Engineering Laboratory of the Civil
Engineering Department, National Institute of Technology, Rourkela, India.
The compound channel is symmetrical about the centerline of the main channel, making
the total width of the compound section 440 mm (Figure 1). The main channel
is rectangular in cross section, 120 mm wide and 120 mm deep at bank-full
stage. The longitudinal bed slope of the channel is 0.0019. The roughnesses of
the floodplain and main channel are identical. From the experimental runs in
the channel, the bed roughness coefficient (Manning's n) is estimated to be
0.01. A re-circulating water supply system is established, with water pumped
from an underground sump to an overhead tank, from where it flows under
gravity to the experimental channel through a stilling chamber and baffle wall.
A transition zone between the stilling tank and the channel helps to reduce the
turbulence of the incoming water. An adjustable tailgate at the downstream end
of the flume is used to achieve uniform flow over the test reach for a given
discharge. Water from the channel is collected in a volumetric tank that is used
to measure the discharge rate; from the volumetric tank the water runs back to
the underground sump.
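The quoted Manning's n follows from inverting Manning's uniform-flow equation with the measured discharge; a minimal sketch with illustrative bank-full numbers for this channel (the specific discharge value is assumed for illustration, not reported in the paper):

```python
import math

def manning_n(q: float, area: float, hyd_radius: float, slope: float) -> float:
    """Manning roughness n = A * R^(2/3) * sqrt(S) / Q (SI units)."""
    return area * hyd_radius ** (2.0 / 3.0) * math.sqrt(slope) / q

# Bank-full main channel: 0.12 m wide x 0.12 m deep, bed slope 0.0019.
area = 0.12 * 0.12                  # flow area A, m^2
radius = area / (0.12 + 2 * 0.12)   # hydraulic radius R = A/P, m
n = manning_n(q=0.00734, area=area, hyd_radius=radius, slope=0.0019)  # ~0.01
```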
Figure 1:
Table 1: Details of the compound channel data sets.

Test channel           Series   Slope (S)   Width (b), mm   Depth (h), mm   Side slope (s)   nfp/nmc   Width ratio (α)   Discharge (Q) range, cm3/s   Relative depth β = (H-h)/H
Present Channel        Type-I   0.0019      120             120             0                1         B/b = 3.667       8726-39071                   0.118-0.461
Knight and Demetriou   01       0.00096     304             76              0                1         B/b = 2           5200-17100                   0.108-0.409
                       02       0.00096     456             76              0                1         B/b = 3           5000-23400                   0.131-0.491
                       03       0.00096     608             76              0                1         B/b = 4           4900-29400                   0.106-0.506
FCF Series-A           01       1.027E-03   1500            150             1.0              1         B/b = 6.67        208200-1014500               0.056-0.400
                       02       1.027E-03   1500            150             1.0              1         B/b = 4.2         212300-1114200               0.0414-0.479
                       03       1.027E-03   1500            150             1.0              1         B/b = 2.2         225100-834900                0.0506-0.500
                       08       1.027E-03   1500            150             -                1         B/b = 4.0         185800-1103400               0.0504-0.499
                       10       1.027E-03   1500            150             2.0              1         B/b = 4.4         236800-1093900               0.0508-0.464
a number of points at the predefined grid points. The overall discharge obtained
from integrating the longitudinal velocity plot and from the volumetric tank
collection is found to agree within 3%. Using the velocity data, the boundary
shear on the channel beds and walls is evaluated from a semi-log plot of the
velocity distribution. Boundary shear stresses are also obtained from the
manometric readings of the head differences of the Preston tube technique
using Patel's [17] relationship. Error adjustments to the shear values are made
by comparing them with the corresponding shear values obtained from the
energy gradient approach; the results of the two methods are found to be
consistently within 3% of each other. A summary of the discharges and the
percentage of boundary shear carried by the floodplain (%Sfp) for the different
relative depths (β) observed in the experimental runs is given in Table 1.
Figure 2: Compound channel cross-section showing the vertical, diagonal and
horizontal interface planes originating from the main channel-floodplain
junction o, the boundary elements (1)-(4), the total flow depth H, the main
channel depth h, and the widths b and B.
3 Methodology
3.1 Shear force on the assumed interface planes
In Figure 2, the vertical, horizontal, and diagonal planes of separation of the
compound channel are represented by the interface lengths o-g, o-o, and o-c
respectively. The boundary elements comprising the wetted perimeter are
labeled (1), (2), (3) and (4). Label (1) denotes the vertical wall(s) of the
floodplain, of length 2(H - h), where H = total depth of flow from the main
channel bed and h = depth of the main channel. Label (2) denotes the floodplain
beds, of length (B - b), where B = total width of the compound channel and
b = width of the main channel bed, which is represented by label (4). Label (3)
denotes the two main channel walls, of length 2h. Experimental shear stress
distributions at each point of the wetted perimeter are numerically integrated
over the respective sub-lengths of each boundary element (1)-(4) to obtain the
respective boundary shear force per unit length for each element. The sum of
the boundary shear forces over all the beds and walls of the compound channel
is used as the divisor to calculate the shear force percentages carried by the
boundary elements (1)-(4). The percentage of shear force carried by the
floodplains, comprising elements (1) and (2), is denoted %Sfp, and that for the
main channel, comprising elements (3) and (4), is denoted %Smc. Following
Knight and Demetriou [11], Knight and Hamed [12] proposed an equation for
%Sfp for a compound channel section as
%Sfp = 48 (α - 0.8)^0.289 (2β)^m   (1)

Equation (1) is applicable for channels having equal surface roughness on the
floodplain and in the main channel. For channels with non-homogeneous
roughness the equation is extended as

%Sfp = 48 (α - 0.8)^0.289 (2β)^m {1 + 1.02 √β log γ}   (2)

m = 1 / (0.75 e^(0.38α))   (3)

where γ = nfp/nmc is the ratio of Manning's roughness coefficients. For a
homogeneous roughness section (γ = 1), equation (2) reduces to the form of
Knight and Hamed [12], i.e. equation (1). Due to the complexity of the
empirical equations proposed by previous investigators, a regression analysis
was made by Khatua and Patra [10], who proposed the equation for %Sfp

%Sfp = 1.23 β^0.1833 (38 Ln α + 3.6262) {1 + 1.02 √β log γ}   (4)
Once the shear force carried by the floodplain is known from equation (4), the
momentum transfer, in terms of the apparent shear force acting on the
imaginary interface of the compound section, can be calculated. The analysis of
momentum transfer helps in predicting the stage-discharge relationship of a
compound channel, which is discussed later in the paper. For any regular
prismatic channel under uniform flow conditions, the sum of the boundary
shear forces acting on the main channel walls and bed, together with the
apparent shear force acting on the interface plane originating from the main
channel-floodplain junction, must equal the resolved weight force along the
main channel. Using this concept, Patra and Kar (2000) derived the percentage
of shear force %ASFip acting at the interface plane as
%ASFip = 100 (Amc / A) - (100 - %Sfp)   (5)

where Amc is the area of the main channel bounded by the chosen interface and
A is the total flow area. For an interface plane o-p inclined at an angle θ to the
vertical, the bounded main channel area is modified and equation (5) becomes

%ASFip = 100 [1 + β² tan θ / {δ (1 - β)}] / {1 + (α - 1) β} - (100 - %Sfp)   (6)

where δ = aspect ratio of the main channel = b/h, and α and β are the width
ratio and relative depth defined earlier. The second case is when the interface
o-p lies between o-c and o-e; the range of the angle θ for this situation can be
calculated from the relations tan θ = b/(2h) and tan θ = b/{2(H - h)}. The area
of the main channel bounded by such an interface yields the corresponding
expression for %ASFip, equation (7).
For any given interface the angle θ is known, so equations (6) and (7) can be
used directly to find the apparent shear along the interface.
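For the vertical interface, the force balance of equation (5) can be evaluated directly from the channel geometry; a minimal sketch (dimensions follow the present channel, while the %Sfp value is illustrative):

```python
def pct_asf_vertical(total_width, bed_width, flow_depth, bank_depth, pct_sfp):
    """Apparent shear force percentage on the vertical interface:
    %ASF = 100 * Amc / A - (100 - %Sfp), i.e. equation (5)."""
    a_mc = bed_width * flow_depth  # main channel area up to the vertical interface
    a_total = a_mc + (total_width - bed_width) * (flow_depth - bank_depth)
    return 100.0 * a_mc / a_total - (100.0 - pct_sfp)

# Present channel (B = 0.44 m, b = 0.12 m, h = 0.12 m) at H = 0.1412 m
# (beta ~ 0.15), with an assumed floodplain shear share of 35%.
asf = pct_asf_vertical(0.44, 0.12, 0.1412, 0.12, pct_sfp=35.0)
```

A positive result means momentum is transferred from the main channel to the floodplain, consistent with the sign convention adopted in section 4.2.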
Figure 3: Percentage of error in %Sfp for the present model, the Khatua &
Patra model and the Knight & Demetriou model, versus relative overbank flow
depth.
For a better understanding of the boundary shear stress distribution, the authors
have studied five series of FCF Phase A channels (α = 2.2, 4.0, 4.2, 4.4 and
6.67), three series of compound channel data (α = 2, 3 and 4) of Knight and
Demetriou [11], along with the data of the present compound channel
(α = 3.67). These compound channels have homogeneous roughness in both
the main channel and floodplain sub-sections. Details of the experimental
arrangements relating to Phases A to C of the FCF work are given by Greenhill
and Sellin [5]. Experiments for the other three channels are described by Knight
and Demetriou [11]. On the basis of a total of 62 overbank flow depths from
nine different
Figure 5: Calculated versus observed percentage of floodplain shear (present
model and Knight & Demetriou model).
types of compound channels, with width ratios ranging from 2.0 to 6.67 and relative depths ranging from 0.1 to 0.5, the authors have plotted the variation of the percentage of floodplain area %A_fp against %S_fp in Figure 4, fitted as

%S_fp = 4.1045 (%A_fp)^0.6917    (8)

By substituting %A_fp = 100 (β - 1) δ / [1 + (β - 1) δ], eqn (8) is rewritten as

%S_fp = 4.1045 {100 (β - 1) δ / [1 + (β - 1) δ]}^0.6917    (9)
Now, equation (9) can be used for channels having equal surface roughness in the floodplain and the main channel. For non-homogeneous channels, equation (9) can be further modified as

%S_fp = 4.105 {100 (β - 1) δ / [1 + (β - 1) δ]}^0.6917 {1 + 1.02 √δ log γ}    (10)

Using equation (10), the variation between the calculated %S_fp and the observed values for all ten types of compound channels is shown in Fig. 5. The same plot also shows the calculated %S_fp of previous investigators (i.e. equations (2) and (4)). The plot indicates a high correlation (R² = 0.98) for equation (10), against R² = 0.68 and 0.74 using equations (2) and (4) respectively.
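For quick evaluation, eqns (8)-(10) can be coded directly. The sketch below assumes the forms reconstructed above (coefficient 4.105, exponent 0.6917, and a roughness correction 1 + 1.02 √δ log₁₀ γ); with γ = 1 it reduces to the homogeneous-roughness case of eqn (9):

```python
import math

def pct_area_fp(beta, delta):
    # Percentage of floodplain area, %Afp, for width ratio beta and relative depth delta
    return 100.0 * (beta - 1.0) * delta / (1.0 + (beta - 1.0) * delta)

def pct_shear_fp(beta, delta, gamma=1.0):
    # Eqn (10): %Sfp with the roughness correction; gamma = 1 recovers eqn (9)
    base = 4.105 * pct_area_fp(beta, delta) ** 0.6917
    return base * (1.0 + 1.02 * math.sqrt(delta) * math.log10(gamma))

# Example: present channel, beta = 3.67, delta = 0.3, homogeneous roughness
print(pct_shear_fp(3.67, 0.3))
```

Unlike the earlier fits, the expression stays below 100% even for wide floodplains (large β), which is the improvement claimed in the conclusions.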
4.2 Shear force on the assumed interface planes for an angle θ
Further, to account for the momentum transfer, the present experimental results are analysed using equation (10) and are plotted in Figure 6. The sign convention for momentum transfer is positive from the main channel to the floodplain and negative from the floodplain to the main channel. As can be seen, the apparent shear
Figure 6: Apparent shear force plotted against the angle θ (from -90° to 180°) that the assumed interface plane makes with the vertical; the vertical and horizontal interface positions are marked.
on the vertical interface is found to be 13.5% of the total shear for the overbank flow depth of 2.12 cm (δ = 0.15). The apparent shear decreases as the flow depth increases, reaching 9.1% for an overbank flow depth of 8.21 cm (δ = 0.406). This shows that the apparent shear is higher at low floodplain depths and gradually reduces as the depth over the floodplain increases. Similar results are obtained for the horizontal and diagonal interface planes for the present channels, as well as when the global data sets are tested using this concept. The interface plane of zero shear is located near the horizontal interface (approximately at θ = 99°) for δ = 0.15, while for higher overbank flow depths it is observed near a diagonal line of separation (approximately at θ = 40°).
4.3 Estimating discharge using different approaches
Let Q_c denote the calculated discharge and Q_m the measured discharge. The percentage of error and the standard error for each series of experimental runs are computed; the percentage error is given as

Error (%) = 100 (Q_c - Q_m) / Q_m    (11)

Using the vertical, horizontal, diagonal, and other interface planes, the errors in discharge estimation for the experimental channel and for one channel from the FCF (series A) are plotted in Figures 7(a) and 7(b) respectively.
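Eqn (11) is a plain relative error. In the helper below the per-run error follows eqn (11) exactly, while the series-level standard error is aggregated as a root-mean-square, which is an assumption, since the printed formula for the standard error did not survive extraction:

```python
import math

def discharge_error_pct(q_calc, q_meas):
    # Eqn (11): percentage error of calculated versus measured discharge
    return 100.0 * (q_calc - q_meas) / q_meas

def standard_error(errors_pct):
    # Assumed root-mean-square aggregation over the runs of one series
    return math.sqrt(sum(e * e for e in errors_pct) / len(errors_pct))
```

A positive error means the subdivision method over-predicts the measured discharge, a negative one that it under-predicts it.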
In the DCM, proper selection of the interface plane requires the value of the apparent shear at the assumed interface plane. If the shear stress at this interface is zero, then there is no momentum transfer across the selected interface, and therefore the length of the interface is not included in the wetted perimeter of the main channel or the floodplain when calculating discharge using the divided channel method and Manning's equation. However, due to the difficulty of locating this plane for all channel geometries and flow depths, investigators either include or exclude the interface lengths when calculating the wetted perimeter for the estimation of discharge. By including this interface length in the wetted
WIT Transactions on Modelling and Simulation, Vol 51, 2011 WIT Press
www.witpress.com, ISSN 1743-355X (on-line)
225
perimeter of the main channel, a shear drag of magnitude equal to the interface
length times the average boundary shear is included. However, in such situations
the actual interface shear is not considered as the shear along this interface is not
equal to the boundary shear for all depths of flow in the channel. Similarly, by
excluding these lengths, a zero shear along these interfaces is assumed.
The single channel method (curve SCM, where the whole compound section is treated as a single one) is found to give a higher discharge error for lower overbank flow depths and a much smaller error for high overbank flow depths, which is in line with the findings of Seçkin [23]. This also shows that at very high overbank depths the compound channel behaves like a single unit (Bhowmik and Demissie [2]). SCM also gives the maximum point error [e.g. for β = 6.67 the discharge error is more than 45% (Figure 7b)]. Similarly, VDM-II (curve Vie) gives better discharge results than VDM-I (curve Vee), which is in line with the findings of Mohaghegh and Kouchakzadeh [15]. VDM-I (curve Vee) gives higher error for compound channels with wide floodplains (e.g. β = 6.67 in Figure 7). For all the compound channels studied, the error from HDM-I (curve Hee) is less than that from HDM-II (curve Hie), which is in line with the findings of Seçkin [23]. It is again observed that the HDM-I approach gives better discharge results than the corresponding vertical interface method (VDM-I) for low depths of flow over the floodplain, but gives a large discharge error at higher depths. These findings are similar to the results of Mohaghegh and Kouchakzadeh [15]. It is also noticed that DDM (curve Dee) gives less error (Figure 7) than all the VDM and HDM variants for all the compound channels studied, following the results of Wormleaton et al. [28], Knight and Hamed [12], Khatua [9] and Seçkin [23]. Both the area method (curve AM) and the variable inclined plane method (curve VI) give a higher standard error for
Figure 7: Percentage of error in discharge estimation plotted against relative depth δ for the different approaches (curves SCM, AM, Vee, Vie, Hee, Hie, Dee and VI): (a) present experimental channel; (b) FCF series A channel.
5 Conclusions
The following conclusions can be drawn from the above investigation.
For a compound channel, the important parameters affecting the boundary shear distribution are the relative depth (δ), the width ratio (β), and the relative roughness (γ). These three dimensionless parameters are used to form equations representing the total percentage of shear force carried by the floodplains. The present formulation for estimating the percentage of shear force carried by the floodplain boundary, %S_fp, is a distinct improvement over those of previous investigators, in the sense that it is adequate for all types of straight compound channel geometry (narrow as well as wide floodplain channels). Equations by previous investigators give %S_fp of more than 100% when applied to compound channels with wide floodplains (i.e. width ratio β > 10).
Using the boundary shear distribution results, the momentum transfer at the different interfaces originating from the main channel and floodplain junction is quantified for all types of geometry. The proposed equation provides an estimate of the apparent shear stress on any assumed interface in terms of the angle θ it makes with the vertical. Furthermore, the stage-discharge relationship of a compound channel using the divided channel method should be decided only after finding the apparent shear stress across the interface planes.
Based on the present analysis using the DCM, it can be concluded that both HDM-I and DDM are better than the other approaches: HDM-I is good for low overbank depths and the DDM subdivision is better for higher overbank depths. The adequacy of the developed equation for the shear stress distribution along the boundary of compound channels is verified using the data from the FCF-A channels.
References
[1] Ackers P, Hydraulic Design of Two Stage Channels, Proc. of the Inst. of Civil Engineers, Water Maritime and Energy, 96(4) (1992) 247-257.
[2] Bhowmik N.G., Demissie M, Carrying Capacity of Flood Plains, J Hydraul Eng, ASCE, 108(HY3) (1982) 443-453.
[3] Bousmar D, Zech Y, Momentum Transfer for Practical Flow Computation in Compound Channels, J Hydraul Eng, ASCE, 125(7) (1999) 696-706.
[4] Chow V.T, Open Channel Hydraulics, McGraw Hill Book Co. Inc., New
York, 1959.
WIT Transactions on Modelling and Simulation, Vol 51, 2011 WIT Press
www.witpress.com, ISSN 1743-355X (on-line)
Abstract
In this work we theoretically revisit the capillary rise process in a circular tube for very short time scales, retaining in this manner the physical influence of the inertial effects. We use the boundary-layer technique, or matched asymptotic expansion procedure, to treat this singular problem by identifying two appropriate time scales: a short time scale related to inertial effects and a large time scale basically associated with imbibition effects. Because the well-known Washburn's law was derived by neglecting the inertial effects, the corresponding solution has a singular behavior for short times, which is reflected in an infinite mass flow rate. For this purpose we derive a zero-order solution which is enough to avoid the singular behavior of the solution. In this manner, Washburn's solution represents only the outer solution, valid only on the large time scale. The above analytical result is compared with a numerical solution, including the case when the contact angle between the meniscus and the inner surface of the capillary tube becomes a dynamic contact angle. On the other hand, the presence of inertial effects can induce oscillations of the imbibition front which are controlled by the dynamic contact angle. Therefore, in the present work we predict a global asymptotic formula for the temporal evolution of the height of the liquid. In order to show the importance of the inertial terms, we present this evolution for different values of the dimensionless parameters involved in the analysis.
Keywords: wicking process, inertial effects, singular perturbation, matched asymptotic expansions.
doi:10.2495/CMEM110211
1 Introduction
In recent years, the phenomenon of capillary wicking has strongly stimulated theoretical studies, together with experimental evidence, showing some peculiar aspects of these complex processes. In real situations, the wetting of a surface is controlled by the rate of spreading of the liquid over the substrate; in general, this effect occurs over very short times, before the system reaches the well-known equilibrium of Young's equation, where the surface tension force is exactly balanced with the gravity force. In addition, the movement of the contact line of the liquid front depends strongly on the molecular kinetics of the dynamic contact angle. The existing theories and experimental results about the position and velocity of the contact line are not yet well understood. For instance, relevant studies of the spreading of a drop over horizontal and inclined flat plates have been developed to clarify that, in some cases, the macroscopic contact line can be preceded by a precursor film, where the van der Waals forces are not negligible. This idea was originally proposed by De Gennes [1]. Nowadays, an acceptable point of view for treating the dynamics of the contact angle is to include molecular forces, like the van der Waals forces, improving in this manner the macroscopic hydrodynamic models. In this direction, Treviño et al. [2, 3] carried out a theoretical analysis to predict the influence of the precursor film on the dynamics of an axisymmetric drop spreading over a horizontal surface. The state of the art can be found in the book of Middleman [4], where relevant topics and applications illustrate different wicking phenomena.
Since the pioneering work of Washburn [5], several mathematical models have been proposed to analyze cases where the capillary forces have a predominant effect. In this direction, the classical works of Joos et al. [6] and Batten [7] rigorously identify the main forces that act on a liquid rising up a capillary tube. Hamraoui et al. [8], using a high-speed imaging technique and solving a particular Washburn-Rideal-Lucas equation based on a fully developed flow, showed the physical influence of the dynamic contact angle on the wicking process. In that study, the authors postulated a fundamental relationship between the dynamic contact angle and the rate of liquid rise within the capillary tube, and the mathematical solution was validated against the experimental results. A similar study was reported by Hamraoui and Nylander [9], including numerical predictions of oscillations of the imbibition front. In reality, these oscillations had previously been reported by Quéré [10] for liquids of low enough viscosity.
In this paper, following the analytical models proposed by Hamraoui et al. [8] and Duarte et al. [11], we present a zero-order asymptotic analysis to characterize the initial stage of the wicking penetration process into a circular capillary tube for the case when the inertial terms are important. The fundamental idea is to improve the existing theoretical predictions of the velocity of the front reported by previous schemes, using singular perturbation techniques. We anticipate that a singular behavior prevails if, for example, inertial and gravity forces are neglected. In this case, the simplest form of Washburn's law shows that the penetration velocity is of order
U = dh/dt ~ t^(-1/2),

which diverges as t → 0.
2 Theoretical analysis
The present model corresponds to eqn (1a) of Hamraoui et al. [8]; the main forces that act on a liquid rising up a capillary tube are due to surface tension, gravity, viscosity and inertia, respectively:

2πRγ [1 - (β/γ)(dh/dt)] - πR²ρgh - 8πμh (dh/dt) = πR²ρ (d/dt)(h dh/dt)    (1)
where R is the radius of the capillary, β is a constant related to the dynamic contact angle, h(t) is the height of the liquid at time t, γ, ρ and μ are the surface tension, density and dynamic viscosity of the liquid, respectively, and g is the acceleration due to gravity. In deriving the above equation the entrance hydrodynamic effects were neglected and the liquid was assumed Newtonian, so that the average velocity of the liquid rising in the capillary follows from Poiseuille's law as U = dh/dt = R²ΔP/(8μh). In addition, we adopt the relationship between the dynamic contact angle θd and the rate dh/dt given by cos θd(t) = 1 - (β/γ)(dh/dt). The details of the above considerations can be found elsewhere [8]. In this form, the forces included in eqn (1) are expressed in terms of the unknown height h(t). This scheme has been widely used in lubrication theory to analyze fluid flow in thin liquid films (Oron et al. [12]). The above non-linear ordinary differential equation must satisfy two initial conditions. Traditionally, the majority of published works include only the first of these initial conditions, the reason being that the inertia terms are neglected. However, for very short times the inertial terms must be included, all the more so for large radii. Therefore, we propose the following initial conditions:

t = 0 :  h = 0  and  dh/dt = Q.    (2)
For large times the meniscus reaches its equilibrium height,

t → t_eq :  h → h_eq = 2γ/(ρgR).    (3)
Now, for solving eqn (1) we use appropriate dimensionless variables, taking into account that, in a first approximation, the characteristic time t_c of the wicking penetration process is determined by a balance between the surface tension and viscosity forces. We define

τ = t/t_c ,  Y = h/h_eq ,    (4)

so that the governing equation becomes

1 - α (dY/dτ) = Y + Y (dY/dτ) + ε (d/dτ)(Y dY/dτ),    (5)

with

τ = 0 :  Y = 0 ,  dY/dτ = λ ;   τ → τ_eq :  Y → 1,    (6)

where the dimensionless parameters α, λ and ε are built from β, the initial velocity Q, g, R and the liquid properties; in particular, ε can be written in terms of the Bond number Bo and the Galileo number Ga as
ε = Bo Ga / 128.    (7)
Thus Y = Y(τ; τ_eq, α, ε, λ), where τ_eq is the dimensionless equilibrium time. In the remainder of this paper we analyze and classify the solutions according to the assumed values of the parameters, taking advantage of the fact that, in general, α and ε are very small compared with unity. We anticipate that the values of λ are irrelevant for the zero-order solution of Y.
2.1 Asymptotic limit of α ≪ 1 (pre-wetted surface)
In this limit eqn (5) reduces to

1 - Y - Y (dY/dτ) = ε (d/dτ)(Y dY/dτ).    (8)

The numerical values of ε are generally small, and therefore the above equation dictates that the inertial term, represented by the right-hand side, is only important on a time scale of order ε; otherwise, for τ ~ 1, it is negligible in a first approximation. Thus we can introduce two time scales to study the problem.
2.1.1 Formulation for large times (τ ~ 1)
For this relevant limit, we propose the following expansion:

Y = Y0 + ε Y1 + O(ε²).    (9)

The leading-order problem is

1 - Y0 = Y0 (dY0/dτ),    (10)

and the solution is readily derived and given by (undetermined only by the constant C0):

τ = -Y0 - ln(1 - Y0) + C0.    (11)
2.1.2 Formulation for short times (τ ~ ε)
For short times we introduce the stretched variables s = τ/ε and y = Y/ε^(1/2), in terms of which eqn (8) takes the form

(d/ds)(y dy/ds) + y (dy/ds) = 1 - ε^(1/2) y.    (12)

Now, in order to find the zero-order solution of this equation, we propose the following expansion:

y = y0 + O(ε^(1/2)),    (13)

and the leading-order equation is governed by a balance between the inertia, viscosity and surface tension forces, given by

(d/ds)(y0 dy0/ds) + y0 (dy0/ds) = 1,    (14)

with

s = 0 :  y0 = 0  and  dy0/ds = O(1).    (15)

The non-linear second-order differential equation (14) can be easily solved because it admits a first integral. The resulting first-order non-linear equation is solved using an appropriate integrating factor; for simplicity the details are omitted. Following the above procedure and applying the initial conditions (15), we obtain:

y0 = [2(s + e^(-s) - 1)]^(1/2).    (16)
It is very important to note that the asymptotic solutions for short and large times must be matched; the zero-order composite solution is

Y ≈ Y0 + ε^(1/2) y0 - Ymatch,    (17)

where Ymatch represents the intermediate solution valid in the matching region, obtained from the condition

lim(τ→0) Y0 = lim(s→∞) ε^(1/2) y0.    (18)

Thus, applying this condition to the solutions (11) and (16) and expanding both solutions adequately, it can easily be shown that the constant C0 = 0. In this form, the zero-order global solution is based effectively on an intermediate region ε ≪ τ ≪ 1, where the short- and large-time solutions are equivalent. Since Ymatch = (2τ)^(1/2), the composite solution is

Y ≈ [2τ + 2ε(e^(-τ/ε) - 1)]^(1/2) + Y0 - (2τ)^(1/2),    (19)

with Y0 given implicitly by

τ = -Y0 - ln(1 - Y0).    (20)
Inverting eqn (20) gives

Y0 = 1 + W(-e^(-(1+τ))),    (21)

where W is the Lambert function. Finally, substituting eqn (21) into eqn (19), we obtain

Y ≈ [2τ + 2ε(e^(-τ/ε) - 1)]^(1/2) + 1 + W(-e^(-(1+τ))) - (2τ)^(1/2).    (22)

The Lambert term can be evaluated from the expansion of W about its branch point in powers of p = [2(1 - e^(-τ))]^(1/2),

W(-e^(-(1+τ))) ≈ -1 + p - p²/3 + ...,    (23)

where e = 2.718281...
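The global formula (22) can be evaluated without special-function libraries; the sketch below computes the Lambert function by Newton iteration seeded with its branch-point series (an implementation choice, not part of the paper) and assembles the composite zero-order height:

```python
import math

def lambert_w(x, tol=1e-14):
    # Principal branch W0 on -1/e <= x < 0, the range needed for eqn (21)
    q = 1.0 + math.e * x
    if q <= 0.0:          # x = -1/e: branch point, W = -1
        return -1.0
    if x < -0.2:          # near the branch point, seed with the series of W
        p = math.sqrt(2.0 * q)
        w = -1.0 + p - p * p / 3.0
    else:
        w = x             # W(x) ~ x for small |x|
    for _ in range(50):   # Newton iteration on w*exp(w) - x = 0
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def height_zero_order(tau, eps):
    # Eqn (22): inner (Bosanquet-type) part + outer (Washburn-type) part - matching term
    inner = math.sqrt(max(2.0 * tau + 2.0 * eps * (math.exp(-tau / eps) - 1.0), 0.0))
    outer = 1.0 + lambert_w(-math.exp(-(1.0 + tau)))
    return inner + outer - math.sqrt(2.0 * tau)
```

For τ ≫ 1 the height approaches the equilibrium value Y = 1, while for τ ≪ ε it reproduces the linear inertial regime Y ≈ τ/ε^(1/2), so the t^(-1/2) singularity of the outer solution is removed.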
2.2 Numerical scheme
In this subsection we present some details of the numerical procedure used to complete the solutions of eqns (5)-(6) for values of α ≠ 0. We have included these numerical estimations for comparison with the zero-order asymptotic solution. The class of governing equation given by eqn (5) can be readily integrated by the classical fourth-order Runge-Kutta method. In our case, we define the variables

φ1 = Y  and  φ2 = dY/dτ,    (24)

and, with the aid of these relationships, the second-order nonlinear differential equation (5) is transformed into the system of two first-order equations

dφ1/dτ = φ2 ,
dφ2/dτ = [1 - φ1 - (α + φ1) φ2 - ε φ2²] / (ε φ1).    (25)

It is well known that this method requires initial conditions to begin the first iteration. In our case, from eqn (6), the part that corresponds to the initial conditions is Y(0) = 0 and dY(0)/dτ = λ, which can be
rewritten as φ1(0) = 0 and φ2(0) = λ. The use of these initial conditions yields a divergent behavior for both functions φ1 and φ2, because of the factor 1/φ1 in eqns (25). Therefore, the numerical procedure to integrate eqns (25) replaces the above initial conditions by the asymptotic relationships for short times derived in section 2.1.2. In terms of the functions φ1 and φ2, the initial conditions are the following:

φ1(0) → φ1(Δτ) = Δτ/ε^(1/2) - Δτ²/(6 ε^(3/2)) + O(Δτ³/ε^(5/2))    (26)

and

φ2(0) → φ2(Δτ) = 1/ε^(1/2) - Δτ/(3 ε^(3/2)) + O(Δτ²/ε^(5/2)).    (27)
We can appreciate that, for finite and small values of the time increment Δτ, the initial condition (26) always remains different from zero, while the other initial condition, given by eqn (27), depends only on the parameter ε, which assumes finite values in our numerical runs. In the present estimations we use Δτ = 0.00001 and different values of the parameter α. In this manner the divergence is eliminated numerically.
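The marching scheme above can be sketched as follows; the right-hand side implements the system (25) as reconstructed here, and the starting values implement the short-time asymptotics (26)-(27), which are strictly valid for small α. Parameter values are illustrative only.

```python
import math

def rhs(phi, alpha, eps):
    # System (25): phi = (Y, dY/dtau)
    p1, p2 = phi
    return (p2, (1.0 - p1 - (alpha + p1) * p2 - eps * p2 * p2) / (eps * p1))

def rk4_wicking(alpha, eps, dtau=1e-5, tau_end=0.01):
    # Start one increment ahead of tau = 0 with eqns (26)-(27); this avoids
    # the 1/phi1 divergence of the right-hand side at the origin
    tau = dtau
    phi = (dtau / math.sqrt(eps) - dtau ** 2 / (6.0 * eps ** 1.5),
           1.0 / math.sqrt(eps) - dtau / (3.0 * eps ** 1.5))
    while tau < tau_end:
        k1 = rhs(phi, alpha, eps)
        k2 = rhs((phi[0] + 0.5 * dtau * k1[0], phi[1] + 0.5 * dtau * k1[1]), alpha, eps)
        k3 = rhs((phi[0] + 0.5 * dtau * k2[0], phi[1] + 0.5 * dtau * k2[1]), alpha, eps)
        k4 = rhs((phi[0] + dtau * k3[0], phi[1] + dtau * k3[1]), alpha, eps)
        phi = (phi[0] + dtau * (k1[0] + 2.0 * k2[0] + 2.0 * k3[0] + k4[0]) / 6.0,
               phi[1] + dtau * (k1[1] + 2.0 * k2[1] + 2.0 * k3[1] + k4[1]) / 6.0)
        tau += dtau
    return phi[0]  # dimensionless height Y at tau_end
```

For α = 0 and τ ≪ 1 the result should track the inner solution y0 of eqn (16), which provides a convenient consistency check of the scheme.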
Figure 1:
Figure 2:
Acknowledgements
This work has been supported by research Grant No. 58817 of the Consejo Nacional de Ciencia y Tecnología and Grant No. 201000375 of the Instituto Politécnico Nacional, Mexico.
References
[1] De Gennes, P. G., Wetting: statics and dynamics. Rev. Mod. Phys., (57), pp. 827-863, 1985.
[2] Treviño, C., Ferro-Fontán, C., Méndez, F., Asymptotic analysis of axisymmetric drop spreading. Phys. Rev. E, (58), pp. 4478-4484, 1998.
[3] Treviño, C., Méndez, F., Ferro-Fontán, C., Influence of the aspect ratio of a drop in the spreading process over a horizontal surface. Phys. Rev. E, (58), pp. 4473-4477, 1998.
Section 3
Heat transfer and
thermal processes
Abstract
This paper analyses heat transfer across multilayer systems under unsteady boundary conditions. The results of analytical simulations and experimental tests were compared in order to validate the analytical formulation. The proposed formulation uses Green's functions to handle the conduction phenomena. The Green's functions are established by imposing the continuity of temperatures and heat fluxes at the interfaces of the various layers. The technique used to deal with the unsteady state conditions consists of first computing the solution in the frequency domain (after the application of time and spatial Fourier transforms along the two horizontal directions), and then applying (fast) inverse Fourier transforms back into space-time. The thermal properties of the multilayer system materials were defined experimentally beforehand.
For the experimental measurements the multilayer system was mounted on a guarded hot plate capable of imposing a controlled heat variation at the top and bottom boundaries of the system. Temperatures were recorded using a thermocouple set connected to a data logger system. Comparison of the results showed that the analytical solutions agree with the experimental ones.
Keywords: experimental validation, transient heat conduction, Green's functions formulation, frequency domain.
doi:10.2495/CMEM110221
1 Introduction
A dwelling's interior comfort is a fundamental issue in building physics, and it depends on the building's envelope. In order to better evaluate the thermal performance of the construction elements used throughout the building envelope, more accurate models must be developed. Thermal behaviour depends largely on unsteady state conditions, so the formulations for studying those systems should take the transient heat phenomena into consideration.
Most schemes devised to solve transient heat diffusion problems have either been formulated in the time domain (time-marching approach) (e.g. Chang et al. [1]) or else use Laplace transforms (e.g. Rizzo and Shippy [2]). An alternative approach is to apply a Fourier transform to the time variable of the diffusion equation, thereby establishing a frequency domain technique; time solutions are then obtained by applying inverse Fourier transforms back into space-time (e.g. Tadeu and Simões [3]).
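This frequency-domain workflow can be illustrated for the simplest case of a single homogeneous layer in one dimension; this is a simplified sketch rather than the authors' layered Green's-function code, and the function name and arguments are illustrative: transform the two recorded surface temperature histories, propagate each frequency component through the layer, and invert.

```python
import numpy as np

def interior_temperature(t_top, t_bot, dt, depth, thickness, diffusivity):
    """Temperature history at a given depth inside one homogeneous layer whose
    top and bottom surface temperature histories are prescribed (1D sketch).
    For each angular frequency w, (d2/dy2 - i*w/K) T = 0 gives
    T(y) = [T_top*sinh(lam*(h - y)) + T_bot*sinh(lam*y)] / sinh(lam*h),
    with lam = sqrt(i*w/K); the w = 0 bin is the steady linear profile."""
    n = len(t_top)
    w = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)      # angular frequencies
    top_hat = np.fft.fft(t_top)
    bot_hat = np.fft.fft(t_bot)
    lam = np.sqrt(1j * w / diffusivity)
    out = np.empty(n, dtype=complex)
    nz = w != 0.0
    out[nz] = (top_hat[nz] * np.sinh(lam[nz] * (thickness - depth))
               + bot_hat[nz] * np.sinh(lam[nz] * depth)) / np.sinh(lam[nz] * thickness)
    # zero-frequency (steady) component: linear interpolation across the layer
    out[~nz] = top_hat[~nz] * (1.0 - depth / thickness) + bot_hat[~nz] * (depth / thickness)
    return np.fft.ifft(out).real
```

The ω = 0 bin must be handled separately because the transformed equation degenerates there; its limit is the steady, linear temperature profile across the layer.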
In general, multilayer systems, built by overlapping different layers of materials, are used to ensure that several functional building requirements, such as hygrothermal and acoustic comfort, are met. One of the requirements is high thermal performance, which reduces energy consumption and promotes building sustainability. The importance of multilayer solutions has motivated some researchers to try to understand the heat transfer in those systems (e.g. Kaşka and Yumrutaş [4], Chen et al. [5] and Al-Sanea [6]).
This paper presents an experimental validation of a semi-analytical Green's functions solution that simulates heat conduction through multilayer systems subjected to heat generated by transient sources. The proposed semi-analytical solutions allow the heat field inside a layered medium to be computed without having to discretize the interior domain. The problem is formulated in the frequency domain using time Fourier transforms. The technique requires knowing the Green's functions for the case of a spatially sinusoidal, harmonic heat line source placed in an unbounded medium. The Green's functions for a layered formation are formulated as the sum of heat source terms, equal to those in the full-space, and the surface terms required to satisfy the boundary conditions at the interfaces, i.e. continuity of temperatures and normal fluxes between layers. The total heat field is found by adding the heat source terms, equal to those in the unbounded space, to the set of surface terms arising within each layer and at each interface (e.g. Tadeu and Simões [3]).
The experimental results were obtained for several systems built by
overlapping different materials. These test specimens were subjected to a
transient heat flow produced by cooling and heating units which established a
heat flow rate that could reach a pre-programmed mean test temperature in the
specimen. The temperature changes in the different specimen layers were
recorded by a thermocouple data logger system. The thermal properties of the different materials (thermal conductivity, mass density and specific heat) were obtained experimentally. The temperature variation at the top and bottom
surfaces of the multilayer system was used as an input for the semi-analytical
model designed using the thermal properties obtained experimentally. This
paper first formulates the three-dimensional problem and presents the Green's function in the frequency domain for a heat point source applied to a multilayer formation. A brief description of the mathematical manipulation follows, and the
formation. A brief description of the mathematical manipulation follows, and the
experimental setup is then described. Some final remarks are presented after the
experimental measurements have been compared with computational results.
2 Problem formulation
Consider a system built from a set of m plane layers of infinite extent bounded
by two flat, semi-infinite media, as shown in Figure 1. The top semi-infinite
medium is called medium 0, and the bottom semi-infinite medium is assumed to
be m+1. The thermal material properties and thickness of the various layers may
differ. This system is subjected to a point heat source somewhere in the domain.
Figure 1: Geometry of the problem: m plane layers of thickness h1, ..., hm (medium 1 to medium m, separated by interfaces 1 to m+1), bounded by the top semi-infinite medium 0 and the bottom semi-infinite medium m+1.
∇²T(t, x, y, z) = (ρj cj / kj) ∂T(t, x, y, z)/∂t,    (1)

where kj, ρj and cj are respectively the thermal conductivity, mass density and specific heat of medium j.
3 Semi-analytical solutions
The solution is defined in the frequency domain as the superposition of plane
heat sources. This is done after applying a Fourier transform in the time domain
This yields the Helmholtz-type equation

(∇² - iω/Kj) T(ω, x, y, z) = 0,    (2)

where i = √-1 and Kj = kj/(ρj cj) is the thermal diffusivity of medium j. For a harmonic point heat source applied at (x0, y0, 0), of the form δ(x - x0) δ(y - y0) δ(z) e^(iωt), where δ(x - x0), δ(y - y0) and δ(z) are Dirac-delta functions, the fundamental solution of eqn (2) can be expressed as

Tinc(ω, x, y, z) = e^(-i √(-iω/Kj) r) / (2 kj r),    (3)

with r = √((x - x0)² + (y - y0)² + z²).
(3)
i
i
Tinc ( , x, y , k z )
H0
k z 2 r0
Kj
4k j
where H 0
r0
(4)
x x0 2 y y0 2 .
Assuming the existence of an infinite set of virtual sources equally spaced at intervals Lz along z, the heat field can be written as the summation

Tinc(ω, x, y, z) = (-i/(4 kj)) (2π/Lz) Σ(m = -M to M) H0(√(-iω/Kj - kzm²) r0) e^(-i kzm z),    (5)

with kzm = (2π/Lz) m. The distance Lz chosen must be big enough to prevent spatial contamination from the virtual sources. Eqn (5) can be further manipulated and written as a continuous superposition of heat plane phenomena,
Tinc(ω, x, y, kz) = (-i/(4π kj)) ∫ (e^(-i νj |y - y0|) / νj) e^(-i kx (x - x0)) dkx,    (6)

where νj = √(-iω/Kj - kz² - kx²), with Im(νj) ≤ 0, and the integration is performed with respect to the horizontal wavenumber kx. Assuming an infinite number of virtual sources spaced at equal intervals Lx along x, this integral can be replaced by the discrete summation

Tinc(ω, x, y, kz) = E0j Σn (e^(-i νnj |y - y0|) / νnj) Ed,    (7)

where E0j = -i/(2 kj Lx), Ed = e^(-i kxn (x - x0)), νnj = √(-iω/Kj - kz² - kxn²) with Im(νnj) ≤ 0, and kxn = (2π/Lx) n; the summation can in turn be approximated by a finite number of terms (N). Note that kz = 0 corresponds to the two-dimensional case.
The total heat field is achieved by adding the heat source terms, equal to those in the unbounded space, to the sets of surface terms arising within each layer and at each interface, which are required to satisfy the boundary conditions at the interfaces, i.e. continuity of temperatures and normal fluxes between layers.
For layer j, the heat surface terms on the upper and lower interfaces can be expressed as

Tj1(ω, x, y, kz) = E0j Σn (Ej1/νnj) Anj^t Ed    (8)

Tj2(ω, x, y, kz) = E0j Σn (Ej2/νnj) Anj^b Ed    (9)

where Ej1 = e^(-i νnj (y - Σ(l=1 to j-1) hl)), Ej2 = e^(-i νnj (Σ(l=1 to j) hl - y)), and hl is the thickness of layer l. The heat surface terms produced at interfaces 1 and m+1, which govern the heat that propagates through the top and bottom semi-infinite media, are respectively expressed by

T02(ω, x, y, kz) = E00 Σn (E01/νn0) An0^b Ed    (10)
T(m+1)2(ω, x, y, kz) = E0(m+1) Σn (E(m+1)2/νn(m+1)) An(m+1)^t Ed    (11)
Imposing the continuity of temperatures and normal heat fluxes at the m+1 interfaces leads to a system of linear equations in the amplitudes An0^b, An1^t, An1^b, ..., Anm^t, Anm^b, An(m+1)^t, of the form F a = b; the entries of the matrix F involve the layer factors e^(-i νnj hj) and 1/(kj νnj), while the independent vector b contains the source terms e^(-i νn1 y0)/(k1 νn1) and e^(-i νn1 (h1 - y0))/(k1 νn1).    (12)
The resolution of the system gives the amplitude of the surface terms at each interface. The temperature field in each layer of the formation is found by adding these surface terms to the contribution of the incident field, which leads to the following equations:
top semi-infinite medium (medium 0):

T(ω, x, y, kz) = E00 Σn (E01/νn0) An0^b Ed,  if y ≤ 0    (13)

layer 1:

T(ω, x, y, kz) = (-i/(4 k1)) Σn H0(√(-iω/K1 - kz²) r0) + E01 Σn [(E11/νn1) An1^t + (E12/νn1) An1^b] Ed,  if 0 ≤ y ≤ h1    (14)

layer j (j > 1):

T(ω, x, y, kz) = E0j Σn [(Ej1/νnj) Anj^t + (Ej2/νnj) Anj^b] Ed,  if Σ(l=1 to j-1) hl ≤ y ≤ Σ(l=1 to j) hl    (15)
bottom semi-infinite medium (medium m+1):

T(ω, x, y, kz) = E0(m+1) Σn (E(m+1)2/νn(m+1)) An(m+1)^t Ed.    (16)
Note that when the position of the heat source is changed, the matrix F
remains the same, while the independent terms of b are different. As the
equations can be easily manipulated to consider another position for the source,
they are not included here.
4 Experimental validation
4.1 Specimen description
Table 1:  Thermal properties of the materials.

Material                     Conductivity   Mass density   Specific heat
                             (W.m-1.C-1)    (kg.m-3)       (J.kg-1.C-1)
Natural cork                 0.046          130.0          1638.0
…                            0.041          14.3           1430.0
Medium-density fiberboard    0.120          712            1550.0
Figure 2:  Thermocouple positions (thermocouples 1-5 across layers 1-4, with t0t and t0b at the external top and bottom surfaces).
the heat transfer through the multilayer systems where the temperature variations
are prescribed for the top and bottom surfaces. The materials thermal properties
(see Table 1) were used in these simulations.
5.1 Semi-analytical model
Equations (8) and (9) are manipulated by removing the media 0 and m+1 and by
imposing temperatures t0t and t0b on the external top and bottom surfaces
(interfaces 1 and m+1). Temperatures t0t and t0b are obtained by applying a
Fourier transformation in the time domain to the temperatures recorded at the
external multilayer system surfaces during the guarded hot plate test.
The total heat field is achieved by adding together the sets of surface terms
arising within each layer at each interface and by imposing continuity of
temperatures and normal fluxes at the internal interfaces.
The following system of 2m equations is obtained:
F a = b    (17)

where F is now a 2m × 2m matrix with the same structure as in eq. (12), with entries built from the factors 1/(k_l ν_{nl}) and the exponentials e^{−i ν_{nl} h_l}; a = (A^t_{n1}, A^b_{n1}, …, A^t_{nm}, A^b_{nm})^T collects the unknown surface-term amplitudes; and the independent vector b contains the prescribed surface temperatures t_{0t} and t_{0b} in its first and last positions.
Given that the temperatures t_{0t} and t_{0b} are uniform along the interfaces, this
system is solved by imposing k_xn = 0 and k_z = 0. The resolution of this system
gives the amplitude of the surface terms at each interface, leading to the
following temperature fields at layer j :
T(ω, x, y, k_z) = E_{0j} [ (E_{j1}/ν_{0j}) A^t_{0j} + (E_{j2}/ν_{0j}) A^b_{0j} ] E_d,  if Σ_{l=1}^{j−1} h_l ≤ y ≤ Σ_{l=1}^{j} h_l    (18)
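Once F and b are assembled, the amplitudes follow from a single linear solve. A minimal sketch of the idea, using a steady-state 1-D analogue with prescribed surface temperatures (an illustration only, not the authors' frequency-domain system), enforces flux continuity at each internal interface:

```python
import numpy as np

def interface_temps(k, h, t_top, t_bot):
    """Steady 1-D analogue of the layered system: solve for the m-1
    internal interface temperatures of m layers with conductivities k[i]
    and thicknesses h[i], given prescribed surface temperatures t_top,
    t_bot. Each row enforces continuity of the normal flux q = -k dT/dy."""
    m = len(k)
    F = np.zeros((m - 1, m - 1))
    b = np.zeros(m - 1)
    for i in range(m - 1):
        a_l, a_r = k[i] / h[i], k[i + 1] / h[i + 1]   # layer conductances
        F[i, i] = a_l + a_r
        if i > 0:
            F[i, i - 1] = -a_l
        else:
            b[i] += a_l * t_top        # top surface temperature prescribed
        if i < m - 2:
            F[i, i + 1] = -a_r
        else:
            b[i] += a_r * t_bot        # bottom surface temperature prescribed
    return np.linalg.solve(F, b)

# two layers of equal conductance -> interface temperature is the mean
print(interface_temps([1.0, 1.0], [0.1, 0.1], 40.0, 20.0))  # -> [30.]
```

The same pattern (assemble the continuity conditions, then one `solve`) carries over to the complex-valued systems (12) and (17).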
5.2 Results
Temperatures t0t and t0b were first defined by applying a direct discrete fast
Fourier transform in the time domain to the temperatures recorded by the
thermocouples on the external surfaces of the system and subtracting the initial
temperature. Analysis of the experimental responses led to an analysis period of
16h being established. This was enough to find the energy equilibrium of the
multilayer system. A frequency increment of 1.0/(32 × 3600) Hz was used, with 2048 frequency values.
The temperature variation imposed on the top and bottom multilayer surfaces
may be of any type. To obtain the temperature in the time domain, a discrete
inverse fast Fourier transform was applied in the frequency domain. The aliasing
phenomena were dealt with by introducing complex frequencies with a small
imaginary part, taking the form ω_c = ω − iη (where η = 0.7 Δω, and Δω is the
frequency increment). This shift was subsequently taken into account in the time
domain by means of an exponential window, e^{ηt}, applied to the response.
The final temperatures were obtained by adding the initial test temperatures to
these responses.
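The complex-frequency device described above can be sketched as follows; a toy first-order frequency response stands in for the multilayer transfer functions, while N = 2048 and η = 0.7 Δω follow the text:

```python
import numpy as np

N = 2048                      # number of frequency samples (as in the text)
T = 32 * 3600.0               # total analysis window [s]
dt = T / N
dw = 2 * np.pi / T            # angular frequency increment
eta = 0.7 * dw                # small imaginary part, eta = 0.7*dw

w = np.fft.fftfreq(N, d=dt) * 2 * np.pi
wc = w - 1j * eta             # shifted (complex) frequencies

# toy frequency-domain response: first-order system H = 1/(i*w + a);
# the shift also keeps H finite at w = 0
a = 1e-4
H = 1.0 / (1j * wc + a)

t = np.arange(N) * dt
resp = np.fft.ifft(H).real / dt   # damped time response, h(t)*exp(-eta*t)
resp *= np.exp(eta * t)           # undo the damping with the window e^(eta*t)

# analytic impulse response of the toy system is exp(-a*t)
print(abs(resp[256] - np.exp(-a * t[256])))   # small residual
```

The damping pushes wrap-around (aliased) contributions down by a factor e^(−ηT) before the window restores the true amplitude.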
Figure 3:  Measured (thermocouples 1-5) and simulated temperatures versus time, panels a), b) and c); amplitude (°C) against time (hours).
6 Conclusions
Three-dimensional semi-analytical solutions for transient heat conduction in a
multilayer system in the frequency domain have been validated experimentally.
The results showed good agreement between the experimental measurements and
the computed solutions; it can therefore be concluded that the proposed semi-analytical
model formulated in the frequency domain is reliable for studying transient heat
conduction in multilayer systems.
Acknowledgements
The research work presented herein was supported by the Portuguese Foundation
for Science and Technology (FCT), under research project PTDC/ECM
/114189/2009 and doctoral grant SFRH/BD/48138/2008, and by the
Coordination of Improvement of Higher Education Personnel (CAPES), a
Brazilian government agency.
References
[1] Chang, Y.P., Kang, C.S., Chen, D.J., The use of fundamental Green's
functions for solution of problems of heat conduction in anisotropic media.
International Journal of Heat and Mass Transfer 16, pp. 1905-1918, 1973.
[2] Rizzo, F.J., Shippy, D.J., A method of solution for certain problems of
transient heat conduction. AIAA Journal 8, pp. 2004-2009, 1970.
Abstract
Liquid crystal thermography (LCT) has been employed by researchers in heat
transfer and fluid flow communities as a reliable and non-intrusive temperature
measurement technique. This technique correlates the colour response to
temperature for a heated surface which has been treated with thermochromic
liquid crystals (TLCs). The LCT has been used extensively in convective heat
transfer research in duct flows. In this paper, some experiences with LCT in
thermal measurements for rectangular duct flows are provided. A few TLC
examples associated with continuous ribs, for two different values of the rib pitch-to-height ratio (4 and 8) at Re = 8900 and 28500, are illustrated. Important issues
such as heating principles, calibration of TLCs, image acquiring and analysis,
problems of treating the surface by TLCs, and expected measurement accuracy
are summarized. The main purpose is to assist newcomers in the field and
provide some guidelines for proper use of the LCT in heat transfer research.
Keywords: LCT, convective heat transfer, measurement accuracy.
1 Introduction
Liquid crystal thermography (LCT) has emerged as a reliable technique
for the visualization and determination of surface temperature
distributions, leading to convective heat transfer coefficients [1]. The advantages
of LCT are, e.g., its flexibility to be used from micron sized electronic circuits to
doi:10.2495/CMEM110231
light which primarily degrade the colour response. Several approaches have been
developed to make TLCs more practical to use [12]. Micro-encapsulation is the
most popular method of protection and the liquid crystal is enclosed in polymer
spheres which can be as small as a few microns. Commercial micro-encapsulated
TLCs may be purchased in water-based slurry and are generally applied to flat or
curved surfaces using an air-brush spray. The thickness of coating must be
controlled carefully and further reading can be found in ref. [13, 14]. The most
convenient manner to measure the thermal field on a surface is to coat it with a
pre-packaged assembly consisting of a liquid crystal layer painted on a plastic
sheet having a background colour of black ink [12]. Pre-packaged
TLC assemblies are limited in application because they are not typically available
in a sprayable medium and are restricted to relatively flat surfaces. Edge effects due to
chemical contamination may destroy the pre-packaged surface when a cutout
portion of a manufactured sheet is used. To be used in advanced implementations
such as research on heat transfer coefficients on duct walls, one needs to heat the
surface of interest, apply calibration for the color-temperature response, acquire
images and analyze the images by suitable software for image processing. Hue-based image processing [15] is the most common technique used in applications
of LCT to interpret the TLC signal. The color of TLCs observed by an RGB-data
(red-green-blue) acquisition system is transformed to hue (H), saturation (S) and
intensity (I) color space. HSI color space provides approximately the human
description of color [1]. Hue is an optical attribute related to actual colour of the
point of observation [12]. Hue is the only one of these parameters that is retained,
due to its monotonic relationship with surface temperature [16]. The hue value of
each pixel indicates the temperature of corresponding location on the surface.
This property of hue and its independence of the light intensity levels make it
suitable for temperature measurements [17].
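The RGB-to-hue transformation can be sketched with the Python standard library (colorsys implements HSV rather than HSI, but the hue coordinate is the same attribute):

```python
import colorsys

def hue_of_pixel(r, g, b):
    """Convert an 8-bit RGB pixel to hue in [0, 1).
    Hue is the attribute retained for thermography because it varies
    monotonically with TLC temperature over the active range."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h

print(hue_of_pixel(255, 0, 0))    # pure red   -> 0.0
print(hue_of_pixel(0, 255, 0))    # pure green -> 1/3
print(hue_of_pixel(0, 0, 255))    # pure blue  -> 2/3
```

Applying this per pixel turns a calibrated RGB image directly into a temperature map.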
In experimental heat transfer, one demanding issue related to liquid crystals is
the colour-temperature relationship, and any quantitative application of TLCs
requires an accurate calibration [13]. It is essential to keep the lighting and
viewing conditions of the thermal experiments identical to the calibration
experiment as a colour pattern can be significantly affected by lighting and
viewing angles [12]. There are several papers related to TLC calibration and the
main factors being pertinent to calibration. Some of these are summarized here.
The illumination source has a significant effect on the shape of the hue-temperature calibration curve and this may lead to higher or lower temperature
uncertainties over the applicable range. If the hue-temperature curve were a straight
line, the sensitivity would be the same at every hue; however, hue-temperature curves
have regions of higher and lower sensitivity [18]. The background lighting is
another important factor which may influence the shape of the hue-temperature
calibration curve [18]. The hysteresis is characterized by a decrease in
reflectivity and a shift in temperature associated with the peak reflected intensity
for each of the R, G and B components during cooling. This causes a shift in the
hue-temperature relation of the TLC and results in temperature biases when
cooling occurs instead of heating [19]. The hysteresis is not a limiting
measurement error factor if the temperature of a measured surface remains
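A hue-to-temperature calibration fit of the kind discussed above might look as follows; the (hue, temperature) pairs are illustrative stand-ins for a real calibration experiment, and the cubic degree is an assumption:

```python
import numpy as np

# Hypothetical calibration pairs (hue, temperature in C) -- illustrative
# values only, not data from any real TLC.
hue  = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.60])
temp = np.array([30.2, 30.8, 31.5, 32.0, 32.6, 33.9])

# Fit hue -> temperature; a low-order polynomial is a pragmatic choice.
coeffs = np.polyfit(hue, temp, deg=3)
cal = np.poly1d(coeffs)

# Local sensitivity dT/dhue shows where the curve is most/least reliable:
sens = cal.deriv()
for h in (0.1, 0.3, 0.5):
    print(f"hue={h:.1f}: T={cal(h):.2f} C, dT/dhue={sens(h):.2f} C/hue-unit")
```

Regions where |dT/dhue| is large amplify hue noise into temperature error, which is why the near-linear portion of the curve is preferred for measurements.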
Figure 1:
Figure 2:  panels (a) and (b).
4 Thermal measurement
4.1 Thermal test section apparatus
The experimental apparatus is not shown here for the sake of brevity and the
reader is referred to ref. [16]. The test section is a rectangular duct as wide and
high as the entry and exit sections, delimited by a thin heated plate (0.1 m
wide and 0.28 m long) and by lateral and frontal Plexiglas walls. The heated side of the
test section consists of 0.5 mm-thick stainless steel foil to which a plane heating
foil is glued. A thin TLC sheet is applied on the stainless steel foil, on the side
exposed to the air flow, to measure local surface temperature. The rear side of
the heated plate is thermally insulated to convey as much as possible of the
electric power dissipated by the heater to the convective air flow. Electric power
is supplied by a DC source, and the voltage and current of the heater are measured. Fine-gauge
thermocouples were placed inside the rectangular duct, directly exposed to the
airflow, and at several sites inside the duct wall material. These sensors are used
to measure the air temperature at the test section inlet, to estimate conduction
heat losses to the surroundings and to control the attainment of the steady state
h = (Q_el − Q_loss) / [ A (T_TLC − T_b) ]    (1)
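Eq. (1) applied per pixel might look as follows; the numerical values are illustrative only:

```python
import numpy as np

def htc_map(q_el, q_loss, area, t_tlc, t_bulk):
    """Local convective heat transfer coefficient from eq. (1):
    h = (Q_el - Q_loss) / (A * (T_TLC - T_b)), evaluated per pixel.
    t_tlc is the TLC-derived wall temperature map [C]; t_bulk is the
    air bulk temperature [C]."""
    q_net = (q_el - q_loss) / area          # net convective flux [W/m^2]
    return q_net / (np.asarray(t_tlc) - t_bulk)

# illustrative numbers: 56 W supplied over a 0.1 m x 0.28 m plate, 6 W lost
h = htc_map(56.0, 6.0, 0.1 * 0.28, [45.0, 40.0], 25.0)
print(h)   # roughly [89.3, 119.0] W/(m^2 K)
```

In practice `t_tlc` would be the full hue-derived temperature image, so `h` becomes a map with the same resolution as the camera.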
config.   L (mm)   W (mm)   H (mm)   l (mm)   p (mm)   e (mm)   p/e
A         280      100      20       …        20       …        4
B         280      100      20       …        40       …        8
Figure 3:
Figure 4:  H [W/m2K], Configuration A.
Figure 5:  H [W/m2K], Configuration B.
B (p/e = 8), the heat transfer coefficient reaches a maximum value at the point of
flow reattachment. Downstream of the reattachment point, the heat transfer
coefficient slightly decreases up to the vicinity of the successive rib. The
periodic fully developed conditions are attained approximately after the third
rib [16].
5 Estimation of uncertainties
Several factors (image noise, lighting angle, TLC coating thickness and coating
quality) may influence the hue-temperature relationship and the expected
measurement accuracy. In addition, the uniformity of the heating procedure and
various sources of losses to the surroundings may influence the accuracy of the
measurement. The image noise can be reduced by using filtering techniques to
erase hue spots not related to temperature and that leads to improvement in
accuracy of the measurements. The lighting angle is known to have a significant
effect on TLCs hue-temperature curve and thus on the accuracy of the
measurements. Therefore, it is important that the lighting set up in a real
measurement is the same as at the calibration stage. The coating thickness has a
significant effect on a hue-temperature curve, but it has been found to have a non-distinctive effect on the measurement uncertainty. The coating quality affects
both the hue-temperature curve and the accuracy of the measurement. Further
reading is available in ref. [6]. In applications where the overall purpose is to
determine convective heat transfer coefficients, other uncertainties impact on the
6 Concluding remarks
In this paper, some important issues related to the use of TLCs for heat transfer
measurements in duct flows (treatment of the surface, TLC calibration, image
acquiring and analysis, etc.) are summarized. Regardless of calibration method,
each TLC has its own calibration curve. The linear portion of the calibration
curve is recommended due to its higher sensitivity, which makes it more
reliable for thermal measurements. Concerning the thermal boundary condition
typically adopted in the steady-state LCT heat measurement method, care
should be taken in the use of an electric heating foil to attain a uniform heat
flux; often the insertion of a thin metallic plate between the heater and the TLC
sheet can improve the uniformity of the heat flux at the wall. Another
concern associated with the heating issues is the thermal contact resistance and
resulting temperature drop when different conducting surfaces, e.g., heating foil,
metallic foil, TLC are not in perfect thermal contact.
There are several factors which impact the accuracy of thermal measurements
associated with the LCT technique and some of those were outlined in this paper.
The main purpose was to highlight some important issues collected from several
researchers to assist newcomers and provide some guidelines for proper use of
LCT in heat transfer research.
Acknowledgements
Part of this work is financially supported by the Swedish Energy Agency, Volvo
Aero Corporation and Siemens Industrial Turbines through the national Swedish
research program TURBO POWER, project TURB3. The work is also supported
by a separate project by the Swedish Energy Agency.
Nomenclature
A
e
h
H
HSI
L, W
l
p
Qel
Qloss
Re
References
[1] Chan, T.L., Ashforth-Frost, S. & Jambunathan, K. , Calibrating for viewing
angle effect during heat transfer measurements on a curved surface, Int. J.
Heat and Mass Transfer , 44, pp. 2209-2223, 2001.
[2] Sunden, B., On heat transfer and fluid flow in ribbed ducts using liquid
crystal thermography and PIV measurements, EXHTF-7(CD-ROM
Proceedings), 139-152, 2009.
[3] Tanda, G., Heat transfer in rectangular channels with transverse and V-shaped broken ribs, Int. J. Heat Mass Transfer, 47, pp. 229-243, 2004.
[4] Wang, L. & Sunden, B., Experimental investigation of local heat transfer in
a square duct with continuous and truncated ribs, Experimental Heat
Transfer, 18, 179-197, 2005.
[5] Abdullah, N., Talib, A.R.A., Jaafar, A.A., Salleh, M.A.M. & Chong, W.T,
The basics and issues of thermochromatic liquid crystal calibrations, Exp.
Thermal Fluid Science, 34, 1089-1121, 2010.
[6] Rao, Y. & Zang, S., Calibrations and measurement uncertainty of wide-band liquid crystal thermography, Meas. Sci. Technol., 21, paper no.
015105, 2010.
[7] Ireland, P.T. & Jones, T.V, Liquid crystal measurements of heat transfer
and surface shear stress, Meas. Sci. Technol., 11, 969-986, 2000.
[8] Critoph, R.E., Holland, M.K. & Fisher, M., Comparison of steady state and
transient methods for measurement of local heat transfer in plate fin-tube
heat exchangers using liquid crystal thermography with radiant heating, Int.
J. Heat Mass Transfer, 42, 1-12, 1999.
[9] Behle, M., Schulz, K., Leiner, W. & Fiebig, M., Colour-based image
processing to measure local temperature distributions by wide-band liquid
crystal thermography, Applied Scientific Research, 56, pp. 113-143, 1996.
[10] Grodzka, P.G. & Facemire, B.R. Tracking transient temperatures with
liquid crystals, Letters Heat Mass Transfer, 2, pp. 169-178, 1975.
[11] Smith, C.R., Santino, D.R. & Praisner, T., Temperature sensing with
thermochromic liquid crystals, Exp. Fluids, 30, 190-201, 2001.
[12] Tanda, G., Application of optical methods to the study of convective heat
transfers in rib-roughened channels, PhD thesis, The City University
London, 1996.
[13] Kakade, V.U, Lock, G.D, Wilson, M., Owen, J.M. & Mayhew, J.E,
Accurate heat transfer measurements using thermochromic liquid crystal.
Part 1: Calibration and characteristic of crystals. Int. J. Heat Fluid Flow,
30, No. 5, pp. 939-949, 2009.
[14] Baughn, J. W., Liquid crystal methods for studying turbulent heat transfer,
Int. J. Heat and Fluid Flow, 16, 365-375, 1995.
Abstract
The paper presents an integral computational model for the prediction of the
thermal performance of a conceptual two-floor, zero energy house (ZEH) in the
Arabian desert. The ZEH is powered by PV modules which shade the roof
during the daytime and retract at night to expose it to the sky, thus enhancing
night-time cooling. The house boasts all modern comforts, including air-conditioning.
Solar radiation models coupled with recently published ASHRAE environmental
data and models are integrated with a time dependent heat conduction model to
predict the heating and cooling loads, for given equipment and storage
characteristics. The application of the computational model is demonstrated by
employing it to predict the effect of various design parameters on performance
and equipment sizing, for a typical desert site in the Kingdom of Saudi Arabia.
Keywords: zero-energy-house, solar energy, desert environment, sustainability,
modelling.
1 Introduction
The paper is concerned with the presentation and application of an efficient
computational model for the investigation of the thermal performance of the
integrated energy systems in a modern two floor, zero energy house (ZEH)
located in the Arabian desert. Solar energy drives the whole energy system after
being converted to electrical energy employing roof mounted PV modules.
Several investigations have been reported in the past for single-floor, roof-mounted
ZEH designs, e.g. Serag-Eldin [1] and Beshr et al. [2], since it is much
less challenging for a single-floor house to meet the zero external energy requirement
than for a two-floor house. A two-floor ZEH design was considered by Serag-Eldin [3]; however,
synthetic data were used for solar radiation and environmental properties and only
doi:10.2495/CMEM110241
3 Mathematical models
The basic models employed comprise a solar radiation model, an environmental
model, a heat conduction model, and an air-conditioning COP model; they are
each presented here in turn.
3.1 The Solar radiation model
The solar beam angles are determined by first deriving the declination angle δ
(degrees) for any day of the year from:

δ = 23.45 sin( 2π (n + 284) / 365.25 )    (1)
where n is the number of the day of the year measured from January 1st.
Next the sun-beam incidence (zenith) angle θ_s on a horizontal plane is
calculated from:

cos θ_s = cos φ · cos δ · cos ω + sin φ · sin δ    (2)
where φ is the site latitude and ω is the solar hour angle. The sunset hour angle ω_ss is given by

cos ω_ss = − tan φ · tan δ    (3)
The azimuth angle from due South, φ_s, is determined uniquely from the two
equations:

sin φ_s = cos δ · sin ω / sin θ_s    (4.a)

and

cos φ_s = ( cos δ · cos ω · sin φ − sin δ · cos φ ) / sin θ_s    (4.b)
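Eqs (1)-(4) can be evaluated directly; a minimal sketch (the example date and latitude are illustrative):

```python
import math

def solar_angles(n, lat_deg, omega_deg):
    """Eqs (1)-(4): declination, zenith angle and azimuth from due South
    for day-of-year n, latitude lat_deg and solar hour angle omega_deg."""
    delta = math.radians(23.45 * math.sin(2 * math.pi * (n + 284) / 365.25))
    phi = math.radians(lat_deg)
    omega = math.radians(omega_deg)
    cos_ts = math.cos(phi) * math.cos(delta) * math.cos(omega) \
           + math.sin(phi) * math.sin(delta)
    theta_s = math.acos(cos_ts)
    # azimuth from the sine/cosine pair (eqs 4.a, 4.b) via atan2
    sin_az = math.cos(delta) * math.sin(omega) / math.sin(theta_s)
    cos_az = (math.cos(delta) * math.cos(omega) * math.sin(phi)
              - math.sin(delta) * math.cos(phi)) / math.sin(theta_s)
    return (math.degrees(delta), math.degrees(theta_s),
            math.degrees(math.atan2(sin_az, cos_az)))

# June 21st (n = 172) at latitude 24.7 N, solar noon (omega = 0):
print(solar_angles(172, 24.7, 0.0))
```

Using both eq. (4.a) and eq. (4.b) through `atan2` is what makes the azimuth unique over the full circle.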
For a vertical wall, the incidence angle follows from the zenith and azimuth
angles. The direct (beam) and diffuse irradiances are given by

I_dir = I_0 e^( −τ_b m^a_b )    (5)

and

I_dif = I_0 e^( −τ_d m^a_d )    (6)
where τ_b and τ_d are site-specific values which vary over the year and are
obtained from long-period meteorological measurements; values of τ_b and τ_d
are reported by ASHRAE [8] for more than 4000 sites world-wide, on the 21st
day of each month. The air-mass exponents a_b and a_d are correlated to τ_b and τ_d
through the following equations:

a_b = 1.219 − 0.043 τ_b − 0.151 τ_d − 0.204 τ_b τ_d    (7)

and

a_d = 0.202 + 0.852 τ_b − 0.007 τ_d − 0.357 τ_b τ_d    (8)
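A sketch of the clear-sky evaluation; the τ_b and τ_d values below are illustrative, and the a_d correlation is the standard ASHRAE form, assumed here:

```python
import math

def clear_sky(i0, tau_b, tau_d, m):
    """ASHRAE clear-sky model: beam and diffuse irradiance for
    extraterrestrial irradiance i0 [W/m^2], pseudo-optical depths
    tau_b, tau_d and relative air mass m."""
    ab = 1.219 - 0.043 * tau_b - 0.151 * tau_d - 0.204 * tau_b * tau_d
    ad = 0.202 + 0.852 * tau_b - 0.007 * tau_d - 0.357 * tau_b * tau_d
    i_dir = i0 * math.exp(-tau_b * m ** ab)
    i_dif = i0 * math.exp(-tau_d * m ** ad)
    return i_dir, i_dif

# illustrative values: tau_b = 0.556, tau_d = 1.779 (typical of a dry site)
i_dir, i_dif = clear_sky(1367.0, 0.556, 1.779, 1.5)
print(round(i_dir), round(i_dif))
```

For a design day, τ_b and τ_d would be read from the ASHRAE tables for the site and month, and m computed from the zenith angle.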
The total solar energy, I_glo,vert, received at a vertical wall is expressed by:
∂(ρ c T)/∂t = ∂/∂x ( k ∂T/∂x )    (12)
i-   the monthly design dry-bulb temperature Tdb, the coincident wet-bulb
temperature Twb, and their mean daily temperature ranges, Tdb-r and Twb-r,
respectively, are read from the ASHRAE table for the specified location.
ii-  since the cooling load is considered, Tdb represents the peak daily temperature
Tdb,max to be employed in the cooling-load calculations; likewise, Twb
represents Twb,max.
iii- the hourly values of Tdb and Twb are calculated from

Tdb = Tdb,max − f · Tdb-r    (13)

and

Twb = Twb,max − f · Twb-r    (14)

where f is the hourly fraction of the daily range tabulated by ASHRAE.
iv-
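Step iii can be sketched as follows; ASHRAE tabulates the hourly fraction f, so the cosine profile peaking at an assumed 15:00 is only a stand-in for that table:

```python
import math

def hourly_dry_bulb(t_max, t_range, hour, t_peak=15.0):
    """Hourly dry-bulb temperature from the ASHRAE design values:
    T(h) = T_max - f(h) * T_range, with f the hourly fraction of the
    daily range. A cosine profile peaking at t_peak replaces the
    tabulated fractions here (an assumption for illustration)."""
    f = 0.5 * (1.0 - math.cos(2 * math.pi * (hour - t_peak) / 24.0))
    return t_max - f * t_range

for h in (3, 9, 15, 21):
    print(h, round(hourly_dry_bulb(44.0, 12.0, h), 1))  # 15:00 gives the 44.0 peak
```

The same profile, applied to Twb,max and Twb-r, yields the hourly wet-bulb temperatures.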
Table 1:  Constants C1-C13 of the ASHRAE saturation-pressure correlation.

C1 = −5674.5        C2 = 6.3925         C3 = −9.6778×10^-3   C4 = 6.2216×10^-7
C5 = 2.0748×10^-9   C6 = −9.4840×10^-13  C7 = 4.1635
C8 = −5800.2        C9 = 1.3915         C10 = −4.8640×10^-2  C11 = 4.1765×10^-5
C12 = −1.4452×10^-8  C13 = 6.5459
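With the over-water constants C8-C13 (full-precision ASHRAE values corresponding to the truncated entries of Table 1), the saturation pressure follows as:

```python
import math

# ln(p_ws) = C8/T + C9 + C10*T + C11*T^2 + C12*T^3 + C13*ln(T),  T in K
C8, C9, C10 = -5800.2206, 1.3914993, -4.8640239e-2
C11, C12, C13 = 4.1764768e-5, -1.4452093e-8, 6.5459673

def p_ws(T):
    """Saturation pressure [Pa] of water vapour over liquid water, T in kelvin."""
    return math.exp(C8 / T + C9 + C10 * T + C11 * T ** 2
                    + C12 * T ** 3 + C13 * math.log(T))

print(round(p_ws(293.15)))   # about 2.3 kPa at 20 C
print(round(p_ws(373.15)))   # about 101 kPa at 100 C
```

This is the quantity needed to convert the design wet-bulb data into the dew-point temperature T_dp used below.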
q_r = ε_r σ ( T_r^4 − ε_a T_a^4 )    (19)

where ε_r is the long-wave emissivity of the roof surface or PV/cover material, T_r
is the temperature of the roof surface or PV/cover material and T_a is the ambient
temperature, whereas ε_a is the apparent emissivity of the atmosphere, according
to Bliss [11] and ASHRAE [12]; it is defined as the ratio of the atmospheric
radiation on a horizontal surface per unit area to σ T_a^4, and is only a function of
T_dp near the ground. The latter is derived as shown in the previous section. Values of
ε_a at given T_dp increments were deduced from Bliss's graphical display, then
cubic splines were used to interpolate intermediate values.
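The interpolation of ε_a and eq. (19) can be sketched together; the (T_dp, ε_a) pairs below are illustrative placeholders (not the values digitized from Bliss), and plain linear interpolation stands in for the cubic splines:

```python
import numpy as np

SIGMA = 5.670e-8   # Stefan-Boltzmann constant [W/(m^2 K^4)]

# Illustrative (T_dp [C], eps_a) pairs in the spirit of Bliss's chart --
# placeholder values only.
TDP = np.array([-10.0, 0.0, 10.0, 20.0, 30.0])
EPS_A = np.array([0.72, 0.75, 0.78, 0.81, 0.84])

def q_sky(t_roof, t_amb, t_dp, eps_r=0.9):
    """Net long-wave loss of eq. (19): q = eps_r*sigma*(Tr^4 - eps_a*Ta^4).
    eps_a is interpolated in T_dp; temperatures in C."""
    eps_a = np.interp(t_dp, TDP, EPS_A)
    tr, ta = t_roof + 273.15, t_amb + 273.15
    return eps_r * SIGMA * (tr ** 4 - eps_a * ta ** 4)

print(round(float(q_sky(35.0, 30.0, 15.0)), 1))   # W/m^2
```

Positive values represent a net loss to the sky, which is the mechanism exploited by retracting the PV modules at night.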
3.4 Air conditioning model
In a desert environment water is scarce; hence cooling of the air conditioning
system condenser must rely primarily on ambient air. The COP of the air-conditioning system suffers heavily with a rise of cooling-medium temperature;
indeed, some vapour compression systems will shut down at ambient air
temperatures above 45°C to protect themselves. However, peak summer
temperatures in the desert often reach 50°C.
Thus for exceptionally hot summer days, it is proposed here to use small
amounts of water for cooling of condenser coils by misting and evaporative
cooling; just enough to reduce the effective cooling air temperature to an upper
limit, Tul. This water need not be potable, and may be the accumulated output of
treatment of residential waste water, stored over a long period of time (extending
to cooler seasons if need be). The model employed here allows the calculation of
the daily cooling water usage for a pre-set Tul. Setting Tul above ambient
temperature naturally deactivates this feature.
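The misting model can be sketched as a simple energy balance (assumed air and water properties; not necessarily the authors' exact formulation): the evaporated water absorbs the sensible heat removed from the cooling air, m_w h_fg = m_air c_p (T_amb − T_ul).

```python
CP_AIR = 1006.0     # specific heat of air [J/(kg K)]
H_FG = 2.40e6       # latent heat of vaporization near 45 C [J/kg] (assumed)

def misting_water(m_air, t_amb, t_ul):
    """Water mass flow [kg/s] needed to cool m_air [kg/s] of condenser
    air from t_amb down to t_ul; returns 0 when t_ul is above ambient
    (the feature is deactivated)."""
    return max(0.0, m_air * CP_AIR * (t_amb - t_ul) / H_FG)

# 2 kg/s of condenser air, 50 C ambient, 43 C upper limit:
print(round(misting_water(2.0, 50.0, 43.0), 4))   # kg/s, roughly 0.006
print(misting_water(2.0, 40.0, 43.0))             # 0.0 (t_ul above ambient)
```

Integrating this rate over the hot hours of the design day gives the daily cooling-water usage for a pre-set T_ul.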
Whereas different models and makes display different COP characteristics,
two characteristic profiles are considered here, one based on the average of
currently operating European residential units, ECODESIGN [13], and the other
a state of the art unit displaying an electronically controlled expansion valve,
EEV [14]. The two characteristics are displayed simultaneously in Fig. 1.
Figure 1:
In order to reduce cooling load, the fresh air is cooled prior to entering
indoors by exhaust air leaving at room temperature, in a counter flow heat
exchanger. Thus the sensible heat load is reduced, the gain increasing with
increase of outdoor temperature. Since the heat capacity of the return air is at
least as large as that of the fresh air (actually larger, because of infiltration),
theoretically it is possible in a very long double pipe heat exchanger to cool the
fresh air down to room temperature and eliminate this component of the sensible
heat load altogether. However, for practical reasons a relatively short heat
exchanger is employed and it is assumed that the effectiveness of the heat
exchanger is ε_HX = 0.7, i.e. 70% of the cooling of the fresh air occurs in the heat exchanger.
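The effectiveness relation implied above is simply:

```python
def precooled_fresh_air(t_out, t_room, eff=0.7):
    """Counter-flow recovery: the fresh-air stream is pre-cooled by
    exhaust air at room temperature with effectiveness eff, so
    T_supply = T_out - eff * (T_out - T_room)."""
    return t_out - eff * (t_out - t_room)

# 45 C outdoors, 25 C room, effectiveness 0.7:
print(precooled_fresh_air(45.0, 25.0))   # -> 31.0
```

With eff = 1 (an infinitely long exchanger) the fresh air would enter at room temperature, eliminating this component of the sensible load entirely, as the text notes.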
4 Results
Figures 2-3 present results of the basic case, which employs a 2% design factor,
the EEV COP characteristic, evaporative cooling on exceptionally hot days to
reduce the peak cooling-air temperature to 43°C, and room temperatures of 25°C
and 22°C for June and December, respectively, without pre-cooling of the fresh
air via a heat exchanger. They display the distribution of the cooling loads for
the envelope walls, windows and auxiliary loads (fresh air, persons, infiltration,
equipment, ..) as well as their sum, for June 21st and December 21st, respectively.
Table 2 displays the results of various parameters on cooling load electricity
consumption (kW.h/day), auxiliary appliances and lighting electrical loads
(kW.h/day), and the total daily output of the PV modules (kW.h/day). It also
displays the required equipment rating for each case, namely battery storage
capacity (kW.h), PV module capacity (kW) and the cooling equipment rating
(kW).The first three rows display results of the basic design case for three key
days representing spring/autumn, summer and winter. It is apparent that
equipment rating depends only on the requirements of June 21st; only 69.8% of
the roof area is required to be covered by PV modules for June 21st, which still
leaves ample roof area for other equipment.
Figure 2:
Figure 3:
Rows 4-6 display the above results for June 21st, for 3 different ASHRAE
design percentile values, namely 0.4%, 2% and 5%. The lower the percentile
value the higher the peak values of the ambient temperature; hence the larger the
heat gain by convection at the envelope surface, sensible heat of fresh-air and
infiltration, and long-wave atmospheric radiation. Thus the cooling load
increases from 592 kW.h/day for 5%, to 635 kW.h/day for 2%, to 691 kW.h/day
for 0.4%, corresponding to cooling equipment capacities of 36.5 kW, 38.3 kW
and 40.6 kW, respectively. Although, the basic case calculations employ 2%, as
this is the more common practice, using a 5% value is justified here since our
calculations assume that the design day conditions are continuously repeating
themselves every 24 hours; whereas in reality the previous day should be cooler
since the design criteria are for the hottest day of the month. Since our building
Table 2:  Effect of the design parameters on the daily loads, PV output and equipment ratings.

Row  Case description         Cooling load  Appliances +  Daily PV    Min battery  PV modules  Cooling equip.
                              (kW.h/day)    lighting      output      storage      rated cap.  capacity
                                            (kW.h/day)    (kW.h/day)  (kW.h)       (kW)        (kW)
1    base case, March 21st    372.7         35.6          115.6       53.6         26.5        28
2    base case, June 21st     668.7         33.87         197.2       83.8         36.4        39.6
3    base case, Dec. 21st     91.9          37.2          65.3        36.3         25.7        13.8
4    design percent, 0.4%     691.4         33.87         206.5       88.3         39.1        40.6
5    design percent, 2.0%     634.8         33.87         189.2       80.0         35.8        38.3
6    design percent, 5.0%     592.0         33.87         176.0       73.9         33.3        36.5
7    cooling-air limit 40°C   634.8         33.87         183.3       79.4         34.7        38.3
8    cooling-air limit 43°C   634.8         33.87         189.2       80.0         35.8        38.3
9    cooling-air limit 46°C   634.8         33.87         190.6       79.9         36.1        38.3
10   room temp. 23°C          700.8         33.87         204.9       87.7         38.8        41.1
11   room temp. 25°C          634.8         33.87         189.2       80.0         35.8        38.3
12   room temp. 27°C          568.7         33.87         173.5       72.2         32.9        35.5
13   COP Europe average       634.8         33.87         279         113.         52.9        38.3
14   COP of EEV A/C           634.8         33.87         189.2       80.0         35.8        38.3
15   EEV + fresh-air HE       579.8         33.87         175.5       74.8         33.2        34.5
Rows 10-12 reveal the variation of equipment capacities and cooling load
with specified internal room temperature for June 21st. As the room temperature
rises from 23°C to 25°C and then to 27°C, the cooling load decreases from 701
kW.h/day, to 635 kW.h/day and to 569 kW.h/day, respectively, with
corresponding drops in PV and cooling equipment capacities. This drop is
substantial. A room temperature of 27°C in a desert environment may be quite
acceptable since the relative humidity is exceptionally low; moreover, outdoor
temperatures are exceptionally high in summer and hence local summer clothing
is generally made of porous cotton and loose fitting; well adapted to high
temperatures with low humidity.
Rows 13 and 14 show the effect of COP on performance, the two COP
characteristics corresponding to the ones displayed in Fig. 1. The impact of COP
on capacity of equipment (PV modules, battery and air-condition) is remarkable.
For the European averaged COP, the required PV rated capacity is 53 kW,
whereas for the EEV COP the corresponding capacity is only 36 kW. Adding an
exhaust-air/fresh-air heat exchanger reduces the latter to 33 kW. Since PV
modules and battery storage are expensive items, it is expected to be well worth
it to purchase state of the art, highly efficient air-conditioning systems. The last
row presents results of using a fresh-air heat exchanger of effectiveness 0.7 on
capacities; it produces an attractive 10% saving in PV capacity alone.
Further savings may be introduced if clothes are dried outdoors rather than
using an indoor electric clothes drier. According to [7] the energy consumed in
clothes drying is at least 25% of the total 980 W estimated here, i.e. 245 W.
Indeed, a solar clothes drier [15] may even be employed to lower peak air
temperatures. If the load for the internal electric clothes drier is removed, the
basic case PV output will be reduced by about 6 kW.h/day, i.e. 3%.
Acknowledgements
This work was funded by the King Abdullah University of Science and Technology
(KAUST) project on Integrated Desert Building Tech, grant held by AUC.
References
[1] Serag-Eldin, M.A., Thermal design of a modern, air-conditioned, single-floor, solar-powered desert house, Int. J. of Sustainable
Energy, in press, 2011.
[2] Beshr, M., Khater, H. and Abdel Raouf, A., Modeling of a Residential
Solar Stand-Alone Power System, Proc. of 1st Int. Nuclear and Renewable
Energy Conf., Amman, Jordan, March 21-24, 2010.
[3] Serag-Eldin, M.A., Thermal Design of a Modern, Two Floor, Zero Energy
House in a Desert Compound, Proc. Thermal Issues in Emerging
Technologies, Theta-3, Cairo, Egypt, Dec 19-22, 2010.
[4] Serag-Eldin, M.A., Influence of site on thermal design of a two floor ZEH
in the desert, Theta-3 Conference, Cairo, December 19-21, 2010.
[5] Kreider, J. and Rabl, A. 1994, Heating and Cooling of Buildings: Design
for Efficiency, McGraw-Hill Inc., New York, pp.257.
[6] Serag-Eldin, M.A., Displacement Ventilation for Efficient Air-conditioning
and Ventilation of GYMs,
Proc. of Heat-SET-2007 Conference,
Chambery, France, 18-20 April, 2007, paper # P-128.
[7] NAHB Research Center, Inc., Zero Energy Home Armory Park Del Sol,
Tucson, Arizona, Final report submitted to NREL, 1617 Cole Boulevard,
Golden, CO 80401-3393, June 30th, 2004.
[8] ASHRAE Fundamentals Handbook, 2009, American Society of Heating,
Refrigerating and Air-Conditioning Engineers, Inc., USA.
[9] Hedrick, R. Generation of Hourly Design-Day Weather Data (RP-1363).
ASHRAE Research Project, Final Report, 2009.
[10] Thevenard, D. Updating the ASHRAE Climatic Data for Design and
Standards, (RP -1453). ASHRAE Research Project Report, 2009.
[11] Bliss, R.W., Atmospheric Radiation Near the Surface of the Ground: A
Summary for Engineers, Solar Energy 59(3), pp.103-120.
[12] ASHRAE HVAC Applications Handbook, 2007, American Society of
Heating, Refrigerating and Air-Conditioning Engineers, Inc., USA.
[13] ECODESIGN-Preparatory Study on the Environmental Performance of
Residential Room Conditioning Appliances, Draft report Task 4, July 2008,
Contract TREND/D1/40-2005/LOT10/S07.56606.
[14] Chinnaraj, C. and Govindarajan, P., Performance Analysis of Electronic
Expansion Valve in 1 TR Window Air Conditioner using Various
Refrigerants, I. J. of Eng. Sc. and Tech., Vol. 2(9), 2010, 4020-4025.
[15] Suntivarakorn, P., Satmaromg, S., Benjapiyaporn, C. and Theerakulpisut,
An experimental Study on Clothes Drying Using Waste Heat from Split
Type Air Conditioner, World Academy of Sc., Eng. and Tech., 53, 2009,
pp.168-173.
Abstract
This article investigates the film cooling effectiveness and heat transfer in three
regimes for a film-cooled gas turbine blade, with injection at the leading edge of
the blade at a 45° angle. A Rolls Royce blade has been used in this study as a solid
body, with the blade cross-section varying from hub to shroud with a degree of
skewness. A 3-D finite-volume method has been employed (FLUENT 6.3) with
a k-ε turbulence model. The numerical results show the cooling effectiveness
and heat transfer behaviour with increasing injection blowing ratio BR (1, 1.5 and
2). In terms of film cooling performance, a high BR enhances the cooling
effectiveness on the pressure side and extends the protected area along the spanwise
direction from hub to shroud. The influence of increased blade film cooling can
be assessed via the values of the Nusselt number, in terms of reduced heat transfer to
the blade.
Keywords: turbine blade, film cooling, blowing ratio, CFD, heat transfer,
cooling effectiveness.
1 Introduction
Increasing thrust and overall efficiency while reducing fuel consumption as
far as possible are major issues in modern gas turbine engineering, and this is
generally achieved by increasing the turbine inlet temperature. These higher
temperatures, however, have detrimental effects on the integrity of high pressure
turbine components and the materials composing the turbine blades. Film cooling
technology is employed to protect blade surfaces from the incoming hot gas and
to increase their lifetime. Numerical and experimental studies of three-dimensional
doi:10.2495/CMEM110251
interacting coolant jet process, especially near the hole region, at different hole
rows. On the pressure surface, however, the generated results were poor.
Reviewing the previously published work, the majority of studies
communicated hitherto have focused on 2-D or 3-D aerodynamic flow and heat
transfer for a simple blade geometry: a flat or curved plate, NACA 0021, a
symmetrical turbine blade, or a blade of simple cross-section from hub to tip.
Consequently, this paper aims to extend these studies by focusing on:
Using different cross section blade geometry (from hub to shroud) with
angle of twist.
The main flow (hot gas) and the coolant system (cooling fluid) differing in
temperature, pressure and chemical composition.
Solid-body thermal properties, simulated by specifying the material, e.g.
carbon steel (SAE 4140), in the FLUENT property specification
pre-processor. The blade treated in the previous studies
as a shell surface is shown in Fig. 1.
Aerodynamic flow and heat transfer in a modern gas turbine constitute a very
complex flow field with high turbulence levels; in this paper, therefore, film
cooling is applied from the blade leading edge. The problem is further complicated
by the resulting interference between the main flow and the injected coolant.
Figure 1:
Figure 2:
mesh elements for faces and volumes were generated using the size-function
option of the meshing tool.
In this paper a high-quality mesh for the blade, the hot gas and the coolant fluid is
achieved via the multi-block method with fewer cells. A size function is therefore
employed when meshing the volumes to control the element size, the growth rate
and (owing to the geometric complexity of the Rolls Royce blade) the hole
geometries. Tetrahedral elements were used to complete the volumetric mesh, and
the final mesh generated for the three volumes contains in excess of 10 million
cells (elements). All simulations were executed until convergence criteria of
1×10^-5, and 1×10^-7 on the energy equation, were satisfied. The solution was
controlled by selecting the SIMPLE algorithm to solve the velocity-pressure
coupling with an implicit procedure, such that more than 3500 iterations were
required for convergence (solution stability).
4 Boundary conditions
The boundary-condition details are specified in the code as follows: the inlet
main-flow velocity is 163.4 m/s, and the mass flux (blowing) ratio values for the
plenum are 1, 1.5 and 2. In addition, the ratio of the hot gas temperature to the
cooled air temperature was specified as T∞/TC = 1.6, where TC is the coolant
temperature and T∞ designates the incoming hot gas temperature (T∞ = 460 K).
In the numerical simulation the outlet flow was defined as a static pressure, and
the turbulence intensity (Tu) was calculated as a function of the Reynolds number.
The Reynolds number (Re) is approximately 2.57×10^5, based on the maximum
axial chord of the blade model, so Tu is defined as:

Tu = 0.16 Re^(-1/8)    (1)
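Equation (1) can be evaluated directly. A minimal sketch (the Reynolds number is the paper's value; the function name is ours):

```python
# Sketch of eq. (1): free-stream turbulence intensity from the Reynolds number.
def turbulence_intensity(re):
    return 0.16 * re ** (-1.0 / 8.0)

tu = turbulence_intensity(2.57e5)  # Re based on the maximum axial chord
print(round(tu, 4))  # 0.0337, i.e. a turbulence level of about 3.4%
```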
The Rolls Royce blade is simulated as a solid body. The blade material
properties are given in Table 1.

Table 1: Blade material properties.

Material type: low carbon steel, SAE 4140
Density (ρ): 8030 kg/m3
Thermal conductivity (k): 42.7 W/m-K
Specific heat (Cp): 473 J/kg-K
BR = ρC VC / (ρ∞ V∞)    (2)
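Equation (2) defines the blowing ratio from the coolant and mainstream densities and velocities; a hedged sketch (the numerical values here are illustrative, not the paper's):

```python
# Sketch of eq. (2): blowing (mass flux) ratio.
def blowing_ratio(rho_c, v_c, rho_inf, v_inf):
    return (rho_c * v_c) / (rho_inf * v_inf)

# Illustrative only: a coolant 1.6x denser than the mainstream, injected at
# 1/1.6 of the mainstream velocity (163.4 m/s), gives BR = 1.
br = blowing_ratio(1.6, 102.1, 1.0, 163.4)
print(round(br, 2))  # 1.0
```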
Figs. 3(a), 3(b) and 3(c) depict temperature contour distributions for blowing
ratios BR = 1, 1.5 and 2, respectively. The temperature colours are graduated
from low to high temperature (leading edge to trailing edge area). With
increasing mass flux ratio (BR), the blade surface temperature is reduced on both
the pressure side and the suction side, and also from the hub to the shroud;
thereby, the area of the blade protected from the incoming hot gas increases with
increasing blowing ratio. In this study film cooling is applied in two ways
simultaneously: firstly, convection cooling, by blowing coolant fluid into the
lateral holes from hub to shroud (internal cooling); secondly, injection of coolant
fluid on both the pressure and suction sides as a secondary fluid to create a
blanket above the blade surface (external cooling). According to Fig. 3, the
drops in predicted blade temperature
Figure 3: Predicted blade surface temperature contours for (a) BR = 1, (b) BR = 1.5 and (c) BR = 2 (contour labels range from about 401 K to 454 K).
with rising blowing ratio (BR) indicate that the cooling effectiveness is
enhanced from the hub to the shroud and from the leading to the trailing edge, a
result which correlates very well with the computations of Kadja and
Bergeles [1].
A unique feature of our study is to demonstrate the effect of cooling on the
blade as a solid body. Fig. 4 therefore presents the temperature and cooling
effectiveness distributions for both the pressure and suction sides of the
solid-body blade (the effectiveness profile is always the reverse of the
temperature profile). In addition, the hub, mid and shroud area
temperatures at blowing ratio BR = 2 are also illustrated. The temperature along
the span of the blade can be tracked from the distribution at the hub area to the
mid area and from the mid to the shroud area. Invariably the hub area
temperatures are lower than the mid and shroud area temperatures at the
leading edge region, because the coolant fluid is blown (injected)
from the blade base (hub area). In the midspan section the predicted temperature
curve descends much more steeply than the hub temperature profile at X/Cx > 0.25 on the
pressure surface and at X/Cx < -0.02 on the suction side (Fig. 4). This pattern is
due to the camber of the blade and the angle of twist: the blade cross-section,
span and axial chord at each section change from hub to shroud (blade design
shape).
Local film cooling effectiveness (η) is analysed in this paper for both blade
sides (pressure and suction) as a function of T∞, the wall temperature TW
and the coolant temperature TC, using this equation:

η = (T∞ - TW) / (T∞ - TC)    (3)
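Equation (3) maps wall temperatures onto an effectiveness between 0 (uncooled wall) and 1 (wall at coolant temperature). A sketch using the paper's T∞ = 460 K and TC = 287.5 K with a hypothetical wall temperature:

```python
# Sketch of eq. (3): local film cooling effectiveness.
def effectiveness(t_inf, t_wall, t_cool):
    return (t_inf - t_wall) / (t_inf - t_cool)

# T_inf and T_c are the paper's values; the wall temperature is hypothetical.
eta = effectiveness(460.0, 430.0, 287.5)
print(round(eta, 3))  # 0.174
```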
Figure 4: Predicted temperature (400-460 K) and cooling effectiveness (0-0.40) distributions versus X/Cx (suction side X/Cx < 0, pressure side X/Cx > 0) for the hub, mid and shroud areas at BR = 2.
Figure 5: Effects of blowing ratio (BR = 1, 1.5 and 2) on the blade cooling effectiveness at the (a) hub, (b) mid and (c) shroud areas.
suction side than that on the pressure side, but with increasing BR, the film
cooling effectiveness on the pressure side will gradually increase compared with
the suction side.
Fig. 5(c) shows the variation of cooling effectiveness with blowing ratio in the
shroud regime. The pressure and suction side profiles are nearly
analogous (matched). The film cooling effectiveness is enhanced by about
18.3% and 12.7% as the blowing ratio increases from BR = 1 to 1.5 and from
BR = 1.5 to 2, respectively. The film cooling effectiveness contours for the
pressure and suction sides on the blade model at blowing ratio BR = 2 are shown in
Fig. 6.
The values of cooling effectiveness change gradually from the leading edge to
the trailing edge (coldest to hottest region). Briefly, when the blowing ratio
(BR) is reduced, the hottest area increases on both sides and the blade
is exposed to the incoming hot gas.
A good correlation has been found with previous studies, e.g. Kadja and
Bergeles [1] and Guangchao et al. [14], and with other published results on film
cooling effectiveness with cylindrical hole shapes. With increasing
BR the blade surface temperature drops and the cooling effectiveness is
enhanced. BR is proportional to the temperature difference (ΔT) and inversely
proportional to the cooling effectiveness (η).
Fig. 7 shows the comparison between the CFD prediction of Burdet et al. [15]
and our computed film cooling effectiveness (η) on the blade model near the hub
position for BR = 1 at an injection angle of 35°. At first glance, the values of
effectiveness (η) at X/d < 0.3 on the pressure side are significantly
higher than those of Burdet et al. [15], which is beneficial. On the suction side at X/d <
0.18 the values of η seem to coincide with Burdet et al. [15], while the profile
of the cooling effectiveness descends beyond this region. The drop in values
of cooling effectiveness on both sides is attributed to the blade being modelled as
a solid body, which is much more realistic than in the previous studies (where
blades were analysed as shells), and also to the effects of the blade design, hole
Figure 6: Film cooling effectiveness contours on the blade at BR = 2 (contour labels between about 0.02 and 0.29).
Figure 7: Computed film cooling effectiveness (pressure and suction sides) near the hub versus X/d, compared with the CFD prediction of Burdet et al. [15].
diameter and number of holes along the span which are different from previous
investigations.
Fig. 8 illustrates the effects of the coolant fluid on the distribution of film cooling
effectiveness on the pressure and suction sides at the midspan location for
BR = 1 and 1.5. In this article two different coolant fluids (air at TC = 287.5 K and
TC = 153 K) have been injected to resolve which achieves the better blade
protection. Higher cooling effectiveness was obtained with air injected as
coolant at 287.5 K. Evidently, from equation (3), η depends
on T∞, TW and TC; even with a decrease in TC there is not necessarily an increase
in cooling effectiveness (η), since the blowing ratio (BR) is also affected by
the temperature-dependent fluid properties, as indicated by equation (2).
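This coupling can be illustrated under an ideal-gas assumption (ours, not stated in the paper): at equal pressure the coolant density scales as 1/TC, so holding BR fixed in equation (2) forces a colder jet to be injected at a lower velocity:

```python
# Sketch (ideal-gas assumption): coolant injection velocity needed to hold
# a given BR when the coolant temperature changes, using eq. (2) with
# rho_c / rho_inf = T_inf / T_c at equal pressure.
def coolant_velocity(br, t_cool, t_inf, v_inf):
    rho_ratio = t_inf / t_cool  # coolant-to-mainstream density ratio
    return br * v_inf / rho_ratio

v_warm = coolant_velocity(1.0, 287.5, 460.0, 163.4)  # coolant at 287.5 K
v_cold = coolant_velocity(1.0, 153.0, 460.0, 163.4)  # coolant at 153 K
print(round(v_warm, 1), round(v_cold, 1))  # 102.1 54.3
```

The colder jet is thus markedly slower at the same BR, which alters the film coverage and is consistent with the observation that lowering TC alone does not guarantee higher η.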
Figure 8: Film cooling effectiveness at midspan versus X/Cx for coolant temperatures TC = 287.5 K and TC = 153 K at BR = 1 and 1.5 (suction side X/Cx < 0, pressure side X/Cx > 0).
Prediction of the heat transfer from the hot gas towards the blade can be achieved
via the Nusselt number (NU) based on the axial chord Cx, defined as:

NU = q Cx / (k (T∞ - TW))    (4)

where q is the blade wall heat transfer rate. Fig. 9 shows the predicted profiles
of the Nusselt number (NU) at the midspan for BR = 1, 1.5 and 2.
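Equation (4) can be sketched as follows; the heat flux, chord and gas conductivity values below are hypothetical, chosen only to illustrate the order of magnitude seen in the figures:

```python
# Sketch of eq. (4): Nusselt number based on the axial chord.
def nusselt(q, c_x, k, t_inf, t_wall):
    return q * c_x / (k * (t_inf - t_wall))

# Hypothetical values: q = 20 kW/m^2, Cx = 50 mm, k(air near 450 K) = 0.037 W/m-K.
nu = nusselt(2.0e4, 0.05, 0.037, 460.0, 430.0)
print(round(nu))  # 901
```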
Near the leading edge, along the span (from hub to shroud), the heat transfer
attains its maximum level, in excellent agreement with the results of
Burdet and Abhari [7], Garg and Abhari [8] and Burdet et al. [15]. On the
pressure side NU drops suddenly after the leading edge; the effects of
film cooling are manifested in the creation of a protective layer against the
incoming hot gas on this side. At X/Cx > 0.15 the influence of BR in reducing NU
is clearly observed, a trend which again concurs with the studies of Kadja and
Bergeles [1] and Guangchao et al. [14]. On the suction side, the Nusselt number
(NU) falls suddenly after the leading edge, increases slightly in the region
X/Cx = -0.25 owing to the effects of turbulence and the blade camber, and then
falls gradually; the better protection of the blade from the hot gas through
increased cooling effectiveness is reflected in a reduced blade Nusselt number (NU).
Figure 9: Predicted Nusselt number profiles at midspan for BR = 1, 1.5 and 2.
Fig. 10 shows Nusselt number contours at the leading edge and film cooling
regions at BR = 2. The maximum value of the Nusselt number (NU) occurs at
the leading edge along the blade span, implying heat transfer from the hot gas
towards the blade. Near the holes there is no heat transfer to the blade, owing to
the coolant fluid injection; thus the Nusselt number (NU) drops to negative
values (heat transfer is in the opposite direction).
Figure 10: Nusselt number contours at the leading edge and film-cooled regions at BR = 2 (contour labels between about -282 and 981).
6 Conclusions
The film cooling performance for a complex gas turbine blade at the leading
edge with three blowing ratios has been studied numerically. The main findings
of this investigation are as follows:
1) Film cooling effectiveness near the leading edge increases significantly
with blowing ratio (η is proportional to BR). The influence of
increasing BR on the film cooling effectiveness appears on both the pressure
and suction sides; the pressure side cooling effectiveness is enhanced much
more than that of the suction side (a high BR enhances film cooling on the
pressure side).
2) The film cooling effectiveness along the span from the hub to the
shroud section is enhanced with increasing blowing ratio. Any
dissimilarity between this paper and the previous
studies is due to the number of holes, the hole diameter, and the blade
section and skewness from hub to shroud.
3) There is no benefit in injecting coolant fluid at a lower temperature to
enhance blade cooling at constant BR (see equation (5)).
4) The heat load on the blade, represented by the Nusselt number (NU), is
strongly influenced by increasing BR.
Acknowledgement
The authors would like to thank the Government of Iraq, Ministry of Education,
for their generous financial support during Harbi A. Daud's studies at Sheffield
Hallam University.
References
[1] Kadja M. & Bergeles G., Computational study of turbine blade cooling by
slot-injection of a gas. Applied Thermal Engineering, 17(12), pp. 1141-1149, 1997.
[2] Hung M.S, Ding P.P & Chen P.H, Effect of injection angle orientation on
concave and convex surface film cooling, Experimental Thermal and Fluid
Science, 33, pp.292-305, 2009.
[3] Kassim, M.S, Yoosif, A.H. & Al-Khishali K.J.M., Investigation into flow
interaction between jet and cross mainstream flows, PhD thesis, University
of Technology, Mechanical Engineering Department, Iraq, Baghdad, 2007.
[4] Lakehal D., Theodoridis G.S. & Rodi, W., Three-dimensional flow and heat
transfer calculations of film cooling at the leading edge of a symmetrical
turbine blade model, Int. J. Heat and Fluid Flow, 22, pp. 113-122, 2001.
[5] Theodoridis, G.S., Lakehal, D., & Rodi, W., Three dimensional calculation
of the flow field around a turbine Blade with film cooling injection near the
leading edge, Flow, Turbulence and Combustion, 66, pp. 57-83, 2001.
[6] Forest, A.E., White, A.J., Lai, C.C., Guo, S.M., Oldfield, M. L. G. & Lock,
G. D., Experimentally aided development of a turbine heat transfer
prediction method, Int. J. Heat and Fluid Flow, 25, pp. 606-617, 2004,
available online at www.ScienceDirect.com.
[7] Burdet, A. & Abhari, R.S., Three-dimensional flow prediction and
Improvement of Holes Arrangement of a Film-Cooled Turbine Blade Using
Feature Based Jet Model, ASME J. Turbomachinery, 129, pp. 258-268,
2007.
[8] Garg, V. K. & Abhari, R. S., Comparison of predicted and experimental
Nusselt number for a film cooled rotating blade, Int. J. Heat Fluid Flow,
18, pp. 452-460, 1997.
[9] Azzi, A. & Jubran B.A., Influence of leading edge lateral injection angles
on the film cooling effectiveness of a gas turbine blade, Heat and Mass
Transfer, 40, pp. 501-508, 2004, available online at www.ScienceDirect.com.
[10] Renze, P., Schroder, W. & Meinke, M., Large- eddy simulation of film
cooling flow at density gradients, Int. J. Heat Fluid Flow, 29, pp.18-34,
2008, available online at www.ScienceDirect.com.
[11] Eghlimi, A., Kouzoubov, A. & Fletcher, C.A.J., A new RNG-based
two-equation model for predicting turbulent gas-particle flows, Int. Conference
on CFD in Mineral & Metal Processing and Power Generation CSIRO,
Sydney Australia, 1997.
[12] Lakehal, D., Near wall modeling of turbulent convective heat transport in
film cooling of turbine blades with the aid of direct numerical simulation
data, ASME J. Turbomachinery, 124, pp.458-498, 2002.
[13] Tao, Z., Yang, X., Ding, S., Xu, G., Wu, H., Deng, H. & Luo, X.,
Experimental Study of Rotation Effect on Film Cooling over the Flat Wall
with a Single Hole, Experimental Thermal and Fluid Science, 32,
pp. 1081-1089, 2008, available online at www.ScienceDirect.com.
Abstract
An experimental investigation of spray cooling performed with water based
nanofluid containing multi-walled carbon nanotubes and Fe nanoparticles was
carried out. The concentrations of carbon nanotubes in the liquid used in the
experimental program were 1 wt.%, 0.1 wt.%, 0.01 wt.%, the concentrations of
Fe nanoparticles were 40 wt.%, 10 wt.%, 1 wt.%. The liquid was sprayed on the
surface by a full cone nozzle from distances of 40, 100 and 160 mm with flow
rates of 1 to 2 kg/min (liquid impingement densities of 1 to 40 kg/m²s). A steel
sensor measuring the temperature history was cooled by spraying from 190 °C. The
heat transfer coefficient was calculated over the surface temperature interval
from 100 °C to 50 °C by inverse modelling, and compared with the heat transfer
coefficient of water cooling. Using Fe nanoparticles showed a decrease of the
heat transfer coefficient on the cooled surface. The majority of experiments with
carbon nanotubes also showed a decrease of the heat transfer coefficient.
However, there were some conditions during which an increase was observed.
Keywords: nanofluids, multi-walled carbon nanotubes, Fe nanoparticles, heat
transfer, spray cooling, experimental.
1 Introduction
It was anticipated that the heat transfer qualities of some fluids would be improved
by adding metal particles, metal oxide particles, or generally particles of materials
which have suitable heat transfer characteristics. Some attempts focused on cooling
were made with liquid additives in water, with particles sized in millimetres or
micrometres mixed into the fluids, and with nanofluids. A nanofluid is a suspension
of a fluid (water, ethylene glycol, oil, bio-fluids, polymer solution, etc.) and particles (metals,
doi:10.2495/CMEM110261
Figure 1:
3 Experiment
3.1 Experimental plan
The hot surface was sprayed by the full cone nozzle Lechler 460.443.17 CA with
a spray angle of 45° from three distances of 40 mm, 100 mm, and 160 mm. The
flow rates of 1 kg/min, 1.5 kg/min, and 2 kg/min were held steady during the
experiment (the corresponding liquid impingement densities are shown in Tab. 1). The
initial experiments were conducted by spraying pure water and after that
nanofluid. Finally, the nanofluid spray cooling intensity was compared with the
pure water spray cooling intensity. The summary of experiments is shown in
Tab. 1.
Table 1: Experimental plan.

Fluids and concentrations (wt.%): pure water (100); C nanotubes in water (1, 0.1, 0.01); Fe nanoparticles in water (40, 10, 1).
Nozzle: full cone. Distances: 40, 100 and 160 mm. Flow rates: 1, 1.5 and 2 kg/min.
Liquid impingement densities (kg/m²s): 19.3, 29 and 38.7 at 40 mm; 3.1, 4.6 and 6.2 at 100 mm; 1.2, 1.8 and 2.4 at 160 mm (for 1, 1.5 and 2 kg/min, respectively).
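The impingement densities in Tab. 1 are consistent with spreading each flow rate over the circular area wetted by the 45° full cone at the given distance; a sketch (the circular-footprint assumption is ours):

```python
import math

# Sketch: liquid impingement density = flow rate / wetted area, assuming the
# full cone spray wets a circle of radius r = distance * tan(spray_angle / 2).
def impingement_density(flow_kg_min, distance_m, cone_angle_deg=45.0):
    r = distance_m * math.tan(math.radians(cone_angle_deg / 2.0))
    return (flow_kg_min / 60.0) / (math.pi * r ** 2)  # kg/(m^2 s)

print(round(impingement_density(1.0, 0.040), 1))  # 19.3 (cf. Tab. 1)
print(round(impingement_density(2.0, 0.100), 1))  # 6.2  (cf. Tab. 1)
```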
Figure 2:
Figure 3:
Cooled steel sensor; the drawing is on the left side (steel sensor
body is colored white, Teflon is light grey, channel for
thermocouple is dark grey); the photo is on the right side.
Figure 4:
Experimental equipment.
Figure 5:
5 Results
An example of a record and the results from a cooling experiment is shown in
fig. 6: the measured temperature history at the thermocouple position, the
computed surface temperature and the computed HTC. Some fluctuations can be
seen on the HTC curves in fig. 6; these result from the random fluctuation of
falling droplets in the impact area. The results for spraying with pure water were
found to match the results published by Ciofalo et al. [14]. The temperature field
in the whole steel sensor after 5 s of cooling is illustrated in fig. 5.
The average results from the experiments described in this paper are shown in
figs. 7-10. The average values of the heat transfer coefficient (HTC) were
computed for the surface temperature interval 50-100 °C. The graphs show that
the HTC increases with increasing liquid impingement density, and that adding
carbon nanotubes or Fe nanoparticles to pure water decreases the HTC in most
of the researched cases. The HTC was surprisingly
Figure 6: Example record from a cooling experiment: measured temperature, computed surface temperature (0-200 °C) and computed HTC (up to about 25 000 W/m²K) versus time.
Figure 7: Average HTC versus liquid impingement density for pure water and carbon nanotube nanofluids (1, 0.1 and 0.01 wt.%).
Figure 8: Average HTC versus liquid impingement density for pure water and Fe nanoparticle nanofluids (40, 10 and 1 wt.%).
Figure 9: Average HTC versus carbon nanotube concentration (wt.%) for each liquid impingement density (1.2-38.7 kg/m²s).
Figure 10: Average HTC versus Fe nanoparticle concentration (wt.%) for each liquid impingement density (1.2-38.7 kg/m²s).
Figure 11: Detail of average HTC versus liquid impingement density for pure water and carbon nanotube nanofluids (1, 0.1 and 0.01 wt.%).
Figure 12: Detail of average HTC versus liquid impingement density for pure water and Fe nanoparticle nanofluids (40, 10 and 1 wt.%).
6 Conclusions
Most previous research confirmed an improvement in thermal conductivity and
convective heat transfer coefficient during liquid convection when carbon
nanotubes or Fe nanoparticles are added to pure water. This paper investigated
spray cooling with these nanofluids. It was found that adding carbon nanotubes
or Fe nanoparticles to pure water did not increase the cooling intensity during
spraying of the steel surface at surface temperatures of 100 to 50 °C. A high
increase in cooling intensity was observed only when spraying with 1 wt.% of
carbon nanotubes at a distance of 100 mm from the nozzle. For 3.1 kg/m²s (flow
rate 1 kg/min) the heat transfer coefficient increased to about 174% of that of
pure water; for 4.6 kg/m²s (1.5 kg/min) it was 119% and for 6.2 kg/m²s
(2 kg/min) 101%. For other liquid impingement densities, decreases in HTC of
up to 32%, 32% and 26% for nanofluids with 1 wt.%, 0.1 wt.% and 0.01 wt.% of
carbon nanotubes, respectively, and up to 22%, 18% and 23% for nanofluids
with 40 wt.%, 10 wt.% and 1 wt.% of Fe nanoparticles, respectively, were
observed in comparison with pure water. A decrease in HTC was also observed
in the work of Bansal and Pyrtle [2] using alumina nanoparticles in water.
Contrary to this, the results in Chakraborty's work [1] show an increase in HTC
on adding TiO2 nanoparticles. The reason could lie in the temperature range of
the conducted experiments: those carried out for this paper and Bansal and
Pyrtle's experiments [2] were conducted between 200 and 50 °C, whereas
Chakraborty's research [1] was conducted for temperatures between 1200 and
500 °C.
References
[1] Chakraborty, S., Chakraborty, A., Das, S., Mukherjee, T., Bhattacharjee, D.
& Ray, R.K., Application of water base- TiO2 nano-fluid for cooling of hot
steel plate. ISIJ International, 50, pp. 124-127, 2010.
[2] Bansal, A. & Pyrtle, F., Alumina nanofluid for spray cooling enhancement.
ASME-JSME Thermal Engineering Summer Heat Transfer Conference,
pp. 797-803, 2007.
[3] Choi, S.U.S., Zhang, Z.G., Lockwood, F.E. & Grulke E.A., Anomalous
thermal conductivity enhancement in nanotubes suspensions. Physics
Letters, 79, pp. 2252-2254, 2001.
[4] Assael, M.J., Chen, C.F., Metala, I. & Wakeham, W.A., Thermal
conductivity of suspensions of carbon nanotubes in water. International
Journal of Thermophysics, 25, pp. 971-985, 2004.
[5] Amrollahi, A., Hamidi, A.A. & Rashidi, A.M., The effects of temperature,
volume fraction and vibration time on the thermo-physical properties of a
carbon nanotube suspension (carbon nanofluid). Nanotechnology, 19 (31),
pp. 1-8, 2008.
[6] Zhu, H., Zhang, C., Liu, S., Tang, Y. & Yin, Y., Effect of nanoparticle
clustering and alignment on thermal conductivities of Fe3O4 aqueous
nanofluids. Applied Physics Letters, 89, article number 023123, 2006.
[7] Hong, K.S., Hong, T.K. & Yang, H.S., Thermal conductivity of Fe
nanofluids depending on the cluster size of nanoparticles. Applied Physics
Letters, 88, articles number 031901, 2006.
[8] Ding, Y., Alias, H., Wen, D. & Williams, R.A., Heat transfer of aqueous
suspensions of carbon nanotubes (CNT nanofluids). Heat and Mass
Transfer, 49, pp. 240-250, 2006.
[9] Liao, L. & Liu, Z.H., Forced convective flow drag and heat transfer
characteristics of carbon nanotube suspensions in a horizontal small tube.
Heat and Mass Transfer, 45, pp. 1129-1136, 2009.
[10] Park, K.J. & Jung, D.S., Enhancement of nucleate boiling heat transfer
using carbon nanotubes. Heat and Mass Transfer, 50, pp. 4499-4502, 2007.
[11] Park, K.J., Jung, D. & Shim, S.E., Nucleate boiling heat transfer in aqueous
solutions with carbon nanotubes up to critical heat fluxes. International
Journal of Multiphase Flow, 35, pp. 525-532, 2009.
[12] Shi, M.H., Shuai, M.Q., Li, Q. & Xuan, Y.M.: Study on pool boiling heat
transfer of nano-particle suspensions on plate surface. Journal of Enhanced
Heat Transfer, 14, pp. 223-231, 2007.
[13] Beck, J., Blackwell, B. & Clair, C.R., Inverse Heat Conduction, Wiley,
1985.
[14] Ciofalo, M., Caronia, A., Di Liberto, M. & Puleo, S. The Nukiyama curve
in water spray cooling: Its derivation from temperature-time histories and
its dependence on the quantities that characterize drop impact. Heat and
Mass Transfer, 50, pp. 4948-4966, 2007.
Section 4
Stress analysis
Abstract
The drilling industry is faced with many challenges, and the sudden failure of a
drill string during drilling is one of major concern. Exploration of the causes of
these failures reveals vibration as the major cause. In order to test and analyze the
vibration patterns of rotary drilling, a laboratory prototype of the process is set
up. The mathematical model developed to analyze the vibration presents residual
error. Robustness issues pertaining to model error and modeling error are
discussed, as are methods to counter the errors and minimize the vibrations.
Keywords: rotary drilling, robustness, modeling error, vibration, experimental
set up, unbalanced mass, parameter uncertainty.
Figure 1:
pipe for the length of the drill string, and the rotation is achieved by turning a
square or hexagonal pipe (the Kelly) on a rotary table at drill floor level.
Before drilling, a large, heavy bit is attached to the end of a hollow drill pipe.
As drilling progresses, the drill bit forces its way underground and additional
sections of pipe are connected at the top of the hole. The derrick is the name for
the structure which supports the rig above the surface. The taller the derrick, the
longer the sections of drill pipe that the derrick can hold at a time. Although
early derricks were made of wood, modern derricks are constructed of high-strength steel. Throughout the rotary drilling process, a stream of fluid called
drilling mud is continuously forced to the bottom of the hole, through the bit, and
back up to the surface. This special mud, which contains clay and chemicals
mixed with water, lubricates the bit and keeps it from getting too hot. It also acts
as a cap to keep the oil from gushing up.
Drill strings experience high vibrations and encounter hard rock along the
drilling path. As a result, the dynamics presented by drill strings are highly
complex, nonlinear and unpredictable. The drill string vibrations, coupled
with well bore friction, result in phenomena such as bit bounce, stick-slip, and
forward and backward whirl. There are three main types of drill string vibration:
Axial vibration is mainly caused when drilling with roller cone bits. It leads to
bit bounce and can slow down the rate of penetration.
Torsional vibration results from twisting of the drill string, and sometimes
breaks it. It makes the rotation of the drill bit irregular and leads to the stick-slip
phenomenon.
Lateral vibration occurs when the drill string is bent or when drilling in a
non-vertical well. The drill bit then rotates about a center of rotation that does
not coincide with the center of the well, leading to hole enlargement and forward
or backward whirl of the bit.
This research concentrates on lateral vibrations occurring in the drill pipe due
to a bend. Ideally, with zero well bore friction and the drill string assumed to be
a perfectly straight beam rotated under an axial load, there would be no
nonlinearities or vibrations during drilling. However, in the presence of
curved/inclined boreholes or unbalanced WOBs, the friction at the contact
between the drill bit and the borehole wall is uneven and differs between contact
points. This results in the drill bit centerline not being in the center of the hole;
the centrifugal force now acts at the center of gravity, causing the drill string to
bend. Bent drill strings do not follow circular trajectories, causing the drill bit to
hit the sides of the borehole. This eventually leads to the stick-slip phenomenon,
in which large vibrations and sudden, unexpected drill bit movements occur. The
usual solution on oil rigs is to stall the entire drilling process and restart. In
extreme cases the drill string breaks, requiring the entire string to be hauled up.
Figure 2:
source of vibration is the bit and hence the centrifugal forces developed when an
unbalanced drill string is rotated can be one of the major sources of vibrations.
Following the literature, an unbalanced mass is placed on the lower rotor
representing the drill bit to simulate the bent drill string properties. The
experimental set up now has three DOFs. Apart from the rotation of the upper
rotor and lower rotor, there is a tangential angular displacement of the lower
rotor, initiated by the new centre of rotation of the lower rotor not coinciding
with the centre of rotation of the upper rotor. The lower rotor now follows an
elliptical trajectory, popularly known in the drilling field as bit whirl. This paper also
analyses the behaviour of the system at low and average operating speeds of
actual drilling.
3 Robustness issues
3.1 Residual error and model error
The mathematical model for the process was identified using the black box
system identification approach. The experimental set up was excited with a
chirp input to obtain the required identification data. The chirp input has
correlation function properties very similar to white noise. A Box-Jenkins model
was identified for the process. Box-Jenkins models are especially useful when
the process is affected by disturbances entering late into the system.
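The white-noise-like correlation property of the chirp excitation mentioned above can be checked numerically. The sketch below is illustrative only: the sweep range and sample count are assumptions, not values taken from the experimental set up.

```python
import numpy as np

# Linear chirp: instantaneous frequency sweeps from f0 to f1 over n samples.
# Its autocorrelation is sharply peaked at zero lag, which is what makes it
# a white-noise-like excitation for system identification.
def linear_chirp(n, f0=0.001, f1=0.45, fs=1.0):
    t = np.arange(n) / fs
    k = (f1 - f0) / (n / fs)           # sweep rate, Hz per second
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

def autocorr(x):
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    return r / r[0]                     # normalized so acf[0] == 1

u = linear_chirp(4000)
acf = autocorr(u)
```

For this sweep the normalized autocorrelation decays rapidly away from zero lag, mimicking the flat spectrum of white noise that makes the identification data informative.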
The laboratory operating speeds are selected to represent the rotary drilling
process at its low and average operational speeds. The process is excited by
command inputs of 8 RPM and 38 RPM, figs. 3 and 4. The usual rotary drilling
speeds are around 35 to 45 RPM. The low speed is analyzed to understand the
drill string behavior during the transient period. The upper rotor speed is the input
speed of the process. The experimental responses of the process are recorded and
plotted in figures 3 and 4. The response of the identified model is seen to be
very close to the process response. The residual signal between the process
response and the model response is very low, with values around 180 mRPM,
figs. 5 and 6. This suggests that the model provides a good fit for
analyzing the process behavior.
Figure 3:
Figure 4:
Figure 5:
Figure 6:
Figure 7:
However, the very small vibrations in the output speed of the process due to
the unbalanced mass are noticeable, fig. 7. Here the unbalanced mass is very
small, nearly 57 g, which is about 5% of the mass of the lower rotor
representing the drill bit. This mass represents only a very small bend in the
drill string. In reality, however, drill strings that bend even slightly present
more severe vibrations due to the presence of well bore friction and the higher
mass of the bottom hole assembly. The black box model of the process is
identified in a Box-Jenkins format specifically because Box-Jenkins models are
good for processes in which disturbances enter late into the system.
The residual error, figs. 5 and 6, presents us with a model robustness issue
which needs to be dealt with. One suggestion is to combine the black box model
with a separate model describing the effect of the unbalanced mass using
analytical principles and larger degrees of freedom, Liao et al. [4]. The drilling
system prototype concerned here can be seen to be a strictly proper system; in
other words, the gain tends to zero in the limit as frequency tends to infinity, fig. 8.
Figure 8:
Bode plot of the model with lower (green) and upper (red) bounds.
This can be attributed to the presence of inertia in the system. The model
itself will have robustness errors, and these need to be analyzed further by
looking for RHP poles and zeros and cancellations, and by analyzing the
internal stability of the model.
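The RHP pole/zero and cancellation screen just described can be sketched numerically. The transfer function coefficients below are an illustrative assumption, not the identified Box-Jenkins model; the example deliberately contains a pole-zero cancellation at s = -2 to show the check firing.

```python
import numpy as np

# Internal-stability screen for a rational model G(s) = num(s)/den(s):
# flag right-half-plane (RHP) poles/zeros and near pole-zero cancellations.
num = [1.0, 2.0]                 # s + 2 (zero at s = -2)
den = [1.0, 3.0, 2.0]            # s^2 + 3s + 2 (poles at s = -1, -2)

zeros, poles = np.roots(num), np.roots(den)
rhp_zeros = zeros[zeros.real > 0]
rhp_poles = poles[poles.real > 0]
# Near cancellations: any zero within a small radius of a pole.
cancellations = [(z, p) for z in zeros for p in poles if abs(z - p) < 1e-6]
```

An RHP pole signals instability, an RHP zero limits achievable performance, and a cancellation (even a stable one, as here) hides a mode from the input-output map, which is exactly why internal stability must be checked separately.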
G_p(s) = G(s) + ΔG(s)  (1)
i.e. if
G_p(s) = Σ_{i=1}^{n} N_i(s)/D_i(s)  (2)
G(s) = Σ_{i=1}^{k} N_i(s)/D_i(s)  (3)
and
ΔG(s) = Σ_{i=k+1}^{n} N_i(s)/D_i(s)  (4)
where N_i(s) and D_i(s) are the numerator and denominator of the plant transfer
function. ΔG(s) is the modeling error, or the difference between the model and the best
possible plant model.
Another source of modeling error can be deduced from analyzing the
frequency response magnitude, figs. 9 and 10. The frequency response gains are
plotted for two different conditions, small mass unbalance and large mass
unbalance. It can be seen that as the frequency increases, the size of the resonant
peaks tends to decrease beyond a certain point ω′. In the frequency response gain
plots, this point ω′ for the drilling system here can be seen to be around 150 mHz
for the two cases studied. This particular frequency ω′ is a constant
for a particular system and does not vary with an added disturbance, here the
unbalanced mass. Hence we can safely assume that for frequencies higher than ω′
the magnitude of the frequency response will never exceed the gain at that
value, i.e.
20 log₁₀ |G(jω)| ≤ 20 log₁₀ |G(jω′)|,  ω ≥ ω′  (5)
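The bound in eq. (5) can be checked numerically for any identified model. The second-order transfer function below is an illustrative stand-in for a strictly proper system, not the paper's identified model.

```python
import numpy as np

# Illustrative strictly proper second-order model (assumed, for demonstration):
# G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)
wn, zeta = 1.0, 0.2

def gain(w):
    s = 1j * w
    return np.abs(wn**2 / (s**2 + 2 * zeta * wn * s + wn**2))

w = np.linspace(0.01, 20.0, 5000)
g = gain(w)
w_prime = w[np.argmax(g)]          # frequency of the resonant peak

# Beyond w', the magnitude never exceeds the gain at w', as in eq. (5).
assert np.all(g[w > w_prime] <= gain(w_prime))
```

For a single underdamped second-order mode the magnitude decreases monotonically beyond the resonant peak, which is the behavior eq. (5) exploits to bound the high-frequency uncertainty.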
Figure 9:
Figure 10:
uncertainty associated with its value. These are plotted in the Bode plot, fig. 8,
with the upper and lower bounds of the magnitude and phase curves.
4 Vibration analysis
The analytical equation for the lower rotor with the unbalanced mass can be
written as Inman [12]:
m ẍ + c ẋ + k x = m₀ e ωr² sin(ωr t)  (6)
where m is the lower rotor mass, m₀ is the unbalanced mass, c is the damper
constant, k is the spring constant (the drill string considered as a spring and damper),
e is the distance of the unbalanced mass from the center axis of rotation of the lower
rotor, ωr is the drilling rotational frequency and ζ is the damping ratio.
The steady state displacement of the lower rotor is
X = (m₀ e / m) · r² / √[(1 − r²)² + (2ζ r)²]  (7)
where r = ωr/ωn, and the steady state response is
x_p(t) = X sin(ωr t − φ)  (8)
where φ is the phase lag.
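Eq. (7) can be evaluated directly. The parameter values below are illustrative assumptions, not measurements from the rig; only the 57 g unbalance figure echoes the text.

```python
import math

def unbalance_amplitude(m, m0, e, zeta, r):
    """Steady-state displacement amplitude X of eq. (7) for rotating unbalance.
    r = omega_r / omega_n is the frequency ratio."""
    return (m0 * e / m) * r**2 / math.sqrt((1 - r**2)**2 + (2 * zeta * r)**2)

# Illustrative values (assumed): 1.14 kg lower rotor, 57 g unbalance at 30 mm.
m, m0, e, zeta = 1.14, 0.057, 0.030, 0.05

X_sub = unbalance_amplitude(m, m0, e, zeta, r=0.5)   # below resonance
X_res = unbalance_amplitude(m, m0, e, zeta, r=1.0)   # at resonance
X_sup = unbalance_amplitude(m, m0, e, zeta, r=5.0)   # far above resonance
```

At r = 1 the amplitude reduces to (m₀e/m)/(2ζ), and far above resonance X approaches m₀e/m, the classic rotating-unbalance limit.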
5 Summary
This paper discusses the model of a drill string system representing rotary
drilling. The experimental and simulated model responses are plotted and
analyzed. The residual error is discussed and the source of the error due to lack
of robustness in the model is also studied. Two major reasons for the uncertainty
and presence of modeling error are discussed. The source and control of
vibration is also discussed. Future work involves study and discussion of
robustness issues arising in the model itself, and further expansion of the
modeling errors and ways to overcome the errors for a better plant model.
References
[1] Dykstra, M., Christensen, H., Warren, T., and Azar, J., Drill string
component mass imbalance: A major source of drill string vibrations, SPE
Drilling and Completion, Vol. 11, pp. 234-241, 1996.
[2] Germay, C., van de Wouw, N., Nijmeijer, H., and Sepulchre, R., Nonlinear
drill string dynamics analysis, SIAM J. Applied Dynamical systems, Vol. 8,
pp.527-553, 2009.
Abstract
Early age cracking of concrete bridge decks is a frequent problem for bridges
worldwide. This work provides a framework for a thermo-hygro-mechanical
mathematical model for the analysis of early age transverse cracking of
bridge decks. The model includes the determination of the temperature and
moisture gradients in the deck slab, and the prediction of the thermal and drying
shrinkage strains. These strains were superimposed with the strains from creep
and mechanical loads and applied to an elasto-plastic damage approach to
quantify the damage and stresses in the deck slab. The model was implemented in
finite element computer software to accurately predict the cracking and damage
evolution in concrete bridge decks. Accurate prediction of cracking tendency is
essential for the durability design of bridge decks, and thus for more sustainable
bridges with increased usable life span and low life-cycle costs.
Keywords: transverse cracking, bridge deck, thermo-hygro-mechanical model.
1 Introduction
Bridges usually develop early cracking of their decks [1]. Early age cracks
usually develop transverse to the direction of traffic. The cracking can
initiate almost immediately after construction, and sometimes appears within a
few months after the deck is constructed. The problem of deck cracking remains
significant, even after the adoption of high performance concrete (HPC) for
bridge decks. In a survey conducted by the New York State Department of
Transportation (NYSDOT), it was observed that 48% of 84 bridge decks built in
New York State between 1995 and 1998 using HP concrete had developed
transverse cracks [2].
Figure 1 shows the mechanism of the transverse cracking of a concrete deck
slab. The composite action between the deck and the girders provides restraining
doi:10.2495/CMEM110281
Figure 1:
Deck cracking has no immediate effect on bridge safety, but it has
detrimental effects on long-term performance. Cracks interconnect the voids
and the isolated microcracks in the concrete deck to form preferential pathways
for the ingress of chlorides from deicing chemicals, thus accelerating
reinforcement corrosion. Fanous et al. [3] observed severe corrosion of black
and epoxy-coated rebars extracted from cracked locations in different bridge
decks. Also, leakage of water through cracks increases the degree of water
saturation in the bridge substructure, and therefore increases the risk of
freeze-thaw damage. As a result, the bridge service life is reduced and
maintenance and rehabilitation costs rise.
Bridge deck cracking occurs when restrained volumetric changes associated
with moisture and temperature changes take place. Volumetric changes mainly
result from autogenous shrinkage, drying shrinkage, thermal shrinkage
and creep. These major causes of concrete volume change with time depend
primarily on the material properties and mix design, design details, construction
practices, and environmental conditions. Concrete properties are the most
important factors affecting transverse deck cracking since they control the
shrinkage and thermal strains that cause stresses, and control the relationship
between strains and stresses. Understanding the concrete properties is central to
2 Significance
The main objective of this research is to develop a mechanistic approach for the
analysis of transverse cracking of composite bridge decks. This work will allow
a better understanding of the cracking mechanism and in turn help practising
engineers to develop preventive and remedial strategies to eliminate, or at least
mitigate, such cracking. The FEM simulations will result in better-determined
stresses and crack widths in bridge deck structures subjected to the combined
effects of hygro-thermal volume changes and load-induced cracking. Accurate
prediction of cracking tendency is essential for the reliability and long-term
performance of newly constructed bridge decks at service load levels.
The report with recommendations that will result from the literature survey
and parametric studies will provide engineers with the ability to analyze the
impacts of the material properties and mix design parameters, structural design
details, and construction practices on cracking of concrete bridge decks. This
will enable engineers to develop materials and methods to construct bridge deck
structures that can withstand a multitude of harsh environmental conditions at
low life-cycle cost. These measures will result in more sustainable structures
with increased usable life span and low life-cycle costs.
WIT Transactions on Modelling and Simulation, Vol 51, 2011 WIT Press
www.witpress.com, ISSN 1743-355X (on-line)
ρc ∂T/∂t = div[k grad(T)] + Q  (1)
(∂w/∂H) ∂H/∂t = div[D_H grad(H)] − G  (2)
where w is the moisture content, t is the time, H is the relative humidity, D_H is the
moisture diffusion coefficient, ∂w/∂H is the moisture capacity and G is the rate
of moisture loss due to hydration. The heat of hydration generation model
mentioned above can also be used to determine G.
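The heat balance of eq. (1) can be discretized through the deck thickness. The sketch below uses a minimal explicit 1-D finite-difference scheme; all material values and the fixed-temperature boundary condition are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Explicit 1-D finite differences for eq. (1): rho*c*dT/dt = d/dx(k dT/dx) + Q,
# through the deck thickness. All values below are illustrative assumptions.
rho_c = 2.4e6      # volumetric heat capacity, J/(m^3 K)
k = 2.0            # thermal conductivity, W/(m K)
Q = 500.0          # hydration heat source, W/m^3
L, n = 0.20, 41    # 200 mm deck, 41 nodes
dx = L / (n - 1)
dt = 0.25 * rho_c * dx**2 / k      # satisfies the explicit stability limit

T = np.full(n, 20.0)               # uniform initial temperature, deg C
for _ in range(2000):
    Tn = T.copy()
    T[1:-1] = Tn[1:-1] + dt * (k * (Tn[2:] - 2*Tn[1:-1] + Tn[:-2]) / dx**2 + Q) / rho_c
    T[0] = T[-1] = 20.0            # both faces held at ambient (simplified)
```

The hydration source heats the interior above ambient, producing the through-thickness temperature gradient from which the thermal strain of eq. (4) is later computed.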
3.2 Prediction of the volumetric changes
The numerical solution of the equations above (eqs. (1) and (2)) results in the
temporal and spatial distribution of the temperature (T) and relative humidity (H)
inside the concrete bridge deck. The volumetric changes in concrete are related
to environmental factors including temperature and humidity variations. The
environmental strain ε_ev is the summation of the drying and thermal strains
ε_ev = ε_sh + ε_T  (3)
The thermal strain ε_T is a function of heating and cooling cycles and can be
expressed in terms of temperature change as follows
ε_T = α ΔT  (4)
where α is the coefficient of thermal expansion.
The drying shrinkage strain is related to moisture loss and so it can be linked
to the change in the relative humidity as follows
ε_sh = β ΔH  (5)
where β is the shrinkage coefficient.
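Eqs. (3)-(5) combine directly once the fields T and H are known. The coefficient values below are illustrative assumptions, not calibrated concrete properties.

```python
# Environmental strain of eqs. (3)-(5): thermal plus drying shrinkage parts.
# Coefficient values are illustrative assumptions, not from the paper.
alpha = 10e-6        # coefficient of thermal expansion, 1/K
beta = 1.5e-3        # shrinkage coefficient per unit change in relative humidity

def environmental_strain(dT, dH):
    eps_T = alpha * dT          # eq. (4)
    eps_sh = beta * dH          # eq. (5)
    return eps_sh + eps_T       # eq. (3)

# Example: 15 K cooling and a 20% drop in relative humidity, both contractive.
eps_ev = environmental_strain(dT=-15.0, dH=-0.20)
```

Both contributions are contractive here, so the restrained deck goes into tension, which is the driving mechanism behind the transverse cracking analyzed above.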
The total strain ε is decomposed into elastic, plastic and environmental parts
ε = ε_e + ε_p + ε_ev  (6)
and the stress is obtained from the damaged stiffness tensor E
σ = E : (ε − ε_p − ε_ev)  (7)
The effective stress is defined as
σ_eff = σ / (1 − d)  (8)
where d is the damage parameter. It is assumed that 0 < d < d_cr, where d_cr is the
critical damage at which a complete local rupture takes place. In practice,
d_cr = 1 is usually employed.
σ_eff = E_o : (ε − ε_p − ε_ev)  (9)
σ = (1 − d) σ_eff  (10)
From eqs. (9) and (10), with
E = (1 − d) E_o  (11)
the constitutive relation becomes:
σ = (1 − d) E_o : (ε − ε_p − ε_ev) = E : (ε − ε_p − ε_ev)
(11)
Following the effective stress concept, it is logical to assume that the plastic
flow takes place only in undamaged area [12]. Thus, the formulae from
elastoplastic theory that are dependent on stress must be modified by substituting
the effective stress in place of the nominal stress. The problem can then be
solved by using standard elastoplastic theory. The elastoplastic behavior of
concrete is assumed to follow the pressure-sensitive Drucker-Prager
criterion. The damage initiation and evolution can be characterized by the damage
model developed by Mazars and Pijaudier-Cabot [13].
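The damage relations (7)-(11) can be illustrated in one dimension with scalar quantities. The exponential-style damage law below is a simplified stand-in for the Mazars-type evolution, and all numbers are illustrative assumptions.

```python
# 1-D scalar illustration of the damage relations (7)-(11).
# The damage law below is a simplified stand-in for the Mazars-type
# evolution; all numbers are illustrative assumptions.
E0 = 30e9          # undamaged Young's modulus, Pa

def damage(eps_el, eps0=1e-4, a=0.8):
    """Scalar damage parameter in [0, 1), growing once eps_el exceeds eps0."""
    if eps_el <= eps0:
        return 0.0
    return a * (1.0 - eps0 / eps_el)

def stress(eps, eps_p, eps_ev):
    eps_el = eps - eps_p - eps_ev        # elastic part, cf. eq. (6)
    d = damage(abs(eps_el))
    sigma = (1.0 - d) * E0 * eps_el      # eqs. (10)-(11)
    sigma_eff = sigma / (1.0 - d)        # eq. (8): effective stress
    return sigma, sigma_eff, d

sig, sig_eff, d = stress(eps=4e-4, eps_p=0.0, eps_ev=-1e-4)
```

Note how the effective stress recovers the undamaged response E₀ε_el, which is what justifies evaluating the plastic flow (the Drucker-Prager criterion above) in effective-stress space.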
Figure 2:
Figure 3: Humidity profiles (a) and damage evolution (b) versus drying time for
the bridge deck.
References
[1] Krauss, P. D. & Rogalla, E. A., Transverse cracking in newly constructed
bridge decks. Rep. No. NCHRP Report 380, Transportation Research
Board, National Research Council, Washington, DC, 1997.
[2] Alampalli, S. & Owens, F.T., Improved Performance of New York State
Bridge Decks. HPC bridge Views, Issue 7, 2000.
[3] Fanous, F., Wu, H., & Pape, J., Impact of deck cracking on durability,
Center for Transportation Research and Education, Iowa State University,
Ames, IA, 2000.
[4] French, C., Eppers, L., Le, Q. & Hajjar, J.F., Transverse Cracking in
Concrete Bridge Decks, Transportation Research Record, No. 1688, TRB,
National Research Council, Washington, D.C, 1999.
[5] Saadeghvaziri, M. A., & Hadidi, R., Cause and control of transverse
cracking in concrete bridge decks, Rep. No. FHWA-NJ-2002-19, Federal
[6]
[7]
[8]
[9]
[10]
[11]
[12]
[13]
Abstract
High-cycle fatigue tests were carried out on smooth specimens of ultrafine
grained (UFG) copper produced by equal channel angular pressing for 12 passes.
The growth behavior of a small surface crack was monitored. A major crack,
which led to the final fracture of the specimen, initiated from shear bands (SBs)
at an early stage of stressing. Different tendencies of growth behavior occurred
depending on the range of crack length. To understand the changes in growth
rate and fracture surface morphologies, a quantitative model describing the crack
growth mechanism was developed, considering the reversible plastic zone size at
the crack tip. In addition, the crack growth rate of UFG copper was evaluated by
applying the small-crack growth law.
Keywords: fatigue, surface damage, fine grains, copper, crack propagation.
1 Introduction
Ultrafine grained (UFG) materials processed by equal channel angular pressing
(ECAP) have many unique properties due to the unusual characteristics of their
microstructure with non-equilibrium states. Regarding the fatigue of UFG
materials, most studies have concentrated on cyclic deformation, fatigue life,
surface damage formation and the underlying microstructural mechanisms [1-6].
Since the fatigue life of machine components and structures is mainly
controlled by the growth life of a fatigue crack, the crack growth behavior should
doi:10.2495/CMEM110291
2 Experimental procedures
The material used was pure oxygen-free copper (OFC, 99.99 wt% Cu). Prior to
performing the ECAP process, the materials were annealed at 500°C for 1 hr
(average grain size: 100 μm). The ECAP die used had an angle of 90° between
intersecting channels. The angles at the inner and outer corners of the channel
intersection were 90° and 45°, respectively. Repetitive ECAP was carried out
according to the Bc-route. Twelve passes of extrusion resulted in an equivalent
shear strain of about 11.7. The mechanical properties before ECAP
were 232 MPa tensile strength, 65% elongation and a Vickers hardness number
of 63. After 12 passes of ECAP the properties changed to 402 MPa, 32%, and
131, respectively. The coarse grained copper and the UFG copper processed through
12 passes are referred to hereafter as CG and UFG, respectively.
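The accumulated equivalent strain quoted above can be reproduced with the standard Iwahashi et al. relation for ECAP. The paper does not state which expression was used, so treating the quoted 11.7 as coming from this formula, with channel angle Φ = 90° and outer corner angle Ψ = 45°, is an assumption.

```python
import math

# Iwahashi et al. relation for the equivalent strain after N ECAP passes
# with channel angle phi and outer corner angle psi (both in radians).
def ecap_strain(n_passes, phi, psi):
    half = phi / 2 + psi / 2
    per_pass = (2 * math.cos(half) / math.sin(half)   # 2*cot term
                + psi / math.sin(half)) / math.sqrt(3)
    return n_passes * per_pass

eps = ecap_strain(12, math.radians(90), math.radians(45))
# eps comes out near 11.6, consistent with the "about 11.7" quoted in the text.
```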
Round bar specimens of 5 mm diameter were machined from the processed bars.
The fatigue specimens were electrolytically polished (about 25 μm removed
from the surface layer) prior to mechanical testing in order to remove any
preparation-affected surface layer.
All fatigue tests were carried out at room temperature using a rotating
bending fatigue machine operating at 50 Hz. The observations of fatigue damage
on the specimen surface were performed using both optical microscopy (OM)
and SEM. The fracture surface analysis was performed by SEM. The crack
length, l, is the length measured along the circumferential direction of the surface.
The stress value referred to is the nominal stress amplitude, σa, at the
minimum cross section (5 mm diameter).
For EBSD analyses, the cross section perpendicular to the press direction was
observed. EBSD mappings were carried out using a Tescan Mira II SEM
incorporating an EDAX-TSL Hikari EBSD detector. Each pixel was hexagonal
in shape, 40 nm for UFG samples and 1.0 μm for coarse grain samples.
Transverse cross sections of as-ECAPed and post-fatigued bars were cut to
prepare specimens for transmission electron microscopy (TEM) observation.
Specimens were mechanically polished to a thickness of 100 μm, and then
Figure 1:
Figure 2:
Regarding the UFG copper processed by ECAP, most studies [1, 12-14] on
low-cycle fatigue have indicated that the hardness measured on cross sections of
post-fatigued specimens drops considerably, but the drop is found to decrease at
lower strain amplitudes. Even in the present specimens, fatigued under lower
stress amplitudes, the surface hardness exhibited a dramatically large drop. The
hardness drop was closely related to the formation behavior of surface damage:
the formation of surface damage was accelerated in the latter half of the fatigue
life, resulting in a significant hardness drop, whereas the mild damage formation
in the initial stage of cycling brought a moderate hardness drop. It can thus be
concluded that the initial (moderate) drop in hardness appears to result
mainly from a decrease in the dislocation density inside the grains/GB-regions
and the formation of SBs. Regarding the dislocation density after fatigue, Xu et
al. [15] conducted strain-controlled fatigue tests of commercial copper (99.8%
Cu) processed by ECAP for 6 passes through the C-route (after each pressing, the
billet bar was rotated around its longitudinal axis through 180°). From TEM
observations and EBSD grain maps, they indicated that post-fatigued structures
have narrower GBs and lower dislocation density in grain interiors when
compared to those in virgin microstructures. For stress-controlled fatigue tests
under low plastic strain amplitude (less than 5×10⁻⁴), Kunz et al. [12] indicated
the formation of narrower GBs and lower dislocation densities in the grain
interior for post-fatigued specimens of UFG copper (99.9% Cu).
In the latter half of the fatigue life, on the other hand, the surface hardness
exhibits a significant drop with a simultaneous, large extension of the damaged
regions (Fig. 2). As the microstructural background of the hardness drop
(softening) of copper processed by ECAP, the coarsening of ultrafine grains has
been discussed. Höppel et al. [3] showed that pronounced grain coarsening is
related to marked cyclic softening in strain-controlled fatigue tests. Thermally
activated grain coarsening must be considered as another main reason for the
cyclic softening [16, 17].
Figure 3:
Fig. 3 shows the TEM micrographs of as-ECAPed and
post-fatigued samples. In spite of the very low applied stress amplitude (σa = 100
MPa: about 25% of the tensile strength) under stress-controlled fatigue, coarsened
grains embedded within the original fine grain/cell regions are generated after
3.1×10⁶ cycles. Evidently, purity and fatigue time may be important in
determining the coarsening of the microstructure. TEM micrographs of
post-fatigued specimens indicated grain coarsening and a decrease in dislocation
density inside the coarsened grains. The cyclic softening in the latter half of the
fatigue life results from the decreased dislocation density, shear banding and
cell/grain coarsening. The primary factor in the significant hardness drop appears
to be grain coarsening. It has been suggested that plastic strain localization during
cyclic deformation induces dynamic grain growth and causes the development of
SBs [18]. The heavily extended surface damage in later fatigue stages indicates
the formation of coarse grains, leading to the significant drop in surface
hardness.
Figure 4:
Fig. 5(a) shows the growth curve (ln l vs. N) of the major cracks. As in
conventional grain-sized materials, the crack growth life from an initial size
(e.g. 20 μm) to fracture accounts for about 60-90% of the fatigue life of the
specimens. The growth curves at higher stress amplitudes tend to be approximated
by straight lines, whereas the crack growth curves at σa = 100 and 120 MPa are
roughly divided into three stages. In the first stage the crack length increases
sharply with stressing, and this is followed by a change in the slope of the
growth curve. In the second stage the actual crack length after the slope change
is smaller than the length expected from an extension of the crack growth curve
from the first stage.
Figure 5:
Figure 6:
To clarify the reason for the transient retarded crack growth, SEM analysis of
the fracture surface was carried out. Fig. 6 shows the post-fatigued fracture surface
at σa = 100 MPa. Fig. 6a shows a whole macroscopic view of the crack initiation
site. Fig. 6b shows a magnified view of the fracture surface, a few micrometers
beneath the surface (a = 5 μm, a: crack depth). A flat fracture surface is
observed. With further crack propagation the morphological features of the
fracture surface changed, as a granulated surface is observed at 22 μm beneath
the crack initiation site (Fig. 6c). Interestingly, the grain size on the granulated
fracture surface is roughly equivalent to the grain size of the material (Fig. 6c).
At about 80 μm below the surface, striation features that are nearly perpendicular
r_rp = (1/2π) · (ΔK_eff / (2σ_0.2c))²  (1)
where ΔK_eff is the effective stress intensity factor range and σ_0.2c is the cyclic
0.2% proof stress. ΔK_eff was calculated from the relation ΔK_eff = U·ΔK, where
U and ΔK are the crack opening ratio and the stress intensity factor range,
respectively. Jono et al. [19] conducted plane-bending fatigue tests (with a stress
ratio R = -1) of smooth specimens of structural steels. They measured the
opening-closing behavior of small surface cracks by using the unloading elastic
compliance method. The measurements showed that U-values for crack depths
under 0.1 mm are between 0.6 and 0.8. In the present calculations, U = 0.7 was
used. The solution for ΔK was taken from Fonte and Freitas for semi-elliptical
surface cracks in round bars under bending [20]. Calculated values of r_rp and
their ratios to the grain size of the material, r_rp/d, are shown in Table 1 for the
three crack depths. Essentially different fracture surfaces were observed (Fig. 6).
Consequently, planar, granular and striated fracture surfaces were formed in the
ranges r_rp/d < 1, r_rp/d ≈ 1-2 and r_rp/d > 2, respectively.
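Eq. (1) and the Table 1 ratios can be sketched as follows. The r_rp values and d = 295 nm are taken from Table 1; the paper obtains ΔK_eff from the Fonte-Freitas solution, so the function below only illustrates the formula itself.

```python
import math

def r_rp(dK_eff, sigma_02c):
    """Reversible plastic zone size of eq. (1): (1/2pi)*(dK_eff/(2*sigma_02c))**2."""
    return (1.0 / (2.0 * math.pi)) * (dK_eff / (2.0 * sigma_02c)) ** 2

# Table 1 values for UFG12 (d = 295 nm): r_rp in nm at the three crack depths.
d = 295.0
table_rrp = [52.0, 348.0, 863.0]
ratios = [r / d for r in table_rrp]   # reproduces 0.18, 1.18, 2.93
```

Since r_rp grows quadratically with ΔK_eff, the ratio r_rp/d sweeps through the three regimes (planar, granular, striated) as the crack deepens, which is the quantitative basis of the fracture-surface transition described above.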
Table 1: Values of reversible plastic zone sizes and their ratio to the grain size.

Sample (value of d)   σa (MPa)   l (μm)   a (μm)   r_rp (nm)   r_rp/d   Fracture surface*
UFG12 (d = 295 nm)    100        15       4.8      52          0.18     Fig. 6b
                                 100      32       348         1.18     Fig. 6c
                                 250      80       863         2.93     Fig. 6d
*: Corresponding micrographs in Fig. 6.
Figure 7:
Figure 8:
It has been shown that the growth rate of small cracks cannot be treated in
terms of the stress intensity factor range ΔK. In such a case, the CGR of a small
crack is determined uniquely by the term σaⁿl, which is derived from the
assumption that the CGR is proportional to the RPZ size [11]. Fig. 8 shows the
growth data of a mechanically small crack as a dl/dN vs. σaⁿl relation. The value
of n is a material constant and was 4.4. All growth data plotted against σaⁿl fall
on a straight line. The CGR of small cracks growing by the striation formation
mechanism is thus estimated by the term σaⁿl.
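The small-crack growth law above, dl/dN = C σaⁿ l, integrates to exponential growth of l with N. Only n = 4.4 comes from the text; the proportionality constant C below is an illustrative assumption.

```python
import math

n = 4.4            # material constant from the text
C = 1.0e-13        # illustrative proportionality constant (assumed)
sigma_a = 100.0    # stress amplitude, MPa

def cycles_to_grow(l0, lf):
    """Cycles for a small crack to grow from l0 to lf under dl/dN = C*sigma_a**n*l.
    Only the ratio lf/l0 matters, so the length unit cancels."""
    return math.log(lf / l0) / (C * sigma_a ** n)

# Growth from 20 um to 2 mm, both expressed in mm.
N = cycles_to_grow(20e-3, 2.0)
```

Exponential growth of l with N is exactly what makes the ln l vs. N plot of Fig. 5(a) a straight line whenever the law holds at a fixed stress amplitude.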
4 Conclusions
The main findings of this study can be summarized as follows:
(1) The surface damage of the UFG copper formed gradually with cycling, but
significant extension of the damaged areas occurred in the latter half of the
fatigue life. Correspondingly, the surface hardness exhibited an initial mild
drop and a subsequent severe drop in the latter half of the fatigue life.
The initial hardness drop strongly depends on the absorption of mobile
dislocations at non-equilibrium GBs. The subsequent severe drop may be
attributed to grain coarsening.
(2) The CGR temporarily dropped at around dl/dN = 3×10⁻⁷ mm/cycle, and then
gradually recovered with subsequent cycling.
(3) The fracture surface showed a planar, granular and striated surface as the
crack continued to grow. The ratio of the RPZ size at the crack tip to the
grain size, r_rp/d, was calculated for the crack lengths at which a planar,
granular and striated surface was observed. The values of r_rp/d for crack
lengths with planar, granular and striated fracture surfaces corresponded to
ranges of r_rp/d < 1, r_rp/d between 1 and 2, and r_rp/d > 2, respectively.
(4) To understand the change in fracture surface morphologies, a quantitative
model describing the crack growth mechanism was developed based on the
RPZ size and microstructural factors. The changes in the CGR and the
morphological features of the fracture surface were successfully explained
by this model.
(5) The CGR of a mechanically small surface crack could not be estimated by
the stress intensity factor range, but was uniquely determined by the term
σaⁿl, which is derived from the assumption that the CGR is proportional to
the RPZ size. The exponent n is a material constant and was 4.4.
Acknowledgements
This study was supported by a Grant-in-Aid (20560080) for Scientific Research
(C) from the Ministry of Education, Science, and Culture of Japan, and a grant
from the Fundamental R&D Program for Core Technology of Materials funded
by the Ministry of Commerce, Industry and Energy, Republic of Korea.
References
[1] Agnew, S.R. & Weertman, J.R., Materials Science and Engineering, A 244,
pp. 145-153, 1998.
[2] Vinogradov, A. & Hashimoto, S., Materials Transactions, 42, pp. 74-84,
2001.
[3] Höppel, H.W., Zhou, Z.M., Mughrabi, H. & Valiev, R.Z., Philosophical
Magazine, A 87, pp. 1781-1794, 2002.
[4] Mughrabi, H., Höppel, H.W. & Kautz, M., Scripta Materialia, 51, pp. 807-812, 2004.
[5] Vinogradov, A., Nagasaki, S., Patlan, V., Kitagawa, K. & Kawazoe, M.,
Nanostructured Materials, 11, pp. 925-934, 1999.
[6] Goto, M., Han, S.Z., Yakushiji, T., Lim, C.Y. & Kim, S.S., Scripta
Materialia, 54, pp. 2101-2106, 2006.
WIT Transactions on Modelling and Simulation, Vol 51, 2011 WIT Press
www.witpress.com, ISSN 1743-355X (on-line)
Structural characterization of
electro-thermally driven micro-actuators
with immeasurable temperature-dependent
material characteristics
W. Szyszkowski & D. Hill
Department of Mechanical Engineering, University of Saskatchewan,
Canada
Abstract
Multi-cell cascaded Electro-Thermal Micro-Actuators (ETMA) made of nickel
alloys are analyzed by the finite element (FE) method. The computer simulation
runs over the electrical, thermal, and mechanical phases of the ETMA
operations. The main challenges of the modeling are discussed. Some of the
material parameters, such as the electrical resistivity, the thermal expansion
coefficient and the emissivity, are strongly temperature-dependent. Furthermore,
any measurement of these dependences is complicated by a magnetic phase
transition occurring in nickel within the operating range of temperature. All the
properties are sensitive to the particular composition of the material. The surface
convection is additionally shape-dependent and, mainly due to the small dimensions
of the actuators, cannot be determined experimentally with sufficient accuracy.
For these reasons, material data estimated from the available
literature usually do not yield reliable simulations.
In the approach proposed, the material characteristics of the ETMA considered
are determined by exploiting the fact that, for a given applied voltage, the total
current and displacement of the real actuators (the performance parameters) can be
measured with relatively high precision. The same performance parameters, i.e.
the total current and displacement, can be obtained as output of the FE
simulation, in which some important material properties of the actuator model
are assumed in a parametric form (the material parameters). The FE simulation
procedure is integrated with these real measurements in such a way that the
doi:10.2495/CMEM110301
1 Introduction
Electro-thermal micro-actuators (ETMA) are constructed as monolithic
compliant mechanisms. They have displacement/force outputs typically
higher than other micro-actuators, and should be easier to control [1, 2].
However, mainly due to the relatively high temperatures of the operating regimes [3,
4], analytical predictions and numerical simulations are rather challenging.
Several parameters characterizing the three different physical environments
involved, i.e. electrical, thermal, and mechanical, may vary quite substantially
over the temperature ranges the actuators usually experience.
While the finite element (FE) technique is capable of handling complicated
geometrical shapes, obtaining data and modeling these material parameters for a
particular ETMA remains difficult and seems to be the main source of the
discrepancies between computed and measured results. Some of the
parameters are also scale-dependent and almost impossible to measure for such
small devices, or may be affected by changes in the material's microstructure
triggered at transition temperatures that are difficult to detect with sufficient
precision. Consequently, analytical predictions of the structural characteristics of
such devices have not been particularly accurate so far, and that is why new
designs still have to rely heavily on costly, time-consuming, and numerous
experimental tests.
Most of the FE analyses of ETMA reported in the literature adopt the
material parameters as temperature-independent constants, at best averaged over
the expected operating temperature range [5, 6], while in fact their values may
vary several-fold over that range.
The above issues are discussed here on the example of a cascaded ETMA made
of a nickel alloy and manufactured using laser micro-fabrication technology
[7]. All the main material parameters are assumed temperature-dependent. The
material description is assessed by comparing the ETMA's measured and
FE-simulated performance parameters. In order to improve accuracy, the FE
simulation is combined with experimental testing to modify the values of some
uncertain material parameters. The procedure is iterative: the changes in the
temperature distribution, indicating the coupling between the electrical and
thermal fields, and the ETMA performance, which is sensitive to the current
values of the material parameters, are monitored. It is demonstrated that only
measurements pertaining to the electrical and mechanical fields are needed,
while the rather cumbersome, unreliable, or simply impossible measurements of
heat transfer and temperatures can be avoided.
Figure 1: Multi-cell cascaded ETMA: a) a single actuator (1.33 mm × 1.06 mm; motion platform, hot arm, constrainer; displacement u, force F); b) micro-tweezers.
The distributions of the voltage V, current density i, and power Q in the actuator are
governed by the equations:

∇·((1/ρ)∇V) = 0,   i = −(1/ρ)∇V,   Q = (1/ρ)∇V·∇V   (1a,b,c)

σij = Dijkl(εkl − αT δkl),   εij = ½(ui,j + uj,i)   (3a,b)
temperature was measured with the help of a hot plate and thermal glasses. It
was found that the resistivity varies quadratically with temperature up to about
306 °C; above this temperature the variation is linear. It was concluded that this is
due to the phase change that takes place at Tp ≈ 306 °C. The following
relationship was used in the FE simulation (SI units are used):

ρ(T) = 6.77×10⁻⁵ (1 + 0.00476T (1 + 0.00303T))   if T ≤ 306
ρ(T) = 16.4×10⁻⁵ (1 + 0.001872T)   if T > 306   (4a,b)

It should be noted that the magnitude of ρ(T) changes by some 420% over
the range of 20–600 °C.
The emissivity was found to vary almost linearly with temperature. The
variation was approximated by:

e(T) = 0.1 (1 + 0.0008T)   (5)

Note that the magnitude of this parameter varies only about 50% over the
same range of temperature.
According to [10], the variation of the thermal conduction coefficient should be
less significant over that temperature range. Typically its value first slightly
drops with temperature and then starts to increase, the change most probably
associated with the magnetic phase change already mentioned:

K(T) = 92.4 − 0.093T   if T ≤ 306
K(T) = 56.3 + 0.025T   if T > 306   (6)

It should be noted that this coefficient varies about 30% over the whole
temperature range.
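The property laws (4)-(6) can be collected into small functions, and the two branches of each piecewise law can be checked for continuity at the transition temperature. A minimal sketch in Python, using the coefficients as printed above:

```python
T_P = 306.0  # magnetic phase-transition temperature (deg C)

def resistivity(T):
    """Electrical resistivity rho(T), eqn (4a,b): quadratic below T_P, linear above."""
    if T <= T_P:
        return 6.77e-5 * (1.0 + 0.00476 * T * (1.0 + 0.00303 * T))
    return 16.4e-5 * (1.0 + 0.001872 * T)

def emissivity(T):
    """Emissivity e(T), eqn (5): linear in temperature."""
    return 0.1 * (1.0 + 0.0008 * T)

def conduction(T):
    """Thermal conduction coefficient K(T), eqn (6): piecewise linear."""
    if T <= T_P:
        return 92.4 - 0.093 * T
    return 56.3 + 0.025 * T

# The two branches of each piecewise law nearly agree at T_P:
for f in (resistivity, conduction):
    assert abs(f(T_P) - f(T_P + 1e-9)) / abs(f(T_P)) < 0.01
```

Evaluating the branch values at Tp confirms that the fitted coefficients were chosen so that each law is (nearly) continuous across the phase change.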
The relation h(T) is more difficult to establish, because the convection
coefficient generally depends on the geometry of the heat-transferring body. On
the other hand, any direct temperature or heat flux measurements on a
micro-sample with dimensions comparable to the hot arm's dimensions are very
challenging (too small for thermal glasses, for example). Nevertheless, the
macro-sample mentioned above was used to determine this coefficient as well.
The results are denoted by the curve ht(T) shown in Figure 2. This curve is
generally in the range for a flat surface undergoing free convection [12]. Note that
for this test the value of the convection coefficient changed about 8 times over
the temperature range from 20 °C to 600 °C.
Most importantly, however, the ht(T) used in the FE simulation consistently
gave much higher than expected temperatures of the hot arms (around 2400 °C),
which in turn indicated that the convection coefficient for the small-dimension
members of the actuator should be much higher than that obtained for the
relatively flat surface of the macro-sample. However, it
was noticed that the convection coefficient data for small-diameter wires
Figure 2: Convection coefficient h (W/m²K) versus temperature (°C): the macro-sample test curve ht(T) and the small-wire curve hw(T), which differ by roughly 1000%.
When the curve hw(T) was used in the ETMA's simulation, the resulting
temperature was in turn lower than expected. Combined with the numerical test
performed on ht(T), this indicated that for the members of the actuator the
convection coefficient h(T) should be bounded by ht ≤ h ≤ hw. Therefore this
coefficient is assumed in the form:

h(T) = β hw + (1 − β) ht,   (7)

where the value of the parameter β is to be determined by comparing the simulation
and experimental results.
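Eqn (7) simply blends the two measured curves, so for any parameter value between 0 and 1 the result stays within the bounds ht ≤ h ≤ hw. A sketch, where the curves h_t and h_w below are hypothetical stand-ins for the measured data:

```python
def blended_convection(T, beta, h_w, h_t):
    """h(T) = beta*h_w(T) + (1-beta)*h_t(T), eqn (7); beta in [0, 1] is the
    tuning parameter found by matching simulated and measured performance."""
    return beta * h_w(T) + (1.0 - beta) * h_t(T)

# Hypothetical stand-ins for the measured curves (W/m^2K):
h_t = lambda T: 20.0 + 0.05 * T     # macro-sample (flat surface) curve
h_w = lambda T: 150.0 + 0.15 * T    # small-diameter wire curve

for beta in (0.0, 0.5, 0.83, 1.0):
    h = blended_convection(300.0, beta, h_w, h_t)
    assert h_t(300.0) <= h <= h_w(300.0)
```

In the procedure described later, the single scalar parameter is adjusted until the simulated displacement matches the measured one.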
The mechanical properties, such as Young's modulus E and Poisson's ratio ν,
as well as the thermal expansion coefficient α, are less affected by the
temperature and the alloy composition.
The variation of E with temperature up to 600 °C was adopted from [8] in the
form:

E(T) = 206.4×10⁹ (1 − 0.000286T)   (8)

The Poisson's ratio ν, according to [11], can be considered approximately
constant and equal to 0.31.
Reference [8] also suggested a linear variation of α with temperature:

α(T) = 13×10⁻⁶ (1 + 0.000343T)   (9)

However, the detailed data reported in [13, 14] for a similar nickel alloy
show that α increases with temperature up to the magnetic phase transition,
then drops, and starts increasing again for temperatures above
approximately 450 °C. These data, as well as the linear formula (9), are plotted in
Figure 3. In the FE simulation the α(T) relation is approximated by piecewise-linear
functions, also indicated in Figure 3, with the parameters
α0, α1, α2, α3, T1, T2, and T3. The values of these parameters indicated in the
figure were determined by matching the simulation and experimental
displacement results as closely as possible, as explained in the next section.
Figure 3: Thermal expansion coefficient α (1/°C) versus temperature (°C): the linear relation from [8], the data from [13, 14], and the piecewise-linear approximation with parameters α0, α1, α2, α3, T1, T2, T3.
Note that for the temperature range considered the values of the E(T) and α(T)
parameters change only about 20%. Therefore any inaccuracies in these
properties should have much less effect on the response of the simulated
actuator than inaccuracies in the convection or resistivity coefficients.
Figure 4: The iterative FE procedure. Electrical phase: ρ(T) is needed to calculate i(x, T), V(x, T), Q(x, T) and the total current ia. Thermal phase: Q and h(T), e(T), K(T) are needed to calculate T(x); with k = k + 1, the loop repeats until ||T(x)k − T(x)k−1|| is sufficiently small. Mechanical phase: T(x) and E(T), ν(T), α(T) are needed to calculate u(x), σ(x), the displacement ua and the force F.
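The loop of Figure 4 is a fixed-point iteration between the electrical and thermal phases; only after the temperature field has converged is the one-way mechanical phase evaluated. A scalar sketch, where the two solver callables are hypothetical lumped stand-ins for the FE phases:

```python
def coupled_electro_thermal(solve_electrical, solve_thermal, T0, tol=1e-3, max_iter=100):
    """Fixed-point iteration over the electrical and thermal phases of Figure 4.

    solve_electrical(T) -> Q : Joule heating computed with rho(T)
    solve_thermal(Q)    -> T : temperature resulting from Q via h(T), e(T), K(T)
    Iterates k = k+1 until the temperature stops changing; the mechanical
    phase is then run once with the converged temperature.
    """
    T = T0
    for k in range(max_iter):
        Q = solve_electrical(T)
        T_new = solve_thermal(Q)
        if abs(T_new - T) <= tol * max(abs(T_new), 1.0):
            return T_new, k + 1
        T = T_new
    raise RuntimeError("electro-thermal loop did not converge")

# Scalar toy problem (hypothetical lumped models, not the FE phases):
T_conv, iters = coupled_electro_thermal(
    lambda T: 100.0 / (1.0 + 0.001 * T),   # Q drops as rho(T) grows
    lambda Q: 20.0 + 0.5 * Q,              # ambient 20 C plus heating
    T0=20.0)
```

Because the heating decreases as the resistivity grows, the mapping is a contraction and the loop settles in a few iterations.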
Figure 5: Total current ia (mA) versus applied voltage Va (V): experimental data and FE simulation with the convection mixing parameter equal to 0.83.
Note that the magnitude of this displacement depends on the values assigned to
the parameters α0, α1, α2, α3, T1, T2, and T3 discussed before. If the platform motion
is constrained, then the force F generated by the actuation unit to constrain it is
calculated.
Figure 6: Platform displacement ua (μm) versus total current ia (mA): experimental, simulated, and modified results.
The displacement obtained by applying the linear relation (9) for the
thermal expansion coefficient is also shown for comparison.
conduction of heat into the anchor. Since the resistivity is lower in the colder cells and
the same current passes through each cell, the voltage drop is not uniform.
The corresponding temperature distribution is presented in Figure 7b. It
should be noted that roughly half of the actuator's material underwent the
phase transition occurring at Tp ≈ 306 °C.
Figure 7: The FE results.
On average the constrainers are cooler by about 35 °C than the hot arms.
Closer analysis reveals that this difference is responsible for about 90% of the
displacement generated by the unit at the motion platform.
6 Conclusions
Detailed knowledge of the temperature dependency of several electro-thermal-mechanical
material properties is necessary to accurately simulate the ETMA by
the FE method. A great deal of caution should be exercised when selecting data
from the literature, because some of these properties are very sensitive to the
particular composition of the material. Also, some changes in the material's
internal structure may have a significant effect on these properties. For example,
the resistivity of the nickel alloy used in the ETMA presented varies
quadratically below the temperature of the magnetic phase change, but linearly
above it. Similarly, the conduction drops with temperature below the transition
point and increases above it.
The simulation results appear to be most affected by the uncertainty in the
convection coefficient, the value of which may vary several-fold over the
temperature range involved. This coefficient also happens to be scale-dependent
and impossible to measure accurately for the members of the actuator due to their
small dimensions.
The uncertain material characteristics can be "tuned up" by combining the
FE simulation with experimental measurements of the ETMA performance.
It should be noted that the electrical and thermal characteristics have to be
adjusted simultaneously due to the coupling, while the mechanical properties can
be modified independently.
References
[1] T. Moulton, G.K. Ananthasuresh. Micromechanical devices with embedded
electro-thermal-compliant actuation. Sensors and Actuators A, 90, 2001,
pp. 38-48.
[2] D. Hill, W. Szyszkowski, and E. Bordachev, On Modeling and Computer
Simulation of an Electro-thermally Driven Cascaded Nickel Microactuator, Sensors and Actuators A, 126, 2006, pp. 253-263.
[3] C.P. Hsu, W.C. Tai, W. Hsu. Design and Analysis of an Electro-Thermally
Driven Long-Stretch Micro Drive with Cascaded Structure. Proceedings of
IMECE2002, 2002.
[4] H. Du, C. Su, M.K. Lim, W.L. Jin. A micro-machined thermally-driven
gripper: a numerical and experimental study. Smart Mater. Struct., 8, 1999,
pp. 616-622.
[5] P. Lerch, C.K. Slimane, B. Romanowicz, P. Renaud. Modelization and
characterization of asymmetrical thermal micro-actuators. J. Micromech.
Microeng., 6, 1996, pp. 134-137.
[6] N. Mankame, G.K. Ananthasuresh. Comprehensive thermal modeling and
characterization of an electro-thermal-compliant microactuator. J.
Micromech. Microeng., 11, 2001, pp. 452-462.
[7] E.V. Bordatchev, S.K. Nikumb, and W. Hsu, Laser micromachining of the
miniature functional mechanisms, Proceedings of SPIE, Vol. 5578 (SPIE,
Bellingham, WA, 2004), paper # 5578D-77, pp. 579-588.
[8] King, J., Materials Handbook for Hybrid MicroElectronics. Artech House,
Boston, 1988.
[9] Everhart, J., Engineering Properties of Nickel and Nickel Alloys. Plenum
Press, New York, 1971.
[10] Temperature Dependent Elastic & Thermal Properties Database, MPDB
(JAHM) Software, Inc., USA, 1999.
[11] Yao, Y.D., Tsai, J.H. Magnetic Phase Transition in Nickel-Rich
Nickel-Copper Alloys, Chinese Journal of Physics, Vol. 16, No. 4, pp. 189-195,
1978.
[12] Incropera F.P. and Dewitt D.P. Fundamentals of Heat and Mass Transfer,
Fifth Edition. John Wiley and Sons, New York, 2002.
[13] T.A. Faisst. Determination of the critical exponent of the linear thermal
expansion coefficient of nickel by neutron diffraction. J. Phys.: Condens.
Matter, 1, 1989, pp. 5805-5810.
[14] T.G. Kollie. Measurement of the thermal-expansion coefficient of nickel
from 300 to 1000 K and determination of the power-law constants near the
Curie temperature. Physical Review B, V16, N11, 1977, pp. 4872-4882.
Abstract
The inelastic response of elliptical plates to impact and initial impulsive loading is
studied. For the determination of the deflected shape of the plates, the concept of
mode form motions amalgamated with the conical velocity field is used. Theoretical
predictions of residual deflections are developed for plates of piecewise constant
thickness. The cases of circular and annular plates subjected to initial impulsive
loading are studied as particular cases of an elliptical plate.
Keywords: impulsive loading, plasticity, plate, elliptical plate, Mises condition.
1 Introduction
Thin plates and shells are important elements of structures. In accidental
situations the plates can be subjected to impact and shock loadings. This entails
the need for methods of evaluating the maximal residual deflections caused by
initial impact and impulsive loading.
Exact and approximate theoretical predictions and experimental results
regarding the behaviour of inelastic structures have been presented by several
authors. Reviews of these studies can be found in the books by Jones [2],
Kaliszky [5], Stronge and Yu [17], and in the papers by Kaliszky [4], Jones [3],
Kaliszky and Logo [6], Nurick and Martin [12], and Yu and Chen [21]. Shen and
Jones [15], and also Wen et al. [20], studied the dynamic plastic response of fully
clamped circular plates in the cases of rate-sensitive and rate-insensitive
materials. Liu and Stronge [11] considered simply supported circular plates
subjected to dynamic pressure at the central part of the plate. Wang et al. [19]
used the concept of the unified strength theory in dynamic plastic analysis.
Lellep and Hein [7] and Lellep and Mürk [8, 9] studied stepped plates and shallow
shells. In the papers [10, 9] the concept of plates with stable cracks located at the
re-entrant corners of steps was used for the determination of optimal parameters of the
plate.
doi:10.2495/CMEM110311
x²/a² + y²/b² = 1,   (2)

r* = ab / √(a² sin²θ + b² cos²θ)   (3)

Figure 1: An elliptical plate (radii r*, r1, r2).
WIT Transactions on Modelling and Simulation, Vol 51, 2011 WIT Press
www.witpress.com, ISSN 1743-355X (on-line)
The plates under consideration have piecewise constant thickness, or the
thickness can be approximated by a stepped distribution, e.g. h = hj for
r ∈ (rj, rj+1), where the material obeys the Mises yield condition

M1² − M1M2 + M2² + 3M12² = M0².   (4)

In eqn. (4) M1, M2 are the bending moments with respect to the axes Ox and Oy,
respectively, M12 is the shear moment in the xy-plane, whereas M0 stands for the
yield moment. In the case of a solid plate M0 = σ0h²/4, h being the thickness of
the plate and σ0 the yield stress of the material.
According to the associated flow law one has

κ̇1 = λ(2M1 − M2),   κ̇2 = λ(2M2 − M1),   κ̇12 = 6λM12,   (5)

where λ is a non-negative scalar multiplier and

κ̇1 = −∂²Ẇ/∂x²,   κ̇2 = −∂²Ẇ/∂y²,   κ̇12 = −∂²Ẇ/∂x∂y.   (6)
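The flow law (5) is the normality rule for the yield condition (4): the curvature rates are λ times the gradient of Φ = M1² − M1M2 + M2² + 3M12² − M0². A quick numerical check of this by central differences:

```python
def yield_fn(M1, M2, M12, M0=1.0):
    """Mises yield condition for moments, eqn (4), written as Phi = 0."""
    return M1*M1 - M1*M2 + M2*M2 + 3.0*M12*M12 - M0*M0

def flow_rates(M1, M2, M12, lam=1.0):
    """Associated flow law, eqn (5)."""
    return (lam*(2.0*M1 - M2), lam*(2.0*M2 - M1), 6.0*lam*M12)

def grad_fd(M1, M2, M12, h=1e-6):
    """Gradient of Phi by central differences (exact for a quadratic, up to rounding)."""
    g1 = (yield_fn(M1+h, M2, M12) - yield_fn(M1-h, M2, M12)) / (2.0*h)
    g2 = (yield_fn(M1, M2+h, M12) - yield_fn(M1, M2-h, M12)) / (2.0*h)
    g12 = (yield_fn(M1, M2, M12+h) - yield_fn(M1, M2, M12-h)) / (2.0*h)
    return (g1, g2, g12)
```

Evaluating both at any moment state reproduces the coefficients 2M1 − M2, 2M2 − M1 and 6M12 of eqn (5).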
Making use of eqns. (6) one can present the power of the plastic dissipation
per unit area of the middle surface as

di = M1κ̇1 + M2κ̇2 + 2M12κ̇12.   (7)
The internal energy dissipation and the energy balance read

Di = ∫S di dS,   Di = Ae,   (8)

where Ae is the power of the external forces. Note that the work done by the inertial
forces is included in Ae.
where r* is specified by eqn. (3). According to the latter we can write

Ẇ(r, θ, t) = Ẇ0(t) f(r, θ),   (9)

where W is the transverse deflection and Ẇ0 stands for the deflection rate at a
specific point. Here dots denote time derivatives. As a particular case of eqn.
(9) one can take the conical velocity field

f(r, θ) = 1 − r/r*.   (10)

The energy dissipated at the stationary hinge lines is

Di = Σ_{j=0}^{n} M0j Σ_i ∫_{rj}^{rj+1} θ̇ij r dr,   (11)
where the simplifications suggested in [2] are taken into account. In eqn. (11)
M0j stands for the yield moment of the plate with thickness hj, and θ̇ij is the
slope discontinuity at the hinge line located at θ = θi in the region r ∈ (rj, rj+1).
Evidently, θ̇ij is the jump of the slope ∂Ẇ/∂θ across the line θ = θi.
However, in the case of a continuous field of straight yield lines, called a yield
fan, it is judicious to calculate the internal energy dissipation according to eqn.
(7). Making use of eqns. (5)-(7) and eqn. (10) one obtains
di = (2 M0j Ẇ0 / (√3 r r*)) (1 + 2(r*′/r*)² − r*″/r*)   (12)

for r ∈ (rj, rj+1); j = 0, ..., n.
In eqn. (12) and henceforth primes denote differentiation with respect to the
polar angle θ. Note that the relation (12) can be reached in different ways
(Skrzypek and Hetnarski [16]; Sawczuk and Sokol-Supel [14]; Ranitsyn [13]).
Since the middle plane of the elliptical plate covers the area
0 ≤ r ≤ r*, 0 ≤ θ ≤ 2π, and dS = r dr dθ, it follows from eqn. (12) that

Di = (2Ẇ0/√3) Σ_{j=0}^{n} ∫_0^{2π} ∫_{rj}^{rj+1} (M0j/(r r*)) (1 + 2(r*′/r*)² − r*″/r*) r dr dθ

   = (2Ẇ0/√3) Σ_{j=0}^{n} M0j ∫_0^{2π} ((rj+1 − rj)/r*³) (r*² + 2r*′² − r* r*″) dθ.   (13)
The power of the external (inertial) forces is

Ae = −∫S ρ h Ẅ Ẇ dS,   (14)

where ρ stands for the density of the material. Calculating the accelerations from
eqns. (9)-(10) and substituting into eqn. (14) yields
Ae = −Ẅ0 Ẇ0 ρ Σ_{j=0}^{n} ∫_0^{2π} ∫_{rj}^{rj+1} hj (1 − r/r*)² r dr dθ   (15)

Ae = −Ẅ0 Ẇ0 ρ ∫_0^{2π} ( Σ_{j=1}^{n} Bj (hj−1 − hj) + hn r*²/12 ) dθ   (16)

where

Bj = (1/2) rj² − (2/(3r*)) rj³ + (1/(4r*²)) rj⁴   (17)

for j = 0, ..., n.
Making use of eqns. (13)-(17) one can determine

Ẅ0 = − [ (2/√3) Σ_{j=0}^{n} M0j ∫_0^{2π} ((rj+1 − rj)/r*³)(r*² + 2r*′² − r* r*″) dθ ] / [ ρ ∫_0^{2π} ( Σ_{j=1}^{n} Bj (hj−1 − hj) + hn r*²/12 ) dθ ]   (18)

For a plate of constant thickness h0 (n = 0) this reduces to

Ẅ0 = − ( 24 M0 ∫_0^{2π} (1 + 2(r*′/r*)² − r*″/r*) dθ ) / ( √3 ρ h0 ∫_0^{2π} r*² dθ )   (19)
r*′ = ab(b² − a²) cosθ sinθ / (a² + (b² − a²)cos²θ)^(3/2),

r*″ = ab(b² − a²) [ (cos²θ − sin²θ)(a² + (b² − a²)cos²θ) + 3(b² − a²)cos²θ sin²θ ] / (a² + (b² − a²)cos²θ)^(5/2),   (20)

and the integrals

∫_0^{2π} (1 + 2(r*′/r*)² − r*″/r*) dθ,   (21)

∫_0^{2π} r*² dθ = 2πab.   (22)

Evidently, if a = b = R, then eqns. (19), (21) and (22) yield the acceleration for a circular
plate of radius R.
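Eqn (22) is easy to verify numerically from the definition (3) of r*; the midpoint rule converges very rapidly for smooth periodic integrands over a full period. A sketch:

```python
import math

def r_star(theta, a, b):
    """Polar radius of the ellipse boundary, eqn (3)."""
    return a*b / math.sqrt((a*math.sin(theta))**2 + (b*math.cos(theta))**2)

def integral_r2(a, b, n=20000):
    """Midpoint-rule evaluation of eqn (22): the integral of r*^2 over [0, 2*pi]."""
    h = 2.0*math.pi/n
    return sum(r_star((i + 0.5)*h, a, b)**2 for i in range(n)) * h
```

For any semi-axes a and b the numerical value agrees with the closed form 2πab, and for a = b = R it reduces to 2πR².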
It can be easily seen from eqns. (18), (19) that Ẅ0 = const. Integrating twice
with respect to time under the initial conditions

W0(0) = 0,   Ẇ0(0) = V0,   (23)

one obtains

Ẇ0 = Ẅ0 t + V0,   (24)

and

W0 = (1/2) Ẅ0 t² + V0 t.   (25)

The motion of the plate stops at t = t1 when Ẇ0(t1) = 0. From eqns. (24) and
(25) one easily obtains that

t1 = −V0/Ẅ0,   Wf = −V0²/(2Ẅ0).   (26)
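Since Ẅ0 is a negative constant, eqns (24)-(26) give the stopping time and final deflection in closed form; a direct transcription:

```python
def stop_time_and_deflection(V0, W0dd):
    """Eqn (26): time t1 when the motion stops and final deflection Wf,
    for a constant (negative) acceleration W0dd of the mode-form motion."""
    assert W0dd < 0.0, "plastic dissipation decelerates the plate"
    t1 = -V0 / W0dd
    Wf = -V0 * V0 / (2.0 * W0dd)
    return t1, Wf

# Example: V0 = 10, W0dd = -40  gives  t1 = 0.25 and Wf = 1.25
```

Note that Wf equals half the initial velocity times the stopping time, as expected for uniformly decelerated motion.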
The initial kinetic energy of the plate is

K0 = (ρ/2) ∫_0^{2π} ∫_0^{r*} h(r, θ) V0² (1 − r/r*)² r dr dθ,   (27)

where h = hj for r ∈ (rj, rj+1); j = 0, ..., n, and Ẇ0(0) = V0. From eqn. (27) one
can define

2K0/V0² = ρ ∫_0^{2π} ( Σ_{j=1}^{n} Bj (hj−1 − hj) + hn r*²/12 ) dθ.   (28)
Making use of eqns. (26)-(28) one obtains the maximal residual deflection

W1 = √3 K0 / ( 2 Σ_{j=0}^{n} M0j ∫_0^{2π} ((rj+1 − rj)/r*³)(r*² + 2r*′² − r* r*″) dθ ).   (29)
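Under a reconstructed form of eqn (29), the θ-integral can be evaluated numerically with r*′ and r*″ obtained by finite differences; for a = b = R the result must reduce to the circular-plate value √3 K0/(4πM0). A sketch for a solid plate of constant thickness:

```python
import math

def W1_uniform(a, b, M0, K0, n=4000):
    """Maximal residual deflection of a solid elliptical plate of constant
    thickness, from a reconstructed form of eqn (29):
        W1 = sqrt(3)*K0 / (2*M0*J),
        J  = integral over [0, 2*pi] of (1 + 2(r*'/r*)^2 - r*''/r*) dtheta,
    with r*(theta) from eqn (3); derivatives by central differences."""
    def r(t):
        return a*b / math.sqrt((a*math.sin(t))**2 + (b*math.cos(t))**2)
    h, eps, J = 2.0*math.pi/n, 1e-5, 0.0
    for i in range(n):
        t = (i + 0.5)*h
        r0, rp, rm = r(t), r(t + eps), r(t - eps)
        d1 = (rp - rm) / (2.0*eps)          # r*'
        d2 = (rp - 2.0*r0 + rm) / eps**2    # r*''
        J += (1.0 + 2.0*(d1/r0)**2 - d2/r0) * h
    return math.sqrt(3.0)*K0 / (2.0*M0*J)

# Circular plate a = b = R: J = 2*pi, so W1 = sqrt(3)*K0/(4*pi*M0).
```

For an elongated ellipse the integral J exceeds 2π, so the residual deflection drops below the circular-plate value for the same K0 and M0.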
4 Discussion
The accuracy of the approximate approach suggested in the present paper is
evaluated for the particular case of a circular plate in fig. 2. Note that for the
comparison the present method is accommodated to the case of a Tresca material.
Fig. 2 presents the maximal permanent deflections of a circular plate simply
supported at the edge versus the impulse. The highest curve 1 in fig. 2 corresponds
to the exact solution for a circular plate made of a Tresca material and subjected
to impulsive loading [2], whereas the curve labelled with triangles is obtained in
the present study. The intermediate curves in fig. 2 are obtained for different
values of the parameter η in the case of a rectangular pressure pulse. In this case
the following notation is used:

η = p0/pc,   V0 = p0τ/(ρh),   λ = ρV0²R²/(M0 h).

Here p0 is the load intensity, pc stands for the static load-carrying capacity and
τ is the time instant when the loading is removed.
Figure 2: Maximal permanent deflections of a simply supported circular plate versus the impulse.
It can be seen from fig. 2 that, in the case of a circular plate subjected to
impulsive loading, the method used in the present study leads to results
comparable to those corresponding to a rectangular pulse of intermediate intensity.
Maximal residual deflections versus γ1 are presented in fig. 3. It is assumed
herein that the step is located at the ellipse with semi-axes a1 = γ1a and
b1 = γ1b, i.e. r1 = γ1r*. Here and henceforth

w1 = 2M0W1/(√3 K0),   Ẇ0(t1) = 0.
Figure 3: Maximal residual deflections versus the step location parameter.
Figure 4:
5 Concluding remarks
The dynamic plastic response of elliptical plates to impulsive and impact loading
was considered. An approximate method for the evaluation of the residual deflections of
plates with elliptical boundaries has been developed. The method can easily be
extended to stepped plates of arbitrary shape with an arbitrary number of steps.
Calculations carried out for elliptical plates showed that the maximal
residual deflections can be remarkably reduced, for a given weight, by
redistributing the material in the plate.
Acknowledgement
The support of the Estonian Science Foundation (grant ETF7461) is
acknowledged.
References
[1] Chakrabarty, J., Applied Plasticity, Springer: Berlin, 1989.
[2] Jones, N., Structural Impact, CUP: Cambridge, 1989.
[3] Jones, N., Recent progress in dynamic plastic behaviour of structures. The
Shock and Vibration Digest, 17, pp. 33-47, 1985.
[4] Kaliszky, S., Dynamic plastic response of structures. Plasticity Today:
Modelling, Methods and Applications, eds. A. Sawczuk, G. Bianchi,
Elsevier: Science Publishers, London, 1984.
[5] Kaliszky, S., Plasticity. Theory and Engineering Applications, Elsevier:
Amsterdam, 1989.
[6] Kaliszky, S. and Logo, J., Layout optimization of rigid-plastic structures
under high intensity, short-time dynamic pressure. Mechanics Based
Design of Structures and Machines, 31, pp. 131-150, 2003.
[7] Lellep, J. & Hein, H., Optimization of clamped plastic shallow shells
subjected to initial impulsive loading. Engineering Optimization, 34, pp.
545-556, 2002.
[8] Lellep, J. & Mürk, A., Inelastic behaviour of stepped square plates. Theories
of Plates and Shells. Critical Review and New Applications, eds. R.
Kienzler, H. Altenbach & I. Ott, Euromech Colloquium 444, Springer:
Berlin, pp. 133-140, 2004.
[9] Lellep, J. & Mürk, A., Inelastic response of axisymmetric plates with
cracks. International Journal of Mechanics and Materials in Design, 3(3),
pp. 237-251, 2006.
[10] Lellep, J. & Mürk, A., Optimization of inelastic annular plates with cracks.
Structural and Multidisciplinary Optimization, 35(1), pp. 1-10, 2008.
[11] Liu, D. & Stronge, W., Deformation of a simply supported plate by central
pressure pulse. International Journal of Solids and Structures, 33(2),
pp. 283-299, 1996.
Abstract
A new computer method for the bi-axial ultimate strength analysis of composite
steel-concrete cross-sections of arbitrary shape subjected to axial force and
biaxial bending moments is developed. An incremental-iterative procedure based
on the arc-length approach is proposed in order to determine, in a unitary
formulation, both interaction diagrams and moment capacity contours,
overcoming the difficulties and inaccuracies of previously published
methods. The procedure adopts a tangent stiffness strategy for the solution of the
nonlinear equilibrium equations, resulting in high-rate and unconditional
convergence. An object-oriented computer program for obtaining the ultimate
strength of composite cross-sections under combined biaxial bending and axial
load was developed. The examples run and the comparisons made have proved the
effectiveness and time savings of the proposed method of analysis.
Keywords: composite cross-sections, ultimate strength, arc-length method,
bi-axial bending.
1 Introduction
In recent years, several methods have been presented for the ultimate strength
analysis of various concrete and composite steel-concrete sections, such as
rectangular, L- and T-shaped, polygonal and circular sections, under biaxial moments and
axial loads [1-5]. Among the existing techniques, two are the most common;
the first consists of a direct generation of points of the failure surface by varying
the position and inclination of the neutral axis and imposing a strain distribution
corresponding to a failure condition. This technique generates the failure surface
doi:10.2495/CMEM110321
Figure 1: Failure surface.
cross-section analysis method employed herein combines the accuracy of the fibre
element analysis, through the use of the path integral approach for numerical
integration of the cross-sectional nonlinear characteristics, with efficiency in
both the failure surface generation procedure, overcoming the difficulties and
inaccuracies of the previously proposed methods, and the post-processing of the
axial force and bending moments obtained at the cross-section level, in order to
check directly that they fulfil the ultimate limit state condition.
2 Mathematical formulation
2.1 Assumptions and problem definition
Consider an arbitrary cross-section subjected to the action of external
bending moments about each global axis and an axial force, as shown in Figure 2. It
is assumed that plane sections remain plane after deformation. This implies
perfect bonding between the steel and concrete components of the composite
cross-section. Thus the resultant strain distribution corresponding to the curvatures
about the global axes Φ = {φx, φy} and the axial compressive strain ε0 can be
expressed at a generic point (x, y) in the linear form:

ε = ε0 + φx y + φy x   (1)
Figure 2: Model of an arbitrary composite cross-section: neutral axis, compression and tension sides, external moments Mx and My, controlling point C(xc, yc), interior and exterior boundaries, ultimate strain εcu.
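The strain plane of eqn (1) can also be anchored at the controlling point where the ultimate strain is attained, as done later in eqn (5); eliminating ε0 shows the two parametrizations agree at every point. A small sketch (the signs of the curvature terms are assumed, since they are ambiguous in the reproduction):

```python
def strain(eps0, phix, phiy, x, y):
    """Linear strain plane over the section, eqn (1) (assumed sign convention)."""
    return eps0 + phix*y + phiy*x

def strain_ultimate(eps_u, phix, phiy, xc, yc, x, y):
    """Eqn (5): the same plane anchored at the controlling point (xc, yc),
    where the ultimate strain eps_u is attained."""
    return eps_u + phix*(y - yc) + phiy*(x - xc)

# The two forms coincide when eps0 = eps_u - phix*yc - phiy*xc:
eps_u, phix, phiy, xc, yc = 0.0035, 1e-5, -2e-5, 120.0, 250.0
eps0 = eps_u - phix*yc - phiy*xc
for (x, y) in [(0.0, 0.0), (100.0, -50.0), (xc, yc)]:
    assert abs(strain(eps0, phix, phiy, x, y)
               - strain_ultimate(eps_u, phix, phiy, xc, yc, x, y)) < 1e-15
```

This elimination of ε0 is exactly what reduces the five-equation system to the curvature unknowns later in the formulation.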
The total curvature Φ and its orientation α are given by

Φ = √(φx² + φy²),   tan α = φy/φx.   (b)

The stress-strain relationship for concrete in compression is

fc = fc″ [2(εc/εc0) − (εc/εc0)²]   for εc ≤ εc0,
fc = fc″ [1 − γ (εc − εc0)/(εcu − εc0)]   for εc0 < εc ≤ εcu.

Figure 3: Stress-strain relationship for concrete in compression (peak stress fc″, degradation parameter γ, strains εc0 and εcu).
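The two-branch law of Figure 3, a parabola up to εc0 followed by a linear descent controlled by the degradation parameter γ, can be written directly; the branches join continuously at εc0. A sketch with illustrative parameter values:

```python
def concrete_stress(eps, fc2, gamma, eps_c0, eps_cu):
    """Two-branch concrete law of Figure 3 (compression positive):
    parabolic up to eps_c0, then a linear drop to fc2*(1 - gamma) at eps_cu.
    Parameter values used below are illustrative only."""
    if eps <= 0.0:
        return 0.0  # no tensile strength in this sketch
    if eps <= eps_c0:
        r = eps / eps_c0
        return fc2 * (2.0*r - r*r)
    if eps <= eps_cu:
        return fc2 * (1.0 - gamma * (eps - eps_c0) / (eps_cu - eps_c0))
    return 0.0
```

At εc0 both branches give fc″, and at εcu the stress has dropped to fc″(1 − γ), matching the two stress levels marked in the figure.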
∫A σ(ε0, φx, φy) dA − N = 0;   ∫A σ(ε0, φx, φy) y dA − Mx = 0;   ∫A σ(ε0, φx, φy) x dA − My = 0   (2)

ε0 + φx yc(φx, φy) + φy xc(φx, φy) − εu = 0

The first three relations represent the basic equations of equilibrium for the axial load
N and the biaxial bending moments Mx, My, respectively, given in terms of the
stress resultants. The last equation represents the ultimate strength capacity
condition; that is, in the most compressed or most tensioned point the ultimate
strain εu is attained; xc(φx, φy) and yc(φx, φy) represent the coordinates of that point.
L1(N, Mx, My) ≡ Mx − λMx0 = 0,   L2(N, Mx, My) ≡ My − λMy0 = 0   (3a)

L1(N, Mx, My) ≡ N − N0 = 0,   L2(N, Mx, My) ≡ Mx − Mx0 = 0   (3b)
where N0, Mx0 and My0 represent the given axial force and bending moments,
respectively.
Figure 4: (a) Interaction diagram for given bending moments; (b) moment capacity contour for given axial force and bending moment.
ε(x, y) = εu + φx (y − yc) + φy (x − xc)   (5)

In this way, substituting the strain distribution given by eqn. (5) into the
basic equations of equilibrium, the unknown ε0, together with the failure
constraint equation, can be eliminated from the nonlinear system (2). Thus the
basic equations of equilibrium together with the linear constraints, eqns. (3a)-(3b),
form a determined nonlinear system of equations (i.e. 5 equations and 5
unknowns):

∫A σ(x, y) dA − N = 0
∫A σ(x, y) y dA − Mx = 0
∫A σ(x, y) x dA − My = 0
L1(N, Mx, My) = 0
L2(N, Mx, My) = 0   (6)
Injecting the constraints (3a) into the system (6) gives

∫A σ(φx, φy) dA − N = 0;   ∫A σ(φx, φy) y dA − λMx0 = 0;   ∫A σ(φx, φy) x dA − λMy0 = 0   (7)

in which the axial load N and the curvatures φx and φy represent the unknowns, and
λ represents the load parameter defining the intensity of the bending moments. If
we regard the curvatures as independent variables in the axial force equation, the
curvatures and the load amplifier factor λ are given by solving the following
nonlinear system of equations:

∫A σ(φx, φy) y dA − λMx0 = 0
∫A σ(φx, φy) x dA − λMy0 = 0   (8)
Δφ = KT⁻¹ F = KT⁻¹ (λ fext − fint)   (11)

where F represents the out-of-balance force vector (Eq. 9) and KT represents the
tangent stiffness matrix of the cross-section:

KT = ∂F/∂φ = [ ∂Mx,int/∂φx   ∂Mx,int/∂φy ;  ∂My,int/∂φx   ∂My,int/∂φy ]   (12)
in which the partial derivatives are evaluated, with respect to the strains and
stresses, at the current iteration k. Assuming the strain distribution given by
eqn. (5), the coefficients of the stiffness matrix can be evaluated symbolically as:

k11 = ∂Mx,int/∂φx = ∂/∂φx ∫A σ(x, y) y dA = ∫A ET y (y − yc) dA
k12 = ∂Mx,int/∂φy = ∂/∂φy ∫A σ(x, y) y dA = ∫A ET y (x − xc) dA
k21 = ∂My,int/∂φx = ∂/∂φx ∫A σ(x, y) x dA = ∫A ET x (y − yc) dA
k22 = ∂My,int/∂φy = ∂/∂φy ∫A σ(x, y) x dA = ∫A ET x (x − xc) dA   (13)
where the coefficients k_ij are expressed in terms of the tangent modulus of elasticity E_T. Thus the incremental curvatures for the next iteration can be written as:
\Phi^{k+1} = \Phi^{k} + \Delta\Phi^{k} \quad (14)
This procedure is iterated until convergence upon a suitable norm is attained. Assuming that a point (λ, Φ) of the equilibrium path has been reached, the next point (λ+Δλ, Φ+ΔΦ) of the equilibrium path is then computed by updating the loading factor and curvatures as:
\lambda \leftarrow \lambda + \Delta\lambda^{k+1}, \qquad \Phi \leftarrow \Phi + \Delta\Phi^{k+1} \quad (15)
In this way, with the curvatures and loading factor known, the axial force resistance N is computed from the resultant strain distribution corresponding to the curvatures Φx and Φy through Equation (7), and the ultimate bending moments Mx and My are obtained by scaling the reference external moments Mx0 and My0 by the current loading factor given by Equation (15). A graphical representation of the process is given in Figure 5.
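The incremental-iterative loop of Eqs. (11)-(14) can be sketched on a fiber discretization. The following is a minimal illustration only: the rectangular section, the elastic-perfectly-plastic stress law and all numeric values are assumptions for demonstration, not the paper's ASEP program or its constitutive model, and the tangent stiffness is approximated by finite differences rather than Eq. (13).

```python
import numpy as np

def make_fibers(b=0.3, h=0.5, nx=20, ny=20):
    # Hypothetical rectangular section split into nx*ny fibers.
    xs = (np.arange(nx) + 0.5) * b / nx - b / 2
    ys = (np.arange(ny) + 0.5) * h / ny - h / 2
    X, Y = np.meshgrid(xs, ys)
    A = np.full(X.size, b * h / (nx * ny))
    return X.ravel(), Y.ravel(), A

def sigma(eps):
    # Placeholder elastic-perfectly-plastic law (NOT the paper's model).
    E, fy = 30e9, 30e6
    return np.clip(E * eps, -fy, fy)

def internal_moments(phi, x, y, A, eps_u=-0.0005, xc=0.0, yc=0.0):
    # Strain plane of Eq. (5): eps = eps_u + phi_x (y - yc) + phi_y (x - xc)
    eps = eps_u + phi[0] * (y - yc) + phi[1] * (x - xc)
    s = sigma(eps)
    return np.array([np.sum(s * y * A), np.sum(s * x * A)])

def solve_curvatures(lam, Mx0, My0, x, y, A, tol=1e-8, itmax=50):
    # Newton iteration: K_T dphi = -F, cf. Eqs. (11) and (14).
    phi = np.zeros(2)
    target = lam * np.array([Mx0, My0])
    for _ in range(itmax):
        F = internal_moments(phi, x, y, A) - target   # out-of-balance forces
        if np.linalg.norm(F) < tol * max(1.0, np.linalg.norm(target)):
            return phi
        KT = np.empty((2, 2))                         # finite-difference K_T
        dp = 1e-7
        for j in range(2):
            p = phi.copy()
            p[j] += dp
            KT[:, j] = (internal_moments(p, x, y, A) - F - target) / dp
        phi -= np.linalg.solve(KT, F)
    raise RuntimeError("Newton iteration did not converge")
```

For moment targets within the elastic range of the placeholder law, the iteration converges in one or two steps, mirroring the quadratic convergence expected from Eq. (13)-based tangent stiffness.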
Figure 5: Equilibrium path in the Mx (or My) versus Φx (or Φy) plane and the corresponding interaction curve; along the path det KT > 0 before the balance point and det KT < 0 after it, with det KT = 0 at the balance point.
2.2.2 Moment capacity contour for given axial force N and bending
moment Mx
In this case, injecting the linear constraints (3b) into the nonlinear system (6) and arranging the system according to the decoupled unknowns, we obtain:
\int_A \sigma(x, y)\, dA - N_0 = 0
\int_A \sigma(x, y)\, y\, dA - M_{x0} = 0
\int_A \sigma(x, y)\, x\, dA - M_y = 0 \quad (16)
N^{int} = \iint \sigma(x, y)\, dx\, dy = \iint \sigma(\xi, \eta)\, d\xi\, d\eta
\iint \sigma(x, y)\, y\, dx\, dy = \iint \sigma(\xi, \eta)\,(\xi \sin\theta + \eta \cos\theta)\, d\xi\, d\eta = M_{\xi}^{int} \cos\theta + M_{\eta}^{int} \sin\theta
\iint \sigma(x, y)\, x\, dx\, dy = \iint \sigma(\xi, \eta)\,(\xi \cos\theta - \eta \sin\theta)\, d\xi\, d\eta = M_{\eta}^{int} \cos\theta - M_{\xi}^{int} \sin\theta \quad (17)
where N^int, Mξ^int and Mη^int are the internal axial force and the bending moments about the ξ and η axes respectively, and can be computed based on Green's path integral approach. The tangent stiffness matrix coefficients are computed in the same way. In order to perform the integral over a given side of the contour, polygonal or circular, of the integration area, the interpolatory Gauss-Lobatto method is used. Though this rule has a lower order of accuracy than the customary Gauss-Legendre rule, it has integration points at each end of the interval, and hence performs better in detecting yielding. However, because the stress field is defined by a step function and there is no continuity in the derivative, the polynomial interpolation can produce significant integration errors. In this case, an adaptive quadrature strategy can be applied. In the context of adaptive quadrature, the Lobatto integration scheme has another advantage over the Legendre scheme: the point corresponding to the left end of one interval is the same as the point corresponding to the right end of the next. Consequently, the cost of evaluating a Lobatto rule is reduced by about one integrand evaluation per interval compared with a Legendre rule. The steel bars are assumed to be discrete points with area Asj, co-ordinates xsj, ysj and stress fsj.
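A minimal sketch of such an adaptive Lobatto scheme (the specific 5-point rule, subdivision test and tolerances here are illustrative assumptions, not the paper's implementation). The key point from the text is visible in the signatures: the endpoint values fa and fb are passed down and reused, so each neighbouring subinterval saves one integrand evaluation compared with a Legendre rule.

```python
def _lobatto5(f, a, b, fa, fb):
    # 5-point Gauss-Lobatto rule on [a, b]; endpoint values fa, fb reused.
    h = (b - a) / 2
    m = (a + b) / 2
    c = h * (3 / 7) ** 0.5
    f1, fm, f2 = f(m - c), f(m), f(m + c)
    I = h / 90 * (9 * (fa + fb) + 49 * (f1 + f2) + 64 * fm)
    return I, m, fm

def adaptive_lobatto(f, a, b, tol=1e-8, fa=None, fb=None, depth=0):
    # Subdivide where the one-panel estimate disagrees with the two-panel
    # estimate; sharing fa/fb/fm avoids re-evaluating shared endpoints.
    if fa is None:
        fa = f(a)
    if fb is None:
        fb = f(b)
    I1, m, fm = _lobatto5(f, a, b, fa, fb)
    IL, _, _ = _lobatto5(f, a, m, fa, fm)
    IR, _, _ = _lobatto5(f, m, b, fm, fb)
    if abs(IL + IR - I1) < tol or depth > 30:
        return IL + IR
    return (adaptive_lobatto(f, a, m, tol / 2, fa, fm, depth + 1)
            + adaptive_lobatto(f, m, b, tol / 2, fm, fb, depth + 1))
```

Because the rule has nodes at both interval ends, a step-like stress distribution triggers subdivision exactly around the yield discontinuity, which is the behaviour the text relies on for detecting yielding.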
3 Computational example
Based on the analysis algorithm just described, a computer program ASEP has been developed to study the biaxial strength behaviour of arbitrary concrete-steel cross-sections. In order to demonstrate the validity, accuracy, unconditional convergence and efficiency of the analytic procedure developed here, the interaction diagrams and moment capacity contours of a rectangular cross-section with asymmetrically placed structural steel (Fig. 6(a)) are determined and compared with the numerical procedure developed in [2].
The characteristic strength for concrete in compression is fc = 31.79 MPa, and a stress-strain curve consisting of a parabolic and a linear part was used in the calculation, with crushing strain ε0 = 0.002 and ultimate strain εcu = 0.0035. The Young's modulus for all steel sections was 200 GPa, while the maximum strain was εsu = 1%. The yield strength of the steel reinforcing bars is fy = 420 MPa, whereas for the structural steel the following values have been considered: fy,flange = 255 MPa and fy,web = 239 MPa. In order to demonstrate the unconditional convergence of the algorithms developed in the current paper, the cross-section has been analysed by drawing the interaction diagrams and moment capacity contours for axial loads near the axial load capacity, considering both the geometric and the plastic centroid of the cross-section. Convergence problems were experienced by Chen et al. [2] in this portion of the moment capacity contour when the geometric centroid of the cross-section was taken as the reference loading axis.
Figure 6: (a) Rectangular cross-section with asymmetrically placed structural steel; (b).
Figure 7:
Figure 8:
4 Conclusions
A new computer method based on an incremental-iterative arc-length technique has been presented for the ultimate strength analysis of composite steel-concrete cross-sections subjected to axial force and biaxial bending. Compared with existing methods, the proposed approach is general and complete: it can determine both interaction diagrams and moment capacity contours; it is fast, since the diagrams are calculated directly by solving, at each step, just two coupled nonlinear equations; and it assures convergence for any load case, even near the state of pure compression or tension, being insensitive to the initial/starting values, to the choice of origin of the reference loading axes, and to the strain-softening effect for the concrete in compression. Furthermore, the proposed method as formulated can be applied to provide directly the ultimate resistances of the cross-section when one or two components of the section forces are known, without the need of knowing in advance the whole interaction diagram.
Acknowledgement
The writer gratefully acknowledges the support of the Romanian Research Foundation (CNCSIS Grant PNII-IDEI No. 193/2008) for this study.
References
[1] Rodrigues, J.A. & Aristizabal-Ochoa, J.D. Biaxial interaction diagrams for
short RC columns of any cross section, Journal of Structural Engineering,
ASCE, 125(6): 672-683, 1999.
[2] Chen S.F., Teng J.G., Chan S.L. Design of biaxially loaded short composite
columns of arbitrary section, Journal of Structural Engineering, ASCE,
127(6), 678-685, 2001.
[3] Sfakianakis, M.F., Biaxial bending with axial force of reinforced, composite
and repaired concrete cross sections of arbitrary shape by fiber model and
computer graphics, Advances in Engineering Software, 33, 227-242, 2002.
[4] Rosati, L., Marmo, F., Serpieri, R., Enhanced solution strategies for the
ultimate strength analysis of composite steel-concrete sections subject to
axial force and biaxial bending, Computer methods in applied mechanics
and engineering, 197(9-12), 1033-1055, 2008.
[5] Charalampakis, A.E., Koumousis, V.K., Ultimate strength analysis of
composite sections under biaxial bending and axial load, Advances in
Engineering Structures, 39(11), 923-936, 2008.
[6] Crisfield, M.A. Non-linear finite element analysis of solids and structures,
Wiley, Chichester, 1991.
[7] Chiorean, C.G., A fast incremental-iterative procedure for inelastic analysis of RC cross-sections of arbitrary shape, Acta Technica Napocensis, 47, 85-98, 2004.
Section 5
Composite materials
Abstract
This paper describes a method to recycle waste polyurethane rubbers. Waste rubber from the manufacture of rubber products, and most used rubber products, have harmful effects on the environment. In the case of larger rubber products, the waste rubbers and the used products have been recycled. However, the waste rubbers of smaller rubber products and the used products are not recycled but incinerated. An aseismatic mat is one such smaller product, made of polyurethane rubber. Large numbers of these mats are manufactured in Japan, so large amounts of waste rubber are produced and incinerated. In order to recycle the waste polyurethane rubbers, we created a composite cube as a structural material using the waste rubbers. The composite cube measured 15 mm on a side. Unsaturated polyester was used as a matrix, and the waste polyurethane rubbers were put into the unsaturated polyester. The waste polyurethane rubbers were minced, and the minced rubbers were made into a ball. Composite cubes containing different quantities of the rubbers were prepared. Static compression tests were carried out on the composite cubes and on a cube made only of unsaturated polyester. Comparison of the load-displacement curves shows that the maximum load of the composite cube containing 0.1 g of rubber is larger than that of the rubber-free cube, whereas the maximum load of the cube containing 0.5 g of rubber is not. It is therefore possible for polyurethane rubber to reinforce unsaturated polyester if the rubber is minced, made into a ball and used in an appropriate amount.
Keywords: waste rubber, recycling, composite, compression property.
doi:10.2495/CMEM110331
1 Introduction
Waste rubber from the manufacture of rubber products, and most used rubber products, have harmful effects on the environment. In the case of larger rubber products, which are usually manufactured by big companies, the waste rubbers and the used rubber products have been recycled [1, 2]. For example, waste tire rubber is used as a pavement material. In the case of smaller rubber products, which are usually manufactured by small companies, the waste and the used smaller products are not recycled but incinerated.
An aseismatic mat is one such smaller product, made of polyurethane rubber. The mat was developed to secure home electric appliances and furniture in quake-prone Japan. A lot of aseismatic mats are manufactured in Japan, so a lot of waste polyurethane rubber is produced and incinerated. It is difficult to recycle, because the polyurethane rubber is thermosetting and cannot be re-thermoformed. Additionally, the waste rubbers are too small and too varied in shape and size to be recycled.
In order to recycle the waste polyurethane rubbers, we created a composite cube as a structural material using the waste rubbers. Unsaturated polyester was used as a matrix, and the waste polyurethane rubbers were put into the unsaturated polyester. If the compression properties of the polyurethane rubber/unsaturated polyester composite cubes are better than those of a rubber-free cube, the composite cubes can serve to recycle the waste polyurethane rubbers.
In this study, therefore, compression tests were carried out to examine the compression properties of the composite cubes and the rubber-free cube.
2 Composite cubes
The composite cube measured 15 mm on a side; unsaturated polyester was used as a matrix, and the waste polyurethane rubbers were put into the unsaturated polyester. In this section, the characteristics of the polyurethane rubber and the way the waste rubbers were put into the matrix are described. Then a procedure for preparing the composite cubes is proposed.
2.1 Waste polyurethane rubbers
Aseismatic mats are manufactured in Japan and a lot of waste polyurethane rubber is produced, as shown in Figure 1(a). The polyurethane rubber is thermosetting and cannot be re-thermoformed. Moreover, all of the waste rubbers differ in shape and size, as shown in Figure 1(b). To solve these problems, we focused on two characteristics of the polyurethane rubber: rubber elasticity and high adhesiveness. After the waste polyurethane rubbers were minced, the minced rubber could be made into a ball because of the high adhesiveness. It was thought that the behaviour of the balled-up rubber would be the same as the rubber elasticity of a solid polyurethane rubber when put into the unsaturated polyester.
Figure 1: (a) Production flow of aseismatic mats (products, cutting, waste rubbers); (b) waste rubbers.
A 0.1 g balled-up rubber, a 0.3 g balled-up rubber and a 0.5 g balled-up rubber were prepared, as shown in Figure 2, because the quantity of minced rubber could be adjusted. The different quantities of balled-up rubber were put into unsaturated polyester, and composite cubes containing different quantities of rubber were prepared.
Figure 2: Balled-up rubbers of 0.1 g, 0.3 g and 0.5 g (scale bar: 10 mm).
Figure 3: (a) A section of a composite cube (15 mm x 15 mm; balled-up rubber embedded in unsaturated polyester); (b) the prepared specimens: a rubber-free cube and composite cubes containing 0.1 g, 0.3 g and 0.5 g of minced rubber, at high and low adhesive levels.
It was expected that the high-adhesive rubber/unsaturated polyester composite cubes would have higher compression properties than the low-adhesive rubber/unsaturated polyester composite cubes.
3 Compression test
Static compression tests were carried out on the cubes described in Section 2. A hydraulic universal testing machine (SHIMADZU CORPORATION Model UH-500kNI) was used for the static compression tests. The test speed was set to 2 mm/min and the stroke to 2 mm. The mechanical model of the compression test and a photo of the testing situation are shown in Figure 4. A lubricant was applied to the upper and lower planes of the cubes to reduce friction against the upper testing head and the testing platform.
Figure 4: (a) Mechanical model of the compression test (compression load on the balled-up rubber/unsaturated polyester cube, load cell, model of boundary); (b) testing situation (a composite cube as shown in Figure 3 on the testing platform).
Figure 5: Results for the rubber-free cube and the composite cubes containing 0.1 g, 0.3 g and 0.5 g of rubber.
(b) Contained 0.5 g balled-up rubbers: fractures and rubber visible at the planes of loading and the side plane of the cube.
Figure 7: Schematic of the fracture mechanism: under compression load, the balled-up rubber in the unsaturated polyester acts as a slide constraint, producing plastic strain and fractures.
Figure 8: Minced rubbers and voids (zoomed view).
Figure 9: (a) Cube containing high-adhesive rubbers; (b) cube containing low-adhesive rubbers.
5 Conclusion
Polyurethane rubber can reinforce unsaturated polyester if the rubber is minced, made into a ball and used in an appropriate quantity. That quantity is 0.1 g in the case of composite cubes measuring 15 mm on a side. It is thought that waste polyurethane rubbers can thus be recycled as a structural material.
Acknowledgements
We thank SHINAGAWA Co., Ltd. (Rubber Manufacturing) and RODAN21 Co.,
Ltd. (Manufacturing Coordination) for the provision of waste polyurethane
rubbers.
References
[1] Masahito FUJII, Kenji HIMENO, Kenichi KOUGO & Masato
MURAYAMA, Physical Properties of Asphalt Rubber Mixtures, 6th ICPT,
Sapporo, Japan, July, 2008.
[2] Kenzo FUKUMORI & Mitsumasa MATSUSHITA, Material Recycling Technology of Crosslinked Rubber Waste, R&D Review of Toyota CRDL, Vol. 38, No. 1, pp. 39-47, 2003.
Abstract
The interaction diagram is a surface which defines the maximum capacity of compression members subjected to axial force and bending moments. These diagrams therefore provide engineers with an additional tool for the design of such members. When compression members are confined with FRP their capacity increases; however, in many cases the increase in capacity is neglected, which can lead to very conservative designs. This work develops interaction diagrams for circular compression members confined with CFRP using the fiber model. The longitudinal reinforcement is considered to be symmetric, whereas the confinement can vary. The method presented herein defines the location of the neutral axis and, based on that, calculates the axial force and bending moment. A comparison of the unconfined and confined sections shows a considerable difference in the interaction diagram in the compression-controlled region.
Keywords: interaction diagrams, confinement, section equilibrium, RC section strength.
1 Introduction
The analysis of concrete columns using an analytical solution is not trivial. As a result, the analysis of columns is essentially reduced to developing the interaction diagram and plotting the load condition in order to determine whether the section fails. Normally, confinement for compression reinforced concrete sections is provided either by ties or spirals. In recent years, however, other methods and materials have been used which can provide increased confinement and thus satisfy the requirement for increased ductility. Column wrapping with CFRP composites is a popular alternative for improving the
doi:10.2495/CMEM110341
shape is the most effective for confinement, while the square with sharp corners is the least effective of the three. Teng et al. [11] wrapped bridge columns in the field using FRP wraps; laboratory specimens were also tested, with the columns exhibiting ductile behavior. Shahawy et al. [12] tested standard concrete cylinders wrapped with carbon fiber fabrics in an epoxy matrix. The results varied with the number of carbon layers applied: for an unconfined concrete strength of 41.4 MPa, the confined strength of the cylinders increased to 70 MPa for the 1-layer wrap and 110 MPa for the 4-layer wrap. The ultimate strain was 0.007 for the 1-layer wrap and 0.016 for the 4-layer wrap. Prefabricated FRP tubes can be filled with concrete and serve at the same time as formwork, flexural reinforcement and confinement reinforcement. Davol et al. [13] tested prefabricated round shells filled with concrete in flexure with satisfactory results; the concrete-filled FRP shells exhibited ductile behavior. Michael et al. [1] used a light CFRP composite grid to confine concrete. Through a series of cylinder tests they found that the grid provides light confinement: the crushing strain of the confined concrete was twice that of the unconfined concrete tested. Michael et al. [1] also used the CFRP composite grid in a series of flexural members and obtained improvements in member ductility of more than 30% with minimal confinement reinforcement.
2 Interaction diagrams
The interaction diagram (fig. 1) is a graphical representation of the ultimate capacity of a column subjected to axial load (Pn) and uniaxial bending (Mn).
Figure 1: Interaction diagram, showing Po, Pb and Pt and Points #1-#4 on the Pn-Mn plane.
3.3.1 Concrete
The stress-strain relation for the concrete used in this work is represented by the parabola defined by Hognestad, as defined in the literature [14]. The tensile part of the curve is neglected. In order to define the curve it is required to have the concrete strength (fc), the strain at peak stress (εo), and the concrete modulus of elasticity (Ec).
3.3.2 Steel
The stress-strain relation is assumed to be elastic-plastic and the same in tension and compression [14]. In order to define this curve it is required to define the steel yield stress (fy) and the modulus of elasticity of steel (Es).
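The two material laws above can be sketched as simple functions. This is an illustrative sketch only: the numeric defaults are assumptions, and the linear post-peak branch falling to 0.85 fc at an assumed εcu follows the classic Hognestad formulation rather than any values given in this paper.

```python
def concrete_stress(eps, fc=40.0, eps_o=0.002, eps_cu=0.0038):
    # Hognestad parabola in compression (strain taken positive in
    # compression); the tensile branch is neglected, as in the text.
    if eps <= 0.0:
        return 0.0                      # tension neglected
    if eps <= eps_o:
        r = eps / eps_o
        return fc * (2.0 * r - r * r)   # ascending parabola
    if eps <= eps_cu:                   # assumed linear descending branch
        return fc * (1.0 - 0.15 * (eps - eps_o) / (eps_cu - eps_o))
    return 0.0

def steel_stress(eps, fy=420.0, Es=200000.0):
    # Elastic-perfectly-plastic, identical in tension and compression.
    return max(-fy, min(fy, Es * eps))
```

Both functions take strain and return stress in MPa, which is all the section-integration step needs from the material models.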
3.3.3 Experimental confinement model
Most models for concrete confined with CFRP reinforcement are based on the fact that in most cases even one layer of carbon fabric, or a carbon jacket, provides enough reinforcement to produce highly confined concrete. The confinement effectiveness is therefore high, leading to failure of the CFRP jacket or encasement at the peak axial stress. When the CFRP grid is used as confinement reinforcement, the confining pressure and confinement effectiveness are low, and therefore models developed using data from relatively highly confined concrete may not be adequate. To model the behavior of CFRP grid confined concrete, existing models were used. Existing models are based on a constant thickness of the FRP material covering the entire surface area of the confined concrete core.
Michael et al. [1] used the modified Hognestad stress-strain curve to model the behavior of CFRP grid confined concrete, as shown in fig. 2 [1]. In fig. 2, εc is the concrete strain.
Figure 2: Modified Hognestad stress-strain curve. Region AB: f_c = f''_c \left[ 2(\varepsilon_c/\varepsilon_o) - (\varepsilon_c/\varepsilon_o)^2 \right]; Region BC: f_c = f''_c \left[ 1 - D_c(\varepsilon_c - \varepsilon_o) \right], from point B at (\varepsilon_o, f''_c) up to the ultimate strain \varepsilon_{cu}.
Figure 3: Stress versus axial strain (mm/mm) response (strain from 0.000 to 0.015).
the failure curve of the section. The calculation of individual points ensures equilibrium of the section and includes:
- the definition of the neutral axis location;
- the calculation of the plastic centroid;
- the definition of the strain plane over the entire section;
- the calculation of strains using compatibility, and of the corresponding stresses based on the stress vs strain relation;
- the integration of stresses over the section to calculate the axial force and the bending moment.
4.1 Neutral axis location
The neutral axis location is calculated using the values of the maximum compressive strain in the concrete, εcu, and a variable value for the strain in the extreme reinforcing steel fiber, εst. Each combination of strains (εcu, εst) defines a strain distribution over the section at failure, and thus a point on the interaction diagram (axial load vs bending moment). The calculation of the neutral axis in a circular section can take advantage of the symmetry of the section. One point on the section (P1) is assigned the maximum compressive strain and is considered the extreme concrete compression fiber. The extreme steel fiber is located at the steel bar at the maximum distance from the extreme compression fiber. With the locations of the two extreme fibers and the values of the corresponding strains, the neutral axis can be defined as shown in fig. 4.
Figure 4: Definition of the neutral axis from the extreme-fiber strains -εcu and εst (points P1, P2 and P3; gc and pc denote the geometric and plastic centroids).
dF_i = \sigma_i\, dA_i \quad (1)
P = \sum dF_i = \int_A \sigma\, dA \quad (2)
M = \sum dF_i\, x_i = \int_A \sigma\, x\, dA \quad (3)
These points are calculated independently and they define three sub-regions on the diagram. For each point, the important quantity is the value of the net tensile strain at the extreme tension reinforcement fiber. The strains at Point #1, Point #3 and Point #4 are known directly from material properties; the strain at Point #2 has to be calculated. Point #2 represents the point with zero axial load; however, the strain in the extreme reinforcement bar there is not known. As a result, an iterative convergence procedure (secant method)
is used to calculate the strain in the extreme steel fiber when the axial load equals zero. Once the strains for the boundary points of the sub-regions are defined, the diagram can be generated by assigning different values of strain to the extreme steel fiber in each sub-region, thus calculating intermediate points within the sub-regions of the interaction diagram. Fig. 5 shows the flowchart of the numerical procedure.
Figure 5: Flowchart of the numerical procedure. Calculate Point #4 (net compressive force), Point #1 (net tensile force), Point #2 (zero axial force, by secant iteration) and Point #3 (balance point), logging the steel strain at each. Then calculate intermediate points using steel strain values between Points 1 and 2 (Sub-Region #1), between Points 2 and 3 (Sub-Region #2), and between Points 3 and 4 (Sub-Region #3). Finally, plot Pn vs Mn.
5 Example
The presented procedure has been used to develop the interaction diagram of different sections. Fig. 6 shows the interaction diagram of the same section with three different levels of concrete strength. The inner line shows the unconfined section, whereas the intermediate line shows the section with the confinement strength described in section 3.3.3 (experimental results). The outer line shows the section using the same confinement model as in section 3.3.3 but with a different value of the maximum compressive strength. The data are as follows:
- section diameter: 25 cm
- reinforcement: 10 bars (dia = 16 mm) distributed uniformly
- maximum compressive strain (unconfined): 0.003 mm/mm
- maximum compressive strain (confined): 0.00725 mm/mm
- maximum compressive strength: 40 MPa (unconfined), 54 MPa (confined, section 3.3.3), 60 MPa (outer curve)
Figure 6: Interaction diagrams (axial load in kN versus moment in kNm) for the three levels of maximum compressive strength.
5.2 Discussion
Looking at the plots in fig. 6, there is a clear trend as the value of the maximum compressive strength is increased. Specifically, the plots look virtually the same in the tension-controlled regions and diverge in the compression-controlled regions as the maximum compressive strength increases. The maximum compressive strength obviously increases as the level of confinement increases. It is interesting to point out that the value of the maximum compressive strain, εcu, does not have an effect on the interaction diagram. Therefore the increase in ductility gained from confinement does not play a significant role in the maximum capacity of the section. The decisive factor affecting the section capacity is the maximum compressive strength.
6 Conclusions
The following conclusions have been drawn at the end of this work:
- Confinement increases the maximum compressive strength of the section.
- An increase in the confinement reinforcement increases the capacity of the section in the compression-controlled region.
- Confinement significantly affects the capacity of the section when the section is in the compression-controlled region (pure compression to balance point).
- The effect of confinement is small in the region between pure bending and the balance point.
- Confinement has no effect in the region between pure tension and pure bending, since the concrete is primarily in tension; the presence of reinforcement in the hoop direction therefore offers no improvement in concrete strength.
References
[1] Michael, A. P., H. R. Hamilton III, Ansley, M. H, Concrete Confinement
Using Carbon Fiber Reinforced Polymer Grid, 7th International Symposium
on Fiber Reinforced Polymer (FRP) Reinforcement for Concrete Structures
(ACI 2005 Fall Convention), American Concrete Institute, Kansas City,
MO, Vol. 2, pp. 991-1010, 2005.
[2] Bresler, B., Design Criteria for Reinforced Concrete Columns under Axial
Load and Biaxial Bending, ACI Journal, Proceedings, Vol. 57, 1960.
[3] Parme, A. L., Nieves, J.M., Gouwens, A., Capacity of Reinforced
Rectangular Columns Subjected to Biaxial Bending, ACI Journal,
Proceedings, Vol. 63, No 9, 1966.
[4] Xiao, Y. and Wu, H., Compressive Behavior of Concrete Confined by
Carbon Fiber Composite Jackets, Journal of Materials in Civil Engineering,
Vol. 12, No 2, pp. 139-146, 2000.
[5] Xiao, Y. and Wu, H., A Constitutive Model for Concrete Confinement with
Carbon Fiber Reinforced Plastics, Journal of Reinforced Plastics and
composites, Vol. 22, No 13, pp. 1187-1201, 2003.
[6] Lam, L., and Teng, J. G., Ultimate Condition of Fiber Reinforced Polymer-Confined Concrete, Journal of Composites for Construction, Vol. 8, No 6, pp. 539-548, 2004.
[7] Li, Y., Lin, C. and Sung, Y., Compressive Behavior of Concrete Confined
by Various Types of FRP Composite Jackets, Mechanics of Materials, Vol.
35, No 3-6, pp. 603-619, 2002.
[8] Harries, K. A., and Kharel, G., Experimental Investigation of the Behavior
of Variably Confined Concrete, Cement and Concrete Research, Vol. 33,
No 6, pp. 873-880, 2002.
Abstract
The aim of the current study is to develop a constitutive model which captures the full orthotropic behaviour of a laminated composite by employing 9 material parameters, while also taking strain-rate sensitivity into account. The formulation is an extension of the work by Ogihara and Reifsnider (DOI: 10.1023/A:1016069220255), whose model considers 4 parameters, with the inclusion of strain-rate effects using the method employed by Thiruppukuzhi and Sun (DOI: 10.1016/S0266-3538(00)00133-0).
A plastic potential function which can describe plasticity in all directions, including fibre plasticity, is chosen, and using an associated flow rule the plastic strain-rate components are derived. The plastic compliance matrix is assembled using a rate-dependent visco-plastic modulus. The elastic compliance matrix is combined with its plastic counterpart to give a rate-form constitutive law.
It is found that the proposed model accounts for strain-rate dependence and, by correct choice of model parameters, can also be used for various composite architectures, including woven and uni-directional ones.
Keywords: Composites, orthotropic, constitutive modelling, strain-rate effects.
1 Introduction
The formulation of a comprehensive constitutive model is imperative for the proper understanding of a material's behaviour under different loading conditions. This includes fibre-reinforced laminated polymeric composites. For efficient use of composite materials under extreme loading, it is necessary to
doi:10.2495/CMEM110356
In fact, Ogihara and Reifsnider [4] expanded the work of Sun and Chen by using a more general plastic potential with four unknown parameters. These parameters were determined from a number of simple tension experiments at different specimen angles. The effective stress and effective plastic strain were found for each test angle, and the parameter combination caused all effective stress-effective plastic strain curves to collapse onto a single master curve.
The concept of a master curve has been shown to be valid for various materials by Sun, both in his original 1989 work [2] and in subsequent works [5-8]. Non-linearity is expressed by a function representing the master curve. Sun and Chen [2] had proposed a power law relating effective stress to effective plastic strain:
\bar{\varepsilon}^p = A\, \bar{\sigma}^n \quad (1)
with a rate-dependent coefficient
A = A(\dot{\bar{\varepsilon}}^p) \quad (2)
Thus:
\bar{\varepsilon}^p = A(\dot{\bar{\varepsilon}}^p)\, \bar{\sigma}^n \quad (3)
More recently, Hufner and Accorsi [9] extended the four-parameter plastic potential function of Ogihara and Reifsnider to include strain-rate effects using the strain-rate-dependent power law of Thiruppukuzhi and Sun described above.
It should be noted that the non-linear behaviour is characterised at a macroscopic level. Although it is desirable to predict the non-linear response of woven composites on a micromechanical scale, i.e., based on the properties of the fibres and the matrix, many workers have shown that this is difficult to achieve, even for uni-directional composites [4].
1.2 Choice and method of formulation
The present work extends the formulation of Ogihara and Reifsnider from a model with 4 parameters to one considering all possible parameters (i.e., 9), taking into account strain-rate effects as in the work of Thiruppukuzhi and Sun.
(4)
Using appropriate values for the various parameters, the function can be used to describe a range of material systems. For example, setting the appropriate parameters to 0 and 1 reduces the above potential function to the one-parameter potential of Sun and Chen [2] for uni-directional composites. Similarly, setting the relevant parameters to 0 gives the function used by Thiruppukuzhi and Sun [7] for woven composites.
The generalised anisotropic constitutive equations, in rate form, are expressed
as follows:
(5)
(6)
The strain-rate tensor is decomposed into two components, namely the elastic
and the plastic strain-rate components:
(7)
Thus, the compliance matrix is expressed as the sum of the elastic and plastic
components:
(8)
Each part of the compliance matrix will be derived in turn in the following
sections.
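A minimal numerical sketch of eqns (7) and (8): the strain rate splits additively into elastic and plastic parts, so the total compliance is the sum of the elastic and plastic compliance matrices. The plane-stress layout and all material values below are assumptions made purely for illustration.

```python
import numpy as np

# Illustrative plane-stress orthotropic elastic compliance (moduli in MPa).
E1, E2, G12, nu12 = 140e3, 10e3, 5e3, 0.3
S_e = np.array([[1 / E1, -nu12 / E1, 0.0],
                [-nu12 / E1, 1 / E2, 0.0],
                [0.0, 0.0, 1 / G12]])

# A symmetric plastic contribution; in the full model it depends on the
# stress state and strain rate, here it is held constant for illustration.
S_p = np.array([[2e-6, 1e-6, 0.0],
                [1e-6, 4e-6, 0.0],
                [0.0, 0.0, 8e-6]])

S = S_e + S_p                          # eqn (8): total compliance
stress_rate = np.array([100.0, 20.0, 30.0])
strain_rate = S @ stress_rate          # rate-form constitutive relation
elastic_part = S_e @ stress_rate       # eqn (7): additive decomposition
plastic_part = S_p @ stress_rate
```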
(9)
This matrix is symmetric, i.e. it contains only nine independent constants, since:
(10)
(12a)
(12b)
(12c)
(12d)
(12e)
(12f)
The proportionality-factor rate is derived using the equivalence of the rate of plastic work, namely:
(13)
(14)
and thus:
(15)
This implies that
(16)
(17)
(18)
(19)
(20)
(21)
(22)
(23)
Thus:
(24)
(25)
(26)
A similar procedure yields the rest of the terms in the plastic compliance matrix, which are presented in the Appendix. It is noted that the plastic compliance matrix is also symmetric.
Acknowledgements
This work is part of a research project jointly funded by the Defence Science and
Technology Laboratory (DSTL) and the Engineering and Physical Sciences
Research Council (EPSRC).
Appendix
The complete list of the elasto-plastic compliance matrix terms is given below:
References
[1] Hahn, H.T. and S.W. Tsai, Nonlinear elastic behaviour of unidirectional
composite laminae. Journal of Composite Materials, 1973. 7(1): p. 102-118.
[2] Sun, C.T. and J.L. Chen, A simple flow rule for characterizing nonlinear behaviour of fibre composites. Journal of Composite Materials, 1989. 23(10): p. 1009-1020.
[3] Hill, R., A theory of the yielding and plastic flow of anisotropic metals.
Proceedings of the Royal Society of London. Series A. Mathematical and
Physical Sciences, 1948. 193(1033): p. 281-297.
[4] Ogihara, S. and K. L. Reifsnider, Characterization of nonlinear behavior in
woven composite laminates. Applied composite materials, 2002. 9(4):
p. 249.
[5] Thiruppukuzhi, S.V. and C.T. Sun, Testing and modeling high strain rate
behavior of polymeric composites. Composites Part B: Engineering, 1998.
29(5): p. 535-546.
[6] Weeks, C.A. and C.T. Sun, Modeling non-linear rate-dependent behavior
in fiber-reinforced composites. Composites Science and Technology, 1998.
58(3-4): p. 603-611.
Section 6
Detection and signal processing
(Special session organised by Prof. A. Kawalec)
Abstract
A method of signal synthesis is presented. After a theoretical introduction, an algorithm for the synthesis of frequency-modulated (FM) signals is presented. Simulation results obtained in Matlab are presented in the last section.
Keywords: NLFM, signal synthesis, autocorrelation function.
1 Introduction
This paper presents the problem of the synthesis of frequency-modulated signals whose autocorrelation function implements an optimal approximation to a given autocorrelation function.
The output signal of the matched filter is proportional to the autocorrelation function of the expected signal. Because of that, one would like to use a signal whose autocorrelation function R(τ) is similar to a certain ideal one, R_opt(τ), in the sense of a criterion that provides a desirable property or properties. In this case it is the squared criterion of similarity
d = ∫ |R(τ) − R_opt(τ)|² dτ. (1)
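A short numerical sketch of evaluating such a squared similarity criterion, assuming the form d = ∫|R(τ) − R_opt(τ)|² dτ; the example autocorrelation shapes are illustrative, and the integral is approximated by a rectangle rule on a uniform grid.

```python
import numpy as np

def similarity_criterion(R, R_opt, tau):
    # d = integral of |R - R_opt|^2 over tau (uniform grid, rectangle rule).
    dtau = tau[1] - tau[0]
    return np.sum(np.abs(R - R_opt) ** 2) * dtau

tau = np.linspace(-1.0, 1.0, 2001)
R_opt = np.sinc(10 * tau)                 # an illustrative "ideal" autocorrelation
R = np.sinc(10 * tau) * np.exp(-tau**2)   # a candidate autocorrelation
d = similarity_criterion(R, R_opt, tau)
d_self = similarity_criterion(R_opt, R_opt, tau)
```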
doi:10.2495/CMEM110361
(2)
(3)
(4)
(5)
Figure 1: Flow diagram of the synthesis algorithm: the main program determines the zero-approximation signal x_0 from the quality criterion (formulas (i)-(iv)), and the iterative program increases the accuracy of the result.
(6)
(7)
If the signal does not have the desired properties, the signal x is changed (moving inside the set X), and the distance d(x, y) is determined again; this task is carried out by the iterative program from fig. 1. Successive distance values should form a decreasing sequence
d_1 > d_2 > d_3 > …
This operation is repeated until the minimum of eqn (6) is achieved. Both algorithms were implemented in Matlab using numerical methods.
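The decreasing distance sequence can be sketched with a generic accept-only-if-improved iteration. This is a stand-in for the paper's iterative program (which operates on the modulation function), not its actual algorithm; the target function and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def distance(x, y):
    # Discrete analogue of the squared criterion of eqn (6).
    return float(np.sum(np.abs(x - y) ** 2))

t = np.linspace(-1.0, 1.0, 128)
y = np.exp(-20.0 * t**2)      # desired function (illustrative)
x = np.zeros_like(y)          # zero approximation

d_values = [distance(x, y)]
for _ in range(200):
    # Perturb the current signal (a move inside the set X) ...
    candidate = x + 0.05 * rng.standard_normal(x.shape)
    # ... and keep the step only if the distance decreases.
    if distance(candidate, y) < d_values[-1]:
        x = candidate
        d_values.append(distance(x, y))
```

By construction the stored values form a strictly decreasing sequence, mirroring the behaviour required of the iterative program.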
The task of the signal synthesis is to synthesise the frequency modulation function c(t) that implements the best approximation to the desired signal y(t). Suppose that the elements of the set of possible signals X are given as
x(t) = B(t)·e^{jφ(t)} for |t| ≤ T/2, (8)
where: T is the duration of the pulse, B(t) the signal envelope, and φ(t) the phase modulation function.
The spectrum of the signal x(t) is given by
X(ω) = b_x(ω)·e^{jφ_x(ω)}, (9)
(10)
(11)
(12)
(13)
(14)
(15)
(16)
In order to determine the phase spectrum of the signal x(t) one must resolve the integral
X(ω) = ∫_{−T/2}^{T/2} B(t)·e^{j[φ(t) − ωt]} dt. (17)
Using the method of stationary phase [2, 4] one obtains a solution of the form
(18)
(19)
3 Simulation results
With given functions a(ω) and B(t), the phase function φ(t) (or φ_x(ω)) of the signal can be determined.
In order to verify the correctness of the programs' performance, the synthesis of a signal with a non-linear frequency modulation (NLFM) was made. The signal has a bell-shaped amplitude spectrum, given by
(20)
with the parameter values 200 and T = 4.
The goal of this simulation was to confirm the correctness and usefulness of the iterative procedure for the synthesis of a signal having a non-linear frequency modulation function. On the basis of the given amplitude spectrum, a signal with a non-linear frequency modulation function was obtained (fig. 2).
Figure 2: The frequency modulation function c(t).
Figure 3: The autocorrelation function R(t) [dB].
Figure 4: The synthesised signal x_opt.
Figure 5: The autocorrelation function R(t) [dB] after iterations.
The results of the second (iterative) program are presented next. After the execution of thirty iterations a significant improvement of the properties of the autocorrelation function can be seen (fig. 5 and fig. 6).
In fig. 5 and fig. 6 one can see the effectiveness of the iterative method. After thirty iterations the level of the first side lobe decreased by 7 dB (to −31 dB). In fig. 6 one can see that the power spectral density (PSD) of the thirtieth iteration rises much more slowly than after the first iteration, while at the same time both PSDs overlap in the 50-50 region.
Figure 6: Power spectral density after the first iteration and after thirty iterations ( 200 ).
Figure 7: The PSD function after a chosen number of iterations.
Fig. 7 shows how the iterative method changes the PSD function in a chosen
number of iterations. The curve of the thirtieth iteration is filtered to improve the
clarity of the figure.
The final autocorrelation function, after 150 iterations, and the PSD are
shown in fig. 8 and fig. 9 respectively.
Figure 8: The final autocorrelation function R(t) [dB] after 150 iterations.
Figure 9: The final power spectral density.
In fig. 8 the resulting autocorrelation function is plotted. The side lobes are at an equal level of −40 dB. The first side lobe is barely noticeable; its level is −48 dB, which is an improvement of 24 dB in comparison to the first iteration. The PSD plotted in fig. 9 is also almost ideally bell-shaped, which proves the effectiveness of the iterative method. The noise that can be seen in fig. 9 comes from the numerical methods used.
4 Conclusion
This paper discusses the key aspects of the signal synthesis needed for the selection of signals with a desired autocorrelation function, for example in radar technology. The results of previous theoretical studies and the numerical results confirm the usefulness of the method discussed in the article from the standpoint of signal optimisation. The iterative method reduces the errors introduced by the stationary-phase method and the errors coming from the numerical methods used. The presented method of signal synthesis is very useful in cases where the desired autocorrelation function and the subsequent results of the calculations cannot be represented in a strict analytical form. Another key advantage of the presented algorithm is that the numerical methods used allow us to find the optimal solution given, at the input, only a discrete set of points of the desired signal's PSD, without any prior knowledge about the signal itself.
Acknowledgement
This work was supported by the Polish Ministry of Science and Higher
Education from sources for science in the years 2009-2011 under project
OR00007509.
References
[1] Sołowicz, J., Institute of Radioelectronics Technical Report, Warsaw, 2008 (in Polish).
[2] Vakman, D.E. & Sedleckij, R.M., Voprosy sinteza radiolokacionnych signalov, Sovietskoye Radio: Moskva, 1973 (in Russian).
[3] Kawalec, A., Lesnik, C., Solowicz, J., Sedek, E. & Luszczyk, M., Wybrane problemy kompresji i syntezy sygnałów radarowych. Elektronika, 3, pp. 76-83, 2009 (in Polish).
[4] Cook, C.E. & Bernfeld, M., Radar Signals: An Introduction to Theory and
Application. Artech House: Boston and London, 1993.
Abstract
Simulated raw radar signals prove to be very useful at the first stages of testing radar signal processing algorithms and procedures. This is particularly true for synthetic aperture radar (SAR), where the costs of real signal acquisition are very high due to the costs of building the system as well as the costs of the mission (airborne or space-borne). This paper describes a multifunctional SAR raw signal simulator that has been used for the verification of SAR image synthesis algorithms. Signals can be generated for pulsed as well as FM-CW radars, in both side-looking (SLAR) and squinted cases, and it is possible to choose between video and intermediate-frequency signals. The simulator allows us to generate echo signals from stationary and moving targets. The user is able to vary the statistical properties of the received echo signals for each target, thus allowing different types of reflecting surfaces to be modelled. If available, a real raw SAR signal can be merged with a simulated one to produce more complicated scenarios. The paper presents results of the simulation of raw signals and their image after SAR processing.
Keywords: synthetic aperture radar, SAR signal simulator, FM-CW SAR.
1 Introduction
Airborne and space-borne radar imaging allows us to collect images of a terrain fragment independently of the time of day, the weather, or the visibility conditions over the imaged scene. Unfortunately, due to the relatively long wavelengths used by radar (compared to photography), large antennae need to be employed in order to achieve the desired image resolution. This can make the system impossible to build onboard an airplane or a satellite. The solution to this problem is the technique of the synthetic aperture radar (SAR), which is able to
doi:10.2495/CMEM110371
2 Principle of SAR
A SAR system is typically a radar installed onboard an airplane or a spaceship, whose antenna system illuminates the Earth's surface perpendicularly, or at some squint angle, to the carrier's route.
A typical SAR configuration for the airborne case is Side-looking Airborne
Radar (SLAR) whose geometry is presented in fig. 1.
The carrier moves along a predetermined, well-known and, preferably, straight route. During the operation, as the carrier moves, the radar emits electromagnetic energy and receives echo signals reflected from the terrain objects. The received signals are then pre-processed and stored in system memory.
This signal pre-processing includes filtration and down-conversion to
intermediate frequency or can be extended to baseband conversion and
subsequently range compression, depending on the type of the algorithm used.
After the radar has covered a distance equal to the antenna length L needed to achieve the desired azimuth resolution, the signals stored in memory are processed as if they were received by a real phased array of length L.
Figure 1: Geometry of the side-looking airborne radar (SLAR).
(1)
(2)
t_d = 2R/c, (3)
(4)
R(T) = √[(x_R(T) − x_T)² + (y_R(T) − y_T)² + (z_R(T) − z_T)²] (5)
(7)
z_R(T) = const = H_R, (8)
where: x0 is the initial radar position along the X axis, v R is the radar speed
and H R is the radar height above the ground. It is also assumed that the target
lies on the ground, meaning
z_T = 0. (9)
The distance then becomes
R(T) = √[(x_0 + v_R T − x_T)² + y_T² + H_R²], (10)
and substituting the result into eqn (4) gives the following form:
s_R(T, t) = s_T(t − 2R(T)/c)·exp{−j(4π/λ)·√[(x_0 + v_R T − x_T)² + y_T² + H_R²]}. (11)
If the signal is sampled with sampling frequency f_s and the range resolution cell dimension is defined as
dR = c/(2 f_s), (12)
then
(13)
where: m is the number of the sounding impulse emitted by the radar with the Pulse Repetition Frequency (PRF), l is the number of the range cell, and d is the distance between two consecutive sounding positions of the radar:
d = v_R / PRF. (14)
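Eqns (12) and (14) can be sketched directly; the sampling frequency, platform speed and PRF below are illustrative values.

```python
C = 3.0e8  # speed of light, m/s

def range_cell(f_s):
    # Eqn (12): dR = c / (2 f_s).
    return C / (2.0 * f_s)

def sounding_spacing(v_r, prf):
    # Eqn (14): d = v_R / PRF.
    return v_r / prf

dR = range_cell(100e6)              # 100 MHz sampling -> 1.5 m range cells
d = sounding_spacing(50.0, 1000.0)  # 50 m/s at PRF = 1 kHz -> 0.05 m spacing
```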
In order to obtain a high resolution radar image one should compress the
signal in the range (fast-time) and azimuth (slow-time) domains.
The range compression is performed by a filter matched to the form of the
sounding signal sT and can be done either before the azimuth compression or
after it. Some of the SAR image synthesis algorithms combine those two
compressions in one block. They will be, however, considered here as separate
processing steps.
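A minimal sketch of the range (fast-time) compression as matched filtering with an LFM sounding signal, implemented here as a circular correlation in the frequency domain; the record length, chirp length and target delay are assumed for illustration.

```python
import numpy as np

N, L, D = 512, 128, 100          # record length, chirp length, target delay
t = np.arange(L)
chirp = np.exp(1j * np.pi * 0.002 * t**2)  # illustrative LFM sounding signal

rx = np.zeros(N, dtype=complex)
rx[D:D + L] = chirp              # received echo delayed by D samples

ref = np.zeros(N, dtype=complex)
ref[:L] = chirp                  # zero-padded replica of the sounding signal

# Matched filter: correlation with the sounding signal, done via FFTs.
compressed = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(ref)))
peak = int(np.argmax(np.abs(compressed)))  # compressed target position
```

The peak of the compressed output falls at the target's delay, while the chirp's energy is concentrated from L samples into a narrow main lobe.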
Figure 2: Block diagram of the SAR processing chain: the receiver with a quadrature detector delivers the complex raw signal (I and Q) to memory; after Range Migration correction and azimuth compression (s*h) the complex SAR image is obtained at the output.
It will be assumed that the obtained raw SAR signal is already range-compressed.
The azimuth compression is in fact matched filtering [1, 2], where the filter impulse response is the so-called azimuth chirp, which is actually the complex conjugate of the exponential term in eqn (13).
Therefore the operation of azimuth compression of the SAR image is a convolution of the signal with the azimuth chirp. However, simultaneously with the changes of signal phase due to the changes of range, the signal position in system memory changes as well. This phenomenon, called Range Migration (RM), if not compensated for, seriously decreases the maximum achievable synthetic aperture length and consequently the maximum image resolution. The Range Migration correction procedure consists of computing the actual number of range
(15)
where: x_Rm and y_Rm are the radar position coordinates in the m-th sounding, x_Tm,n and y_Tm,n are the coordinates of the n-th scatterer in the m-th sounding, and H_R is the height of the carrier above the ground.
If the position of the no. 0 antenna element is assumed to be the radar position x_Rm, then the position of the q-th antenna element can be defined as
x_aq,m = x_Rm + q·d_a. (16)
As was mentioned, all the objects are flat and are placed at a height equal to
h_Tm = h_T = 0. (17)
Considering the above and the radar movement, the expression for the distance between the n-th scatterer and the q-th antenna element in the m-th sounding can be rewritten as follows:
(18)
R_{m,n,q} = √[ (x_R0 + q·d_a + m·d − x_Tn0 − m·v_Tnx/PRF)² + (y_Tn0 + m·v_Tny/PRF)² + H_R² ], (19)
where: x_Tn0 and y_Tn0 are the initial coordinates of the n-th scatterer.
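Under the assumption that the slant range combines the along-track positions of the q-th antenna element and the drifting scatterer with its ground range and the platform height, in the spirit of eqns (16)-(19), the computation can be sketched as below; the exact sign conventions and all numerical values are assumptions.

```python
import math

def slant_range(m, target_xy0, target_v, q, x_r0, d, d_a, prf, h_r):
    # Distance between the q-th antenna element in the m-th sounding and a
    # scatterer moving with ground velocity (vx, vy); assumed form.
    x_t0, y_t0 = target_xy0
    vx, vy = target_v
    x_radar = x_r0 + q * d_a + m * d      # element position along the track
    x_target = x_t0 + m * vx / prf        # scatterer drift between soundings
    y_target = y_t0 + m * vy / prf
    return math.sqrt((x_radar - x_target) ** 2 + y_target ** 2 + h_r ** 2)

# Stationary scatterer, zeroth antenna element, first sounding:
R0 = slant_range(0, (0.0, 300.0), (0.0, 0.0), 0, 0.0, 0.05, 0.1, 1000.0, 400.0)
```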
2) The received echo signals reflected from the simulated objects are created either as impulses with linear frequency modulation (LFM) or as LFM-CW signals after dechirping. The pulsed signals can have a zero central frequency (video signals), or their spectrum can be centred around a non-zero intermediate frequency f_IM. The video signals are assumed to be after quadrature down-conversion and are therefore stored as complex samples, whereas signals with non-zero f_IM are real, with the imaginary part equal to zero.
3) The phase and the delay of each echo follow from the distance R_{m,n,q}:
φ_{m,n,q} = 4π R_{m,n,q}/λ, (20)
t_{m,n,q} = 2 R_{m,n,q}/c. (21)
The amplitude A_{m,n,q} depends on the antenna characteristic, the antenna gain G_A and the range R_{m,n,q}:
(22)
(23)
In the case of the baseband (i.e. video) signals it takes the form of a series of complex-valued samples:
(24)
(25)
After computing the values of the signal samples and their positions in the system memory, the signals are added to the respective cells (their values are summed with those already existing in the memory).
If the simulated radar works as LFM-CW, an FFT in the range domain is performed in order to obtain the range compression.
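The accumulation of echo samples into the signal memory and the FFT-based range compression for the LFM-CW case can be sketched as follows, with each dechirped point-target echo reduced to a single beat frequency; bin positions and amplitudes are illustrative.

```python
import numpy as np

N = 256                                   # fast-time samples per sounding
memory = np.zeros((4, N), dtype=complex)  # soundings x range samples

def add_echo(mem, m, beat_bin, amplitude):
    # Add one dechirped LFM-CW point-target echo to the m-th sounding;
    # samples are summed with those already stored, as in the simulator.
    l = np.arange(N)
    mem[m] += amplitude * np.exp(2j * np.pi * beat_bin * l / N)

# Two targets contribute to sounding m = 0; their samples accumulate.
add_echo(memory, 0, beat_bin=20, amplitude=1.0)
add_echo(memory, 0, beat_bin=55, amplitude=0.5)

# For an LFM-CW radar the range compression is an FFT in the fast-time domain.
profile = np.abs(np.fft.fft(memory[0]))
peaks = set(int(i) for i in np.argsort(profile)[-2:])
```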
The program has the functionality of synthesising the SAR image using the TDC algorithm described earlier, which can be used as a tool for verification of the generated raw signals. It is also possible to merge a new situation with earlier generated signals, or even with real radar signals, provided that all of the needed parameters are known.
Figure 3:
Fig. 3 presents the program window with the radar's parameters tab, showing also the distribution of the simulated objects. In fig. 4 the tab with the generated raw SAR signal is presented, and in fig. 5 the tab with the SAR image computed with the TDC SAR image synthesis algorithm.
Figure 4:
Figure 5:
5 Conclusions
A synthetic aperture radar raw signal generator developed at the Military University of Technology in Warsaw, Poland, was described in this paper. The algorithm for the generation of echo signals from both stationary and moving targets was presented, as were the simulation results.
Acknowledgement
This work was supported by the Polish Ministry of Science and Higher
Education from sources for science in the years 2009-2011 under project
OR00007509.
References
[1] Cumming, I.G. & Wong, F.H., Digital Processing of Synthetic Aperture
Radar Data. Algorithms and Implementation, Artech House: London and
Boston, 2005.
[2] Skolnik, M., Radar Handbook, Second Edition, McGraw-Hill Book
Company: New York, 1990.
[3] Stove, A.G., Linear FMCW radar techniques. IEE Proceedings F Radar and
Signal Processing, 139(5), pp. 343-350, 1992.
Abstract
Great numerical complexity is a characteristic of synthetic aperture radar (SAR) image synthesis algorithms and poses a particularly serious problem for real-time applications. Advances in the operating speed and density of field-programmable gate arrays (FPGAs) have allowed many high-end signal processing applications to be solved in commercially available hardware. A real-time SAR image processor was designed and implemented with commercial off-the-shelf (COTS) hardware based on Xilinx Virtex-5 FPGA devices. Under the assumption of squinted SAR geometry with the range migration effect present, the SAR image synthesis algorithm was developed and implemented. The results of the processor tests conducted with simulated and real raw SAR signals are presented in the paper.
Keywords: SAR, radar, real-time processing, FPGA, COTS.
1 Introduction
Airborne radar systems constitute an essential part of radio-electronic terrain imaging and recognition systems. Their primary advantage is insensitivity to the time of day or atmospheric conditions. However, the images obtained by radar are characterised by a much lower resolution than in the photographic case.
In radar imaging two resolutions can be distinguished: the range resolution, also called the fast-time resolution, and the azimuth or slow-time resolution.
While the task of achieving a high range resolution is relatively easy (it is realised by using sounding signals with internal frequency or phase modulation or keying), a high azimuth resolution is much harder to achieve. It
doi:10.2495/CMEM110381
Figure 1: Position of the point-object echo in radar memory within the synthesis window (soundings m−(b_max−1) … m+1, ranges R_{l,min} and R_{l,b}, memory cells w_{l,b}).
(1)
ΔR = c·t_s/2 = c/(2 f_s), (2)
(3)
The distance between the radar and the point object in an arbitrary b-th sounding period inside the synthesis window is equal to
R_{l,b} = √( R²_{l,min} + [((b_max − 1)/2 − b)·d]² ). (4)
(5)
where the symbol ⌊·⌋ denotes the largest integer not greater than the argument.
The effect of the change of the signal position in radar memory as a function
of the number of sounding period inside of the synthesis window (number of the
synthetic aperture element) is referred to as Range Migration (RM).
The main effect of the change of this distance, however, is an additional phase shift Δφ_{l,b}, dependent on the number of the synthetic aperture element and equal to
Δφ_{l,b} = 2k·(R_{l,b} − R_{l,min}), (6)
S(m, l) = Σ_{b=0}^{b_max−1} s_R(m − b, w_{l,b})·exp(j·Δφ_{l,b}). (7)
Eqn. (7) constitutes the basis of the hardware implementation of the SAR
image synthesis algorithm.
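A direct numerical sketch of the coherent summation of eqn (7), with the phase of eqn (6). The geometry values are assumed, and the range-compressed echo of a single point target is constructed so that the summation becomes coherent at exactly one output sounding; the range cell indexing (w_{l,b}) is collapsed to a single cell for brevity.

```python
import numpy as np

b_max, M = 64, 256                  # aperture elements, number of soundings
wavelength, d, R_min = 0.03, 0.05, 1000.0
k = 2 * np.pi / wavelength

b = np.arange(b_max)
# Range inside the synthesis window (cf. eqn (4)) and the phase of eqn (6):
R_b = np.sqrt(R_min**2 + (((b_max - 1) / 2 - b) * d) ** 2)
phi_b = 2 * k * (R_b - R_min)

# Simulated range-compressed echo of one point target: the raw samples carry
# the opposite phase, so that the summation at m0 is fully coherent.
m0 = 150
s_r = np.zeros(M, dtype=complex)
s_r[m0 - b] = np.exp(-1j * phi_b)

# Eqn (7): coherent summation over the synthetic aperture (single range cell).
S = np.array([np.sum(s_r[(m - b) % M] * np.exp(1j * phi_b)) for m in range(M)])
peak = int(np.argmax(np.abs(S)))
```

At the target sounding the b_max contributions add in phase, while everywhere else the sum stays partially incoherent, which is exactly the focusing effect the hardware implements.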
Due to the experimental nature of the project, the decision was made to purchase COTS (Commercial Off-The-Shelf) FPGA modules and the dedicated software enabling the development of a specialised signal processing system with the use of a PC.
The hardware platform was built on the basis of Sundance DSP/FPGA functional modules utilising Xilinx Virtex-5 SX95T FPGA chips and Texas Instruments C6455 DSPs. The following module types were used:
1. SMT362 (1 piece): a DSP/FPGA module containing two C6455 digital signal processors and a Virtex-4 FX FPGA,
2. SMT351T (2 pieces): a module containing a Virtex-5 SX95T FPGA and 2 GB of DDR2 SDRAM memory external to the FPGA chip.
These modules are designed to be mounted on an SMT310Q motherboard, made as a long PCI card.
The SMT362 module was used for organisational tasks in the SAR
application. Mainly it was used to load the application code to the FPGA
modules and as an interface between the code executed in the FPGA modules
and the base PC (host).
The SMT351T modules were used as the main hardware resources for the algorithm implementation (processors). One of them was designated for the range compression and the other one for the azimuth compression.
The fully mounted hardware platform is presented in fig. 2. It was described in [3-5].
Figure 2:
The platform used for the project requires four utility software packages:
1. the Sundance hardware-dedicated package, containing mainly the drivers,
2. the FPGA software development and diagnostic package,
4 Research results
The correctness of the hardware implementation of the SAR signal processing
module has been verified with a simulated raw SAR echo signal of an isolated
single point object. The signal was created in the simulated raw SAR signal generator described by Serafin [6]. As the sounding signal, a linear frequency modulated (LFM) pulse was used. Such a form of the testing signal simplified the application diagnostics and debugging process.
Figure 3a presents the raw SAR signal echo of a single point object. The RM
effect is clearly visible. Moreover the effects of the real antenna sidelobe
reception are visible at the edges of the image.
In fig. 3b, the matched filtration effect of the above raw SAR signal in the range domain is presented. The matched filtration operation was carried out without a weighting function; therefore relatively high range sidelobes are visible.
In fig. 3c, the result of SAR image synthesis is presented. As it would be
expected, a single point in the image is visible, but also the range and the
azimuth sidelobes are present.
Another signal used for the testing contained echo signals of complex objects having the form of flat geometrical figures consisting of a large number of elementary point objects. The raw form of this signal is presented in fig. 4a. The testing signal after the range compression is presented in fig. 4b.
Figure 4c presents the SAR image synthesised from the simulated raw signal.
In order to test the accuracy and the quality of the hardware implementation of the SAR echo signal processing algorithm, besides the simulated signals, real measurement data from the SAR/GMTI sensor AER-II were used. These data have been obtained by the courtesy of the Director of the Fraunhofer Institute for High Frequency Physics and Radar Techniques FHR, Germany.
Figure 5a presents the real raw SAR signal of an exemplary fragment of a
land infrastructure. Figure 5b presents the same signal after the range
compression.
In fig. 5c, the real SAR image of an exemplary terrain fragment picturing the intersection of a highway and a local road is presented. The next picture (fig. 5d) presents the same intersection, but in the process of image synthesis the squint angle of the main beam of the radar antenna was taken into account. This allowed for a better resolution of fine details in the image.
The results obtained during the algorithm verification are consistent with our expectations, i.e. the developed hardware implementation generates images of a quality comparable to the ones obtained with the application written in the C language and run on a PC in floating-point format. The main difference between the two applications lies in the time of computations.
Figure 3:
The raw SAR signal echo of a single point object (a), signal after
range compression (b) and SAR image of a simulated single point
object (c).
Figure 4: The raw SAR signal of simulated complex objects (a), the signal after range compression (b) and the synthesised SAR image (c).
Figure 5:
Real raw SAR echo signal (a), real raw SAR echo signal after range
compression (b), real SAR image without taking the squint angle
into account (c) and real SAR image with a proper value of the
squint angle applied during the processing (d) (courtesy of the
Director of the Fraunhofer FHR).
5 Conclusions
The SAR image synthesis algorithm implemented in hardware in the FPGA modules proved to be able to generate a SAR image of a width of about 0.5 km in real time, with a PRF of the order of a few hundred Hertz. Those values are acceptable for SAR sensors mounted onboard UAVs. We should, however, mention that the implemented algorithm is very computationally demanding, and its current implementation is the first one, without any optimisations.
Despite the very high complexity of the implemented algorithm, the report from the resulting code generation indicates a relatively low degree of occupation of the logical FPGA resources by the application (about 30%), with the exception of the block memory (RAMB), whose usage exceeds 90%.
The presented research results of the hardware implementation of the SAR image synthesis module confirmed the feasibility of a single-FPGA implementation of the algorithm. This opens the way to compact, low-energy-consuming applications working in real time, which is especially attractive for UAV applications.
Acknowledgement
This work was supported by the Polish Ministry of Science and Higher
Education from sources for science in the years 2009-2011 under project
OR00006909.
References
[1] Cumming, I.G. & Wong, F.H., Digital Processing of Synthetic Aperture
Radar Data. Algorithms and Implementation, Artech House: London and
Boston, 2005.
[2] Wang, Bu-Chin, Digital Signal Processing Techniques and Applications in
Radar Image Processing, John Wiley & Sons: Hoboken, USA, 2008.
[3] SMT310Q User Manual, version 2.1, Sundance Multiprocessor Technology
Ltd. 2005.
[4] SMT351T User Guide, version 1.0.6, Sundance Multiprocessor Technology
Ltd. 2008.
[5] User Manual for SMT362, version 2.2, Sundance Multiprocessor
Technology Ltd. 2009.
[6] Serafin, P., Institute of Radioelectronics Technical Report, Warsaw, 2009,
(in Polish).
Abstract
The software for the simulation of electromagnetic wave propagation through a soil structure with buried objects is described in the paper. The calculations are based on the finite element method (FEM) and allow prediction of the ground penetrating radar (GPR) echo structure from different buried objects. Such a virtual radar seems to be useful for testing the possibility of detecting chosen objects in different soil structure configurations, as well as for testing different kinds of radar signals. A comparison of simulation results and real measurement data is also presented and discussed in the paper.
Keywords: finite element method (FEM), ground penetrating radar (GPR).
1 Introduction
The most popular method for the simulation of electromagnetic wave propagation
inside a soil structure is the finite difference time domain (FDTD) method [1, 2].
The main reason for the popularity of the FDTD method is its relative simplicity
and sufficient accuracy. An alternative approach is possible using the finite
element method (FEM). The solutions in this case are easier to interpret, and
the method gives more precise information about material discontinuities. The
latter feature is very important as far as ground penetrating radar working
conditions are concerned. Additionally, the construction of the simulation
boundaries in the FEM approach is easier and more precise [3]. Contrary to
FDTD, in the FEM method the main field quantity characterizes volume elements
instead of nodes. The material properties ε, μ and σ are assumed to be constant
inside each single simulation cell.
doi:10.2495/CMEM110391
2 Simulation description
To obtain centered discrete differential equations, a spatial shift between the
magnetic and electric field components is assumed. The magnetic field
components operate among the simulation cells around the [i, j, k] node and
occupy 1/8 of the volume of each of the elements [i, j, k], [i+1, j, k],
[i, j+1, k], [i, j, k+1], [i+1, j+1, k], [i+1, j, k+1], [i, j+1, k+1] and
[i+1, j+1, k+1].
For time-derivative centering the Yee method was used: the electric field is
defined for t = nΔt and the magnetic field for t = (n + 1/2)Δt.
For the simulation the integral forms of Maxwell's equations were used:

∮ H·dl = ∬ (ε ∂E/∂t + σE + J₀)·dA,   (1)

∮ E·dl = −∬ μ (∂H/∂t)·dA.   (2)
The contour integral of the magnetic field on the left-hand side of equation (1)
is calculated with positive rotation along the path presented in Fig. 1.
Figure 1:
IntH_x^{n+1/2}(i, j, k) =
  [H_y^{n+1/2}(i, j, k) + H_y^{n+1/2}(i+1, j, k) + H_y^{n+1/2}(i, j+1, k) + H_y^{n+1/2}(i+1, j+1, k)
  − H_y^{n+1/2}(i, j, k+1) − H_y^{n+1/2}(i+1, j, k+1) − H_y^{n+1/2}(i, j+1, k+1) − H_y^{n+1/2}(i+1, j+1, k+1)] Δy_j / 4
  + [H_z^{n+1/2}(i, j+1, k) + H_z^{n+1/2}(i+1, j+1, k) + H_z^{n+1/2}(i, j+1, k+1) + H_z^{n+1/2}(i+1, j+1, k+1)
  − H_z^{n+1/2}(i, j, k) − H_z^{n+1/2}(i+1, j, k) − H_z^{n+1/2}(i, j, k+1) − H_z^{n+1/2}(i+1, j, k+1)] Δz_k / 4.   (3)
The field-component form of the surface integral on the right-hand side of (1)
is equal to:

[ε(i, j, k) (E_x^{n+1}(i, j, k) − E_x^n(i, j, k)) / Δt
  + σ(i, j, k) (E_x^{n+1}(i, j, k) + E_x^n(i, j, k)) / 2
  + J_{0x}^{n+1/2}(i, j, k)] Δy_j Δz_k.   (4)
In the form (4) the terms containing the electric field are centered in time
t = (n + 1/2)Δt. From (3) and (4) the electric field component can be obtained
in the following form:

E_x^{n+1}(i, j, k) = [(2ε(i, j, k) − σ(i, j, k)Δt) / (2ε(i, j, k) + σ(i, j, k)Δt)] E_x^n(i, j, k)
  + [Δt / (2ε(i, j, k) + σ(i, j, k)Δt)] [2 IntH_x^{n+1/2}(i, j, k) / (Δy_j Δz_k) − 2 J_{0x}^{n+1/2}(i, j, k)].   (5)
The magnetic field can be calculated from (2). The contour integral of the
electric field has the following form:

IntE_x^n(i, j, k) =
  [E_y^n(i, j, k) + E_y^n(i+1, j, k) + E_y^n(i, j+1, k) + E_y^n(i+1, j+1, k)
  − E_y^n(i, j, k+1) − E_y^n(i+1, j, k+1) − E_y^n(i, j+1, k+1) − E_y^n(i+1, j+1, k+1)] Δy_j / 4
  + [E_z^n(i, j+1, k) + E_z^n(i+1, j+1, k) + E_z^n(i, j+1, k+1) + E_z^n(i+1, j+1, k+1)
  − E_z^n(i, j, k) − E_z^n(i+1, j, k) − E_z^n(i, j, k+1) − E_z^n(i+1, j, k+1)] Δz_k / 4.   (6)
The surface integral on the right-hand side of (2) contains the magnetic
permeability averaged over the 8 neighbouring cells. Using the time-centered
derivative it has the following form:

(1/8) [(μ(i, j, k) + μ(i+1, j, k)) Δy_j Δz_k
  + (μ(i, j+1, k) + μ(i+1, j+1, k)) Δy_{j+1} Δz_k
  + (μ(i, j, k+1) + μ(i+1, j, k+1)) Δy_j Δz_{k+1}
  + (μ(i, j+1, k+1) + μ(i+1, j+1, k+1)) Δy_{j+1} Δz_{k+1}]
  (H_x^{n+1/2}(i, j, k) − H_x^{n−1/2}(i, j, k)) / Δt.   (7)
From (2), (6) and (7) the magnetic field update is

H_x^{n+1/2}(i, j, k) = H_x^{n−1/2}(i, j, k) − (Δt / MuA_x) IntE_x^n(i, j, k),   (8)

where

MuA_x = (1/8) [(μ(i, j, k) + μ(i+1, j, k)) Δy_j Δz_k
  + (μ(i, j+1, k) + μ(i+1, j+1, k)) Δy_{j+1} Δz_k
  + (μ(i, j, k+1) + μ(i+1, j, k+1)) Δy_j Δz_{k+1}
  + (μ(i, j+1, k+1) + μ(i+1, j+1, k+1)) Δy_{j+1} Δz_{k+1}].   (9)
The stability of the scheme requires the time step to satisfy the Courant
condition

Δt = min_{i,j,k} (Δx_i / v_i, Δy_j / v_j, Δz_k / v_k),   (10)

where v denotes the local wave velocity.
For the chosen time step, centre frequency f₀ and excitation pulse bandwidth
f_w, the discretized excitation wave has the form

u_n = exp(−((n − t₀) / T)²) sin(2π f₀ n Δt),   (11)

where T = 1/(f_w Δt) and t₀ = 5T.
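The excitation of (11) can be generated directly; in this sketch f0, fw and Δt are illustrative values, not parameters from the paper:

```python
import numpy as np

# Discretized excitation (11): a Gaussian-modulated sinusoid.
# f0, fw and dt below are illustrative choices.
f0 = 2.0e9                    # centre frequency [Hz]
fw = 1.0e9                    # excitation pulse bandwidth [Hz]
dt = 1.0e-11                  # time step from the Courant condition (10)

T = 1.0 / (fw * dt)           # pulse width in samples, T = 1/(fw*dt)
t0 = 5.0 * T                  # delay, t0 = 5T
n = np.arange(int(2 * t0))
u = np.exp(-(((n - t0) / T) ** 2)) * np.sin(2 * np.pi * f0 * n * dt)
```

The Gaussian envelope of width T samples centres the spectrum at f0 with a bandwidth of roughly fw.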
In the FEM method the formulation of the boundary conditions is relatively
simple. The simulated region is usually surrounded by a single layer with
suitable material parameters. Outside the layer, nodes with fixed magnetic
field values (usually equal to zero) are placed.
The ideal absorber has to behave like an open circuit carrying a current equal
to the current induced by the incident field. This can be obtained by an
appropriate selection of the boundary layer's specific conductivity.
When a plane wave is normally incident on a boundary layer of specific
conductivity σ and thickness δ, the layer has the same permittivities as the
neighbouring material, and the electric component E_x is constant across the
layer thickness, the electric field induces the current density J_x = σE_x in
the layer. For H = 0 outside the simulated space, the magnetic field component
H_y inside is equal to J_x δ. Using the well-known formula

E_x / H_y = √(μ/ε) = η,   (12)

one can obtain the specific layer conductivity necessary to terminate the wave:

σ = 1 / (η δ).   (13)

If the incidence angle θ is different from 90°, the effective thickness of the
layer is equal to δ sin θ.
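As a numerical illustration of (12) and (13) (the relative permittivity, layer thickness and incidence angle below are assumed values):

```python
import numpy as np

# Terminating-layer conductivity from (13): sigma = 1/(eta*delta),
# with eta = sqrt(mu/eps) the wave impedance of the surrounding medium.
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
eps_r = 4.0                    # assumed relative permittivity of the soil
delta = 0.05                   # assumed boundary layer thickness [m]

eta = np.sqrt(mu0 / (eps_r * eps0))   # wave impedance [ohm], ~188 ohm here
sigma = 1.0 / (eta * delta)           # matched conductivity [S/m]

# For oblique incidence the effective thickness is delta*sin(theta)
theta = np.deg2rad(60.0)
sigma_oblique = 1.0 / (eta * delta * np.sin(theta))
```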
3 Model verification
The model briefly described above was implemented in software operating under
the Windows 7 x64 system and run on a six-core Pentium processor.
The software allows us to define up to ten soil layers with different
thicknesses, permittivities and conductivities, as well as an arbitrary number
of cuboidal objects. It is also possible to define the basic properties of the
antennas and of the radar signals. Thanks to the features mentioned above, real
conditions inside the simulated medium can be reconstructed with good fidelity.
A significant increase in the number of layers or objects makes the simulation
very long: for example, the calculation for air and two soil layers with two
buried objects takes about 14 hours in a 0.7 m³ virtual sandbox.
Numerical calculations were verified under laboratory conditions using dry
river sand and two buried objects: a metal plate and a wooden box acting as a
surrogate of the PMD-7 antipersonnel mine (Fig. 2).
Figure 2:
The measurements were carried out using a vector network analyser and Vivaldi
antennas inside an anechoic chamber (Fig. 3). The stepped frequency continuous
wave (SFCW) signal was generated in the range from about 1 to 3 GHz.
Figure 3:
Figure 4:
The virtual and real environment conditions were matched as closely as
possible. The theoretical results and the measured data show very good
agreement.
Conclusions
The FEM simulations of GPR imaging seem to be more precise than the FDTD ones.
The main reason for the difference is the construction of the numerical
boundary conditions, which is much easier for the FEM method. The FDTD models
tested earlier by the authors required a much bigger simulated volume for the
implementation of absorbing boundary conditions (PML) to obtain effects similar
to those of FEM with only one layer.
The simulation results show very good agreement with the real data from the
laboratory environment. Moreover, the software is flexible enough to simulate
GPR imaging in more realistic environments.
The analysis of the simulation results is very important from the point of view
of the GPR image recognition problem.
Acknowledgement
This work was supported by the Polish Ministry of Science and Higher
Education from sources for science in the years 2009-2011 under project
OR00006909.
References
[1] Taflove, A. & Hagness, S.C., Computational Electrodynamics: The Finite-Difference Time-Domain Method, Artech House, 2000.
[2] Pasternak, M. & Silko, D., Software for simulation of electromagnetic
waves propagation through the soil with buried objects, Proc. of the 11th
International Radar Symposium, pp. 524-527, 2010.
[3] Humphries, S. Jr., Field Solutions on Computers, CRC Press, 1997.
Abstract
Respiratory activity is an important parameter in the observation of human body
activity. A normal adult has a respiratory rate of 12-15 breaths/min in normal
conditions. A breath sensor based on a non-contact Doppler detector allows the
measurement of breathing and the detection of its absence, which may cause
death within a few minutes. The monitoring and detection of respiratory
abnormalities is of interest in certain situations (e.g. intensive-care
patients, newborn children and many others). The paper covers a new technical
solution for a low-cost breath sensor, which consists of a microwave generator
together with a resonant patch antenna. Only one antenna is used for
transmitting and receiving. The technical solution of the oscillator is based
on a single FET transistor. The microwave oscillator may be tuned by changing
the antenna dimensions. The solution presented here is designed for 2.4 GHz
(ISM band). The respiratory activity is detected mainly through the motion of
the body: the wave reflected by the moving body surface is mixed with the
oscillator frequency in the FET transistor junction. Filtering the
low-frequency signals gives a component that represents the Doppler frequency
due to body surface motion. Next, it is processed with a high-resolution
analogue-to-digital converter, digitally filtered, and analyzed in the
time-frequency domain. The processing enables detection of the respiration rate
with an accuracy of 1 breath/min, i.e. about 0.016 Hz. Signal processing in the
digital domain includes removal of the DC offset and out-of-band noise.
Experimental results confirm the possibility of using a microwave Doppler
detector to measure and analyze respiratory activity signals. The method
permits measurement of the breathing period.
Keywords: respiratory sensor, microwave Doppler detector.
doi:10.2495/CMEM110401
1 Introduction
Recently, interest in non-contact breath sensors has increased rapidly. Advances
in microwave technology make it possible to design very small and simple
devices for detecting vital signs. New applications of non-contact sensing of
human beings have also been conceived in medical applications and as tools for
detecting human beings hidden behind walls for instance, or as rescue tools for
finding survivors trapped under rubble [1].
The research activities are currently focused on the use of two different
techniques for vital signs detection by means of microwave sensors:
ultra-wideband (UWB) radars and continuous wave (CW) radars.
UWB radars transmit short pulses with a pulse duration of the order of
nanoseconds [6]. This type of radar, like CW radars, is able to detect the
movement of a target by measuring the small Doppler variation that affects the
received backscattered signal. UWB radars also provide a range resolution that
permits elimination of the interfering pulses due to reflections from other
targets in the background. However, that characteristic requires a
fast-switching time discriminator that opens the receiver when the wanted
reflected pulse arrives. If the distance changes, the delay of the
discriminator's time window must be adjusted.
CW radars are simpler systems than UWB radars, and the received signal
processing is independent of the target distance [3]. However, in order to
measure the displacements due to breathing, movements of the subject under
observation other than respiratory ones should be avoided. Several CW radar
transducer configurations have been developed to deal with sensitivity and
detection capability. Sensors working in various frequency ranges (i.e. S-band,
C-band and K-band) have been tested to adapt the transmitted wavelength to the
chest movements. Quadrature direct-conversion systems or double-sideband
indirect-conversion systems have been developed to resolve the null point
problem, which causes an accuracy decrease related to the distance between the
sensor and the chest. However, all the above-mentioned systems involve a
typical transducer with separate transmitting and receiving parts.
s(t) = A cos(2πft + φ(t)),

where f is the signal frequency, φ(t) is the phase noise of the oscillator and
t is time.
The transmitted signal reflects from the body surface and returns to the
receiving antenna. After amplification it is fed to a phase detector. The
received signal is phase-shifted, and this phase shift is a function of the
emitted wavelength λ and of the distance L between the radar and the monitored
object. Moreover, when the irradiated surface (e.g. the chest) changes its
position periodically by x(t) (with mean value equal to zero), the resulting
phase shift is periodically modulated:

S_o(t) = A_o cos(2πft − 4πL/λ − 4πx(t)/λ + φ(t − 2L/c)).

After mixing with the oscillator signal and lowpass filtering, the baseband
signal is

S_D(t) = cos(θ₀ + 4πx(t)/λ + Δφ(t)),

where θ₀ = 4πL/λ is the constant phase shift set by the nominal distance L.
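The phase-modulation mechanism can be reproduced numerically. In the sketch below the chest displacement amplitude, respiration rate and fixed phase θ0 are illustrative assumptions, not measured values:

```python
import numpy as np

# Baseband Doppler signal: periodic chest displacement x(t) phase-modulates
# the reflected 2.4 GHz wave, giving S_D(t) = cos(theta0 + 4*pi*x(t)/lam).
c = 3e8
f = 2.4e9                      # ISM-band carrier frequency [Hz]
lam = c / f                    # wavelength, ~12.5 cm
fs = 100.0                     # sampling rate [Hz]
t = np.arange(0.0, 30.0, 1.0 / fs)

f_resp = 0.25                  # 15 breaths/min
x = 4e-3 * np.sin(2 * np.pi * f_resp * t)    # ~4 mm chest displacement
theta0 = np.pi / 4             # fixed phase set by the nominal distance L
s_d = np.cos(theta0 + 4 * np.pi * x / lam)   # baseband Doppler signal
```

After removing the mean, the spectrum of s_d is dominated by the 0.25 Hz respiration component, which is what lowpass filtering then extracts.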
Figure 1:
Figure 2:
Figure 3:
The DSP is based on the fact that the required respiration signal and the other
signals are separated in the frequency domain. The resting respiration rate is
between 0.15 and 0.4 Hz, which corresponds to 9 and 24 breaths per minute [2].
The heart rate is between 0.83 and 1.5 Hz, which corresponds to 50 and 90 beats
per minute. This means that the breath signal can be isolated by a lowpass
filter. Samples of respiration and heart signals measured with the microwave
Doppler sensor are shown in fig. 4 and fig. 5.
Figure 4:
Figure 5:
Respiration and heart signals are separated with digital filters. The filter
isolating the heartbeat must attenuate the respiration signal by at least
50 dB. Signal processing with finite impulse response (FIR) filters is
presented. FIR filters use current and past input samples; the number of
coefficients and their values determine the filter properties in the frequency
and time domains (i.e. the cut-off frequency, the steepness of the transition
between the passband and stopband, the group delay, and how long the filter has
an output signal given a step at the input). The phase shift of an FIR filter
is linear within the passband, and the group delay is constant, with a value
depending on the filter order. To separate the heartbeat and respiration
signals, a 250th-order FIR Kaiser filter with parameter β = 2 and a 3-second
group delay is used.
The Fourier transform is used to determine the frequency characteristics of the
respiration signal (fig. 6). When the frequency characteristics vary in time
(non-stationary signals), which is typical for physiological signals, the
short-time Fourier transform should be used for the analysis.
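The band separation described above can be sketched with standard filter-design tools. The filter orders and Kaiser β values follow the text; the sampling rate, cut-off frequencies and the synthetic test signal are illustrative assumptions:

```python
import numpy as np
from scipy.signal import firwin, lfilter

# Kaiser-window FIR filters: a lowpass isolates respiration (< 0.4 Hz),
# a highpass isolates the heartbeat (> 0.83 Hz). Cut-offs are assumptions.
fs = 100.0
t = np.arange(0.0, 60.0, 1.0 / fs)
resp = np.sin(2 * np.pi * 0.25 * t)        # 15 breaths/min component
heart = 0.2 * np.sin(2 * np.pi * 1.2 * t)  # 72 beats/min component
sig = resp + heart

# 250th-order (251-tap) Kaiser lowpass, beta = 2; the group delay of a
# linear-phase FIR filter of order N is N/2 samples (1.25 s at 100 Hz).
lp = firwin(251, 0.5, window=("kaiser", 2.0), fs=fs)
resp_est = lfilter(lp, 1.0, sig)

# 400th-order (401-tap) Kaiser highpass, beta = 2.5, for the heartbeat.
hp = firwin(401, 0.7, window=("kaiser", 2.5), pass_zero=False, fs=fs)
heart_est = lfilter(hp, 1.0, sig)
```

After the filter transients have settled, the lowpass output is dominated by the respiration component and the highpass output by the heartbeat component.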
Figure 6:
For heart monitoring, the heartbeat signal must be isolated by filtering with a
highpass filter. The Doppler sensor signal filtered with a 400th-order Kaiser
filter (β = 2.5) is shown in fig. 7.
Figure 7:
5 Conclusions
A low-cost microwave sensor for human breath and heart rate activity has been
presented. The circuit concept is based on a loop-type microwave oscillator
with a resonant circuit acting as the antenna. The oscillator is designed to be
frequency-tuned by the change of the target distance from the antenna; this
feature allows avoidance of the nulling effect. The sensor structure is very
compact. The concept has been verified by measurements of the model sensor with
an ECG and a pulse oximeter as references.
Proper detection of the breathing action has been obtained from a distance of
20 cm from the chest surface.
This kind of sensor may be used in distributed systems for monitoring patient
activity in health care, or to verify the condition of a machine operator
(pilot, driver, etc.).
References
[1] Droitcour A., Lubecke V., Lin J.C., Boric-Lubecke O.: A microwave radio
for Doppler radar sensing of vital signs. 2001 IEEE MTT-S Int. Microwave
Symp. Dig., 2001, vol. 1, pp. 175-178.
[2] Droitcour A. D., Boric-Lubecke O., Lubecke V. M., Lin J., Kovacs G.:
Range correlation and I/Q performance benefits in single-chip silicon
Doppler radars for noncontact cardiopulmonary monitoring. IEEE
Transactions on Microwave Theory and Techniques, vol. 52, no. 3, pp. 838-848, 2004
[3] Lin J.C.: Microwave sensing of physiological movement and volume
change: a review. Bioelectromagnetics, vol. 13, no. 6, pp. 557-565, 1992
[4] Ichapurapu R.: A 2.4GHz Non-Contact Biosensor System for Continuous
Vital-Signs Monitoring, Proc. WAMICON 2009
[5] Baltag O.: Microwaves Doppler Transducer for Noninvasive Monitoring of
the Cardiorespiratory Activity, IEEE Transactions On Magnetics, vol. 44,
No. 11, November 2008
[6] Immoreev I.J.: Practical Application of Ultra-Wideband Radars.
International Conference "Ultra Wideband and Ultra Short Impulse Signals"
(UWBUSIS'06), 18-22 September, 2006, Sevastopol, Ukraine.
Abstract
Nitrous oxide (N2O) plays a significant role in many different fields;
therefore its monitoring is an important task. In this paper the application of
cavity enhanced absorption spectroscopy to N2O detection is presented. This
method is a modification of cavity ring down spectroscopy. The laser radiation,
tuned to an absorption line of N2O, is injected into an optical cavity at a
very small angle with respect to its axis. In the absence of absorption, the
decay time is determined by the mirror reflectivity, the diffraction losses and
the length of the cavity. When an absorber is present in the cavity, the decay
time additionally depends on the absorption and scattering of light in the
cavity. The method provides determination of very weak absorption coefficients
as well as the concentration of the absorbing gas.
Our N2O sensor consisted of a pulsed radiation source, an optical cavity, a
detection module and a digital oscilloscope. As the light source an optical
parametric generator was applied. It enabled wavelength tuning in a broad
spectral range with a resolution of 1 nm. The optical cavity was composed of
two high-reflectivity spherical mirrors. Optical signal registration was done
with a detection module equipped with an HgCdTe photodetector.
The spectral range of 4.52-4.53 μm is the best for N2O detection. Operation at
these wavelengths provides the opportunity to avoid interference with other
atmospheric gases, such as CO2 and H2O. Assuming a 2% uncertainty of the
measurements and an effective value of the absorption cross-section of about
6×10⁻¹⁹ cm², a detection limit of 10 ppb was achieved.
Keywords: CEAS, N2O detection, optoelectronic sensor.
doi:10.2495/CMEM110411
1 Introduction
Nitrous oxide (N2O) is a colorless gas with a slightly sweet odor. N2O is an
important greenhouse gas and the major natural source of NO. In consequence, it
initiates the catalytic NOx ozone destruction cycles in the stratosphere. The
gas is used as an anesthetic, especially in dentistry and minor surgery. It
produces mild hysteria and laughter preceding the anesthetic effect; thus it is
also known as laughing gas. Excessive exposure may cause headache, nausea,
fatigue and irritability. At the same time, N2O is a strong oxidizer above
300°C and it self-explodes at higher temperatures. Nitrous oxide is also a
characteristic compound emitted by the majority of explosive materials.
Therefore, the measurement and monitoring of N2O concentration is very
important [1, 2].
N2O can be analyzed by gas chromatography (GC) on a molecular sieve column
using a thermal conductivity detector (TCD); with this method a detection limit
of 4 ppm is achieved [3]. In the case of the electron capture detector (ECD),
the detection limit is about 50 ppb [4]. Nitrous oxide may also be identified
by GC/MS based on its mass spectrum. For solid-phase microextraction GC/MS, a
sensitivity of 72 ppb for N2O is reported [5]. Another nitrous oxide detection
system, with a detection limit of 50 ppb, is described by Hellebrand [6]; it
consists of a Fourier transform infrared spectrometer (FT-IR), a heated
measuring cell with an optical path length of 20 m and an HgCdTe detector
(MCT).
During the last several years spectroscopic methods for gas detection have also
developed significantly. In this paper the application of cavity enhanced
absorption spectroscopy (CEAS) to N2O detection is presented. This method is a
modification of cavity ring down spectroscopy (CRDS).
A small part of the laser radiation leaves the optical cavity due to the
residual transmission of the mirrors. The transmitted light is registered with
a photodetector, and the signal from the photodetector can be measured, e.g.,
with a digital oscilloscope. The amplitude of single-mode radiation trapped
within the cavity decays exponentially over time with a time constant τ, often
referred to as the decay time or ringdown time. The decay of light intensity
I(t) can be described as
I(t) = I₀ exp(−t/τ).   (1)

In the absence of an absorber the decay time is determined by the mirror
reflectivity R and the cavity length L (τ₀ = L/[c(1 − R)]); with an absorber of
absorption coefficient α it becomes

τ = L / [c(1 − R + αL)],   (2)

where c is the speed of light. The concentration of an absorbing gas of
absorption cross-section σ follows from the decay times measured with (τ) and
without (τ₀) the absorber,

C = α/σ = (1/(cσ)) (1/τ − 1/τ₀),   (3)

and the concentration detection limit corresponds to the smallest
distinguishable change of the decay time,

C_L = (1/(cσ)) (1/τ_L − 1/τ₀),   (4)

which can be expressed by the relative uncertainty

ξ = [(τ₀ − τ_L)/τ₀] · 100%,   (5)

where τ_L denotes the decay time of the optical cavity for the minimal absorber
concentration. The analysis presented in Figs. 1 and 2 shows that a sensitivity
of the N2O experimental setup better than 10 ppb can be obtained.
Effective storage of light in the resonator is ensured only when the laser
frequency is well matched to a cavity mode; then the best sensitivity can be
achieved. However, the major disadvantage of this method is the strong
dependence of the cavity mode frequencies on mechanical instabilities, which
degrade the cavity Q-factor and cause fluctuations of the output signal [10].
This disadvantage is eliminated in cavity enhanced absorption spectroscopy, a
modification of the CRDS technique described in 1998 by Engel et al. [11].
Figure 1:
Figure 2:
CEAS is based on an off-axis arrangement of the cavity and the laser beam. The
beam is injected at a very small angle with respect to the cavity axis. The
beam is usually repeatedly reflected by the mirrors, but the reflection points
are spatially separated. As a result, a dense structure of weak modes is
obtained, or the modes do not occur at all due to overlapping. The system is
much less sensitive to mechanical instabilities. CEAS sensors attain detection
limits of about 10⁻⁹ cm⁻¹ [12, 13]. Another advantage is that, owing to the
off-axis illumination of the front mirror, interference with the source by
optical feedback from the cavity is eliminated.
Figure 3:
The nitrous oxide sensing setup consisted of a source of infrared pulses, an
optical cavity, a detection module and an oscilloscope. Nowadays, the QCL
lasers recently developed by Alpes Lasers are the best light sources for this
application. The FWHM duration of their pulses reaches hundreds of
microseconds, while the repetition rate may be of some kHz. The emission
wavelength can be easily tuned.
Figure 4:
The optical cavity was built of two spherical mirrors whose reflectivity
reaches about 0.9998 at the wavelength of interest (Los Gatos Research, Inc.,
USA).
The radiation leaving the cavity was registered with the low-noise detection
module PVI-2TE (VIGO System S.A., Poland). In the module, a photodetector
(photodiode), a cooling system and a preamplifier are integrated in a common
housing [14]. Such a construction provides the opportunity of room-temperature
operation. Both the mechanical and spectral parameters of the module were
optimized for the N2O sensor application. The photodetector's maximum
responsivity corresponds to the absorption bands of nitrous oxide (Fig. 5).
Figure 5:
Figure 6:
Figure 7: Noise equivalent circuit of the photodiode (photocurrent I_ph, noise
currents I_nph, I_nd, I_nb, shunt resistance R_sh, capacitance C_d) and of the
preamplifier (noise sources I_n, V_n, input resistance R_i, capacitance C_i).
The signal-to-noise ratio at the preamplifier output is

S/N = I_ph / [I_nph² + I_nd² + I_nb² + I_n² + 4kTΔf/R_f + (V_n/R_f)²]^{1/2},   (6)

where R_f is the feedback resistance, k the Boltzmann constant, T the
temperature and Δf the noise bandwidth.
In the N2O detection system the shot noise from the background current I_nb is
negligible; equation (6) then becomes

S/N = I_ph / [I_nph² + I_nd² + I_n² + 4kTΔf/R_f + (V_n/R_f)²]^{1/2}.   (7)
In the case of high frequency, the last term of the denominator should
additionally contain the combination of the impedances across the input of the
preamplifier, i.e. (ωC_d)⁻¹ and (ωC_i)⁻¹. Assuming that V_nd is the voltage
noise of the serial resistance of the photodiode, and that the preamplifier
capacitance C_i is conveniently grouped with the photodiode capacitance C_d,
the noise equivalent signal current, I_n,total, is given by

I_n,total² = [I_nph² + I_nd² + I_n² + 4kTΔf/R_f] + [V_nd² (ωC_d)² + V_n² ω² (C_d + C_i)²].   (8)
There are two terms: a white-noise term in the first square brackets, and a
second term which gives a frequency-proportional increase in the noise current.
Although a capacitor does not add noise, the photodetector noise voltage
(V_nd) and the preamplifier noise voltage (V_n) are increased by the C_d and
C_d + C_i terms, respectively.
4 Experimental results
The main task of the experiments was to check the possibility of N2O detection
with the constructed CEAS system. The research was performed for the wavelength
range from 4.519 μm to 4.536 μm. The measurement procedure began with setting
the appropriate wavelength of interest. Then τ₀ (3) was determined for the
cavity filled with pure N2. Subsequently, the flow of the N2O-N2 mixture
through the cavity was set and the corresponding decay time τ was registered.
The comparison of both signals is presented in Fig. 8.
Figure 8: Exemplary output signals from the cavity filled with the reference
gas N2 (τ₀) (line A) and from the cavity filled with 10 ppm of N2O (τ)
(line B).
The values of the decay times were used to calculate the concentration of N2O
according to relationship (3). However, the line width of the optical
parametric generator was about 0.001 μm; thus it overlapped several absorption
peaks of N2O (Fig. 9). Therefore, in order to determine the absorber
concentration, the effective absorption cross-section was taken into account.
In the vicinity of the investigated lines the mean value of this parameter
reached about 6×10⁻¹⁹ cm².
Figure 9:
During the experiments gas samples containing about 10 ppm of N2O were
investigated. Concentration measurements were carried out at different
wavelengths, and a good agreement between the various results was achieved
(Fig. 10). The differences are caused by the uncertainty of the gas sample
preparation (~10% precision). Furthermore, a 2% uncertainty of the decay time
determination was obtained. In accordance with equation (4), a concentration
detection limit C_L of 10 ppb was achieved.
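The quoted detection limit can be cross-checked with a rough calculation. R = 0.9998, σ = 6×10⁻¹⁹ cm² and the 2% decay-time uncertainty come from the text; the cavity length is an assumed value, since it is not stated here:

```python
# Rough detection-limit estimate: a 2% uncertainty of the decay time
# corresponds to a minimum detectable absorption alpha_min ~ 0.02/(c*tau0),
# with tau0 = L/(c*(1-R)) the empty-cavity decay time.
c = 2.998e10                   # speed of light [cm/s]
R = 0.9998                     # mirror reflectivity (from the text)
L = 50.0                       # assumed cavity length [cm]
sigma = 6e-19                  # effective cross-section [cm^2] (from the text)

tau0 = L / (c * (1.0 - R))                # empty-cavity decay time [s]
alpha_min = 0.02 / (c * tau0)             # minimum detectable absorption [1/cm]
n_min = alpha_min / sigma                 # minimum density [cm^-3]
ppb = n_min / 2.5e19 * 1e9                # relative to air density at NTP
```

With these assumptions the estimate lands at roughly 5 ppb, the same order of magnitude as the 10 ppb limit reported above.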
Figure 10:
Figure 11:
Acknowledgement
This work was supported by the Ministry of Science and Higher Education of
Poland (Project No. OR00002807).
References
[1] Seinfeld J. H. & Pandis S. N. Atmospheric Chemistry and Physics: From
Air Pollution to Climate Change, 2nd Edition, John Wiley & Sons, Inc.,
New Jersey 2006.
[2] Pohanish R. P., Sittig's Handbook of Toxic and Hazardous Chemicals and
Carcinogens (5th Edition), William Andrew Publishing, 2008.
[3] Mosier A. R. & Klemedtsson L., Methods of Soil Analysis, Part 2
Microbiological and Biochemical Properties, Soil Science Society of
America Inc., Madison, Wisconsin 1994.
[4] Shimadzu Scientific Instruments,
http://www.mandel.ca/application_notes/SSI_GC_Green_Gasses_Lo.pdf
[5] Drescher S. R. & Brown S. D., Solid phase microextraction-gas
chromatographic-mass spectrometric determination of nitrous oxide
evolution to measure denitrification in estuarine soils and sediments,
Journal of Chromatography A, 1133(1-2), pp. 300-304, 2006.
Abstract
For up-to-date commercial short-distance communication systems, e.g. RF systems
in the VHF band or WiFi wireless networks, the development of efficient and
reliable algorithms and procedures for the management of radio network
subscriber identity is becoming an increasingly important and urgent need. The
existing systems for that purpose are limited to checking a simple
authorization code against the tables for supervising the RF information. The
identity of a radio network subscriber is verified on the basis of the rule
that hidden messages and hidden responses are generated during the
communication sessions. The concept of the centralized system for the
management of identities is based on the rule that a unique set of PIN numbers
is assigned to each subscriber of the RF communication system. Each PIN number
is repeatedly changed during a communication session in order to defeat the
attack in which the part of the signal including the PIN code is copied from
the communication signal for further replay.
Keywords: VHF radio, hidden communication, hidden authorisation.
1 Introduction
Electronic identity, understood as an unambiguous assignment of a digital ID to
an individual person, fulfils many important functions in contemporary
telecommunications. Mechanisms of identification, authentication and
authorization ensure the non-repudiation of sent information, and the use of
hash-function mechanisms also makes it possible to verify the integrity of the
received information. Safe data transmission through a telecommunication
network requires the sent data to be provided with the following information:
who sent it and who the addressee is; often, confirmation is also required of
who received the data and whether the received data
doi:10.2495/CMEM110421
an insight into the existing threats. These include: data breach, i.e. the
unauthorized revealing of information that compromises the safety, privacy and
integrity of user identification data; identity fraud, i.e. the unauthorized
use of part of personal data for financial gain (it may be done without
identity theft, e.g. with randomly generated credit card numbers); identity
theft, i.e. unauthorized access to personal data; Man-in-the-Middle (MITM),
defined as unauthorized access to personal information in which the perpetrator
is able to decipher, modify and embed information between two parties during a
session without their knowledge; and synthetic identity fraud, denoting a
fictional identity created in order to deceive an organization, e.g. an
identity generated using real social insurance numbers and various first and
last names. Of course, there are many other definitions and notions related to
electronic identity loss. Electronic identity theft is the most common internet
offence, and the losses suffered as a result of an improper distribution and
protection of electronic identity are huge.
According to the aforementioned Javelin's report and the Spendonlife service
[3], in 2008 the United States of America recorded a loss of $31 billion on
account of various forms of identity loss, whereas the total losses in the
world amounted to $221 billion. In the USA, 26% of identity theft cases
involved an unauthorized takeover of credit card numbers and purchases of
material goods made by third parties based on those numbers; 18% was theft of
public utility services (gas, electricity), involving assigning a given service
to a person residing outside their area of permanent residence; 17% was
banking fraud (change of account assets, theft of access codes for ATM
systems); 12% involved theft of social security numbers, e.g. in order to
obtain employment; 5% included loan fraud (applying for a loan on someone
else's behalf, e.g. assigning a social security number to other personal data);
9% was fraud related to taxes, driver's licenses, etc.; and 13% involved other
types of identity theft. In 2008, 10 million people in the US alone became
victims of identity theft, i.e. 22% more than in 2007. In 2007, 1.6 million
households in the USA experienced theft which was unrelated to credit card
losses, but instead to breaches of bank accounts or debit card accounts.
Moreover, 38 to 48% of people notice the theft of their electronic identity
within 3 months, while 9 to 18% do not notice it for 4 or more years.
The seriousness and scale of electronic identity loss has also been noted in recent years by decision-making authorities in the European Union, including the European Commission. One of the many initiatives aimed at solving the problems of assigning and distributing electronic identity is the pilot programme STORK (Secure idenTity crOss boRders linKed) [4], which pertains to cross-border recognition of existing national electronic identity systems (eID), thus allowing access to public services in member states. Several dozen million people in the EU use national eID identity cards when accessing services related to social insurance, filing tax returns and other services. Thus, the project is related to electronic identity management (eIDM) through a system which is a federation of already existing systems. This is a fundamental difference in comparison to other projects realized by the EU regarding identity,
3 Hidden authentication
The need to introduce effective authentication procedures is especially noticeable in military heterogeneous systems which offer speech transmission services. A comprehensive solution to the problem of authentication is called for especially by short-term prototyping and implementation of new systems in accordance with the COTS (commercial off-the-shelf) rule. The use of the network's multiservice character in the concept of Next Generation Networks (NGN) may lead to a new approach to designing innovative and effective solutions in subscriber authentication.
4 Experimental results
In order to realize the procedure of subscriber authentication in telecommunication links, a decision was made to use one of the known authentication models, which involves transmitting a binary signature to subscribers who exchange correspondence between themselves. However, it must be noted that the authentication model has been significantly modified: it is supplemented by a hidden signature sent through a watermark added to the call signal. This chapter presents the results of experiments related to subscriber authentication with the use of an information hiding technique for a VHF radio link. The link uses drift correction modulation as the method of embedding a watermark in the original signal. The results of the experiments carried out are also discussed.
The experiment is supplemented by a test of the hardware watermark encoder and decoder, a Personal Trusted Terminal (PTT), with an algorithm based on the phase angle scanner method that uses a detector of spectral line amplitude instead of a detector of phase angle mistuning values. A description of the method using the detector of spectral line amplitude, as well as a description of the hardware watermark encoder and decoder, may be found in [12] and [13-17].
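The general idea of the authentication model above, a binary signature carried as an inaudible watermark added to the call signal, can be illustrated with a toy spread-spectrum embedder. This is only a sketch under assumed parameters (PN sequence, embedding strength); the actual link uses drift correction modulation [10], which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(host, bits, pn, alpha=0.5):
    """Add +/- alpha * PN chip sequence per signature bit (toy spread-spectrum)."""
    n = len(pn)
    marked = host.copy()
    for i, b in enumerate(bits):
        marked[i * n:(i + 1) * n] += alpha * (1.0 if b else -1.0) * pn
    return marked

def detect(marked, pn, nbits):
    """Recover bits by correlating each segment with the shared PN sequence."""
    n = len(pn)
    return [int(np.dot(marked[i * n:(i + 1) * n], pn) > 0.0) for i in range(nbits)]

bits = [1, 0, 1, 1, 0, 0, 1, 0]            # toy binary signature
pn = rng.standard_normal(1000)             # PN sequence shared by both ends (assumption)
t = np.arange(8000)
host = np.sin(2 * np.pi * 440 * t / 8000)  # stand-in for the speech/call signal
marked = embed(host, bits, pn)
assert detect(marked, pn, len(bits)) == bits
```

A receiver without the PN sequence sees only low-level noise, which mirrors the point made in the conclusions: discovering the hidden message requires analysing every call track for a signal that is designed to be inaudible.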
Figure 1: Scheme of the realized VHF radio link with the use of the watermark encoder and decoder.
5 Conclusions
The presented VHF authentication system does not interfere with the telecommunication infrastructure of currently functioning radio systems. The lack of a special handset with a hidden authorization function does not in any way prevent the realization of a connection, provided the correspondent has a regular handset. An attempt to discover that a connection carries a watermark sent in the background of the call signal is made more difficult by the inaudibility of the watermark and by the necessity to analyze the speech signal for a hidden message in all call tracks realized at a given moment, e.g. during the telecommunication rush hour. The digital watermark is perceptually transparent, inaudible in the host signal's presence, and robust against intentional and unintentional attacks. The developed system allows for transmission, together with the speech signal, of a watermark signal with dedicated
Acknowledgement
This paper has been co-financed from science funds granted for the years 2010-2012 as a research project of the Ministry of Science and Higher Education of the Republic of Poland, No. 0181/R/T00/2010/12.
References
[1] http://eric-diehl.com
[2] Javelin Strategy & Research, 2009 Identity Fraud Survey Report, Feb.
2009, www.javelinstrategy.com
[3] http://www.spendonlife.com/guide/2009-identity-theft-statistics
[4] http://www.eid-stork.eu/
[5] http://ec.europa.eu/information_society/activities/egovernment/policy
[6] http://identity20.com
[7] http://openid.net
[8] www.openideurope.eu
[9] A.J. Elbirt, Who Are You? How to Protect Against Identity Theft, IEEE Technology and Society Magazine, vol. 24, issue 2, pp. 5-8, 2005
[10] Piotrowski Z., Drift Correction Modulation scheme for digital audio watermarking. Proceedings of the 2010 Second International Conference on Multimedia Information Networking and Security (MINES 2010), Nanjing, China, 4-6 November 2010, ISBN: 978-0-7695-4258-4, IEEE Computer Society, Conference Publishing Services (CPS), pp. 392-397
[11] Piotrowski Z., Gajewski P., Method and apparatus for subscriber authorization and audio message integrity verification. European Patent Application No. 09151967.8 (EP 2 085 964 A2), European Patent Office
[12] Gajewski P., Łopatka J., Piotrowski Z., A new method of frequency offset correction using coherent averaging, Journal of Telecommunications and Information Technology, 1/2005, National Institute of Telecommunications, ISSN 1509-4553, Warsaw 2005
[13] Piotrowski Z., Effectiveness of the frequency offset computation procedure, Elektronika, no. 3/2010, pp. 76-79, Wydawnictwo Sigma-NOT, 2010
[14] Piotrowski Z., Zagoździński L., Gajewski P., Nowosielski L., Handset with hidden authorization function, European DSP Education & Research Symposium EDERS 2008, Proceedings, pp. 201-205, Published by Texas Instruments, ISBN: 978-0-9552047-3-9
[15] Piotrowski Z., Nowosielski L., Zagoździński L., Gajewski P., Electromagnetic compatibility of the military handset with hidden authorization function based on MIL-STD-461D results, Progress In Electromagnetics Research Symposium PIERS 2008, Cambridge
Abstract
SAW gas sensors are attractive because of their remarkable sensitivity to changes of the boundary conditions (mechanical, and electrical through the acoustoelectric effect) of the propagating Rayleigh wave, introduced by the interaction of a thin chemically active sensor film with gas molecules. This unusual sensitivity results from the fact that most of the acoustic wave energy is concentrated near the waveguide surface, within approximately one or two wavelengths. In this paper a new theoretical model for analysing a SAW gas sensor is presented. The effect of SAW velocity changes depends on the concentration profile of diffused gas molecules in the porous sensor film. Based on these analytical results, the sensor structure can be optimized. Some numerical results are shown.
Keywords: gas sensor, SAW, acoustoelectric effect, Knudsen diffusion in porous film, numerical modelling.
1 Introduction
A very interesting feature of SAW sensors is the fact that a layered sensor structure on a piezoelectric substrate provides new possibilities of detecting gas, making use of the acoustoelectric coupling between the Rayleigh waves and the free charge carriers in the semiconducting sensor layer. Using a configuration with a dual delay line and an adequately chosen active layer, a sensor with high sensitivity and good temperature stability can be designed [1, 2]. SAW gas sensors are attractive because of their remarkable sensitivity to changes of the boundary conditions of the propagating surface wave. This unusual sensitivity results from the simple fact that most of the acoustic wave energy is concentrated near the crystal surface, within approximately one or two wavelengths.
doi:10.2495/CMEM110431
$$k_m - k_0 = \frac{i\omega\,\Phi(0)\,D_y^{*}(0)}{4P} \qquad (2)$$
where the index m refers to the change of the wave number due to electrical surface perturbations.
WIT Transactions on Modelling and Simulation, Vol 51, 2011 WIT Press
www.witpress.com, ISSN 1743-355X (on-line)
The electrical surface impedance is defined as:
$$Z_E(0) = \left.\frac{\Phi}{i\omega D_y}\right|_{y=0} \qquad (3)$$
Figure 1: Geometry of the structure: surface impedances $Z_E(-h)$ and $Z_E(0)$ of the layer on a Y-Z LiNbO3 substrate, with the SAW propagating along z and the y axis normal to the surface.
Above the surface the potential satisfies the Laplace equation:
$$\nabla^2\Phi = 0 \qquad (4)$$
The unperturbed potential function is therefore:
$$\Phi = \Phi_0\,e^{ky}e^{ikz}, \quad y<0 \qquad (5)$$
and the normal component of the electrical displacement is:
$$D_y = k\varepsilon_0\Phi_0\,e^{ky}e^{ikz}, \quad y<0 \qquad (6)$$
Consequently, the unperturbed surface impedance is:
$$Z_E(0) = \frac{1}{i\omega k\varepsilon_0} \qquad (7)$$
The perturbed normalized surface impedance is:
$$z_E'(0) = \frac{Z_E'(0)}{Z_E(0)} = i\omega k\varepsilon_0\left.\frac{\Phi'}{i\omega D_y'}\right|_{y=0} \qquad (8)$$
where the prime indicates perturbed quantities. The potential $\Phi'(0)$ of the perturbed fields and the electrical displacement $D_y'(0)$ are now related to the unperturbed fields:
$$\Phi'(0) = \Phi(0) + A \qquad (9)$$
$$D_y'(0) = D_y(0) + k\varepsilon_p^{T}A \qquad (10)$$
Eliminating the amplitude A gives:
$$\frac{D_y'(0)}{D_y(0)} = \frac{\varepsilon_0 + \varepsilon_p^{T}}{\varepsilon_0 + i\,\varepsilon_p^{T}z_E'(0)}\,, \qquad \frac{\Phi'(0)}{\Phi(0)} = \frac{\varepsilon_0 + \varepsilon_p^{T}}{\varepsilon_0 + i\,\varepsilon_p^{T}z_E'(0)} \qquad (11)$$
Finally, we obtain:
$$\frac{k - k_0}{k_0} = \left.\frac{\Delta v}{v_0}\right|_{sc}\frac{1 + i\,z_E'(0)}{1 + i\,\dfrac{\varepsilon_p^{T}}{\varepsilon_0}\,z_E'(0)} = \frac{K^2}{2}\,\frac{1 + i\,z_E'(0)}{1 + i\,\dfrac{\varepsilon_p^{T}}{\varepsilon_0}\,z_E'(0)} \qquad (12)$$
where $K^2 = 2\left.\dfrac{\Delta v}{v_0}\right|_{sc}$ is the electromechanical coupling factor, and the index sc indicates perturbation due to an electrical short circuit on the boundary. Eq. (12) is the Ingebrigtsen formula for electrical surface perturbations.
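As a numerical illustration of the Ingebrigtsen perturbation of eq. (12): for a purely conductive surface film the result reduces to the widely quoted compact acoustoelectric form $\Delta v/v_0 = -(K^2/2)\,\sigma_s^2/(\sigma_s^2 + (v_0 C_S)^2)$. The parameter values below (Y-Z LiNbO3) are typical textbook values assumed for illustration, not taken from this paper:

```python
EPS0 = 8.854e-12        # vacuum permittivity, F/m
K2   = 0.046            # electromechanical coupling K^2 for Y-Z LiNbO3 (assumed)
V0   = 3488.0           # unperturbed SAW velocity, m/s (assumed)
CS   = 50.0 * EPS0      # effective capacitance per unit length ~ eps0 + epsP (assumed)

def dv_over_v(sigma_s):
    """Compact acoustoelectric limit of the Ingebrigtsen perturbation:
    dv/v0 = -(K^2/2) * sigma_s^2 / (sigma_s^2 + (v0*Cs)^2)."""
    return -0.5 * K2 * sigma_s ** 2 / (sigma_s ** 2 + (V0 * CS) ** 2)

# The response goes from 0 (insulating film) to -K^2/2 (highly conducting film),
# with the steepest slope around the characteristic conductivity sigma_s = v0*Cs.
for s in (1e-9, V0 * CS, 1e-3):
    print(f"sigma_s = {s:.3e} S -> dv/v0 = {dv_over_v(s):+.3e}")
```

The maximum sensitivity near $\sigma_s = v_0 C_S$ is what makes the choice of sensing-layer conductivity critical for SAW sensor design.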
2.2 Thin semiconducting sensor layer
In the case of the semiconducting layer, the surface impedance may be determined by considering the motion of the charge carriers in the layer. Let us assume that the semiconductor layer on the piezoelectric substrate is thinner than a Debye length. In the one-dimensional case the equation of the current density in the direction z (fig. 1) is expressed as [6]:
$$I_z = \mu\sigma_{01}E_z - D\,\frac{d\sigma_1}{dz} \qquad (13)$$
where $\sigma_{01}$ and $\sigma_1$ are the intrinsic and whole charge density per unit length, respectively. The quantities $\mu$ and $D$ are the carrier mobility and diffusion constant, respectively, and $E_z$ is the electric field intensity in the direction z. For a time dependence of $E_z$ expressed as $\exp i(\omega t - kz)$, the continuity equation [8] concerning current density in the sensor layer is:
$$\frac{dI_z}{dz} + i\omega\sigma_l = 0 \qquad (14)$$
The surface density of charge carriers $\sigma_s = \sigma_l/a$ ($a$ is the width of the layer) induced in the layer by the electric field $E_z$ will be:
$$\sigma_s = \frac{\mu\sigma_{01}kh}{1 + i\left(\dfrac{\omega_c}{\omega} + \dfrac{\omega}{\omega_D}\right)}\,E_z \qquad (15)$$
where $\omega_c = \sigma_0/\varepsilon$ is the relaxation frequency, $\omega_D = v_0^2 q/(\mu k_B T)$ is the diffusion frequency, $k_B$ is the Boltzmann constant and $q$ is the elementary charge.
The normalized surface impedance of the layer is then:
$$z_E'(0) = \frac{i}{1 + i\,\dfrac{\sigma_s}{\varepsilon_0 v_0}} \qquad (16)$$
where the ratio
$$C_S = \frac{\sigma_s}{\varepsilon_0 v_0} \qquad (17)$$
is the normalized sheet conductivity. In a porous film the transport of gas molecules is governed by the Knudsen diffusion coefficient:
$$D_K = \frac{4r}{3}\sqrt{\frac{2RT}{\pi M}} \qquad (18)$$
If the pore radius r > 2 nm, a free molecular flow exists. That means that the gas molecules collide more frequently with the pore boundaries than with other gas molecules. The original studies of free-molecule flow were limited to small holes in very thin plates. Gas molecules are consumed rapidly or slowly due to the surface reaction in the sensing layer [7].
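A quick numerical check of the Knudsen diffusion coefficient $D_K = (4r/3)\sqrt{2RT/(\pi M)}$ of eq. (18) for hydrogen; the pore radius and temperature below are illustrative assumptions:

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def knudsen_diffusivity(r, T, M):
    """Knudsen diffusion coefficient D_K = (4r/3) * sqrt(2RT/(pi*M)).
    r: pore radius [m], T: temperature [K], M: molar mass [kg/mol]."""
    return (4.0 * r / 3.0) * math.sqrt(2.0 * R_GAS * T / (math.pi * M))

# Hydrogen (M = 2.016 g/mol) at 350 C in pores of radius 2 nm (assumed values)
d_k = knudsen_diffusivity(r=2e-9, T=623.15, M=2.016e-3)
print(f"D_K = {d_k:.2e} m^2/s")
```

Because $D_K$ scales with $\sqrt{T/M}$, light gases such as hydrogen diffuse through the pores markedly faster than heavier target gases at the same temperature.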
Let us consider the gas diffusion in a porous thin semiconducting film, as shown in fig. 2. Two assumptions, i.e. Knudsen diffusion and a first-order surface reaction, allow us to formulate the well-known diffusion equation [7, 9]:
$$\frac{\partial C_A}{\partial t} = D_K\frac{\partial^2 C_A}{\partial x^2} - kC_A \qquad (19)$$
Figure 2: The porous sensor layer (between y = -h and y = 0) on the piezoelectric substrate, with the gas phase above the surface and the SAW propagating along the surface.
The general solution is:
$$C_A = C_1\exp\!\left(y\sqrt{\frac{k}{D_K}}\right) + C_2\exp\!\left(-y\sqrt{\frac{k}{D_K}}\right) \qquad (20)$$
With the boundary condition $dC_A/dy = 0$ at $y = 0$, the steady-state concentration profile becomes:
$$C_A(y) = C_{As}\,\frac{\cosh\!\left(y\sqrt{k/D_K}\right)}{\cosh\!\left(h\sqrt{k/D_K}\right)} \qquad (21)$$
$C_{As}$ is the target gas concentration outside the film, on the surface y = -h. The concentration profile depends on the thickness of the sensor layer and on the constants k and $D_K$. Fig. 3 presents an example of the gas molecule concentration profile. Similarly to resistance sensors, let us now assume that the electrical conductance $\sigma(y)$ of the sensor layer is linear in the gas concentration $C_A(y)$ [7]:
$$\sigma(y) = \sigma_0\left[1 \pm a\,C_A(y)\right] \qquad (22)$$
where $\sigma_0$ is the initial layer conductance in air, $a$ is the sensitivity coefficient, and the $\pm$ sign depends on the combination of the layer conductivity type and the oxidation or reduction properties of the target gas. Experimental data for H2 have been presented by Sakai et al. [7], where $C_{As}$ was fixed at 800 ppm at a temperature of 350°C, and $a$ was assumed to be 1 ppm$^{-1}$. For these data the sensitivity data for H2 fit fairly well to the correlation line for $\sqrt{k/D_K} = 0.01$ nm$^{-1}$.
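The concentration profile of eq. (21) and the linear conductance law of eq. (22) are straightforward to evaluate numerically. The sketch below uses the parameters quoted in the text ($\sqrt{k/D_K}$ = 0.01 nm$^{-1}$, a = 1 ppm$^{-1}$, $C_{As}$ = 800 ppm) together with an assumed layer thickness:

```python
import math

def concentration(y, h, kappa, c_as):
    """Steady-state profile of eq. (21): C_A(y) = C_As*cosh(kappa*y)/cosh(kappa*h),
    with y in [-h, 0] measured from the substrate and kappa = sqrt(k/D_K)."""
    return c_as * math.cosh(kappa * y) / math.cosh(kappa * h)

def conductance(y, h, kappa, c_as, sigma0=1.0, a=1.0, sign=+1):
    """Local conductance of eq. (22): sigma(y) = sigma0 * (1 +/- a*C_A(y))."""
    return sigma0 * (1.0 + sign * a * concentration(y, h, kappa, c_as))

H, KAPPA, C_AS = 300.0, 0.01, 800.0   # nm, nm^-1, ppm (h is an assumed thickness)

# Concentration falls from C_As at the outer surface (y = -h) toward the substrate
surface = concentration(-H, H, KAPPA, C_AS)
substrate = concentration(0.0, H, KAPPA, C_AS)
print(f"C_A(-h) = {surface:.1f} ppm, C_A(0) = {substrate:.1f} ppm")

# Per-sub-layer conductances for a stack of n thin sheets, as in fig. 4
n = 10
sigmas = [conductance(-H + i * H / n, H, KAPPA, C_AS) for i in range(n)]
```

Each entry of `sigmas` feeds one sub-layer of the impedance transformation described next, which is exactly why the film cannot be reduced to a single integrated conductance as in resistance sensors.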
The electrical conductance of the whole film is obtained by integrating $\sigma(y)$ over the whole range of y (y = -h to 0). That treatment has been proposed by Sakai et al. and Williams [7, 10]. A semiconductor layer in the SAW sensor cannot be treated in the same way. Because the profile of the gas molecule concentration in a semiconducting sensor layer changes with the distance from the piezoelectric substrate (fig. 3), the acoustoelectric interaction differs in every sheet of the layer. In order to analyze such a sensor layer we assume that the film is a uniform stack of infinitesimally thin sheets (fig. 4) with a variable concentration of gas molecules and a different electric conductance. Each sub-layer is in
Figure 3: Normalized gas concentration profile $C_A/C_{As}$ as a function of the distance y [nm] from the piezoelectric substrate.
Figure 4: The sensor layer divided into n infinitesimally thin sub-layers (1, 2, 3, ..., n) on the piezoelectric substrate.
The normalized surface impedance inside the layer is:
$$z_E'(y) = \frac{i k_0\varepsilon_0\left(e^{k_0 y} + \dfrac{B}{A}\,e^{-k_0 y}\right)}{k_0\varepsilon_0\left(e^{k_0 y} - \dfrac{B}{A}\,e^{-k_0 y}\right)} \qquad (23)$$
Figure 5: Scheme of the computation for the layered sensor: the diffusion equation yields the gas concentration profile in the sensor layer (sub-layers 1, 2, 3, ...); the impedance transformation law and the Ingebrigtsen formula then give the resultant admittance and the relative changes $\Delta v/v = \Delta f/f$.
The constant B/A is evaluated by setting y = -h in (23), and this gives the impedance transformation law for a single layer at the distance -h:
$$z_E'(0) \qquad (24)$$
It can be shown that the Ingebrigtsen formula for n sub-layers takes the following form [12]:
$$\frac{\Delta v}{v_0} = \frac{K^2}{2}\,\operatorname{Re}\frac{k}{k_0} \qquad (25)$$
where $\operatorname{Re}(k/k_0)$ is built from products over the sub-layers $i = 1,\dots,n-1$ of terms $(1 + aC_A)^2$, $\left[1 + f(y_i,\sigma(y_i))\right]$ and $\left[1 + g(y_i,\sigma(y_i))\right]$ (evaluated here at $T_1$ = 300 K). The functions f and g of eq. (26) are rational functions of $\tanh(ky)$ and of the normalized sheet conductivity $C_S = \sigma_s/(\varepsilon_0 v_0)$:
$$f(y_i, C_S) = \frac{1 + \tanh(ky)}{1 + \tanh(ky) + \tanh(ky)\,C_S}\,, \qquad g(y_i, C_S) = \frac{1 - \tanh^2(ky)\,\tanh(ky)\,C_S}{1 + \tanh^2(ky)\,\tanh(ky)\,C_S} \qquad (26)$$
Figure 6: Relative change of SAW velocity for H2, CO2, NO2 and NH3 as a function of gas concentration.
Figure 7: Relative change of SAW velocity as a function of temperature [K].
Figure 8: (a) Sensor response $(f_{max} - f)/f_{max}$ (×10$^{-4}$) versus time [s] for sensor layer thicknesses of 50, 100 and 150 nm, for alternating gas concentrations Air/2% H2 - Air/1.5% H2 - Air/1% H2 - Air/0.5% H2 and back; (b) response for H2 as a function of layer thickness [nm].
5 Conclusions
In this paper a new analytical model of a SAW gas sensor has been presented, together with numerical results for the gases H2, CO2, NO2 and NH3 and WO3 sensing layers. The profile of the gas concentration in the sensor layer has been applied in order to model the acoustoelectric effect in the SAW gas sensor. The porous semiconductor layer has been divided into sub-layers, and the influence of the impedance above the piezoelectric substrate on the relative change of the SAW velocity has been calculated. Knudsen diffusion is assumed in the sensor layer.
The analytical results are compatible with experimental results. The new analytical model may be used to optimize the structure of SAW gas sensors.
Acknowledgement
This work is financed by a grant from the Ministry of Science and Higher Education, No. N N505 374237.
References
[1] W. Jakubik, M. Urbańczyk, E. Maciak, T. Pustelny, Bilayer structures of NiOx and Pd in Surface Acoustic Wave and Electrical gas sensor systems, Acta Phys. Pol. A, vol. 116, no. 3, pp. 315-320, 2009.
[2] W. Jakubik, Investigation of thin film structures of WO3 and WO3 with Pd for hydrogen detection in a surface acoustic wave sensor system, Thin Solid Films, 515, pp. 8345-8350, 2007.
[3] G. Kino, T. Reeder, A normal mode theory for the Rayleigh wave amplifier, IEEE Transactions on Electron Devices, ED-18, 10, October 1971.
[4] W. Jakubik, M. Urbańczyk, The electrical and mass effect in gas sensors of the SAW type, J. Tech. Phys., 38(3), pp. 589-596, 1997.
[5] B.A. Auld, Acoustic Fields and Waves in Solids, vol. 2, J. Wiley and Sons, NY, 1973.
[6] D. L. White, Amplification of ultrasonic waves in piezoelectric semiconductors, J. Appl. Phys., Aug. 1962.
[7] G. Sakai, N. Matsunaga, K. Shimanoe, N. Yamazoe, Theory of gas-diffusion controlled sensitivity for thin film semiconductor gas sensor, Sensors and Actuators B, 80, pp. 125-131, 2001.
[8] K.M. Lakin, H.J. Shaw, Surface wave delay line amplifiers, IEEE Trans. MTT-17, pp. 912-920, 1969.
[9] J. W. Gardner, A non-linear diffusion-reaction model of electrical conduction in semiconductor gas sensors, Sensors and Actuators, B1, pp. 166-170, 1990.
[10] D. E. Williams, Solid State Gas Sensors, Adam Hilger, Bristol, 1987.
Section 7
Advances in measurements
and data acquisition
Abstract
In order to realize prevention and ecological risk analysis systems, world environmental policy (UNEP, IMO, etc.) is implementing complex decision systems based on economically sustainable activities, including forecasting models, satellite images and sustainable observatory networks.
Oceanographic measurement networks play a key role in satellite data calibration and in the validation and feeding of mathematical models, as well as in supporting early warning systems for environmental pollution control and prevention.
The high costs of offshore mooring systems and traditional oceanographic cruises have suggested the use of VOS (Voluntary Observing Ships) to obtain affordable data.
Moreover, marine coastal areas can be monitored using small measuring platforms integrating on demand various measuring systems (meteorological stations, water samplers, automatic chemical analyzers, in situ and pumped oceanographic probes, etc.).
For this purpose, a major effort has been dedicated to the design, development and realization of new oceanographic devices.
This paper shows the advances in new technological devices: the TFLAP (Temperature Fluorescence LAunchable Probe) and the Automatic Multiple Launcher for expendable probes, to be used by VOS in open seas, and a coastal buoy to be moored near Civitavecchia as a starting point of integrated coastal monitoring networks.
doi:10.2495/CMEM110441
1 Introduction
The focus of international policy on the seas and oceans is steadily increasing due to growing collective awareness of their importance to man and his activities.
To the traditional uses of the sea and its resources, and to the essential role that the oceans and phytoplankton play in the climate balance, we constantly add new and important roles, such as the exploitation of renewable energy and the marine conservation of biodiversity, conservation understood as preserving a pool of organic molecules and pharmaceutical compounds of possible future interest to humanity.
Oceanography sets itself up as a synthesis science that can uniquely address the complex physical, geochemical and biological processes occurring in the sea: "Oceanography personifies interdisciplinary science of the blue planet, Earth", as reported by Dickey and Bidigare [1].
Operational oceanography has been engaged in the development of new acquisition, transmission and assimilation systems in order to have the widest possible coverage of real-time information, reflecting the guidelines of the World Meteorological Organization (WMO) and of the Intergovernmental Oceanographic Commission (IOC).
Moreover, the physical and biological processes of marine ecosystems have a high spatial and temporal variability, whose study is possible only through high-resolution and synoptic observations that require the simultaneous use of different platforms. Until satellites appeared in the early 1970s, oceanography had endured a long period of undersampling, so that "the most profound effect of satellite oceanography was that for the first time ocean processes were adequately sampled", as stated by Munk [2].
However, the satellites, the data assimilation of mathematical models and the compliance with the standards for coastal studies require in situ data. Even more than the physical variables, the biological ones have to be observed in situ, as reported by Marcelli et al. [3].
Attention to the state of marine environments is growing worldwide, and the assessment of their resources always needs innovative methodologies, in order to develop policies and environmental governance for the sustainable management of marine ecosystems.
Environment monitoring systems and networks have been designed and presented over the years by various authors, among which Carof et al. [4], Grisard [5], Eriksen [6], Griffiths et al. [7], Irish et al. [8], Paul [9], Seim et al. [10], Zappalà et al. [11], Nittis et al. [12], Crisafi et al. [13], Zappalà et al. [14, 15].
Figure 1:
Originally designed to work with standard XBT probes, the system was enhanced also to manage TFLAPs, with a new serial interface and new communication and data display routines.
An example of the temperature and fluorescence profiles measured using the TFLAP is shown in Fig. 2.
Figure 2: Example of temperature and fluorescence (FLS) profiles measured using the TFLAP.
2.2 Moorings
The buoy (or, more properly, the platform) was originally moored in Siracusa (Sicily) coastal waters as part of a monitoring network funded by the Italian Ministry for University and Research, described by Zappalà et al. [22, 23].
Figure 3:
2.2.2 The data acquisition and transmission systems and the control software
The data acquisition system uses PC/104 boards that implement a PC-like architecture; the technological developments of recent years have made available new, more powerful electronic boards to replace the Intel 386 used in the first buoy version and the Pentium family board adopted for the first version of the automatic multiple launcher for expendable probes.
In the current standard version, up to 12 serial ports are available to connect measuring instruments, switched on and off by solid state and electromechanical relays.
To monitor the power system status (battery level, solar panel and wind generator voltage), a 12-bit resolution Analog-to-Digital Converter (ADC) board with 16 single-ended channels is installed; it is also possible to add a 16-bit resolution ADC board to connect sensors with voltage or current output.
The original GSM modem has been replaced by a GSM-GPRS one, able, thanks to an embedded TCP/IP stack, to connect directly to the Internet and transfer data as e-mail messages, thus allowing the base station to be de-localized and the acquired data to be better disseminated.
A GPS receiver is included in the system to control its position: should the platform go out of the allowed area range, an "unmoored platform" alarm is sent to the base station and to selected cellular phones.
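The "unmoored platform" check amounts to a distance test against the mooring point; a minimal sketch follows, in which the watch-circle radius and the coordinates are illustrative assumptions, not the buoy's actual configuration:

```python
import math

EARTH_R = 6371000.0  # mean Earth radius, m

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude positions."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2.0 * EARTH_R * math.asin(math.sqrt(a))

def inside_watch_circle(gps_fix, mooring, max_range_m=150.0):
    """True if the platform is still inside the allowed area around its mooring."""
    return haversine(*gps_fix, *mooring) <= max_range_m

mooring = (42.094, 11.793)                        # Civitavecchia area (illustrative)
assert inside_watch_circle((42.0941, 11.7931), mooring)   # small GPS scatter: OK
assert not inside_watch_circle((42.11, 11.81), mooring)   # ~2 km away: raise alarm
```

In practice the threshold must exceed the mooring's swing radius plus the GPS error budget, otherwise normal tidal excursions would trigger false alarms.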
The system software is a new release of the original one, first described by Zappalà [24], further expanding the enhancements reported by Zappalà [25]. It is coded partly in compiled BASIC and partly in Assembly in a DOS-like environment; the remote control software is written in Visual Basic in a Windows environment.
Basically, it consists of a "time machine" executing, at pre-defined times, sequences of macro-commands able to fully control the instruments and the data acquisition and transmission systems.
The sequences can be different for each time and are remotely reprogrammable without suspending system activity.
Acquired data are immediately stored on the local system disk (a solid state one), then formatted and sent via e-mail.
Since the messages are sequentially numbered, the receiving system can easily detect a missing message automatically and ask for its retransmission, simply by sending an e-mail message or an SMS to the buoy.
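The sequential numbering makes gap detection trivial on the receiving side; the sketch below shows the idea (the function name and message format are assumptions, not the actual base-station software):

```python
def missing_messages(received_ids):
    """Given the sequence numbers of the e-mails received so far, return the
    numbers that must be requested again from the buoy."""
    if not received_ids:
        return []
    expected = range(min(received_ids), max(received_ids) + 1)
    return sorted(set(expected) - set(received_ids))

# Messages 103 and 105 never arrived: ask the buoy (by e-mail or SMS) to resend them
assert missing_messages([101, 102, 104, 106]) == [103, 105]
```

Because the request channel is just another e-mail or SMS, no dedicated acknowledgement protocol is needed on the low-bandwidth GPRS link.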
2.2.3 The pumping system
The pumping system (shown during refitting operations in Fig. 4) uses a peristaltic pump to drive water from five depths to feed instruments (e.g. multiparametric probes, water samplers, colorimetric analyzers) in a measurement chamber or in an open-flow piping; after the measurements, the system can be automatically washed with fresh water contained in a reservoir. Of course, possible alterations of sample temperature and dissolved oxygen must be taken into account; no influence has been noticed on other parameters,
Figure 4: The water pumping system in the buoy (left) and on the bench (right) during refitting operations.
References
[1] Dickey, T. D. & Bidigare, R. R., Interdisciplinary oceanographic
observations: the wave of the future, Scientia Marina, 69(Suppl. 1), 23-42,
2005.
[2] Munk, W., Oceanography before, and after, the advent of satellites,
Satellites, Oceanography and Society, edited by David Halpern, 9 Elsevier
Science B.V., 2000.
[3] Marcelli, M., Di Maio, A., Donis, D., Mainardi, U. & Manzella, G.M.R..
Development of a new expendable probe for the study of pelagic
ecosystems from voluntary observing ships. Ocean Sci., 3, 1-10, 2007.
[4] Carof, A. H., Sauzade, D. & Henocque, Y., Arcbleu, an integrated surveillance system for chronic and accidental pollution. Proc. of OES-IEEE OCEANS '94 Conference, III, pp. 298-302, 1994.
[5] Grisard, K., Eight years experience with the Elbe Estuary environmental
survey net. Proc. of OES-IEEE OCEANS 94 Conference, I, pp. 38-43,
1994.
[6] Eriksen, C. C., Instrumentation for Physical Oceanography: the last two
decades and beyond. NSF APROPOS Workshop Ailomar, CA 15-17
December 1997.
[7] Griffiths, G., Davis, R., Eriksen, C., Frye, D., Marchand, P. & Dickey, T.,
Towards new platform technology for sustained observations. Proc. of
OceanObs 99, http://www.bom.gov.au/OceanObs99/Papers/Griffiths.pdf
[8] Irish, J. D., Beardsley, R. C., Williams, W. J. & Brink, K. H., Long-term
moored observations on Georges Bank as part of the U. S. Globec
Northwest Atlantic/Georges Bank program. Proc. of MTS-IEEE OCEANS
99 Conference, I, pp. 273-278, 1999.
[9] Paul, W., Buoy Technology. Marine Technology Society Journal, 35(2),
pp. 54-57, 2001.
[10] Seim, H., Werner, F., Nelson, J., Jahnke, R., Mooers, C., Shay, L.,
Weisberg, R. & Luther, M., SEA-COOS: Southeast Atlantic Coastal Ocean
[11]
[12]
[13]
[14]
[15]
[16]
[17]
[18]
[19]
[20]
[21]
[22] Zappalà, G., Caruso, G. & Crisafi, E., The SAM integrated system for coastal monitoring. Proc. of the 4th Int. Conf. on Environmental Problems in Coastal Regions, Coastal Environment IV, ed. C.A. Brebbia, WIT Press: Southampton, pp. 341-350, 2002.
[23] Zappalà, G., Caruso, G., Azzaro, F. & Crisafi, E., Integrated environment monitoring from coastal platforms. Proc. of the Sixth International Conference on the Mediterranean Coastal Environment, ed. E. Ozhan, MEDCOAST, Middle East Technical University: Ankara, 3, pp. 1007-1018, 2003.
[24] Zappalà, G., A software set for environment monitoring networks. Proc. of the Int. Conf. on Development and Application of Computer Techniques to Environmental Studies X, Envirosoft 2004, eds. G. Latini, G. Passerini & C. A. Brebbia, WIT Press: Southampton, pp. 3-12, 2004.
[25] Zappalà, G., A versatile software-hardware system for environmental data acquisition and transmission. WIT Transactions on Modelling and Simulation, eds. C. A. Brebbia & G. M. Carlomagno, WIT Press: Southampton, 48, pp. 283-294, 2009.
Abstract
This paper provides a snapshot of the current work of the Technical Investigation Department (TID) of Lloyd's Register EMEA, focusing on how computer-based tools have been integrated to collect, process and analyse data. There are three main parts to the paper: (1) a description of the data acquisition and processing systems, both traditional analogue and more modern digital systems; (2) a discussion of the analysis techniques, categorised into spreadsheets, signal processing, and engineering design and predictive analysis; (3) example combinations on jobs in both marine and non-marine industries.
The paper concludes with a look ahead to developments foreseen in the coming years.
Keywords: data, acquisition, processing, technical, investigation.
1 Introduction
Since its formation in 1947, Lloyd's Register EMEA's Technical Investigations Department (TID) has been invited to investigate a wide range of engineering problems across the marine, land-based industrial and offshore oil and gas industries (Carlton and Bantham [1]). The experience gained from these investigations has been used to develop new and innovative measurement and analysis techniques alongside more orthodox technologies. The department's accumulated experience and contemporary measurement toolkit allow TID to offer services for an extensive range of investigations.
This paper looks at the data acquisition and processing tools and the analysis techniques available. Through a series of case studies, the paper demonstrates how a technical investigation department can combine measurement, analysis,
doi:10.2495/CMEM110451
- acceleration;
- velocity;
- displacement;
- frequency;
- pressure;
- force;
- inclination.
small hole of the order of 15 mm diameter in the ship's hull is required to obtain
observations. This technique also allows TID to refer back to the observation
when performing analysis, resulting in a more informative investigation.
Example images obtained from this method can be seen in Figure 1.
Figure 1.
Figure 2: Tubular K joint instrumented with acoustic emission sensors and strain gauges undergoing fatigue testing.
Figure 3: An acoustic emission sensor attached to a sub-sea node joint.
Figure 4: Instrumented sledgehammer.
Figure 5: Seismic accelerometers.
Figure 6.
Figure 7: Telemetry equipment attached to a gear wheel.
these measurements are recorded and later downloaded onto a PC for subsequent
analysis if required. Such devices are used for the following measurements:
- sound;
- vibration;
- dye penetrant;
- presence of cracks, using Magnetic Particle Inspection (MPI);
- thickness, using ultrasonics.
2.7 Calibration
Knowledge of the uncertainty of the measurement is vital to the success of the
investigation. As a general rule equipment is calibrated in a controlled laboratory
environment before and after an investigation. This provides confidence in the
measurement data and ensures that equipment has not been damaged or altered
during transit.
TID use an externally calibrated vibration table to check the frequency
response of accelerometers and velocity transducers. Sensors are calibrated with
their associated cabling to remove the effects of resistance between cable
lengths. The TID laboratory also has the equipment required to calibrate
pressure, eddy current and other types of transducer.
3 Analysis techniques
TID have a range of analysis techniques available; their application depends on
the type of investigation and is often adapted to suit the individual requirements
of the specific job.
- statistics;
- time domain;
- frequency domain;
- JTFA (joint time frequency analysis);
- wavelets and filtering;
- 2D and 3D graphical presentation;
- modal analysis.

- propeller hydrodynamics;
- ship performance;
- shafting systems analysis;
- diesel engine performance;
- gear tooth load distribution.
Figure 8.
Figure 9: Vortex generator.
Figure 10: Propulsion shafting.
Figure 11.
main reduction gear elements and alignment of a sister ship and to conduct load
tests while still in service.
Micro strain gauges were installed at the roots of the main wheel and
measurements were conducted during sea trials to measure the gear tooth load
distribution of the second reduction gear elements (Figures 12 and 13). Strain
gauges were also used to measure the gearbox and propulsion shaft bearing loads
and were confirmed using jacking measurements. Additionally, the main wheel
pin readings were measured and Magnetic Particle Inspection (MPI) of the low
speed gear elements was undertaken.
Figure 12: Main gear wheel.
Figure 13.
Figure 14.
Figure 16.
point the raw data history would be recovered via an acoustic telemetry link and
used to help identify the existence of a leak.
References
[1] Carlton, J.S. & Bantham, I., The Technical Investigation Department: 50
Years of Operation. Lloyd's Register Technical Association, 1998.
[2] Rogers, L.M., Structural and Engineering Monitoring by Acoustic
Emission Methods: Fundamentals and Applications. Lloyd's Register
Technical Association, pp. 5-25, 2001.
[3] Technical Matters, Lloyd's Register EMEA, Issue 1, 1997-1998.
[4] Technical Matters, Lloyd's Register EMEA, Issue 3, 2009.
Abstract
By analyzing the laws of growth and reproduction of microorganisms in the
activated sludge (AS) sewage treatment system, this paper proposes a
multi-neural-network (NN) COD (chemical oxygen demand) soft-measuring
method based on a clustering approach for sewage treatment projects. The
various factors that might affect the accuracy of the model are analyzed.
Experiments show that the diffusion-constant radii of the multiple neural
networks are quite close, which means the prediction accuracy is high and the
soft-measuring method based on the clustering approach is suitable for COD
measurement.
Keywords: sewage treatment, clustering approach, neural network,
microorganisms.
1 Introduction
Cybenko [1] proved theoretically that, given plenty of training data and no
restriction on the size of the network, modelling based on a NN can always
achieve a satisfactory model structure. In actual industrial processes, however,
people often face limited effective process data, and due to real-time
requirements the network structure cannot be expanded without limit. The
modeling effect normally relies on the good generalization ability of the
network. Paper [2] proposed a robust classification method combining different
neural-network models with a fuzzy combination approach. Paper [3] presented
a method to improve model prediction accuracy and robustness by adding
different models together. Paper [4] discussed a stacked neural network
approach for process modeling. The basic idea of these
doi:10.2495/CMEM110461
Figure 1.
Figure 2.
(1)
Figure 3: Normalized value of sulphide.
Figure 4: I-COD and E-COD values.
It can be seen from Figs. 5(a) to (c) that the variation trends of T, S, pH, Q
and MLSS do not have any correspondence with the removal ratio of COD or its
time delay, while Fig. 5(d) shows an obvious correspondence between the COD
removal ratio and the I-COD values. Furthermore, from the definition of the
COD removal ratio ((COD_in - COD_out)/COD_in), it is quite reasonable to
choose the I-COD value as the main factor, which is also consistent with
Zhang's orthogonal-test results [11].
Figure 5: (a)-(d).
Figure 6: Data distribution.
between the two methods. For a new process input datum, the fuzzy clustering
unit of the PCA-DRBF method first identifies its membership for each
sub-network, and according to the different memberships the outputs of all
sub-networks are integrated as the total output of the entire distributed network.
Because the cluster radius r equals 0.05 in this paper, the cluster centers are
fixed at 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85 and 0.95. According to the
characteristics of the samples, the cluster centers in this paper are limited to
0.15, 0.25, 0.4 and 0.75. The sub-network to which each new input belongs is
determined by Euclidean distance. In addition, to reflect the fuzziness, using the
formulation of kernel radius presented in reference [9], this paper defines the
kernel radius as 0.005, which means that if an input datum lies within the
interval [-0.005, 0.005] it can be viewed as belonging to two sub-networks
rather than being classified strictly. In the training process, a comparison of the
output values of both sub-networks with the test value determines which
sub-network is more suitable for such an input datum; that sub-network is
retained in the retention program. In the forecasting process, the calculation is a
little more complicated: first, the correlation degree between an input value
lying in the interval [-0.005, 0.005] and the corresponding sample in the
retention program is calculated [12], then the correlation degree is compared
with a set value to determine which subnet is chosen. Through the above fuzzy
approach, the complexity of the wastewater treatment process is fully taken into
consideration.
Assuming that there are N sub-networks {RBF_i, i = 1,…,N} in the distributed
network shown in Fig. 8, and that the membership degree set of a new input
variable x (influent COD value) for each sub-network is {μ_i, i = 1,…,N}, the
output of the whole neural network Y is:

Y = Σ_(i=1..N) μ_i f_RBFi(x),  μ_i = 0 or 1.  (2)

Figure 7: PCA-DRBF architecture (PCA unit, fuzzy classifier, influent COD input x).
Figure 8: RBF architecture.

d_i = ||x - x_center,i||,  (3)

μ_i = 1, μ_(j≠i) = 0, if 0 ≤ d_i ≤ 0.005, i = 1,…,N.  (4)

If the input datum x lies within the interval [-0.005, 0.005], then, assuming
there are two sample groups X1 and X2, corresponding to subnet i and subnet
i+1 respectively, based on the calculation of d(X, X1) and d(X, X2) the final
subnet is chosen by comparing the relative sizes of the two membership degrees.
4.1 Evaluation of the model adaptability
Through the clustering analysis, there are only 8 data points in the first class.
Taking 7 of the 8 as training samples and the remaining one as input,
researchers can always find the best-fit point because of the convergence
performance of the RBF network. However, because the sample size is too
small, the expansion constant distribution of the network does not show any
regularity and cannot be used for accurate forecasting. Similar problems exist in
the third and fifth classes. Taking relative error (RE) and recognition rate (RR)
as criteria for assessing the forecast accuracy of the RBF network, the forecast
analysis is performed on the data of the second class.
RE = |Y - P_test| / P_test × 100%  (5)

RR = m/n × 100%  (6)

where m is the number of actual outputs satisfying the condition
|Y - P_test| / P_test × 100% ≤ δ, δ is the forecast precision (δ = 10% in this
paper) and n is the total number of outputs. The parameters Y and P_test can be
understood in two ways: either as the desired and actual outputs respectively, or
as the anti-normalized values of the desired and actual outputs. This paper
adopts the second interpretation. Compared with reference [8], the
differences include: (1) calculation accuracy. The relative error of the model
proposed in this paper ranges from 0.06% to 23.01%, the average relative error
is 7.66% and the recognition rate is 83.3% (δ = 10%). The relative error of the
original model ranges from 0.9% to 9.2%, the average relative error is 3.4% and
the recognition rate is 92.3% (δ = 5%). (2) details of the models. There are eight
hidden-layer neurons in the original BP neural network model, the number of
training iterations is not less than 10,000, the excitation function of the hidden
layer is tansig and the learning sample
size is 30. When the SSE of the BP model falls below 0.001, training stops and
the prediction process begins when the test data are input. The hidden-layer
neurons of the RBF network provided in this paper are defined during the
calculation process; after only 13 iterations the SSE drops below 10^-28, and
the learning sample size is only 13 groups. The above analysis shows that even
though the effectiveness of the proposed new model is not as good as that of the
original model, the convergence of the RBF neural network is obviously much
better. The new neural network is therefore the better choice for online
operation.
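The relative-error and recognition-rate criteria of eqns (5) and (6) amount to the following; the sample values in the usage note are made up for illustration:

```python
def relative_errors(outputs, tests):
    """RE = |Y - P_test| / P_test * 100%, eqn (5), for each output pair."""
    return [abs(y - p) / p * 100.0 for y, p in zip(outputs, tests)]

def recognition_rate(outputs, tests, delta=10.0):
    """RR = m/n * 100%, eqn (6): m counts outputs whose relative error does
    not exceed the forecast precision delta (10% in this paper)."""
    res = relative_errors(outputs, tests)
    m = sum(1 for re in res if re <= delta)
    return m / len(res) * 100.0
```

For example, recognition_rate([108.0, 115.0], [100.0, 100.0]) accepts only the first output (RE of 8% versus 15%), giving RR = 50%.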
4.2 Analysis of the results of model prediction
Theoretically, when the diffusion constant of the network is 0.25, the prediction
results will be highly accurate; the actual constant is 0.265, a small deviation
from the ideal value, which verifies the correctness of the theoretical analysis of
this paper. The new neural model can greatly reduce the search range when
selecting the optimal value. From the above comparison of technical indicators,
the forecast precision of the new model is not high enough, for various reasons.
Through analysis of these causes, this paper puts forward several issues that
should be paid attention to in future research.
4.2.1 The lack of sample size
To achieve the desired generalization ability (prediction error less than a given
value ε) requires a large number of samples, which in turn increases the amount
of network calculation, so there is a trade-off between the generalization ability
and the amount of learning computation. Literature [13] provides an empirical
formula, m = d_vc/ε, where d_vc is the VC dimension of the network; it
represents the capacity of the function class and can be approximately evaluated
by the number of independent parameters. According to this formula, the
sample size m should equal 50 (ε = 0.1, d_vc = 5), while only 20 groups were
actually used in this case. The lack of sample size is one of the reasons for the
lower prediction precision. To test this, we rerun the model taking the cluster
radius r as 0.2
so that the sample size increases to 30. The distribution of training and testing
samples are shown in Table 2. The data in Table 3 and Fig. 9 show that the relative
error of the model prediction results ranges from 0.16% to 13.27%, the average
relative error is 4.67% and the recognition rate is 83.3%. Compared with the
initial results above, network performance has been greatly improved. The
calculation results indicate that as the sample size increases, the prediction
ability of the model also improves appreciably. Meanwhile, if the cluster radius
is 0.2, the diffusion constant needs to be taken as 0.371 to ensure the best
prediction results. Compared with the theoretical best value of 0.3, the deviation
increases, which means the cluster radius value r = 0.2 is not as appropriate as
the original value of 0.05. In addition, test results also confirm that as the
sample size increases, the number of hidden-layer neurons increases
synchronously. For the BP neural network, the convergence performance
declines sharply as the sample size increases. Although the operation time of
the RBF network also increases with sample size, its number of iterations (of
order 10^1) is much smaller than that of the BP neural network (of order 10^4),
which proves that the RBF network has an obvious advantage when the sample
size is large.
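The empirical sample-size bound quoted above can be checked directly; this is a trivial sketch of the formula m = d_vc/ε attributed to literature [13]:

```python
def required_sample_size(d_vc, eps):
    """Empirical bound m = d_vc / eps: samples needed for generalization
    error below eps with a network of VC dimension d_vc."""
    return d_vc / eps

# eps = 0.1 and d_vc = 5 recover the paper's figure of m = 50
```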
Figure 9.

Table 3:
SN   Prediction results   RE (%)   Desired output   Actual output   RE (%)
10        103.9            3.95        0.541            0.563         3.84
12        128.8            5.26        0.658            0.694         5.14
13        136.1            0.46        0.729            0.732         0.45
18        142.8            0.17        0.768            0.767         0.16
19         81.2            4.93        0.464            0.443         4.75
27        128.9           13.27        0.784            0.694        12.97
Acknowledgements
This work was funded by the Chongqing Natural Science Foundation (CSTC,
2009BB7175) and the National Water Pollution Control and Management
Science and Technology Major Projects of China (2008ZX07315-003).
References
[1] Cybenko, G., Approximation by superpositions of a sigmoidal function.
Mathematics of Control, Signals and Systems, 2, pp. 303-314, 1989.
[2] Cho, S. B. & Kim, J.H. Combining multiple neural networks by fuzzy
integral for robust classification. IEEE Trans on System, Man and
Cybernetics, 25(2), pp. 380-384, 1995.
[3] Bates, J.M. & Granger C.W.J. The combination of forecasts. Operations
Research Quarterly, 20(1), pp. 319-325, 1969.
[4] Sridhar, D.V., Seagrave, R.C. & Bartlett, E.B., Process modeling using
stacked neural networks. AIChE Journal, 42(9), pp. 2529-2539, 1996.
[5] Shi Han chang, Diao Huifang, Liu Heng et al. Application of treatment
plant operation simulation and forecast software. China Water and
Wastewater, 17(10), pp. 61-63, 2001.
Abstract
This paper deals with surface heat flux estimation on the basis of transient
temperature measurements using an inverse method. The direct problem
considered is transient heat conduction through a plane slab, insulated at one
boundary and exposed to a heat flux on the other side. Numerical experiments
have been conducted in order to simulate real measurements. The inverse
problem consists of estimating the heat flux on the basis of the experimental
temperature response. The objective function, which has to be minimized in
order to estimate the unknown heat flux, is the least-squares function of the
experimental and calculated temperature data. A variant of the genetic algorithm
has been used for minimization of the objective function. Different convergence
histories are presented, compared and discussed. Comparisons between the
estimated heat flux change and the exact solution, and between the calculated
and experimental transient temperature responses, are also presented.
Keywords: genetic algorithm, heat flux, inverse problems, parameter estimation.
1 Introduction
Over the last few decades the inverse approach to parameter estimation has
become widely used in various scientific disciplines: mechanical engineering,
heat and mass transfer, fluid mechanics, process optimization, optimal shape
design, etc. In an inverse method, causes have to be determined on the basis of
known effects, in contrast to the standard, direct method in which causes are
known and effects have to be determined. Application of inverse methods is
especially important in the investigation of processes where the necessary direct
measurements
doi:10.2495/CMEM110471
(1)
3 Genetic algorithm
The genetic algorithm [4-6] is categorized as a random search method: during
the calculations the random number generator is called several thousand times.
However, the method also contains certain rules which systematically lead
toward the solution. Because of its random-search character the method
converges slowly, but at the same time its reproduction and selection rules make
it stable and reliable.
There are certain variations of this method depending on the application. The
basic algorithm, fig. 1, possesses the following characteristics:
Figure 1: Basic genetic algorithm: create a population of members; determine the fitness of each individual; perform reproduction using crossover; perform mutation; select the next generation; repeat for the next generation or display the results.
In this paper a variant of the genetic method is applied for minimization of the
objective function E of N parameters Pi:
E = f(P1, P2, ..., PN).
(2)
That means that values of parameters Pi have to be chosen in such a way that
the objective function E reaches the global minimum.
The applied variant of genetic algorithm can be described by the following
steps.
In this work the second approach is used, which enables faster convergence
when the initial solution is far from the final one.
Selection of parents: The selection of parents may also be performed in different
ways. In this work, the members of one population are sorted from the worst to
the best depending on the values of the function E; the best member of the
population has the smallest value of E. Father and mother are then chosen at
random. In addition, in order to give parents with smaller values of E a higher
probability of being chosen, the random numbers are biased from the smallest to
the largest by a special procedure.
Reproduction: Using the crossover method a new child is created, which
typically shares many of the characteristics of its parents. The new child may be
a simple arithmetic mean of its parents. In this work a weighted mean is used,
with a randomly defined weight:

P_i,child = (1 - R) P_i,father + R P_i,mother.  (6)

The value of the random number R in eqn (6) usually ranges from 0 to 1 in
classical problems, but in this case R ranges from -1 to 2 to provide a wider
range of combinations of the parents' features.
Mutations: The algorithm allows a small chance of mutation, with a probability
defined in advance. The parameters of the new child are then chosen randomly
in the range [P_i,min, P_i,max].
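The reproduction and mutation rules above can be sketched as follows; the mutation probability value is an illustrative assumption, since the paper only states that it is small and defined in advance:

```python
import random

def crossover(father, mother):
    """Weighted-mean crossover, eqn (6), with R drawn from [-1, 2] so the
    child may lie outside the segment between its parents."""
    R = random.uniform(-1.0, 2.0)
    return [(1.0 - R) * f + R * m for f, m in zip(father, mother)]

def mutate(child, p_min, p_max, p_mut=0.05):
    """With probability p_mut per parameter, replace that parameter by a
    random value in its search range [p_min_i, p_max_i]."""
    return [random.uniform(lo, hi) if random.random() < p_mut else c
            for c, lo, hi in zip(child, p_min, p_max)]
```

For parents [0, 0] and [1, 1] every child component equals the same weight R, illustrating how R outside [0, 1] extrapolates beyond the parents.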
Figure 2.

∂²T/∂x² = ∂T/∂τ, 0 < x < 1, τ > 0,  (7)

∂T(0,τ)/∂x = 0, x = 0, τ > 0,  (8)

∂T(1,τ)/∂x = q_p, x = 1, τ > 0,  (9)

T(x,0) = 0, τ = 0, 0 ≤ x ≤ 1.  (10)

T(x,τ) = ∫_0^τ q_p(θ) dθ + 2 Σ_(m=1..∞) cos(μ_m x) cos(μ_m) ∫_0^τ q_p(θ) exp(-μ_m²(τ-θ)) dθ,  (11)

where μ_m = mπ.
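For a constant heat flux q_p the inner time integrals in eqn (11) evaluate in closed form, giving T(x,τ) = q_p[τ + 2 Σ cos(μ_m x) cos(μ_m)(1 - exp(-μ_m²τ))/μ_m²]. A minimal numerical sketch follows; the truncation of the series at a finite number of terms is an implementation choice, not from the paper:

```python
import math

def temperature(x, tau, qp=1.0, terms=2000):
    """Dimensionless temperature from eqn (11) for a constant heat flux qp,
    with insulated face at x = 0 and heated face at x = 1; mu_m = m*pi."""
    s = tau  # closed-form value of the first (integral) term for constant flux
    for m in range(1, terms + 1):
        mu = m * math.pi
        s += 2.0 * math.cos(mu * x) * math.cos(mu) \
             * (1.0 - math.exp(-mu * mu * tau)) / (mu * mu)
    return qp * s

# At the insulated face the long-time response tends to qp*(tau - 1/6),
# since the exponentials die out and sum((-1)^m / (m*pi)^2) = -1/12.
```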
5 Results
5.1 Constant heat flux example
This example considers a constant heat flux (the unknown parameter) applied at
the free surface of the plane slab. The dimensionless value of the heat flux is
qp = 1. A simulated temperature sensor is placed at the insulated surface,
xm = 0.
For this case, the temperature response Y(0,τi), i = 1,…,imax obtained from
eqn (11) is shown in fig. 3. In the time interval from 0 to 2, imax (= 501)
simulated experimental values Yi are obtained. In order to analyze the influence
of measurement errors, a numerical experiment is performed in which randomly
distributed error with σ = 0.025Tmax = 0.05 is added to the exact solution. This
is a significant error value for this type of problem.
The problem is to estimate the value of qp on the basis of the measured
temperatures by minimizing the objective function defined as the sum of the
squared errors between the model-predicted values and the corresponding
experimental data values, RMS. The problem has been solved using the
previously described genetic algorithm. The solution has been sought in the
interval -100 ≤ qp ≤ 100, with initial value qp0 = -100.
In fig. 4, various convergence histories are presented. Figs 4(a) to 4(h) show
different convergence histories until the value RMS = 0.001 is achieved, which
means that the numerical value of the heat flux is estimated with an error of less
than 0.1%. These figures show the change of RMS with the number of objective
function calls. In this example 100 calls take 7.3 s of CPU time on a Pentium 4
(3.06 GHz). The figures also show the influence of the population size and
breed number.
Figure 3: Simulated temperature response Y [-] versus t [-]: exact solution and solution with added error.
Figure 4: Convergence histories, RMS versus number of objective function calls NRMS: (a) Npop = 20, Nbred = 10; (b) Npop = 50, Nbred = 10; (c) Npop = 30, Nbred = 10; (d) Npop = 15, Nbred = 10; (e) Npop = 10, Nbred = 10; (f) Npop = 10, Nbred = 5; (g) Npop = 20, Nbred = 15; (h) Npop = 50, Nbred = 20.
Figure 5: Convergence history, RMS versus NRMS, for Npop = 20, Nbred = 10.
In this example the heat flux changes in time according to the following
exponential relation:

q_p(τ) = a e^(bτ),  (12)

where a and b are unknown parameters that need to be estimated.
As in the previous case, the transient heat flux is determined on the basis of
the response of the temperature sensor at the location xm = 0, fig. 2. In the same
way, the temperature response is obtained from a simulated experiment, i.e.
from eqn (11) with the transient heat flux function of eqn (12) inserted. 101
temperature values
have been calculated in the time interval from 0 to 2. In order to simulate a real
experiment, randomly distributed error with σ = 0.014Tmax = 0.025 is added to
the exact solution values, fig. 6.

Figure 6: Simulated temperature response Y [-] versus t [-]: exact solution and solution with added error.
Fig. 7 shows the convergence history. The solution is obtained after about 4000
objective function calls. In this case the unknown parameters a and b can be
calculated with errors of less than 1% and 3% respectively. In this example 100
calls take 1.9 s of CPU time on a Pentium 4 (3.06 GHz).
Model estimated and exact values of the heat flux are compared in fig. 8.
Figure 7: Convergence history, RMS versus NRMS, for Npop = 100, Nbred = 50.
Figure 8: Estimated and exact heat flux qp [-] versus t [-].
Figure 9: Calculated and exact temperature response T [-] versus t [-].
6 Conclusion
In this paper the application of the genetic algorithm to heat flux estimation on
the basis of the transient temperature response is presented and analyzed. The
case of constant heat flux (one unknown parameter) and the case of
exponentially varying heat flux (two parameters) are analyzed. The dependence
of the convergence history on different combinations of population size and
breed number is presented. It can be concluded that the genetic algorithm may
be successfully used in both cases. The constant heat flux can be estimated with
high accuracy even if the measurement is made with a relatively significant
error. In the case of transient heat flux, the estimation accuracy depends on the
measurement error.
References
[1] Kanevce, G.H., Kanevce, L.P., Mitrevski, V.B., Dulikravich, G.S. &
Orlande, H.R.B., Inverse approaches to drying of thin bodies with
significant shrinkage effects. Journal of Heat Transfer, Transactions of the
ASME, 129(3), pp. 379-386, 2007.
[2] Beck, J.V. & Arnold, K.J., Parameter Estimation in Engineering and
Science, John Wiley & Sons, Inc.: New York, p. 370, 1977.
[3] Ozisik, M.N. & Orlande, H.R.B., Inverse Heat Transfer: Fundamentals and
Applications, Taylor and Francis: New York, 2000.
[4] Woodbury, K.A., Application of genetic algorithms and neural networks to
the solution of inverse heat conduction problems. 4th International
Conference on Inverse Problems in Engineering, ed. H.R.B. Orlande,
e-papers: Rio de Janeiro, pp. 73-88, 2002.
[5] Mitchell, M., An Introduction to Genetic Algorithms, MIT Press, pp. 2-11,
1999.
[6] Marczyk, A., Genetic Algorithms and Evolutionary Computation: What is a
genetic algorithm, 2004. http://www.talkorigins.org/faqs/genalg/genalg.html
Abstract
The microfabrication, characterization and analytical application of a new
thin-film organic-membrane-based lead-selective micro-electrode have been
elaborated. Prior to the fabrication of the assembly, the gold thin-film substrate
was electrochemically treated using a new technique. The developed
micro-electrode, based on tert-butylcalix[4]arene-tetrakis(N,N-dimethylthioacetamide)
as the electroactive sensing material, carboxylated PVC as the
supporting matrix, 2-nitrophenyl octyl ether as the solvent mediator and
potassium tetrakis(4-chlorophenyl) borate as the lipophilic additive, provides a
nearly Nernstian response (slope 28±0.5 mV/concentration decade) covering the
concentration range 1×10^-6 to 1×10^-2 mole L^-1 of Pb(II) ions with
reasonable selectivity over some tested cations. The merits offered by the new
microelectrode include simple fabrication and low cost, as well as automation
and integration feasibility. Moreover, the suggested microelectrode has been
successfully applied for the determination of lead ions in some aqueous
samples. These samples were also analyzed using inductively coupled plasma
atomic emission spectroscopy (ICP-AES) for comparison. The proposed
electrode offers good accuracy (average recovery 95.5%), high precision
(RSD <3%), a fast response time (<30 s) and a long life span (>4 months).
Keywords: all-solid-state microelectrode, thin-film, substrate surface treatment,
organic membrane, lead determination.
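As a back-of-envelope check (not part of the original paper), the theoretical Nernstian slope for a divalent cation such as Pb2+ is 2.303RT/(zF), roughly 29 mV per concentration decade near room temperature, so the reported slope is indeed close to Nernstian:

```python
import math

R = 8.314       # gas constant, J/(mol K)
F = 96485.0     # Faraday constant, C/mol

def nernst_slope_mV(T_kelvin, z):
    """Theoretical Nernstian slope 2.303*R*T/(z*F) in mV per decade."""
    return 1000.0 * math.log(10.0) * R * T_kelvin / (z * F)

# For Pb(II) (z = 2) at 20 C (293.15 K) this gives about 29.1 mV/decade
```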
WIT Transactions on Modelling and Simulation, Vol 51, 2011 WIT Press
www.witpress.com, ISSN 1743-355X (on-line)
doi:10.2495/CMEM110481
1 Introduction
Ion-selective electrodes (ISEs) and potentiometric sensors are the most widely
used sensor types for the measurement of toxic heavy metal ions [1, 2]. The
development of all-solid-state micro-sensor devices originating from
potentiometric sensors has accelerated during the last few years, and this is
likely to continue. The realization of such devices seems to be accelerating as
micro-scale construction makes it possible to apply principles that would not
work in macro-scale analogues. In addition, accurate and reliable analysis using
miniaturized chemical sensors is a very useful analytical technique because it
avoids laborious and time-consuming preliminary sample treatment. Moreover,
micro-scale analyses of chemical species have many advantages over
conventional methodologies, including high spatial resolution, rapid response,
and minimal disturbance of the analyzed substrate [3]. The use of different
organic and inorganic sensing materials with versatile properties in the
fabrication of all-solid-state micro-electrodes makes them suitable for the
detection of many chemical species in solution at concentrations in the ppm
range [4-10]. In the realization of such devices, chalcogenide glasses have
proven to be very promising ion-selective membranes, especially for the
detection of heavy metals in solution (Pb2+, Cd2+, Fe2+, Cu2+, Ag+, ...)
[4-6]. However, organic membrane-sensitive layers prepared on transducers,
fabricated using various more or less complicated and expensive technologies,
have also been reported for measurements of potassium [7] and lead [8]. In
addition, nanoparticle labels (i.e., gold nanoparticles, silver tags, and
semiconductor nanocrystals) have been used in the fabrication of potentiometric
micro-sensors for the detection of DNA hybridization [9] and carbon dioxide
[10]. The miniaturization of solid-electrolyte gas sensors to thin-film
micro-devices has been discussed in the literature [11]. The microfabrication of
chemical sensors and biosensors [12] as well as ISFET-based micro-sensors
[13] for environmental monitoring has been reviewed. A Pt-Ir wire-based ISE
has been suggested for monitoring the local spatial distribution of magnesium,
pH and ionic currents [14]. Moreover, the realization of micro-sensors based on
a lab-on-a-chip has also been reviewed [15].
On the other hand, the development of organic-membrane-based micro-sensors
has recently been introduced to overcome the low selectivity of
chalcogenide-glass and inorganic-based thin-film micro-sensors [16-19].
However, an additional problem arises: the adhesion of the organic membrane
to the thin-film substrate is usually poor, which produces early degradation of
those micro-sensors. To solve this problem, we have recently developed a new
approach (the Arida approach) for the micro-fabrication of organic-based
sensors [17-19]. In this technique, the organic membrane-based sensitive layer
is nebulized in combination with a substrate surface treatment. Using these two
steps in combination has distinctly improved the adhesion to the wafer surface.
It decreases the leaching out of the ionophore and plasticizer, stabilizes the
organic membrane, and consequently increases the micro-electrode's life-time.
WIT Transactions on Modelling and Simulation, Vol 51, 2011 WIT Press
www.witpress.com, ISSN 1743-355X (on-line)
2 Experimental
2.1 Chemicals and apparatus
The solvent mediator, 2-nitrophenyl octyl ether, and the lipophilic additive,
potassium tetrakis(4-chlorophenyl)borate, were purchased from Sigma-Aldrich
(CH-9471 Buchs, Switzerland). The membrane support matrix, high-molecular-weight
(220,000) carboxylated poly(vinyl chloride), and the membrane solvent,
THF (tetrahydrofuran), were purchased from the Riedel-de Haën Chemical
Company (Germany). The lead ionophore used was tert-butylcalix[4]arene-tetrakis(N,N-dimethylthioacetamide) (15343), purchased from the Sigma-Aldrich
chemical company. All standard solutions of cations were prepared from
analytical-reagent-grade chemicals in deionized water and then diluted to
the desired concentration. Nitrate or chloride salts of the metals used were
purchased from Riedel-de Haën. High-purity standards (2% HNO3, Pb 1000 mg
kg⁻¹) were used for ICP-AES validation measurements after appropriate dilution
with deionized water. Deionized water with conductivity <0.2 µS/cm, used in the
preparation and dilution of the reagents, was produced using a Millipore
deionizer (Millipore, France, Elix 10).
The potentiometric measurements were performed at 20 °C with a HANNA
microprocessor pH/ion analyzer (Model pH 211), using a thin-film lead
micro-electrode in conjunction with a double-junction reference electrode immersed in
stirred test solutions. The response characteristics and the selectivity
coefficients K^pot_Pb,M (obtained by the separate solution method) of the thin-film lead micro-
Figure 1:
Fig. 2. The surfaces of all films do not present any observable defects. While the
untreated film (a) appears smooth and lustrous, with poor adhesive properties, the
treated substrate surface (b) becomes more mountain-like, with high roughness
and consequently good adhesion to the organic membrane. The organic
membrane film (c) is textured, homogeneous and uniformly distributed. This
significantly enhances the stability and consequently the life span of the
suggested micro-electrode.
Figure 2:
Potentiometric performance properties of the thin-film lead micro-electrode and the bulk macro-electrode*.

Parameter                     All-solid-state thin-film micro-electrode
Slope (mV/decade)             28 ± 0.5
Response time t95% (s)        <30
Linear range (mol L⁻¹)        1×10⁻² – 1×10⁻⁶
Detection limit (mol L⁻¹)     5×10⁻⁷
pH range                      2.2 – 6.3
Life span (months)            >4

* Lead bulk macro-electrode [Ref. 20].
The response time and life span of an electrode are important features for
analytical applications. Hence, the potentials of the suggested micro-electrode
corresponding to four decade additions of Pb2+, starting from deionized water up
to 10⁻² mol L⁻¹, were measured. The results obtained are presented in Fig. 4.
The response time (t95%) of the suggested micro-electrode over the whole linear
concentration range was about 30 s. The stability and lifetime of the lead
micro-electrode were also investigated by repeated calibrations every two or three
days for more than four months. During this period, the response time, slope and
linear range were reproducible. Hence, the micro-electrode can be used for at least
four months with practically unimpaired performance. Moreover, the properties of
the suggested micro-electrode are similar to, or even somewhat better than (life
span), the results observed for the lead macro-electrode based on the same
ionophore [20]. The recently developed nebulization technique for the organic
membrane-sensitive layer, in combination with the electrochemical treatment of
the substrate surface, significantly improves the adhesion of the
membrane to the substrate surface, decreases the leaching out of the ionophore
and plasticizer, stabilizes the organic membrane, and consequently increases the
micro-electrode's lifetime (>4 months).
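The near-Nernstian slope reported above can be estimated from calibration data by a straight-line fit of electrode potential against the logarithm of activity. A minimal sketch with made-up illustrative data, not the paper's actual measurements:

```python
import math

# Hypothetical calibration points for illustration only: potentials E (mV)
# at Pb2+ activities spanning the electrode's linear range.
activities = [1e-6, 1e-5, 1e-4, 1e-3, 1e-2]
potentials = [-120.0, -91.5, -63.0, -34.5, -6.0]

# Least-squares slope of E vs log10(a); a Nernstian Pb2+ (divalent) electrode
# would give about RT ln(10) / 2F ≈ 29.6 mV per decade at 25 °C.
x = [math.log10(a) for a in activities]
n = len(x)
mx = sum(x) / n
my = sum(potentials) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, potentials)) \
        / sum((xi - mx) ** 2 for xi in x)
print(f"slope = {slope:.1f} mV/decade")  # slope = 28.5 mV/decade
```

In practice the detection limit is then read off as the activity where the measured curve departs from this fitted line.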
Figure 4:
Figure 5:
Figure 6: Selectivity coefficients (log K_Pb,B, ranging from 0 down to -8) of the
thin-film micro-electrode towards interfering ions (Hg2+, Na+, Li+, K+, NH4+,
Mg2+, Ca2+, Sr2+, Ba2+, Al3+, Cr3+, Fe3+, Cu2+, Cd2+, Co2+, Ni2+, Zn2+).
Thin-film micro-electrode
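The selectivity coefficients plotted above come from the separate solution method (SSM), in which the primary and interfering ions are measured at the same activity. A sketch of the standard SSM relation; the numeric example is illustrative, not a value from the paper:

```python
import math

def log_k_pot(e_a_mv, e_b_mv, z_a, z_b, activity, t_kelvin=293.15):
    """Separate solution method: log10 K^pot_{A,B} from the potentials
    E_A (primary ion) and E_B (interfering ion), both in mV, measured at
    the same activity. Textbook SSM formula, not quoted from the paper."""
    # Nernstian slope for the primary ion, in mV per decade
    s = 2.303 * 8.314 * t_kelvin / (z_a * 96485.0) * 1000.0
    return (e_b_mv - e_a_mv) / s + (1.0 - z_a / z_b) * math.log10(activity)

# Example: a divalent interferent reading 150 mV below Pb2+ at a = 1e-3 mol/L
# implies log K ≈ -150 / 29.1 ≈ -5.2, i.e. a strong preference for Pb2+.
print(round(log_k_pot(-50.0, -200.0, 2, 2, 1e-3), 1))
```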
Sample   Added     Found*    Recovery %
1        0.21      0.22      104.7
2        5.23      4.88      93.3
3        25.67     23.78     92.6
4        93.01     86.11     92.5
5        210.64    199.57    94.7
Average recovery             95.5

* The data is a mean of n= measurements.
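The recoveries in the table can be reproduced directly from the added/found pairs; last-digit differences against the printed values are rounding artifacts:

```python
# Recovery check mirroring the table above: 'added' vs 'found' lead
# concentrations (values taken from the table; units as in the original).
added = [0.21, 5.23, 25.67, 93.01, 210.64]
found = [0.22, 4.88, 23.78, 86.11, 199.57]

recoveries = [100.0 * f / a for f, a in zip(found, added)]
average = sum(recoveries) / len(recoveries)
for r in recoveries:
    print(f"{r:.1f} %")
print(f"average recovery = {average:.1f} %")
```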
4 Conclusion
A new thin-film organic membrane-based lead micro-electrode incorporating
tert-butylcalix[4]arene-tetrakis(N,N-dimethylthioacetamide) as the electroactive
material in a PVC matrix has been realized. The micro-electrode responds to lead
ions with nearly Nernstian behavior and presents good selectivity and a low
detection limit. The micro-electrode shows a fast response time and long-term
stability. It has been successfully applied to the determination of lead in
aqueous samples. The micro-electrode showed good accuracy and reproducibility
in comparison with the independent ICP-AES method.
Acknowledgement
The authors wish to thank Taif University for financial support (project number
766/431/1, 2010).
References
[1] Bakker, E. & Pretsch, E., Potentiometric sensors for trace-level analysis.
Trends in Analytical Chemistry, 24(3), pp. 199-207, 2005.
[2] Ru-Qin, Y., Zong-Rang, Z., Guo-Li, S., Potentiometric sensors: aspects of
the recent development. Sensors and Actuators B: Chemical, 65(1-3),
pp. 150-153, 2000.
[3] Wróblewski, W., Dybko, A., Malinowska, E., Brzózka, Z., Towards
advanced chemical microsensors - an overview. Talanta, 63, pp. 33-39,
2004.
[4] Taillades, G., Valls, O., Bratov, A., Dominguez, C., Pradel, A., Ribes,
M., ISE and ISFET microsensors based on a sensitive chalcogenide glass for
copper ion detection in solution. Sensors and Actuators B, 59, pp. 123-127,
1999.
[5] Guessous, A., Sarradin, J., Papet, P., Elkacemi, K., Belcadi, S., Pradel, A.,
Ribes, M., Chemical microsensors based on chalcogenide glasses for the
detection of cadmium ions in solution. Sensors and Actuators B, 53,
pp. 13-18, 1998.
[6] Schöning, M.J. & Kloock, J.P., About 20 Years of Silicon-Based Thin
Film Sensors with Chalcogenide Glass Materials for Heavy Metal Analysis:
Technological Aspects of Fabrication and Miniaturization - A Review.
Electroanalysis, 19, pp. 2029-2038, 2007.
[7] Zachara, J. & Wróblewski, W., Design and performance of some
microflow-cell potentiometric transducers. Analyst, 128, pp. 532-536,
2003.
[8] Isildak, I., All-Solid-State Contact Lead(II) Ion-Selective PVC Membrane
Electrode Using Dimethylene Bis(4-methylpiperidinedithiocarbamate)
Neutral Ionophore. Turk J Chem, 24, pp. 389-394, 2000.
Abstract
Sudden methane outbursts, which often cause serious accidents with numerous
fatalities, can occur in underground coalbed roadways and when driving
tunnels for highways or railways through carboniferous materials.
Most of the factors which take part in the generation of sudden outbursts are
related to methane conditions in the coalbed, and specifically to gas pressure
and coal permeability.
When excavating roadways or tunnels crossing coalbeds, it is convenient to
have easy-to-use equipment which allows us to know at any moment the
measured pressure and its variation with factors such as roadway and tunnel
driving velocity, geological faults which lead to fractured formations, and
natural or induced overstresses.
The designed gas-pressure-measurement-tube set goes into an 8 m deep,
45 mm diameter borehole drilled in front of the working face. The
gas-pressure-measurement-tube set is connected to a flexible rod with a manometer
at its end.
The measurements obtained are used both in empirical formulae and in
computational fluid mechanics; this allows us to predict the probability of the
occurrence of sudden outbursts.
Keywords: gas pressure, methane, outburst, gas-pressure-measurement tube,
underground mine.
doi:10.2495/CMEM110491
1 Introduction
Outbursts in underground coal mines have been and will remain a highly topical
question as a consequence of their repercussions on miner safety.
The statistics of fatal accidents are unfortunately widespread and extensive in
mines worldwide [1-3].
Alterations originated by mining works generate a reordering of the
tensional state of the coalbed until it achieves a new equilibrium, and different
phenomena can become evident during these processes. This affects the stability of
the coal mine workings created and consequently the safety of the workers.
Under the denomination of dynamic phenomena, a set of sudden manifestations
due to the existing energetic conditions is included. As a consequence, a process
of projection of coal and rock, which can be accompanied by the gas contained
in the coal, is produced. These energetic conditions are generated by the
interaction of the following three factors: the tensional state of the coalbed, the
presence of gas in the coal and, in some cases, gravity.
Figure 1 shows the triangular graphic which relates the three factors giving way
to the different types of gas emission phenomena. Moreover, it can clearly be
seen which properties have most influence on these factors.
Figure 1:
The outburst of coal and methane consists of a sudden and violent irruption
of coal and methane into the vacuum created by an explosion [4, 5]. This phenomenon
corresponds to a modification of the most favorable conditions for the
disintegration of the coal close to the face and favours the intense and sudden
desorption of methane ([6] and the Coal Group of the Energy Division of the United
Nations Economic Commission for Europe (UNECE) [7]).
In the Complementary Technical Instruction developed by the Principado de
Asturias (legislation applicable to the location of the mine, S.A. Hullera Vasco
Leonesa, where the experimental measurements were carried out), outbursts of
coal and gas are defined as one of the most complex gas dynamic phenomena,
with the following characteristics:
-The sudden destruction of the face, with a common cavity in the coalbed.
-The pneumatic transport of broken coal.
-A gas outburst higher than the normal content of the coal, which is
progressively reduced.
In the underground coal mine of S.A. Hullera Vasco Leonesa, the outbursts can
be classified as medium type, since 50-400 t of coal are affected. In this
study, 300 t of coal were ejected and about 1000 m³ of gas were emitted.
This classification orders the outbursts in five groups, from 0.5-10 t to >1000 t.
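The five-group classification by ejected tonnage can be sketched as a simple lookup. Only the extreme bins (0.5-10 t, >1000 t) and the "medium" 50-400 t band are stated in the text; the remaining boundaries below are assumptions for illustration:

```python
# Illustrative five-group outburst classification by ejected coal tonnage.
# The 10-50 t and 400-1000 t boundaries are assumed, not given in the text.
BINS = [
    (0.5, 10.0, "very small"),
    (10.0, 50.0, "small"),
    (50.0, 400.0, "medium"),
    (400.0, 1000.0, "large"),
    (1000.0, float("inf"), "very large"),
]

def classify_outburst(tonnes: float) -> str:
    for low, high, label in BINS:
        if low <= tonnes < high:
            return label
    return "below classification threshold"

# The event described in the text (300 t ejected) falls in the medium group.
print(classify_outburst(300.0))  # medium
```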
Most of the factors which take part in the generation of sudden outbursts are
related to methane conditions in the coalbed, especially to gas pressure and coal
permeability.
In this paper, the need for simple equipment which allows us
to know at any moment the measured pressure and its variation with factors such as
roadway and tunnel driving velocity, geological faults which lead to fractured
formations, and natural or induced overstresses is shown.
For the experimental measurements, a gas-pressure-measurement tube set which
goes into an 8 m deep, 45 mm diameter borehole drilled in front of the
working face was designed. This gas-pressure-measurement tube set is connected
to a flexible rod with a manometer at its end.
A continuous record of the gas pressure measurements in the coalbed allows
us to take preventive measures against outbursts. This reduces the occurrence of
outbursts and the risk of serious accidents for the miners.
2 Risk indexes
There are numerous indexes which quantify the different factors that determine
the occurrence of outbursts; some of them are the following: methane concentration
in the coalbed [8], desorption velocity of methane [9], methane concentration in
ventilation, V30 and the German regulation [10], the drill-cuttings Jahns test [11],
measurements of gas pressure in the roadway and coalbed [2], the Polish ST
index [12], the Russian Ps index from the WostNII Institute in Kuznetsk, the Psi
index from the Karaganda coalfield in Kazakhstan, and the k and D indexes from
China [13].
The nature of these indexes is variable, from simple ones (some of them
applied in other disciplines) to those which combine mining experience with
practical usefulness.
In both simple and combined indexes, the gas pressure is of greatest
importance; therefore, it is the most important factor studied in this research.
There are two types of gas pressure measurements: measurements in
the coalbed and measurements in the heading face. The first measurements,
which correspond to the coalbed research phase, have been carried out by
means of boreholes of greater length, where a gas-pressure-measurement tube set
which registers the evolution of the gas pressure over long periods of time has been
placed.
Figure 2: Diagram of the gas-pressure-measurement tube set designed with SolidWorks.
Figure 3:
Figure 4:
The horizontal distance that separates the heading face from the face of the
shortwall-sublevel caving is denominated D. This distance changes as the mine
is exploited and can be divided into 3 groups:
-D is positive when the two faces are approaching each other.
-D is 0 when the two faces coincide.
-D is negative when the faces overlap.
Figure 5 illustrates the evolution over time of the gas pressure (kPa)
in 3 boreholes in the roof for five distances D = 33, 23, 2, -11.5 and -40 m.
It can be seen how, for four hours after inserting the measurement tube
(D = 30 m), there is an increase in the gas pressure reaching maximum values of
900 kPa.
Figure 5:
When D decreases, the maximum gas pressure also decreases, reaching values
below 700 kPa. Later, a faster decrease in this pressure occurs over time until
it becomes 0.
When the distance D is negative, the maximum values of gas pressure are
lower than 500 kPa, and when the absolute value of D is higher, the maximum
pressure is 300 kPa.
The evolution of the gas pressure shows that there is an outburst-prone zone,
as a consequence of a gas pressure increase, for distances D between 25 and
50 m.
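The trend above suggests a simple risk flag: high measured pressure while the face separation D lies in the 25-50 m band. The thresholds below are illustrative assumptions, not a validated criterion from the paper:

```python
# Minimal outburst-risk flag following the reported trend: the prone zone
# appears for D between 25 and 50 m while gas pressure is still high.
# PRESSURE_ALERT is an assumed alert level, not a value from the paper.
PRONE_D_RANGE = (25.0, 50.0)   # m
PRESSURE_ALERT = 700.0         # kPa

def outburst_prone(d_metres: float, gas_pressure_kpa: float) -> bool:
    in_zone = PRONE_D_RANGE[0] <= d_metres <= PRONE_D_RANGE[1]
    return in_zone and gas_pressure_kpa >= PRESSURE_ALERT

print(outburst_prone(33.0, 900.0))   # True: matches the 900 kPa peak at D = 33 m
print(outburst_prone(-11.5, 450.0))  # False: negative D and lower pressure
```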
For lower values of D, as a result of the decrease in the Protodiakonov index
of the heading face (average values of 1.31, maximum values of 2.13 and
4 Conclusions
The measurement of the evolution of gas pressure through boreholes in the heading
face allows us to predict, by means of simple and operative tools, the trend of the
generation of outbursts in the coalbed.
It is convenient for the gas-pressure-measurement tube to be motorized in
order to have a remote and continuous register of the gas pressure.
When higher elevations of gas pressure, indicated by the gas-pressure-measurement
tube, are generated, the coalbed should be degassed by means of
boreholes in the heading face with an injection of pressurized water.
If the pressure increase is excessive, a halt of the works could be
necessary.
References
[1] Belin, J. & Vandeloise, R., Résultats des recherches effectuées en Belgique et
en France sur les dégagements instantanés dans les mines de charbon.
Annales des Mines de Belgique, Nº 2, Février, 1969.
[2] Lama, R.D. & Bodziony, J., Management of outburst in underground coal
mines. International Journal of Coal Geology, 35, pp. 83-115, 1998.
[3] Beamish, B.B. & Crosdale, P.J., Instantaneous outbursts in
underground coal mines: an overview and association with coal type.
International Journal of Coal Geology, 35, pp. 27-55, 1998.
[4] Peng, S.S., Coal mine ground control. John Wiley & Sons, New York, 1986.
[5] Hargraves, A.J., Update on instantaneous outbursts of coal and gas.
Proceedings of the Australian Institute of Mining and Metallurgy, 298, pp. 3-17,
1993.
[6] Belin, J., Mesures de prévention des dégagements instantanés de méthane et
de charbon ou de roches. Organe permanent pour la sécurité et la
salubrité dans les mines de houille, Nº 6039/81, Luxembourg, 1983.
[7] ECE, Coal Group of the Energy Division of the United Nations Economic
Commission for Europe. Luxembourg, 1995.
[8] ET0307-2-92. Especificación técnica: Método para determinar la
concentración de gas en la capa. Ministerio de Industria del Estado Español.
[9] ET0307-2-92. Especificación técnica: Método para determinar la velocidad
de desorción del grisú. Ministerio de Industria del Estado Español.
[10] Renania, Circulares 23.10.87, 6.7.82 y 13.8.81 de la Dirección Superior de
Minas del Estado de Renania sobre DI. Dortmund, Verlag H. Bellmann,
1987.
[11] Brauner, G., Rockbursts in coal mines and their prevention. Balkema,
Rotterdam, 1994.
[12] Tarnowski, J., Calculations concerning coal and gas outburst. International
Symposium-cum-Workshop on Management and Control of High Gas Emission
and Outbursts, ed. Lama, Wollongong, pp. 49-61, 1995.
[13] Lama, R.D., Bodziony, J., Sudden outburst of gas and coal in underground
coal mines. Final report, project no. C3034, Australian Coal Association
Research Program, 1996.
[14] Toraño, J., Torno, S., Menéndez, M., Gent, M.R., Velasco, J., Models of
methane behaviour in auxiliary ventilation of underground coal mining.
International Journal of Coal Geology, 79, pp. 157-166, 2009.
Abstract
Self-assembling biomolecules are widespread in nature and attractive for
technical purposes due to their size and highly ordered structures in the nanometer
range. Surface-layer (S-layer) proteins are one such class of self-assembling
molecules, and their chemical and structural properties make them quite attractive
for nanotechnical purposes. They possess a high content of functional groups, so
sequential coupling of functional devices is possible, and their ability to
self-assemble in aqueous solutions or on surfaces, e.g. SiO2 wafers, qualifies them for
nanotechnical applications. In this work, first experiments were done in order to
construct a sensory device containing S-layer proteins as a matrix to bind optical
elements and analytes for the detection of specific substances. The S-layer proteins
were isolated from the Lysinibacillus sphaericus strain JG-A12, recovered from a
uranium mining waste pile in Germany. As optical elements, fluorescent dyes or
quantum dots can be used. Three different fluorescent dyes which are able to
perform fluorescence resonance energy transfer (FRET) were used and
coupled to the S-layer proteins. As receptor molecules, aptamers were chosen due
to their high specificity and stability towards many chemicals. Aptamers are short
oligonucleotides which are able to bind specific molecules via their
three-dimensional structure. In this work, a model aptamer was used that is specific
towards human thrombin. The aim was to construct a sensor system which is
able to detect specific substances at very low concentrations in aqueous
solutions.
Keywords: S-layer proteins, fluorescent dyes, aptamers.
doi:10.2495/CMEM110501
1 Introduction
S-layer proteins represent the outermost cell envelope component of numerous
bacteria and almost all archaea. These structural proteins fulfil many functions,
such as protection, shape maintenance and molecular sieving. They are able to
self-assemble in aqueous solutions and on surfaces, forming regular sheets that
exhibit a paracrystalline nanostructure [1]. Generally, S-layer proteins provide a
huge number of oriented functional groups that can be used for coupling
molecules to the surface, thus introducing a high level of functionality in a small
device. The possibility to create highly regular arrays of nanostructured,
multifunctionalized surfaces makes S-layer proteins highly attractive for the
construction of sensory layers.
Here we present a new concept of biosensor based on the application of
S-layers. This biosensor will be able to detect various substances in a very low
concentration range, e.g. pharmaceuticals in clarified waste water. Figure 4
shows the functionality of the biosensor with the following compounds:
(1) S-layers that are used for the nano-structuring and functionalization of
surfaces such as SiO2 wafers or glass.
(2) Aptamers that are coupled to S-layers and work as receptors. Aptamers are
oligonucleotides that specifically bind chemical compounds via their
three-dimensional structure. In the present study, aptamers that specifically bind
thrombin were used as a model.
(3) Two different fluorescence dyes with overlapping excitation and emission
spectra. A unit cell of the S-layers has a size of 12.8 nm and comprises four
protein monomers. Therefore, coupled fluorescence dyes are in close proximity,
thus favouring fluorescence resonance energy transfer (FRET). FRET is a
radiation-free energy transfer from one fluorescence dye (donor) to another
fluorescence dye (acceptor). The energy transfer causes a decrease in donor
fluorescence and an increase in acceptor fluorescence. Such a transfer system is
very sensitive to environmental changes and can easily be disturbed, e.g. by
the binding of analytes to the aptamer.
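The distance sensitivity that makes FRET attractive here follows from the standard Förster relation E = 1 / (1 + (r/R0)^6); a short sketch, with an assumed Förster radius R0 = 5 nm (typical for common dye pairs, not a value from the paper):

```python
# FRET efficiency vs donor-acceptor distance r: E = 1 / (1 + (r/R0)^6).
# The sixth-power dependence explains why small structural changes, such as
# an analyte binding to a nearby aptamer, strongly perturb the signal.
def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Dyes held close together by the 12.8 nm S-layer unit cell transfer
# efficiently; doubling the distance almost switches the transfer off.
print(f"{fret_efficiency(3.0):.2f}")   # 0.96
print(f"{fret_efficiency(6.0):.2f}")   # 0.25
```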
Figure 1:
The S-layer proteins of L. sphaericus JG-A12 that were used for the present
study contain 10 mol% NH2 and 12 mol% COOH groups, consequently
providing many residues for functionalization. These groups of the S-layers can
easily be coupled with fluorescent dyes by using crosslinking reagents like
1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC). Additionally, hydroxyl
groups of the protein can be coupled with aptamers. In the present study, the
coupling of the three different fluorescent dyes Hilyte488, Hilyte555 and
TAMRA and of the anti-thrombin aptamer was performed and the results analyzed.
Figure 2:
Figure 3:
hinders a potential FRET detection, due to the fact that a donor excitation
simultaneously excites the acceptor. Therefore, further experiments will be done
with Hilyte488 and TAMRA as a potential FRET pair.
3.2 Binding anti-thrombin-aptamers to S-layer proteins of L. sphaericus
JG-A12
As presented in figures 4 and 5, anti-thrombin aptamers were successfully
coupled to S-layer proteins of L. sphaericus JG-A12 via PMPI. The modification
degree was determined by measuring the absorption maxima of uncoupled
anti-thrombin aptamer that remained in the supernatant after crosslinking. Whereas
light microscopic images show that all S-layer polymers are modified with
Figure 4: Light microscopic image of S-layer proteins modified with
anti-thrombin aptamer, in phase contrast mode and in fluorescence mode
using a filter between 540 and 550 nm.
Figure 5:
Figure 5:
4 Outlook
For the construction of a biosensor composed of S-layers, fluorescence
dyes and aptamers, important steps are the stable coupling of the functional
molecules to the S-layer proteins without losing the functionality of those
molecules or the structure of the S-layers. The presented results demonstrate
that such a functionalization is possible while keeping the structure of the S-layer
polymers. In further steps, affinity studies of the coupled anti-thrombin aptamers
Acknowledgement
This study was kindly supported by the German Federal Ministry of Education
and Research (BMBF), grant 01RB0805A.
References
[1] Sára, M. and Sleytr, U.B., S-layer proteins. Journal of Bacteriology,
182(4), pp. 859-868, 2000.
[2] Raff, J. et al., Chem. Mater., 15(1), pp. 240-244, 2003.
[3] Lowry, O.H., Rosebrough, N.J., Farr, A.L., Randall, R.J., Protein
measurement with the Folin phenol reagent. Journal of Biological
Chemistry, 193(1), pp. 265-275, 1951.
[4] Bock, L.C., Griffin, L.C., Latham, J.A., Vermaas, E.H., Toole, J.J.,
Selection of single-stranded-DNA molecules that bind and inhibit
human thrombin. Nature, 355(6360), pp. 564-566, 1992.
Abstract
Increasing the precision of timetable planning is a key success factor for all
infrastructure managers, since it allows delay propagation to be minimized
without reducing usable capacity. Since most running time calculation models
are based on standard, deterministic parameters, an imprecision is implicitly
included, which has to be compensated by running time supplements.
At the same time, GPS or even more precise trackings are continuously stored
in the event recorders of most European trains. Unfortunately, this large amount
of data is normally stored but not used, except for failure and maintenance
management.
To consider real running time variability in running time calculation, an
approach has been developed which allows a performance factor to be calibrated
for each motion phase.
Given the standard motion equation of a train, and a mesoscopic model of the
line, the tool uses a simulated annealing optimisation algorithm to find the best
regression between calculated and measured instantaneous speed. To increase
precision, the motion is divided into four phases: acceleration, braking at stops,
braking for speed reductions/signals, and cruising. By performing the procedure
over a number of train runs, a distribution of each performance parameter is
obtained. Once the infrastructure model is defined and the trackings are
imported, the procedure is completely automated.
The approach can be used both in stochastic simulation models and as a basis
for advanced timetable planning tools, where stochastic instead of deterministic
running times are used. The tool has been tested in the north-eastern part of Italy
as input for both running time calculation and microscopic simulation.
Keywords: railway simulation, railway planning, GPS, train performance,
calibration.
doi:10.2495/CMEM110511
1 Introduction
Increasing the precision of timetable planning is a key success factor for all
infrastructure managers, since it allows delay propagation to be minimized
without reducing usable capacity. When setting up a timetable, it is necessary to
estimate running times for different rolling stock on the current infrastructure.
Conventional running time calculators solve the motion equation, which is based
on a number of empirical parameters. Such parameters have been measured for
years for different kinds of rolling stock [1]; therefore, it is possible to calculate
the train speed profile in high detail. However, many influences on running times
are not deterministic, such as human behaviour and weather conditions, and even
different trains of the same series can show different performances. To cope
with this variability, recovery times are inserted, implicitly including an
imprecision in the representation of train motion.
While a deterministic running time calculation is sufficient to plan timetables,
which of course must be deterministic, a more detailed representation is required
in micro-simulation and for ex-ante estimation of timetable robustness,
which aims at reproducing train behaviour in the highest detail.
A performance factor, which introduces a stochastic element into the motion
equation, has been proposed by some authors and implemented in proven simulation
tools. This factor is multiplied by the tractive effort, the speed limit and the
braking deceleration during acceleration, cruising and braking, respectively.
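The role of the phase-specific performance factors can be sketched as one explicit-Euler step of the motion equation; all numbers below are illustrative, not calibrated values from the paper:

```python
# One Euler integration step of a simplified train motion equation with the
# performance factors described above: p_acc scales the tractive effort,
# p_cruise the speed limit, p_brake the braking deceleration.
def euler_step(v, mass, tractive_effort, resistance, dt,
               p_acc=1.0, p_cruise=1.0, p_brake=1.0,
               v_limit=40.0, braking=False, decel=0.7):
    if braking:
        a = -p_brake * decel                      # scaled service braking
    else:
        a = (p_acc * tractive_effort - resistance) / mass
    v_new = min(v + a * dt, p_cruise * v_limit)   # scaled speed-limit cap
    return max(v_new, 0.0)

# Accelerating at 90 % tractive performance for one 1 s step (SI units):
v1 = euler_step(v=10.0, mass=200e3, tractive_effort=150e3,
                resistance=20e3, dt=1.0, p_acc=0.9)
print(round(v1, 3))  # 10.575
```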
To estimate the distributions of such parameters, an iterative approach was
proposed by the authors [3]. A software tool which allowed on-board collected
data to be compared with running time calculation was developed from scratch
and tested on a line in northern Italy, demonstrating the benefits of a calibrated
motion equation in stochastic micro-simulation.
The results appeared very promising, although the calibration of the motion
equation was performed manually by the user, who graphically compared simulated
and real speed profiles at given points.
To overcome this weakness, and to allow a more precise calibration based on
a higher number of records, a new software tool has been developed which
allows an automatic calibration of the performance factors.
Figure 1:
and for a restrictive signal aspect. This separation was decided after a first
analysis of real braking curves, which appeared significantly different depending
on the reason for braking. Braking at halts showed lower deceleration
compared to braking for signals, and even lower values were recorded at speed
limit changes. Moreover, heavy trains braking at a speed reduction
often run at even lower speeds than allowed, or show very variable deceleration to
avoid excessive braking.
As a result, compared to conventional running time calculation, where a single
braking distance is continuously calculated and stored, three values are
computed.
2.4 Calibration algorithm
During calibration, the best-fitting set of performance factors for a train ride is
calculated. This computation is based on three assumptions:
1) The infrastructure model represents exactly the infrastructure used by
the train, in particular concerning signal positions, relative distances and
speed limits.
2) At the end of calibration, the calculated running time corresponds exactly to
the measured one. In other words, the set of performance factors must
lead to an exact calculation of the running time at the end of the
journey.
3) The integration period in the running time calculation must correspond to
the tracking sampling period. This simplifies the calibration
significantly, since each recorded value corresponds to a calculated
one, permitting a simple comparison in order to obtain an indicator of
the goodness of the estimated parameters.
The method used to compare the two arrays is the simple mean squared error
estimator. The software tool implements the estimator

  MSE = Σ_{t=1..N} [v_GPS(t) - v_C(t)]²    (1)
To compute a fixed length speed vector one dependent variable and four
independent variables have to be considered. The software tool uses cruising
performance as dependent variable for three reasons:
1) It has a small variation and a relatively high value (always higher than 90% in
the test case)
2) Cruising running phase is the longest during a train run, therefore a
minimum variation of its value has a great impact on total train running
time
3) The value of cruising performance is inversely proportional to train
running time.
Assuming 3), the value of the dependent variable can be found using the
bisection method.
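Given assumptions 2) and 3), the bisection search for the cruising factor can be sketched as follows (a minimal illustration: the function name and the monotone running-time model passed in are assumptions, not the tool's actual code):

```python
def calibrate_cruising_factor(run_time_measured, simulate_run_time,
                              lo=0.5, hi=1.0, tol=1e-4, max_iter=60):
    """Find the cruising performance factor for which the calculated
    running time matches the measured one (assumption 2).

    Because the cruising factor is inversely proportional to running
    time (property 3), simulate_run_time is monotonically decreasing
    in the factor, so bisection is applicable.
    """
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if simulate_run_time(mid) > run_time_measured:
            lo = mid          # calculated run too slow: raise the factor
        else:
            hi = mid          # calculated run too fast: lower the factor
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

For instance, with a toy model in which the running time is inversely proportional to the factor, the search converges to the factor that reproduces the measured running time.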
min Σ_{t=1}^{N} [v_GPS(t) − v_C(t)]²
v_C = f(a, b, c, d, e)   (2)
size(v_GPS) = size(v_C) = N
In the equation a, b, c, d, e are the five performance parameters, v_C is the array
of the calculated speeds and v_GPS is the array of the measured speeds.
This appears as a nonlinear optimization problem: an adequate method to
solve it has to be found; moreover, since it is difficult to determine a priori the
properties of the target function, a robust method must be used. A number of
proven algorithms to perform this computation can be found in the literature.
In the first tests, using only one parameter for braking performance, the map
represented in figure 2 was obtained. It represents the value of the target
function (z-axis) with the variation of the acceleration and braking factors.
Obviously the value of the cruising performance is not represented because it
depends on the other two values. It is possible to notice the optimum value point
for this train run.
The simulated annealing (SA) optimization method [9] is used in the software
tool to find the best performance parameters that reduce the difference between
computed and measured train speed. The algorithm begins by finding a solution at a
random point of the feasible region; after that, many steps are performed in order
to find a better solution. It is possible to limit the maximum number of steps to
be performed in order to limit the computation time.
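This search can be sketched by combining the mean-squared-error objective of eqns (1)-(2) with a generic simulated-annealing loop (the cooling schedule, step size and bounds here are illustrative assumptions, not the tool's actual settings):

```python
import math
import random

def mse(v_gps, v_c):
    """Mean squared error between measured and calculated speed arrays
    of equal length N (assumption 3)."""
    n = len(v_gps)
    return sum((g - c) ** 2 for g, c in zip(v_gps, v_c)) / n

def anneal(objective, x0, bounds, steps=2000, t0=1.0, seed=0):
    """Simulated annealing over the performance-factor vector x.

    objective(x) returns the MSE between v_GPS and the speed profile
    computed with the factors x; bounds clip each factor to its
    feasible range.
    """
    rng = random.Random(seed)
    x = list(x0)
    f = objective(x)
    best, best_f = list(x), f
    for k in range(steps):
        t = t0 * (1.0 - k / steps)          # linear cooling schedule
        cand = [min(hi, max(lo, xi + rng.gauss(0.0, 0.05)))
                for xi, (lo, hi) in zip(x, bounds)]
        f_cand = objective(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if f_cand < f or rng.random() < math.exp(-(f_cand - f) / max(t, 1e-9)):
            x, f = cand, f_cand
            if f < best_f:
                best, best_f = list(x), f
    return best, best_f
```

In a real calibration, `objective` would wrap the running time calculator; here any function of the factor vector can be plugged in.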
Figure 2:
Figure 3:
improvement will come from the calibration of the motion equation parameters
to fit the DIS data.
The approach appears very useful for the calibration of motion equation
within micro-simulation tools, while a promising application is represented by
the implementation of stochastic blocking time staircases instead of deterministic
ones in timetable planning software.
References
[1] Wende, D., Fahrdynamik des Schienenverkehrs, Wiesbaden, Teubner
Verlag, 2003.
[2] Hansen, I.A., Pachl, J., Railway Timetable & Traffic, Hamburg,
Eurailpress, 2008.
[3] Medeossi, G., Capacity and Reliability on Railway Networks: a Simulative
Approach, University of Trieste, 2010.
[4] de Fabris, S., Longo, G., et al., Automated analysis of train event recorder
data to improve micro-simulation models. In: J. Allan, E. Arias, C.A.
Brebbia et al. (eds.), Computers in Railways XI, WIT Press, Southampton,
575-585, 2008.
[5] Nash, A., Huerlimann, D., Schuette, J., Krauss, V.P., RailML: a standard
data interface for railroad applications. In: Allan, J., Hill, R.J., Brebbia,
C.A., Sciutto, G., Sone, S. (eds.), Computers in Railways IX, WIT Press,
Southampton, 45-54, 2004.
[6] Montigel, M., Representation of track topologies with double vertex
graphs. In: T.K.S. Murthy, F.E. Young, S. Lehman, W.R. Smith (eds.),
Computers in Railways, volume 2, Washington D.C., Computational
Mechanics Publications, 1992.
[7] Huerlimann, D., Nash, A., Railway simulation using OpenTrack. In: Allan,
J., Hill, R.J., Brebbia, C.A., Sciutto, G., Sone, S. (eds.), Computers in
Railways IX, WIT Press, Southampton, 45-54, 2004.
[8] Brown, R.G., Hwang, P.Y.C., Introduction to Random Signals and Applied
Kalman Filtering, Second Edition, John Wiley & Sons, NavtechGPS, 1997.
[9] Hillier, F., Lieberman, G., Introduction to Operations Research (8th ed.),
McGraw-Hill Science/Engineering/Math, 2005.
Section 8
Multiscale modelling
Abstract
This paper presents research on the development of a simulation method for the
study of the mechanisms of intergranular crack nucleation and propagation in
polycrystal metals on the mesoscale. Microstructural geometry models were
built randomly using Voronoi techniques. Based on these grain structure
geometry models, two-dimensional grain structure finite element models were
created using a Patran Command Language (PCL) program. Techniques for the
implementation of the cohesive elements between grain boundaries were
developed in PCL for the generation of two-dimensional cohesive models.
Simulations on intergranular crack nucleation and evolution along grain
boundaries using two-dimensional finite element cohesive models were carried
out on the mesoscale level. Several aspects that affect the crack nucleation and
propagation were studied, which included random grain geometries, grain
boundary misorientations, grain boundary peak strength, grain boundary fracture
energy, grain properties, and grain plasticity. The simulations demonstrated that
the cohesive model is a useful and efficient modeling tool for the study of the
intergranular crack nucleation and evolution on the mesoscale level. The
simulation results showed that the factors studied have large impacts on
intergranular crack nucleation and evolution based on the current model
capabilities and conditions.
Keywords: intergranular crack nucleation and evolution, cohesive zone model,
polycrystal metals, mesoscale.
doi:10.2495/CMEM110521
1 Introduction
A major challenge in life cycle prediction and management of aircraft structural
components is the lack of information on the crack nucleation and short crack
propagation stages. However, for many airframe materials, the life of an aircraft
component could be almost completely exhausted within these two phases.
Therefore, studies on smaller-scale levels, such as the microscopic scale or the
mesoscale, are strongly needed to better understand the physical nature of the
crack nucleation and propagation.
A fundamental research project was launched at the Institute for Aerospace
Research of the National Research Council Canada, aimed at developing a coupled
atomic-meso-macroscopic modeling strategy for the simulation of crack
nucleation and propagation in aircraft components. To this end, the development
of modelling capabilities at each length scale was essential. Research work on
the mesoscale level is presented in this paper. The simulation methods, the
developed capabilities, and the studied mechanisms of intergranular cracking along
grain boundaries (GB) are discussed.
The major advantage of the cohesive zone models is that they can predict the
formation of damage without the need to pre-define any initial damage in the
model. Moreover, cohesive zone formulations can be easily implemented in
finite element codes using cohesive elements [4].
3.1 Traction-separation based CZMs
There are two types of cohesive zone models, the continuum-based constitutive
model and the traction-separation based constitutive model. If the interface
thickness is negligibly small (or zero), the constitutive response of the cohesive
layer can be defined directly in terms of traction versus separation. If the
cohesive layer has finite thickness and if macroscopic properties of the adhesive
material are available, the response can be modeled using conventional material
models. For the simulation of grain boundary characteristics, the traction-separation
based cohesive models can be used to define the grain boundary
behavior.
One of the existing traction-separation models assumes an initial linear elastic
behavior, followed by the formation and evolution of damage, as shown in
Figure 1. The nominal traction stress vector, S, consists of two components in
two-dimensional problems, Sn and St, which represent the normal and shear
tractions, respectively. The corresponding separation vector is denoted by (δn,
δt).
Figure 1:
3.2.1 Maximum nominal stress criterion
Damage is assumed to nucleate when the maximum nominal stress ratio reaches
the value of one:

max{ ⟨Sn⟩/Sn°, St/St° } = 1.0   (1)

where Sn° and St° represent the peak values of the nominal stresses and the
Macaulay bracket is defined by

⟨Sn⟩ = Sn if Sn ≥ 0, ⟨Sn⟩ = 0 otherwise   (2)

The use of the Macaulay bracket is to signify that a pure compressive
deformation or stress state does not start damage.
3.2.2 Quadratic nominal stress criterion
Damage is assumed to nucleate when a quadratic interaction function involving
the nominal stress ratios reaches the value of one.
(⟨Sn⟩/Sn°)² + (St/St°)² = 1.0   (3)
Sn = (1 − D) S̄n if S̄n ≥ 0, Sn = S̄n otherwise   (4)

St = (1 − D) S̄t   (5)

where S̄n and S̄t are the stress components predicted by the elastic traction-separation
behavior for the current strains without damage. D is a damage variable
representing the overall damage in the material.
For linear softening, the damage variable D evolves based on the
effective displacement δm:

D = δm^f (δm^max − δm^0) / (δm^max (δm^f − δm^0))   (6)

where δm^max is the maximum effective displacement attained during the loading
history, δm^0 the effective displacement at damage initiation and δm^f the
effective displacement at complete failure.
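The initiation, softening and damage equations above can be illustrated with a scalar sketch (the stiffness and separation limits below are placeholder values, and the linear-softening form of eqn (6) is the standard one adopted in the reconstruction above):

```python
def damage_linear(delta_max, delta0, deltaf):
    """Linear-softening damage variable D of eqn (6), based on the
    largest effective separation delta_max reached so far."""
    if delta_max <= delta0:
        return 0.0                       # still elastic, no damage
    if delta_max >= deltaf:
        return 1.0                       # fully failed
    return deltaf * (delta_max - delta0) / (delta_max * (deltaf - delta0))

def traction(delta, delta_max, k, delta0, deltaf):
    """Normal traction Sn = (1 - D) * k * delta in tension; pure
    compression (delta < 0) is transmitted undamaged, as implied by
    the Macaulay bracket in eqn (2)."""
    if delta < 0.0:
        return k * delta
    d = damage_linear(max(delta, delta_max), delta0, deltaf)
    return (1.0 - d) * k * delta
```

With e.g. k = 100, δ0 = 0.01 and δf = 0.1, the traction rises linearly to the peak at δ0 and drops to zero at δf.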
Gn/Gn^C + Gt/Gt^C = 1   (7)

where Gn and Gt refer to the work done by the tractions and their conjugate relative
displacements in the normal and shear directions, respectively. Gn^C and Gt^C are the
critical fracture energies required to cause failure in the normal and shear
directions, respectively.
GB properties: G^C = 0.25 N/mm, Smax = 500 MPa, kn = 2.5e7 MPa.

G^C(Δθ) = G^C cos(4Δθ)   (8)

Smax(Δθ) = Smax cos(4Δθ)   (9)

where Δθ is the grain boundary misorientation angle, converted from degrees to
radians by π/180.
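Under the cos(4Δθ) reading of eqns (8)-(9) adopted above (a reconstruction of the garbled originals, not a confirmed formula), the misorientation scaling of the listed GB properties can be sketched as:

```python
import math

G_C0 = 0.25      # baseline GB critical fracture energy [N/mm]
S_MAX0 = 500.0   # baseline GB peak strength [MPa]

def gb_properties(delta_theta_deg):
    """Grain-boundary fracture energy and peak strength scaled with the
    misorientation angle (in degrees), following the cos(4*theta) form
    read from eqns (8)-(9); clipped at zero to stay physical."""
    theta = math.radians(delta_theta_deg)
    scale = max(math.cos(4.0 * theta), 0.0)
    return G_C0 * scale, S_MAX0 * scale
```

At zero misorientation the baseline values are recovered; the scaling vanishes as 4Δθ approaches 90°.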
Figure 2:
The boundary conditions and load applied to the samples are illustrated in
Figure 3. Three sides were pinned and one side was loaded by a displacement of
0.005 mm (5 μm), which was equivalent to 1% strain.
Figure 3:
5 Parametric studies
Using the mesoscale models, simulations were carried out to study the effects of
several sets of parameters on the crack nucleation and evolution in the
polycrystalline samples. The studied parameters and results are described in the
next sections.
5.1 Variations of random grain geometry
In this section, the effects of random grain geometry on the crack nucleation and
evolution are examined. The grain geometry was randomly created using the
Voronoi tessellation method. By fixing the overall size of the sample and the
grain number (100), samples with different grain geometries (grain size, grain
shape, aspect ratio) were generated. Figure 4 demonstrates the crack paths and
the stress distributions in the three samples with different grain geometries. From
this figure, it can be seen that the crack patterns and stress distributions are quite
different. Figure 5 shows the damage dissipation energy versus applied strain for
the three samples shown in Figure 4. The figure indicates that the crack
evolution patterns, i.e. the energies dissipated by the damage, are quite different.
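The random grain generation named above can be illustrated with a discrete Voronoi tessellation in pure Python (a generic sketch of the method, not the authors' PCL program; the grid resolution and seeding are arbitrary):

```python
import random

def random_grain_map(n_grains=100, size=1.0, resolution=50, seed=0):
    """Discrete Voronoi tessellation: place n_grains random seed points
    in a size x size sample and label each cell of a resolution x
    resolution grid with the index of its nearest seed (its grain)."""
    rng = random.Random(seed)
    seeds = [(rng.uniform(0, size), rng.uniform(0, size))
             for _ in range(n_grains)]
    h = size / resolution
    grain = [[min(range(n_grains),
                  key=lambda g: (seeds[g][0] - (i + 0.5) * h) ** 2
                              + (seeds[g][1] - (j + 0.5) * h) ** 2)
              for j in range(resolution)]
             for i in range(resolution)]
    return seeds, grain
```

Changing the seed produces the different random grain geometries compared in the study; in the paper the cell boundaries would then be meshed and lined with cohesive elements.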
Figure 4: (i)-(iii)
Figure 5:
Figure 6:
Figure 7:
Figure 8:
Figure 9:
Figure 10:
energies were larger. This may be because less external work was converted into
plasticity dissipation energy, and more was used to generate the crack surface
along grain boundaries. When the reference yield stress became much lower than
the GB peak strength, the damage energy dissipated along grain boundaries was
much lower, and most of the external work was converted into plastic
deformation in the grains.
Figure 11:
6 Summary
In the present work, capabilities for the construction of grain structure finite
element cohesive zone models were developed. Modeling and simulation of
intergranular crack nucleation and propagation in polycrystal metals on the
mesoscale were conducted. The intergranular fracture characteristics were
investigated through parametric studies. The simulation results showed that grain
boundary cohesive properties, such as the grain boundary peak strength and the
grain boundary fracture energy, directly affected the intergranular crack
nucleation and evolution. The lower the grain boundary strength, the earlier the
cracks nucleated. The lower the grain boundary fracture energy, the earlier and
faster the cracks propagated after nucleation occurred. Different grain
geometries and grain boundary misorientations resulted in different crack
nucleation and evolution patterns. Moreover, the grain material properties and
the competition between grain plasticity and GB strength also influence
the crack nucleation and evolution along grain boundaries.
Acknowledgements
This research was supported by the Technology Investment Funding (TIF) from
the Department of National Defence Canada.
References
[1] Zhao, Y. and Tryon, R., Automatic 3-D simulation and micro-stress
distribution of polycrystalline metallic materials, Computer Methods in
Applied Mechanics and Engineering, Vol. 193, pp. 3919-3934, 2004.
[2] Barenblatt, G.I., The formation of equilibrium cracks during brittle
fracture. General ideas and hypothesis. Axially-symmetric cracks, Journal
of Applied Mathematics and Mechanics (PMM), Vol. 23, No. 3, pp. 622-636, 1959.
[3] Dugdale, D.S., Yielding of steel sheets containing slits, Journal of the
Mechanics and Physics of Solids, Vol. 8, pp. 100-104, 1960.
[4] Fan, C., Jar, P.-Y.B. and Cheng, J.J.R., Cohesive zone with continuum
damage properties for simulation of delamination development in fibre
composites and failure of adhesive joints, Engineering Fracture
Mechanics, Vol. 75, pp. 3866-3880, 2008.
[5] Chandra, N., Li, H., Shet, C. and Ghonem, H., Some issues in the application
of cohesive zone models for metal-ceramic interfaces, International
Journal of Solids and Structures, Vol. 39, pp. 2827-2855, 2002.
[6] de Moura, M.F.S.F., Goncalves, J.P.M., Chousal, J.A.G. and Campilho,
R.D.S.G., Cohesive and continuum mixed-mode damage models applied
to the simulation of the mechanical behavior of bonded joints,
International Journal of Adhesion & Adhesives, Vol. 28, pp. 419-426, 2008.
[7] Ruiz, G., Pandolfi, A. and Ortiz, M., Three-dimensional cohesive
modeling of dynamic mixed-mode fracture, International Journal for
Numerical Methods in Engineering, Vol. 52, pp. 97-120, 2001.
[8] Iesulauro, E., Ingraffea, A.R., Arwade, S. and Wawrzynek, P.A.,
Simulation of grain boundary decohesion and crack initiation in
aluminum microstructure models, Fatigue and Fracture Mechanics: 33rd
Volume, ASTM STP 1417, W.G. Reuter and R.S. Piascik (eds.), American
Society for Testing and Materials, West Conshohocken, PA, 2002.
[9] Luther, T. and Könke, C., Polycrystal models for the analysis of
intergranular crack growth in metallic materials, Engineering
Fracture Mechanics, Vol. 76, pp. 2332-2343, 2009.
[10] Glaessgen, E., Saether, E., Phillips, D. and Yamakov, V., Multiscale
modelling for grain-boundary fracture: cohesive zone models parameterized
from atomistic simulations, Proceedings of the 47th
AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics &
Materials Conference, AIAA 2006-1674, Newport, Rhode Island, 2006.
[11] Li, H. and Chandra, N., Analysis of crack growth and crack-tip plasticity
in ductile materials using cohesive zone models, International Journal of
Plasticity, Vol. 19, pp. 849-882, 2003.
[12] ABAQUS, ABAQUS reference user manual, version 6.8.
Abstract
This paper presents the full components of macroscopic homogenized material
properties and the microscopic localized response obtained through a multiscale
finite element simulation using realistic crystal morphology. Crystal morphology
analysis was performed to reveal microstructure and texture of a polycrystalline
piezoelectric material. The insulative specimen of piezoelectric material was
coated with a conductive layer of amorphous osmium to remove an electric
charge, and crystal orientations were measured by means of electron backscatter
diffraction. Then the obtained crystal orientations were applied to a multiscale
finite element simulation based on homogenization theory.
Keywords: piezoelectric material, EBSD, crystal morphology, multiscale finite
element simulation, homogenization theory.
1 Introduction
Piezoelectric materials have been used in actuators or sensors as a component of
various electronic and mechanical devices. Generally these materials consist of
many crystal grains and domains at a microscopic scale. Since each domain
shows strongly anisotropic mechanical and electrical behaviours according to
doi:10.2495/CMEM110531
Figure 1: Specimen geometry (15 mm × 1 mm; unit: mm) with the poling direction along x3.
Figure 2: X-ray diffraction profile of the specimen: intensity [cps] versus 2θ [deg] (0-90°), with peaks indexed as {100}, {110}, {111}, {200}, {210}, {211}, {220}, {310} and {222}.
PQ = (1/3) Σ_{i=1}^{3} h_i / σ_h   (1)

where h_i means the peak height of the Hough-transformed i-th Kikuchi band and σ_h
is the standard deviation of the Hough transform. The PQ value represents
quantitatively the fitting condition of Kikuchi patterns with the target crystal
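Under this reading of eqn (1) (reconstructed from the garbled original), the PQ value can be sketched as:

```python
def pattern_quality(peak_heights, sigma_h):
    """PQ of eqn (1): the mean of the three strongest Hough-transform
    peak heights h_i, normalised by the standard deviation sigma_h of
    the Hough transform."""
    top3 = sorted(peak_heights, reverse=True)[:3]
    return sum(top3) / (3.0 * sigma_h)
```

Higher PQ means sharper Kikuchi bands and therefore a more reliable orientation fit.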
Figure 3: (a) Orientation map and (b) PQ map for the measured areas (i)-(iv) (81.28 μm × 63.5 μm; PQ = 92.19, 86.45, 69.34 and 80.88, respectively).
S_y1' = (1/n) Σ_{i=1}^{n} | e_i^c · e_y1' |   (2)
Figure 4: Crystal orientation map of the specimen covering areas (i)-(iv) (90.8 μm × 127.6 μm; PQ = 78.73).
where e_i^c means the basis vector of the c axis of the crystallographic coordinate
system in the microstructural coordinate system at the i-th measuring point. e_y1' is
the basis vector of an optional direction y1' lying in the y1-y2 plane, and n is
the total number of measuring points. The crystal orienting degree becomes zero
when all points orient to the normal direction of the y1-y2 plane and it becomes
Figure 5: Crystal orienting degree (scale 0-0.5) as a function of the direction y1' in the y1-y2 plane.
one when all points orient to the y1' direction. Figure 5 shows the crystal orienting
degree for the crystal orientation map in figure 4. These values were calculated
by changing the angle between y1 and y1' from 0° to 180°. The crystal
orienting degree indicated a multi-directionally uniform orientation in the y1-y2
plane.
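The crystal orienting degree of eqn (2), as reconstructed above, can be sketched as (vectors as plain tuples; the absolute value of the dot product is our reading of the formula):

```python
def orienting_degree(c_axes, e_y1p):
    """Crystal orienting degree S_y1' of eqn (2): average over the n
    measuring points of |e_i^c . e_y1'|, so it is 0 when every c axis
    is normal to the y1-y2 plane and 1 when every c axis lies along
    the chosen direction y1'."""
    n = len(c_axes)
    return sum(abs(sum(a * b for a, b in zip(ec, e_y1p)))
               for ec in c_axes) / n
```

Sweeping `e_y1p` around the y1-y2 plane reproduces the polar profile plotted in figure 5.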
Figure 6: Multiscale structure: the macrostructure (coordinates x) under mechanical and electrical loads, with a unit cell microstructure (coordinates y).
Figure 7: Regular cubic mesh of the microstructure in the y1-y2 plane.
for PZT [13] indicated that the dependence on the sampling area of crystal
orientations falls off and the relative error of the macroscopic homogenized material
properties becomes 1% or less if the number of grains is beyond approximately
200. Consequently, the crystal orientation map of 90.8 × 90.8 μm² can be regarded
as a representative volume element of the microstructure, since it includes 233
grains.
Figure 8: Comparison of computed and experimental macroscopic homogenized material properties: elastic constants (macro E11, E12, E13, E22, E23, E33, E44, E55, E66), dielectric constants (macro T11, T22, T33) and piezoelectric constants (macro d31, d32, d33, d15, d24); the relative errors between computation and experiment were within approximately 0.40-1.5%.
We now discuss the microscopic localized behaviours of the SEM-EBSD-measured
realistic microstructure in response to a macroscopic external load.
Figure 9(a) demonstrates the macrostructure and its boundary conditions, and
(b) shows the microstructure. While the above-mentioned 143 × 143 × 1-divided
mesh was used for the microstructure, a one-element mesh was employed for the
macrostructure because of the linear displacement and electric potential fields.
The macroscopic homogenized material properties in figure 8 were introduced
into the macrostructure, and the free deformation of the macrostructure was analyzed
under a uniform electric field along the macrostructural x3 axis. Then the material
behaviours of the SEM-EBSD-measured realistic microstructure were evaluated
in response to the macroscopic external load.
The difference between a single crystal and a polycrystal is the presence or absence
of interference of material behaviours among neighbouring grains. Namely, grains
show various deformations according to their orientations, and they have a
large influence on each other in a polycrystalline microstructure. The interference of
mechanical deformation under electrical loads appears clearly in the piezoelectric
energy. Therefore, we picked out the almost [001]-orienting grains from the many
grains in the microstructure as an example, and investigated
their piezoelectric energy. To be more precise, we first calculated the angle
between the crystallographic c axis, which is the direction of spontaneous
polarization in the case of the perovskite tetragonal structure, and the microstructural
y3 axis for the microstructural finite element model. The elements whose angle between
both axes is less than 5 degrees were picked out as almost [001]-orienting grains,
and their piezoelectric energy was calculated from the mechanical and electrical
states caused by the macrostructural external load. Figure 10 shows the relation
between the piezoelectric energy and the average of the orientation gap, which was
calculated from the neighbouring eight elements, in the realistic microstructure.
Figure 9: (a) Macrostructure (1 mm × 1 mm × 1 mm, uniform electric field E3 = 100 kV/m along x3); (b) microstructure (90.8 μm × 90.8 μm in the y1-y2 plane).
Figure 10: Piezoelectric energy versus the average orientation gap [deg] (averaged over the neighbouring eight elements) for the almost [001]-orienting grains in the realistic polycrystal, compared with the unconstrained and constrained single-crystal levels.
This figure indicates that the piezoelectric energy of some grains goes beyond
that of the perfectly constrained single crystal when the average
orientation gap is over 10 degrees. However, there is no strong correlation
between low piezoelectric energy and the orientation gap.
4 Conclusions
The realistic crystal morphology of a BaTiO3 polycrystalline piezoelectric
material was obtained by SEM-EBSD measurement utilizing an amorphous
osmium coating for the prevention of electrification. Then we employed a
multiscale finite element simulation and investigated the effect of the realistic crystal
morphology on material properties and behaviours. As a computational result,
the macroscopic homogenized material properties correspond approximately to
the experimental values and they satisfy transverse isotropy. In a realistic
polycrystal under an electric field, there are some grains whose mechanical
deformation is interfered with more strongly by neighbouring grains than in
an omnidirectionally constrained state.
Acknowledgements
One of the authors (Y. Uetsuji) was financially supported by a Grant-in-Aid for
Young Scientists (B) (No. 22760087) from the Ministry of Education, Culture,
Sports, Science and Technology of Japan.
References
[1] Venables, J. & Harland, C., Electron back-scattering patterns: a new
technique for obtaining crystallographic information in the scanning
electron microscope. Philosophical Magazine, 27, pp. 1193-1200, 1973.
[2] Dingley, D. & Randle, V., Microstructure determination by electron back-scatter
diffraction. Journal of Materials Science, 27, pp. 4545-4566, 1992.
[3] Wu, X., Pan, X. & Stubbins, J.F., Analysis of notch strengthening of 316L
stainless steel with and without irradiation-induced hardening using EBSD
and FEM. Journal of Nuclear Materials, 361, pp. 228-238, 2007.
[4] Calcagnotto, M., Ponge, D., Demir, E. & Raabe, D., Orientation gradients
and geometrically necessary dislocations in ultrafine grained dual-phase
steels studied by 2D and 3D EBSD. Materials Science and Engineering: A,
527, pp. 2738-2746, 2010.
[5] Yasutomi, Y. & Takigawa, Y., Evaluation of crystallographic orientation
analyses in ceramics by electron back scattering patterns (EBSP) method.
Bulletin of the Ceramic Society of Japan, 37, pp. 84-86, 2002.
[6] Yang, L.C., Dumler, I. & Wayman, C.M., Studies of herringbone domain
structures in lead titanate by electron back-scattering patterns. Materials
Chemistry and Physics, 36, pp. 282-288, 1994.
[7] Koblischka-Veneva, A. & Mücklich, F., Orientation imaging microscopy
applied to BaTiO3 ceramics. Crystal Engineering, 5, pp. 235-242, 2002.
[8] Tai, C.W., Baba-kishi, K.Z. & Wong, K.H., Microtexture characterization
of PZT ceramics and thin films by electron microscopy. Micron, 33,
pp. 581-586, 2002.
[9] Guedes, J.M. & Kikuchi, N., Preprocessing and postprocessing for materials
based on the homogenization method with adaptive finite element methods.
Computer Methods in Applied Mechanics and Engineering, 83, pp. 143-198,
1990.
[10] Uetsuji, Y., Nakamura, Y., Ueda, S. & Nakamachi, E., Numerical
investigation on ferroelectric properties of piezoelectric materials.
Modelling and Simulation in Materials Science and Engineering, 12,
pp. S303-S317, 2004.
[11] Kuramae, H., Nishioka, H., Uetsuji, Y. & Nakamachi, E., Development and
performance evaluation of parallel iterative method. Transactions of the
Japan Society for Computational Engineering and Science, Paper
No. 20070033, 2007.
[12] Uetsuji, Y., Yoshida, T., Yamakawa, T., Tsuchiya, K., Ueda, S. &
Nakamachi, E., Evaluation of ferroelectric properties of piezoelectric
ceramics based on crystallographic homogenization method and crystal
orientation analysis by SEM-EBSD technique. JSME International Journal
Series A, 49, pp. 209-215, 2006.
[13] Uetsuji, Y., Satou, Y., Nagakura, H., Nishioka, H., Kuramae, H. &
Tsuchiya, K., Crystal morphology analysis of piezoelectric ceramics using
the electron backscatter diffraction method and its application to multiscale
finite element analysis. Journal of Computational Science and Technology,
2, pp. 568-577, 2008.
[14] Jaffe, B., Piezoelectric Ceramics, Academic Press: London and New York,
p. 74, 1971.
Section 9
Ballistics
Abstract
This paper concerns an analysis of target penetration by a selected armour-piercing
(AP) projectile: the 7.62×54R with steel core. Numerical and experimental
research was carried out. The aim of this work was a comparison of the results
obtained in real conditions of a ballistic test and in computer simulation. In this
study, models of two three-dimensional targets and of the core of the projectile were
built. The structure of the projectile is complex, but the steel core plays the main role
in the perforation process; the numerical model of the projectile was therefore reduced
to describe only the steel core dynamics. The 3D Element Free Galerkin method is
applied to solve the problem under consideration, using the algorithm implemented in
the LS-DYNA code. The space discretization of the analyzed problem was
prepared by means of the HyperWorks software (HyperMesh module). The total
number of elements reaches 500 000 in this model. The Johnson-Cook
constitutive model is applied to describe the behaviour of the metallic parts: the steel
layers and the projectile's core. The experimental results were obtained using a
high-speed video camera, with which the target penetrations by the projectile were
recorded. The processing of the data obtained from the high-speed camera was
carried out by means of the TEMA software. A good correlation
between the numerical and experimental results was obtained, and many
interesting mechanical effects observed during the experiment were analyzed.
Keywords: penetration, perforation, numerical model, constitutive model,
armour.
doi:10.2495/CMEM110541
1 Introduction
The paper describes a coupled numerical-experimental study of armour
perforation by a given type of projectile. An analysis of an AP 7.62×54R type
projectile with steel core was carried out. The structure of the projectile is
complex, but the steel core plays the main role in the perforation process, so the
numerical model of the projectile was reduced to describe only the steel core
dynamics. Two types of targets are used in the modeling. The numerical
simulations were performed using the Element Free Galerkin (EFG) method
implemented in the LS-DYNA code. Three-dimensional numerical models for
each version of impact were developed. The initial stage of the problem is
presented in Figure 1.
Figure 1:
Figure 2:
Figure 3:
2 Numerical model
For the purpose of this study, numerical models of the projectile and targets were
constructed. The target can be built using two types of configuration: a steel
plate with an aluminum plate, or four aluminum plates. The numerical models for
these panel configurations (steel/aluminum plates and four aluminum plates) are
presented in Figure 4 (a) and (b).
(a)
(b)
Figure 4:
the body, no mesh in the classic sense is needed to define the problem. The
initial condition was reduced to the given projectile velocity of 854 m/s.
The boundary condition was assumed as the target plate fixed at its edge by a
5 millimeter thick ring.
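The Johnson-Cook constitutive model named in the abstract can be sketched in its standard flow-stress form [2] (the constants in the example call are generic steel-like placeholders, not the values used in this paper):

```python
import math

def johnson_cook_stress(eps_p, eps_rate, temp, A, B, n, C, m,
                        eps_rate0=1.0, t_room=293.0, t_melt=1800.0):
    """Johnson-Cook flow stress:
    sigma = (A + B*eps_p**n) * (1 + C*ln(eps_rate/eps_rate0)) * (1 - T***m),
    where T* = (T - T_room)/(T_melt - T_room) is the homologous temperature.
    """
    t_star = min(max((temp - t_room) / (t_melt - t_room), 0.0), 1.0)
    return ((A + B * eps_p ** n)
            * (1.0 + C * math.log(eps_rate / eps_rate0))
            * (1.0 - t_star ** m))

# Example with placeholder constants (NOT the paper's parameter set):
sigma = johnson_cook_stress(eps_p=0.05, eps_rate=1.0e4, temp=400.0,
                            A=350.0, B=275.0, n=0.36, C=0.022, m=1.0)
```

The three factors separate strain hardening, strain-rate hardening and thermal softening, which is why the model suits high-rate impact problems such as this one.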
3 Experimental setup
This experimental work was carried out at a firing ground. The experimental
setup is shown in Figure 3. Firing took place in a ballistic tunnel, and a
ballistic barrel was used for firing. Projectiles of the 7.62×54R AP type were
used. All of the projectiles were turned, because the fuse of the projectile induces
a flash, which makes recording a film impossible. A turned projectile is shown in
Figure 5.
Figure 5:
A turned projectile.
The target plate is fixed at its edge with a 5 millimeter thick ring. A high-speed
camera was placed in the ballistic tunnel; it had a very resistant cover made
of transparent polycarbonate.
Figure 6:
Figure 7:
Figure 8:
Figure 9:
Change in the kinetic energy of the projectile over time for the second
case of the numerical model.
Figure 10:
Figure 11:
5 Conclusions
The presented analysis has provided only preliminary information about the
comparison between the numerical and experimental model of the perforation
problem. There is insufficient data about the mechanical properties of the steel,
References
[1] Hogg, P.J., Composites for ballistic applications, Department of
Materials, Queen Mary, University of London.
[2] Johnson, G.R., Cook, W.H., A constitutive model and data for metals
subjected to large strains, high strain rates and high temperatures,
Engineering Fracture Mechanics, Vol. 21, Issue 1, pp. 31-48, 1985.
[3] Morka, A., Jackowska, B., Niezgoda, T., Numerical study of the shape effect
in the ceramic based ballistic panels, Journal of KONES Powertrain and
Transport, Vol. 16, No. 4, 2009.
[4] Kambur, C., Assessment of Mesh-free Methods in LS-DYNA: Modeling of
Barriers in Crash Simulation, 2004.
[5] Hallquist, J.O., LS-DYNA Theory Manual, Livermore Software Technology
Corporation, 2006.
[6] LS-DYNA Keyword User's Manual, version 971, Livermore Software
Technology Corporation, 2007.
Abstract
During the operations of the UN and NATO forces in Africa and Asia it became
apparent that the equipment held and utilized by these organizations was not
proof against missiles fired from rocket propelled grenade launchers. The
missiles utilized by terrorist organizations belong to the first generation:
they have a head with a shaped charge permitting them to pierce up to 300 mm
of RHA steel.
There are four protection methods against this hazard: reactive armour, thick
conventional armour, rod armour and active defence systems. Reactive armour
cannot be applied to every kind of vehicle. Thick conventional armour limits
other important features of the vehicle. Active defence systems are still under
development, with the exception of the Russian Arena system. This has led to
the rapid development of rod armour. The main task of this armour type is to
disturb the symmetry of the shaped charge jet by the shock resulting from the
deflagration, rather than to prevent the forming of the jet.
The finite element model of a missile with a shaped charge head and its
interaction with two types of rod armour is presented in this article. Cases
of missile impact on rods of circular and square cross-section, for different
typical missile speeds, are considered.
Keywords: mechanics, finite element method, rod armour, RPG.
1 Introduction
From observation of armed conflicts all over the world and of actions carried
out by Polish army divisions within the framework of stabilization missions, a
change can be noticed in the way of fighting and in the ammunition used. It
appears that currently the biggest threat is posed by two types of ammunition.
doi:10.2495/CMEM110551
Figure 1:
Figure 2:
The numerical model constructed and used for the calculations is composed
of 19 parts. Each part was assigned material properties that characterize
the behaviour of the real materials used in the analysed projectile. For
the description of copper a simplified Johnson-Cook model was used. The
behaviour of the other materials was represented by a multilinear model
(Piecewise_Linear_Plastic). This model reproduces material characteristics
through an experimental curve giving the dependence of stress on strain.
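The simplified Johnson-Cook flow stress used for the copper parts can be sketched as follows. The constants shown are illustrative values of the right order for annealed copper, not the ones actually used in the model (the paper does not list them):

```python
import math

def johnson_cook_flow_stress(eps_p, eps_rate, T,
                             A, B, n, C, m,
                             eps_rate_ref=1.0, T_room=293.0, T_melt=1356.0):
    """Simplified Johnson-Cook flow stress [Pa]:
    sigma = (A + B*eps_p**n) * (1 + C*ln(eps_rate*)) * (1 - T***m)."""
    strain_term = A + B * eps_p ** n
    # clamp the rate ratio so quasi-static loading does not lower the stress
    rate_term = 1.0 + C * math.log(max(eps_rate / eps_rate_ref, 1.0))
    # homologous temperature, clamped to [0, 1]
    T_star = min(max((T - T_room) / (T_melt - T_room), 0.0), 1.0)
    return strain_term * rate_term * (1.0 - T_star ** m)

# illustrative constants: A = 90 MPa, B = 292 MPa, n = 0.31, C = 0.025, m = 1.09
sigma = johnson_cook_flow_stress(eps_p=0.3, eps_rate=1e4, T=400.0,
                                 A=90e6, B=292e6, n=0.31, C=0.025, m=1.09)
```

At room temperature and the reference strain rate the expression reduces to the quasi-static hardening curve A + B·ε^n, which is why the model needs only a handful of constants fitted to test data.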
Contact was defined between all parts of the model, of the type usually used
in dynamic problems, for example in crash-type analyses. In these analyses,
as in the described case, we deal with large deformations and high element
displacement speeds. The parts of the projectile are connected to each other
with a screw thread; between these elements a contact simulating this
connection was defined. The chosen contact type permits a conditional
separation of the connected elements.
The simulation reproduced the impact of a PG-7G projectile into an
obstruction made of two angle sections forming a fragment of the rod armour
structure (fig. 3). In the first case rods with a circular cross-section of
15 mm diameter were considered, and in the second with
a square cross-section (side length 14 mm). The rod dimensions were matched so
that the mass of the armour in both solutions was similar. In both cases it was
assumed that the total length of a single rod amounted to 500 mm and that the
projectile hits centrally between the rods, at half of their length.
The simulation was carried out for three initial projectile speeds Vz. In
the first variant the projectile moved with the minimal speed it acquires
during firing (100 m/s), in the second with the maximal speed (300 m/s). The
third case used an intermediate speed of 200 m/s.
Figure 3:
Initial-boundary conditions.
M q'' + C q' + K q = f
(1)
This equation is then solved with a direct integration method (the so-called
explicit scheme), which is widely used in the analysis of highly nonlinear
phenomena (impacts, stamping, blast wave influence on a structure). In
calculations of this type we have to deal with large strains and strain rates.
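For a single degree of freedom, the explicit central-difference scheme behind such solvers can be sketched as follows. This is a minimal illustration of the integration rule for eq. (1), not the LS-DYNA implementation:

```python
import math

def central_difference(m, c, k, f, q0, v0, dt, n_steps):
    """Explicit central-difference integration of m*q'' + c*q' + k*q = f(t).
    Conditionally stable: dt must stay below roughly 2/sqrt(k/m)."""
    a0 = (f(0.0) - c * v0 - k * q0) / m
    q_prev = q0 - dt * v0 + 0.5 * dt ** 2 * a0  # fictitious displacement at t = -dt
    q, history = q0, [q0]
    for i in range(n_steps):
        # m*(q+ - 2q + q-)/dt^2 + c*(q+ - q-)/(2dt) + k*q = f  solved for q+
        rhs = (dt ** 2 * (f(i * dt) - k * q)
               + m * (2.0 * q - q_prev)
               + 0.5 * c * dt * q_prev)
        q_prev, q = q, rhs / (m + 0.5 * c * dt)
        history.append(q)
    return history

# undamped oscillator with natural period 1 s, released from q = 1
hist = central_difference(m=1.0, c=0.0, k=(2.0 * math.pi) ** 2,
                          f=lambda t: 0.0, q0=1.0, v0=0.0,
                          dt=1e-3, n_steps=1000)
```

No system of equations is solved per step (each update is a division), which is what makes the explicit scheme attractive for short, violent events such as impacts, at the cost of a small stable time step.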
Figure 4:
Figure 5:
The course of the projectiles deformation for the impact speed 300
m/s.
Figure 6:
Figure 7:
Figure 8:
Figures 6, 7 and 8 present the maximal destruction of the explosive material
and the shaped charge liner recorded during the whole simulation. On the
basis of the figures we can observe that the destruction is greater in the
case of impact into the rods with a square cross-section. Additionally, as
the initial speed of the projectile increases, the explosive material is
crushed more uniformly along its whole length.
5 Conclusions
From the point of view of protecting soldiers and equipment, the most
important fact is whether or not the shaped charge jet is created. This is
decided by the condition of the explosive material and the shaped charge
liner at the moment of fuse actuation.
The analysis carried out showed that changing the cross-section of the rods
from circular to square, while maintaining the same mass of the whole armour,
decreases the effectiveness of the projectile. The key elements of the
projectile, such as the explosive material and the shaped charge liner, are
deformed more and at the same time have a lower ability to create the shaped
charge jet, which is the greatest destructive agent of this type of weapon.
The speed of the projectile at impact with the armour is also significant.
Increasing the speed causes, above all, damage to the explosive material
over a greater surface.
References
[1] Wiśniewski A., Pancerze - budowa, projektowanie i badanie, WNT,
Warszawa, 2001.
[2] www.ruag.com.
[3] Military Parade, Russia, Moscow, 2001.
[4] J.O. Hallquist, LS-DYNA Theoretical Manual, Livermore Software
Technology Corporation, California, 2005.
Abstract
The objective of this paper is a feasibility study of a new concept for ceramic
backing in multilayer ballistic panels. The truss structures were applied as a
backing layer. They were based on a diamond crystal structure. The analysis of
length/diameter ratio of the bars was performed. It was aimed at the achievement
of the required mechanical properties maximizing the ballistic resistance and
minimizing panel surface density. The panel structure is considered as a
composition of Al2O3 ceramic tile backed by a plate in a truss form made of
aluminum alloy or steel. These results were compared with classic multilayer
solutions based on solid aluminum alloy backing plates. The study was carried
out for a normal impact of the AP (armor-piercing) 7.62x51 projectile type with
tungsten carbide (WC) core. A method of computer simulation was applied to
study the problem: the Finite Element Method (FEM) implemented in the LS-DYNA code. The full 3D models of the projectile and target were
developed with strain rate and temperature-dependent material constitutive
relations. The Johnson-Cook constitutive model with Gruneisen Equation of
State (EOS) was applied to describe the behavior of metal parts: aluminum alloy
and projectiles core. However, the Johnson-Holmquist model was used to
describe the ceramic material. The Boundary Conditions (BC) were defined by
supporting the panel at its back edges. The obtained results show the alternative
solution to the classic solid plates supporting the ceramic layers in ballistic
panels. It was identified that the main deformation mechanism in truss-type
backing components is buckling as the L/D ratio increases. A virtual prototyping
technique could be applied to manufacture the developed truss structure.
Keywords: computational mechanics, finite element method, ballistic protection,
multilayer armour, ceramic armour systems, truss structure.
doi:10.2495/CMEM110561
1 Introduction
Modern ballistic protection systems, especially for light-weight armoured
vehicles, are based on the multilayer armour concept, fig. 1 [11]. The main
task of resisting the projectile is typically given to a ceramic layer.
However, ceramic material is very brittle, so its strength under tension or
bending loads is very low. Therefore, the next layer behind should compensate
for this disadvantage of the ceramics; this is known as the backing effect.
Solid materials like aluminum or polymer composite are used as the backing
plate in classic ballistic panel solutions. The surface density of the panel
is a crucial parameter at a given effectiveness, so its minimization is
especially important. There is no alternative to ceramics in the case of
Armour Piercing (AP) projectiles containing hard cores made of steel or
tungsten alloys, but the backing plate can be considered for replacement
with a lighter material. One possibility is a truss structure. However,
selection of the best structure is not a simple task, so we turned to nature
for a hint. The answer came from space: lonsdaleite, in other words hexagonal
diamond. It is an allotrope of carbon with a hexagonal crystal lattice, formed
in nature when graphite-containing meteorites strike the Earth. The great heat
and pressure of the impact transforms the graphite into diamond, but retains
graphite's hexagonal crystal lattice. Lonsdaleite was first identified from
the Canyon Diablo meteorite, where it occurs as microscopic crystals
associated with diamond; it was first discovered in nature in 1967 [4]. It
has a specific gravity of 3.2 to 3.3 and a Mohs hardness of 7-8 [5]. The Mohs
hardness of diamond is 10, and the lower hardness of natural lonsdaleite is
chiefly attributed to impurities and imperfections in the naturally occurring
material. Numerical analysis indicates that a pure sample would be 58% harder
than diamond and would resist indentation pressures of 152 GPa, whereas
diamond would break at 97 GPa [6].
Figure 1:
The authors do not intend to apply such a rare and exotic material as the
backing plate in ballistic panels. Instead, the truss structure in demand
could be based on lonsdaleite's crystal lattice. The main idea is that the very high
hardness of this crystal may partially be a result of its specific crystal
structure. Clearly, a real truss structure cannot reproduce the atomic
interaction forces, where the force is proportional to the square of the
displacement; in contrast, for macroscopic materials Hooke's law yields only
a linear relation between force and displacement. However, it is still
interesting to know the role of the geometric factor in the overall structure
strength. The truss structure formed as a plate is presented in fig. 2 from
different points of view. The characteristic size of the elementary cell is
around 0.45 mm.
Figure 2:
Figure 3:
Figure 4:
The result of the experimental impact test for the system of 10/10
mm thick Al2O3 ceramic tile backed by a disc of PA11 aluminum
alloy: (a) front, (b) back view. The projectile was the 7.62x51 mm
AP with WC core, initial velocity 921 m/s.
The full 3D models of the projectile and target were developed with strain
rate and temperature-dependent material constitutive relations. The
Johnson-Cook constitutive model with the Gruneisen Equation of State (EOS)
was applied to describe the behavior of the metallic parts (aluminum alloy
and the projectile core), table 2, while the Johnson-Holmquist (JH2) model
was used to describe the ceramic material Al2O3, table 1. The truss
components were modeled as a bilinear elastic-plastic material with the
typical elastic parameters for aluminum alloy (AA) or steel, supplied with
yield stress and hardening modulus: 350 MPa and 7 GPa for AA, 1.25 GPa and
20 GPa for steel, respectively. Generally, a failure model based on the
effective plastic strain was applied, although JH2 itself describes the
damage evolution even for completely failed ceramic. In this case, a given
effective plastic strain threshold (150%) was used to limit excessive finite
element deformations, which can lead to numerical errors like negative
volume.
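A one-dimensional sketch of such a bilinear (linear-hardening) stress update is shown below. The elastic modulus E used in the example is an assumed typical value for aluminum alloy, not a parameter from the paper:

```python
def bilinear_stress(eps_total, E, sigma_y, H):
    """Uniaxial stress for monotonic tension with bilinear elastic-plastic
    response: slope E up to the yield stress sigma_y, then hardening
    governed by the plastic modulus H."""
    sigma_trial = E * eps_total              # elastic predictor
    if sigma_trial <= sigma_y:
        return sigma_trial, 0.0              # purely elastic, no plastic strain
    # plastic corrector: solve E*(eps_total - eps_p) = sigma_y + H*eps_p
    eps_p = (sigma_trial - sigma_y) / (E + H)
    return sigma_y + H * eps_p, eps_p

# AA truss material from the text: yield 350 MPa, hardening modulus 7 GPa;
# E = 70 GPa is an assumed elastic modulus, not given in the paper
sigma, eps_p = bilinear_stress(0.01, E=70e9, sigma_y=350e6, H=7e9)
```

The returned stress always lies on either the elastic line or the hardening line, which is exactly the piecewise behaviour the bilinear material card prescribes.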
The Boundary Conditions (BC) were defined by supporting the panel at its
back edges over a distance of 4.5 mm. This was realized numerically by
frictional contact with a rigid body; a friction coefficient of 0.5 was
assumed. The initial conditions were limited to the given initial projectile
velocity, 930 m/s.
The spatial discretization of the problem was matched to the available
computing resources. The projectile (WC core) mesh was built of tetrahedral
elements with one integration point, sized from 0.1 mm at the sharp head to
0.5 mm elsewhere. A similar mesh topology was selected for the hexagonal
ceramic tile, but its density gradually increases towards the location of
the impact point, with characteristic element lengths from 1 to 0.5 mm. The
solid components of the backing plate were divided into constant stress
brick finite elements with a typical size of 0.5 mm. Finally, the truss
structure was represented with 1D beam elements; the total number of beam
elements exceeded 500k, with single beam lengths of 0.1-0.15 mm. In total,
the mesh of the studied problem comes to about one million elements.
Table 1: Constitutive constants. Johnson-Cook model with Gruneisen EOS for
93%WC-6%Co [9] and AA2024-T3 [10] (parameters A, B, n, C, m; sound speed
c = 5210 and 5328 m/s; S1 = 1.14 and 1.338; S2, S3; D1 (JC) = 0.03 and 0.5;
spall pressure PC = 2.7 and 1.67 GPa, respectively), and JH2 constants for
the Al2O3 ceramic (A, B, C, m, n, T, HEL, D1, D2, FS; EOS coefficients
k1, k2, k3).
Figure 5:
Figure 6:
Table 3: Results for the studied backing types.

Backing type         L/D   Panel surface density [kg/m2]   Minimum PKE, t = 50 µs [J]   Residual projectile length [mm]
none                                34.2                             1721                            16.5
rigid body                          34.2                              637                            14.2
AA                                  61.2                              390                            16.0
AA truss structure                  44.7                             1159                            16.3
                                    41.5                             1360                            16.5
                                    40.2                             1443                            16.5
                                    54.2                              604                            16.5
Figure 7:
The time histories of the projectile kinetic energy (PKE) for the
studied cases.
5 Conclusions
The obtained results show that a satisfactory level of ballistic protection
may be accomplished by the application of truss-type backing plates while
preserving a very low surface density of the panel. The hexagonal crystal lattice of
Acknowledgement
This paper is supported by Grant No. O R00 0056 07, financed in the years
2009-2011 by the Ministry of Science and Higher Education, Poland.
References
[1] Johnson, G.R., Cook, W.H., A Constitutive Model and Data for Metals
Subjected to Large Strains, High Strain Rates and High Temperatures.
Proc. of the 7th International Symposium on Ballistics: Hague,
Netherlands, 1983.
[2] Kury, W.J., Breithaupt, D., Tarver, M.C., Detonation Waves in
trinitrotoluene. Shock Waves, 9(2), pp. 227-237, 1999.
[3] Hallquist, J.O. (compiled), LS-Dyna Theory Manual, Livermore Software
Technology Corporation (LSTC), 2006.
[4] Frondel, C., Marvin, U.B., Lonsdaleite, a new hexagonal polymorph of
diamond. Nature, 214, pp. 587-589, 1967.
[5] The mineral and locality database, www.mindat.org/min-2431.html
[6] Pan, Z., Sun, H., Zhang, Y., Chen, Ch., Harder than Diamond: Superior
Indentation Strength of Wurtzite BN and Lonsdaleite. Physical Review
Letters, 102(102), 2009.
[7] Tasdemirci, A., Hall, I.W., Numerical and experimental studies of damage
generation in multi-layer composite materials at high strain rates.
International Journal of Impact Engineering, 34, pp. 189-204, 2007.
[8] LS-DYNA Keyword Users Manual, version 971, Livermore Software
Technology Corporation (LSTC), 2007.
[9] Holmquist, T.J., Johnson G.R. & Gooch, W.A., Modeling the 14.5 mm
BS41 projectile for ballistic impact computations, Computational Ballistics
II, Transaction: Modelling and Simulation volume 40, pp. 61-75, 2005.
[10] Panov, V., Modelling of behaviour of metals at high strain rates, Cranfield
University, PhD Thesis, 2005.
[11] Morka, A., Zduniak, B., Niezgoda, T., Numerical Study of the Armour
Structure for the V+ Ballistic Protection Level according to STANAG 4569,
transl. from Polish, Technical Report PBN 03-433/2010/WAT, AMZ-Kutno,
Poland, 2010.
Abstract
Energy absorbing composite elements, which can be used for the protection of
people, vehicles and structures, have been widely investigated in recent
years. Different kinds of materials (metals, composites, foams) and
structures have been studied, and many articles present investigation
results for glass, carbon and aramid composites. Sleeve and conical
structures have been described; in these elements the impact energy is
absorbed during a progressive fracture process. Glass composites are very
popular due to their low cost.
This paper deals with numerical and experimental research on the energy
absorption of a conical element made from a glass composite, compared with a
conical composite element filled with polyurethane foam. A single element
and a small composite panel are investigated and compared.
Keywords: blast wave, energy absorbing element and panel.
1 Introduction
The problem of energy dissipation by composite absorbing elements is
considered from the aspect of local stability loss or progressive
destruction [1, 2]. The work done in destroying an energy-absorbing element
significantly limits the shock load effects on the structure, for example in
the impact of a landing platform or an airship with the ground, or under a
pressure wave created by an explosion [3].
The greatest absorbed energy per unit mass is offered by elements made of
composite [4].
doi:10.2495/CMEM110571
Figure 1:
Their behaviour was described by the generalized Hooke's law [6], which can
be used for the description of composite material properties. Physical (of the
Figure 2: Numerical model of the composite sleeve.
Figure 3: Numerical model of the composite sleeve with a polymer filler.
Figure 4:
Figure 5:
Figure 6:
Figure 7:
Figure 8:
Figure 9:
The real object behaves similarly to the numerical models. The destruction
force graph is presented in fig. 10. In the first stage, the progressive destruction
Figure 10:
Figure 11:
Figure 12:
Structure               H [mm]   Dw2 [mm]   Pmax [kN]   Pr [kN]   Pr/Pmax   EA [J]   WEA [kJ/kg]   M [g]
Sleeve                   39.3       50         28.1       23.1      0.822     1155       44.3       26.1
Sleeve with a filling    39.3       50         36.3       25        0.689     1250       37.7       33.2
Figure 13:
The absorbed energy amounted to 2972 J, and the relative absorbed energy
to 45 kJ/kg.
Figure 14:
Figure 15:
was destroyed with an average force of the order of 90 kN. The maximum force
measured in this case amounted to 105.7 kN. In comparison with the earlier
examinations of a single object, the upsetting force value was 1.13 times
greater.
Figure 16:
The absorbed energy for this case amounted to 3600 J, and the relative
absorbed energy to 42.6 kJ/kg. A summary of the results for the panels is
presented in Table 2.
Table 2:

Structure                H [mm]   Thickness [mm]   Pmax [kN]   Pr [kN]   Pr/Pmax   EA [J]   WEA [kJ/kg]   M [g]
3 sleeves                 39.3         40             86.7       74.3      0.857     2972       45.0       66.1
3 sleeves with filling    39.3         40            105.7       90        0.851     3600       42.6       84.5
Figure 17: Pr [kN] as a function of Dw2 [mm] for each structure.
Figure 18:
Figure 19:
Figure 20:
Figure 21:
Figure 22:
8 Conclusions
This work presents the results of numerical examinations, with experimental
verification, of two energy-absorbing objects. Quasi-static examinations
were carried out on an INSTRON testing machine, and dynamic tests on a
testing ground with the use of explosive materials.
The use of a foam filling increased the absorbed energy at a relatively low
increase of the protective panel mass. This is quite
References
[1] Niezgoda T., Barnat W., Numeryczna Analiza Wpływu Kształtu
Podstawowych Struktur Kompozytowych na Energię Zniszczenia, III
Sympozjum Mechaniki Zniszczenia Materiałów i Konstrukcji, Augustów,
1-4 czerwca 2005.
[2] Nagel G., Thambiratnam D., Use of thin-walled frusta energy
absorbers in protection of structures under impact loading. Design and
Analysis of Protective Structures against Impact/Impulsive/Shock Loads,
Queensland, 2003.
[3] Niezgoda T., Barnat W., Analiza pasa bariery drogowej wzmocnionej
elementami kompozytowymi w zastosowaniu do poprawy energochłonności
elementów infrastruktury, Górnictwo Odkrywkowe, 5-6/2006.
[4] Barnat W., Niezgoda T., Badania energochłonności elementów podatnych
w aspekcie zastosowanych materiałów, Journal of Kones Powertrain and
Transport, vol. 14, no. 1/2007.
[5] Niezgoda T., Barnat W., Numeryczna Analiza Wpływu Kształtu
Podstawowych Struktur Kompozytowych na Energię Zniszczenia, III
Sympozjum Mechaniki Zniszczenia Materiałów i Konstrukcji, Augustów,
1-4 czerwca 2005.
[6] MSC Dytran, Example Problem Manual, Version 3.0, MSC, 1996.
[7] Niezgoda T., Ochelski S., Barnat W., Doświadczalne badanie wpływu
rodzaju wypełnienia podstawowych struktur kompozytowych na energię
zniszczenia, Acta Mechanica et Automatica, 1/2007.
Section 10
Railway transport
Abstract
This paper presents a stochastic traffic generator in which (i) the driving
manner is assumed to be stochastic, (ii) random unexpected stops are
considered and (iii) the stop time in the stations is also supposed to be
stochastic. This simulation tool provides a large number of different driving
scenarios to be used for a Monte Carlo load flow analysis. The traffic
simulation algorithms are described in detail. Reference to a section of the
Madrid-Barcelona high-speed line illustrates the potential of such a tool.
Keywords: power supply, train driving, stochastic simulation.
1 Introduction
One of the inputs required for designing the electrification of a railway
system is the power to be supplied to the rolling stock at each time step,
which is itself closely related to the considered operation. Accordingly,
railways are quite different from other power systems: the power
consumptions are moving loads and change very quickly. The power consumption
depends on the manner in which the trains are driven (the speed along the
trip, how fast the train accelerates or brakes, etc.), the characteristics
of the train (mass, running resistance, motors, etc.) and the path (slopes,
speed limits, etc.). Due to this complexity, computation of the instant
power consumption of the trains is normally performed by software tools.
When traction power consumptions have to be estimated, the characteristics
of the train and the path are typically well defined. However, rail lines are
normally not equipped with automatic driving systems and each driver drives his
doi:10.2495/CMEM110581
Figure 1: Feeding sectors (1-L, 1-R, 2-L, 2-R, 3-L, 3-R) and traction substations.
Figure 2: Mono-voltage feeding scheme (Vtransp, Vfeed POS).
These substations are connected between two of the three phases of the
high-voltage network. Each of these sectors can use either a mono-voltage
system (1x25 kV) or a bi-voltage system (2x25 kV). In the mono-voltage
system, the feeding conductors are set to the specified voltage level (see
Figure 2).
In bi-voltage systems, a higher voltage is set between the feeding
conductors [3, 4]. This voltage is reduced by autotransformers distributed
along the catenary (see Figure 3). In these systems, the term cell normally
refers to the portion of catenary located between two consecutive
autotransformers; typical cell lengths are 10-15 km.
Figure 3: Bi-voltage feeding scheme (Vtransp, Vfeed POS, Vfeed NEG).
It has been proven that bi-voltage systems can be represented as if they were
mono-voltage [5, 6]. For that reason, only mono-voltage systems are
considered in the discussions in this paper, without loss of generality. In
fact, as this paper is focused on driving simulation, the power supply system
considered for the discussions is not a critical aspect.
2.2 Electro-technical dimensioning criteria and driving mode
Railway power supply systems are dimensioned to be able to supply the power
required by the trains, usually assuming a long-term estimation of the
traffic needs. From an electro-technical point of view, the power supply
system is considered able to supply the required power if the following
restrictions are fulfilled:
- The voltage in the catenary has to be within the range specified by the
UIC-600 standard, which sets the upper and lower limits depending on their
duration.
- The currents circulating along the catenary and through transformers (and
autotransformers) have to be lower than the rated values, in order to ensure
that no overheating will occur. These limits are often expressed as power
limits instead of current limits.
- The power supplied by the three-phase network through the traction
substations is also frequently limited when the network is too weak, in
order to ensure a proper operation of the network.
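A minimal checker for these three restrictions over one simulated scenario might look as follows. The limit values passed in are illustrative placeholders, not figures from UIC-600 or from the paper:

```python
def supply_ok(voltages, currents, substation_powers, v_min, v_max, i_rated, s_max):
    """Return True if a scenario satisfies the three dimensioning criteria:
    catenary voltage within [v_min, v_max] at every sample, RMS catenary
    current below the rated value, and substation power below its limit."""
    # quadratic mean (RMS) of the instant current samples
    rms_current = (sum(i * i for i in currents) / len(currents)) ** 0.5
    return (all(v_min <= v <= v_max for v in voltages)
            and rms_current <= i_rated
            and all(s <= s_max for s in substation_powers))

# hypothetical 25 kV scenario: voltages [V], currents [A], substation powers [W]
ok = supply_ok([25e3, 26e3], [100.0, 200.0], [5e6, 8e6],
               v_min=19e3, v_max=29e3, i_rated=300.0, s_max=10e6)
```

In a Monte Carlo setting this predicate would be evaluated once per stochastic scenario, and the design accepted only if it holds for a sufficiently large fraction of them.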
Figure 4 shows the train power consumption of two different driving modes as
a function of the train position. In the figure, driving modes 1 and 2 could
correspond, respectively, to MTD and to a less aggressive driving (lower
accelerations and somewhat lower speeds). Electrical sections have been
represented, and each of them is fed by one power transformer (TS1, TS2a,
TS2b and TS3), as they are electrically isolated.
Figure 4: Power consumption [kW] as a function of train position [km] for
driving modes 1 and 2, with sections fed by TS1, TS2a, TS2b and TS3.
The voltage of train i at time step ts is obtained from the no-load voltage
of the transformer V0 minus the voltage drops along the catenary:
V(i, ts) = V0(ts) - DV1(ts) - DV2(i, ts)
(1)
where DV1 is proportional to the currents and DV2 is proportional both to
the currents and to the distances to the substation.
As shown in Eq. (1), the second term is proportional to currents (which are
proportional to power consumptions) but the third one is proportional both to
currents and to distances to the substation. When considering the driving modes
represented in Figure 4, it cannot be established that MTD (driving mode 1)
leads to lower voltages than driving mode 2, even if peak power consumptions
are higher in MTD, because distances from power peaks to substations are just
different in both cases.
As the topology of each electrical section is radial, the maximum instant
current circulating in the catenary at time step ts is the sum of the train
currents, expressed in Eq. (2) (even if exceptions to that rule may occur
when regenerated power is high).
I_C,max(ts) = Sum_i S*(i, ts) / V*(i, ts)
(2)
In order to evaluate whether currents are under the rated values, quadratic
mean (RMS) currents are normally calculated from the instant values.
Also for currents, MTD does not necessarily correspond to the highest
values, due to two different effects. First, if the time between two
consecutive trains is kept constant, the distance between them is always
higher in MTD than in other driving modes; in other words, the same
substation may eventually have to supply power to more trains. Second, as
the spatial distributions of power consumption differ from one driving mode
to another, the substations that supply the power (and thus the current) may
be different.
Finally, the instant power supplied by each traction substation at each time
step is computed analogously, Eq. (3).
(3)
3 Simulation procedure
3.1 Overview
In order to design the power supply system of a railway network, the traffic
(including train movement) and the power supply are often supposed to be
uncoupled. The design is then done iteratively in two different steps, as
shown in Figure 5.
Firstly, the traffic is simulated for each discrete time step and a traffic
scenario is obtained, usually assuming MTD. A traffic scenario is composed
of the list of locations and power consumptions of each train at each time
step. Then an electrical simulator, typically solving a power flow for each
time step, is used to determine all the voltages, currents and power flows
and to check whether the power supply works properly. The design is modified
and the electrical simulation repeated until it does.
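The stochastic generation step can be sketched as a loop producing many scenarios. Every number below (driving factor range, stop time mean and deviation, power scale, step length) is an invented placeholder, not a value from the paper:

```python
import random

def generate_scenario(n_steps, rng):
    """One stochastic traffic scenario for a single train: a list of
    (position_km, power_kW) samples plus a random station stop time."""
    driving = rng.uniform(0.8, 1.0)              # stochastic driving manner
    stop_time = max(0.0, rng.gauss(60.0, 15.0))  # stochastic stop time [s]
    pos, samples = 0.0, []
    for _ in range(n_steps):
        pos += 0.05 * driving                    # advance along the line [km]
        samples.append((pos, 4000.0 * driving * rng.random()))
    return samples, stop_time

rng = random.Random(0)
scenarios = [generate_scenario(100, rng) for _ in range(50)]
# each scenario is then fed to the electrical (load flow) simulator
```

The seeded generator makes the Monte Carlo run reproducible, which helps when comparing two candidate power supply designs against the same set of scenarios.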
Figure 5: Two-step design process: the traffic scenario (one list of train
locations and power consumptions per time step) is evaluated by the
electrical simulator (software).
Once it has been determined that MTD is not necessarily the most demanding
driving mode (neither for voltages, nor for currents, nor for power flows),
it is proposed to modify the design process so that the power supply is
evaluated with a set of stochastic scenarios (see Figure 6).
Figure 6: Proposed design process: the power supply system definition (substations, neutral zones, catenary sections, connection to the grid), changed by the designer, is evaluated by the electrical simulator against a stochastic set of N traffic scenarios, each composed of a list of time steps.
Ft - Fw - Fc - Fr = k M (dv/dt)    (4)

where Ft is the traction force, Fw is the resistant force due to the slope of the railway line, Fc is the resistant force due to the curvature of the railway line, Fr is the running resistance force, M is the mass of the train, k is a factor that is commonly used to account for the rotating inertia, and v is the speed of the train.
The resistance force due to the slope of the line is proportional to the mass of the train and the slope of the line. The resistance force due to the curvature is also proportional to the mass of the train and inversely proportional to the curvature radius. The running resistance force is normally approximated by a quadratic function of the speed. Finally, the traction force Ft is decided for each time step in order to fulfil the speed limits (rolling stock limits and infrastructure limits), maximum acceleration/deceleration, or other constraints defining the driving mode.
Once the movement of the train has been solved, the electrical consumed power Pe and regenerated power Pr are determined using Eqs. (5) and (6) respectively:

Pe = Ft v / ηe,m + Panc    (5)
Pr = Ft v ηm,e - Panc    (6)

where ηe,m and ηm,e are the electrical-to-mechanical and mechanical-to-electrical conversion efficiencies, and Panc is the power consumed by the ancillary services.
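A minimal numerical sketch of Eqs. (4) and (5) using forward-Euler integration; the resistance coefficients, inertia factor and efficiency are assumed values, while the mass, maximum traction force and ancillary power match the study case below:

```python
M = 410e3        # train mass [kg] (study case: 410 t)
k = 1.06         # rotating-inertia factor (assumed)
g = 9.81         # gravity [m/s^2]
P_anc = 850e3    # ancillary services power [W] (study case)
eta_em = 0.85    # electrical-to-mechanical efficiency (assumed)

def running_resistance(v):
    # Quadratic (Davis-type) running resistance, v in m/s; coefficients assumed.
    return 4000.0 + 120.0 * v + 8.0 * v * v   # [N]

def step(v, Ft, slope=0.0, radius=float("inf"), dt=1.0):
    """One Euler step of Eq. (4): k*M*dv/dt = Ft - Fw - Fc - Fr."""
    Fw = M * g * slope                                 # proportional to mass and slope
    Fc = 600.0 * M / radius if radius != float("inf") else 0.0  # ~ M/R, constant assumed
    a = (Ft - Fw - Fc - running_resistance(v)) / (k * M)
    return max(v + a * dt, 0.0)

def electrical_power(Ft, v):
    """Eq. (5): consumed electrical power Pe [W]."""
    return Ft * v / eta_em + P_anc

v = 0.0
for _ in range(60):            # one minute at maximum traction force (280 kN)
    v = step(v, Ft=280e3)
print(v, electrical_power(280e3, v))
```

In the real generator, Ft would be chosen each step from the driving mode's speed and acceleration constraints instead of being held at its maximum.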
Figure 7: Maximum speed [km/h] vs. position [km]: deterministic speed limit, speed limit following a probability distribution (one particular trip), probability density function of the speed limit, and a random unexpected speed reduction.
While in peak hours some stations can be overcrowded and stop times longer, in off-peak hours stop times can be reduced. To model this, the stop time has been considered to follow a normal probability distribution (see Figure 8), whose mean is the rated stop time, truncated so that times are always positive.
In all these aspects, additional work remains to be done to analyze other probability distributions and to compare them with reality.
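The truncated stop-time distribution can be sampled by simple redraw, a sketch under the stated normal assumption (the rated time and deviation here match the study-case values used later):

```python
import random

def sample_stop_time(rng, rated_s=180.0, std_s=30.0):
    """Normal stop time truncated at zero: redraw until the sample is positive."""
    while True:
        t = rng.gauss(rated_s, std_s)
        if t > 0.0:
            return t

rng = random.Random(42)
samples = [sample_stop_time(rng) for _ in range(1000)]
print(min(samples), sum(samples) / len(samples))
```

Redraw truncation slightly shifts the mean above the rated value; with a 30 s deviation on a 180 s mean the effect is negligible, since negative draws are extremely rare.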
Figure 8: Stop times follow a probability distribution (time [s] vs. position [km]).
4 Study case
In order to show the potential of the presented stochastic traffic generator, a 436 km long section (from Madrid to Lérida) of the Spanish high-speed line Madrid-Barcelona has been used (its cross-section is shown in Figure 9).
Figure 9: Cross-section of the line (relative elevation [m] vs. position [m]).
Generic high-speed trains (410 t, 168 m long, 850 kW ancillary services power consumption, no regeneration, 300 km/h maximum speed, 280 kN maximum traction force) have been considered, with 3 min headways.
Figure 10 shows, for every position, the power that should be supplied by a substation feeding a standard 35 km long section. It has been calculated as the sum of the electrical power required by all the trains within a 35 km moving window, assuming MTD. This kind of graph gives a good idea of how power consumption accumulates along the railway and which parts of the line are more demanding.
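The moving-window aggregation behind Figure 10 can be sketched as follows (the train positions and powers are made-up sample data):

```python
def window_power(trains, centre_km, width_km=35.0):
    """Sum the power of all trains within a window centred at centre_km.

    trains: list of (position_km, power_MW) at one time step.
    """
    half = width_km / 2.0
    return sum(p for x, p in trains if abs(x - centre_km) <= half)

# Illustrative snapshot: four trains on the line at one time step.
trains = [(10.0, 8.0), (22.0, 12.0), (30.0, 6.0), (60.0, 10.0)]
print(window_power(trains, 20.0))   # trains at 10, 22 and 30 km fall inside -> 26.0
```

Sweeping the window centre along the line and taking, for each position, the maximum over all time steps yields the curve of Figure 10.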
Figure 10: Power requirements of the line (MTD) vs. position in the line [km].
Figure 11 shows the same kind of graph, but obtained from one of the many scenarios produced by the stochastic traffic generator. Actual speed limits have been taken as the mean values of the normal distributions, with a typical 10% standard deviation. Stop times of 3 minutes have been considered, with a 30 s standard deviation. Finally, 20% steps have been considered for unexpected stops, which occur with a 5% probability.
Figure 11: Power requirements of the line (stochastic traffic generator) vs. position in the line [km].
5 Conclusions
This paper has presented a stochastic traffic generator in which (i) the driving
manner is assumed to be stochastic, (ii) random unexpected stops are considered
and (iii) the stop time in the stations is also supposed to be stochastic. This
simulation tool provides a large number of different driving scenarios to be used
for a Monte Carlo load flow analysis for the design of the power supply system.
This stochastic traffic generator has been used to generate traffic scenarios for a section of the Spanish high-speed line Madrid-Barcelona. The power requirement analysis shows that using this kind of tool could lead to more accurate power supply system designs, as more operating conditions can be analyzed.
Future developments should include refining probability distribution
functions and validating them with measurements. In addition, the integration of
this tool with a power flow tool would ease the penetration of such tools in the
industry.
Abstract
We are developing a new rescheduling system for drivers and crew that synchronizes with the forecasted train plan. The East Japan Railway Company operates five Bullet Train (SHINKANSEN) lines, running about 800 trains per day and dispatching nearly 250 drivers and 350 crew per day. To keep the operation plan and management of the SHINKANSEN in order, we have a system named COSMOS (Computerized Safety, Maintenance and Operation systems of SHINKANSEN). The driver and crew rostering schedules are made with the drivers and crew rescheduling system, one of the sub-systems of the COSMOS Transportation Planning System. Each driver and crew member normally works their trains according to a rostering schedule decided beforehand. However, trains are sometimes unable to operate on schedule, for example in bad weather or when rolling stock trouble occurs. In such cases, the rostering schedule must be redesigned, and changing only one crew member affects the rostering schedules of two or more crews. This rescheduling is very difficult because train delays change minute by minute. To change the crew rostering schedules adequately, we have developed a new system: when a train delay occurs, it displays the contradictions in the rescheduled plan of drivers and crew, and dispatchers then reschedule the plan to resolve them.
By utilizing this system we reduce train delays and will improve our services for customers.
Keywords: train traffic rescheduling, driver rostering rescheduling, crew rostering rescheduling, forecast, real-time rescheduling.
doi:10.2495/CMEM110591
1 Introduction
East Japan Railway Company (JR-East) has five Bullet Train (SHINKANSEN) lines: the Tohoku line, Joetsu line, Hokuriku line, Yamagata line, and Akita line.
Our SHINKANSEN network is shown in Figure 1, and the features of each line are shown in Table 1. These lines all start from Tokyo and directly connect to five areas in east Japan.
Part of the Yamagata line between Fukushima and Shinjo and part of the Akita line between Morioka and Akita are low-speed sections, where SHINKANSEN trains and local trains share the tracks. Trains of the Yamagata and Akita lines are combined with trains of the Tohoku line between Tokyo and Fukushima or between Tokyo and Morioka.
To respond to a variety of passenger needs in SHINKANSEN transportation, JR-East has a variety of car types. The types of vehicles as of April 2010 are shown in Table 2. The transportation scale (the number of trains per day, car rostering, driver rostering and crew rostering) is shown in Table 3.
Figure 1: JR-East SHINKANSEN network (Akita, Yamagata, Joetsu, Tohoku and Hokuriku lines).
Table 1: Features of each line.

Line name      Distance of line             Name of trains (number a day)
Tohoku Line    631.9 km (Tokyo-Hachinohe)   Yamabiko (89), Hayate (37), Nasuno (38)
Joetsu Line    333.9 km (Tokyo-Niigata)     Toki (56), Tanigawa (39)
Hokuriku Line  222.4 km (Tokyo-Nagano)      Asama (56)
Yamagata Line  421.4 km (Tokyo-Shinjo)      Tsubasa (33)
Akita Line     662.6 km (Tokyo-Akita)       Komachi (32)

Table 2: Types of vehicles in April 2010: type (200, E1, E2, E3, E4, E5), sign (Unsigned, N, J, R, LR, P, Undecided), line chiefly operated (Tohoku, Joetsu, Hokuriku, Yamagata or Akita), car formation (6 to 12 cars, flat or duplex) and partner of combining (none, or another type such as R, J, P or LR).

Table 3: Transportation scale: about 1,000 trains, about 200 car rosterings, about 250 driver rosterings and about 350 crew rosterings.
2 Three large transportation features in JR-East
Figure 2: Train rostering example (trains P001, P002, L051, L052).
3 COSMOS overview
3.1 General overview
In the operation of the JR-East SHINKANSEN, we need to reschedule the train operation plan as soon as possible. Therefore, we have a family of systems called COSMOS to support our rescheduling decisions. COSMOS consists of eight systems: the Transportation Plan System, Operation Control System, Rolling Stock Control System, Facility Control System, Maintenance Work Control System,
Figure 3:
Figure 4: (times: 1 min 38 sec / 54 sec; 2 min 36 sec / 1 min 52 sec; 4 min 37 sec / 2 min 27 sec; 5 min 52 sec / 3 min 40 sec)
4 Case study
4.1 The train traffic rescheduling system
This section introduces the train traffic rescheduling system, which is one of the sub-systems of the COSMOS Operation Control System.
Figure 5: (trains P002 and L051)
Figure 6: (delay of trains 102M and 102B)
Figure 7: (trains 102M, 104M, 102B, 104B, 7104B)
Figure 8:
Figure 9: (contradiction mark)
5 Conclusion
This paper has described the features of the JR-East SHINKANSEN and the supporting systems for train rescheduling and driver rescheduling.
Before the train traffic rescheduling system and the drivers and crew rescheduling system were introduced, planners and dispatchers replanned on paper diagrams using their knowledge, experience and skill. Since these systems came into use, we can quickly reschedule train traffic and the driver and crew rosterings. In addition, these systems led to a drastic revision of the business rules for rescheduling drivers and crew: responsibility moved from a dispatcher at the centre to local staff. These systems are contributing to improving the stability of transportation. JR-East considers these supporting systems important and will develop further upgrades with JR East Japan Information Systems Company.
We are now developing a proposal system for the SHINKANSEN using constraint programming [2] as one of the upgrades of the train rescheduling system for practical use.
After this proposal system is put to practical use, we expect an even greater effect from its cooperation with the rescheduling system for drivers and crew.
References
[1] Shimizu, H., Tanabe, H., Honda, S. & Yasura, K., The new Shinkansen rescheduling system for drivers and crew. Proceedings of Computer System Design and Operation in the Railway and Other Transit Systems (Computers in Railways X), pp. 227-234, 2006.
[2] Shimizu, H., Tanabe, H. & Yamamoto, M., The proposal system for Shinkansen using Constraint Programming. Proceedings of the World Congress on Railway Research 2008, O.1.3.2.3, 2008.
[3] East Japan Railway Company 2009 Annual Report, http://www.jreast.co.jp/e/investor/ar/2009/index.html
Abstract
CTI is combining its products for situation awareness and disruption recovery
in airlines with its railway timetabling software to develop a tool for situation
awareness and disruption recovery in a railway environment.
Analysis has shown that many concepts are similar, but that the extra
complications caused by interactions between train paths require extra
visualisation options such as train diagrams and spatial network displays, along
with extra constraint checking to identify and prevent conflicts.
It has been found that these extra requirements can fit within the same overall
framework as is used for airlines. The extra visualisation options for rail were then
seen to in turn provide value for airlines, as the format used for train diagrams
is useful to visualise crew connections, and spatial network displays are useful to
visualise air corridors. The extra constraint checking required for rail can also be
useful for airlines to model flow restrictions placed on congested runways and air
corridors.
Keywords: disruption management, situational awareness, recovery optimization,
airlines, railways, fleet, crew, passengers.
1 Introduction
The disruption management process in airlines has been the subject of a large
amount of academic study summarised by Kohl et al. [1] and Clausen et al. [2]
and is expedited by commercial products offered by several vendors [3, 4] that
provide integrated fleet disruption management taking into account scheduled
maintenance, crew, passengers and cargo. The situation for other modes of
transport such as heavy and light rail and buses is much less mature, yet there
are many similarities in the problems such that much can be learned and borrowed
doi:10.2495/CMEM110601
2 Terminology
The disruption management procedure for airlines has been described using a
range of terminology, and thus it is necessary to define the terms used in this paper.
Here we use the following terms for the three main components:
Situational Awareness, where controllers of an airline's operations are
provided with accurate and timely information about all aspects affecting
the operation of a schedule.
Recovery Scenario Evaluation, where a controller can evaluate the effect of
actions that might be taken to recover from disruptions without committing
to them.
Publishing of Schedule Updates, which publishes changes in the schedule to
recover from the disruption.
We will also define the term Recovery Optimisation to describe a tool for
use during Recovery Scenario Evaluation that automatically produces on request
one or more suggestions for recovery scenarios that will minimise the amount
of disruption. Much of the academic literature for both airlines and railways
concentrates on such tools that provide a single optimal solution, however our
experience with airlines has shown that it is not possible to provide a single
solution that will be acceptable in practice for all disruptions. There are two
reasons for this:
It is difficult to state a single objective function to be minimised that
correctly encapsulates the tradeoffs that are required in all disruption
scenarios.
There is often extra information that is not available electronically when the
optimisation is run. An example from airlines is that the time taken for a
mechanical fault to be rectified can only be estimated, so that the ability to
produce a range of provisional solutions for comparison that are based on
different estimates is a very useful part of the decision making process.
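As a toy illustration of why a range of provisional solutions helps, consider comparing recovery actions across several repair-time estimates (the cost model and all figures are entirely hypothetical):

```python
def recovery_cost(repair_min, delay_cost_per_min=100.0, cancel_cost=5000.0):
    """Compare 'hold the service until repaired' vs 'cancel it' for one estimate."""
    hold = repair_min * delay_cost_per_min
    return ("hold", hold) if hold < cancel_cost else ("cancel", cancel_cost)

# Provisional answers for optimistic, central and pessimistic repair estimates.
for estimate_min in (20, 45, 70):
    print(estimate_min, recovery_cost(estimate_min))
```

The controller sees that the preferred action flips once the estimated repair time crosses the break-even point, so the set of provisional solutions, not any single one, informs the decision.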
Each of these components will be discussed separately in the following sections.
A similar situation occurs for rail where above-rail operators (also known as
train operators) are concerned with disruptions to the operations of their fleet,
while below-rail operators (also known as infrastructure managers) are concerned
with managing the provision of paths in the event of disruptions to infrastructure
availability or the schedules of the above-rail operators. The distinction between
these two types of disruption management in rail is less clear where an operator
is both the above- and below-rail operator, such as commonly occurs in dedicated
metro systems.
The two levels of disruption management are interlinked, but there are well
defined systems that allow information to be exchanged between the systems
to enable good decisions to be made in the two separate systems. In the
airline context, the Air Traffic Control system allocates movement slots to limit
movements, and an airline can then make adjustments to its schedule to make best
use of its available movement slots. A similar situation occurs with paths requested
by an above-rail operator and allocated by a below-rail operator.
In this paper we are suggesting that it is best to keep these two levels of
disruption management separate, even for cases where an operator is responsible
for both levels. There are several reasons for this:
The lower level of detail that needs to be considered at the level of operator
disruption management enables options to be seen more clearly without
having to concentrate on infrastructure details.
The timeframes involved in operator disruption management are longer than
those involved in network disruption management. The extra time available
to make these decisions enables better strategic decisions to be made at the
higher level.
Operator disruption management should be concerned with more detailed
checks for non-network resources for example crews, passenger
overcrowding and scheduled maintenance for particular vehicles are not
considered in the context of network disruption management.
The systems that control the below-rail infrastructure are complex, and since development and maintenance of these systems is costly and time-consuming, it is sensible to include only essential functionality at this level.
Separate software used to assist with disruption management currently exists
for each level, so it is much easier to take advantage of existing functionality
if this separation is maintained.
When seen from this point of view, it can be seen that much commercial software currently exists for both levels of disruption management for air transport [3-6], but most commercial software for railways concentrates on network disruption management or handles only a limited number of resources in the context of operator disruption management [7, 8]. It will further be seen that there are many similarities between operator disruption management for airlines and other modes, and this paper will concentrate on showing the possibilities for the use of this technology by above-rail operators.
4 Situational awareness
Situational awareness is the most critical factor in the process of disruption
management. Tools that make it easy for a controller to quickly comprehend the
essential features of a disruption increase both the speed and quality of the recovery
process.
A vital component of situational awareness software is the ability to show events
in real-time. This requires an event-driven architecture that processes messages as
they arrive and immediately updates all visualisations. Such architectures have
been in use in an airline context at CTI for 20 years, and we are now extending this
technology for use with rail situational awareness.
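A minimal sketch of such an event-driven core, with all names illustrative rather than CTI's actual architecture: messages are queued as they arrive, and every registered visualisation is refreshed for each one.

```python
import queue

class EventBus:
    """Toy event-driven core: arriving messages update all registered views."""

    def __init__(self):
        self.events = queue.Queue()
        self.views = []            # callables invoked on every event

    def subscribe(self, view):
        self.views.append(view)

    def publish(self, event):
        self.events.put(event)

    def drain(self):
        """Process everything queued so far, updating each view immediately."""
        while not self.events.empty():
            event = self.events.get()
            for view in self.views:
                view(event)

bus = EventBus()
log = []
bus.subscribe(lambda e: log.append(("gantt", e)))          # e.g. Gantt chart view
bus.subscribe(lambda e: log.append(("train-diagram", e)))  # e.g. train diagram view
bus.publish({"train": "102M", "delay_min": 7})
bus.drain()
print(log)
```

In a real deployment the drain loop runs continuously on incoming telemetry, so every open visualisation reflects the same state at all times.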
To understand the full picture of the current situation it is necessary to have the
following information:
The current position and availability of all vehicles.
Current information about any network restrictions. For airlines, this
information includes arrival and departure slot availability at airports. For
railways this can include information about available paths including time
restrictions caused by factors such as timed track possessions, and other
factors such as temporary speed restrictions.
Forecast information to predict whether any current deviations from the
nominal schedule are likely to naturally correct from the built-in recovery
times in the schedule, or whether recovery actions need to be taken to
prevent problem escalation.
Information about planned crew duties, rest breaks, connections and the location of standby crew. It is important to also be able to check maximum-duty and minimum-rest-break rules to determine whether delays will make the currently planned duties illegal. Crew information is particularly important in long-distance networks.
Information about passenger numbers and itineraries (and/or cargo for
freight operations), or estimates if these are not accurately known. This is
vital for determination of the impact of a disruption, and particularly so in a
railway context since overcrowding has a significant effect on dwell times,
and thus can affect running times significantly.
Information about scheduled maintenance times and locations for vehicles.
This is important if repositioning a vehicle for planned maintenance would
be very expensive, as is normally the case for airlines and long-distance rail
lines.
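The duty-legality check mentioned above can be sketched as follows; the limits, and the simplifying assumption that a delay shortens every remaining break in full, are illustrative and not taken from any real rulebook:

```python
MAX_DUTY_MIN = 9 * 60   # maximum duty length [min] (illustrative limit)
MIN_REST_MIN = 30       # minimum rest break [min] (illustrative limit)

def duty_violations(duty_start, duty_end, breaks, delay_min):
    """Return rule violations a delay would cause for one planned duty.

    Times are minutes from midnight; breaks is a list of planned break
    lengths in minutes, each assumed to shrink by the full delay.
    """
    problems = []
    if (duty_end + delay_min) - duty_start > MAX_DUTY_MIN:
        problems.append("maximum duty exceeded")
    if breaks and min(b - delay_min for b in breaks) < MIN_REST_MIN:
        problems.append("rest break too short")
    return problems

# A 25-minute delay pushes this duty over both limits.
print(duty_violations(duty_start=0, duty_end=520, breaks=[45], delay_min=25))
```

Running such a check against every affected duty whenever a delay estimate changes is what lets the display flag crew problems in real time.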
A range of visualisation formats have been used for both airline and railway
disruption management, and as different formats have different strengths for
different classes of disruption the most desirable option is to have all formats
available so that the controller can choose the most suitable for any given situation.
It is also interesting to note that most of the formats are also applicable to other
transport types, although the amount of usage of the different formats is likely to
differ between modes.
4.1 Gantt chart
For airline operations, it is typical to show the actual or planned legs for each vehicle in a Gantt chart as the prime visualisation tool. This display is typically annotated to show the origin and destination of each leg as well as the flight number, with colour coding used to show problems such as late departures or arrivals, crew or passenger connection problems, ATC slot problems, insufficient turnaround times at airports, aircraft unavailability, and problems meeting scheduled maintenance locations or times.
For rail operations, this can be used in a similar way to show the trips planned for
each vehicle while highlighting any problems. In order to avoid excessive clutter,
only trip segments between major stations or junctions would be shown at this
level, with the stopping pattern information being available by clicking on each
trip segment.
A Gantt chart has the disadvantage of not highlighting the interactions between
vehicles, which is less of a problem in an airline context but is a significant factor
for rail. Thus for rail it is likely that the prime visualisation tool would instead be
a Time-Distance Chart.
4.2 Time-distance chart
Time-distance charts (Figure 1(b); commonly known for railways as Train Diagrams or sometimes Service Planning Diagrams) are an excellent method of visualising interactions between services, as well as any network limitations such as track possessions.
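The interactions such a chart exposes can also be computed directly: reducing each train to a trajectory x(t), two services interact where their lines cross. A sketch with straight-line (constant-speed) trajectories:

```python
def crossing_time(t0_a, x0_a, v_a, t0_b, x0_b, v_b):
    """Meeting time of two linear trajectories x = x0 + v*(t - t0), or None.

    Units are arbitrary but must be consistent (here: minutes and km).
    """
    if v_a == v_b:
        return None                      # parallel lines never cross
    t = (x0_b - x0_a + v_a * t0_a - v_b * t0_b) / (v_a - v_b)
    return t if t >= max(t0_a, t0_b) else None

# Train A leaves km 0 at t=0 at 2 km/min; train B leaves km 60 at t=0 at -1 km/min.
t = crossing_time(0, 0, 2, 0, 60, -1)
print(t, 0 + 2 * t)                      # meeting time and meeting position
```

On a single-track line such a crossing point must coincide with a passing loop, which is exactly what the train diagram lets a controller verify at a glance.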
One of the critical factors when dealing with metro rail systems in peak periods
is the requirement to ensure that the number of passengers arriving at each
station can be transported using the available services. In many metro systems,
some overcrowding occurs even in the absence of any cancellations. If trains are
cancelled, it is vital to ensure that the capacity of subsequent trains on the same line
is sufficient to recover from the cancellation, as otherwise the resulting increase in
dwell times caused by extreme overcrowding is likely to lead to a decrease in
the line capacity and thus an escalating problem. One way of visualising whether
this is occurring would be to include estimated passenger figures on the platform
occupancy diagram.
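The escalation mechanism can be sketched with a toy dwell-time model (all coefficients illustrative): a cancellation doubles the arrival gap for the next train, and its dwell time grows with the accumulated crowd.

```python
def dwell_s(waiting_passengers, base_s=30.0, per_pax_s=0.1):
    """Dwell time grows linearly with the crowd on the platform (assumed model)."""
    return base_s + per_pax_s * waiting_passengers

def simulate(headway_s, arrivals_per_s, n_trains, cancelled_first=False):
    """Dwell time of successive trains at one station after an optional cancellation."""
    waiting, dwells = 0.0, []
    for i in range(n_trains):
        gap = headway_s * (2 if (cancelled_first and i == 0) else 1)
        waiting += arrivals_per_s * gap
        dwells.append(dwell_s(waiting))
        waiting = 0.0            # optimistic: each train clears the whole platform
    return dwells

print(simulate(120, 1.5, 3))                        # normal operation
print(simulate(120, 1.5, 3, cancelled_first=True))  # after one cancellation
```

Relaxing the optimistic clearing assumption (a capacity-limited train leaves passengers behind) is what produces the escalating dwell times described above.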
4.4 Timetable
A standard means of visualising a railway schedule is the timetable view. Additions
showing the actual and estimated times along with any cancellations or changes to
stopping patterns as shown in Figure 2(b) are another way of visualising the current
situation on the day of operations that is immediately familiar.
4.5 Spatial displays
Network displays such as the track and signal diagrams shown in Figure 3(a) are used extensively in railways by below-rail operators, but are in general too detailed for higher-level planning. Simplified network diagrams may, however, be useful for above-rail operators to visualise alternative routings in complex networks with several alternatives. A similar concept may also be useful for airline operators where bad weather can cause flow restrictions through certain areas.
A geospatial display such as that in Figure 3(b) could be useful where it is
necessary to take into account options that can use other modes of transport
e.g. the use of buses for transport of passengers in the event of a major network
disruption.
Table 1: Operational indicators (counts for the day so far and forecasts to the end of the day, with comparison figures from the previous day), including indicators such as cancelled maintenance and standby crew used.
As it is typically not possible to solve all problems for the rest of the day immediately, it is also important to show the number of remaining problems in a format such as that shown in Table 2.
In the UK, the SIRI and TransXChange standards have been developed for publishing real-time public transport information and are in widespread use, although TransXChange is more suited to buses than to railways.
7 Recovery optimisation
The recovery optimisation problem has been the subject of much study for airlines
and for rail.
The problems studied for airlines consider a much greater range of resources,
with many considering fleet, scheduled maintenance, airport slots and full
passenger itinerary modelling [11], and several also incorporating crew [12].
For railways, most work on recovery optimisation concentrates on the fleet and incorporates quite detailed consideration of the network paths along with some consideration of passenger connections [13], while passenger overcrowding and crew are considered only for simple problems [14, 15].
As crew connection problems can have a large impact on the feasibility of
solutions for complex or long-haul networks, and the passenger service quality
can have a large impact when comparing the quality of the solution and also the
feasibility (due to the effect of overcrowding on dwell times), we consider that
extensions of airline recovery optimisation formulations to allow for the more
complex constraints of railway paths would be extremely beneficial.
8 Conclusion
There are many advantages in considering disruption management in railways at a
high level, as this enables consideration of factors such as passenger overcrowding
and crew connections. When considered this way, there are many similarities that
allow the use of the mature technology used for disruption recovery in airlines to
enable more effective disruption recovery for rail.
References
[1] Kohl, N., Larsen, A., Larsen, J., Ross, A. & Tiourine, S., Airline disruption management: perspectives, experiences and outlook. Journal of Air Transport Management, 13(3), pp. 149-162, 2007.
[2] Clausen, J., Larsen, A. & Larsen, J., Disruption management in the airline industry: concepts, models and methods. Technical University of Denmark (DTU), 2005.
[3] TPAC Operations product. Constraint Technologies International, http://www.constrainttechnologies.com/software/TPAC Operations/.
[4] iFlight Operations product. IBS Software, http://www.ibsplc.com/fleet-scheduling-and-management-software.html.
[5] OSYRIS software suite. Barco, http://www.barco.com/en/productcategory/29.
Abstract
The capacity analysis tool Kaban aims to be efficient for examining whether planned infrastructure meets the expected need for railway capacity. Infrastructure, safety rules, signalling and traffic are all modelled in great detail in Kaban, and hence the tool is a useful support for signalling design. The tool is also useful for finding out which routing and which train order suit an existing or planned track layout.
The idea of Kaban is that traffic patterns can be modelled as discrete-event systems, with the minimum cycle time as a capacity measure. This measure answers the question of whether a certain timetable is possible at a station and tells how much buffer time there is. Kaban also presents results on what is critical for the capacity, aiming at explaining what to adjust to increase capacity. The GUI of Kaban displays the infrastructure and train paths and takes care of the user interaction.
The development of Kaban is supported by the Swedish Transport Administration (Trafikverket).
1 Introduction
Kaban aims to be a user-friendly capacity analysis tool, efficient to work with and producing relevant and dependable results. Kaban aims in particular at supporting signalling design, but the tool is also useful for other phases of building and utilising the railway. The computations of capacity in Kaban are analytic. The tool is based on the methods presented in [1] and [2].
1.1 The goal of capacity analysis
In this paper the point of view is that analysis of railway capacity is conducted
with the aim of supporting decisions on how to most efficiently build and utilise
doi:10.2495/CMEM110611
Kaban implements the Swedish safety rules in detail and offers the possibility to
specify certain kinds of deviations from the rules.
The static and analytic character of Kaban means that, in a sense, it models traffic in
an ideal world. Some results can be thought of as theoretical maximum capacity, to be used as measures for comparing different solutions. Other results, to
become realistic, need to be calibrated by inserting buffer times. There are also
results, though, that have a static nature of their own. Static are certain requirements
on vehicle positions at the moment a train route is locked. Such requirements are
capacity results in their own right, and they are significant as they are the basis for other
capacity analysis results. The advantages of Kaban's models of railway operation are that:
the assumptions are precisely defined, so we know exactly what the results
mean
fast computations are made possible, which allows a large number of
infrastructure proposals and modifications to be considered within the available time
precise measures of capacity are provided for measuring the benefits of
infrastructure modifications, comparing infrastructure proposals and
deciding which proposal to choose.
It is possible to develop Kaban further, while keeping it analytic, into a tool
based on probabilistic models, for instance using probabilistic models for running
times. Such a development may, however, be hard to achieve without
slowing down the computations. A development more likely to be advantageous
is the search for the minimum cycle time over all traffic patterns satisfying some
specification.
2.2 The concepts waiting point and cleared path
The essential concepts on which the method of Kaban relies are waiting point
and cleared path. A detailed description and examples are also given in [2].
A waiting point is a point at which a train stops to wait for prior trains to pass.
The waiting points of trains are input to the capacity analysis. A waiting point is
associated with the end of a train route. A train path is divided into parts called
cleared paths by the waiting points. A train path here means the path on which
the train runs through the analysed railway section. Cleared paths should not be
confused with train routes, and waiting points should not be confused with stops in
general.
2.3 Conflicts and cycle time computation
The model of train operation in Kaban is a discrete event system (see [3]). As
events we choose the starting times of movements on cleared paths. In what follows we will simply say movements to refer to movements on cleared paths.
Considering two events separately from all other events the following principles
two trains. One of the trains, say train TR, is divided into two movements, R1 and
R2, by a waiting point. The other train, say train TV, has no waiting point and
consists of just one movement, V. Figure 1 shows the movement R1 and figure
2 shows the two movements V and R2. Assume that, for this example, all data
needed for the capacity analysis has been given to the system. Among the given
data is the traffic pattern, given by ordering the movements, here as R1-V-R2.
The first step in the capacity analysis is to determine which conflicts between movements are relevant. In Kaban, the conflicts of a traffic pattern are represented by a conflict graph. Figure 3 shows the conflict graph for the traffic pattern
R1-V-R2. The conflict graph has events, i.e. movement starts, as nodes and conflicts between movements as arcs. Subsequently we will refer to conflicts between
movements and minimum durations between two events as arcs and arc weights,
respectively. Since the analysis is concerned with a repeated traffic pattern there
are two kinds of arcs: those for conflicts between movements in the same period of
the traffic pattern and those for movements in successive periods. The first kind
is depicted and referred to as straight arcs and the second kind is depicted and
referred to as bowed arcs.
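As a hedged illustration (with invented arc weights, not data from the paper), the sketch below computes the minimum cycle time of a repeated traffic pattern from a small conflict graph of the kind described above. Under the standard interpretation of such periodic discrete event systems, the minimum cycle time is the maximum, over all cycles in the graph, of the total arc weight divided by the number of bowed arcs in the cycle.

```python
# Hypothetical conflict graph for the pattern R1-V-R2; all weights invented.
# Each arc: (src, dst, weight_s, bowed). bowed=True marks a conflict with the
# movement in the *next* period of the repeated traffic pattern.
arcs = [
    ("R1", "V", 120.0, False),  # V may start 120 s after R1 starts
    ("V", "R2", 90.0, False),   # R2 may start 90 s after V starts
    ("R2", "R1", 60.0, True),   # next period's R1: 60 s after R2 starts
    ("V", "R1", 130.0, True),   # next period's R1: 130 s after V starts
]

def min_cycle_time(arcs):
    """Maximum over all cycles of (total weight) / (number of bowed arcs)."""
    best = 0.0

    def dfs(start, node, weight, bowed, visited):
        nonlocal best
        for src, dst, w, b in arcs:
            if src != node:
                continue
            if dst == start:
                if bowed + b > 0:  # a cycle must cross a period boundary
                    best = max(best, (weight + w) / (bowed + b))
            elif dst not in visited:
                dfs(start, dst, weight + w, bowed + b, visited | {dst})

    for n in {a[0] for a in arcs}:
        dfs(n, n, 0.0, 0, {n})
    return best

print(min_cycle_time(arcs))
```

With these invented weights the binding cycle is R1, V, R2 and back to the next period's R1, giving a minimum cycle time of 270 s.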
run a given timetable on a railway section and how much buffer time there is. The
conflict graph tells us which conflicts are relevant and can be used to answer
questions about slack time for a chosen sequence of movements, with respect
to either the cycle time or a given timetable. The critical sequences of movements are of
interest for robustness and sensitivity analysis and for finding out how to reduce the
cycle time further.
Since the cycle time can be reduced only by reducing the weight of arcs
in the critical graph, we need to study the computation of those arc weights to
understand what the options are for reducing the cycle time. For this reason
Kaban describes, for each arc A-B of the analysed traffic pattern:
how the arc weight is computed
the critical train route RB of the succeeding movement B
the position that vehicle A must pass before RB is locked and the position
that B must not pass before RB is locked.
Concerning the first point, Kaban considers capacity limits for the surroundings of
the analysed area, and hence the weight of an arc is a consequence either of such a
capacity limit or of an internal limitation. Moreover, vehicles have a brake reaction
time and the signalling system also has some reaction time, which makes the final
arc weight a sum of several terms. That is what the first point refers to.
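As a hedged sketch of the first point (not Kaban's actual data model, and with invented numbers): for an internal capacity limitation, the final arc weight is a sum of terms such as a running-time component plus the reaction times of the vehicle and the signalling system.

```python
# Illustrative only: an arc weight as a sum of a running-time term and the
# reaction times described in the text. All values are invented.
def arc_weight(running_time, signalling_reaction, brake_reaction):
    """Minimum duration [s] between the two events of an arc."""
    return running_time + signalling_reaction + brake_reaction

print(arc_weight(180.0, 6.0, 5.0))
```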
Regarding the second and third points, let us once again consider the traffic
pattern R1-V-R2 described by figures 1 and 2. The arc weight of V-R1 is the
minimum duration from the start of V till the start of R1. R1 consists of more than
one train route, each one locked in one piece. It is not only the locking of the first
train route of R1 that is critical for the arc weight of V-R1; it may be any of the
train routes of R1. The reason for this is the assumption that R1 is an unhindered
movement all the way to the waiting point. Hence, to reduce the arc weight of V-R1
we need to know which train route of R1 is critical with respect to V-R1. Given
that the capacity limitation is internal and given the critical route, the arc
weight depends on requirements on the vehicle positions at the moment
the critical train route is locked. The safety regulation says which position V must
pass before the locking of the critical train route of R1. The assumption that R1 is unhindered
does not allow the vehicle of R1 to adjust its speed to the fact that some train
route ahead is not yet locked. Hence, this assumption determines the position
the vehicle of R1 must not pass before the locking of the critical train route of R1.
2.6 Support for entering data
An extensive amount of data is required for careful analysis of capacity and a
tool with a supportive user interface will save a lot of time for the users. Capacity
analysis in Kaban is based on data on
infrastructure
vehicles
train routes and protection of train routes
trains and traffic.
All needed data can be entered via the Kaban GUI. Some of the features of Kaban
are described below.
Infrastructure data includes points, track circuits, speed limits, various signals
and buffer stops. It is possible to import infrastructure data in a special XML format generated from BIS, the infrastructure database of Banverket (the Swedish
rail administration). Kaban does not automatically draw a visually pleasing picture
of the track layout, but the Kaban GUI has drawing modification support for
concisely picturing the real track layout. Figure 5 shows an example of a Kaban
drawing obtained by modifying BIS-imported data.
The vehicle data includes length, brake reaction time, acceleration and deceleration. The computations in Kaban are based on constant acceleration and deceleration. Train routes are automatically generated for all pairs of successive main
signals. Protection for train routes is generated according to the Swedish safety
regulation, where the first valid object is chosen in every direction needing protection. For any train using any train route, the user may adjust the chosen protection.
Such adjustments appear as the allowance of otherwise non-valid objects to protect the train
route and the disallowance of otherwise valid protection. There are several reasons
for enabling such adjustments:
to adapt the train route protection to an older safety regulation,
which may still be in use at some sites
to give the user the possibility to override the safety regulation for some
almost acceptable protective objects
to increase capacity by cancelling conflicts, which is done by forbidding some protective objects for certain train routes used by certain trains.
3 Conclusions
This paper views capacity analysis as a means to support efficient construction
and utilisation of railway infrastructure. Different methods and tools may contribute in different ways to a solid understanding of which economic
investments will do the most to satisfy the growing railway transportation
needs. In this context time efficiency is crucial. Saving time and work is a decisive
factor for increasing the quality of analysis results, balancing operational aspects
and further developing capacity analysis by equipping it with new, powerful and
relevant results. In this context analytic, static methods of analysing capacity have
the potential to contribute valuable results such as:
fast computations that enable a large number of infrastructure and traffic cases
to be part of the analysis
analysis of trains behaving as we would like them to behave
measures of capacity for comparing different solutions and
enabling automatic search for the best solution
results with a precise and clear meaning.
The paper presents Kaban, which aims to be a user-friendly capacity analysis
tool. Kaban is based on analytic, non-probabilistic methods, and the results of
Kaban aim at helping users get a good understanding of the capacity advantages
and drawbacks of proposals for infrastructure and traffic. A main result is the cycle
time of a repeated traffic pattern, useful for estimating the feasibility of timetable
options. Other results aim at explaining how to adjust the infrastructure to increase
capacity. The Swedish safety regulation is the basis for Kaban's careful model of
train conflicts. Hence, Kaban suits capacity analysis in the signalling design phase
of railway construction.
References
[1] Forsgren, M., Computation of capacity on railway networks. SICS Technical
Report T2003:12, Swedish Institute of Computer Science, 2003. FoU-rapport,
Banverket.
Abstract
An algorithm for analyzing the rail potentials in a DC traction power supply
system is proposed. The distinctive feature of this algorithm is that it uses the
node voltage method twice in the rail potential analysis. To calculate the rail
potentials, the leakage resistance of the running rail must be included in the
equivalent network which makes it difficult to apply the node voltage method.
The mesh current method has the drawback that the initial mesh currents, which
are needed for an iterative network solution, are difficult to estimate. In this
algorithm, the rail potentials are obtained by applying the node voltage method
twice. In the first stage, the injection currents to the negative rail are obtained
from a load-flow study. In the next stage, a network consisting of the negative
rail and the injection currents is constructed. The leakage resistance to ground is
added to this network, and the rail potentials of the network are analyzed. A
computer load flow analysis software package was developed to verify the
validity of the algorithm. The results from the software are compared with the
EMTP-RV circuit analysis results, and the results are nearly identical.
Keywords: rail potential, simulation, DC traction power.
1 Introduction
In most DC powered electric railways, running rails are also used as the negative
return path to the rectifier negative bus in the substation. In this system, the
potentials of the running rails rise with increasing load current. The increased rail
potential causes concern for human safety, due to possibly hazardous
touch and step voltages [1]. For this reason, the maximum
allowable rail potential is limited by IEC standard 62128 [3]. The rail potential
doi:10.2495/CMEM110621
[Equations (1) and (2), which express the voltages to ground at the two ends of the negative rail in terms of VN, RL and RS, were garbled in extraction.]
VN = IN RN
(3)
RS and RL are the effective resistance to ground at the rectifier station end and
the train end, respectively. They are lumped resistances obtained by converting
the distributed rail-to-earth conductance of the negative rail to a lumped pi network.
Figure 1:
Figure 2:
Figure 3:
Figure 4:
The logic flow diagram is shown in Fig. 5. In step 3 of the logic flow
diagram, the node equation is constructed in matrix form, as shown in equation
(4). Equation (4) is set for the circuit shown in Fig. 3.
[G][V]=[I]
(4)
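One common way a conductance matrix like [G] in equation (4) is assembled is by "stamping" each resistive branch into the matrix; the sketch below is a hedged illustration with an invented two-node network, not the paper's circuit.

```python
# Hypothetical assembly of the [G] matrix in eq. (4): each resistive branch
# stamps its conductance into the node admittance matrix. Values are invented.
def build_G(n_nodes, branches):
    """branches: (node_a, node_b, R_ohm); node index -1 denotes ground."""
    G = [[0.0] * n_nodes for _ in range(n_nodes)]
    for a, b, r in branches:
        g = 1.0 / r
        if a >= 0:
            G[a][a] += g
        if b >= 0:
            G[b][b] += g
        if a >= 0 and b >= 0:
            G[a][b] -= g
            G[b][a] -= g
    return G

# Two nodes joined by 0.5 ohm; node 1 tied to ground through 2.0 ohm.
G = build_G(2, [(0, 1, 0.5), (1, -1, 2.0)])
print(G)  # -> [[2.0, -2.0], [-2.0, 2.5]]
```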
It = Pt / Vt
(5)
where Pt: the required power of the train
It: the injection current at the train node
Vt: the train node voltage (assumed value)
Because It is a function of Vt, equation (4) is nonlinear, and must be solved
iteratively. In step 4 of the logic flow diagram in Fig. 5, a new network is
constructed, and a new set of node equations is built, as shown in eq.(6).
[G]new[V]new=[I]new
(6)
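A minimal sketch of the first-stage iteration described above: with a single train, the node equation (4) reduces to one unknown voltage, and the nonlinearity introduced by eq. (5) is resolved by repeated substitution. The feeding resistance and train power below are assumed illustrative values, not the paper's test-system data.

```python
# Hypothetical one-train load-flow iteration for eq. (4)/(5). The injection
# current I_t = P_t / V_t depends on the unknown node voltage, so the node
# equation is re-solved until V_t converges. Values are assumed.
V_SOURCE = 810.0  # no-load rectifier voltage [V]
R_FEED = 0.5      # lumped feeding resistance, source to train [ohm] (assumed)
P_TRAIN = 200e3   # power required by the train [W] (assumed)

def solve_train_voltage(v_guess=V_SOURCE, tol=1e-9, max_iter=100):
    """Iterate the node equation until the train node voltage converges."""
    v = v_guess
    for _ in range(max_iter):
        i_inj = P_TRAIN / v                # eq. (5): I_t = P_t / V_t
        v_new = V_SOURCE - R_FEED * i_inj  # node equation for this tiny network
        if abs(v_new - v) < tol:
            return v_new, i_inj
        v = v_new
    raise RuntimeError("load flow did not converge")

v_t, i_t = solve_train_voltage()
print(round(v_t, 2), round(i_t, 2))
```

The converged voltage satisfies the quadratic V^2 - V_SOURCE*V + R_FEED*P_TRAIN = 0, which is a convenient check on the iteration.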
Figure 5:
Unlike equation (4), eq. (6) does not require an iterative solution because the
elements of [I]new are given as constants. All elements of [V]new represent the
running rail potential at every node location.
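A minimal sketch of the second-stage solve of eq. (6): the injection currents are now constants, and with the rail-to-earth leakage resistances included, [G]new is non-singular, so a single linear solve yields the rail potentials. All element values here are assumed for illustration, not taken from the test system.

```python
# Hypothetical second-stage solve of [G]new[V]new = [I]new: a two-node
# negative-rail network with constant injection currents. Values are assumed.
def solve_linear(G, I):
    """Solve [G][V] = [I] by Gaussian elimination with partial pivoting."""
    n = len(G)
    A = [row[:] + [I[k]] for k, row in enumerate(G)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    V = [0.0] * n
    for r in range(n - 1, -1, -1):
        V[r] = (A[r][n] - sum(A[r][c] * V[c] for c in range(r + 1, n))) / A[r][r]
    return V

# Node 0 = train, node 1 = substation, joined by one rail segment.
R_RAIL = 0.01          # rail segment resistance [ohm] (assumed)
R_L, R_S = 50.0, 50.0  # lumped leakage resistances to ground [ohm] (assumed)
I_INJ = 300.0          # stage-1 injection current [A]: +I at train, -I at substation

G_new = [[1.0 / R_RAIL + 1.0 / R_L, -1.0 / R_RAIL],
         [-1.0 / R_RAIL, 1.0 / R_RAIL + 1.0 / R_S]]
I_new = [I_INJ, -I_INJ]

V_rail = solve_linear(G_new, I_new)  # rail potentials at the two nodes [V]
print(V_rail)
```

In this symmetric example the two rail potentials are equal and opposite, of roughly 1.5 V magnitude.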
4 Test run
Computer load flow software for rail potential analysis was developed to
implement the algorithm proposed above. It was applied to the test system, and
the result was compared with the result of EMTP-RV analysis for the test
system. Fig. 6 shows the test system, and the symbols in Fig. 6 are explained in
Tables 1 and 2. In Fig. 6, m stands for milli-. The line parameters and the train
locations are shown in Tables 1 and 2.

Figure 6: Test system.

Table 1: Line parameters of the test system.

Segment | Resistance [mΩ]
R1      | 22.5
R2      | 0.35
R3      | 0.35
R4      | 10.4412
R5      | 4.9728
R6      | 9.7272
R7      | 9.4717
R8      | 4.5110
R9      | 8.8240

Table 2: Source and loads of the test system (power and current column headings inferred).

Source/Loads      | Location [m] | Power [kW] | Injection current [A] | Remarks
Rectifier Station | 2670         |            |                       | Source resistance = 0.0225 Ω
Train_1           | 1427         | 200        | 256.0867              |
Train_2           | 3262         | -627.2     | -805.4106             | Regenerating
Train_3           | 4420         | 1184.3     | 1580.2787             |

In the test system, the rectifier produces 810 V
DC with no load, and three trains are drawing power, as shown in Table 2. The
rail potentials of the test system are obtained from the developed load flow
software. The results are shown in Fig. 7. The same rail potentials were analyzed
with EMTP-RV s/w. Fig. 8 shows the EMTP-RV model of the test system.
However, EMTP-RV cannot conduct a DC load-flow study; therefore, instead of
the power required by the trains, the injection currents from the load-flow
software are inserted as DC current sources at the train locations. This
substitution is valid provided that the power consumption of each train is identical to its required
power. Fig. 9 shows the resulting rail potentials from the EMTP-RV analysis.

Figure 7: Rail potentials from the developed load-flow software.

Figure 8: EMTP-RV model of the test system.
The rail potentials from the previous two studies are summarized in Table 3, and
the two results are identical. Next, the power that each train consumed according
to the EMTP-RV analysis is compared with its required power to validate the
substitution.

Figure 9: Rail potentials from the EMTP-RV analysis.

Table 3: Rail potentials from the two analyses (the potential values themselves were lost in extraction).

Node No | % error
1       | 0.00
2       | 0.00
3       | 0.00
4       | 0.00

Figure 10:

Table 4: Power consumed by each train in the EMTP-RV analysis versus its required power (the power values themselves were lost in extraction).

Node      | % error
Train_3   | 0.00
DC Source | 0.00
Train_2   | 0.00
Train_1   | 0.00
5 Conclusion
The rail potentials in a DC traction power supply system can be analyzed by
applying the node voltage analysis method twice consecutively with the
proposed algorithm. This algorithm was validated in section 4. The advantage of
the algorithm is that we can avoid the mesh current method in the rail potential
analysis, which is difficult to use because assumed initial mesh currents must be
used in the iterative circuit analysis. The mesh current method also requires more
than twice the computer memory required for the node analysis method, because
the number of meshes is more than twice the number of nodes.
References
[1] Yu, J.G., Computer Analysis of Touch Voltages and Stray Currents for DC
Railways, Ph.D. thesis, University of Birmingham, July 1992.
[2] Pham, K.D., Thomas, R.S. & Stinger, W.E., Operational and safety
considerations in designing a light rail DC traction electrification system.
Proc. of the 2003 IEEE/ASME Joint Rail Conference, pp. 171-189, 2003.
[3] IEC 62128-1: Railway applications - Fixed installations - Part 1: Protective
provisions relating to electrical safety and earthing.
Author Index
Ababneh Ayman N. ................. 317
Abdel Magid Y. ....................... 307
Abdolahi S. .............................. 193
Abdul Majeed F. ...................... 307
AbdulGhani S. A. A................. 279
Ajalli F. ...................................... 55
Al-Haddad A............................ 547
Alqirem R. M. ............................ 35
Al-Sibahy A. .............................. 85
Alvarez E. ................................ 559
Anwar Bg O. .......................... 279
Arias B. .................................... 559
Arida H. A. .............................. 547
Barboni T. ................................ 155
Barnat W. ......................... 625, 645
Bautista E. ................................ 229
Bautista O. ............................... 229
Behera S. .................................. 215
Bellerov H. ............................. 293
Bielecki Z. ............................... 461
Bilek A..................................... 167
Bodino M. ................................ 205
Chamis C. C. .............................. 23
Chaudhuri B. ............................ 121
Chen G. C. ............................... 521
Chiorean C. G. ......................... 363
Christou P. ............................... 385
Chung S. .................................. 703
Corbetta C. ............................... 205
Curtis P. T. ............................... 397
Dan-Ali F. ................................ 133
Daud H. A. ............................... 279
de Fabris S. .............................. 577
Deng L. ...................................... 45
Djeddi F. .................................. 167
Dulikravich G. S. ..................... 111
Durley-Boot N. J. ..................... 509
Dziewulski P. ........................... 635
As computer models become more reliable and able to represent more realistic
problems, detailed data is leading to the development of appropriate new types
of experiments.
Experimental measurements need to be conditioned to the requirements of
the computational models. Hence it is important that scientists working on
experiments communicate with researchers developing computer codes, as well
as those carrying out measurements on prototypes. The orderly and progressive
concurrent development of all these fields is essential for the progress of
engineering sciences.
This book resulted from the latest in a series of successful meetings that
provide a unique forum for the review of the latest work on the interaction
between computational methods and experimental measurements. The topics
include: Computational and Experimental Methods; Experimental and
Computational Analysis; Direct, Indirect and In-Situ Measurements; Detection
and Signal Processing; Data Processing; Fluid Flow; Heat Transfer and Thermal
Processes; Material Characterization; Structural and Stress Analysis; Industrial
Applications; Forest Fires.
WIT Transactions on Modelling and Simulation, Vol 48
ISBN: 978-1-84564-187-0
eISBN: 978-1-84564-364-5
Published 2009 / 672pp / 255.00
WIT Press is a major publisher of engineering research. The company prides itself on producing books by
leading researchers and scientists at the cutting edge of their specialities, thus enabling readers to remain at the
forefront of scientific developments. Our list presently includes monographs, edited volumes, books on disk,
and software in areas such as: Acoustics, Advanced Computing, Architecture and Structures, Biomedicine,
Boundary Elements, Earthquake Engineering, Environmental Engineering, Fluid Mechanics, Fracture Mechanics,
Heat Transfer, Marine and Offshore Engineering and Transport Engineering.
This book presents a general elastic and elastoplastic analysis method for the
treatment of two- and three-dimensional contact problems between two
deformable bodies undergoing small displacements with and without friction.
The authors' approach uses the Boundary Element Method (BEM) and
Mathematical Programming (MP). This is applied to the contacting bodies
pressed together by a normal force which is sufficient to expand the size of the
contact area, mainly through plastic deformation, acted on subsequently by a
tangential force less than that necessary to cause overall sliding. The formulated
method described in this book is straightforward and applicable to practical
engineering problems.
Series: Topics in Engineering, Vol 45
ISBN: 1-85312-733-7
Published 2005 / 144pp / 55.00
This book presents a clear and well-organised treatment of the concepts behind
the development of mathematics and numerical techniques. The central topic
is the application of numerical methods and the calculus of variations to physical problems. Based
on the author's course, taught at many universities around the world, the text is
primarily intended for undergraduates in electrical, mechanical, chemical and
civil engineering, physics, applied mathematics and computer science. Many
sections are also directly relevant to graduate students in the mathematical and
physical sciences.
More than 100 solved problems and approximately 120 exercises are also
featured.
ISBN: 1-85312-891-0
Published 2005 / 408pp+CD / 165.00