
Handbook of Research on
Discrete Event Simulation
Environments:
Technologies and Applications

Evon M. O. Abu-Taieh
Arab Academy for Banking and Financial Sciences, Jordan

Asim Abdel Rahman El Sheikh
Arab Academy for Banking and Financial Sciences, Jordan

Information Science Reference
Hershey • New York
Director of Editorial Content: Kristin Klinger
Senior Managing Editor: Jamie Snavely
Assistant Managing Editor: Michael Brehm
Publishing Assistant: Sean Woznicki
Typesetter: Michael Killian, Sean Woznicki
Cover Design: Lisa Tosheff
Printed at: Yurchak Printing Inc.

Published in the United States of America by
Information Science Reference (an imprint of IGI Global)
701 E. Chocolate Avenue
Hershey PA 17033
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: cust@igi-global.com
Web site: http://www.igi-global.com/reference

Copyright © 2010 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in
any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher.
Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or
companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data

Handbook of research on discrete event simulation environments : technologies and applications / Evon M.O. Abu-Taieh and
Asim Abdel Rahman El Sheikh, editors.
p. cm.
Includes bibliographical references and index.
Summary: "This book provides a comprehensive overview of theory and practice in simulation systems focusing on major
breakthroughs within the technological arena, with particular concentration on the accelerating principles, concepts and
applications"--Provided by publisher.

ISBN 978-1-60566-774-4 (hardcover) -- ISBN 978-1-60566-775-1 (ebook)
1. Discrete-time systems--Computer simulation. I. Abu-Taieh, Evon M. O. II. El Sheikh, Asim Abdel Rahman.
T57.62H365 2012
003'.830113--dc22
2009019592

British Cataloguing in Publication Data


A Cataloguing in Publication record for this book is available from the British Library.

All work contributed to this book is new, previously unpublished material. The views expressed in this book are those of the
authors, but not necessarily those of the publisher.
Editorial Advisory Board
Raymond R. Hill, Wright State University, USA
Firas Al-Khaldi, Arab Academy for Banking and Financial Sciences, Jordan
Jeihan Abu-Tayeh, The World Bank, Middle East and North Africa Region, USA
Tillal Eldabi, Brunel University, UK
Roberto Revetria, University of Genoa, Italy
Sabah Abutayeh, Housing Bank, Jordan
Michael Dupin, Harvard Medical School and Massachusetts General Hospital, USA
List of Contributors

Aboud, Sattar J. / Middle East University for Graduate Studies, Jordan .......................................... 58
Abu-Taieh, Evon M. O. / Civil Aviation Regulatory Commission and Arab
Academy for Financial Sciences, Jordan .......................................................................................... 15
Abutayeh, Jeihan M. O. / World Bank, Jordan .................................................................................. 15
Al-Bahadili, Hussein / The Arab Academy for Banking & Financial Sciences, Jordan ................... 418
Al-Fayoumi, Mohammad / Middle East University for Graduate Studies, Jordan ........................... 58
Al-Hudhud, Ghada / Al-Ahlyia Amman University, Jordan ............................................................. 252
Alnoukari, Mouhib / Arab Academy for Banking and Financial Sciences, Syria ............................ 359
Alnuaimi, Mohamed / Middle East University for Graduate Studies, Jordan ................................... 58
Al-Qirem, Raed M. / Al-Zaytoonah University of Jordan, Jordan ................................................... 484
Alzoabi, Zaidoun / Arab Academy for Banking and Financial Sciences, Syria................................ 359
Capra, Lorenzo / Università degli Studi di Milano, Italy ......................................................... 191, 218
Cartaxo, Adolfo V. T. / Instituto de Telecomunicações, Portugal ..................................................... 143
Cassettari, Lucia / University of Genoa, Italy..................................................................................... 92
Cazzola, Walter / Università degli Studi di Milano, Italy ......................................................... 191, 218
Cercas, Francisco A. B. / Instituto de Telecomunicações, Portugal ................................................. 143
El Sheikh, Asim / Arab Academy for Banking and Financial Sciences, Jordan ............................... 359
Gamez, David / Imperial College, UK .............................................................................................. 337
Heath, Brian L. / Wright State University, USA .................................................................................. 28
Hill, Raymond R. / Air Force Institute of Technology, USA ............................................................... 28
Kolker, Alexander / Children’s Hospital and Health Systems, USA ................................................. 443
Korhonen, Ari / Helsinki University of Technology, Finland ............................................................ 234
Kubátová, Hana / Czech Technical University in Prague, Czech Republic ..................................... 178
Lipovszki, Gyorgy / Budapest University of Technology and Economics, Hungary ........................ 284
Marzouk, Mohamed / Cairo University, Egypt ................................................................................ 509
Membarth, Richard / University of Erlangen-Nuremberg, Erlangen, Germany ............................. 379
Molnar, Istvan / Bloomsburg University of Pennsylvania, USA ................................................... 1, 284
Mosca, Roberto / University of Genoa, Italy....................................................................................... 92
Revetria, Roberto / University of Genoa, Italy ................................................................................... 92
Sarjoughian, Hessam / Arizona Center for Integrative Modeling and Simulation, USA ................... 75
Sarkar, Nurul I. / AUT University, Auckland, New Zealand ..................................................... 379, 398
Sebastião, Pedro J. A. / Instituto de Telecomunicações, Portugal .................................................... 143
Tolk, Andreas / Old Dominion University, USA ................................................................................ 317
Wutzler, Thomas / Max Planck Institute for Biogeochemistry, Germany........................................... 75
Yaseen, Saad G. / Al-Zaytoonah University of Jordan, Jordan ......................................................... 484
Table of Contents

Preface ............................................................................................................................................... xvii

Acknowledgment ..............................................................................................................................xxiii

Chapter 1
Simulation: Body of Knowledge ............................................................................................................ 1
Istvan Molnar, Bloomsburg University of Pennsylvania, USA

Chapter 2
Simulation Environments as Vocational and Training Tools ................................................................ 15
Evon M. O. Abu-Taieh, Civil Aviation Regulatory Commission and Arab Academy for
Financial Sciences, Jordan
Jeihan M. O. Abutayeh, World Bank, Jordan

Chapter 3
Agent-Based Modeling: A Historical Perspective and a Review of Validation and
Verification Efforts ................................................................................................................................ 28
Brian L. Heath, Wright State University, USA
Raymond R. Hill, Air Force Institute of Technology, USA

Chapter 4
Verification and Validation of Simulation Models ................................................................................ 58
Sattar J. Aboud, Middle East University for Graduate Studies, Jordan
Mohammad Al-Fayoumi, Middle East University for Graduate Studies, Jordan
Mohamed Alnuaimi, Middle East University for Graduate Studies, Jordan

Chapter 5
DEVS-Based Simulation Interoperability............................................................................................. 75
Thomas Wutzler, Max Planck Institute for Biogeochemistry, Germany
Hessam Sarjoughian, Arizona Center for Integrative Modeling and Simulation, USA
Chapter 6
Experimental Error Measurement in Monte Carlo Simulation.............................................................. 92
Lucia Cassettari, University of Genoa, Italy
Roberto Mosca, University of Genoa, Italy
Roberto Revetria, University of Genoa, Italy

Chapter 7
Efficient Discrete Simulation of Coded Wireless Communication Systems....................................... 143
Pedro J. A. Sebastião, Instituto de Telecomunicações, Portugal
Francisco A. B. Cercas, Instituto de Telecomunicações, Portugal
Adolfo V. T. Cartaxo, Instituto de Telecomunicações, Portugal

Chapter 8
Teaching Principles of Petri Nets in Hardware Courses and Students Projects.................................. 178
Hana Kubátová, Czech Technical University in Prague, Czech Republic

Chapter 9
An Introduction to Reflective Petri Nets.............................................................................................. 191
Lorenzo Capra, Università degli Studi di Milano, Italy
Walter Cazzola, Università degli Studi di Milano, Italy

Chapter 10
Trying Out Reflective Petri Nets on a Dynamic Workflow Case......................................................... 218
Lorenzo Capra, Università degli Studi di Milano, Italy
Walter Cazzola, Università degli Studi di Milano, Italy

Chapter 11
Applications of Visual Algorithm Simulation...................................................................................... 234
Ari Korhonen, Helsinki University of Technology, Finland

Chapter 12
Virtual Reality: A New Era of Simulation and Modelling................................................................... 252
Ghada Al-Hudhud, Al-Ahlyia Amman University, Jordan

Chapter 13
Implementation of a DES Environment............................................................................................... 284
Gyorgy Lipovszki, Budapest University of Technology and Economics, Hungary
Istvan Molnar, Bloomsburg University of Pennsylvania, USA

Chapter 14
Using Simulation Systems for Decision Support................................................................................ 317
Andreas Tolk, Old Dominion University, USA
Chapter 15
The Simulation of Spiking Neural Networks...................................................................................... 337
David Gamez, Imperial College, UK

Chapter 16
An Integrated Data Mining and Simulation Solution ......................................................................... 359
Mouhib Alnoukari, Arab Academy for Banking and Financial Sciences, Syria
Asim El Sheikh, Arab Academy for Banking and Financial Sciences, Jordan
Zaidoun Alzoabi, Arab Academy for Banking and Financial Sciences, Syria

Chapter 17
Modeling and Simulation of IEEE 802.11g using OMNeT++ ........................................................... 379
Nurul I. Sarkar, AUT University, Auckland, New Zealand
Richard Membarth, University of Erlangen-Nuremberg, Erlangen, Germany

Chapter 18
Performance Modeling of IEEE 802.11 WLAN using OPNET: A Tutorial ....................................... 398
Nurul I. Sarkar, AUT University, New Zealand

Chapter 19
On the Use of Discrete-Event Simulation in Computer Networks Analysis and Design ................... 418
Hussein Al-Bahadili, The Arab Academy for Banking & Financial Sciences, Jordan

Chapter 20
Queuing Theory and Discrete Events Simulation for Health Care: From Basic Processes to
Complex Systems with Interdependencies ......................................................................................... 443
Alexander Kolker, Children’s Hospital and Health Systems, USA

Chapter 21
Modelling a Small Firm in Jordan Using System Dynamics.............................................................. 484
Raed M. Al-Qirem, Al-Zaytoonah University of Jordan, Jordan
Saad G. Yaseen, Al-Zaytoonah University of Jordan, Jordan

Chapter 22
The State of Computer Simulation Applications in Construction ...................................................... 509
Mohamed Marzouk, Cairo University, Egypt

Compilation of References .............................................................................................................. 535

About the Contributors ................................................................................................................... 570

Index ................................................................................................................................................... 578


Detailed Table of Contents

Preface ............................................................................................................................................... xvii

Acknowledgment ..............................................................................................................................xxiii

Chapter 1
Simulation: Body of Knowledge ............................................................................................................ 1
Istvan Molnar, Bloomsburg University of Pennsylvania, USA

Chapter 1, Simulation: Body of Knowledge, attempts to define the body of knowledge of simulation and describes the underlying principles of simulation education. It argues that any program in Modelling and Simulation should recognize the multi- and interdisciplinary character of the field and implement the program in broad cooperation. The chapter begins by clarifying the major objectives and principles of a Modelling and Simulation program and the related degrees, based on a broad business and real-world perspective. After reviewing students' backgrounds (communication, interpersonal, and team skills; analytical and critical thinking skills; and further skills leading to a career), the employer's view and possible career paths are examined. Finally, the core body of knowledge, curriculum design, and program-related issues are discussed. The author hopes to contribute to the recent discussions about modelling and simulation education and the profession.

Chapter 2
Simulation Environments as Vocational and Training Tools ................................................................ 15
Evon M. O. Abu-Taieh, Civil Aviation Regulatory Commission and Arab Academy for
Financial Sciences, Jordan
Jeihan M. O. Abutayeh, World Bank, Jordan

Chapter 2, Simulation Environments as Vocational and Training Tools, investigates over 50 simulation packages and simulators used in vocational and course training across many fields. Accordingly, the simulation packages are categorized into the following fields: pilot training; chemistry; physics; mathematics; environment and ecological systems; cosmology and astrophysics; medicine and surgery training; cosmetic surgery; engineering (civil engineering, architecture, interior design); computer and communication networks; stock market analysis; financial models and marketing; military training; and virtual reality. The incentive for using simulation environments as vocational and training tools is to save lives, money, and effort.
Chapter 3
Agent-Based Modeling: A Historical Perspective and a Review of Validation and
Verification Efforts ................................................................................................................................ 28
Brian L. Heath, Wright State University, USA
Raymond R. Hill, Air Force Institute of Technology, USA

Chapter 3, Agent-Based Modeling: A Historical Perspective and a Review of Validation and Verification Efforts, traces the historical roots of agent-based modeling. The review examines the modern influences of systems thinking and cybernetics, as well as chaos and complexity theory, on the growth of agent-based modeling. The chapter then examines the philosophical foundations of simulation verification and validation, which can be viewed from two quite different perspectives: that of the simulation philosopher and that of the simulation practitioner. Personnel from either camp are typically unaware of the other camp's view of simulation verification and validation. This chapter examines both camps while also surveying the literature and efforts pertaining to the verification and validation of agent-based models.

Chapter 4
Verification and Validation of Simulation Models ................................................................................ 58
Sattar J. Aboud, Middle East University for Graduate Studies, Jordan
Mohammad Al-Fayoumi, Middle East University for Graduate Studies, Jordan
Mohamed Alnuaimi, Middle East University for Graduate Studies, Jordan

Chapter 4, Verification and Validation of Simulation Models, discusses the validation and verification of simulation models. Different approaches to deciding model validity are presented; how model validation and verification relate to the model development process is discussed; various validation techniques are defined, covering conceptual model validity, model verification, operational validity, and data validity; a superior verification and validation technique for simulation models, based on a multistage approach, is described; ways to document the results are given; and a recommended procedure is presented.

Chapter 5
DEVS-Based Simulation Interoperability............................................................................................. 75
Thomas Wutzler, Max Planck Institute for Biogeochemistry, Germany
Hessam Sarjoughian, Arizona Center for Integrative Modeling and Simulation, USA

Chapter 5, DEVS-Based Simulation Interoperability, introduces the use of DEVS for implementing interoperability across heterogeneous simulation models. It shows that the DEVS framework provides a simple yet effective conceptual basis for handling simulation interoperability. It discusses the useful properties of the DEVS framework, describes the Shared Abstract Model (SAM) approach for interoperating simulation models, and compares it to other approaches. The DEVS approach enables formal model specification with component models implemented in multiple programming languages. The simplicity of integrating component models designed in the DEVS, DTSS, and DESS simulation formalisms and implemented in Java and C++ is demonstrated by a basic educational example and by a real-world forest carbon accounting model. The authors hope that readers will appreciate the combination of generality and simplicity, and that readers will consider using the DEVS approach for simulation interoperability in their own projects.
Chapter 6
Experimental Error Measurement in Monte Carlo Simulation ............................................................. 92
Lucia Cassettari, University of Genoa, Italy
Roberto Mosca, University of Genoa, Italy
Roberto Revetria, University of Genoa, Italy

Chapter 6, Experimental Error Measurement in Monte Carlo Simulation, describes the series of set-up steps developed by the Genoa Research Group on Production System Simulation at the beginning of the 1980s, through which it is possible first to statistically validate the simulator, then to estimate the variables that effectively affect the different target functions, then to obtain, through regression meta-models, the relations linking the independent variables to the dependent ones (the target functions), and finally to detect the optimal functioning conditions. The authors pay great attention to the treatment, evaluation, and control of the experimental error, in the form of the Mean Square Pure Error (MSPE), a measurement that is all too often neglected in traditional experimentation on simulation models but whose magnitude can substantially invalidate the results obtained from the model.

Chapter 7
Efficient Discrete Simulation of Coded Wireless Communication Systems ...................................... 143
Pedro J. A. Sebastião, Instituto de Telecomunicações, Portugal
Francisco A. B. Cercas, Instituto de Telecomunicações, Portugal
Adolfo V. T. Cartaxo, Instituto de Telecomunicações, Portugal

Chapter 7, Efficient Discrete Simulation of Coded Wireless Communication Systems, presents a simulation method, named the Accelerated Simulation Method (ASM), that provides a high degree of efficiency and accuracy, namely at lower BER, where the application of methods like the Monte Carlo simulation method (MCSM) is prohibitive due to high computational and time requirements. The present work generalizes the application of the ASM to a Wireless Communication System (WCS) modelled as a stochastic discrete channel model, considering a real channel in which several random effects result in random energy fluctuations of the received symbols. The performance of the coded WCS is assessed efficiently, with soft-decision (SD) and hard-decision (HD) decoding. The authors show that this new method achieves a time-efficiency gain of two to three orders of magnitude for SD and HD, considering a BER = 1 × 10^-4, when compared to the MCSM. The presented performance results are compared with those of the MCSM to check the method's accuracy.

Chapter 8
Teaching Principles of Petri Nets in Hardware Courses and Students Projects.................................. 178
Hana Kubátová, Czech Technical University in Prague, Czech Republic

Chapter 8, Teaching Principles of Petri Nets in Hardware Courses and Student Projects, presents the principles of using the Petri net formalism in hardware design courses, especially in the course “Architecture of Peripheral Devices”. Several models and results obtained in individual and group student projects are mentioned. First, the use of the formalism as a modeling tool is presented, progressing from Place/Transition nets to Coloured Petri nets. Then the use of Petri nets as a hardware specification for direct hardware implementation (synthesized VHDL for FPGAs) is described. Implementation and simulation results of three directly implemented models are presented.

Chapter 9
An Introduction to Reflective Petri Nets.............................................................................................. 191
Lorenzo Capra, Università degli Studi di Milano, Italy
Walter Cazzola, Università degli Studi di Milano, Italy

Chapter 9, An Introduction to Reflective Petri Nets, introduces Reflective Petri nets, a formal model for dynamic discrete-event systems. Based on a typical reflective architecture, in which functional aspects are cleanly separated from evolutionary ones, the model preserves the descriptive effectiveness and the analysis capabilities of Petri nets. With a view to implementing a discrete-event simulation engine in the short term, Reflective Petri nets are provided with a timed state-transition semantics.

Chapter 10
Trying Out Reflective Petri Nets on a Dynamic Workflow Case......................................................... 218
Lorenzo Capra, Università degli Studi di Milano, Italy
Walter Cazzola, Università degli Studi di Milano, Italy

Chapter 10, Trying out Reflective Petri Nets on a Dynamic Workflow Case, proposes a recent Petri net-based reflective layout, called Reflective Petri nets, as a formal model for dynamic workflows. A localized open problem is considered: how to determine which tasks should be redone, and which should not, when transferring a workflow instance from an old template to a new one. The problem is addressed efficiently, but rather empirically, in a workflow management system. The proposed approach is formal, may be generalized, and is based on the preservation of classical Petri net structural properties, which permit an efficient characterization of workflow soundness.

Chapter 11
Applications of Visual Algorithm Simulation...................................................................................... 234
Ari Korhonen, Helsinki University of Technology, Finland

Chapter 11, Applications of Visual Algorithm Simulation, presents a novel idea, called visual algorithm simulation, for promoting interaction between the user and the algorithm visualization system. As a proof of concept, the chapter presents an application framework called Matrix that encapsulates the idea of visual algorithm simulation. The framework is applied in the TRAKLA2 learning environment, in which algorithm simulation is employed to produce algorithm simulation exercises. Moreover, the benefits of such exercises and applications of visual algorithm simulation in general are discussed.

Chapter 12
Virtual Reality: A New Era of Simulation and Modelling................................................................... 252
Ghada Al-Hudhud, Al-Ahlyia Amman University, Jordan

Chapter 13
Implementation of a DES Environment.............................................................................................. 284
Gyorgy Lipovszki, Budapest University of Technology and Economics, Hungary
Istvan Molnar, Bloomsburg University of Pennsylvania, USA

Chapter 13, Implementation of a DES Environment, describes a program system that implements a Discrete Event Simulation (DES) development environment. The simulation environment was created using the LabVIEW graphical programming system, a National Instruments software product. In this programming environment the user can connect different procedures and data structures with “graphical wires” to implement a simulation model, thereby creating an executable simulation program. The connected individual objects simulate a discrete event problem. The chapter describes all simulation model objects, together with their attributes and methods. Another important element of the discrete event simulator is the task list, which is also built from task-type objects. The simulation system uses the “next event” simulation technique and refreshes the actual state (the attribute values of all model objects) at every event. The state changes are determined by the entity objects, their input, current content, and output. Every model object can access (read) all object attribute values and modify (write) a selected number of them. This property of the simulation system makes it possible to build a complex discrete event system from predefined discrete event model objects.

Chapter 14
Using Simulation Systems for Decision Support................................................................................ 317
Andreas Tolk, Old Dominion University, USA

Chapter 14, Using Simulation Systems for Decision Support, describes the use of simulation systems for decision support in real operations, which is the most challenging application domain in the discipline of modeling and simulation. To this end, the systems must be integrated as services into the operational infrastructure. To support the discovery, selection, and composition of services, they need to be annotated regarding technical, syntactic, semantic, pragmatic, dynamic, and conceptual categories. The systems themselves must be complete and validated, and the data must be obtainable, preferably via common protocols shared with the operational infrastructure. Agents and automated forces must produce situation-adequate behavior. If these requirements for simulation systems and their annotations are fulfilled, decision support simulation can contribute significantly to the situational awareness of the decision maker, up to the cognitive level.

Chapter 15
The Simulation of Spiking Neural Networks...................................................................................... 337
David Gamez, Imperial College, UK
Chapter 15, The Simulation of Spiking Neural Networks, is an overview of the simulation of spiking neural networks that relates discrete event simulation to other approaches. To illustrate the issues surrounding this work, the second half of the chapter presents a case study of the SpikeStream neural simulator, covering the architecture, performance, and typical applications of this software, along with some recent experiments.

Chapter 16
An Integrated Data Mining and Simulation Solution ......................................................................... 359
Mouhib Alnoukari, Arab Academy for Banking and Financial Sciences, Syria
Asim El Sheikh, Arab Academy for Banking and Financial Sciences, Jordan
Zaidoun Alzoabi, Arab Academy for Banking and Financial Sciences, Syria

Chapter 16, An Integrated Data Mining and Simulation Solution, proposes an intelligent DSS framework based on the integration of data mining and simulation. The main output of this framework is an increase in knowledge. Two case studies are presented; the first concerns car market demand simulation. Its simulation model was built using neural networks to obtain the first set of prediction results, and the data mining methodology used is ANFIS (Adaptive Neuro-Fuzzy Inference System). The second case study demonstrates how data mining and simulation can be applied to quality assurance in higher education.

Chapter 17
Modeling and Simulation of IEEE 802.11g using OMNeT++ ........................................................... 379
Nurul I. Sarkar, AUT University, Auckland, New Zealand
Richard Membarth, University of Erlangen-Nuremberg, Erlangen, Germany

Chapter 17, Modeling and Simulation of IEEE 802.11g using OMNeT++, aims to provide a tutorial on OMNeT++, focusing on modeling and performance study of the IEEE 802.11g wireless network. Due to the complex nature of computer and telecommunication networks, it is often difficult to predict the impact of different parameters on system performance, especially when deploying wireless networks. Computer simulation has become a popular methodology for the performance study of computer and telecommunication networks. This popularity results from the availability of various sophisticated and powerful simulation software packages, and also from the flexibility in model construction and validation offered by simulation. While various network simulators exist for building a variety of network models, choosing a good network simulator is very important in the modeling and performance analysis of wireless networks. A good simulator is one that is easy to use; flexible in model development, modification, and validation; and incorporates appropriate analysis of simulation output data, pseudo-random number generators, and statistical accuracy of the simulation results. OMNeT++ is becoming one of the most popular network simulators because it has all the features of a good simulator.

Chapter 18
Performance Modeling of IEEE 802.11 WLAN using OPNET: A Tutorial ....................................... 398
Nurul I. Sarkar, AUT University, New Zealand

Chapter 18, Performance Modeling of IEEE 802.11 WLAN using OPNET: A Tutorial, aims to provide
a tutorial on OPNET focusing on the simulation and performance modeling of IEEE 802.11 wireless
local area networks (WLANs). Results obtained show that OPNET provides credible simulation results
close to a real system.

Chapter 19
On the Use of Discrete-Event Simulation in Computer Networks Analysis and Design ................... 418
Hussein Al-Bahadili, The Arab Academy for Banking & Financial Sciences, Jordan

Chapter 19, On the Use of Discrete-Event Simulation in Computer Networks Analysis and Design, de-
scribes a newly developed research-level computer network simulator, which can be used to evaluate the
performance of a number of flooding algorithms in ideal and realistic mobile ad hoc network (MANET)
environments. It is referred to as MANSim.

Chapter 20
Queuing Theory and Discrete Events Simulation for Health Care: From Basic Processes to
Complex Systems with Interdependencies ......................................................................................... 443
Alexander Kolker, Children’s Hospital and Health Systems, USA

Chapter 20, Queuing Theory and Discrete Events Simulation for Health Care: From Basic Processes
to Complex Systems with Interdependencies, has a twofold objective: (i) to illustrate practical limitations
of queuing analytic (QA) models compared to discrete-event simulation (DES) by applying both of them
to analyze the same problems, and (ii) to demonstrate practical application of DES models, starting from
simple examples and proceeding to rather advanced models.

Chapter 21
Modelling a Small Firm in Jordan Using System Dynamics.............................................................. 484
Raed M. Al-Qirem, Al-Zaytoonah University of Jordan, Jordan
Saad G. Yaseen, Al-Zaytoonah University of Jordan, Jordan

Chapter 21, Modelling a Small Firm in Jordan Using System Dynamics, introduces new performance
measures, based on the systems thinking paradigm, that can be used by Jordanian banks to assess the
credit worthiness of firms applying for credit. A simulator built on the system dynamics methodology,
the thinking tool of this chapter, is presented. The system dynamics methodology allows the bank to
test “What If” scenarios based on a model which captures the behavior of the real system over time.

Chapter 22
The State of Computer Simulation Applications in Construction ...................................................... 509
Mohamed Marzouk, Cairo University, Egypt

Chapter 22, The State of Computer Simulation Applications in Construction, presents an overview
of computer simulation efforts that have been performed in the area of construction engineering and
management. Also, it presents two computer simulation applications in construction; earthmoving and
construction of bridges’ decks. Comprehensive case studies are worked out to illustrate the practicality
of using computer simulation in scheduling construction projects, taking into account the associated
uncertainties inherent in construction operations.

Compilation of References .............................................................................................................. 535

About the Contributors ................................................................................................................... 570

Index ................................................................................................................................................... 578



Preface

A Chinese proverb says: “I hear and I forget. I see and I remember. I do and I understand.” In this
context, simulation is the next best thing after the “I do” part, as it is the nearest thing to giving a real-life
picture to images in the mind. Mirrors reflect real life into a picture with no existence of its own, whereas
simulation embodies our notions and ideas into a picture that can not only be seen, but played and
experimented with as well. Simulation environments exist on a number of dimensions in the market.
The desirable features of Discrete Event Simulation environments are taxonomized as modeling
features, simulation system features, and implementation features. While the modeling features include
modularity, reuse and the hierarchical structure of the model, the simulation system features include the
scalability, portability, and interoperability of the simulation system, and the implementation features
include distributed execution, execution over the internet, and ease of use. In order to accomplish the
aforementioned desirable features, many components must be examined, while taking into account the
market supply and demand factors. Actually, the race to accomplish such desirable features is as old as
simulation itself. The components to be examined in this book are: Methodologies, Simulation language,
Tutorials, Statistical analysis packages, Modeling, Animation, Interface, interoperability standards, Uses
and Applications, Stochastic / Deterministic, Time handling, and History.
In Handbook of Research on Discrete Event Simulation Environments: Technologies and Ap-
plications, simulation is discussed from within the different features of theory and application. The
goal of this book is not to look at simulation from traditional perspectives, but to illustrate the benefits
and issues that arise from the application of simulation within other disciplines. This book focuses on
major breakthroughs within the technological arena, with particular concentration on the accelerating
principles, concepts and applications.
The book caters to the needs of scholars, PhD candidates, researchers, as well as, graduate level stu-
dents of computer science, operations research, and economics disciplines. The target audience for this
book also includes academic libraries throughout the world that are interested in cutting edge research.
Another important segment of readers are students of Master of Business Administration (MBA) and
Master of Public Affairs (MPA) programs, which include information systems components as part of
their curriculum. To make the book accessible to all, a companion website was developed, which can
be reached through the link (http://www.computercrossroad.org/).
This book is organized in 22 chapters. On the whole, the chapters of this book fall into five categories,
while crossing paths with different disciplines: the first, Simulation Prelude, concentrates on
simulation theory; the second concentrates on Monte Carlo simulation; the third section concentrates
on Petri Nets; the fourth section sheds light on visualization and real-time simulation; and
the fifth section, living simulation, gives color to the black and white picture. The fifth section

discusses simulation applications in neural networks, data mining, networks, banks, construction, thereby
aiming to enrich this book with others’ knowledge, experience, thought and insight.
Chapter 1, Simulation: Body of Knowledge, attempts to define the knowledge body of simulation and
describes the underlying principles of simulation education. It argues that any programs in Modelling and
Simulation should recognize the multi- and interdisciplinary character of the field and realize the program
in wide co-operation. The chapter starts with the clarification of the major objectives and principles of
the Modelling and Simulation Program and the related degrees, based on a broad business and real world
perspective. After reviewing students’ background, especially the communication, interpersonal, and
team skills, the analytical and critical thinking skills, furthermore some of the additional skills leading
to a career, the employer’s view and possible career paths are examined. Finally, the core knowledge
body, the curriculum design and program related issues are discussed. The author hopes to contribute to
the recent discussions about modelling and simulation education and the profession.
Chapter 2, Simulation Environments as Vocational and Training Tools, investigates over 50 simula-
tion packages and simulators used in vocational and course training in many fields. Accordingly, the
50 simulation packages were categorized in the following fields: Pilot Training, Chemistry, Physics,
Mathematics, Environment and ecological systems, Cosmology and astrophysics, Medicine and Surgery
training, Cosmetic surgery, Engineering – Civil engineering, architecture, interior design, Computer and
communication networks, Stock Market Analysis, Financial Models and Marketing, Military Training
and Virtual Reality. The incentive for using simulation environments as vocational and training tools
is to save lives, money and effort.
Chapter 3, Agent-Based Modeling: A Historical Perspective and a Review of Validation and Verifi-
cation Efforts, traces the historical roots of agent-based modeling. This review examines the modern
influences of systems thinking, cybernetics as well as chaos and complexity on the growth of agent-
based modeling. The chapter then examines the philosophical foundations of simulation verification and
validation. Simulation verification and validation can be viewed from two quite different perspectives:
the simulation philosopher and the simulation practitioner. Personnel from either camp are typically
unaware of the other camp’s view of simulation verification and validation. This chapter examines
both camps while also providing a survey of the literature and efforts pertaining to the verification and
validation of agent-based models.
Chapter 4, Verification and Validation of Simulation Models, discusses validation and verification
of simulation models. The different approaches to deciding model validity are presented; how model
validation and verification relate to the model development process is discussed; various validation
techniques are defined; conceptual model validity, model verification, operational validity, and data
validity are covered; a superior verification and validation technique for simulation models, relying on
a multistage approach, is described; ways to document results are given; and a recommended procedure
is presented.
Chapter 5, DEVS-Based Simulation Interoperability, introduces the usage of DEVS for the purpose
of implementing interoperability across heterogeneous simulation models. It shows that the DEVS
framework provides a simple, yet effective conceptual basis for handling simulation interoperability. It
discusses the various useful properties of the DEVS framework, describes the Shared Abstract Model
(SAM) approach for interoperating simulation models, and compares it to other approaches. The DEVS
approach enables formal model specification with component models implemented in multiple program-
ming languages. The simplicity of the integration of component models designed in the DEVS, DTSS,
and DESS simulation formalisms and implemented in the programming languages Java and C++ is
demonstrated by a basic educational example and by a real world forest carbon accounting model. The

authors hope that readers will appreciate the combination of generality and simplicity, and that readers
will consider using the DEVS approach for simulation interoperability in their own projects.
The second section concentrates on Monte Carlo simulation, which is covered in Chapters 6 and 7
as follows:
Chapter 6, Experimental Error Measurement in Monte Carlo Simulation, describes the series of set-up
steps developed by the Genoa Research Group on Production System Simulation at the beginning of the
’80s: a sequence through which it is possible first to statistically validate the simulator, then to estimate
the variables which effectively affect the different target functions, then to obtain, through regression
meta-models, the relations linking the independent variables to the dependent ones (target functions) and,
finally, to proceed to the detection of the optimal functioning conditions. The authors pay great attention
to the treatment, evaluation and control of the Experimental Error, in the form of the Mean Square
Pure Error (MSPE), a measurement which is always culpably neglected in traditional experimentation
on simulation models but that, with its magnitude, can potentially invalidate the results obtained from
the model.
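To make the MSPE concrete, here is a minimal sketch (illustrative only, not the authors' code): the pure-error sum of squares is pooled over replicated simulation runs at identical factor settings and divided by the pooled degrees of freedom.

```python
# Illustrative sketch (not the authors' implementation): the Mean Square Pure
# Error is estimated from replicated simulation runs at identical factor
# settings, pooling the within-point sums of squares over all design points.
def mspe(replicates):
    # replicates: one list of repeated target-function values per design point
    ss_pure_error, dof = 0.0, 0
    for runs in replicates:
        mean = sum(runs) / len(runs)
        ss_pure_error += sum((y - mean) ** 2 for y in runs)
        dof += len(runs) - 1
    return ss_pure_error / dof

# Three hypothetical design points, each replicated with different seeds:
print(mspe([[10.2, 9.8, 10.0], [15.1, 14.9], [20.5, 19.5, 20.0]]))  # approx. 0.12
```

An MSPE that is large relative to the effects being estimated signals that the experimental error could invalidate the conclusions drawn from the model, which is the chapter's central warning.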
Chapter 7, Efficient Discrete Simulation of Coded Wireless Communication Systems, presents a simu-
lation method, named Accelerated Simulation Method (ASM), that provides a high degree of efficiency
and accuracy, namely for lower BER, where the application of methods like the Monte Carlo simulation
method (MCSM) is prohibitive, due to high computational and time requirements. The present work
generalizes the application of the ASM to a Wireless Communication System (WCS) modelled as a
stochastic discrete channel model, considering a real channel, where there are several random effects
that result in random energy fluctuations of the received symbols. The performance of the coded WCS
is assessed efficiently, with soft-decision (SD) and hard-decision (HD) decoding. The authors show
that this new method achieves a time efficiency of two to three orders of magnitude for SD and
HD, considering a BER = 1 × 10^-4, when compared to MCSM. The presented performance results are
compared with the MCSM, to check its accuracy.
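To see why plain Monte Carlo becomes prohibitive at low BER, consider this baseline sketch (a generic textbook example, not the chapter's ASM): uncoded BPSK over an AWGN channel, where a stable estimate of a BER near 1e-4 needs on the order of 100/BER = 1e6 simulated bits.

```python
import math
import random

# Plain Monte Carlo BER estimation for uncoded BPSK over AWGN (an
# illustrative baseline, not the chapter's ASM). For BER near 1e-4, roughly
# 100/BER = 1e6 bits are needed for a stable estimate -- the cost that
# accelerated methods such as the ASM aim to avoid.
def mc_ber(ebn0_db, n_bits, seed=7):
    rng = random.Random(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))   # noise std for unit-energy symbols
    errors = sum(1 for _ in range(n_bits) if 1.0 + rng.gauss(0.0, sigma) < 0.0)
    return errors / n_bits

theory = 0.5 * math.erfc(math.sqrt(10 ** (4 / 10)))  # BPSK BER at Eb/N0 = 4 dB
print(mc_ber(4.0, 200_000), theory)
```

At 4 dB the BER is around 1e-2, so 200,000 bits suffice here; at 1e-4 the same relative accuracy would cost a hundred times more runs, which is the regime the ASM targets.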
The third part of the book concentrates on Petri Nets; Chapters 8 through 10 cover this part as
follows:
Chapter 8, Teaching Principles of Petri Nets in Hardware Courses and Student’s Projects, presents
the principles of using the Petri Net formalism in hardware design courses, especially in the course
“Architecture of Peripheral Devices”. Several models and results obtained by individual or group student
projects are mentioned. First, the use of the formalism as a modeling tool is presented, proceeding from
Place/Transition nets to Coloured Petri nets. Then the possible use of Petri Nets as a hardware specification
for direct hardware implementation (synthesized VHDL for FPGA) is described. Implementation
and simulation results of three directly implemented models are presented.
Chapter 9, An Introduction to Reflective Petri Nets, introduces Reflective Petri nets, a formal model
for dynamic discrete-event systems. Based on a typical reflective architecture, in which functional aspects
are cleanly separated from evolutionary ones, that model preserves the description effectiveness and
the analysis capabilities of Petri nets. On the short-time perspective of implementing a discrete-event
simulation engine, Reflective Petri nets are provided with timed state-transition semantics.
Chapter 10, Trying out Reflective Petri Nets on a Dynamic Workflow Case, proposes a recent Petri
net-based reflective layout, called Reflective Petri nets, as a formal model for dynamic workflows. A
localized open problem is considered: how to determine what tasks should be redone and which ones
do not when transferring a workflow instance from an old to a new template. The problem is efficiently
but rather empirically addressed in a workflow management system. The proposed approach is formal,

may be generalized, and is based on the preservation of classical Petri nets structural properties, which
permit an efficient characterization of workflow’s soundness.
The fourth section of the book concentrates on visualization and real-time simulation. The chapters
11 through 14 cover this part as follows:
Chapter 11, Applications of Visual Algorithm Simulation, presents a novel idea to promote the interaction
between the user and the algorithm visualization system, called visual algorithm simulation. As a
proof of concept, the chapter presents an application framework called Matrix that encapsulates the
idea of visual algorithm simulation. The framework is applied by the TRAKLA2 learning environment,
in which algorithm simulation is employed to produce algorithm simulation exercises. Moreover, the
benefits of such exercises and applications of visual algorithm simulation in general are discussed.
Chapter 12, Virtual Reality: A New Era of Simulation and Modelling, discusses virtual reality as a
new era of simulation and modelling.
Chapter 13, Implementation of a DES Environment, describes a program system that implements a
Discrete Event Simulation (DES) development environment. The simulation environment was created
using the LabVIEW graphical programming system; a National Instruments software product. In this
programming environment the user can connect different procedures and data structures with “graphi-
cal wires” to implement a simulation model, thereby creating an executable simulation program. The
connected individual objects simulate a discrete event problem. The chapter describes all simulation
model objects, their attributes and methods. Another important element of the discrete event simulator is
the task list, which has also been created using task type objects. The simulation system uses the “next
event simulation” technique and refreshes the actual state (attribute values of all model objects) at every
event. The state changes are determined by the entity objects, their input, current content, and output.
Every model object can access (read) all and modify (write) a selected number of object attribute values.
This property of the simulation system provides the possibility to build a complex discrete event system
using predefined discrete event model objects.
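The “next event simulation” technique described above can be sketched in a few lines (a generic illustration, not the LabVIEW implementation): events wait in a time-ordered structure, and the clock jumps from one event to the next, refreshing the model state at each one.

```python
import heapq

# Illustrative next-event loop (not the chapter's LabVIEW system): events are
# kept in a time-ordered heap; the simulation clock jumps from one event to
# the next, and the model state is refreshed at each event.
def simulate(arrival_times, service_time):
    events = [(t, i, "arrival") for i, t in enumerate(arrival_times)]
    heapq.heapify(events)
    seq = len(arrival_times)            # tie-breaker for equal event times
    busy_until = 0.0
    departures = []
    while events:
        now, _, kind = heapq.heappop(events)
        if kind == "arrival":           # entity enters a single-server station
            start = max(now, busy_until)
            busy_until = start + service_time
            heapq.heappush(events, (busy_until, seq, "departure"))
            seq += 1
        else:                           # entity leaves the system
            departures.append(now)
    return departures

print(simulate([0.0, 1.0, 2.0], 1.5))   # -> [1.5, 3.0, 4.5]
```

The predefined model objects of the chapter play the role of the arrival and departure handlers here; a complex model is built by composing many such objects around one shared event list.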
Chapter 14, Using Simulation Systems for Decision Support, describes the use of simulation systems
for decision support of real operations, which is the most challenging application domain in
the discipline of modeling and simulation. To this end, the systems must be integrated as services into
the operational infrastructure. To support discovery, selection, and composition of services, they need
to be annotated regarding technical, syntactic, semantic, pragmatic, dynamic, and conceptual catego-
ries. The systems themselves must be complete and validated. The data must be obtainable, preferably
via common protocols shared with the operational infrastructure. Agents and automated forces must
produce situation adequate behavior. If these requirements for simulation systems and their annotations
are fulfilled, decision support simulation can contribute significantly to the situational awareness up to
cognitive levels of the decision maker.
The final part of the book, living simulation, is covered in Chapters 15 through 22 as follows:
Chapter 15, The Simulation of Spiking Neural Networks, is an overview of the simulation of spik-
ing neural networks that relates discrete event simulation to other approaches. To illustrate the issues
surrounding this work, the second half of this chapter presents a case study of the SpikeStream neural

simulator that covers the architecture, performance and typical applications of this software along with
some recent experiments.
Chapter 16, An Integrated Data Mining and Simulation Solution, proposes an intelligent DSS
framework based on data mining and simulation integration. The main output of this framework is the
increase of knowledge. Two case studies are presented, the first one on car market demand simulation.
The simulation model was built using neural networks to get the first set of prediction results; the data
mining methodology used is named ANFIS (Adaptive Neuro-Fuzzy Inference System). The second case
study demonstrates how data mining and simulation are applied in assuring quality in higher education.
Chapter 17, Modeling and Simulation of IEEE 802.11g using OMNeT++, aims to provide a tutorial
on OMNeT++ focusing on modeling and performance study of the IEEE 802.11g wireless network.
Due to the complex nature of computer and telecommunication networks, it is often difficult to predict
the impact of different parameters on system performance especially when deploying wireless networks.
Computer simulation has become a popular methodology for performance study of computer and telecom-
munication networks. This popularity results from the availability of various sophisticated and powerful
simulation software packages, and also because of the flexibility in model construction and validation
offered by simulation. While various network simulators exist for building a variety of network models,
choosing a good network simulator is very important in modeling and performance analysis of wireless
networks. A good simulator is one that is easy to use; more flexible in model development, modification
and validation; and incorporates appropriate analysis of simulation output data, pseudo-random number
generators, and statistical accuracy of the simulation results. OMNeT++ is becoming one of the most
popular network simulators because it has all the features of a good simulator.
Chapter 18, Performance Modeling of IEEE 802.11 WLAN using OPNET: A Tutorial, aims to provide
a tutorial on OPNET focusing on the simulation and performance modeling of IEEE 802.11 wireless
local area networks (WLANs). Results obtained show that OPNET provides credible simulation results
close to a real system.
Chapter 19, On the Use of Discrete-Event Simulation in Computer Networks Analysis and Design,
describes a newly developed research-level computer network simulator, which can be used to evalu-
ate the performance of a number of flooding algorithms in ideal and realistic mobile ad hoc network
(MANET) environments. It is referred to as MANSim.
Chapter 20, Queuing Theory and Discrete Events Simulation for Health Care: From Basic Processes
to Complex Systems with Interdependencies, has a twofold objective: (i) to illustrate practical limitations
of queuing analytic (QA) models compared to discrete-event simulation (DES) by applying both of them
to analyze the same problems, and (ii) to demonstrate practical application of DES models, starting from
simple examples and proceeding to rather advanced models.
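For a flavor of the QA-versus-DES comparison, here is a standard textbook example (illustrative only, not one of the chapter's health-care models): the analytic mean waiting time of an M/M/1 queue checked against a tiny discrete-event estimate built on Lindley's recursion.

```python
import random

# Textbook M/M/1 example (illustrative; not the chapter's models): the
# queuing-analytic mean waiting time is compared with a discrete-event
# estimate obtained from Lindley's recursion W(n+1) = max(0, W(n) + S - A).
lam, mu = 0.8, 1.0                      # arrival and service rates, rho = 0.8
wq_analytic = lam / (mu * (mu - lam))   # M/M/1 result: Wq = rho / (mu - lam)

def simulate_wq(n_customers, seed=1):
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n_customers):
        total += wait
        service = rng.expovariate(mu)
        interarrival = rng.expovariate(lam)
        wait = max(0.0, wait + service - interarrival)
    return total / n_customers

print(wq_analytic)                      # 4.0 for rho = 0.8
print(simulate_wq(200_000))             # close to 4.0
```

The analytic formula is only available under restrictive assumptions (Poisson arrivals, exponential service, one server); the simulation loop keeps working when those assumptions are dropped, which is the practical limitation of QA the chapter illustrates.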
Chapter 21, Modelling a Small Firm in Jordan Using System Dynamics, introduces new performance
measures, based on the systems thinking paradigm, that can be used by Jordanian banks to assess
the credit worthiness of firms applying for credit. A simulator built on the system dynamics
methodology, the thinking tool of this chapter, is presented. The system dynamics methodology
allows the bank to test “What If” scenarios based on a model which captures the behavior
of the real system over time.
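The “What If” testing described above can be sketched with a toy stock-and-flow model (names and numbers are invented for illustration; this is not the chapter's credit model): a stock is integrated over time from its flows, one flow feeds back from the stock itself, and scenarios are run by changing a policy parameter.

```python
# Toy stock-and-flow sketch (not the chapter's credit model): cash is a stock
# integrated over time from its flows, and the interest flow feeds back from
# the stock itself -- the kind of loop system dynamics captures. "What If"
# scenarios are run by changing a policy parameter.
def run(cash0, monthly_sales, cost_fraction, interest, months):
    cash, history = cash0, [cash0]
    for _ in range(months):
        net_operating = monthly_sales * (1 - cost_fraction)
        interest_income = interest * cash        # feedback: flow depends on stock
        cash += net_operating + interest_income  # Euler step, dt = 1 month
        history.append(cash)
    return history

base = run(cash0=100.0, monthly_sales=50.0, cost_fraction=0.9, interest=0.01, months=12)
what_if = run(cash0=100.0, monthly_sales=50.0, cost_fraction=0.8, interest=0.01, months=12)
print(round(base[-1], 2), round(what_if[-1], 2))
```

Comparing the two trajectories is exactly a “What If” test: the same model, rerun under an alternative policy, shows how the firm's cash position would evolve over time.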
Chapter 22, The State of Computer Simulation Applications in Construction, presents an overview
of computer simulation efforts that have been performed in the area of construction engineering and
management. Also, it presents two computer simulation applications in construction; earthmoving and
construction of bridges’ decks. Comprehensive case studies are worked out to illustrate the practicality

of using computer simulation in scheduling construction projects, taking into account the associated
uncertainties inherent in construction operations.
In conclusion, it is worth reaffirming that this book is not meant to look at simulation from
traditional perspectives; rather, it points toward illustrating the benefits and issues that arise from
the application of simulation within other disciplines. As such, this book is organized in 22 chapters,
sorted into five categories, while crossing paths with different disciplines: the first, Simulation
Prelude, concentrated on simulation theory; the second concentrated on Monte Carlo simulation; the
third section concentrated on Petri Nets; the fourth section shed light on visualization and real-time
simulation; concluding in the fifth section, living simulation, which gave color to the black
and white picture, as it discussed simulation applications in neural networks, data mining, networks,
banks, and construction.

Acknowledgment

The editors would like to acknowledge the relentless support of the IGI Global team, as their help and
patience have been infinite and significant. Moreover, the editors would like to extend their gratitude
to Mehdi Khosrow-Pour, Executive Director of the Information Resources Management Association,
and Jan Travers, the Vice President. Likewise, the editors would like to extend their appreciation to
the Development Division at IGI Global, namely Julia Mosemann, the Development Editor; Rebecca
Beistline, the Assistant Development Editor; and Christine Bufton, the Editorial Assistant.
In this regard, the editors would like to express their recognition to their respective organizations
and colleagues for the moral support and encouragement that have proved to be indispensable. By the
same token, the editors would like to thank the reviewers for their relentless work and for their constant
demand for perfection.
More importantly, the editors would like to extend their sincere appreciation and indebtedness to
their families for their love, support, and patience. Also, as 2009 is the International Year of Astronomy,
we dedicate this work to the memory of the father of modern science, Galileo Galilei, who once stated:
“But it does move”.

Evon M. Abu-Taieh, PhD


Asim A. El-Sheikh, PhD
Editors

Chapter 1
Simulation:
Body of Knowledge
Istvan Molnar
Bloomsburg University of Pennsylvania, USA

ABSTRACT
This chapter attempts to define the knowledge body of simulation and describes the underlying principles
of simulation education. It argues that any programs in Modelling and Simulation should recognize
the multi- and interdisciplinary character of the field and realize the program in wide co-operation.
The chapter starts with the clarification of the major objectives and principles of the Modelling and
Simulation Program and the related degrees, based on a broad business and real world perspective. After
reviewing students’ backgrounds, especially communication, interpersonal, team, analytical and critical
thinking skills, as well as some of the additional skills facilitating entry into a career, the employer’s
view and possible career paths are examined. Finally, the core knowledge body, the curriculum design
and program related issues are discussed. The author hopes to contribute to the recent discussions about
modelling and simulation education and the profession.

INTRODUCTION however, is not very well recognized by academ-


ics; M&S as scientific disciplines are “homeless”.
Since the 70s, simulation education has been in This reflects and underlines the interdisciplinary
the focus of attention. The growing acceptance of and multidisciplinary nature of M&S and at the
modelling and simulation (M&S) across different same time causes special problems in educational
scientific disciplines and different major application program and curriculum development.
domains (e.g., military, industry, services) increased Recognizing controversial developments and the
the demand for well-qualified specialists. The fact that actions are necessary, different stakeholders
place and recognition of modelling and simulation, of the international simulation community started to
attack the problems. As part of the actions, Rogers
DOI: 10.4018/978-1-60566-774-4.ch001 (1997) and Sargent (2000) aimed to define M&S

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
Simulation

as a discipline, describing the characteristics • it is easy to move from one country to


of the profession, while Oren (2002) aimed at the other (within the European Higher
establishing its code of professional ethics. As Education Area) – for the purpose of fur-
a consequence of these efforts, questions were ther study or employment;
raised by Nance (2000) and Crosbie (2000) about • the attractiveness of European higher edu-
the necessity, characteristics and content of an cation is increased, so many people from
internationally acceptable educational program non-European countries also come to study
of simulation for different levels of education and/or work in Europe;
(undergraduate, graduate, and postgraduate). The • the European Higher Education Area pro-
first steps triggered a new wave of discussions by vides Europe with a broad, high quality
Szczerbicka (2000), Adelsberger (2000), Altiok and advanced knowledge base, and en-
(2001), Banks (2001), Nance and Balci (2001), sures the further development of Europe as
followed by Harmon (2002) and Fishwick (2002) a stable, peaceful and tolerant community.”
around the 50th anniversary of the Society for (Bologna Process, 2008)
Computer Simulation and these discussions are
not finished yet (e.g., Birta (2003a), Paul et al. These facts and developments call for action
(2003), and others). At the turn of the century, 50 and international efforts to introduce changes in
years after the professional field was established, higher education based on the Bologna Process.
special attention was devoted to the subject of As a result of the globalization of business, science
the simulation profession and the professional "simulationist", as well. Definitions of the profession, along with possible programs and curricula, were published, and attempts to define the knowledge body of simulation were discussed (e.g., Birta, 2003b and Oren, 2008).

The growth of simulation applications in industry, government and especially in the military in the US led to a growing demand for simulation professionals in the 90's. Academic programs have been introduced and standardization efforts undertaken; moreover, new organizations have been established to maintain different aspects of simulation. Europe has been following these trends with a slight delay. The Bologna Process is a European reform process aimed at establishing a European Higher Education Area by 2010. It is a process driven by the 46 participating countries in cooperation with a number of international organizations; it is not based on an intergovernmental treaty. "By 2010 higher education systems in European countries should be organized in such a way that:

and also education, it is expected that fundamental questions of educational practice will be regulated within the framework of or in compliance with the Bologna principles.

The model curriculum for graduate degree programs in M&S, which is the focus of this paper, is based on the typical degree structure, which is in compliance with the Bologna principles. Nevertheless, a number of U.S. higher education organizations (around 20% of the graduate schools) still resist accepting bachelor degrees of countries that signed the Bologna Treaty and deny that there are implications for the U.S. (see Jaschik 2006 and CGS Report 2007). The author's opinion is that progress cannot be stopped, especially not without viable alternative program(s). Because of the identical degree structure applied in the Bologna Process regulated countries, the US and Canada, the presented suggestions can be widely applied.

This paper attempts to define the knowledge body of simulation and the underlying principles of M&S education in relation to a Master's Degree program. The content representation intends to
support the introduction of a curriculum model rather than a concentration or option in an MBA or MS program. Placing this model curriculum into a specific context, and examining the reasoning behind the degree structure and course content descriptions, allows it to be applied by educational program designers across the globe and helps to avoid the difficulty of having to compare a large number of different educational systems. In addition, students with different backgrounds can use the model curriculum to obtain an overview of the discipline. Professionals and managers of different application areas can get a basic understanding of the qualifications and skills they can expect from recently hired new graduates.

The major strength of this contribution is that it discusses the related subjects from a new global, multi-disciplinary and quality-oriented perspective, built on solid foundations and providing a flexible but modular educational approach that serves different knowledge levels and professional groups. A significant part of the questions about the necessity, characteristics and content of M&S education was already discussed, in different aspects, in the early 90's in Europe. This was also the time when, within the framework of the Eastern-European economic transition, higher education was being transformed by adapting key technologies whilst preserving the qualitative aspects of its systems. Can the experience gained during this transition be reused?

OBJECTIVES OF THE M&S PROGRAM

The model curriculum is designed to reflect current and future industry needs and to serve current standards, and it can be used by higher educational institutions worldwide in their curriculum design. By adopting this model curriculum, faculty, students, and employers can be assured that M&S graduates are competent in a set of professional knowledge and skills, know about a particular application domain in detail, and are able to apply a strong set of professional values essential for success in the field.

Similarly to the MSIS Model Curriculum (Gorgone et al., 2006), the skills, knowledge, and values of M&S graduates are summarized in Figure 1. Accordingly, the curriculum model is designed as a set of interrelated building blocks and applies the "sliding window" concept.

The course supply (the number and content of courses offered) is dictated by institutional resource constraints, while the demand (the students' choice of courses and course content) is dictated by the background of the students and the program objectives. The entire model curriculum, from fundamentals to the most advanced courses, consists of 20 courses; however, 12 courses are sufficient to successfully finish the program.

M&S graduates will have to have the following skills, knowledge and values (see Figure 1):

• Sound theoretical knowledge of M&S
• A core of M&S technology and management knowledge
• Integration of M&S knowledge into a specific application domain (e.g., business, engineering, science, etc.)
• Broad business and real world perspective
• Communication, interpersonal and team skills
• Analytical and critical thinking skills
• Specific skills leading to a career

The specification of the curriculum includes four components:

• M&S Theoretical Foundations: Most of the foundation courses serve as pre-requisites for the rest of the curriculum and provide in-depth knowledge of the basics of M&S. Courses are designed to accommodate a wide variety of students' needs.
Figure 1. Skills, knowledge, and values of M&S graduates
(Figure: nested building blocks showing, from outside in: Real World Perspective; Communication/Interpersonal/Team Skills; Analytical and Critical Thinking Skills; Integration; Theoretical Foundations; Application Domain Electives; M&S Technology and Management.)

• M&S Technology and Management: These courses cover general and specific M&S-related IT knowledge and, furthermore, educate students to work using collaboration and project management tools.
• Integration: An integration component is required after the core. This component addresses the increasing need to integrate a broad range of technologies and offers students the opportunity to synthesize the theory and practice they have learned in the form of a capstone course, which also provides implementation experience through a comprehensive problem solution in a specific application domain.
• Application Domain Electives (Career Tracks): A high level of flexibility can be achieved with courses that can be tailored to meet individual, group or institutional needs by selecting specific career tracks to address current needs.

A continuous assessment of the program must ensure that the objectives and expected outcomes of the M&S program are achieved. Quality assurance can take different forms and the measurement of student and program progress can use different methods; the emphasis should rather be on monitoring, on the analysis of the results, and on generating actions to further improve quality.

STUDENTS' BACKGROUND

It is often the case that students entering the M&S program have different backgrounds; students entering directly from undergraduate programs may have a BS or BA degree in Business (e.g., Information Systems), in Science (e.g., Computer Science), in Engineering (e.g., Electrical Engineering) or in some other domain. The M&S program may also attract experienced professionals and people seeking career changes, who will study as part-time evening students and usually require access to the courses through a remote learning environment. With the rising volume of international student exchanges, international students' needs must also be taken into account.
Table 1. Typical job objectives (career path) of M&S graduates

Engineer in design and development
Engineer in manufacturing and logistics planning
Engineer in energy production and dispatch
Engineer/Economist in BPR
Engineer/Scientist in aviation and space research
MDs and nurses in hospital operations
Financial (asset/liability/stock market) analyst
Game designer/developer
Systems analyst/designer
Supply chain manager
Bank customer service analyst
Military analyst
Researcher and technical specialist
A Ph.D. program leading to research

The M&S program architecture accommodates a wide diversity of backgrounds and learning environments.

Background analysis usually does not cover any details about the quality aspects of the students' entry characteristics and related requirements, but quality concerns are strong, especially in mathematics and science in the US (see National Science Board 2008, p. 1: "Relative ranking of U.S. students on international assessments gauges U.S. students performance against peers in other countries and economies. Among Organisation for Economic Co-operation and Development (OECD) nations participating in a recent assessment of how well 15-year-old students can use mathematics and science knowledge, U.S. students were at or near the bottom of the 29 OECD members participating."). As a consequence, freshmen students' entry-level knowledge can be very different (see PISA 2006); therefore, the solutions of US universities should not be copied internationally without careful and critical analysis. Decisions related to course content must also take the above-mentioned facts into consideration.

CAREER PATHS

Applications are concentrated almost exclusively in government, the military, large banks and industrial companies, although M&S is rapidly gaining acceptance in major and mid-sized corporations. Career paths now include but are not restricted to:

• application domain related simulation model usage,
• simulation model development in a particular application domain (e.g., scientific research, government, military, industry, business and economics, education, etc.),
• simulation software development,
• consulting and systems integration.

Some of the typical job objectives of M&S graduates are listed in Table 1. The rapidly changing job market and the current demand can best be estimated by using job-searching web sites (e.g., http://careers.simplyhired.com/a/job-finder).

EMPLOYERS' VIEW

Because of the wide variety of M&S programs offered, employers are uncertain about the knowledge, skills, and values of new graduates. Students can ease employers' concerns by ensuring that they take a set of courses which lays the foundation for practical experience in a particular simulation application domain.

It is a further advantage if students are able to help to overcome the skill shortage that exists in many of the major M&S application fields. It is a legitimate employer expectation that students graduating with an M&S degree should be able to take on job-related responsibilities (e.g., work independently on separate tasks or assignments) and also serve as a mentor or middle-range manager
to staff with a lower level of academic education (see Madewell and Swain, 2003).

PRINCIPLES OF THE M&S DEGREE

The author strongly believes that certain aspects of the specific M&S educational philosophy and its principles need to be underlined separately. Based on Molnar et al. (2008), some of these underlying philosophical aspects are listed as follows:

• Simulation: Art or Science: If we practice M&S, we certainly practice science and art at the same time; in different phases of the simulation modelling process, different predominating elements of science and art appear. The whole process is always determined by time and space; we could say, determined by the "Zeitgeist" and the "genius loci".
• Central Role of Mathematical Modelling: Because the strength of a simulation depends on the underlying model, mathematical modelling plays a crucial role in the M&S process. Creative thinking, holistic thinking and lateral thinking are important attributes that are to a certain extent innate but can be improved by practice and encouragement. In the process of modelling that uses the above skills, we contend that art is the predominant factor.
• Simulation software and applied technology: One has to distinguish between developing simulation models and using ready-made simulation software. At the same time, the "mindless" use of ready-made simulation software, which only requires model computation and experimentation, is a kind of systematic, planned series of activities using a deterministic and finite machine, a computer, in order to search for conclusive evidence or for laws. Hence, this latter is exclusively a scientific venture. Analysing the rough typology of computer simulation models to understand the implications for education, one can demonstrate that the use of simulation software tools and applied technologies needs straightforward linear, convergent thinking. Some technical and scientific expertise is needed to make an appropriate selection of software or to customise the software product itself; this might be time-consuming, but it is relatively easy to learn and also to teach. The usage of simulation software and related applied technology increases programming efficiency and perhaps also overall project efficiency. It can also cause several problems and might be the source of significant errors. Nevertheless, the usage itself is not considered by the author as a central issue.
• Skills needed for a good simulation: Rogers (1997) classified several skills, but only the most important are advocated by the author (see also Molnar et al., 1996): good judgement; a thorough knowledge of mathematical statistics and probability theory; different ways of thinking; certain personal skills (e.g., communication skills, which also include listening to the person with the problem and translating those words into a model; the ability to adapt and learn; learning to learn and life-long learning); some managerial expertise; and team spirit. Supplementary issues related to general problems of the profession (e.g., responsibilities, value system, morals) are discussed in detail in Sargent (2000) and Oren (2002).

The philosophy discussed above serves as one element of the foundation for developing the M&S curriculum. In addition to this philosophy, some basic principles are also applied. These basic principles are a series of additional considerations which are used to design the architecture of the
M&S program. The most important ones will be discussed briefly and cover the following:

• Professional degree with added value: The M&S is a professional degree which adds value for students studying beyond the bachelor degree and integrates the information culture and the organizational culture. The investment of both the students and the employers will pay off.
• Core courses: The degree includes a standard set of core courses in M&S and related technology. Besides the core courses, the flexibility of the curriculum ensures the accommodation of students with differing backgrounds, skills, and objectives.
• Career Tracks: The curriculum focuses on current application domains and also provides great flexibility to include emerging concepts, "career tracks." This flexibility allows students to concentrate on a specific area within the competency of the faculty.
• Integration of non-technical skills: Ethics and professionalism, presentation skills (both oral and written), negotiation, promotion of ideas, team and business skills, and furthermore customer orientation and a real-world focus are integrated throughout the whole program.
• Practicum: A practicum is considered to be a long project lasting one term that solves a real-world problem for a client within a given timeframe. It is recommended that institutions related to the application domain (e.g., industry) support the project by providing professional support and financial incentives, which increases the quality of the student output. The practicum can be applied as a graduation requirement used instead of, or in addition to, a master's thesis.
• Capstone Integration: The purpose of the capstone course is to integrate the program components and lead students into thinking about the integration of M&S knowledge and application-related technologies, policies and strategies.
• Unit Requirements: The program architecture is flexible and compatible with institutional unit requirements for an M&S degree. These requirements may range from 24 to 60 credit units, depending on the individual institution, the program objectives, and student entry and exit characteristics.

THE DESCRIPTION OF THE M&S PROGRAM

The M&S program's building blocks are presented in Figure 2. The undergraduate courses are pre-requisites for the program. Students with missing pre-requisites or inadequate backgrounds are required to take additional courses.

The M&S Core defines the fundamental knowledge about M&S required from students and consists of two blocks: the M&S Theoretical Foundations block and the M&S Technology and Management block. The core represents a standard that defines the M&S program and differentiates it from Computer Science, Information Systems or Science and Engineering programs and from concentrations within these programs.

The Integration block is a one-semester capstone course and is fully devoted to a practical project.

The Career Tracks block consists of elective courses organized around careers.

According to Figure 2, the M&S program can be composed of different courses, as the author suggests in Table 2. The pre-requisites are presented in three different versions based on students' professional orientation. Three typical M&S application domains are presented: Business and Economics, Engineering, and Computer Science.

The program's core courses are established along scientific disciplines rather than application domains; therefore, the knowledge acquired by students is more flexible and portable.
Figure 2. Elements of the M&S program
(Figure: stacked building blocks, from bottom to top: BS/BA in Engineering/Science/Business (undergraduate); M&S Theoretical Foundations; M&S Technology and Management; Integration (Capstone course); Application Field Electives (Career tracks).)

The sequence of the program blocks strengthens the theoretical foundations and provides a learning path from the "general" theory to the "particular" application. This approach might serve the described philosophy well and helps to avoid application domain "blindness". Finally, this approach also supports the theoretically and methodologically founded applications of single application domains and, beyond this, of multi- and interdisciplinary application domains. To increase learning efficiency, the course-related knowledge body should be specified, paying special attention to controlling and restricting overlap.

Table 2 also intends to provide a suggested course sequence within the blocks. This course sequence expresses a pre-requisite structure and can be used by students.

Table 3 clearly shows that students' choices depend on the individual institutional resources, the program objectives, and the student entry and exit characteristics. On the one hand, Table 3 is directly determined by the M&S knowledge body
Table 2. Pre-requisites for the M&S program

Pre-requisite (all domains): BA/BS undergraduate degree.

Additional application domain specific pre-requisites (4-6 courses per domain):

Business and Economics: Critical Thinking; Mathematical Thinking; Programming, data and object structures; Marketing or Organizational Behaviour; Operations Management; IT Infrastructure; Business Analysis; Emerging Technologies; Implications of Digitalization.

Engineering: Critical Thinking; Mathematical Thinking; Programming, data and object structures; OO Systems Analysis and Design; Software Engineering; Systems Engineering; Dynamic Systems; Emerging Technologies; Implications of Digitalization.

Computer Science: Critical Thinking; Mathematical Thinking; OO Programming Language; OO Systems Analysis and Design; Software Engineering; Mathematical Statistics; IT Architectures; Emerging Technologies; Implications of Digitalization.
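The pre-requisite structure expressed by such a course sequence can be thought of as a directed acyclic graph, with any valid study order obtained by topological sorting. The sketch below is illustrative only: the course names are taken from the curriculum tables, but the dependency edges are assumptions made for demonstration, not the curriculum's official sequence.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Illustrative pre-requisite graph: course -> set of pre-requisite courses.
# The edges are assumed here purely for demonstration.
prereqs = {
    "Systems Science": set(),
    "Mathematical Modelling": {"Systems Science"},
    "Computer Simulation": {"Mathematical Modelling"},
    "Simulation Data Analysis and Visualization": {"Computer Simulation"},
    "Simulation Optimization": {"Computer Simulation"},
    "Decision Support Systems": {"Simulation Optimization"},
}

# A topological order of the graph is one valid course-by-course study plan.
order = list(TopologicalSorter(prereqs).static_order())
print(order)

# Sanity check: every course appears after all of its pre-requisites.
position = {course: i for i, course in enumerate(prereqs) if False} or \
           {course: i for i, course in enumerate(order)}
for course, needed in prereqs.items():
    assert all(position[p] < position[course] for p in needed)
```

A fuller model would also attach credit units and block membership (Theoretical Foundations, Technology and Management, electives) to each node, but the ordering logic would stay the same.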
Table 3. The complete M&S curriculum

M&S Theoretical Foundation (3-5 courses): Systems Science; Mathematical Modelling; Computer Simulation; Computational Mathematics a) Numerical Methods, b) Stochastic Methods, c) Symbolic Computations; Simulation Data Analysis and Visualization; Simulation Optimization; Emerging M&S related Technologies and Issues.

M&S Technology and Management (2-4 courses): Model Description Formalisms; Simulation Languages a) discrete, b) continuous, c) combined; Symbolic Programming Languages a) Maple, b) Mathematica; M&S Integrated Development Environments; Decision Support Systems; Distributed Systems; Software Quality Assurance; Project and Change Management; Emerging M&S related Technologies and Issues.

Application Domain Specific Electives (2-4 courses): System Dynamics and Global Models; Microsimulation and its Application; Economic Dynamics and Macroeconomic Models; Queuing Systems vs. Manufacturing (Logistics); Training Simulators vs. Power Plant Application; Game Simulators vs. Business Game; Dynamic Systems vs. Population Dynamics; Embedded Simulation Systems vs. DSS; Emerging M&S related Application Domains.

and on the other hand its content helps to specify this knowledge body in detail.

As indicated in Table 2, the M&S program can be a minimum of 24 units for well-prepared students and up to 60 units for students with no preparation, as described in Table 4.

The M&S knowledge body is heavily discussed (Birta 2003a, Birta 2003b and Oren 2008). The sources evaluated clearly show that no consensus has been achieved yet. Table 2 reflects the author's professional values, perception and understanding of M&S and its education.

Table 5 demonstrates the details of four different courses in relation to the M&S knowledge body. The selected courses give a cross-sectional view, each presenting one particular course together with the knowledge body it covers.

Based on the model curriculum presented, educational institutions can develop their own M&S program by following the basic steps described above.

Table 4. The "window size" of the program

Courses                       Well prepared student    Student with no preparation
Core courses                  15                       27
Integration                   3                        3
Application domain            6                        12
Additional required courses   -                        18
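As a quick consistency check, the per-component figures in Table 4 can be summed and compared against the 24 to 60 credit-unit range quoted under Unit Requirements. A minimal sketch, using the numbers exactly as given in Table 4 (the dictionary layout itself is merely illustrative):

```python
# Credit units per program component, as listed in Table 4.
window = {
    "well prepared student": {
        "core courses": 15, "integration": 3,
        "application domain": 6, "additional required courses": 0,
    },
    "student with no preparation": {
        "core courses": 27, "integration": 3,
        "application domain": 12, "additional required courses": 18,
    },
}

totals = {profile: sum(parts.values()) for profile, parts in window.items()}
print(totals)  # {'well prepared student': 24, 'student with no preparation': 60}

# The two totals reproduce the 24-60 unit range stated for the program.
assert totals["well prepared student"] == 24
assert totals["student with no preparation"] == 60
```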
Table 5. The rough course content used to define the detailed knowledge body

Computer Simulation: Simulation modelling basics; discrete, continuous and combined models; random number generation; numerical integration; simulation data analysis; verification and validation; experimental design and sensitivity analysis; simulation optimization.

Emerging Technologies and Issues: Fuzzy modelling; agent-based modelling; M&S of business processes; data mining; SOA and Web services; mobile and ubiquitous computing; business intelligence.

Integrated Capstone: Introduction to the M&S project; project realization based, e.g., on a prototyping life cycle; project impact on the organization.

Implications of Digitization: Ethics; virtual work and telecommuting; globalization and outsourcing; government regulations; implications of AI; intellectual property; digital divide; privacy.

EDUCATIONAL PROBLEMS

Analysing the education of M&S, one can realize that the major difficulty lies in the "Janus face" of simulation: both the artistic and the scientific characteristics should be taught in the time available. The problems are discussed in detail in Molnar et al. (2008).

Within the frame of typical one-semester simulation courses, most lecturers try to teach different subjects, like system analysis, mathematical modelling, using simulation software, planning experiments and analysing their results, and, in addition to all these, specific knowledge of the application domain. Do we want too much?

Most of these courses are dedicated to students of different application domains (engineers, economists, etc.), sometimes even to students of the broader field of computer science. None of the first group of students has deeper computer knowledge; none of the second group of students has deeper knowledge of the application domains. Worse, creative thinking is generally neglected in our present education systems. Following the organizational rules and the educational efficiency criteria can cause further problems (e.g., big class sizes).

An often raised question and a usual dilemma of many lecturers is how to create the curriculum for these courses: what phase of the simulation modelling process should be taught in detail, and where should the main point(s) of the course lie? Analysing the problem, basically two possibilities are given:

1. Concentrate on mathematical modelling: Because of the increasing complexity of real-world systems and their mathematical models, one can put the question of how to create mathematical models at the centre of the course. Instead of technical realisation and computational efforts, the lecturer can try to teach how to create a model and how to validate and use it.
2. Concentrate on simulation software usage: Because of the complexity of modern simulation software tools, the education of simulation software and its usage is the main point of the course and can cover the whole semester if necessary. The main point of the course is how to use the software for simulation modelling purposes. In order to get the appropriate effect, the lecturer usually uses prepared models for demonstration
of software power. Thus, both mathematical models and simulation techniques are taught.

The lack of time makes the simultaneous teaching of mathematical modelling and software usage impossible. If the lecturer nevertheless tries to do so, she/he will be under continuous time pressure and will have the feeling that the educational results are insufficient. Another problem addressed is the knowledge level of students (see national and international comparisons, e.g., National Science Board 2008 and PISA 2006).

The second solution, concentrating on software usage, is much more dangerous to implement, because it just transfers the actual technical level of knowledge (even though it is non-standardized and becomes quickly obsolete) and does not support the creative thinking process. The aggressive effort to cover the M&S life cycle in education does not make it possible to concentrate on the main problems of application. Nevertheless, following from the above, this concept is easier to teach.

Both approaches have the fundamental disadvantage that the course is not delivered in co-operation, and the interdisciplinary and multidisciplinary aspect of M&S education cannot prevail. Co-operation across departmental and college borders can help, but will never be able to resolve low enrolment problems or problems related to a low student knowledge level, which can further endanger the quality of the M&S program.

One can ask: is the profession of 'simulationist' so difficult that one should teach both approaches at a high level of detail? Yes, the author thinks so, even if there are naturally endowed individual experts who know both; these, however, usually work in teams! In order to teach both, the lecturer must introduce a curriculum that increases the quality of the mathematical modelling education and accept the fact that mathematical modelling, and thus simulation, is a kind of art and an intellectual challenge.

To achieve quality while keeping creative thinking, we suggest the introduction of a modular curriculum structure and the use of co-operation. In order to accomplish the educational goals, there is sometimes the possibility and the organisational background to increase the time frame of the education and extend the curricula to more than one semester. An often accepted way of increasing the time frame of the M&S courses is the insertion of the first simulation course into the undergraduate curriculum, while maintaining the graduate and postgraduate level M&S courses. The extreme complexity of the M&S knowledge body, the inter- and multi-disciplinary nature of the different subjects and the large number of different graduate courses call for institutional, even international, co-operation (e.g., a joint master curriculum, co-operation of MISS (McLeod Institute of Simulation Sciences) institutes, etc.).

The courses offered on the basis of a wide range of co-operation can give an excellent possibility for meeting different student demands. Students have great freedom to choose one or more courses from the offered programs. In addition, the flexible course structure makes it possible to support important programs:

• Master programs (e.g., MBA or MS) in M&S
• PhD programs in M&S
• Retraining courses (for engineers and economists)
• Short-cycle retraining programs

CONCLUSION

It is a common problem in different countries to create simulation curricula that are able to cover the rapidly changing subject of M&S. Based on the long experience of the author in simulation education and in international curriculum development projects, a flexible, modular M&S model curriculum is suggested. The author sincerely
hopes that this paper gives a flavour of M&S courses and that readers are willing to rethink and discuss any of the points raised.

Finally, the author believes that all efforts to increase the quality of M&S education are well invested, because:

"Seldom have so many independent studies been in such agreement: simulation is a key element for achieving progress in engineering and science." (Report of the National Science Foundation (NSF) Blue Ribbon Panel on Simulation-Based Engineering Science, May 2006)

REFERENCES

Adelsberger, H. H., Bick, M., & Pawlowski, J. M. (2000, December). Design principles for teaching simulation with explorative learning environments. In J. A. Joines, R. R. Barton, K. Kang & P. A. Fishwick (Eds.), Proceedings of the 2000 Winter Simulation Conference, Piscataway, NJ (pp. 1684-1691). Washington, DC: IEEE.

Altiok, T. (2001, December). Various ways academics teach simulation: are they all appropriate? In B. A. Peters, J. S. Smith, D. J. Medeiros & M. W. Rohrer (Eds.), Proceedings of the 2001 Winter Simulation Conference, Arlington, VA (pp. 1580-1591).

Banks, J. (2001, December). Panel session: Education for simulation practice – five perspectives. In B. A. Peters, J. S. Smith, D. J. Medeiros & M. W. Rohrer (Eds.), Proceedings of the 2001 Winter Simulation Conference, Arlington, VA (pp. 1571-1579).

Birta, L. G. (2003a). A Perspective of the Modeling and Simulation Body of Knowledge. Modeling & Simulation Magazine, 2(1), 16–19.

Birta, L. G. (2003b). The Quest for the Modeling and Simulation Body of Knowledge. Keynote presentation at the Sixth Conference on Computer Simulation and Industry Applications, Instituto Tecnologico de Tijuana, Mexico, February 19-21, 2003.

Bologna Process (2008). Strasbourg, France: Council of Europe, Higher Education and Research. Retrieved August 15, 2008, from http://www.coe.int/t/dg4/highereducation/EHEA2010/BolognaPedestrians_en.asp

Council of Graduate Schools. (2007). Findings from the 2006 CGS International Graduate Admissions Survey, Phase III: Admissions and Enrolment, Oct. 2006, revised March 2007. Council of Graduate Schools Research Report. Washington, DC: Council of Graduate Schools.

Crosbie, R. E. (2000, December). Simulation curriculum: a model curriculum in modeling and simulation: do we need it? Can we do it? In J. A. Joines, R. R. Barton, K. Kang & P. A. Fishwick (Eds.), Proceedings of the 2000 Winter Simulation Conference, Piscataway, NJ (pp. 1666-1668). Washington, DC: IEEE.

Fishwick, P. (2002). The Art of Modeling. Modeling & Simulation Magazine, 1(1), 36.

Gorgone, J. T., Gray, P., Stohr, E. A., Valacich, J. S., & Wigand, R. T. (2006). MSIS 2006: Model Curriculum and Guidelines for Graduate Degree Programs in Information Systems. Communications of the Association for Information Systems, 17, 1–56.

Harmon, S. Y. (2002, February-March). Can there be a Science of Simulation? Why should we care? Modeling & Simulation Magazine, 1(1).

Jaschik, S. (2006). Making Sense of 'Bologna Degrees.' Inside Higher Ed. Retrieved November 15, 2008, from http://www.insidehighered.com/news/2006/11/06/bologna
Madewell, C. D., & Swain, J. J. (2003, April- Oren, T. I. (2008). Modeling and Simulation Body
June). The Huntsville Simulation Snapshot: A of Knowledge. SCS International. Retrieved May
Quantitative Analysis of What Employers Want 31 2008 from http://www.site.uottawa.ca/~oren/
in a Systems Simulation Professional. Modelling MSBOK/ MSBOK-index.htm#coreareas
and Simulation (Anaheim), 2(2).
Paul, R. J., Eldabi, T., & Kuljis, J. (2003, Decem-
Molnar, I., Moscardini, A. O., & Breyer, R. ber). Simulation education is no substitute for
(2009). Simulation – Art or Science? How to intelligent thinking. In S. Chick, P. J. Sanchez,
teach it? International Journal of Simulation and D. Ferrin & D. J. Morrice (Eds.), Proceedings
Process Modelling, 5(1), 20–30. doi:10.1504/ of the 2003 Winter Simulation Conference, New
IJSPM.2009.025824 Orleans, LA, (pp. 1989-1993).
Molnar, I., Moscardini, A. O., & Omey, E. (1996, Program for International Student Assessment
June). Structural concepts of a new master cur- (PISA). (2006). Highlights from PISA 2006.
riculum in simulation. In A. Javor, A. Lehmann Retrieved August 15, 2008 from Web site: http://
& I. Molnar (Eds.), Proceedings of the Society for nces.ed.gov/surveys/pisa/
Computer Simulation International on Modelling
Rogers, R. V. (1997, December) What makes
and Simulation ESM96, Budapest, Hungary, (pp.
a modelling and simulation professional? The
405-409).
consensus view from one workshop. In S. Andra-
Nance, R. E. (2000, December). Simulation educa- dottir, K. J. Healy, D. A. Whiters & B. L. Nelson
tion: Past reflections and future directions. In J. A. (Eds.), Proceedings of the 1997 Winter Simula-
Joines, R. R. Barton, K. Kang & P. A. Fishwick, tion Conference, Atlanta, GA (pp. 1375-1382).
(Eds.) Proceedings of the 2000 Winter Simulation Washington, DC: IEEE...
Conference, Piscataway, NJ (pp. 1595-1601).
Sargent, R. G. (2000, December). Doctoral col-
Washington, DC: IEEE.
loquium keynote address: being a professional.
Nance, R. E., & Balci, O. (2001, December). In J. A. Joines, R. R. Barton, K. Kang and P. A.
Thoughts and musings on simulation education. Fishwick (Eds.), Proceedings of the 2000 Winter
In B. A. Peters, J. S. Smith, D. J. Medeiros, & Simulation Conference, Piscataway, NJ, (pp. 1595-
M. W. Rohrer (eds.), Proceedings of the 2001 1601). Washington, DC: IEEE.
Winter Simulation Conference, Arlington, VA
Szczerbicka, H., et al. (2000, December). Con-
(pp. 1567-1570).
ceptions of curriculum for simulation education
National Science Board. (2008). Science and (Panel). In J. A. Joines, R. R. Barton, K. Kang &
Engineering Indicators 2008. Arlington, VA: P. A. Fishwick (Eds.), Proceedings of the 2000
National Science Foundation. Winter Simulation Conference, Piscataway, NJ
(pp. 1635-1644). Washington, DC: IEEE.
Oren, T. I. (2002, December). Rationale for a Code
of Professional Ethics for Simulationists. In E.
Yucesan, C. Chen, J. L. Snowdon & J. M. Charnes
(Eds.), Proceedings of the 2002 Winter Simulation KEY TERMS AND DEFINITIONS
Conference, San Diego, CA, (pp. 13-18).
Body of Knowledge (BoK): the sum of all
knowledge elements of a particular professional
field, defined usually by a professional organiza-
tion.


Bologna Process: a European reform process aiming to establish a European Higher Education Area by 2010. The process is driven by 46 participating countries and is not based on an intergovernmental treaty.

Career Path: a series of consecutive progressive achievements in professional life over an entire lifespan.

Curriculum: a set of courses offered by an educational institution. It means two things: on one hand, the curriculum defines a range of courses from which students choose; on the other hand, the curriculum is understood as a specific learning program, which describes the teaching, learning, and assessment materials of a defined knowledge body available for a given course.

Education: refers to experiences in which students can learn, including teaching, training and instruction. Students learn knowledge, information and skills (incl. thinking skills) during the course of life. The learning process can include a teacher, a person who teaches.

Knowledge: the range of a person's information or understanding, or the body of truth, information, and principles acquired.

Model Curriculum or Curriculum Model: an organized plan, a set of standards, defined learning outcomes, which describe the curriculum systematically.

Simulation Profession: a vocation based on specialised education or training in modeling and simulation; it involves the application of systematic knowledge and proficiency of the modeling and simulation subject, field, or science to receive compensation. Modeling and simulation arose recently as a profession.

Skill: a learned ability to do something in a competent way.


Chapter 2
Simulation Environments as
Vocational and Training Tools
Evon M. O. Abu-Taieh
Civil Aviation Regulatory Commission and Arab Academy For Financial Sciences, Jordan

Jeihan M. O. Abutayeh
World Bank, Jordan

ABSTRACT
This paper investigates over 50 simulation packages and simulators used in vocational and course train-
ing in many fields. Accordingly, the 50 simulation packages were categorized in the following fields:
Pilot Training, Chemistry, Physics, Mathematics, Environment and ecological systems, Cosmology
and astrophysics, Medicine and Surgery training, Cosmetic surgery, Engineering – Civil engineering,
architecture, interior design, Computer and communication networks, Stock Market Analysis, Financial
Models and Marketing, Military Training and Virtual Reality. The incentive for using simulation environments as vocational and training tools is to save lives, money, and effort.

DOI: 10.4018/978-1-60566-774-4.ch002

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

INTRODUCTION

Computer simulation is widely used as an educational as well as a training tool in diverse fields; inter alia pilot training, chemistry, physics, mathematics, ecology, cosmology, medicine, engineering, marketing, business, computer communication networks, financial analysis, etc. Computer simulation is used to train and teach students in many fields, not only to save time, effort, lives, and money, but also to give them confidence in the matter at hand, in view that computer simulation delivers the idea with sight and sound.

Banks (2000) summarized the incentives for using simulation in the following reasons: making correct choices, compressing and expanding time, understanding "why?", exploring possibilities, diagnosing problems, developing understanding, visualizing the plan, building consensus, and preparing for change. The reader can refer to (Banks, 2000) for a more detailed study.

Moreover, computer simulation is considered a knowledge channel that transfers knowledge from an expert to a novice; thereby, training a pilot or a surgeon using computer simulation is, in fact, empowering the trainee with the knowledge of many expert pilots and expert surgeons. Accordingly, the paper shall discuss the simulation environments and packages used to train and teach in the following fields, as stipulated in Figure 1:

Figure 1. Simulation Applications in Education and Training

1. Pilot Training.
2. Chemistry.
3. Physics.
4. Mathematics.
5. Environment and ecological systems.
6. Cosmology and astrophysics.
7. Medicine & Surgery training.
8. Cosmetic surgery.
9. Engineering – Civil engineering, architecture, interior design.
10. Computer and communication networks.
11. Stock market analysis, Financial Models, and Marketing.
12. Military Training and virtual reality.

In Pilot Training the paper discusses FlightSafety, Frasca, HAVELSAN, and Thales. In Chemistry the simulators Virtlab, REACTOR, ChemViz, ChemLab, NAMD and VMD are discussed. In regard to physics training, the Physics Simulator, PhET, Fission, RadSrc and CRY, ODE, Simbad, and TeamBots simulation packages will be discussed. In teaching mathematics using simulation, Matlab and STELLA are discussed, in addition to statistical software like Fathom, DataDesk and Excel.

For Environment and Ecological Systems, the paper will discuss the SEASAM, The Agricultural Production Systems Simulator (APSIM), and Ecosim Pro simulation packages. In addition,

16
Simulation Environments as Vocational and Training Tools

Table 1. Pilot training simulators

Simulator | Country of Origin | Website
FlightSafety International (FSI) | USA | www.flightsafety.com/
Frasca International, Inc. | USA | www.frasca.com/
Havelsan | Turkey | www.havelsan.com.tr/
Thales Group | France | www.thalescomminc.com
Rockwell Collins | USA | www.rockwellcollins.com
CAE Inc. | USA | www.cae.com/
Jeppesen | USA | www.jeppesen.com/

the five simulation packages Celestia, Computer Simulations of Cosmic Evolution, CLEA, Universe in a Box: Formation of Large-Scale Structure, and WMAP Cosmology will be discussed for Cosmology and Astrophysics. In the arena of Surgery Training and Medicine, the Human Patient Simulator (HPS), the Emergency Care Simulator (ECS), and Laerdal SimMan will be discussed. Simulators used in Cosmetic Surgery that will be discussed in this paper include Plastic Designer, LiftMagic, Virtual Cosmetic Surgery and MyBodyPart.

Among the Engineering simulation packages, the paper discusses EONreality, ANSYS, and Full-Scale Virtual Walkthrough. The simulation packages used in computer and communication networks are AnyLogic and OptSim, and the paper will discuss both. In Stock Market Analysis, Financial Models and Marketing, the simulation packages discussed are Crews, Decision Script, Decision Pro, Crystal Ball and Analytica. For military training and virtual reality, the following simulators will be discussed: World of Warcraft, Call of Duty, Disaster Response Trainer and Safety Solution.

PILOT TRAINING

Aircraft simulators are used to train pilots to fly. In this regard, there are many types of simulators; as such, the National Aviation Authorities (NAA) for civil aircraft, such as the U.S. Federal Aviation Administration (FAA) and the European Aviation Safety Agency (EASA), certify each category of simulators and test individual simulators within the approved categories. Simulators that pertain to aviation training can be categorized into two types: Flight Training Device (FTD) or Full Flight Simulator (FFS). In the same token, simulators are classified as Level 1-7 Flight Training Devices (FTD) or Level A-D full-flight simulators. The highest and most capable device is the Level D Full Flight Simulator, which can be used for the so-called Zero Flight Time (ZFT) conversions of already-experienced pilots from one type of aircraft to a type with similar characteristics. There are many manufacturers of such professional simulators, as seen in Table 1.

FlightSafety is a product of FSI, a corporation based in the USA. FlightSafety offers a wide range of simulation environments to many disciplines. FlightSafety trains pilots, aircraft maintenance technicians, ship operators, flight attendants and flight dispatchers using simulators that range from classroom-based desktop units to full-motion full flight simulators.

Frasca is a product of Frasca International, Inc. and is also based in the USA. Frasca has product categories that include: General Aviation, Business Aircraft, Helicopters, Airlines, Military (fixed and rotary wing) and Visual Systems.

HAVELSAN develops FAA Level C&D compatible Mission Simulators, Full Flight Simulators and Weapon System Trainers, in high fidelity, for


cargo aircraft, jet aircraft and helicopters, for military and civilian customers. In addition to full flight simulators, HAVELSAN provides maintenance, repair, operation, modification and supporting services. HAVELSAN is a product of the HAVELSAN corporation, which is based in Turkey.

Thales Group, Rockwell Collins and CAE Inc. all compete in the army-oriented kind of simulation environment in addition to the civil aviation environment, whereas Jeppesen is a subsidiary of Boeing Commercial Aviation Services and a unit of Boeing Commercial Airplanes.

CHEMISTRY

When tackling the subject of chemistry, the words visualization, making sense, cognition, independence, and understanding come to mind. Within this context, in the area of chemistry experiments, simulators work as a visualization tool or visual aid to both the teacher and the student, in view that molecules are too small to see and control. Likewise, conducting an experiment while using a simulator is considered safe; in addition, it gives the student the needed independence, while the same experiment can be repeated over and over. In the same token, the student's cognition may improve by repetition, especially for students with a slow understanding rate, while giving the other students with a faster rate of understanding the chance to explore other options in the experiment in a safe environment. The aforementioned is significantly preferred, in view that practicing the experiment before actually conducting it would give the student the confidence needed. More importantly, another motivation for such simulators is distance learning; although the saying is "seeing is believing", simulators may provide the student with the needed knowledge without actually conducting the experiment, thereby accentuating the intellectual capability of a student. In this regard, some of the well-known simulators in this arena are Virtlab, REACTOR, ChemViz, ChemLab, NAMD and VMD, which will be discussed next.

Virtlab (www.virtlab.com/) is a web-based simulator where teachers can use it as a visual aid and students can experiment safely with chemicals, whereas with REACTOR Prep Labs from Late Nite Labs (www.latenitelabs.com/), students perform and analyze advanced hands-on simulations that are easy and enjoyable to use, and then enter the wet lab prepared, confident and ready to work.

ChemViz (Chemistry Visualization) is an interactive chemistry program which incorporates computational chemistry simulations and visualizations for use in the chemistry classroom (education.ncsa.uiuc.edu/products/chemviz/index.html).

ChemLab (www.soft14.com/Home_and_Education/Science/ChemLab_251_Review.html) originated from academic work in computer simulation and software design at McMaster University. It has continued to be developed with extensive input from educators interested in the possible application of computer simulations for classroom and distance learning. Model ChemLab incorporates both an interactive simulation and a lab notebook workspace with separate areas for theory, procedures and student observations. Commonly used lab equipment and procedures are used to simulate the steps involved in performing an experiment, whereby users step through the actual lab procedure while interacting with animated equipment in a way that is similar to the real lab experience. Furthermore, ChemLab comes with a range of pre-designed lab experiments for general chemistry at the high school and college level; likewise, users can expand upon the original lab set using ChemLab's LabWizard development tools, thereby allowing for curriculum-specific lab simulation development by educators. It is worthwhile to note that these user-designed simulations combine both text-based instructions and the simulation into a single distributable file.

NAMD and its sister visualization program VMD (www.ks.uiuc.edu/Research/vmd/) helped


scientists discern how plants harvest sunlight, how muscles stretch and how kidneys filter water. Both are used to teach and train students at MIT. VMD (Virtual DNA Viewer) is a molecular visualization program for displaying, animating, and analyzing large bio-molecular systems using 3-D graphics and built-in scripting.

Conclusively, all six simulators (Virtlab, REACTOR, ChemViz, ChemLab, NAMD and VMD) allow the student/trainee to experiment in the chemistry world within the safety borders.

PHYSICS

It is known that the study of physics entails: Motion, Energy and power, Sound and waves, Heat and Thermo, Electricity, Magnets and circuits, and Light and radiation. As such, the use of simulation and multimedia-based systems provides students with an extensively rich source of educational material in a form that makes learning exciting. Pithily, the following simulators will be discussed: Physics Simulator, PhET, Fission, RadSrc and CRY, ODE, Simbad, and TeamBots.

Physics Simulator (www.simtel.net/pub/pd/53712.shtml) simulates the dynamics of particles under the influence of their gravitational and/or electrostatic interactions.

The Physics Education Technology (PhET) (http://phet.colorado.edu/get_phet/index.php) project is an ongoing effort to provide an extensive suite of simulations for teaching and learning physics and chemistry and to make these resources both freely available from the PhET website and easy to incorporate into classrooms. PhET simulations animate what is invisible to the eye, such as atoms, electrons, photons and electric fields. User interaction is encouraged by engaging graphics and intuitive controls that include click-and-drag manipulation, sliders and radio buttons. By immediately animating the response to any user interaction, the simulations are particularly good at establishing cause-and-effect and at linking multiple representations.

In the Computational Nuclear Physics Group (CNP), the Lawrence Livermore National Laboratory (LLNL) provides three distinct simulation codes (nuclear.llnl.gov/CNP/Home/CNP_Home.htm): cosmic-ray shower distributions near the Earth's surface, gamma-ray source spectra from nuclear decay of aged mixtures of radioisotopes, and discrete neutron and gamma-ray emission from fission.

Fission simulates discrete neutron and gamma-ray emission from the fission of heavy nuclei that is either spontaneous or neutron induced (Verbeke, Hagmann, & Wright, 2008). RadSrc calculates the intrinsic gamma-ray spectrum from the nuclear decay of a mixture of radioisotopes (Hiller, Gosnell, Gronberg, & Wright, 2007). Cosmic-ray Shower (CRY) generates correlated cosmic-ray particle showers at one of three elevations (sea level, 2100 m, and 11300 m) for use as input to transport and detector simulation codes (Hagmann, Lange, & Wright, 2008).

Open Dynamics Engine (ODE) (www.ode.org/) is an open source, high performance library for simulating rigid body dynamics. ODE is useful for simulating vehicles, objects in virtual reality environments and virtual creatures. It is currently used in many computer games, 3D authoring tools and simulation tools. ODE is BSD-licensed (Russell, 2007).

Simbad (simbad.sourceforge.net/) is a Java 3D robot simulator for scientific and educational purposes. It is mainly dedicated to researchers/programmers who want a simple basis for studying Situated Artificial Intelligence, Machine Learning, and more generally AI algorithms, in the context of Autonomous Robotics and Autonomous Agents. It is not intended to provide a real-world simulation and is kept voluntarily readable and simple (Simbad Project Home, 2007).

TeamBots (www.cs.cmu.edu/~trb/TeamBots/) is a portable multi-agent robotic simulator that supports simulation of multi-agent control systems in dynamic environments with visualization, for


example teams of soccer-playing robots (Balch, 2000).

MATHEMATICS

Mathematics is a multi-dimensional discipline that involves the study of quantity, structure, space and change, noting that mathematics, as well as statistics, is considered an integral part of simulation; as such, simulation software is used to teach mathematics and statistics. In addition, mathematics includes many sub-topics like: Algebra and Number Theory; Geometry and Trigonometry; Functions and Analysis; Data Analysis, Statistics, and Probability; and Discrete Mathematics and Computer Science – graphs, trees, and networks; enumerative combinatorics; iteration and recursion; conceptual underpinnings of computer science. Accordingly, (Nelson, 2002) has cited a number of reasons that motivate teaching mathematics using simulation:

1. Visualization
2. Algorithmic Representation of Probability
3. Sensitivity or Insensitivity of Results to Assumptions
4. Connecting Probability to Statistics
5. Integrating Probability and Simulation Supports a Unified Treatment of Stochastic Modeling and Analysis

Matlab (www.mathworks.com/) is used in teaching Differential Equations, Linear Algebra, Multivariable Calculus, Finite Elements in Elastic Structures, Optimization, and the Mathematical Theory of Waves, among others.

STELLA (www.iseesystems.com/softwares/) offers a practical way to dynamically visualize and communicate how complex systems and ideas really work. STELLA helps visual learners discover relationships between variables in an equation. Verbal learners might surround visual models with words or attach documents to explain the impact of a new environmental policy.

Other simulators like Fathom (www.keycollege.com/catalog/titles/fathom.html), DataDesk (www.datadesk.com/) and Excel (www.microsoft.com) are also used in the statistics arena.

ENVIRONMENT AND ECOLOGICAL SYSTEMS

The term ecosystem was coined in 1930 by Roy Clapham to denote the physical and biological components of an environment considered in relation to each other as a unit. British ecologist Arthur Tansley later refined the term, describing it as "The whole system,… including not only the organism-complex, but also the whole complex of physical factors forming what we call the environment" (Tansley, 1935). Needless to say, when studying the ecosystem or environment there are many variables to consider; among them a very important element is TIME. Using simulators to teach and train students is an essential aspect; the main simulators used for teaching and training ecosystems are SEASAM, The Agricultural Production Systems Simulator (APSIM), and Ecosim Pro, which are all discussed next.

SEASAM (www.seasam.com/) is a software environment: SESAME is a model development and analysis tool designed to facilitate the construction of ecological models using Fortran-77 on UNIX machines (P. Ruardija, 1995).

The Agricultural Production Systems Simulator (APSIM) (www.apsim.info/) is a modular modeling framework that has been developed by the Agricultural Production Systems Research Unit in Australia. APSIM was developed to simulate biophysical processes in farming systems, in particular where there is interest in the economic and ecological outcomes of management practice in the face of climatic risk (B. A. Keating, 2003).

Ecosim Pro (www.ecosimpro.com) is a powerful simulation environment for very complex hybrid systems (thermal fluid dynamics, chemical, mechanical, electrical, etc.). Ecosim was used as a real-time simulator for the large-scale cryogenic systems of CERN (the European Organization for Nuclear Research), which use helium refrigerators controlled by Programmable Logic Controllers (PLC). Ecosim Pro was used as a tool to train the operators in this project (Bradu, Gayet, & Niculescu, 2007).

Conclusively, the three simulators (SEASAM, The Agricultural Production Systems Simulator (APSIM), and Ecosim Pro) allow the student to experiment and test the different elements of the environment within a safety net.

COSMOLOGY AND ASTROPHYSICS

Cosmology is the scientific study of the origin and evolution of the Universe. As such, it obviously seeks to address some of the most profound questions in science. It is a lively and rapidly evolving field of inquiry. Technological improvements within the last decade have led to discoveries that have radically altered our understanding of the Universe. Software-based activities provide an interesting and effective way of engaging students and demonstrating some of the principles and technologies involved. In this regard, it is significant to note that there are several websites that can be browsed where supercomputer simulations of the formation of galaxies, star clusters and large-scale structure can be viewed. These are useful in conveying the role of simulation and mathematical modeling in modern cosmology and astrophysics. Throughout the course of this paper, five cosmology and astrophysics simulators are introduced: Celestia, Computer Simulations of Cosmic Evolution, CLEA, Universe in a Box: Formation of Large-Scale Structure, and WMAP Cosmology.

Celestia: A 3D Space Simulator (www.shatters.net/celestia/) is an outstanding free software package with a wealth of add-ons and a multi-platform package (Windows, Mac OS, Linux, Unix) that would allow the Solar System and galaxy to be visualized using real astronomical data; it would actually feel real, in terms of enabling the viewer to "fly" to other stars, visit the planets and even piggyback on any of the current or planned space probes. As such, this tool is considered an excellent educational add-on, noting that there are interactive learning documents, including one on stellar evolution, available from another website (www.fsgregs.org/celestia/).

Computer Simulations of Cosmic Evolution (www.astro.washington.edu/weblinks/Universe/Simulations.html) provides over a dozen animations on galaxies and the formation of large-scale structure.

Project CLEA (www.gettysburg.edu/academics/physics/clea/CLEAhome.html) allows the user to download free programs and manuals that allow the user to simulate observing, obtaining and analyzing data. The Hubble Redshift-Distance Relation and Large-Scale Structure of the Universe are particularly relevant for this module, whilst others are excellent for astrophysics. These are excellent simulations that utilize real data.

Universe in a Box: Formation of Large-Scale Structure (cosmicweb.uchicago.edu/sims.html) from the Center for Cosmological Physics at the University of Chicago has a set of pages and animations of supercomputer simulations on how large-scale structure forms, where it discusses cold dark matter models.

WMAP Cosmology 101: Our Universe (map.gsfc.nasa.gov/m_uni/uni_101ouruni.html) provides a clear introduction to modern cosmology; albeit it may be too detailed for most students, it is however useful for teachers and students who would want more details.
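The particle-dynamics and structure-formation simulations described above all rest on the same numerical core: integrating Newtonian gravity for a set of bodies step by step. The following is an illustrative sketch in Python only; it is not code from any package named in this chapter, and every name in it is invented for the example.

```python
import math

G = 1.0  # gravitational constant set to 1 in simulation units

def step(positions, velocities, masses, dt):
    """Advance a 2-D N-body system one time step.

    positions, velocities: lists of [x, y] pairs; masses: list of floats.
    Forces are pairwise Newtonian gravity; a small softening length
    avoids division by zero during close encounters.  The update is a
    semi-implicit (symplectic) Euler step: velocities first, then
    positions with the new velocities.
    """
    eps = 1e-3  # softening length
    n = len(masses)
    accels = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r2 = dx * dx + dy * dy + eps * eps
            inv_r3 = 1.0 / (math.sqrt(r2) * r2)
            accels[i][0] += G * masses[j] * dx * inv_r3
            accels[i][1] += G * masses[j] * dy * inv_r3
    for i in range(n):
        velocities[i][0] += accels[i][0] * dt
        velocities[i][1] += accels[i][1] * dt
        positions[i][0] += velocities[i][0] * dt
        positions[i][1] += velocities[i][1] * dt

# Two bodies: a heavy "star" at rest and a light "planet" whose speed
# v = sqrt(G * M / r) = 1 puts it on a near-circular orbit of radius 1.
pos = [[0.0, 0.0], [1.0, 0.0]]
vel = [[0.0, 0.0], [0.0, 1.0]]
mass = [1.0, 1e-6]
for _ in range(1000):
    step(pos, vel, mass, dt=0.001)
```

Production cosmology codes replace the O(N²) pairwise force loop with tree or particle-mesh methods and use higher-order integrators, but the structure (compute forces, then advance velocities and positions) is the same.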


SURGERY TRAINING AND MEDICINE

"Combining the sense of touch with 3-D computer models of organs, researchers at Rensselaer Polytechnic Institute are developing a new approach to training surgeons, much as pilots learn to fly on flight simulators. With collaborators at Harvard Medical School, Albany Medical Center, and the Massachusetts Institute of Technology, the team is developing a virtual simulator that will allow surgeons to touch, feel, and manipulate computer-generated organs with actual tool handles used in minimally invasive surgery (MIS)." (ScienceDaily, 2006). Some of the well-known medicine and surgery simulators are the Human Patient Simulator (HPS), the Emergency Care Simulator (ECS), and Laerdal SimMan (www.cesei.org/simulators.php).

METI Human Patient Simulator (HPS): this full-sized mannequin is ultra-sophisticated and highly versatile, with programmed cardiovascular, pulmonary, pharmacological, metabolic, genitourinary (male and female), and neurological systems. It blinks, speaks and breathes; it has a heartbeat and a pulse, and accurately mirrors human responses to procedures such as CPR, intravenous medication, intubation, ventilation, and catheterization.

METI Emergency Care Simulator (ECS): specially designed to support emergency care and trauma scenarios, this full-sized mannequin is portable, creating the opportunity to train learners in any environment. The ECS offers much of the same technology as the HPS, but optimizes emergency scenarios to expose students to the most complicated and high-risk situations.

Laerdal SimMan: this full-body mannequin offers scenarios similar to the METI simulators, with the addition of an anatomically accurate bronchial tree, providing learners the opportunity to respond to airway complication scenarios and practice advanced procedures relative to the training needs of anesthesia, ACLS, ATLS, and difficult airway management.

COSMETIC SURGERY

Simulators used in cosmetic surgery applications can be a valuable tool for both the doctor and the patient, whereby the patient can see in advance if the results of a chosen procedure will suit their anatomical goals; in addition, they can experiment with a variety of looks to make sure they are confident about the body aesthetic they really want to achieve. In retrospect, the doctor can use the process as a way to ensure good communication with the patient. The doctor can actually see the result the patient wants and can save the image as a visual reference to be utilized during surgery. This virtual result makes it clear to both parties the exact look which is expected at the end of the healing process. Virtual reality can make the entire surgical process far easier for both doctor and patient and generally leads to more satisfying results with less pre-operative anxiety.

Plastic Designer (www.nausoft.net/eng/products/plastic/index.html) is a new generation of plastic surgery software from NauSoft. Plastic Designer is plastic surgery imaging software that allows physicians to work with a computer-based solution tailored specifically to the needs of their practice. With the help of state-of-the-art real-time morphing algorithms, a wide range of art tools (wrinkle smoother, skin pigment sprayer, sculpt builder and others) and multilevel undo functions, a physician can easily produce an image modification in minimal time.

Plastic Designer plastic surgery makeover software provides: Aesthetic Imaging and Post-Operation outcome modeling, Laser resurfacing, Cosmetic surgery simulation, 3 powerful morphing algorithms, Automatic modeling for rhinoplasty, Pre-Operation Analysis (it automatically measures the main lengths, proportions and angles for naso-facial operations, and manual measurements are available), a Data Table showing the current dimensions, the individual norm and the difference, Operative Planning, an Assessment list, a Pre-Op list, Medical


Documentation, Medical Images processing and archiving, and Slides Presentation.

LiftMagic (www.liftmagic.com/) is a web-based simulation program that allows the user to upload a frontal face image and then try out the different wanted procedures; the software then shows the before and after effect of the procedures. The software offers face processing tools: Forehead enhancement, Eyebrow lift, Mid-brow enhancement, Around eyes enhancement, Beside eye enhancement, Tear trough enhancement, Inner cheek enhancement, Outer cheek enhancement, Cheek lift, Nose reduction, Outer lip lift, Lip augmentation, Jaw restoration, and Weight reduction.

Virtual Cosmetic Surgery is computer software that was developed to simulate an actual surgical procedure on the human body. First, the doctor will photograph the patient and upload these images into the computer; then the doctor would discuss the patient's preferences for surgery and the look the patient is trying to achieve; after that, the surgeon would enter these parameters into the computer, which will adjust the patient's actual body image according to the doctor's commands. These alterations will simulate the exact methods used during actual plastic surgery and will give the patient a good idea of how s/he will look after undergoing the chosen operation.

MyBodyPart (www.mybodypart.com/) can be downloaded to a PC, and the patient, as well as the surgeon, can experiment with different scenarios. As such, the simulator allows the user to experiment with replacing different body parts rather than morphing the existing one.

ENGINEERING – CIVIL ENGINEERING, ARCHITECTURE, INTERIOR DESIGN

In engineering, the National Science Foundation (NSF) supported research in the report "Revolutionizing Engineering Science through Simulation" in May 2006 to emphasize the importance of using simulation in college curricula, in order to revolutionize engineering science (NSF-Panel, 2006).

In this regard, civil engineers use simulation software to guide design and construction, as well as to solve a wide range of projects in: Building, Environmental engineering, Geotechnical engineering, Hydraulic engineering, Materials science, Structural engineering, Transportation engineering and Wind engineering.

One of the simulation packages used for educational and training purposes in the civil engineering arena is EONreality (www.eonreality.com/); the applications of EONreality are discussed by (Sampaio & Henriques, 2008) in their paper "Visual simulation of Civil Engineering activities: Didactic virtual models".

Another software package is ANSYS (www.ansys.com/solutions/default.asp). ANSYS designs, develops, markets and globally supports engineering simulation solutions used to predict how product designs will behave in manufacturing and real-world environments. Its integrated, modular and extensible set of solutions addresses the needs of organizations in a wide range of industries. ANSYS solutions qualify risk, enabling organizations to know if their designs are acceptable or unacceptable, not just that they will function as designed.

Full-Scale Virtual Walkthrough (www.worldviz.com/solutions/architecture.html) would literally allow a walk through the architectural designs in full scale, and the experience of a stunning sense of the space in 3D.

COMPUTER AND COMMUNICATION NETWORKS

In order to learn about computer architecture and the tricks of the communication networks, there are professional simulation packages, such as AnyLogic and OptSim, which are both discussed next.

23
Simulation Environments as Vocational and Training Tools

AnyLogic (www.xjtek.com) is based on UML and has a big library of examples. AnyLogic is one of the few hybrid simulation packages, with application areas that include: Control Systems, Traffic, System Dynamics, Manufacturing, Supply Chain, Logistics, Telecom, Networks, Computer Systems, Mechanics, Chemical, Water Treatment, Military, and Education.

OptSim, formerly known as Artifex (www.rsoftdesign.com), is a tool based on Petri Nets. OptSim is one of the few packages that can handle both discrete and continuous systems. OptSim applications are: performance evaluation of the transmission level of optical communications systems, network modeling, and strategic analysis.

STOCK MARKET ANALYSIS, FINANCIAL MODELS AND MARKETING

There are many simulation-based games that are used to learn how to trade in the stock exchange markets. In fact, students can "earn their Stock Broker License and then in turn apply for a business license to create their own investment company" as the school's website promises (Crews). Gwinnett County Public Schools, the owner of the website, is a public school system that teaches children about the stock exchange; the game is set up as part of the computer science curriculum for eighth graders and is a web-based simulation (www.crews.org/curriculum/ex/compsci/8thgrade/stkmkt/index.htm).

However, in order to learn financial planning and modeling and marketing in the real world, there are other, more professional simulation packages like DecisionScript, DecisionPro, Crystal Ball and Analytica.

DecisionScript and DecisionPro (www.vanguardsw.com) are made by Vanguard, except that DecisionScript is web based (vanguardsw), and they provide applications that include: Military Financial Planning, Online Sales Assistance, Management Reporting, Portfolio Simulation, Business Financial Modeling, Process Optimization, Decision-Making, Strategic Planning, Marketing, Finance Accounting, Operations, and Human Resources.

The package Crystal Ball (www.crystalball.com) has two versions that are sold commercially: one is standard, while the other is professional. Crystal Ball is an add-in for the Microsoft Excel spreadsheet. The package is developed using Visual Basic and is based on the famous Monte Carlo method, offering applications that include: Business Planning and Analysis, Cost/Benefit Analysis, Risk Management, Petroleum Exploration, Portfolio Optimization, and Project Management (OR/MS, 2003).

Analytica (www.lumina.com) is graphical user interface software that is used, among other things, for business modeling and decision and risk analysis; moreover, it is "a visual tool for creating, analyzing, and communicating decision models" (Analytica, 2006). Analytica uses influence diagrams, which Lumina claims to be "The perfect visual format for creating and communicating a common vision of objective issues, uncertainties and decisions" (Analytica, 2006).

MILITARY TRAINING AND VIRTUAL REALITY

Simulation is used to train military as well as disaster response teams; accordingly, many simulators are built for such purposes, inter alia: MRE, World of Warcraft, Call of Duty, Disaster Response Trainer and Safety Solution.

Mission Rehearsal Exercise (MRE) is part of an elaborate high-tech simulator that is being developed by a contractor for the U.S. military to help train soldiers heading for combat, peacekeeping and humanitarian missions. This package actually reflects a larger Pentagon policy to use technology to train the video game generation now entering the service. Additionally, the
developers at the Institute of Creative Technologies (ICT) -- which created MRE -- are working in conjunction with storytellers from the entertainment industry; technologists and educators from the University of Southern California (USC); and Army military strategists. The Los Angeles, California-based ICT was formed in 1999 to research the best types of simulators to be used by the military (www.globalsecurity.org/education/index.html).

World of Warcraft and Call of Duty are additional VR military training simulation packages; World of Warcraft has more than eleven million subscribers for the game. The following website address can provide the user with a trial version (https://signup.worldofwarcraft.com/trial/index.html;jsessionid=65296B939D43EA128E7243D9AFD9B5AC.app10_05). On the other hand, Call of Duty is available on the website (www.callofduty.com/).

Disaster Response Trainer: Methodist University commissioned WorldViz to construct a turnkey solution for their new Environmental Simulation Center, as part of the university's Occupational Environmental Management Program. The center trains participants in environmental and industrial disaster prevention and management. Using such simulation packages, the participants are given the opportunity to apply classroom knowledge in immersive virtual training environments. University students, local catastrophe response teams, industry employees, and Fort Bragg Army personnel will use the center.

Safety Solution simulates workplace environments, safety hazards, or machinery operation in ultra-realistic interactive 3D virtual environments, thereby making safety training more effective than before. Both Disaster Response Trainer and Safety Solution are from WorldViz (www.worldviz.com/).

CONCLUSION

This paper studied more than 50 simulation packages (simulators) used for the purpose of education and training in many fields. The 50 simulators were categorized in the following fields: Pilot Training; Chemistry; Physics; Mathematics; Environment and Ecological Systems; Cosmology and Astrophysics; Medicine & Surgery Training; Cosmetic Surgery; Engineering – Civil Engineering, Architecture, Interior Design; Computer and Communication Networks; Stock Market Analysis, Financial Models, and Marketing; and Military Training and Virtual Reality.

In Pilot Training the paper discussed FlightSafety, Frasca, HAVELSAN, and Thales. In Chemistry the simulators Virtlab, REACTOR, ChemViz, ChemLab, NAMD and VMD were discussed. In regard to physics training, the Physics Simulator, PhET, Fission, RadSrc and CRY, ODE, Simbad, and TeamBots simulation packages were discussed. In teaching mathematics by using simulation, Matlab and STELLA were discussed in addition to statistical software like Fathom, DataDesk and Excel.

Vis-à-vis Environment and Ecological Systems, the paper discussed the SESAME, Agricultural Production Systems Simulator (APSIM), and Ecosim Pro simulation packages. In addition, the five simulation packages Celestia, Computer Simulations of Cosmic Evolution, CLEA, Universe in a box: formation of large-scale structure, and WMAP Cosmology were discussed for Cosmology and Astrophysics. In the arena of surgery training and medicine, the Human Patient Simulator (HPS), Emergency Care Simulator (ECS), and Laerdal SimMan were discussed. Simulators used in cosmetic surgery that are discussed in this paper include Plastic Designer, LiftMagic, Virtual Cosmetic Surgery and MyBodyPart.

In the engineering simulation packages the paper discussed EONreality, ANSYS, and Full-Scale Virtual Walkthrough. The simulation packages used in computer and communication networks
were: AnyLogic and OptSim, and the paper tackled both. In Stock Market Analysis, Financial Models and Marketing the simulation packages discussed were: Crews, DecisionScript, DecisionPro, Crystal Ball and Analytica. For military training and virtual reality the following simulators were discussed: World of Warcraft, Call of Duty, Disaster Response Trainer and Safety Solution.

Using computer simulation delivers the idea with sight and sound, which gives the student confidence in the matter at hand. When training a pilot or a surgeon by using computer simulation, the trainee is actually given the knowledge of many expert pilots and expert surgeons. Therefore, computer simulation is a knowledge channel that transfers knowledge from an expert to a newbie.

REFERENCES

Analytica. (2006). Lumina. Retrieved 2008, from Analytica: www.lumina.com

Balch, T. (2000, April). TeamBots. Retrieved October 2008, from Carnegie Mellon University - School of Computer Science: http://www.cs.cmu.edu/~trb/TeamBots/

Banks, J. (2000, December 10-13). Introduction to simulation. In J. A. Joines, R. R. Barton, K. Kang, & P. A. Fishwick (Eds.), Proceedings of the 2000 Winter Simulation Conference, Orlando, FL (pp. 510-517). San Diego, CA: Society for Computer Simulation International.

Bradu, B., Gayet, P., & Niculescu, S.-I. (2007). A dynamic simulator for large-scale cryogenic systems. In B. Zupančič (Ed.), Proc. EUROSIM (pp. 1-8).

Crews, W. (n.d.). Gwinnett County Public Schools. Retrieved 2008, from Gwinnett County Public Schools: http://www.crews.org/curriculum/ex/compsci/8thgrade/stkmkt/index.htm

Hagmann, C., Lange, D., & Wright, D. (2008, January). Cosmic-ray Shower Library (CRY). Retrieved October 2008, from Lawrence Livermore National Laboratory: http://nuclear.llnl.gov/

Hiller, L., Gosnell, T., Gronberg, J., & Wright, D. (2007, November). RadSrc Library and Application Manual. Retrieved October 2008, from http://nuclear.llnl.gov/

Keating, B. A., et al. (2003). An overview of APSIM, a model designed for farming systems simulation. European Journal of Agronomy, 18(3-4), 267–288. doi:10.1016/S1161-0301(02)00108-9

Nelson, B. L. (2002). Using simulation to teach probability. In E. Yücesan & C.-H. Chen (Eds.), Proceedings of the 2002 Winter Simulation Conference (p. 1815). San Diego, CA: INFORMS-CS.

NSF-Panel. (2006, May). Revolutionizing Engineering Science through Simulation: Report of the National Science Foundation Blue Ribbon Panel on Simulation-Based Engineering Science. Retrieved 2008, from http://www.nsf.gov/pubs/reports/sbes_final_report.pdf

OR/MS. (2003). Retrieved from OR/MS: www.lionhrtpub.com/orms/surveys/Simulation/Simulation.html

Ross, M. D. (n.d.). 3-D imaging in virtual environment: A scientific, clinical and teaching tool. Retrieved from United States National Library of Medicine - National Institutes of Health: http://biocomp.arc.nasa.gov/

Ruardija, P., et al. (1995). SESAME, a software environment for simulation and analysis of marine ecosystems. Netherlands Journal of Sea Research, 33(3-4), 261–270. doi:10.1016/0077-7579(95)90049-7

Russell, S. (2007, May). Open Dynamics Engine. Retrieved October 2008, from http://www.ode.org/

Sampaio, A., & Henriques, P. (2008). Visual simulation of civil engineering activities: Didactic virtual models. In International Conferences in Central Europe on Computer Graphics, Visualization and Computer Vision. Czech Republic: University of West Bohemia.

ScienceDaily. (2006). Digital surgery with touch feedback could improve medical training. ScienceDaily.

Simbad Project Home. (2007, December). Retrieved January 2008, from Simbad Project Home: http://simbad.sourceforge.net/

vanguardsw. (n.d.). Retrieved 2008, from Vanguard Software: www.vanguardsw.com

Verbeke, J. M., Hagmann, C., & Wright, D. (2008, February 1). Retrieved October 1, 2008, from Computational Nuclear Physics: http://nuclear.llnl.gov/simulation/fission.pdf

Chapter 3
Agent-Based Modeling:
A Historical Perspective and a Review
of Validation and Verification Efforts
Brian L. Heath
Wright State University, USA

Raymond R. Hill
Air Force Institute of Technology, USA

ABSTRACT
Models and simulations have been widely used as a means to predict the performance of systems. Agent-
based modeling and agent distillations have recently found tremendous success particularly in analyzing
ground force employment and doctrine. They have also seen wide use in the social sciences modeling a
plethora of real-life scenarios. The use of these models always raises the question of whether the model
is correctly encoded (verified) and accurately or faithfully represents the system of interest (validated).
The topic of agent-based model verification and validation has received increased interest. This chapter
traces the historical roots of agent-based modeling. This review examines the modern influences of sys-
tems thinking, cybernetics as well as chaos and complexity on the growth of agent-based modeling. The
chapter then examines the philosophical foundations of simulation verification and validation. Simulation
verification and validation can be viewed from two quite different perspectives: the simulation philosopher
and the simulation practitioner. Personnel from either camp are typically unaware of the other camp’s
view of simulation verification and validation. This chapter examines both camps while also providing
a survey of the literature and efforts pertaining to the verification and validation of agent-based models.
The chapter closes with insights pertaining to agent-based modeling, the verification and validation of
agent-based models, and potential directions for future research.

DOI: 10.4018/978-1-60566-774-4.ch003

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

INTRODUCTION TO THE CHAPTER

Simulation has long been a favored analytical technique. From early Monte Carlo (sampling) methods, through the powerful discrete-event paradigm, and with the more recent object-oriented and web-based simulation paradigms, simulation has continued to provide analysts a tool that provides valuable insight into many complex, real-world problems. Since many real-world systems feature an influential human component, simulationists have often sought to implement simulation representations of that human component into their models, often with little success.

Agent-based modeling has emerged from the object-oriented paradigm with great potential to better model complex, real-world systems, including those hard-to-model systems featuring the human component. However, the agent-based modeling paradigm struggles, as do all other simulation paradigms, with the question of whether the simulation accurately represents the system of interest. This is the simulation validation issue faced by any simulation model developer and user.

This chapter provides a historical perspective on the evolution of agent-based models. Our message is that this new paradigm has a series of historical scientific threads leading to the current state of agent-based modeling. We then delve into the various perspectives associated with verification and validation as a way to make the case for moving away from using "validation" and more towards the concept of model "sanctioning." We close with summary statements and concluding remarks.

INSIGHTS INTO THE EMERGENCE OF AGENT-BASED MODELING

Introduction

Over the years Agent-Based Modeling (ABM) has become a popular tool used to model and understand the many complex, nonlinear systems seen in our world (Ferber, 1999). As a result, many papers geared toward modelers discuss the various aspects and uses of ABM. The topics typically covered include an explanation of ABM, when to use it, how to build it and with what software, how results can be analyzed, research opportunities, and discussions of successful applications of the modeling paradigm. It is also typical to find within these papers brief discussions about the origins of ABM, discussions that tend to emphasize the diverse applications of ABM as well as how some fundamental properties of ABM were discovered. However, these historical discussions often do not go into much depth about the fundamental theories and fields of inquiry that would eventually lead to ABM's emergence. Thus, in this chapter we re-examine some of the scientific developments in computers, complexity, and systems thinking that helped lead to the emergence of ABM, shedding new light onto some old theories while connecting them to several key ABM principles of today. This chapter is not a complete account of the field, but it does provide a historical perspective into ABM and complexity intended to provide a clearer understanding of the field, show the benefits of understanding the diverse origins of ABM, and hopefully spark further interest into the theories and ideas that laid the foundation for today's ABM paradigm.

The Beginning: Computers

The true origins of ABM can be traced back to when scientists first began discovering and attempting to explain the emergent and complex behavior seen in nonlinear systems. Some of the more familiar discoveries include Adam Smith's Invisible Hand in economics, Donald Hebb's cell assembly, and the Blind Watchmaking in Darwinian evolution (Axelrod & Cohen, 2000). In each of these theories simple individual entities interact with each other to produce new complex phenomena that seemingly just emerge. In Adam Smith's theory, this emergent phenomenon is called the Invisible Hand, which occurs when each individual tries to maximize his own interests and as a result tends to improve the entire community. Similarly, Donald Hebb's cell assembly theory states that individual neurons interacting together form a hierarchy that results in the storage and recall of memories in the human brain. In this case, the emergent phenomenon is the memory formed by the relatively simple interactions of individual neurons. Lastly, the
interactions of individual neurons. Lastly, the senting systems. Furthermore, Turing and Church
emergent phenomena in Darwinian evolution are later developed the Church-Turing hypothesis
that complex and specialized organisms resulted which hypothesized that a machine could dupli-
from the interaction of simple organisms and the cate not only the functions of mathematics, but
principles of natural selection. also the functions of nature (Levy, 1992). With
Although these theories were brilliant for these developments, scientists had the theoretical
their time, in retrospect, they appear marred foundation onto which they could begin building
by the prevalent scientific philosophy of the machines to try and recreate the nonlinear systems
time. Newton’s philosophy, which is still com- they observed in nature.
mon today, posited that given an approximate Eventually, these machines would move from
knowledge of a system’s initial condition and an theoretical ideas to the computers of today. The
understanding of natural law, one can calculate introduction of the computer into the world has
the approximate future behavior of the system certainly had a huge impact, but its impact in
(Gleick, 1987). Essentially, this view created the science as more than just a high speed calculator
idea that nature is a linear system reducible into or storage device is often overlooked. When the
parts that eventually can be put back together to first computers were introduced, Von Neumann
resurrect the whole system. Interestingly, it was recognized their ability to “break the present
widely known at the time that there were many stalemate created by the failure of the purely
systems where reductionism failed. These types analytical approach to nonlinear problems” by
of systems were called nonlinear because the sum giving scientists the ability to heuristically use
output of the parts did not equal the output of the the computer to develop theories (Burks & Neu-
whole system. One of the more famous nonlinear mann, 1966). The heuristic use of computers, as
systems is the Three Body Problem of classical viewed by Von Neumann and Ulam, resembles
mechanics, which shows that it is impossible to the traditional scientific method except that the
mathematically determine the future states of three computer replaces or supplements the experi-
bodies given the initial conditions. mentation process (Burks & Neumann, 1966). By
Despite observing and theorizing about using a computer to replace real experiments, Von
emergent behavior in systems, scientists of the Neumann’s process would first involve making
time did not have the tools available to fully a hypothesis based on information known about
study and understand these nonlinear systems. the system, building the model in the computer,
Therefore, it was not until theoretical and tech- running the computer experiments, comparing
nological advances were made that would lead the hypothesis with the results, forming a new
to the invention of the computer that scientists hypothesis, and repeating these steps as needed
could begin building models of these complex (Burks & Neumann, 1966). Essentially the com-
systems to better understand their behavior. Some puter serves as a proxy of the real system, which
of the more notable theoretical advances that led allows more flexibility in collecting data and
to the invention of the computer were first made controlling conditions as well as better control
by Gödel with his famous work in establishing of the timeliness of the results.
the limitations of mathematics (Casti, 1995) and
then by Turing in 1936 with his creation of the The Synthesis of Natural Systems:
Turing Machine. The fundamental idea of the Cellular Automata and Complexity
theoretical Turing Machine is that it can replicate
any mathematical process, which was a big step Once computers were invented and became
in showing that machines were capable of repre- established, several different research areas

30
Agent-Based Modeling

appeared with respect to understanding natural systems. One such area was focused primarily on synthesizing natural systems (Langton, 1989) and was led primarily by the work of Von Neumann and his theory of self-reproducing automata, which are self-operating machines or entities. In a series of lectures, Von Neumann presented a complicated machine that possesses a blueprint of information that controls how the machine acts, including the ability to self-reproduce (Burks & Neumann, 1966). This key insight by Von Neumann to focus not on engineering a machine, but instead on passing information, was a precursor to the discovery of DNA, which would later inspire and lead to the development of genetic algorithm computational search processes. However, despite his many brilliant insights, Von Neumann's machine was quite complicated, since he believed that a certain level of complexity was required in order for organisms to be capable of life and self-reproduction (Levy, 1992). Although it is certainly true that organisms are fairly complex, Von Neumann missed the idea, later discovered, that global complexity can emerge from simple local rules (Gleick, 1987).

With the idea that complexity was needed to produce complex results, with reductionism still being the prevalent scientific methodology employed, and perhaps spurred on by the idea of powerful serial computing capabilities, many scientists began trying to synthesize systems using a top-down system design paradigm. As discussed earlier, top-down systems analysis takes global behavior, decomposes it into small pieces, understands those pieces, and then reassembles the pieces into a system to reproduce or predict future global behavior. This top-down paradigm was primarily employed in the early applications of Artificial Intelligence, where the focus was more on defining the rules of intelligent-looking behavior and creating intelligent solutions rather than on the structure that actually creates intelligence (Casti, 1995). Steeped in the tradition of linear systems, this approach did not prove to be extremely successful in understanding the complex nonlinear systems found in nature (Langton, 1989).

Although Von Neumann believed that complexity was needed to represent complex systems, his colleague Ulam suggested that this self-reproducing machine could be easily represented using a Cellular Automata (CA) approach (Langton, 1989). As the name may suggest, CA are self-operating entities that exist in individual cells that are adjacent to one another in a 2-D space like a checkerboard, and each has the capability to interact with the cells around it. The impact of taking the CA approach was significant for at least two reasons. First, CA is a naturally parallel system where each cell can make autonomous decisions simultaneously with other cells in the system (Langton, 1989). This change from serial to parallel systems was significant because it is widely recognized that many natural systems are parallel (Burks & Neumann, 1966). Second, the CA approach had a significant impact on representing complex systems because CA systems are composed of many locally controlled cells that together create a global behavior. This CA architecture requires engineering a cell's logic at the local level in hopes that it will create the desired global behavior (Langton, 1989). Ultimately, CA would lead to the bottom-up development paradigm now mainly employed by the field of Artificial Life, because it is more naturally inclined to produce the same global behavior that is seen to emerge in complex, nonlinear systems.

Eventually Von Neumann and Ulam successfully created a paper-based self-reproducing CA system which was much simpler than Von Neumann's previous efforts (Langton, 1989). As a result, some scientists began using CA systems to synthesize and understand complexity and natural systems. Probably the most notable and famous use of CA was Conway's "Game of Life." In this CA system, which started out as just a Go board with pieces representing the cells, only three simple rules were used by each cell to determine whether it would be colored white or black based
on the color of cells around it. It was found that depending upon the starting configuration, certain shapes or patterns such as the famous glider would emerge and begin to move across the board, where it might encounter other shapes and create new ones, as if mimicking a very crude form of evolution. After some research, a set of starting patterns was found that would lead to self-reproduction in this very simple system (Levy, 1992). For more information on the "Game of Life," to see some of the famous patterns, and to see the game in action, the reader can go to http://en.wikipedia.org/wiki/Conway's_Game_of_Life. However, this discovery that simple rules can lead to complex and unexpected emergent behavior was not an isolated discovery. Many others would later come to the same conclusions using CA systems, including Schelling's famous work in housing segregation, which showed that the many micro-motives of individuals can lead to macro-behavior of the entire system (Schelling, 2006).

Upon discovering that relatively simple CA systems were capable of producing emergent behavior, scientists started conducting research to further determine the characteristics and properties of these CA systems. Wolfram published a series of papers in the 1980s on the properties and potential uses of 2-dimensional CA. In his papers (Wolfram, 1994), Wolfram creates four classifications into which different CA systems can be placed based on their long-term behavior. A description of these classifications is found in Figure 1. Langton would later take this research further and describe that life, or the synthesis of life, exists only in class 4 systems, which is to say that life and similar complex systems exist between order and complete instability (Levy, 1992). As a result, it was concluded that in order to create complex systems that exhibit emergent behavior, one must be able to find the right balance between order and instability (termed the "edge of chaos") or else the system will either collapse on itself or explode indefinitely. It should be pointed out that the "edge of chaos" concept has been an issue of debate. In particular, there are arguments that suggest that it is not well defined and that experiments attempting to reproduce some of the earlier work concerning the "edge of chaos" have failed (Mitchell, Crutchfield & Hraber, 1994). Others, such as Czerwinski (1998), define nonlinear systems with three regions of behavior, with his transition between the Complex behavior region and the Chaos region aligning with the "edge of chaos" concept. Hill et al. (2003) describe an ABM of two-sided combat whose behavior demonstrated the stage transitions described in (Epstein & Axtell, 1996). The debate, however, seems primarily focused on whether the particular trade-off mechanism used by natural systems is appropriately described by the "edge of chaos" and not on whether a trade-off mechanism exists (Axelrod & Cohen, 2000). Thus, until the debate comes to a conclusion, this paper takes the stance that the "edge of chaos" represents the idea of a trade-off mechanism that is thought to exist in natural systems.

Armed with these discoveries about synthesizing complex systems and emergent behavior, many scientists in the fields of ecology, biology, economics, and other social sciences began using CA to model systems that were traditionally very hard to study due to their nonlinearity (Epstein & Axtell, 1996). However, as technology improved, the lessons learned in synthesizing these nonlinear systems with CA would eventually lead to models where autonomous agents would inhabit environments free from the restriction of their cells. Such a model includes Reynolds' "boids," which exhibited the flocking behavior of birds. However, to better understand agents, their origins, and behaviors, another important perspective of agents, the analysis of natural systems, should be examined.

Figure 1. Cellular automata classifications
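The "Game of Life" dynamics discussed above can be sketched in a few lines of code. The following is an illustrative sketch, not from the chapter: it tracks the set of live cells, applies the standard neighbor counts that correspond to Conway's rules, and advances the famous glider pattern mentioned in the text (the coordinates chosen for the glider are one conventional phase of it).

```python
from itertools import product

def step(live):
    """One Game of Life generation over a set of live (x, y) cells.
    A live cell survives with 2 or 3 live neighbors; a dead cell
    becomes live with exactly 3 live neighbors; all other cells die."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# The famous glider: after 4 generations it reappears shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = set(glider)
for _ in range(4):
    gen = step(gen)
print(gen == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Running the sketch shows the glider reappearing one cell over and one cell down after four generations — a small instance of simple local rules producing coherent global behavior, the emergent property the chapter describes.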

The Analysis of Natural Systems: Cybernetics and Chaos

While Von Neumann was working on his theory of self-reproducing automata and asking, 'what makes a complex system?', Wiener and others were developing the field of cybernetics (Langton, 1989) and asking the question, 'what do complex systems do?' (Ashby, 1956). Although these two questions are related, each is clearly focused on a different aspect of the complexity problem, and each led to a different, but related, path toward discovering the nature of complexity; the latter course of inquiry became cybernetics. According to Wiener, cybernetics is "the science of control and communication in the animal and the machine" (Weaver, 1948) and has its origins in the control of the anti-aircraft firing systems of World War II (Langton, 1989). Upon fine-tuning the controls, scientists found that feedback and sensitivity were very important and began formalizing theories about the control and communication of these systems having feedback. Eventually they would discover that the same principles found in the control of machines were also true for animals, in activities such as recognizing and picking up an object (Weaver, 1948). This discovery would lead cybernetics to eventually be defined by Ashby as a "field concerned with understanding complexity and establishing foundations to study and understand it better" (Ashby, 1956), which includes the study of both machines and organisms as one system entity.

One of the main tools used in cybernetics to begin building theories about complex systems was Information Theory, as it helped scientists think about systems in terms of coordination, regulation, and control. Armed with this new mathematical theory, those studying cybernetics began to develop and describe many theories and properties of complex systems. One of these discoveries was the importance of feedback to the long-term patterns and properties of complex systems. In general, complex systems consist of a large number of tightly coupled pieces that together receive feedback that influences the system's future behavior. Based on this insight, Ashby explains that complex systems will exhibit different patterns depending upon the type of feedback found in the system. If the feedback is negative (i.e., the Lyapunov Exponent λ < 0), then the patterns will become extinct or essentially reach a fixed point. If the feedback is zero (λ = 0), then the pattern will remain constant or essentially be periodic. Finally, if the feedback is positive (λ > 0), then the patterns will grow indefinitely and out of control (Ashby, 1956).

However, just as Von Neumann failed to make certain observations about complexity, the founders of cybernetics failed to consider what would happen if both positive and negative feedback simultaneously existed in a system. It was not until Shaw used Information Theory to show that if at least one component of a complex system has a positive Lyapunov Exponent, and was mixed with


other components with varying exponent values, the system will exhibit chaotic patterns (Gleick, 1987). With Shaw's discovery that complex systems can exhibit chaotic behavior, scientists began considering what further impacts Chaos Theory might have on understanding complex systems.

In general, any system exhibiting chaos will appear to behave randomly, with the reality being that the behavior is completely deterministic (Casti, 1995). However, this does not mean that the system is completely predictable. As Lorenz was first to discover with his simulation of weather patterns, it is impossible to make long-term predictions of a chaotic system with a simulated model because it is infeasible to record all of the initial conditions at the required level of significance (Gleick, 1987). This sensitivity to initial conditions results from the fact that the initial conditions are infinitely many random numbers, which implies they are incompressible and infinitely long. Therefore, collecting these initial conditions to the required level of significance is impossible without a measurement device capable of collecting an infinite number of infinitely long numbers, as well as a computer capable of handling all of those infinitely long numbers.

It may seem that this property of chaos has at some level discredited the previously mentioned Church-Turing Hypothesis by suggesting that these types of natural complex systems cannot be duplicated by a machine. However, there are several other properties of chaos that help those attempting to model and understand these complex systems despite the inability to represent them directly. The first is that chaotic systems have a strange attractor property that keeps these aperiodic systems within some definable region (Gleick, 1987). This is obviously good for those studying these complex systems because it limits the region of study to a finite space. The other property of these complex systems is that they can be generated using a very simple set of rules or equations. By using a small set of rules or equations, and allowing the results to act as a feedback into the system, the complexity of these systems seems to emerge out of nowhere. As one can recall, the same discovery was made in CA when cells with simple rules were allowed to interact dynamically with each other (Gleick, 1987). Therefore, it appears that although natural complex systems cannot be modeled directly, some of the same emergent properties and behavior of these systems can be generated in a computer using simple rules (i.e., the bottom-up approach) without complete knowledge of the entire real system. Perhaps it is not surprising that the idea that complex systems can be represented sufficiently with a simpler model, often called a Homomorphic Model, has long been a fundamental concept when studying complex systems (Ashby, 1956).

Whenever discussing the idea that simple rules can be used to model complex systems, it is valuable to mention fractals, which are closely related to and often a fundamental component of Chaos Theory. First named by Mandelbrot, fractals are geometric shapes that, regardless of the scale, show the same general pattern (Mandelbrot, 1982). The interesting aspect of fractals is that because of their scale-free, self-similar nature they can both fit within a defined space and have an infinite perimeter, which makes them complex and relates them very closely to the effect strange attractors can have on a system. Furthermore, forms of fractals can be observed in nature and, in turn, generated in labs using very simple rules, which shows that they also exhibit the same type of emergent behavior and properties as the previously discussed complex systems (Gleick, 1987). As a result, although fractals, chaos, and complex systems have a lot in common, fractals, due to their physical representation, provide an insightful look into the architecture of complexity. Essentially, fractals are composed of many similar subsystems of infinitely many more similar subsystems of the same shapes, which results in a natural hierarchy and the emergence of other, similar shapes. It is interesting to note that the architecture of fractals directly shows why reductionism does not work


for nonlinear systems. With fractals, a scientist could forever break the fractal into smaller pieces and never be able to measure its perimeter. Another interesting aspect of the architecture of fractals is that they naturally form a hierarchy, which means the properties of hierarchies could possibly be exploited when attempting to model and understand complex systems. For example, the fact that Homomorphic models are effective at modeling complex systems could come from the fact that hierarchical systems are composed of subsystems such that the subsystems can be represented not as many individual entities but as a single entity (Simon, 1962).

There are other properties of chaos which give insight into complex natural systems and ABM. Since it is impossible to satisfactorily collect all of the initial conditions to obtain an exact prediction of a chaotic system, one might ask what would happen if the needed initial conditions were collected, but not to the infinite level of detail. It turns out that such a model, while close for the short term, would eventually diverge from the actual system being modeled. This example brings about another property of chaotic systems; they are very sensitive to the initial conditions (Casti, 1995). Because this sensitivity property of chaos ultimately leads to unreliable results when compared to the actual system, and because the models are only homomorphic, these computer models are unlikely to aid any decision about how to handle the real system. Instead, these models should be used to provide insights into the general properties of a complex system and not for forecasting 'hard' statistics like mean performance. Essentially, this methodology of using a computer for inference and insight harks back to Von Neumann's idea of using a computer to facilitate an experiment with hopes of gaining insights about the system rather than using the computer to generate exact results about the future states of the system (Burks & Neumann, 1966).

The final property of chaos that can give insight into complex natural systems and ABM is that a strange attractor not only limits the state space of the system, but it also causes the system to be aperiodic. In other words, a system with a strange attractor will never return to a previous state; this results in tremendous variety within the system (Casti, 1995). In 1962, Ashby examined the issue of variety in systems and posited the Law of Requisite Variety, which simply states that the diversity of an environment can be blocked by a diverse system (Ashby, 1956). In essence, Ashby's law shows that in order to handle a variety of situations, one must have a diverse system capable of adapting to those various situations. As a result, it is clear that variety is important for natural systems given the diversity of the environment in which they can exist. In fact, it has been seen that entities within an environment will adapt to create or replace any diversity that has been removed, further reinforcing the need for and importance of diversity (Holland, 1995). However, it has also been found that too much variety can be counterproductive to a system because it can grow uncontrollably and be unable to maintain improvements (Axelrod & Cohen, 2000). Therefore, it appears that complex natural systems that exhibit emergent behavior need to have the right balance between order and variety, or positive and negative feedback, which is exactly what a strange attractor provides in a chaotic system. By keeping the system aperiodic within definable bounds, chaotic systems show that the battle between order and variety is an essential part of complex natural systems. As a result, strange attractors provide systems with the maximum adaptability.

Towards Today's ABM: Complex Adaptive Systems

After learning how to synthesize complex systems and discovering some of their properties, the field of Complex Adaptive Systems (CAS), which is commonly referenced as the direct historical root of ABM, began to take shape. The field of CAS draws much of its inspiration from biological systems and is concerned mainly with how complex adaptive behavior emerges in nature from the interaction among autonomous agents (Macal & North, 2006). One of the fundamental contributions made to the field of CAS, and in turn ABM, was Holland's identification of the four properties and three mechanisms that compose all CAS (Holland, 1995). Essentially, these items have aided in defining and designing ABM as they are known today (Macal & North, 2006) because Holland takes many of the properties of complex systems discussed earlier and places them into clear categories, allowing for better focus, development, and research.

The first property of CAS discussed by Holland is Aggregation, which essentially states that all CAS can be generalized into subgroups and that similar subgroups can be considered and treated the same. This property of CAS directly relates to the hierarchical structure of complex systems discussed earlier. Furthermore, not only did Simon discuss this property of complex systems in 1962, he also discussed several other hierarchical ideas about the architecture of complex systems (Simon, 1962) that can be related to two of Holland's mechanisms of CAS. The first is Tagging, which is the mechanism that classifies agents, allows them to recognize each other, and allows their easier observation of the system. This classification means putting agents into subgroups within some sort of hierarchy. The second mechanism is Building Blocks, which is the idea that simple subgroups can be decomposed from complex systems and in turn can be reused and combined in many different ways to represent patterns. Besides being related to Simon's discussion of the decomposability of complex systems, this mechanism also reflects the common theme that simplicity can lead to emergent behavior and the theory behind modeling a complex system. Thus, the elements of Aggregation, Tagging, and Building Blocks can be related back to the results discovered by Simon when studying the architecture of complexity.

Another Holland property of CAS is Nonlinearity, the idea that the whole system output is greater than the sum of the individual component outputs. Essentially, the agents in a CAS come together to create a result such that it cannot be attributed back to the individual agents. This fundamental property, the inspiration behind synthesizing and analyzing complex systems, is the result of dynamic feedback and interactions. These causes of nonlinearity are related to two more of Holland's CAS elements. The first is the property of Flow, which states that agents in CAS communicate and that this communication changes with time. As was the case in CA, having agents communicate with each other and their environment dynamically can lead to the nonlinearity of emergent behavior. Also, within the property of Flow, Holland discusses several interesting effects that can result from changes made to the flow of information, such as the Multiplier Effect and the Recycling Effect. In short, the Multiplier Effect occurs when an input gets multiplied many times within a system. An example of the Multiplier Effect is the impact made on many other markets when a person builds a house. Similarly, the Recycling Effect occurs when an input gets recycled within the system and the overall output is increased. An example of the Recycling Effect is when steel is recycled from old cars to make new cars (Holland, 1995). Interestingly enough, both of these effects can be directly related back to Information Theory and Cybernetics. The other element that relates to nonlinearity is the Internal Model mechanism, which gives the agents an ability to perceive and make decisions about their environment. It is easy to think of this mechanism as being the rules that an agent follows in the model, such as turning colors based on its surroundings or moving away from obstacles. Even simple Internal Models can lead to emergent behavior in complex systems. The link between these three elements is the essential nature of complex systems: nonlinearity.

The final property discussed by Holland is Diversity. Holland states that agents in CAS are


diverse, which means they do not all act the same way when stimulated by a set of conditions. With diverse agents, Holland argues, new interactions and adaptations can develop such that the overall system will be more robust. Of course, the idea that variety creates more robust systems relates directly back to Ashby's Law of Requisite Variety, which in turn relates back to strange attractors and Chaos Theory.

Summary

For all of the positives of ABM there are often just as many, if not more, criticisms of ABM. For the modeler to successfully defend their model and have it be considered worth any more than a new and trendy modeling technique, the modeler needs to have a fundamental understanding of the many scientific theories, principles, and ideas that led to ABM and not just an understanding of the 'how to' perspective on emergence and ABM. By gaining a deeper understanding of the history of ABM, the modeler can better contribute to transforming ABM from a potential modeling revolution (Bankes, 2002) to an actual modeling revolution with real-life implications. Understanding that ABMs were the result of the lack of human ability to understand nonlinear systems allows the modeler to see where ABM fits in as a research tool. Understanding the role that computers play in ABM shows the importance of understanding the properties of computers and in turn their limitations. Understanding that the fundamental properties of CAS have their origins in many different fields (computers, CA, Cybernetics, Chaos, etc.) gives the modeler the ability to better comprehend and explain their model. For example, understanding Chaos Theory can reveal why ABMs are thought to be incapable of providing anything more than insight into the model. By understanding each of these individual fields and how they are interrelated, a modeler can potentially make new discoveries and better analyze their model. For example, by understanding the theory behind Cybernetics and Chaos Theory a modeler is better equipped to determine the impact that certain rules may have on the system or to troubleshoot why the system is not creating the desired emergent behavior. Finally, understanding the history of ABM presents the modeler with a better ability to discern between and develop new ABM approaches.

As is often the case, examining history can lead to insightful views about the past, the present, and the future. It is hoped that this section has shed some light on the origins of ABM as well as the connections between the many fields from which it emerged. Starting with theories about machines, moving on to the synthesis and analysis of natural systems, and ending with CAS, it is clear, despite this article being primarily focused on complexity, that many fields played an important role in developing the multidisciplinary field of ABM. Therefore, in accordance with the Law of Requisite Variety, it appears wise for those wishing to be successful in ABM to also be well versed in the many disciplines that ABM encompasses. Furthermore, many insights can be discovered about the present nature of ABM by understanding the theoretical and historical roots that compose the rules-of-thumb used in today's ABM. For example, knowing the theory behind Cybernetics and Chaos Theory could help a modeler in determining the impact that certain rules may have on the system or in troubleshooting why the system is not creating the desired emergent behavior. Finally, it could be postulated that understanding the history of ABM presents one with a better ability to discern between and develop new ABM approaches. In conclusion, this article has provided an abbreviated look into the emergence of ABM with respect to complexity and has made some new connections to today's ABM that can hopefully serve as a starting point for those interested in understanding the diverse fields that compose ABM.


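As a concrete illustration of that last point — that knowing the theory behind Cybernetics and Chaos helps a modeler anticipate what a rule will do — the sketch below (an illustrative assumption, not an example from this chapter) numerically estimates the Lyapunov Exponent of the logistic map x → r·x·(1−x). It recovers the feedback regimes described earlier: for r = 2.5 the estimate is negative (trajectories settle to a fixed point), while for r = 4.0 it is positive (nearby trajectories diverge and the dynamics are chaotic).

```python
import math

def lyapunov(r, x=0.4, warmup=500, steps=20000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)."""
    for _ in range(warmup):                      # discard the transient behavior
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(steps):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))  # log of the local stretching rate
    return total / steps

if __name__ == "__main__":
    for r in (2.5, 4.0):
        print(f"r = {r}: estimated Lyapunov exponent = {lyapunov(r):+.3f}")
```

Trying other r values (e.g., r = 3.2, where the orbit is a 2-cycle) again yields a negative estimate; parameter values where the estimate passes through zero sit at bifurcation points — a numerical analogue of the boundary between order and chaos discussed above.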
SIMULATION AND AGENT-BASED MODELING VALIDATION: STRIVING TO OBTAIN THE UNOBTAINABLE?

Introduction

Since the introduction of the computer, simulations have become popular in many scientific and engineering disciplines. This is partly due to a computer simulation's ability to aid in the decision making and understanding of relatively complex and dynamic systems where traditional analytical techniques may fail or be impractical. As a result of this ability, the use of simulations can be found in just about every field of study. These fields range anywhere from military applications (Davis, 1992) and meteorology (Küppers & Lenhard, 2005) to management science (Pidd, 2003), social science (Epstein & Axtell, 1996), nanotechnology (Johnson, 2006), and terrorism (Resnyansky, 2007). What can be inferred from this widespread use is that not only are simulations robust in their application, but they are also practically successful. Due in large part to this robustness and success, simulations are becoming a fairly standard tool found in most analysts' toolboxes. In fact, proof that simulations are becoming more of a generic analysis tool and less of a new technique can be found in the increasing number of published articles that use simulations but do not mention it in the title (Küppers, Lenhard, & Shinn, 2006).

However, despite their increasing popularity, a fundamental issue has continued to plague simulations since their inception (Naylor & Finger, 1967; Stanislaw, 1986): is the simulation an accurate representation of the reality being studied? This question is important because typically a simulation's goal is to represent some abstraction of reality, and it is believed that if a simulation does not accomplish this representation, then information gained from the simulation is either worthless or at least not as valuable. Therefore, one can understand why answering the question of simulation validity is so important, because having an accurate simulation could mean that knowledge can be gained about reality without actually observing, experimenting, and dealing with the constraints of reality (Burks & Neumann, 1966). As a result of this potential, many articles over the years have been devoted to the topic of simulation validity, and in particular they each tend to focus on some aspect of the following fundamental questions of simulation validity:

• Can simulations represent reality? If not, what can they represent?
• If a simulation cannot or does not represent reality, then is the simulation worth anything?
• How can one show that a simulation is valid? What techniques exist for establishing validity?
• What roles do or should simulations play today?

Given the considerable amount of time and effort spent on simulation validity, a reasonable question to ask is why simulation validity is still haunting simulationists today. In short, the fundamental reason why it is still an issue today, and will continue to be one, is that the question of a simulation's validity is really a philosophical question found at the heart of all scientific disciplines (Stanislaw, 1986). By considering the above questions, one will notice that they closely resemble some typical philosophy of science questions (Kincaid, 1998):

• Can scientific theories be taken as true or approximately true statements of what is true in reality?
• What methods, procedures, and practices make scientific theories believable or true?

Therefore, the philosophy of science perspective can shed light on the nature of simulation


validity as it is known today as well as the nature of simulation itself. It is from this fundamental philosophy of science perspective that this section will give insights into the fundamental questions of simulation validity, where current practices in simulation validation fit into the general framework of the philosophy of science, and what role simulations play in today's world.

With this objective in mind, this section has four subsections. The first discusses how the relationship between reality and simulation is flawed such that no simulation represents reality. The second describes what is currently meant by the idea of simulation validation in practice. The third discusses the usefulness of simulations today and how simulations are becoming the epistemological tool of our time. In the fourth, the usefulness and role of Agent-Based Models, as well as the special case they present to simulation validation, is discussed.

Why All Simulations are Invalid

It is valuable to first define simulation and discuss how it is typically seen as related to reality. Although there are many definitions of simulation, for this section a simulation is defined as a numerical technique that takes input data and creates output data based upon a model of a system (Law, 2007) (for this section the distinction between theory and model will not be made; instead, the term model will be used to represent them together). In essence, a simulation attempts to show the nature of a model as it changes over time. Therefore, it can be said that a simulation is a representation of a model and not directly a representation of reality. Instead, it is the model's job to attempt to represent some level of reality in a system. In this case, it would appear that a simulation's ability to represent reality really depends upon the model upon which it is built (Davis & Blumenthal, 1991). Although this relationship between a real system, a model, and a simulation has been described in many different ways (Banks, Carson, Nelson, & Nicol, 2001; Law, 2007; Stanislaw, 1986; Winsberg, 1999), a simplified version of the cascading relationship is shown in Figure 2. It is also important to note that simulations today are commonly performed by computers because they are much more efficient at numerical calculations. Therefore, we assume that a simulation is constructed within a computer and that a simulation is a representation of a model which is in turn a representation of a real system (as shown in Figure 2).

Figure 2. Relationship between a system, a theory/model, and a simulation

Having defined the fundamental relationship between a real system, a model, and a simulation, the implications of this relationship can be examined. A simulation's ability to represent reality first hinges on the model's underlying ability to represent the real system. Therefore, the first step in determining a simulation's ability to represent reality is to examine the relationship between a real system and a model of that real system. To begin, it must be recognized that a real system is infinite in its input, how it processes the input, and its output, and that any model created must always be finite in nature given our finite abilities (Gershenson, 2002). From this statement alone it can be seen that a model can never be as real as the actual system, and that instead all that can be hoped for is that the model is at least capable of representing some smaller component of the real system (Ashby, 1970). As a result, all models are invalid in the sense that they are not capable of representing reality completely.

The idea that all models are bad is certainly not a new idea. In fact, it is recognized by many people (Ashby, 1970; Gershenson, 2002; Stanislaw, 1986), and there are even articles written which discuss what can be done with some of these bad models to aid in our understanding and decision making (Hodges, 1991). However, if all models are bad at representing a real system and a model is only capable of representing a small portion of that real system, then how will it be known if a model actually represents what happens in the system? In essence, how can we prove that a model is valid at least in representing some subset of a real system?

The basic answer to this question is that a model can never be proven to be a valid representation of reality. This can be shown by examining several different perspectives. The first perspective involves using Gödel's Incompleteness Theorem (Gödel, 1931). In his theorem, Gödel showed that a theory cannot be proven from the axioms upon which the theory was based. By implication, this means that because every model is based upon some set of axioms about the real system, there is no way to prove that the model is correct (Gershenson, 2002). Another perspective to consider is that there are an infinite number of possible models that can represent any system, and it would therefore take an infinite amount of time to show that a particular model is the best representation of reality. Together these perspectives hearken back to one of the fundamental questions found in the philosophy of science: how can a model be trusted as representing reality?

Although a model cannot be proven to be a correct representation of reality, this does not mean that the second fundamental question of the philosophy of science (what methods and procedures make models believable?) has not been thoroughly explored. There actually exist many belief systems developed by famous philosophers that attempt to provide some perspective on this question (Kincaid, 1998; Kleindorfer & Ganeshan, 1993). For instance, Karl Popper believed that a theory could only be disproved and never proved (Falsificationism), while others believe that a model is true if it is an objectively correct reflection of factual observations (Empiricism) (Pidd, 2003). However, no matter what one adopts as one's philosophy, the fundamental idea that remains is that all models are invalid and impossible to validate. A shining example of this idea can be seen in the fact that, although both are considered geniuses, Einstein still showed that Newton's model of the world was wrong, and therefore it is likely that eventually someone will come up with a new model that seems to fit in better with our current knowledge of reality (Kincaid, 1998). Regardless of how correct a model may be believed to be, it is likely that there will always exist another model which is better.
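The "infinitely many models" perspective can be made concrete with a toy illustration (the data and model names here are invented for this sketch, not taken from the chapter): two models that agree on every recorded observation of a system, and are therefore equally 'validated' by the data, yet make contradictory predictions outside it.

```python
# Hypothetical finite record of a system's behavior at four observed inputs.
observations = {x: x * x for x in (0, 1, 2, 3)}

def model_a(x):
    """Candidate model 1: the system squares its input."""
    return x * x

def model_b(x):
    """Candidate model 2: identical on every observation, different elsewhere."""
    return x * x + x * (x - 1) * (x - 2) * (x - 3)

if __name__ == "__main__":
    # Both models reproduce every observation exactly...
    for x, y in observations.items():
        assert model_a(x) == y and model_b(x) == y
    # ...but disagree about the unobserved input x = 4.
    print(model_a(4), model_b(4))  # prints: 16 40
```

Scaling the extra term x·(x−1)·(x−2)·(x−3) by any constant yields yet another model that fits the same record, so a finite set of observations alone can never single out 'the' correct model.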


The previous discussion on the relationship between a real system and a model leads to the following conjectures about models:

• Models cannot represent an infinite reality, and therefore all models are invalid with respect to a complete reality;
• Models can only hope to represent some aspect of reality and be less incomplete;
• There are infinitely many models that could represent some aspect of reality, and therefore no model can ever be proven to be the correct representation of any aspect of reality; and
• A better model than the current model is always likely to exist in the future.

From these conjectures, it appears that a simulation's capability to represent a real system is bleak based purely on the fact that a model is incapable of representing reality. However, there is another issue with trying to represent a model with a simulation. As seen graphically in Figure 2, another round of translation needs to occur before the transition from the system to the simulation is complete. At first glance, translating a model into a computer simulation seems relatively straightforward. Unfortunately, this does not appear to be the case even when programming (verification) issues are left out of the equation. This conclusion generally arises from the limitations of the computer. For example, because computers are only capable of finite calculations, truncation errors may often occur in the computer simulation when translating input into output via the model. Due to truncation errors alone, widely different results can be obtained from a simulation of a model with slightly different levels of detail. In fact, this result is often seen in chaotic systems such as Lorenz's famous weather simulations, which would later lead to the idea of the Butterfly Effect (Gleick, 1987).

Suppose, however infeasible it may be, that advances in computers would make the issues of memory storage and truncation errors obsolete; then the next issue in a computer simulation's ability to represent a model is the computer's processing speed. Given that computer processing speeds are getting increasingly faster, the question about whether a computer can process the necessary information, no matter how large and detailed the model, within an acceptable time seems to be answered by just waiting until technology advances. Unfortunately, there is a conjecture which states that there is a speed limit for any data processing system. This processing speed limit, better known as Bremermann's Limit (Ashby, 1970), is based upon Einstein's mass-energy relation and the Heisenberg Uncertainty Principle and conjectures that no data processing system, whether artificial or living, can process more than 2×10^47 bits per second per gram of its mass (Bremermann, 1962). From this conjecture, it can be seen that eventually computers will reach a processing limit, and that the amount of information models require to be processed in a respectable amount of time will dwarf Bremermann's Limit. Consider for example how long it would take a computer approximately the size (6×10^27 grams) and age (10^10 years) of the Earth operating at Bremermann's Limit to enumerate all of the approximately 10^120 possible move sequences in chess (Bremermann, 1962) or prove the optimal solution to a 100-city traveling salesperson problem (100!, or approximately 9.33×10^157, different routes). Given that this super-efficient, Earth-sized computer would only be able to process approximately 10^93 bits to date, it would take approximately 10^27 and 9.33×10^64 times longer than the current age of the Earth to enumerate all possible combinations for each problem, respectively. It would take entirely too long and be entirely too impractical to attempt to solve these problems using brute force.

Thus, it is challenging for a computer, given its memory and processing limitations, to accurately represent a model or provide accurate results in a practical amount of time. To combat this issue of computing limitations, simulationists often

41
Agent-Based Modeling

build a simulation of a model that incorporates many assumptions, abstractions, distortions and non-realistic entities that are not in the model (Küppers & Lenhard, 2005; Morrison & Morgan, 1999; Stanislaw, 1986; Winsberg, 2001; Winsberg, 2003; Winsberg, 2006b). Such examples include breaking continuous functions into discrete functions (Küppers & Lenhard, 2005), introducing artificial entities to limit instabilities (Küppers & Lenhard, 2005), and creating algorithms which pass information from one abstraction level to another (Winsberg, 2006a). From these examples, we see that the limitations of computing make translating a model into a simulation very unlikely to result in a completely valid representation of that model. This is why simulation building is often considered more of an art than a science: getting a simulation to appear to represent a model in a computer requires a great deal of tinkering, and the end result is a simulation that only appears to represent the model. As a result of this discussion, it can be seen that not only are there an infinite number of models that can represent some aspect of reality, but there is probably also an infinite number of simulations that can represent some aspect of a model.

Given the above discussion, the following conjectures can be made about the ability of a computer simulation to represent a model:

• Computers are only capable of finite calculations and finite storage; therefore, truncation errors and storage limitations may significantly impact the ability of a computer to represent a model;
• Computers can only process information at Bremermann's Limit, making it impossible for them to process large amounts of information about a model in a practical amount of time;
• To attempt to represent a model with a computer simulation either requires sacrificing accuracy to get results or sacrificing time to get better accuracy;
• Given the limitations of computing and the trade-off between accuracy and speed, there are many ways to try to represent a model with a simulation; and
• Because there are many possible simulations that can represent an aspect of a model, it is impossible to have a completely valid and still useful simulation of a model.

These conjectures illuminate why translating a model into a computer simulation is not a straightforward process and why, many times, a simulationist is simply tinkering with a simulation until it appears to have some representation of the model. This complexity in translating a model into a simulation only makes the question of a simulation's ability to represent reality more pressing.

A main goal of this section is to explain why all simulations are invalid representations of reality. By examining the relationship between the real system, the model attempting to represent the system, and the simulation attempting to represent the model, the following summary conjectures can be made about simulation validity:

1. A real system is infinite.
2. A model cannot represent an infinite real system and can only hope to be one of the infinitely many possible representations of some aspect of that real system.
3. As a result of 1 and 2, a model is an invalid representation of the real system and cannot be proven to be a valid representation of some aspect of the real system.
4. There are many possible computer simulations that can represent a model, and each computer simulation has trade-offs between the accuracy of the results and the time it takes to obtain those results.
5. As a result of 4, a simulation cannot be said to be a completely valid representation of a model.


6. Therefore, a computer simulation is an invalid representation of a complete real system and at the very best cannot be proven to be a valid representation of some aspect of a real system.

The above conjectures lay out the issues with a simulation's ability to represent reality. Furthermore, it can be seen why simulation validation continues to be a major issue. If simulations cannot be proven to be valid and are generally invalid representations of a complete real system, then what value do they serve? However, this question is not the primary source of research in simulation validation. Instead, much of the focus still remains on how one can validate a simulation. Given the conjecture that all simulations are invalid, or impossible to prove to be valid, what possibly could all of these articles mean when discussing simulation validation?

What Does Simulation Validation Really Mean in Practice?

Even though it has been shown that generally all simulations are invalid with respect to a real system, there is still a fair amount of literature that continues to attempt to show how a simulation can be validated. It may initially appear that those practicing simulation building are unaware of the downfalls facing simulation's ability to represent reality, but this is not the case (Banks et al., 2001; Law, 2007). So what are these articles and books discussing when they are focused on simulation validation? Insight into what practitioners really mean by simulation validation can be seen from their own definitions of validation:

Validation is the process of determining whether a simulation model is an accurate representation of the system, for the particular objectives of the study. (Fishman & Kiviat, 1968; Law, 2007)

Model validation is substantiating that a model, within its domain of applicability, behaves with satisfying accuracy consistent with the modeling and simulation objectives... (Balci, 1998; Sargent, 2005)

Validation is concerned with building the right model. It is utilized to determine that a model is an accurate representation of the real system. Validation is usually achieved through the calibration of the model, an iterative process of comparing the model to actual system behavior and using the discrepancies between the two, and the insights gained, to improve the model. This process is repeated until the model accuracy is judged to be acceptable. (Banks et al., 2001)

Validation is the process of determining the manner in which and degree to which a model and its data is an accurate representation of the real world from the perspective of the intended uses of the model and the subjective confidence that should be placed on this assessment. (Davis, 1992)

These definitions indicate that in practice simulation validation takes on a somewhat subjective meaning. Instead of validation just being the process of determining the accuracy of a simulation to represent a real system, all of the above authors are forced to add the clause 'with respect to some objectives.' By adding this caveat, simulationists have inserted some hope that a model is capable of being classified as valid for a particular application, even though any model they create will at best be a less-than-perfect representation of reality. With the insertion of this clause, the issue of validation takes on a completely new meaning. No longer is absolute validity the problem; the problem is now proving the relative validity of a simulation model with respect to some set of objectives.

In order to address this new problem, many articles have been published which provide a different perspective on this relative validity


problem. One of these perspectives is to attempt to evaluate the relative validity of the simulation by treating it not as a representation of a model/theory but as a miniature scientific theory by itself, and then to use the principles from the philosophy of science to aid in proving or disproving its validity (Kleindorfer & Ganeshan, 1993; Kleindorfer, O'Neill, & Ganeshan, 1998). As first introduced by Naylor and Finger in 1967 (Naylor & Finger, 1967), many authors have since thoroughly examined the many beliefs that have emerged from the philosophy of science and have related them to simulation validity (Barlas & Carpenter, 1990; A. H. Feinstein & Cannon, 2003; Klein & Herskovitz, 2005; Kleindorfer & Ganeshan, 1993; Pidd, 2003; Schmid, 2005). Often these authors provide insightful views into simulation validation because the philosophy of science had been actively discussing the validity of theories long before the inception of simulation (Kleindorfer & Ganeshan, 1993).

Although the introduction of scientific philosophy has certainly provided new perspectives and points of view on the subject of validity to simulationists (Kleindorfer & Ganeshan, 1993; Schmid, 2005), it could be said that overall the philosophy of science has brought with it more questions than answers. There are several key reasons for this. The first is that for every belief system in the philosophy of science there are both advantages and disadvantages. For example, simulation validation is often favorably compared to Falsificationism, which states that a simulation can only be proved false and that, in order for a simulation to be considered scientific, it must first undergo scrutiny attempting to prove that it is false (Klein & Herskovitz, 2005). However, under this belief system it is difficult to determine whether a model was rejected based on testing and hypothesis errors or whether the model is actually false (Kincaid, 1998). Another source of concern for using the philosophy of science is that, as was discussed earlier, it is impossible to prove that a model/theory is valid at all. Therefore, using the philosophy of science to aid in simulation validity is more applicable in providing insights into the fundamental philosophical questions stemming from validation, as well as potential validation frameworks, than in actually proving the validity of a simulation.

Besides this high-level look at the validation of simulations as miniature theories, another perspective considers methods and procedures which can aid simulationists in proving the relative validity of their simulation, given the assumption that it can be proven. This assumption is by no means a radical one. It makes sense that if one defines the objective of the simulation to include the fact that it cannot completely represent the real system, then it is possible for a simulation to meet the needs of a well-defined objective and therefore have relative validity. With a goal in mind to find these methods and procedures, a plethora of techniques has been developed, along with guidance on when and how they should be applied within systematic frameworks to aid simulationists in validating their models (Balci, 1998; Sargent, 2005). As another indication of how much this field has been explored, even the idea of validation itself has been broken down into many different types of validation, such as replicative, predictive, structural, and operational validity (A. H. Feinstein & Cannon, 2002; Zeigler, Praehofer, & Kim, 2000).

Clearly a lot of effort has been spent on summarizing and defining how one can go about validating a simulation given some objectives. It would be redundant to discuss all of them here in detail. However, to better understand what simulation validation means in practice, it is worthwhile to consider what all of these techniques and ideas have in common. Whether the validation technique is quantitative, pseudo-quantitative, or qualitative, each technique has its advantages and disadvantages, and each is subjective to the evaluator. Although the statement that all of these techniques are subjective may seem surprising, it can actually be easily shown for all the validation techniques developed today. For example, when qualitatively/


pseudo-quantitatively comparing the behavior of the simulation and the real system, it is clear that a different evaluator may have a different belief about whether the behaviors of the two systems are close enough for the simulation to be considered valid. For a quantitative example, consider statistically comparing two systems, where the evaluator is required to subjectively select a certain level of significance for the particular hypothesis test used. For those familiar with statistical hypothesis testing, it is clear that different levels of significance may result in different conclusions regarding the validity of the simulation. From these examples, it can be seen that even though many techniques have been developed, none of them can serve as a definitive method for validating a simulation. They are all subjective to the evaluator.

Given that there is no technique that can prove the relative validity of a simulation, the fundamental question of this section resurfaces: what does simulation validation really mean in practice? In practice, simulationists are not trying to validate that a simulation is a representation of a real system but are instead attempting to validate the simulation according to some objective, which also cannot be systematically proven to be true. Therefore, one must consider other alternatives when examining what is really occurring when a simulationist is trying to validate their model according to some objective. From the vast number of techniques, guides, and systems proposed to validate a simulation, it can be conjectured that simulation validation in practice is really the process of persuading the evaluators to believe that the simulation is valid with respect to the objective. In other words, in practice whether a simulation is validated by the evaluators depends upon how well the simulationist can 'sell' the simulation's validity by using the appropriate validation techniques that best appeal to the evaluator's sense of accuracy and correctness.

The idea that simulation validation in practice is really the process of selling the simulation to the evaluator may not appeal to scientists, engineers, and simulationists, but there is a fair amount of evidence that supports this conclusion. First of all, any simulation book or article focused on validation will frequently stress the importance of knowing the evaluator's expectations and getting the evaluator to buy into the credibility of the simulation (Banks et al., 2001); some will even go as far as to say explicitly that one must sell the simulation to the evaluator (Law, 2007). Others indicate that validating a simulation is similar to getting a judicial court system to believe in the validity of the simulation (A. H. Feinstein & Cannon, 2003). Generally, those practicing simulation understand that validation is more about getting the evaluators to believe in the simulation's validity and less about getting a truly valid simulation (which is impossible anyway). From this statement, an interesting insight is that simulation validation is not completely removed from society and other social influences. In fact, it appears that simulation validation in practice requires the simulationist to have the ability to actively interact with the community of evaluators and attempt to persuade that community to accept the simulation as correct. As a result, some have argued that simulation validation in practice is similar to how any social group makes a decision (Pidd, 2003).

In trying to determine what simulation validation really means in practice, several fundamental points arise:

• In practice, a simulation is validated based on some objective and not on being a true representation of the real system;
• All of the techniques developed to prove the validity of a simulation in practice are subjective to the evaluator and therefore cannot systematically prove the relative validity of the simulation; and
• Validating a simulation in practice depends upon how well the simulationist sells the validity of the simulation by using


the appropriate validation techniques that best appeal to the evaluator's sense of accuracy and correctness. Furthermore, this means that simulation validation in practice is susceptible to the social influences permeating the society within which the simulation exists.

Simulation validation in practice really seems to have little to do with actual validation, where validation is the process of ensuring that a simulation accurately represents reality. Instead, simulation validation in practice is more concerned with getting approval from evaluators and peers of a community relative to some overall objective for the simulation; simulation validation in practice is really the process of getting the simulation sanctioned (Winsberg, 1999), not validated. Perhaps it is time for simulationists to consider adopting the term simulation sanctioning instead of simulation validation, since sanctioning implies a better sense of what is actually occurring, whereas validation implies that the truth is actually being conveyed in the simulation when it is not. However, it is unlikely that this transition will occur, given that simulation validation today is mainly concerned with getting evaluators to buy into the results of the simulation, that the current paradigm in simulation has been established, and that saying a simulation is valid sounds much better to a seller than saying a simulation is sanctioned. This brings up an interesting dilemma for simulationists: if simulations cannot represent reality, then what good are they?

What Good are Simulations?

Since simulations in practice are sanctioned and not validated, the next logical question to ask is: if simulations are incapable of representing reality and therefore are incapable of providing true results with respect to the system, then what good are simulations? To answer this question, it is first important to jump out of the world of logic and philosophy and see that, practically speaking, simulation would not be growing in popularity if it did not provide some demonstrable good to those using the technique. In fact, it could be said that the continuing widespread use of simulation in practice, the number of commercial simulation software packages, and the number of academic publications involving simulation are clear indications that simulation has provided enough robustness to be considered useful and indeed successful (Küppers et al., 2006).

At first glance, the success of simulation in practice may appear to be a contradiction of the previous statement that all simulations are invalid. However, this is resolved by making clear that all simulations are invalid with respect to an absolute real system, which does not mean that a simulation is not capable, at some abstraction level, of getting relatively 'close' to representing a small portion of an absolute real system. For example, a simulation of a manufacturing system may come very close to representing the outcome of a process, but nevertheless it is still invalid with respect to the actual system because of all the reasons discussed earlier. This same conclusion can also be related to purely scientific theories, such as Newton's laws of motion. Although they can produce reliable results at certain abstraction levels, overall they have been shown not to be completely correct representations of reality. Given the popularity of simulation, it appears that, in spite of the fact that they are not capable of representing an absolute real system, simulations are capable of producing sufficiently close and robust results to become practically useful (Knuuttila, 2006; Küppers & Lenhard, 2006). Simulations are useful because they are capable of providing reliable results without needing to be a true representation of a real system (Winsberg, 2006b). However, this is often more true for some types of simulations and real systems than others.

In general, it can be said that the success of a simulation at obtaining reliable and predictable results depends upon how well the real system is
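The earlier point that quantitative validation remains subjective, because different significance levels can yield different verdicts on the same data, can be sketched in a few lines. The summary statistics below are invented purely for illustration, and the normal-approximation test is an ordinary textbook device, not a technique proposed in this chapter:

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical standardized difference between the simulated and observed
# mean response (the value 2.2 is made up for this sketch).
z = 2.2
p = two_sided_p(z)  # roughly 0.028 for this z

for alpha in (0.05, 0.01):
    verdict = "reject" if p < alpha else "cannot reject"
    print(f"alpha={alpha}: {verdict} the hypothesis that the systems agree")
```

The same data are judged significantly different at a significance level of 0.05 but not at 0.01, so the validity verdict hinges entirely on the evaluator's subjective choice of level.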


understood and studied, because a well-understood system will result in better underlying theories that form the foundation of the simulation. When a simulation is used to represent a well-understood system, the reason for using the simulation is not solely to try to understand how the real system may operate but instead to take advantage of the processing power and memory of the computer as a computational device. In this role, the simulation is likely to produce reliable results because it is simply being used to express the calculations that result from a well-established theory. A typical example of this can be found in a standard queuing simulation. Since queues have been extensively studied and well-established laws have been developed (Jensen & Bard, 2003), it is likely that a sanctioned simulation of a queue will be capable of producing reliable results and predictions, because the simulation's role is simply to be a calculator and data-storage device. For these types of well-understood systems, simulation can provide predictive power. However, as less is known about the system, the likelihood of the simulation providing reliable data decreases to the point where the usefulness of the simulation takes on a new meaning.

A simulation takes on a new role, that of a research instrument acting as a mediator between theory and the real system (Morrison & Morgan, 1999), when less is understood about the real system, because the theories about the real system are not developed enough to truly provide reliable predictions about future states of that system. One can think of a simulation in this case as being a research tool in the same sense that a microscope is a research tool (Küppers & Lenhard, 2005). While the microscope provides insight into the real system, it does not directly reflect the nature of the real system and cannot directly provide reliable predictions about the real system. The microscope is only capable of providing a two-dimensional image of a dynamic three-dimensional real system. However, the microscope is capable of mediating between existing theories about the real system and the real system itself. Experiments can be designed and hypotheses tested based on information gained from the microscope. In this same way, a mediating simulation is capable of providing insight into the real system and the theory on which it was built without being a completely valid representation of that real system. Although only formally recognized recently (Morrison & Morgan, 1999), the idea that computers can be used to facilitate experiments and mediate between reality and theory has existed for a relatively long time. In the early years of computing, John von Neumann and Stanislaw Ulam espoused the heuristic use of the computer as an alteration of the scientific method, replacing real-system experimentation with experiments performed within a computer (Burks & Neumann, 1966).

An interesting aspect of mediating simulations, which is valuable to consider, is the apparent interplay between the simulation and the real system. Just as a microscope started off as a relatively crude research tool that only provided a limited view of the real system and with time was improved to provide continuously deeper understandings of the real system, so too can a simulation of a real system be improved to gain better insights into that real system (Winsberg, 2003). Furthermore, as the real system is better understood, so can the simulation of that real system be improved to allow new insights to be gained about the real system. Examples of this mediating role of simulation can be seen in many different fields. One such example can be found in the field of nanotechnology, where without computer simulations to aid in the complex and difficult experiments, certain advances in nanotechnology would not be possible (Johnson, 2006). Another example can be found in any complex production system, where the simulation provides insights into how the real system might behave under different circumstances. In the world of ABM, "toy models" such as ISAAC (Irreducible Semi-Autonomous Adaptive Combat) have been
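The queuing example of the 'calculator' role can be made concrete. A minimal single-server (M/M/1) sketch can be checked against the well-established analytic result W = 1/(mu − lambda) for the mean time in system; the parameter values below are arbitrary, and this is an illustrative sketch rather than a model from the chapter:

```python
import random

def mm1_mean_time_in_system(lam, mu, n_customers, seed=42):
    """Crude M/M/1 simulation: average time customers spend in the system."""
    rng = random.Random(seed)
    arrival = server_free = total = 0.0
    for _ in range(n_customers):
        arrival += rng.expovariate(lam)             # Poisson arrivals
        start = max(arrival, server_free)           # wait if the server is busy
        server_free = start + rng.expovariate(mu)   # exponential service time
        total += server_free - arrival              # this customer's time in system
    return total / n_customers

lam, mu = 0.5, 1.0                  # arrival and service rates (utilization 0.5)
simulated = mm1_mean_time_in_system(lam, mu, 200_000)
analytic = 1 / (mu - lam)           # classical M/M/1 law: W = 1/(mu - lambda)
print(f"simulated: {simulated:.3f}, analytic: {analytic:.3f}")
```

With a long enough run the simulated average settles near the analytic value, which is exactly the sense in which a simulation of a well-understood system acts as a calculator for an established theory.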


used to explore and potentially exploit behavior that emerges in battlefield scenarios (Ilachinski, 2000). A final example can be seen in the field of physics, where some "think of sciences as a stool with three legs - theory, simulation, and experimentation - each helping us to understand and interpret the others" (Latane, 1996).

It may seem odd to conduct an experiment using a computer simulation of a real system when the real system would seem to be the only way to guarantee that the conclusions obtained are impacted by the real system and not by an error built into the simulation. However, since a real system is infinite in nature, error is a natural aspect of any finite experiment conducted on that real system. For example, attempting to measure the impact different soils have on the growth rate of a plant will always involve variation and error, simply because the experimenter is trying to conduct a finite experiment and make finite measurements on an infinite system. Therefore, error is always present in experiments, regardless of whether they are done on the real system or on a simulation of the real system (Winsberg, 2003). A huge difference is that simulation errors are largely repeatable, giving the researcher greater control for potential insight into the real system. Nevertheless, this does not mean that the simulation should not be properly sanctioned prior to being used to facilitate an experiment. In order for a simulation to be a mediator in this sense, the simulation must contain some sort of representation of the real system and be sanctioned by the evaluators; otherwise, the results produced may not be reliable enough to provide any insight.

As the relationship between the real system, the theory, and the simulation blurs due to the lack of knowledge about the real system, what we mean by a simulation representing some part of reality also begins to become hazy. The more traditional belief is that for simulations to represent some aspect of a real system they should have at least some structural resemblance to the real system. However, even a simulation of a real system that is very well understood contains many built-in assumptions and falsifications which do not match what is known about the structural aspects of the real system. This is especially true for simulations when not much is known about the real system, because how can a simulation be a structural representation of a real system when the structure of the real system is not fully understood? This flaw of structural representation has perhaps led to one of the current paradigms in simulation, where the performance of a simulation, i.e., how well the simulation translates realistic input into realistic output, and not its accuracy, is the fundamental benchmark in determining the usefulness of that simulation (Knuuttila, 2006; Küppers & Lenhard, 2005). Indeed, many of the technical validation techniques proposed today emphasize the use of this realistic-output or black-box paradigm (Balci, 1995; Banks et al., 2001; Law, 2007; Sargent, 2005).

In general, this shift away from white-box evaluation (structural representation) towards black-box evaluation (output is all that matters) (Boumans, 2006) leads to several interesting conclusions. The first is that it indicates the general acceptance of the idea found in Simon's The Sciences of the Artificial. Simon argues that artificial systems (man-made ones, such as simulations) are useful because it is not necessary to understand the complete inner workings of a real system, due to the fact that there are many possible inner workings of an artificial system that could produce the same or similar results (Simon, 1996). One way to think about this is to consider whether the differences between the inner workings of a digital clock and an analog clock really matter if they both provide the current time. Clearly, someone interested in knowing the correct time would be able to gain the same amount of information from either clock even though both clocks are structurally very different. This fundamental aspect of different structures producing the same or similar results should not be a complete surprise; as already discussed, there are potentially an infinite number
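Simon's clock observation has a direct programming analogue. The sketch below assumes nothing beyond the text's point: two structurally unrelated implementations can be indistinguishable to a black-box evaluator who compares only outputs:

```python
def fib_recursive(n):
    """Tree recursion: structurally nothing like the loop below."""
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    """Constant-space loop producing the same sequence."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# From the outside, the two 'systems' are indistinguishable:
assert all(fib_recursive(n) == fib_iterative(n) for n in range(20))
```

A black-box evaluation would judge the two identical, even though their inner workings differ as much as those of the digital and analog clocks above.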


of possible models which can represent a single abstraction of a real system. Another conclusion drawn from the shift towards black-box evaluation is that simulations are beginning to catch up with and pass the theoretical understanding of the systems that they are being built to represent. The question now becomes: what possible usefulness can a simulation have when little to nothing is known about the underlying principles of the real system of interest?

At first glance, the usefulness of a simulation for a system that is not well understood appears to be nonexistent, because there is clearly nothing from which to build the simulation. But it is from this lack of underlying theory and understanding of the real system that the usefulness of this type of simulation becomes evident. Consider what a simulationist would encounter if asked to build a simulation of a poorly understood system. The first steps, besides trying to understand the needs of the evaluators, would be to observe the system and then attempt to generate the same behavior observed in the real system from within the simulation. This ability of a simulation to act as a medium in which new theories about a real system can be generated points to the third role of simulation, that of a theory or knowledge generator. Although certainly not a traditional means, using a simulation as a medium to generate new theories and ideas about the real system is no different in any way from using pencil and paper or simply developing mental simulations about the real system (Ashby, 1970). One could observe a system and attempt to test the implications of a theory by using pencil and paper, or develop elaborate thought experiments such as those made famous by Einstein, just as easily as one could use a simulation to test whether a theory is capable of representing the real system. Examples of simulations being used for this role abound in the new simulation paradigm of ABM, where simulationists are typically trying to understand problems that are difficult for us to grasp due to the large amount of dispersed information and the high number of interactions that occur in these systems (more about the special case that ABM simulations present to the world of simulation is discussed below) (Epstein & Axtell, 1996; Miller & Page, 2007; North & Macal, 2007).

There are several clear advantages to using simulations as generators, the most important of which is the ability of a simulation to create 'dirty' theories of systems where the simplicity of the real system eludes our grasp. Typically, scientific theories are idealized for a particular case and do not allow for much deviation from these idealizations. It could be thought that these idealizations are partly the result of the desire humans have to make simplifications and elegant equations to represent the complex world we live in. However, simulations allow a theorist to build a representation of a system within a simulation that is capable of having many non-elegant aspects, such as ad-hoc tinkering, engineering, and the addition of logic controls such as if-then statements (Küppers & Lenhard, 2006; Winsberg, 2006a). As a result of this flexibility, it could be predicted that as more problems venture into the realm of Organized Complexity (a medium number of variables that organize into a highly interrelated organic whole) (Weaver, 1948), the use of simulation to generate new 'dirty' theories about the real system will be needed, because these systems are irreducible and typically hard to understand, to the point that simulationists are often surprised by the results obtained from them (Casti, 1995).

By using simulations as theory generators, simulationists can attempt to generate a potential theory that explains the phenomena observed in the real system, just as a more traditional scientist would do, but without the simulation medium. Therefore, there are no apparent implications of using a simulation as a generator for the philosophy of science (Frigg & Reiss, 2008). As a result, those using simulations in this manner should perhaps not consider themselves as disconnected from science because they are an engineer or computer

49
Agent-Based Modeling

Figure 3. Different roles of simulation

programmer by trade, but perhaps should attempt some of the implications of using simulations as
to ascribe to the practices, rigor, and roles taken generators have been discussed, it would be valu-
on by scientists to make true progress in the able to further breakdown the use of simulations
practitioner’s field of interest. Furthermore, as as generators in order to better understand their
simulationists and scientists continue to push the relation to other simulations, the limitations they
limits of simulations beyond that of the current present, and to evaluate what might be lacking due
knowledge of some system of interest, it can the fact that the use of simulation in this manner
be seen why some researchers are considering is relatively new. Since ABM directly fits into this
simulation to be the epistemological engine of category, the deeper issues involved with generator
our time (Ihde, 2006). simulations are discussed next.
Despite the inability of a simulation to com-
pletely represent some abstraction of reality, simu- What Good is ABM?
lation has still proven useful in several different
ways. By making the connection between the level Despite the fact that any simulation paradigm
of understanding known about the real system and could potentially be used as a generator, probably
the simulation, a clearer picture is rendered about the most popular one used today is ABM. Emerg-
what simulation is capable of as well as where it fits ing from Cellular Automata, Cybernetics, Chaos,
into today’s scientific and engineering endeavors. Complexity, and Complex Adaptive Systems,
This continuous relationship, captured in Figure 3, ABM is used today to understand and explore
means that when much is known about the system, complex, nonlinear systems where typically inde-
the simulation tends to take on more of a predic- pendent and autonomous entities interact together
tor role, similar to that of a calculator. As less is to form a new emergent whole. An example of
known about the real system, the simulation takes such a system can be observed by the flocking
on the role of a mediator between the system and behavior of birds (Levy, 1992). Although each bird
the theory, much like the role a microscope plays is independent, somehow they interact together to
a mediator between microscopic systems and the form a flock, and seemingly without any leading
understanding of those systems. Finally, as the entity, manage to stay in tight formations. With
understanding of the system approaches almost this in mind, simulationists using ABM attempt to
nothing, the simulation can help generate potential discover the rules embedded in these individual
theories about the nature of the system. Although entities that could lead to the emergent behavior

50
Agent-Based Modeling

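The flocking example can be sketched as a minimal agent-based model in the spirit of Reynolds-style "boids". All rules, weights, and names below are illustrative assumptions, not taken from the chapter: each bird steers only by three local rules (cohesion, separation, alignment), yet a global measure of heading disorder falls as an aligned flock emerges without any leader.

```python
import math
import random

def step(boids, r=15.0, w_coh=0.01, w_sep=0.05, w_ali=0.1, max_speed=2.0):
    """One tick of a toy flock. Each boid is [x, y, vx, vy] and steers only
    by local rules: cohesion (move toward neighbours), separation (avoid
    very close neighbours), and alignment (match neighbours' velocity)."""
    new = []
    for i, (x, y, vx, vy) in enumerate(boids):
        nbrs = [b for j, b in enumerate(boids)
                if j != i and math.hypot(b[0] - x, b[1] - y) < r]
        ax = ay = 0.0
        if nbrs:
            cx = sum(b[0] for b in nbrs) / len(nbrs)   # neighbours' centre
            cy = sum(b[1] for b in nbrs) / len(nbrs)
            ax += w_coh * (cx - x)                     # cohesion
            ay += w_coh * (cy - y)
            for b in nbrs:                             # separation at close range
                if math.hypot(b[0] - x, b[1] - y) < 2.0:
                    ax -= w_sep * (b[0] - x)
                    ay -= w_sep * (b[1] - y)
            avx = sum(b[2] for b in nbrs) / len(nbrs)  # alignment
            avy = sum(b[3] for b in nbrs) / len(nbrs)
            ax += w_ali * (avx - vx)
            ay += w_ali * (avy - vy)
        vx, vy = vx + ax, vy + ay
        speed = math.hypot(vx, vy)
        if speed > max_speed:                          # cap the speed
            vx, vy = vx * max_speed / speed, vy * max_speed / speed
        new.append([x + vx, y + vy, vx, vy])
    return new

def heading_disorder(boids):
    """Circular variance of headings: near 1 for random directions,
    near 0 when the flock is aligned."""
    c = sum(math.cos(math.atan2(b[3], b[2])) for b in boids) / len(boids)
    s = sum(math.sin(math.atan2(b[3], b[2])) for b in boids) / len(boids)
    return 1.0 - math.hypot(c, s)

random.seed(1)
flock = [[random.uniform(0, 10), random.uniform(0, 10),
          random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(30)]
before = heading_disorder(flock)
for _ in range(300):
    flock = step(flock)
after = heading_disorder(flock)
print(round(before, 3), round(after, 3))  # disorder drops as a flock emerges
```

The point of the sketch is the one the chapter makes: the flock-level order appears nowhere in the code for an individual bird; it emerges from the interactions.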
As expected, theories generated by ABM face many of the same problems encountered by more traditional methods of generating scientific theories. Such fundamental problems include whether the truth of a system can be known and what methods can be used to sanction these theories. Although the issue of truth will always remain for any scientific theory, when it comes to the methods needed to sanction ABM it appears that several new problems begin to emerge, at least in terms of simulation. These problems typically come from the fact that currently ABM is used to investigate problems where no micro-level theory exists (it is not known how the individual entities operate) and where it is often very difficult to measure and collect macro-level data (the emergent behavior) from a real system and compare it to the data generated from the simulation (Bankes, 2002; Levy, 1992; Miller & Page, 2007; North & Macal, 2007). Ultimately, this current characteristic of these complex problems means that the traditional and accepted quantitative sanctioning techniques, which promote risk avoidance based on performance and on comparing outputs, are not applicable (Shinn, 2006), since too little is known about these systems. Thus, several interesting conclusions can be made about ABM and generator simulations.

The first is that because ABM is relatively new as a paradigm, either accepted techniques to sanction these simulations have not yet been created to match the current sanctioning paradigm, or a new sanctioning paradigm with new sanctioning techniques is needed for generator simulations. For the first statement to be the case, the underlying theory behind the real system being studied by these generator simulations would need to be known to the point that the simulation would no longer be a generator but instead a predictor; the current sanctioning paradigm has the majority of its interest in predictability and in turn has created sanctioning techniques that are mainly focused on this paradigm. Therefore, it is impossible for generator and ABM simulations by their nature to fit into the current predictive sanctioning paradigm. Furthermore, if an ABM simulation ever became predictable, it would no longer be a generator simulation, and traditional quantitative sanctioning techniques could be used. As a result, simulationists using ABM today as a generator should not be focused on meeting current predictive sanctioning paradigms, but should shift their focus to creating a new generator sanctioning paradigm and the development of new techniques to match. Attempting to create this new sanctioning paradigm will certainly not be an easy task, but it is absolutely necessary if ABM simulations as generators are to become acceptable for what they are: generators of hypotheses about complex systems. Only after this paradigm has been created can both simulationists and evaluators come to a firm conclusion about whether a generator simulation should be sanctioned as a scientific research tool or an engineering alternative analysis tool.

Although the overall lack of micro-level theory for these types of problems may push the current limits of today's simulation sanctioning for ABM, it should by no means be seen as a completely new problem. For almost every simulation, the simulationist must make some assumption or build some theory about how the system works in order for the overall simulation to function appropriately. What ABM and generator simulations really do is take this engineering and ad-hoc notion to the extreme. As a result, it can be seen why this departure from the engineering 'norm' in the world of simulation could produce a fair number of skeptics. In fact, even von Neumann was skeptical about using a computer in this manner, even though he recognized the practical usefulness of this approach (Küppers & Lenhard, 2006). However, this type of skepticism is expected whenever the limits of any paradigm accepted by a community are pushed (Pidd, 2003).
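The "comparing outputs" style of quantitative sanctioning mentioned above can be illustrated with a hedged sketch. The data, the accuracy band of plus or minus 1.0, and the normal-approximation interval are all invented for illustration; the sketch merely shows the classical move of placing a confidence interval around the difference between observed system output and simulation output, exactly the kind of technique that becomes unusable when macro-level data cannot be collected.

```python
import math
import random
import statistics as st

def mean_diff_ci(system, model, z=1.96):
    """Approximate 95% confidence interval for the difference of means
    between observed system output and simulation output (independent
    samples, normal approximation)."""
    d = st.mean(system) - st.mean(model)
    se = math.sqrt(st.variance(system) / len(system) +
                   st.variance(model) / len(model))
    return d - z * se, d + z * se

random.seed(7)
# Hypothetical data: observed waiting times vs. times produced by a model.
system_obs = [random.gauss(10.0, 2.0) for _ in range(50)]
model_out = [random.gauss(10.4, 2.0) for _ in range(50)]

lo, hi = mean_diff_ci(system_obs, model_out)
# If the whole interval falls inside a pre-agreed accuracy band (here an
# arbitrary +/- 1.0 time units), the model's mean behaviour would be
# sanctioned for this experimental frame.
acceptable = lo > -1.0 and hi < 1.0
print(round(lo, 2), round(hi, 2), acceptable)
```

For a generator simulation of a poorly understood system, the left-hand column of this comparison (trustworthy observed data) is exactly what is missing, which is why the chapter argues a different sanctioning paradigm is needed.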
Overall, what is needed to gain acceptance of generator simulations and a new sanctioning paradigm will be time, and compelling evidence that these ad-hoc simulations eventually move up the continuum of understanding about the real system such that they become mediators and then predictors.

Until the complex systems simulated by ABM are well understood, ABM simulations should be viewed practically for what they provide as strictly generator simulations. This means that ABM simulation should currently be viewed as a research tool, which is not only capable of providing insight into the real system but also points to what needs to be understood about the real system in order for a theory to be developed that can predict some aspect of the real system (Bankes, 2002). In order to know that the knowledge gained from ABM simulations is reliable, a new sanctioning paradigm is needed that is not based on predictability, because it is impossible for the systems being studied with a generator simulation to be strictly predictable. Instead, this new sanctioning paradigm should be based on precision and understanding as it relates to the more traditional methods employed by scientists (Shinn, 2006). Furthermore, as this new sanctioning paradigm expands, new sanctioning techniques can be created which provide value to the generator simulationist, such that the real system is understood to the point that generator simulation paradigms such as ABM can become mediator or predictor simulations.

Summary

As simulation continues to grow in popularity in scientific and engineering communities, it is invaluable to reflect upon the theories and issues that serve as the foundation for simulation as it is known today. With this in mind, this section attempted to add context and reconcile the practices of simulation with the theory of simulation. In particular, we took fundamental philosophy of science issues related to simulation sanctioning and built a framework describing the crucial relationships that exist between simulation as a medium and real systems. The first relationship discussed is a simulation's inability to represent a complete abstraction of the real system. As a result, all simulations are invalid with respect to a complete real system. From this conclusion, the current practice of simulation validation was investigated to gain insights into what simulation validation really means. Despite the attempts of simulationists today, simulation validation really boils down to selling the simulation's correctness to the evaluators of the simulation, since it is impossible to prove that a simulation is valid. With this in mind, it has been suggested that simulations are not really validated in practice but are instead sanctioned. In turn, this inability of a simulation to be validated brings into question the usefulness of simulations in general.

However, a simulation does not need to be a complete representation of some aspect of a real system to be useful. Therefore, a general framework was developed that relates the role of a simulation to the level of understanding of the real system of interest. In this continuous framework, a simulation can take on the role of generator, mediator, or predictor as the level of understanding increases with regard to the real system. From this framework, a clear set of expectations for a simulation can be distinguished based upon the level of understanding about the real system. Furthermore, it is hoped that this framework can provide a base onto which new techniques and perceptions of a simulation's usefulness can be developed. Ultimately, this relationship shows partly why simulation is today becoming the research and knowledge-generating tool of choice. The epitome of this new use of simulation as a generator and research tool has emerged in the form of ABM, because ABM aids in the understanding of complex, nonlinear systems. However, because ABM is relatively new
to simulation, the current practices developed to sanction simulations, which are focused on predictive performance, are not applicable. Therefore, simulationists using ABM should develop a new sanctioning paradigm for generator simulations that is focused on understanding and accuracy. With a new sanctioning paradigm, evaluators and simulationists can have better expectations of ABM and other generator simulations as research tools that point to what is needed and what is possible. In the context of this framework, and as ABM improves the understanding of complex systems, perhaps eventually the level of understanding concerning the real system will increase and in turn allow ABM to take on the role of mediator or predictor.

CHAPTER SUMMARY, CONCLUSION AND FUTURE CHALLENGES

The growth of ABM requires collective critical thought pertaining to crucial aspects of the modeling paradigm. This chapter provided background information pertaining to the historical bases of ABM and to the philosophical positions regarding ABM and model validation. In the historical background section, the key points made are that the current ABM paradigm grew out of multiple disciplines and that the proper use of an ABM is a function of the knowledge level associated with the current system or process. From the validation philosophy section, the bleak view is that true validation is impossible. Thus, we point out where models continue to provide benefits and lay out the case for model sanctioning as a function of model use and purpose.

The purpose of this chapter was to improve any reader's knowledge base with respect to the historical bases of ABM and the philosophical positions regarding model validation. Another purpose was to spark some debate regarding the meaning of the process of, and the goals of, model validation. Along this latter purpose, a plethora of research questions arise. These include:

• Can a general purpose model be validated?
• If analyses are based on "un-validated" models, do the analyses provide any benefit?
• Are simulation developmental methods sufficiently precise to support model sanctioning?
• Are ABMs amenable to each simulation role we define via our Figure 3?
• What is a reasonable ABM sanctioning process?
• What kind of use cases are appropriate to test proposed ABM developmental methods as they purport to support model sanctioning?

REFERENCES

Ashby, W. R. (1970). Analysis of the system to be modeled. In The process of model-building in the behavioral sciences (pp. 94-114). Columbus, OH: Ohio State University Press.

Axelrod, R., & Cohen, M. D. (2000). Harnessing complexity: Organizational implications of a scientific frontier. New York: Basic Books.

Balci, O. (1995). Principles and techniques of simulation validation, verification, and testing. In Proceedings of the 1995 Winter Simulation Conference, eds. C. Alexopoulos and K. Kang, 147-154. Piscataway, NJ: Institute of Electrical and Electronics Engineers, Inc.
Balci, O. (1998). Verification, validation, and accreditation. In Proceedings of the 1998 Winter Simulation Conference, eds. D. J. Medeiros, E. F. Watson, J. S. Carson, and M. S. Manivannan, 41-48. Piscataway, NJ: Institute of Electrical and Electronics Engineers, Inc.

Bankes, S. C. (2002). Agent-based modeling: A revolution? Proceedings of the National Academy of Sciences of the United States of America, 99(10), 7199–7200. doi:10.1073/pnas.072081299

Banks, J., Carson, J. S., Nelson, B. L., & Nicol, D. M. (2001). Discrete-event system simulation (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.

Barlas, Y., & Carpenter, S. (1990). Philosophical roots of model validation: Two paradigms. System Dynamics Review, 6(2), 148–166. doi:10.1002/sdr.4260060203

Boumans, M. (2006). The difference between answering a 'why' question and answering a 'how much' question. In J. Lenhard, G. Küppers, & T. Shinn (Eds.), Simulation: Pragmatic construction of reality; sociology of the sciences yearbook (pp. 107-124). Dordrecht, The Netherlands: Springer.

Bremermann, H. J. (1962). Optimization through evolution and recombination. In M. C. Yovits, G. T. Jacobi, & G. D. Goldstein (Eds.), Self-organizing systems (pp. 93-106). Washington, DC: Spartan Books.

Burks, A. W., & Neumann, J. v. (1966). Theory of self-reproducing automata. Urbana and London: University of Illinois Press.

Casti, J. L. (1995). Complexification: Explaining a paradoxical world through the science of surprise (1st ed.). New York: HarperPerennial.

Czerwinski, T. (1998). Coping with the bounds: Speculations on nonlinearity in military affairs. Washington, DC: National Defense University Press.

Davis, P. K. (1992). Generalizing concepts and methods of verification, validation, and accreditation (VV&A) for military simulations. Santa Monica, CA: RAND.

Davis, P. K., & Blumenthal, D. (1991). The base of sand problem: A white paper on the state of military combat modeling. Santa Monica, CA: RAND.

Epstein, J. M., & Axtell, R. (1996). Growing artificial societies: Social science from the bottom up. Washington, DC: Brookings Institution Press.

Feinstein, A. H., & Cannon, H. M. (2002). Constructs of simulation evaluation. Simulation & Gaming, 33(4), 425–440. doi:10.1177/1046878102238606

Feinstein, A. H., & Cannon, H. M. (2003). A hermeneutical approach to external validation of simulation models. Simulation & Gaming, 34(2), 186–197. doi:10.1177/1046878103034002002

Ferber, J. (1999). Multi-agent systems: An introduction to distributed artificial intelligence. Harlow, UK: Addison-Wesley.

Fishman, G. S., & Kiviat, P. J. (1968). The statistics of discrete-event simulation. Simulation, 10, 185–195. doi:10.1177/003754976801000406

Frigg, R., & Reiss, J. (2008). The philosophy of simulation: Hot new issues or same old stew?

Gershenson, C. (2002). Complex philosophy. The First Biennial Seminar on the Philosophical, Methodological and Epistemological Implications of Complexity Theory.

Gleick, J. (1987). Chaos: Making a new science. New York: Viking.

Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38, 173-198.

Hill, R. R., McIntyre, G. A., Tighe, T. R., & Bullock, R. K. (2003). Some experiments with agent-based combat models. Military Operations Research, 8(3), 17–28.
Hodges, J. S. (1991). Six or so things you can do with a bad model. Operations Research, 39(3), 355–365. doi:10.1287/opre.39.3.355

Holland, J. H. (1995). Hidden order: How adaptation builds complexity. Cambridge, MA: Helix Books.

Ihde, D. (2006). Models, models everywhere. In J. Lenhard, G. Küppers, & T. Shinn (Eds.), Simulation: Pragmatic construction of reality; sociology of the sciences yearbook (pp. 79-86). Dordrecht, The Netherlands: Springer.

Ilachinski, A. (2000). Irreducible semi-autonomous adaptive combat (ISAAC): An artificial-life approach to land warfare. Military Operations Research, 5(3), 29–47.

Jensen, P. A., & Bard, J. F. (2003). Operations research models and methods. Hoboken, NJ: John Wiley & Sons.

Johnson, A. (2006). The shape of molecules to come. In J. Lenhard, G. Küppers, & T. Shinn (Eds.), Simulation: Pragmatic construction of reality; sociology of the sciences yearbook (pp. 25-39). Dordrecht, The Netherlands: Springer.

Kincaid, H. (1998). In N. S. Arnold, T. M. Benditt, & G. Graham (Eds.), Philosophy then and now (pp. 321-338). Malden, MA: Blackwell Publishers Ltd.

Klein, E. E., & Herskovitz, P. J. (2005). Philosophical foundations of computer simulation validation. Simulation & Gaming, 36(3), 303-329.

Kleindorfer, G. B., & Ganeshan, R. (1993). The philosophy of science and validation in simulation. In Proceedings of the 1993 Winter Simulation Conference, eds. G. W. Evans, M. Mollaghasemi, E. C. Russell, and W. E. Biles, 50-57. Piscataway, NJ: Institute of Electrical and Electronics Engineers, Inc.

Kleindorfer, G. B., O'Neill, L., & Ganeshan, R. (1998). Validation in simulation: Various positions in the philosophy of science. Management Science, 44(8), 1087–1099. doi:10.1287/mnsc.44.8.1087

Knuuttila, T. (2006). From representation to production: Parsers and parsing in language technology. In J. Lenhard, G. Küppers, & T. Shinn (Eds.), Simulation: Pragmatic construction of reality; sociology of the sciences yearbook (pp. 41-55). Dordrecht, The Netherlands: Springer.

Küppers, G., & Lenhard, J. (2005). Validation of simulation: Patterns in the social and natural sciences. Journal of Artificial Societies and Social Simulation, 8(4), 3.

Küppers, G., & Lenhard, J. (2006). From hierarchical to network-like integration: A revolution of modeling. In J. Lenhard, G. Küppers, & T. Shinn (Eds.), Simulation: Pragmatic construction of reality; sociology of the sciences yearbook (pp. 89-106). Dordrecht, The Netherlands: Springer.

Küppers, G., Lenhard, J., & Shinn, T. (2006). Computer simulation: Practice, epistemology, and social dynamics. In J. Lenhard, G. Küppers, & T. Shinn (Eds.), Simulation: Pragmatic construction of reality; sociology of the sciences yearbook (pp. 3-22). Dordrecht, The Netherlands: Springer.

Langton, C. G. (1989). Artificial life. Artificial Life, 1–48.

Latane, B. (1996). Dynamic social impact: Robust predictions from simple theory. In R. Hegselmann, U. Mueller, & K. G. Troitzsch (Eds.), Modelling and simulation in the social sciences from the philosophy of science point of view. New York: Springer-Verlag.

Law, A. M. (2007). Simulation modeling and analysis (4th ed.). New York: McGraw-Hill.

Levy, S. (1992). Artificial life: A report from the frontier where computers meet biology. New York: Vintage Books.
Macal, C. M., & North, M. J. (2006). Tutorial on agent-based modeling and simulation part 2: How to model with agents. In Proceedings of the 2006 Winter Simulation Conference, eds. L. R. Perrone, F. P. Wieland, J. Liu, B. G. Lawson, D. M. Nicol, and R. M. Fujimoto, 73-83. Piscataway, NJ: Institute of Electrical and Electronics Engineers, Inc.

Mandelbrot, B. B. (1982). The fractal geometry of nature. New York: W. H. Freeman.

Miller, J. H., & Page, S. E. (2007). Complex adaptive systems: An introduction to computational models of social life. Princeton, NJ: Princeton University Press.

Mitchell, M., Crutchfield, J. P., & Hraber, P. T. (1994). Dynamics, computation, and the "edge of chaos": A re-examination. In G. Cowan, D. Pines, & D. Melzner (Eds.), Complexity: Metaphors, models and reality. Reading, MA: Addison-Wesley.

Morrison, M., & Morgan, M. S. (1999). Models as mediating instruments. In M. S. Morgan & M. Morrison (Eds.), Models as mediators (pp. 10-37). Cambridge, UK: Cambridge University Press.

Naylor, T. H., & Finger, J. M. (1967). Verification of computer simulation models. Management Science, 14(2), 92–106. doi:10.1287/mnsc.14.2.B92

North, M. J., & Macal, C. M. (2007). Managing business complexity: Discovering strategic solutions with agent-based modeling and simulation. New York: Oxford University Press.

Pidd, M. (2003). Tools for thinking: Modelling in management science (2nd ed., pp. 289-312). New York: John Wiley and Sons.

Resnyansky, L. (2007). Integration of social sciences in terrorism modelling: Issues, problems. Edinburgh, Australia: Australian Government Department of Defence, DSTO Command and Control.

Sargent, R. G. (2005). Verification and validation of simulation models. In Proceedings of the 2005 Winter Simulation Conference (pp. 130-143).

Schelling, T. C. (2006). Micromotives and macrobehavior (2nd ed.). New York: W. W. Norton and Company.

Schmid, A. (2005). What is the truth of simulation? Journal of Artificial Societies and Social Simulation, 8(4), 5.

Shinn, T. (2006). When is simulation a research technology? Practices, markets, and lingua franca. In J. Lenhard, G. Küppers, & T. Shinn (Eds.), Simulation: Pragmatic construction of reality; sociology of the sciences yearbook (pp. 187-203). Dordrecht, The Netherlands: Springer.

Simon, H. A. (1962). The architecture of complexity. Proceedings of the American Philosophical Society, 106(6), 467–482.

Simon, H. A. (1996). The sciences of the artificial (3rd ed.). Cambridge, MA: The MIT Press.

Stanislaw, H. (1986). Tests of computer simulation validation: What do they measure? Simulation & Games, 17(2), 173–191. doi:10.1177/0037550086172003

Weaver, W. (1948). Science and complexity. American Scientist, 36, 536-544.

Winsberg, E. (1999). Sanctioning models: The epistemology of simulation. Science in Context, 12(2), 275–292. doi:10.1017/S0269889700003422

Winsberg, E. (2001). Simulations, models, and theories: Complex physical systems and their representations. Philosophy of Science, 68(3), 442–454. doi:10.1086/392927

Winsberg, E. (2003). Simulated experiments: Methodology for a virtual world. Philosophy of Science, 70, 105–125. doi:10.1086/367872
Winsberg, E. (2006a). Handshaking your way to the top: Simulation at the nanoscale. In J. Lenhard, G. Küppers, & T. Shinn (Eds.), Simulation: Pragmatic construction of reality; sociology of the sciences yearbook (pp. 139-151). Dordrecht, The Netherlands: Springer.

Winsberg, E. (2006b). Models of success versus the success of models: Reliability without truth. Synthese, 152, 1–19. doi:10.1007/s11229-004-5404-6

Wolfram, S. (1994). Cellular automata and complexity: Collected papers. Reading, MA: Addison-Wesley Publishing Company.

Zeigler, B. P., Praehofer, H., & Kim, T. G. (2000). Theory of modeling and simulation (2nd ed.). New York: Academic Press.

KEY TERMS AND DEFINITIONS

Agent-Based Modeling: A computational model for simulating the actions and interactions of autonomous individuals in a network, with a view to assessing their effects on the system as a whole.

Verification: The act of reviewing, inspecting, testing, etc., to establish and document that a product, service, or system meets regulatory, standard, or specification requirements.

Validation: The process of determining whether a simulation model is an accurate representation of a system.

Emergence: The process of coherent patterns of behavior arising from the self-organizing aspects of complex systems.

Cellular Automata: A discrete model consisting of a grid of cells, each of which has a finite number of defined states, where the state of a cell is a function of the states of neighboring cells and the transition among states follows some predefined updating rule.

Cybernetics: The science of control and communication in the animal and the machine.

Complex Adaptive Systems: A complex, self-similar collection of interacting agents, acting in parallel and reacting to other agent behaviors within the system.

Nonlinearity: In terms of system output, refers to the state wherein the system output as a collective is greater than the sum of the individual system component outputs.

Simulation: The imitation of some real thing, state of affairs, or process. The act of simulating something generally entails representing certain key characteristics or behaviors of a selected physical or abstract system.
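The Cellular Automata entry can be made concrete with a short, illustrative sketch (the code is not from the chapter): an elementary one-dimensional automaton on a ring, using Wolfram's Rule 30, in which each cell's next state depends only on itself and its two neighbors. A trivial local rule produces the kind of complex emergent structure discussed throughout the chapter.

```python
def eca_step(cells, rule=30):
    """One synchronous update of an elementary cellular automaton on a ring.
    Each cell's next state depends only on itself and its two neighbours,
    looked up via the bits of the given Wolfram rule number."""
    n = len(cells)
    table = [(rule >> v) & 1 for v in range(8)]  # rule bits as lookup table
    return [table[(cells[(i - 1) % n] << 2) |
                  (cells[i] << 1) |
                  cells[(i + 1) % n]]
            for i in range(n)]

row = [0] * 31
row[15] = 1  # a single live cell in the middle
history = [row]
for _ in range(5):
    row = eca_step(row, rule=30)
    history.append(row)

for r in history:
    print("".join(".#"[c] for c in r))  # the familiar chaotic Rule 30 triangle
```

Swapping the rule number changes the global behaviour completely, which is exactly why cellular automata became a standard laboratory for studying emergence.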


Chapter 4
Verification and Validation of Simulation Models
Sattar J. Aboud
Middle East University for Graduate Studies, Jordan

Mohammad Al Fayoumi
Middle East University for Graduate Studies, Jordan

Mohamed Alnuaimi
Middle East University for Graduate Studies, Jordan

ABSTRACT
Although cost and time are always constraints, the use of simulation models to study dynamic system performance is always rising. Likewise, as attention to network security models rises, the complexity of real model applications is rising too. As a result, the complexity of simulation model applications is also rising, and the demand for designing suitable verification and validation systems to ensure system reliability and integrity is very important. The key requirement in studying system integrity is to verify the system's accuracy and to validate its legitimacy with regard to pre-specified application purposes and validity principles. This requires the different plans and application phases of simulation models to be properly identified, and the output of every part to be properly documented. This chapter discusses validation and verification of simulation models. The different approaches to deciding model validity are presented; how model validation and verification relate to the model development process is discussed; various validation techniques are defined; conceptual model validity, model verification, operational validity, and data validity are described, along with a superior verification and validation technique for simulation models relying on a multistage approach; ways to document results are given; and a recommended procedure is presented.

DOI: 10.4018/978-1-60566-774-4.ch004

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

INTRODUCTION

Building on a rigorous analysis of diverse current ideas for verification and validation of simulation models, on specialists' impressions of model development, and on their knowledge of verification and validation of simulation models in organizations (Brade and Lehmann, 2002), a structured verification and validation method was developed within the topics of the 2002 Symposium, also denoted as the verification
Verification and Validation of Simulation Models

and validation triangle (Brade, 2003). This method models, the decision makers with data resulting
deal with the following key points of successful from a findings of the systems, and individuals
verification and validation: affected via decisions relied on such systems are
accurately concerned if the system and its find-
• Using the structured and stepwise ing are accurate. This matter is addressed during
technique model validation and verification. Model vali-
• Reengineering verification and validation dation is generally defined a corroboration that a
throughout model development computerized model with its area of applicability
• Testing intermediate results possesses an acceptable range of accuracy reliable
• Constructing a chain of facts relied on veri- with the planned use of a model and is a definition
fication and validation results employed. Model verification is often defined as
• supplying templates for documentation ensuring that a program of a computerized system
of verification and validation activities and findings
•	Accomplish verification and validation activities separately.

To support this aim, the standard verification and validation triangle provides:

•	An outline of the fault types that are probably concealed in the intermediate results on which the verification and validation activities concentrate.
•	Verification and validation stages that measure the accuracy and validity of every intermediate result separately.
•	Verification and validation sub-stages, with generic verification and validation goals, that investigate the possible tests for particular types of intermediate results and outside information.
•	Guide points for applying verification and validation methods and for reusing formerly generated verification and validation results.
•	An outline of the dependences between the various verification and validation goals, and of the effect of cyclic testing on the integrity of the simulation model.

Simulation models are increasingly employed in problem solving and in decision making, so the developers and consumers of these models are rightly concerned with whether a model and its results are correct. Model verification is commonly defined as ensuring that the computer program of the model and its implementation are correct, and that is the definition accepted here. A model becomes accredited through model accreditation, which determines whether the model satisfies specified accreditation criteria according to a particular process. A related issue is model credibility, which is concerned with developing in potential users the confidence they require before they are willing to employ the model and the data derived from it.

A model should be developed for a specific aim, and its validity determined with respect to that aim. If the aim of the model is to answer a selection of questions, the validity of the model needs to be determined with respect to every question. Numerous collections of experimental conditions are usually required to define the domain of a model's intended applicability, and a model may be valid for one collection of experimental conditions and invalid for another. A model is considered valid for a collection of experimental conditions when its accuracy lies within the satisfactory range, that is, the amount of accuracy required for the model's intended aim. This usually requires identifying the model's output variables of interest (the variables used in answering the questions that the model is being developed to answer) and specifying their required amount of accuracy. The required amount of accuracy should be identified early in the model development process, before development begins. If the variables
Verification and Validation of Simulation Models
of interest are random variables, then characteristics and functions of those random variables, such as means and variances, are usually of main interest and are employed in determining model validity. Several versions of a model are normally developed before an acceptably valid model is obtained. Substantiating that the model is valid, that is, performing model verification and validation, is usually considered to be a process and is generally part of the overall model development process.

Complete documentation requires a clear definition of the intermediate results created during model development. Data modeling is a lengthy and difficult task, and some studies state that up to 33% of the complete time spent on a simulation analysis can be spent on data modeling (Wang and Lehmann, 2008). Moreover, the quality of the data is a vital factor for the credibility assessment of simulation models. Accordingly, for the verification and validation process, the correctness of data gathering, analysis, and transformation is essential. The data used must be assessed in each model development stage and within the scope of the standard verification and validation method; data quality is not treated as an independent matter here and is not considered in depth. In this chapter, however, we attempt to enhance the standard verification and validation method by attaching an extra consideration, derived from the feedback reported by (Spieckermann, Lehmann and Rabe, 2004), in which the verification and validation of data modeling is explicitly managed. Furthermore, we will show how this scheme can be adapted to the specific requirements of some known simulation systems.

It is usually very costly and time consuming to determine that a model is completely valid over the entire domain of its intended applicability. Instead, tests and assessments are conducted until sufficient confidence is obtained that the model can be considered valid for its intended application (Sargent, 1999). The cost of model validation is usually quite significant, particularly when very high model confidence is required.

VALIDATION PROCESS

Three fundamental approaches are employed in deciding whether a simulation model is valid or invalid. Each of them requires the model development team to conduct validation and verification as part of the model development process, as discussed in this section. The most common approach is for the development team itself to make the decision as to whether the model is valid; this is a subjective decision based on the outcomes of the various tests and assessments conducted as part of the model development process. Another approach, usually called independent verification and validation, employs a trusted third party to decide whether the model is valid. The third party is independent of both the model development team and the model user. After the model is developed, the third party conducts an assessment to decide its validity and, based on this validation, makes its own decision on the validity of the model. This approach is often employed when a large cost is associated with the problem the simulation model addresses, and it also helps to establish model credibility. A trusted third party is also generally employed for model accreditation.

The assessment made in the independent verification and validation approach ranges from simply reviewing the verification and validation conducted by the model development team to a complete verification and validation effort. (Sargent, 2000) describes experiences over this range of assessment by third parties on energy models; the conclusion reached there is that a complete independent verification and validation assessment is very costly and time consuming for what is obtained. Our view is that if a trusted third party is employed, it must be
involved throughout the model development process. If the model has already been developed, these authors consider that the third party should generally assess only the verification and validation that has already been performed. The last approach for deciding whether a model is valid is to employ a scoring model (Balci, 1998). Scores are determined subjectively when performing the various parts of the validation process and are then combined to determine category scores and a total score for the simulation model. The simulation model is considered valid if its total and category scores are greater than certain passing scores. This approach is rarely employed in practice, and these researchers do not believe in the use of a scoring model for determining validity because:

•	The subjectivity of this approach tends to be hidden, so the approach appears to be objective.
•	The passing scores must be determined in some, generally subjective, way.
•	A model could receive a passing score and yet still have a fault that requires correction.
•	The scores may produce overconfidence in a model, or be employed to argue that one model is better than another.

We now describe the relationship of validation and verification to the model development process. There are two general ways to view this relationship: the first employs some kind of detailed model development process, and the second employs some kind of simple model development process. (Banks, Gerstein, and Searles, 1998) evaluated work employing both of these ways and concluded that the simple way more clearly illuminates model validation and verification; these researchers therefore recommend the use of a simple way (Sargent, 1999). Consider the typical version of the modeling process in Figure 1.

The problem entity: the system, proposed idea, situation, policy, or phenomenon to be modeled.

The conceptual model: the mathematical or logical representation of the problem entity developed for the particular study. The computerized model is the conceptual model implemented on a computer. The conceptual model is developed during the analysis and modeling phase, the computerized model is developed during the computer programming and implementation phase, and inferences about the problem entity are obtained by conducting computer experiments on the computerized model in the experimentation phase.

Conceptual model validity: determining that the theories and assumptions underlying the conceptual model are correct, and that the model's representation of the problem entity is reasonable for the intended aim of the model.

Computerized model verification: ensuring that the computer programming and implementation of the conceptual model are correct.

Operational validity: determining that the model's output behavior has sufficient accuracy for the model's intended aim over the domain of the model's intended applicability.

Information validity: ensuring that the information necessary for model construction, model assessment and testing, and conducting the model experiments to solve the problem is adequate and correct.

Several versions of a model are usually developed in the modeling process before a satisfactorily valid model is obtained. During every model iteration, model validation and verification are performed (Sargent, 1996). A diversity of validation methods is employed; these are explained in the following section. No algorithm or procedure exists for choosing which methods to employ.

VALIDATION METHODS

In this section we explain the various validation methods and tests employed in model validation and verification. Most of the methods
Figure 1. Typical version of the modeling process

[Figure: the Problem Entity, the Conceptual Model, and the Computerized Model form a cycle connected by three phases: Analysis & Modeling (from problem entity to conceptual model), Computer Programming and Implementation (from conceptual model to computerized model), and Experimentation (from computerized model back to problem entity). Conceptual Model Validity, Computerized Model Verification, and Operational Validity are attached to these links, with Data Validity at the center of the cycle.]

explained in this section are derived from the literature, although some may be described slightly differently here. They may be employed either subjectively or objectively. "Objectively" means employing some kind of statistical test or mathematical procedure, for instance hypothesis tests or confidence intervals. A combination of methods is usually employed. These methods are employed for validating and verifying both the sub-models and the overall model.

Animation: The model's operational behavior is displayed graphically as the model moves through time. For instance, the movements of parts through a factory during a simulation are shown graphically.

Comparison to Other Models: Various outputs of the simulation model being validated are compared to outputs of other valid models. For instance:

•	Simple cases of a simulation model may be compared to known results of analytic models.
•	The simulation model may be compared to other simulation models that have already been validated.

Degenerate Tests: The degeneracy of the model's behavior is tested by an appropriate selection of values of the input and internal parameters. For instance, does the average number in the queue of a single server continue to increase over time when the arrival rate is greater than the service rate?
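The degenerate test just described is easy to automate. The following Python sketch (an illustration, not part of the chapter: the event-driven queue, the function name, and all parameter values are the present author's assumptions) simulates a single-server queue with exponential interarrival and service times and checks that the number in the system keeps growing when the arrival rate exceeds the service rate, while staying bounded in the stable case:

```python
import random

def mm1_population(arrival_rate, service_rate, horizon, seed=42):
    """Simulate an M/M/1 queue and return the number in the system at `horizon`."""
    rng = random.Random(seed)
    t, in_system = 0.0, 0
    next_arrival = rng.expovariate(arrival_rate)
    next_departure = float("inf")
    while t < horizon:
        if next_arrival <= next_departure:   # next event is an arrival
            t = next_arrival
            in_system += 1
            if in_system == 1:               # server was idle: start service now
                next_departure = t + rng.expovariate(service_rate)
            next_arrival = t + rng.expovariate(arrival_rate)
        else:                                # next event is a departure
            t = next_departure
            in_system -= 1
            next_departure = (t + rng.expovariate(service_rate)
                              if in_system else float("inf"))
    return in_system

# Degenerate case: arrivals faster than service, so the population should keep growing.
unstable = [mm1_population(2.0, 1.0, h) for h in (100, 1_000, 10_000)]
# Sanity case: service faster than arrivals, so the population should stay bounded.
stable = [mm1_population(1.0, 2.0, h) for h in (100, 1_000, 10_000)]
```

With the first pair of rates the population grows roughly linearly with the horizon, whereas the reversed rates keep it small; a simulation model that failed to reproduce this unbounded growth would fail the degenerate test.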
Event Validity: The events of occurrences of the simulation model are compared with those of the real system to decide whether they are equal. An example is comparing the number of deaths in a fire department simulation.

Extreme Condition Tests: The model's structure and output should be plausible for any extreme and unlikely combination of levels of factors in the model; for instance, if in-process inventories are zero, production output should be zero.

Face Validity: Face validity means asking persons knowledgeable about the system whether the model and its behavior are reasonable. This method can be employed in deciding whether the logic in the conceptual model is correct and whether the model's input-output relationships are plausible.

Fixed Values: Fixed values, for instance constants, are employed for various model input and internal variables. This should permit checking the model's output against easily calculated results.

Historical Information Validation: If historical information exists, or if information is collected on a system for building or testing the model, part of the information is employed to build the model and the remaining information is employed to test whether the model behaves as the system does. Such testing is conducted by driving the simulation model with either samples from distributions or traces (Brade, 2003).

Historical Methods: The three historical methods of validation are rationalism, empiricism, and positive economics. Rationalism assumes that everyone knows whether the underlying assumptions of a model are true; logical deductions from these assumptions are employed to develop a correct, valid model. Empiricism requires every assumption and outcome to be empirically validated. Positive economics requires only that the model be able to predict the future, and is not concerned with the model's assumptions or structure (its causal relationships or mechanisms).

Internal Validity: Several replication runs of a stochastic model are made to determine the amount of internal stochastic variability in the model. A high amount of variability (lack of consistency) may make the model's outcomes questionable and, if typical of the problem entity, may call into question the appropriateness of the policy or system being examined.

Multistage Validation: (Wang and Lehmann, 2008) suggested combining the three historical techniques of rationalism, empiricism, and positive economics into a multistage procedure of validation. This validation technique consists of:

•	Developing the model's assumptions on theory, observations, general knowledge, and function.
•	Validating the model's assumptions where possible by empirically testing them.
•	Comparing (testing) the input-output relationships of the model to those of the real system.

Operational Graphics: Values of various performance measures, for instance the number in queue and the percentage of servers busy, are shown graphically as the model moves through time; that is, the dynamic behaviors of performance indicators are visually displayed as the simulation model runs.

Parameter Variability (Sensitivity Analysis): This method consists of changing the values of the input and internal parameters of a model to determine the effect upon the model's behavior and its output. The same relationships should hold in the model as in the real system. Parameters that are sensitive, that is, cause significant changes in the model's behavior or output, should be made sufficiently accurate before the model is used; this may require iterations in model development.

Predictive Validation: The model is employed to predict (forecast) the system's behavior, and comparisons are then made between the system's behavior and the model's forecast to decide whether they are equal. The system data may come from an operational system or from experiments performed on the system, for instance field tests.
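The internal validity check described above can be sketched in a few lines of Python. The toy model below and all of its names and parameter values are hypothetical stand-ins, not taken from the chapter; the point is only the replication pattern, namely rerunning the model with different random number streams and examining the spread of its output:

```python
import random
import statistics

def replicate_once(seed, n_jobs=500, mean_service=1.0):
    """One replication of a toy stochastic model: returns the observed
    mean service time over n_jobs exponentially distributed jobs."""
    rng = random.Random(seed)
    return statistics.fmean(rng.expovariate(1.0 / mean_service)
                            for _ in range(n_jobs))

# Internal validity: rerun the model under different seeds (random number
# streams) and measure the stochastic variability of the output.
outputs = [replicate_once(seed) for seed in range(20)]
grand_mean = statistics.fmean(outputs)
variability = statistics.stdev(outputs)
# A large spread relative to the mean would make conclusions drawn from
# any single run questionable.
```

Comparing `variability` against the amount of accuracy required for the model's intended aim links this replication check directly to the definition of operational validity given earlier.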
Traces: The behavior of various specific entities in the model is traced (followed) through the model to decide whether the model's logic is correct and whether the necessary accuracy is obtained.

Turing Tests: Individuals who are knowledgeable about the operations of the system are asked whether they can differentiate between system outputs and model outputs. (Wang and Lehmann, 2008) include statistical tests for use with Turing tests.

INFORMATION VALIDITY

Although information validity is often not considered a component of model validation, we discuss it here because it is generally difficult, time consuming, and costly to obtain adequate, correct, and appropriate information, and this is frequently the reason that attempts to validate a model fail. Information is required for three purposes: for building the conceptual model, for validating the model, and for performing experiments with the validated model. In model validation we are concerned only with the first two purposes.

To build the conceptual model we must have sufficient information on the problem entity to develop theories that can be employed to build the model, to develop the logical and mathematical relationships in the model that will permit it to represent the problem entity adequately for its intended aim, and to test the model's underlying assumptions. In addition, behavioral information on the problem entity is required for the operational validity step of comparing the problem entity's behavior with the model's behavior; generally, this information consists of the system's input/output data. If this information is not available, high model confidence usually cannot be obtained, because sufficient operational validity cannot be achieved.

The concern with information is that adequate, correct, and appropriate information is available, and that any information transformations made, such as disaggregation, are performed properly. Unfortunately, there is not much that can be done to ensure that information is correct. The best that can be done is to develop good procedures for gathering and maintaining it, to test the gathered information employing methods such as internal consistency checks, and to screen for outliers and decide whether they are correct. If the amount of information is large, a database should be developed and maintained.

CONCEPTUAL MODEL VALIDATION

Conceptual model validity is determined by establishing that:

•	The theories and assumptions essential to the conceptual model are correct.
•	The model's representation of the problem entity, and the model's structure, logic, and mathematical and causal relationships, are reasonable for the intended aim of the model.

The theories and assumptions underlying the model should be tested employing mathematical analysis and statistical techniques on problem entity information. Examples of such theories and assumptions are stationarity, linearity, independence, and Poisson arrivals. Examples of applicable statistical techniques are fitting distributions to the information, estimating parameter values from the information, and plotting the information to decide whether it is stationary. In addition, every theory employed should be reviewed to ensure it was applied correctly; for instance, if a Markov chain is employed, does the model have the Markov property?

Furthermore, every sub-model and the overall model should be assessed to decide whether they are reasonable and correct for the intended aim of the model. This should include deciding whether the appropriate detail and aggregate relationships have been employed for the model's intended aim, and whether the appropriate structure, logic, and mathematical and causal relationships have been employed. The main validation methods employed for these assessments are face validation and traces. In face validation, experts on the problem entity evaluate the conceptual model to decide whether it is correct and reasonable for its aim; this generally requires examining the flowchart or graphical model, or the collection of model equations. The use of traces is the tracking of entities through every sub-model and the overall model to decide whether the logic is correct and whether the necessary accuracy is preserved. If faults are found in the conceptual model, it must be revised and conceptual model validation performed again.

VERIFICATION OF THE MODEL

Model verification ensures that the computer implementation of the conceptual model is correct. A major factor affecting verification is whether a simulation language or a general-purpose higher-level language such as Java is employed. The use of a special-purpose simulation language will generally result in fewer faults than the use of a general-purpose language; a simulation language also generally decreases the programming time required, but at the cost of flexibility. If a simulation language is used, verification is mainly concerned with ensuring that a fault-free simulation language has been employed, that the simulation language has been correctly installed on the machine, that a pseudo random number generator tested for accuracy has been properly installed, and that the model has been programmed correctly in the simulation language. The main methods employed to decide that the model has been programmed correctly are structured walk-throughs and traces.

If a higher-level language has been employed, then the computer program should have been designed, developed, and implemented employing methods established in software engineering, including such methods as object-oriented design and program modularity. In this case verification is mainly concerned with deciding that the simulation functions (for example the pseudo random number generator and the time-flow mechanism) and the computer model have been programmed and implemented properly.

There are two essential approaches for testing simulation software: static testing and dynamic testing. In static testing the computer program is analyzed to decide whether it is correct, employing such methods as structured walk-throughs, correctness proofs, and examining the structural properties of the program. In dynamic testing the computer program is executed under various conditions, and the results obtained, including those created during execution, are employed to decide whether the computer software and its implementation are correct. The methods usually employed in dynamic testing are traces, investigations of input-output relations employing various validation methods, internal consistency checks, and reprogramming critical components to decide whether the same values are obtained. If there is a large number of parameters, one might combine some of the parameters to decrease the number of tests required, or employ some type of design of experiments. While testing the correctness of the computer program and its implementation, it is essential to be aware that faults may be caused by the information, the conceptual model, the computer software, or the computer implementation.

INFORMATION MODELING PROCESS

The information modeling process in simulation modeling is of major significance for the conduct of a simulation study (Banks, Carson, Nelson and Nicol, 2005) and usually consists of the following steps.

1.	Information analysis
2.	Information gathering
3.	Information transformation
4.	Results analysis and interpretation

Information Analysis

This step manages the identification of the information needed for model development and for the simulation experiments on the system under consideration. Both the qualitative and the quantitative information that may be employed as dimensions to describe the system of interest must be studied. Along with the given constraints of the original simulation study, for instance cost and time restrictions, an examination is made to determine from which sources the preferred information can be obtained, which techniques should be employed, and what timetable should be followed, so that the information is gathered accurately and efficiently (Wang and Lehmann, 2008).

Information comes from a diversity of sources and is employed for various purposes within the simulation study. Mainly, there are three types of information (Sargent, 2000) for model input:

•	Information for specifying the model
•	Information for testing
•	Information for performing simulation experiments

Based on the provided information, the probability distributions must be fixed that are employed to describe the behavior of the random processes driving the model under consideration. Basically, there are two alternatives in determining probability distributions:

•	Employ the real sample information itself to represent the probability distributions.
•	Fit a theoretical probability distribution to the information gathered.

There are many statistical methods available (Law and Kelton, 1991) that may be applied for distribution selection in different circumstances. As soon as this task has been completed, random samples drawn from the selected distribution functions are employed as input to the simulation model.

Information Gathering

According to the results of the analysis, information gathering must proceed in a systematic manner. In certain circumstances a simple observation and measurement of the quantity of interest is sufficient for the current information requirements, but from time to time documents must be employed. Documents employed for information modeling are typically in the form of figures, reports, tables, and so on. However, the existence of documentation does not automatically imply its usability for information modeling. One of the main difficulties concerning documents is locating the right content: documents in a simulation study may be rich in statistics and nevertheless poor in the content actually required for the examination aim. In addition, documents may be available only in part. As a result, the ability to identify and interpret the right information held in documents is of particular significance.

Information Transformation

Both qualitative and quantitative information need to be transformed into a computer-processable form during model processing. Certain information is employed to specify the model being built and, through the verification and validation of the simulation model, ultimately becomes integrated into the constructed model, while other information, employed for comparison with the simulation results, usually needs to be handled independently of the model under analysis. So, it is necessary for information transformation not
just to format the information properly, in the form and type required by the particular software applications employed, but also to set up a concept that facilitates dependable information storage and retrieval, as well as efficient information transfer between the simulation program and its cooperating information repository.

Results Analysis and Interpretation

Results are created as outputs of simulation experiments driven by the provided input. Because random samples drawn from given probability distribution functions are normally employed, the model results are consequently influenced by random factors. If the results are not analyzed adequately, they can easily be misinterpreted, so that false conclusions about the system of interest are drawn, regardless of how well the model has actually been built. For analyzing results it is significant to determine whether the system being modeled is terminating or non-terminating, and further to differentiate between a steady-state phase and a transient (non-steady) phase, because different statistical methods must be employed in each case.

SETTING UP CREDIBLE INFORMATION MODELS

Collecting High-Quality Information

Obtaining appropriate information from the system of interest is vital ground for accomplishing a valid simulation model. The process of gathering information is frequently subject to a large number of subjective and objective issues. The topics described below address tackling the difficulties encountered in information gathering.

1. Gathering Information Manually or Automatically

Normally, information is collected manually, by direct observation and measurement. However, the individual recording the information can easily be disturbed by the system being measured, and in the same way the system's behavior itself can be affected by the presence of an observer. It is impracticable for an observer to gather information exactly and as systematically as preferred and intended. To tackle the difficulties arising from this manual setting, techniques for automatic information gathering should be implemented.

Some new approaches for automatic information gathering and information integration in simulation models (Balci, 1998) have been used, in which an information interface between a simulation model and a commercial business system is developed that gathers, stores, and retrieves information, and also provides the information employed for simulation experiments fully automatically. Because of the high practical requirements, however, these methods can be implemented only in a restricted number of settings, for instance in modeling the production industry.

Unfortunately, there are certain circumstances in which it is impractical to collect information, for instance when the system under consideration does not yet exist in a real form, or when it is too complex and time consuming to collect information from the existing system. This problem can also arise when information can be collected in only certain parts of the system. In such situations, expert opinion and knowledge of the system and its behavior are crucial for making assumptions about the system for the purposes for which the simulation model is proposed. Furthermore, where appropriate, information collected from different but similar systems may also be considered.

2. Applying Historical Information

Historical information from the system being examined can be employed as a basis for model validation, whereby the results of the actual system and of the model are compared by employing the
identical historical information (Robertson and Perera, 2001). However, a comparison made in this way is practical only when the real state of the system can be considered equal to the state of the system from which the historical information originated. Identical information variables cannot ensure that the system and the model are driven in precisely the identical situation, because the historical information normally employed for model validation covers only a restricted range of the factors influencing the original system.

3. Gathering Information via Interviewing

There is no doubt that interviews are important for information collecting. With the assistance of interviews, expert knowledge not yet documented in any tangible form can be communicated to others. In the situations described above, where information is not available from the existing system, interviews may be the only way to obtain data about the system to be modeled. It must be observed that interviewing is a process of obtaining both objective and subjective data. (Wang and Lehmann, 2008) reported that in the verification and validation of simulation models it is always significant to differentiate between facts and opinions; both are essential and valuable, but they should be processed in different ways.

Integrating Information Verification and Validation

While certain parts of the information collecting and information analyzing activities may be performed concurrently with model development, information modeling in general is by no means an isolated process; it is intimately coupled with the original simulation study and so must be considered an integral part of the overall simulation study. Just like model construction, information modeling proceeds in a modular and iterative manner.

As illustrated in Figure 2, which is presented in a form following the literature (Spieckermann, Lehmann and Rabe, 2004), the results of every information modeling phase give further knowledge about the system of interest that advances the model, while the model under construction returns feedback that prompts further information modeling effort or another iteration. Finally, the simulation conclusion is reached at the final intersection of the two processes. Information verification and validation is intended to disclose any quality deficiencies included in every intermediate result of the information modeling being conducted, from information requirements analysis to results analysis and interpretation. Information verification ensures that information is transformed during the modeling process precisely in form and content, while information validation is concerned with determining whether the information model employed sufficiently satisfies the intended purpose of the simulation goals. As (Spieckermann, Lehmann and Rabe, 2004) reported, information verification and validation must be performed in accordance with model verification and validation throughout the whole development process of a simulation model.

The improved verification and validation process, extended by an explicit consideration of information modeling, consists of two closely associated strands of verification and validation activities: model verification and validation, and information verification and validation. The major stress in this section is placed on explaining information verification and validation and the relationship between the two strands; more details regarding the fundamental principle of the verification and validation triangle may be found in the literature. In model verification and validation, every well-defined intermediate result generated during model development, from the structured problem description to the model results, is input to a verification and validation phase. Every verification and validation phase is again split into further sub-phases, with a defined sub-goal to
Figure 2. Information simulation modeling


discover internal faults or alteration faults. In these sub-phases the absence of internal faults in each particular intermediate result must be confirmed. For instance, one sub-phase must make sure that the problem description is free of errors and inconsistencies, and in another sub-phase a syntax check may be applied to the formal model with respect to the selected formalism. In other sub-phases, pairwise comparisons between the present intermediate result and every preceding intermediate result may be carried out to validate the absence of alteration faults. For example, in some sub-phases the formal model may be compared with the conceptual model, the structured problem description, and the sponsor requirements one by one, repeating the comparison against each intermediate result. In this way greater rigor of the verification and validation activities is reached, and the credibility established so far may also be strengthened accordingly.

As Wang and Lehmann (2008) describe, with respect to the information flow, every verification and validation phase is then extended by additional verification and validation activities on the information derived from the corresponding information modeling process. Two kinds of information in the information flow during model development must be distinguished: unprocessed information and processed information. Unprocessed information is obtained directly from various sources and is usually unformatted. Empirical information, which is inherited from preceding simulation applications and is typically given at the start of the analysis, may by contrast be available in a well-formed and input-ready fashion.

THE EXTENDED VERIFICATION
AND VALIDATION TRIANGLE

Such empirical information still counts as unprocessed information, since it is in any case necessary, before using it, to ensure that it is in fact reasonable for the present context. Processed information is generated by editing the gathered unprocessed information during the modeling process. Information verification and validation therefore involves a credibility review of the unprocessed and the processed information employed for generating every intermediate result. It must be observed that unprocessed information is typically only relevant for obtaining the structured problem description and the conceptual model, and may not be directly appropriate for generating the formal model and other later intermediate results. Throughout the verification and validation of the unprocessed information related to every intermediate result, it must be ensured that the information sources are reasonable for the intended purpose of the model, and particular care is needed when the information is derived from a different model.

Verification and validation of processed information concentrates on ensuring that all information employed for every intermediate result is correct and accurate in its transformed form throughout the model development, with the following standard checks:

• To assess the assumptions of independence and homogeneity made on the collected information sets.
• To check that the information has been transformed into the necessary structure and form.
• To make certain that the probability distributions employed and the associated parameters are plausible for the gathered information, for instance by employing a goodness-of-fit test.
• To make certain that sufficient independent simulation runs have been performed for a stochastic model.

In addition, both quantitative and qualitative information must be measured adequately, and the amount of information collected must be sufficient for the further study.
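The goodness-of-fit check named in the list above can be illustrated with a small, self-contained sketch in plain Java. It is not taken from the chapter; the sample counts, the uniform hypothesis, and the tabulated critical value are illustrative assumptions:

```java
import java.util.Arrays;

public class GoodnessOfFitSketch {

    /** Pearson chi-square statistic for observed bin counts vs. expected counts. */
    static double chiSquare(long[] observed, double[] expected) {
        double stat = 0.0;
        for (int i = 0; i < observed.length; i++) {
            double diff = observed[i] - expected[i];
            stat += diff * diff / expected[i];
        }
        return stat;
    }

    public static void main(String[] args) {
        // Illustrative sample: 200 collected values, binned into 4 equal-width bins.
        long[] observed = {52, 48, 55, 45};
        double total = Arrays.stream(observed).sum();
        // Hypothesis to check: the values are uniformly distributed over the bins.
        double[] expected = new double[observed.length];
        Arrays.fill(expected, total / observed.length);

        double stat = chiSquare(observed, expected);
        // 5% critical value of the chi-square distribution, 3 degrees of freedom.
        double critical = 7.815;
        System.out.printf("chi-square = %.3f, reject hypothesis: %b%n",
                stat, stat > critical);
    }
}
```

For this sample the statistic stays below the critical value, so the hypothesized distribution would not be rejected; a rejected hypothesis would indicate that the fitted input distribution is not practical for the gathered information.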


For documentation purposes in the verification and validation of the simulation model, the contents of the information verification and validation in the single phases are inserted into the verification and validation plan and the verification and validation report, according to the well-defined document structures.

MULTISTAGE TAILORING

As mentioned above, executing verification and validation activities in the verification and validation sub-phases redundantly (for example, comparing the formal model independently with the conceptual model, the structured problem description, and the sponsor needs) takes additional points of view of model evaluation into consideration. The verification and validation results concluded in this manner are therefore more dependable. However, for many simulation models in practice it is not possible to completely perform all verification and validation activities recommended in the verification and validation triangle because of time and budget restrictions. In such cases, a slim set of verification and validation activities, which still keeps the credibility evaluation at an acceptable level despite the restrictions, must be tailored.

Based on the detailed documentation requirements of verification and validation (Lehmann, Saad, Best, Köster, Pohl, Qian, Walder, Wang and Xu, 2005), in which project restrictions, intermediate results and their dependencies, verification and validation acceptance criteria, verification and validation activities, and project roles are well defined and specified, a multistage tailoring process is suggested not only for model development, but also for conducting verification and validation, comprising the following stages:

1. Tailoring at the process level
2. Tailoring at the result level
3. Tailoring at the role level

At the start of a simulation project, tailoring the model development process employed in the simulation study is of prime significance for preparing the simulation and verification and validation project. According to the determined restrictions of the simulation project, the project-specific results and the associated verification and validation activities can be identified, and irrelevant results are ignored. For instance, when it is determined that the formal model is not necessary for the present project, the related results of the formal model and its defined verification and validation activities remain out of consideration. By considering the specified result dependencies, the adaptation of the plan at the result level is conducted. This means that during model development further results to be developed can be chosen, while results already present in the simulation project can be removed, since the dependencies between the results are identified. Moreover, tailoring is conducted at the role level: every project role has only the access rights corresponding to the authority issued to it in the simulation project. Based on this concept, a supporting tool, the so-called simulation tailoring assistant, has been prototypically implemented.

DOCUMENTATION

Verification and validation documentation is generally critical in persuading individuals of the accuracy of a system and its outcome, and must be included in the simulation model documentation. For a general discussion on the documentation of computer-based models, see (Gass, 1999). Both detailed and summary documentation are preferred. The detailed documentation must embrace specifics on the analysis, the assessment, the data, and the outputs.
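The three stages of the multistage tailoring process described above can be pictured as successive filters over a catalogue of planned verification and validation activities. The following plain-Java sketch is purely illustrative: the activity names, roles, and the budget-based selection rule are invented for the example, not taken from the chapter:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TailoringSketch {

    record Activity(String name, String checkedResult, String role, int cost) {}

    /** Result level: keep activities whose checked result is part of the project.
     *  Role level: keep activities the executing role is authorized for.
     *  Process level: keep the cheapest activities until the budget is spent. */
    static List<Activity> tailor(List<Activity> catalogue, List<String> projectResults,
                                 String role, int budget) {
        List<Activity> candidates = catalogue.stream()
                .filter(a -> projectResults.contains(a.checkedResult()))
                .filter(a -> a.role().equals(role))
                .sorted(Comparator.comparingInt(Activity::cost))
                .toList();
        List<Activity> kept = new ArrayList<>();
        int spent = 0;
        for (Activity a : candidates) {
            if (spent + a.cost() <= budget) {
                kept.add(a);
                spent += a.cost();
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<Activity> catalogue = List.of(
                new Activity("syntax check", "formal model", "modeller", 1),
                new Activity("pairwise comparison", "formal model", "modeller", 3),
                new Activity("face validation", "conceptual model", "sponsor", 2));
        // The formal model is not needed in this project, so its checks drop out.
        System.out.println(tailor(catalogue, List.of("conceptual model"), "sponsor", 5));
    }
}
```

In the example run only the face validation activity survives: the formal-model checks fail the result-level filter, and the remaining activity fits within the budget.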


REMARKS ON SYSTEM VALIDATION

Researchers suggest that the following steps should be taken in system validation:

1. Create an agreement, before developing the system, between the system development team and the system sponsors (and, if possible, the users) that identifies the basic validation approach and a minimum set of specific validation techniques to be employed in the validation process.
2. Specify the amount of precision required of the system output variables of interest for the system's intended application, before beginning the development of the system or very early in the system development process.
3. Examine, wherever possible, the assumptions and hypotheses underlying the system.
4. In each system iteration, achieve at least face validity on the conceptual model.
5. In each system iteration, at least explore the model behavior using the computerized system.
6. In at least the last system iteration, make comparisons between the model output and the system behavior output for several sets of experimental cases.
7. Develop validation documentation for inclusion in the simulation system documentation.
8. If the system is to be employed over a period of time, develop a schedule for periodic evaluation of the system's validity.

Systems are rarely developed to be employed more than once. For those that are, a process for evaluating their validity over their life cycles needs to be developed, as indicated in step eight. No common process can be provided, as every situation is different. For instance, if no information was available on the system when the model was initially developed and validated, then revalidation of the system should take place before each further use whenever new information or model understanding has arisen since its last validation.

CONCLUSION

On the basis of the model verification and validation idea, we introduce in this chapter an improved approach for developing and conducting verification and validation of models and simulation results. By integrating additional information verification and validation activities into every verification and validation phase, this idea extends the scope of the verification and validation triangle and consistently refines the associated documentation requirements for intermediate results and for verification and validation results. Within the scope of model planning, a multistage tailoring concept for the purpose of project-specific adaptation is presented. By arranging the tailoring efforts at the levels of process, result, and role respectively, this generic tailoring idea offers a high degree of flexibility and feasibility for conducting simulation and verification and validation under different restrictions.

System validation and verification is important in the development of a simulation model. Unfortunately, there is no collection of specific tests that can easily be applied to determine the accuracy of a model. In addition, no algorithm exists to decide which methods or processes to employ; each new simulation project introduces a new and unique situation.

FUTURE WORK

More studies can concentrate on:

• Implementing the ideas of verification and validation and the tailoring process in an actual simulation project in practice.
• Extending the tool support for performing


verification and validation tasks, for instance identifying the internal result dependencies when developing intermediate results such as the conceptual model.
• Developing a verification and validation tool to support choosing and implementing suitable verification and validation techniques in various verification and validation environments.

REFERENCES

Balci, O. (1998). Verification, Validation and Testing. In J. Banks (Ed.), Handbook of Simulation. Hoboken, NJ: John Wiley & Sons.

Banks, J., Carson, J., II, Nelson, B., & Nicol, D. (2005). Discrete-Event System Simulation (4th Ed.). Upper Saddle River, NJ: Pearson Education International.

Banks, J., Gerstein, D., & Searles, S. P. (1998). Modeling Processes, Validation, and Verification of Complex Simulations: A Survey. Methodology and Validation, Simulation Series, 19(1), 13–18.

Brade, D. (2003). A Generalized Process for the Verification and Validation of Models and Simulation Results. Dissertation, Universität der Bundeswehr München, Germany.

Brade, D., & Lehmann, A. (2002). Model Validation and Verification. In Modeling and Simulation Environment for Satellite and Terrestrial Communication Networks: Proceedings of the European COST Telecommunication Symposium. Boston: Kluwer Academic Publishers.

Gass, S. I. (1999). Decision-Aiding Models: Validation, Assessment, and Related Issues for Policy Analysis. Operations Research, 31(4), 601–663.

Law, A. M., & Kelton, W. D. (2002). Simulation Modeling and Analysis (3rd Ed.). New York: McGraw-Hill.

Lehmann, A., Saad, S., Best, M., Köster, A., Pohl, S., Qian, J., Walder, C., Wang, Z., & Xu, Z. (2005). Leitfaden für Modelldokumentation, Abschlussbericht (in German). ITIS e.V.

Robertson, N., & Perera, T. (2001). Feasibility for Automatic Data Collection. In Proceedings of the 2001 Winter Simulation Conference.

Sargent, R. (2000). Verification, Validation, and Accreditation of Simulation Models. In Proceedings of the 2000 Winter Simulation Conference (p. 240).

Sargent, R. G. (1996). Some Subjective Validation Methods Using Graphical Displays of Data. In Proceedings of the 1996 Winter Simulation Conference (pp. 345–351).

Sargent, R. G. (1996). Verifying and Validating Simulation Models. In Proceedings of the 1996 Winter Simulation Conference (pp. 55–64).

Sargent, R. G. (1999). Verification and Validation of Simulation Models. In Proceedings of the 1998 Winter Simulation Conference (pp. 121–130).

U.S. General Accounting Office (1987). DOD Simulations: Improved Assessment Procedures Would Increase the Credibility of Results. Washington, DC: PEMD-88-3.

Spieckermann, A., Lehmann, A., & Rabe, M. (2004). Verifikation und Validierung: Überlegungen zu einer integrierten Vorgehensweise. In K. Mertins & M. Rabe (Hrsg.), Experiences from the Future (pp. 263–274). Stuttgart, Germany: Fraunhofer IRB.

Wang, Z., & Lehmann, A. (2008). Verification and Validation of Simulation Models and Applications. Hershey, PA: IGI Global.


KEY TERMS AND DEFINITIONS

Verification and Validation: To measure the accuracy and validity of every intermediate result separately.

Data Modeling: A lengthy and difficult task to accomplish; some studies state that up to 33% of the complete time used in a simulation analysis can be spent on data modeling.

Model Validation: Generally defined as the corroboration that a computerized model, within its domain of applicability, possesses an acceptable range of accuracy consistent with the planned use of the model.

The Conceptual Model: The mathematical or logical representation of the problem entity developed for the specific study. The conceptual model is developed during an analysis and prototyping phase.

The Computerized System: The conceptual model implemented on the machine. The computerized system is developed during a computer programming and implementation phase, and inferences regarding the problem entity are obtained by conducting machine experiments on the computerized system in an experimentation phase.

Operational Validity: Defined as the model's result behavior having enough correctness for the model's intended aim over the domain of the system's intended applicability.

Information Validity: Defined as ensuring that the information necessary for system construction, model assessment and testing, and conducting the system experiments to solve the problem is sufficient and accurate.

Animation: The model's operational behavior is displayed graphically as the model moves through time. For instance, the movements of parts through a factory during a simulation are shown graphically.

Static Testing: In static testing the computer program is analyzed to determine whether it is correct by employing such methods as structured walk-throughs, correctness proofs, and examination of the structural properties of the program.

Dynamic Testing: In dynamic testing the computer program is executed under various conditions, and the results obtained, including those created during the execution, are used to determine whether the computer software and its implementations are correct.


Chapter 5
DEVS-Based Simulation
Interoperability
Thomas Wutzler
Max Planck Institute for Biogeochemistry, Germany

Hessam Sarjoughian
Arizona Center for Integrative Modeling and Simulation, USA

ABSTRACT
This chapter introduces the usage of DEVS for the purpose of implementing interoperability across
heterogeneous simulation models. It shows that the DEVS framework provides a simple, yet effective
conceptual basis for handling simulation interoperability. It discusses the various useful properties of the
DEVS framework, describes the Shared Abstract Model (SAM) approach for interoperating simulation
models, and compares it to other approaches. The DEVS approach enables formal model specification
with component models implemented in multiple programming languages. The simplicity of the integra-
tion of component models designed in the DEVS, DTSS, and DESS simulation formalisms and imple-
mented in the programming languages Java and C++ is demonstrated by a basic educational example
and by a real-world forest carbon accounting model. The authors hope that readers will appreciate the combination of generality and simplicity and will consider using the DEVS approach for simulation interoperability in their own projects.

DOI: 10.4018/978-1-60566-774-4.ch005

INTRODUCTION

Interoperability among simulators continues to be of key interest within the simulation community. A chief reason for this interest is the existence of heterogeneous legacy simulations which are developed using a variety of programming practices and software engineering approaches. For many studied problems there already exist simulation models. Much work has been dedicated to developing the models, estimating parameters, verifying the simulations, and validating the model assumptions by comparing model results to observations. Hence, it is desirable to reuse such existing models for additional tasks and to integrate parts of several existing models into a common framework. However, this integration is hampered by the fact that the existing models have been developed in different programming

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

languages, with different software and different software engineering approaches. There exist already several approaches to implement a joint execution, and some of them are summarized in the Background section. In this book chapter we suggest an additional approach to realize interoperability among disparate simulators that utilizes the Discrete Event System Specification (DEVS). We argue that this approach yields several benefits. One of the most important benefits is the reduced complexity for adapting heterogeneous legacy models.

The interoperability issue will be exemplified with a typical problem which we encountered in our work. The task was to construct a forest ecosystem carbon tracking model. With such a model the exchange of carbon between the forest ecosystem and the atmosphere can be studied. One can then implement and test several scenarios of forest management and simulate how these influence the exchange of carbon dioxide with the atmosphere, to explore how these management strategies influence atmospheric carbon concentrations and global warming. There already existed a model for the stand growth, i.e. the growth of trees within a plot, which was based on extensive measurements of tree properties within the study region (Appendix A). It accounted for competition among different trees and incorporated knowledge of many species. Hence, the source code was quite extensive. There already existed also a model for the soil carbon balance that has been used and validated by many research groups (Appendix B). Only the management strategies had to be implemented as a new component model. However, the stand growth model was specified as a discrete-time model (DTSS) and implemented in JAVA, and the soil carbon balance model was specified as a continuous (differential equations) model (DESS) and implemented in C++ (see Figure 1). So how could we couple the heterogeneous models and perform a joint and integrated simulation?

The purpose of this chapter is to discuss the benefits of the DEVS concept compared to several commonly applied alternatives of tackling the problem of simulation interoperability. To this end we introduce the Shared Abstract Model (SAM) interoperability approach (Wutzler & Sarjoughian, 2007). We will exemplify its application first by a simple educational example model. This simple example provides the basis to compare the SAM approach to other DEVS-based approaches. Finally we present a setup using SAM that we used to solve the problem posed in Figure 1. Next, we start with a discussion of several commonly used alternatives of tackling the problem of simulation interoperability.

BACKGROUND

There exist several alternative approaches to construct a model from component models such as the ones described in the Introduction section (refer to Figure 1). One solution to the interoperability task is to develop all models using a single programming language. We could have reimplemented the soil model in JAVA as a time-stepped model. The reimplementation alternative, however, impedes independent development of the component models by different research groups. This disadvantage, combined with weak support for the verification and validation of simulation model results, makes reimplementation unattractive and generally impractical. In particular, there is a growing need for alternative simulation interoperability approaches in the domain of environmental modeling and simulation (Papajorgji et al., 2004; Filippi & Bisgambiglia, 2004; Roxburgh & Davies, 2006).

A second alternative is to use the High Level Architecture (HLA) simulation middleware (IEEE, 2000). It supports interoperation among simulated and physical software/systems. The HLA standard is widely used and considered more powerful in terms of supporting different kinds


Figure 1. Motivating example: simulating a coupled forest ecosystem carbon budget model which is composed of component models that have been specified in different simulation formalisms and programming languages

of simulations (e.g., conservative and optimistic (Fujimoto, 2000)) that can be mapped into the HLA Object Model Template. The HLA Object Model Template (HLA/OMT) plays a key role in building interoperable simulations. Its primary role is to specify HLA-relevant information about federates that collectively form one or more federations. To simulate a federation of models (referred to as federates), a Federation Object Model (FOM) and Simulation Object Models (SOM) must be developed. The FOM is used as a specification for the exchange of data and general coordination among members of a federation. The SOM is used as a standardized mechanism for federates to interact. The FOM and SOM are specified in terms of object-oriented concepts and methods (Unified Modeling Language). Important model specifications are object and interaction classes, which define data types and exchanges among federates. Another part of HLA/OMT is routing spaces, which help to support efficient data distribution among federates.

The HLA standard rules and interface specifications together with HLA/OMT provide a general basis for developing interoperable simulated and non-simulated federates. The standard rules are divided into federation and federate rules, where each set of rules specifies the constraints under which federates, federations, and the interface specification work with one another. The federation rules are primarily concerned with execution and therefore with the HLA Runtime Infrastructure (RTI). The HLA rules are defined such that in a federation execution all data exchanges must be in accordance with the FOM data, and each SOM can be owned by at most one federate. The interoperation of simulations must be defined in terms of a set of services including time, data, and ownership management services. Hence, given the HLA's intended level of generality, developing interoperable simulations in general remains demanding despite the availability of tools. Furthermore, it is difficult to ensure simulation correctness with HLA (Lake et al., 2000).

A third alternative is to use the DEVS concept to realize simulation interoperability. We prefer this alternative because of several benefits. First, the DEVS framework is based on formal syntax and


semantics that can ensure simulation correctness among heterogeneous DEVS-based simulations (Sarjoughian & Zeigler, 2000). The model and the simulator are separated from one another, which greatly helps the verification and validation of the simulator and the model, respectively (Zeigler & Sarjoughian, 2002). Second, there exist freely available DEVS simulation environments for the most common programming languages. Hence, the coverage of existing models that can potentially take part in coupled simulations is high. Third, we will show that the interfaces required for interoperation are not complex. This helps to keep the effort necessary to overcome heterogeneity in programming languages and simulation engines at a feasible level. And fourth, common modeling formalisms can be represented in the DEVS paradigm. This includes all event-driven formalisms (Zeigler et al., 2000b), time-stepped systems (DTSS), and continuous systems (DESS) by quantization (Cellier & Kofman, 2005). In summary, the properties of DEVS combined with advances in object-oriented software design methods make it feasible to adapt existing code with heterogeneities in programming languages, simulation engines, and simulation formalisms to take part in interoperable simulations.

There already exist several other approaches to interoperate several DEVS environments. Efforts are underway to standardize the interfaces of several DEVS environments (DEVS-Standardization-Group, 2008). In this book chapter we will first describe the SAM approach (Wutzler & Sarjoughian, 2007), which is based solely on the description of an abstract atomic model. Alternative DEVS-based approaches will be discussed and compared after the description of the SAM approach.

THE SAM APPROACH OF TACKLING
SIMULATION INTEROPERABILITY

In the ecological modeling example one component model is implemented in JAVA and another one is implemented in C++ (see Figure 1). Fortunately, there exist several DEVS simulation engine implementations for both programming languages. We chose DEVSJAVA (ACIMS, 2003) to simulate component models that have been implemented in JAVA and Adevs (Nutaro, 2005) to simulate component models that have been implemented in C++.

Both simulation engines share the same formal specification and the abstract parallel DEVS simulation protocol. However, their realizations are quite different. For example, the communication between a coordinator and its child coordinators is implemented very differently in DEVSJAVA and in Adevs. So how could we set up a simulation in which Adevs and DEVSJAVA models can be coupled together? One strategy is to support interoperability at the modeling level. For example, one could allow an Adevs simulator B to execute a DEVSJAVA component model A. This strategy, which is taken by the SAM approach, is depicted in Figure 2. The level of independence from implementation details is achieved by standardizing the basic interface of the atomic model and its invocation by an atomic simulator. As shown in Figures 2 and 3, the Abstract Model Interface is defined using the meta-language OMG-idl (Vinoski, 1997).

The Abstract Model Interface corresponds to the mathematical representation in the DEVS specification (Zeigler et al., 2000a). It only slightly differs from the DEVS atomic simulation protocol in that the various transition functions (δext, δint, δconf) already return the result of the time advance function (ta). This modification has been introduced to save computing time during inter-process communications.

The disparity between a particular simulation engine (e.g., Adevs) and a model implementation


Figure 2. SAM interoperability approach. All the DEVS simulation engines share the same simulation protocol. This allows the protocol to be specified in a meta-language and allows the simulator in one programming language to execute models that have been written in other languages

(e.g., DEVSJAVA) is handled with the Model Proxy and the Model Adapter, as shown in Figure 4. For each simulator that should execute a model residing within a different simulation engine, a model proxy is created. This proxy translates the engine-specific method invocations of the atomic model into method invocations of the abstract model interface. On the other side, a model adapter translates the invocations of the abstract model interface into engine-specific invocations of the atomic model. For an atomic component model the implementation of the adapter is straightforward.

Coupled models are mapped into this schema by a special model adapter. In this approach the nature of a coupled component model is transparent: the simulator only sees an atomic model. This is possible because of the DEVS closure under coupling property. This means that every coupled model can be treated as an atomic model

Figure 3. The interface of the atomic DEVS model specified in the meta-language OMG-idl
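In Java, the flavour of the interface in Figure 3 can be conveyed roughly as follows. This is an illustrative paraphrase rather than the chapter's actual IDL: the method and class names are invented, but, mirroring the optimization described above, each transition directly returns the next time advance:

```java
public class AbstractModelSketch {

    /** Illustrative paraphrase of an abstract atomic-model interface. */
    interface AbstractModel {
        double init(double t0);
        double internalTransition();
        double externalTransition(double elapsed, String[] inputs);
        double confluentTransition(String[] inputs);
        String[] outputFunction();
    }

    /** A trivial implementation: emits one "job" message and becomes passive. */
    static class OneShotGenerator implements AbstractModel {
        private boolean fired = false;
        public double init(double t0) { return 1.0; }
        public double internalTransition() {
            fired = true;
            return Double.POSITIVE_INFINITY;   // passive: no further internal events
        }
        public double externalTransition(double elapsed, String[] inputs) {
            return Double.POSITIVE_INFINITY;   // ignores external input
        }
        public double confluentTransition(String[] inputs) {
            return internalTransition();
        }
        public String[] outputFunction() {
            return fired ? new String[0] : new String[] {"job"};
        }
    }

    public static void main(String[] args) {
        AbstractModel m = new OneShotGenerator();
        double ta = m.init(0.0);            // simulator schedules first event at t0 + ta
        String[] out = m.outputFunction();  // output function is invoked before delta_int
        ta = m.internalTransition();        // afterwards the model is passive
        System.out.println(out[0] + ", next event in: " + ta);
    }
}
```

Because every call already returns the next time advance, a simulator driving such a model over a process boundary needs one round trip per event instead of two.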


Figure 4. Software architecture of the SAM approach. The Proxy and the Adapter, which are based on the DEVS interface, together with a middleware allow inter-process and inter-DEVS-simulation-engine communication
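The division of labour shown in Figure 4 can be sketched in a few lines of plain Java. All interfaces and names below are invented for the illustration; the real implementation bridges DEVSJAVA and Adevs across processes via a middleware such as CORBA:

```java
public class ProxyAdapterSketch {

    /** What a native simulation engine expects from its atomic models. */
    interface NativeAtomicModel { double onInternalEvent(); }

    /** The shared, engine-independent abstract model interface. */
    interface AbstractModel { double internalTransition(); }

    /** Proxy: looks like a native model, but delegates to the abstract
     *  interface (which may live in another process behind the middleware). */
    static class ModelProxy implements NativeAtomicModel {
        private final AbstractModel remote;
        ModelProxy(AbstractModel remote) { this.remote = remote; }
        public double onInternalEvent() { return remote.internalTransition(); }
    }

    /** Adapter: exposes a model written for some other engine
     *  through the abstract interface. */
    static class ModelAdapter implements AbstractModel {
        private final NativeAtomicModel foreign;
        ModelAdapter(NativeAtomicModel foreign) { this.foreign = foreign; }
        public double internalTransition() { return foreign.onInternalEvent(); }
    }

    public static void main(String[] args) {
        NativeAtomicModel foreign = () -> 2.5;   // a model living in the "other" engine
        NativeAtomicModel viaBridge = new ModelProxy(new ModelAdapter(foreign));
        System.out.println(viaBridge.onInternalEvent());
    }
}
```

The point of the pattern is that proxy and adapter are written once per simulation engine; individual component models need no changes.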

with respect to its external inputs and outputs. (see Figure 5). The atomic and coupled models
At the implementation level, this is achieved by are shown as blocks and couplings between them
modeling the invocation of a coordinator of a are shown as unidirectional arrows with input
coupled model, i.e. how parent coordinators call and output port names attached to them. The
the coordinator, as an atomic model. The invocation of the coordinator usually happens in a DEVS-engine-specific way. However, by modeling it as an atomic model, the coupled case is mapped to the already solved atomic case. For details and the description of the example implementations of the proxies and adapters in DEVSJAVA and Adevs (C++) we refer the reader to (Wutzler & Sarjoughian, 2007). How the SAM approach works is best illustrated with an example using a very simple model.

A SIMPLE EXAMPLE OF
USING THE SAM APPROACH

We will illustrate the usage of the SAM approach to tackling simulation interoperability with a simple model that consists of three component models coupled in a hierarchical manner.

The Experimental-Frame
Processor Model

The experimental-frame processor (ef-p) model is a simple coupled model of three atomic models. The generator atomic model generates job-messages at fixed time intervals and sends them via the Out port. The transducer atomic model accepts job-messages from the generator at its Arrived port and remembers their arrival time instances. It also accepts job-messages at the Solved port. When a message arrives at the Solved port, the transducer matches this job with the previous job that had arrived on the Arrived port earlier and calculates their time difference. Together, these two atomic models form an experimental frame coupled model. The experimental frame (ef) model sends the generator's job messages on the Out port and forwards the messages received on its In port to the transducer's Solved port. The transducer observes the response (in this case the turnaround time) of messages that are injected into an observed system. The observed system in this case is the processor atomic model. A processor accepts jobs at its In port and sends them out via its Out port after some finite, but non-zero, time period. If the processor is busy when a new job arrives, the processor discards it.
In order to demonstrate the SAM approach, we partitioned the component models into two different simulation engines.
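The behaviour of the ef-p experiment just described can be mimicked outside any DEVS engine. The following minimal Python sketch is illustrative only (the period, service time, and dictionary bookkeeping are assumptions, not the DEVSJAVA or Adevs API); it reproduces the generator-processor-transducer logic, including the discarding of jobs that arrive while the processor is busy:

```python
# Minimal sketch of the ef-p experiment: a generator emits jobs at fixed
# intervals, a processor serves one job at a time and discards arrivals
# while busy, and a transducer records turnaround times.

def run_ef_p(period=1.0, service_time=2.5, end_time=10.0):
    busy_until = -1.0        # processor is idle initially
    turnaround = {}          # job id -> solved time minus arrival time
    t, job = 0.0, 0
    while t <= end_time:
        arrival = t          # ef notes the arrival time of the job
        if t >= busy_until:  # processor idle: accept and serve the job
            busy_until = t + service_time
            turnaround[job] = busy_until - arrival
        # else: processor is busy, so the job is discarded
        job += 1
        t += period          # next generator event
    return turnaround

print(run_ef_p())
```

With the assumed period of 1.0 and service time of 2.5, only every third job is solved, each with a turnaround equal to the service time; this is exactly the kind of response the transducer of the experimental frame records.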

DEVS-Based Simulation Interoperability

Figure 5. Experimental frame (ef)-processor (p) model

The coupled experimental frame model was implemented in DEVSJAVA while the processor atomic model and the overall simulation were implemented in Adevs.
This setup is depicted by Figure 6. Rounded boxes (Adevs, DEVSJAVA, and Middleware) represent operating system processes; white angled boxes represent simulators, dark gray boxes represent models, light grey shapes represent interoperation components, and arrows represent interactions.
The model proxies and the model adapters for the atomic and coupled models in the DEVSJAVA and Adevs simulation engines were developed beforehand. They need to be developed only once for each simulation engine. The experimental frame coupled model, a message translator, and the model adapter were constructed and started in a DEVSJAVA server process (Figure 7(a)). Further, the CORBA object of the model adapter was constructed and published using the SUN Object Request Broker (ORB) and Naming Service, which are part of the JDK 1.5 (SUN, 2006). The CORBA stub of this adapter was then obtained in the C++/Adevs client process using the ACE/TAO ORB version 1.5 (Schmidt, 2006). Together with a message translator, the model proxy was constructed (Figure 7(b)). The message translator had been introduced into the SAM approach to handle the transfer of message contents between different programming languages that are more complex than strings or numbers. Finally, the model proxy was used like any other Adevs atomic model within the Adevs simulation (Figure 7(c)). The example was developed and run on personal computers running the operating system Windows XP. It was tested on a single machine and, in addition, with the component models running on different machines.

COMPARISON OF DEVS
INTEROPERABILITY APPROACHES

After having introduced and demonstrated the SAM approach, we can now compare it to other approaches. There exist alternative approaches of using DEVS to implement interoperability between simulation models written in different programming languages.
The first DEVS-based approach for building interoperable simulations is called DEVS-Bus (Kim et al., 2003). The DEVS-Bus, which was originally introduced to support distributed execution, uses HLA (Dahmann et al., 1999; Fujimoto, 1998) and the ACE-TAO ORB in logical and (near) real-time (ACIMS, 2000; Cho et al., 2003). The DEVS-Bus framework conceptually consists of three layers: the model layer, the DEVS layer, and the HLA (High Level Architecture) layer. The DEVS layer provides a common framework, called the DEVS-Bus, so that such simulation models can communicate with each other.
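The construction steps of Figure 7 follow a classic proxy/adapter pattern. The sketch below condenses it into a single Python process; the class names are invented for illustration, and a plain dict of callables stands in for the CORBA middleware that the real implementation uses:

```python
# Sketch of the SAM proxy/adapter pair: the proxy exposes the abstract
# atomic-model operations on the client side and forwards each call to
# an adapter wrapping the real model on the server side.

class RemoteModel:                      # model living in the server engine
    def __init__(self):
        self.state = 0
    def delta_internal(self):           # DEVS internal transition
        self.state += 1
    def output(self):                   # DEVS output function
        return "msg%d" % self.state

class ModelAdapter:                     # server side: publishes operations
    def __init__(self, model):
        self._ops = {"delta_internal": model.delta_internal,
                     "output": model.output}
    def invoke(self, op):               # in reality: invoked via the ORB
        return self._ops[op]()

class ModelProxy:                       # client side: looks like a local model
    def __init__(self, adapter):
        self._adapter = adapter
    def delta_internal(self):
        self._adapter.invoke("delta_internal")
    def output(self):
        return self._adapter.invoke("output")

proxy = ModelProxy(ModelAdapter(RemoteModel()))
proxy.delta_internal()
print(proxy.output())   # the caller never touches RemoteModel directly
```

The point of the pattern is that the coordinator only ever sees the proxy's abstract interface, so the remote model stays transparent to it.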


Figure 6. Distributed setup of the ef-p model

Finally, the HLA layer is employed as a communication infrastructure, which supports several important features for distributed simulation. In both the SAM and DEVS-Bus approaches, generic adapters for each simulation engine are used for conformity to the DEVS. Once this adapter is developed, all models designed for the given simulation engine can take part in heterogeneous simulation. The major difference between the approaches is that the DEVS-Bus defines a new simple protocol. In contrast, the SAM approach is specified entirely within the DEVS specification. The DEVS-Bus controls the communication between several models, and the participating models or simulators are aware of the distributed setup. In contrast, in the SAM approach a model proxy is used and, by this, the heterogeneous models become transparent to the root coordinator. The DEVS-Bus can result in a performance bottleneck in simulations with many component models (Kim et al., 2003). The SAM approach shows good scaling properties (Wutzler & Sarjoughian, 2007).

Figure 7. Constructing and using the remote experimental frame sub-model


A second approach for simulation interoperability is to execute DEVS simulations using Service Oriented Architecture (SOA) technology (Mittal & Risco-Martin, 2007). It is based on the SOA (Erl, 2005) and the specification of participating models within the XML-based language DEVSML (Mittal & Risco-Martin, 2007). Upon execution initialization, the syntax of the DEVSML models is translated to the syntax of the DEVSJAVA or DEVS-C++ models. The translation requires all the models to be specified in the DEVSML syntax. A key implication is that the behaviors of the target DEVSJAVA and DEVS-C++ models must be represented within DEVSML, which is significantly restricted as compared with the Java and C++ programming languages. The high-level DEVSML abstraction is well suited for coupled models, but imposes fundamental restrictions on specifying atomic models. Therefore, the SOA approach cannot address interoperating existing models that are developed by different teams in different programming languages.
A third approach for interoperable DEVS-based simulators was suggested by Lombardi et al. (2006). It uses adapters for the simulators instead of adapters for the models. The advantage is that this approach allows transforming hierarchical models into flat models where only leaf-node models exchange messages. In the SAM approach the hierarchical model structure is preserved and messages need to be passed up and down the hierarchy. However, we note that simulators must communicate at each instance of time when events occur, whereas models must communicate only at the time instances when the model is imminent, i.e. when it undergoes internal or external transitions. Hence, in the SAM approach fewer inter-process communications are required (Wutzler & Sarjoughian, 2007), and we expect that this will outweigh the overhead of passing messages down the hierarchy within one simulation engine.
In summary, we argue that for the purpose of integrating heterogeneous submodels, the SAM approach is the most suitable of all the discussed approaches. However, in order to demonstrate its suitability we go one step further and apply it to the real-world application example that was introduced in the beginning.

A MORE COMPLEX EXAMPLE OF
USING THE SAM APPROACH

In order to demonstrate the applicability of the SAM approach to more complex real-world situations of model interoperability, we describe a setup of the solution to the problem presented in Figure 1 and the introduction. In order to let the heterogeneous component models take part in the joint simulation, a few adaptations were necessary.

Adaptation of the
Component Models

In the ecological example shown in Figure 1, the product model is event-driven and belongs to the class of discrete-event models. The stand growth component model, however, runs in time steps of five years (DTSS) and the soil carbon balance model is a continuous-time model (DESS). We used general adapters that were developed together with the SAM approach to interoperate these models. The general adapter for the time-stepped models collected all input events during one period, executed the transition function at the time steps, and generated output events at these time steps. In order to use the DESS model in the coupled simulation, the computation of the derivatives was encapsulated into a single function. This derivative function was then evaluated using the quantization approach, which was already part of the Adevs simulation engine. Hence, after the non-trivial development of the general adapters, the adjustments to the component models were quite simple and straightforward. So we could proceed with the setup of the simulations.
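The general adapter for the time-stepped stand growth model, as described above, buffers incoming events and fires the transition only at step boundaries. A hypothetical Python sketch of that behaviour (the class name and the toy transition function are invented for illustration):

```python
# Sketch of wrapping a time-stepped (DTSS) model as an event-driven
# component: input events are buffered between steps, and the model's
# transition fires only at the fixed step boundaries, producing an
# output event at each step.

class DtssAdapter:
    def __init__(self, step, transition):
        self.step = step              # e.g. 5 (years) for the stand model
        self.next_step = step
        self.buffer = []              # events collected between steps
        self.transition = transition

    def on_event(self, time, value):  # called for every incoming event
        self.buffer.append((time, value))

    def advance_to(self, time):
        """Fire the transition at every step boundary up to `time`."""
        outputs = []
        while self.next_step <= time:
            outputs.append((self.next_step, self.transition(self.buffer)))
            self.buffer = []
            self.next_step += self.step
        return outputs

# toy transition: sum the buffered input values
adapter = DtssAdapter(step=5, transition=lambda buf: sum(v for _, v in buf))
adapter.on_event(1, 10)
adapter.on_event(3, 20)
print(adapter.advance_to(11))
```

Events at t=1 and t=3 are absorbed into the step at t=5; the step at t=10 fires with an empty buffer, mirroring how the adapter lets a five-year model coexist with event-driven partners.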


Figure 8. Distributed setup of the interoperability solution of the motivation example

Setup of the Simulation the data structures from implementation specific


formats to the idl-specifications.
The setup was almost as straightforward as in The second major difference to the simple
the simple example (Figure 7). The adapter of example was that the submodels required a
the coupled model of soil carbon dynamics was quite complex parameterization and initializa-
specified in Adevs. The adapter was started in tion. In order to handle the parameterization in a
a separate server process. This process regis- centralized manner, i.e. not scattered across the
tered the Abstract model interface of the model component model operating system processes, we
adapter with the CORBA Naming service. In implemented initialization and parameterization
a second DEVSJAVA process the model proxy of the component model hierarchy by sending
then could connect to this interface and invoke xml-formatted strings to the initialization func-
the operations of the soil model adapter via the tions of the component models. By this way the
CORBA middleware. The product model as well interface of the abstract model was extended
as the model adapter for the stand growth model by only one two methods (Listing 2a). Further,
were directly specified in DEVSJAVA. Hence it extensions involved by getting names of the com-
was now straightforward to construct a coupled ponent models and its ports and setting or getting
simulation of all three heterogeneous component the state for interruption and continuation of the
models (see Figure 8). model execution.
The major difference to the simple example was The coupled model was used in research in
that the data exchanged between the component natural sciences and results can be found for
models were more complex than simple strings. example in (Wutzler, 2008). The component
A major part of the implementation effort was models will be further developed independently
devoted to developing the idl-specification of the by different research groups and the incorporation
structured data that was exchanged, i.e. the mes- of future versions will require solely a minimum
sage contents. In addition to the general model effort of recoding in the model adapters and
adapter and model proxies, message translator the parts responsible for parameterization and
components had to be developed, that converted initialization.
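The XML-based, centralized initialization can be sketched as follows; the element and attribute names are invented for illustration and do not reproduce the actual interface of Listing 2a:

```python
# Sketch of centralized parameterization: the coordinating process
# builds one XML string per component model and hands it to that
# model's initialization function, instead of scattering parameter
# handling across the component operating-system processes.
import xml.etree.ElementTree as ET

def init_from_xml(xml_string):
    """Hypothetical component-side initialization function."""
    root = ET.fromstring(xml_string)
    return {p.get("name"): float(p.get("value"))
            for p in root.findall("param")}

xml = ('<model name="soil">'
       '<param name="decayRate" value="0.12"/>'
       '<param name="initialStock" value="80.0"/>'
       '</model>')
print(init_from_xml(xml))
```

Because only a string crosses the process boundary, extending the abstract model interface by a single initialization method is enough to parameterize arbitrarily structured submodels.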


Figure 9. Constructing and using the remote experimental frame sub-model

CONCLUSION

Based on the comparison of DEVS-based approaches with other approaches of implementing interoperability across simulation models, we conclude that DEVS-based approaches are applicable and very suitable for this task. There exist several DEVS-based approaches which were designed for different goals and therefore have different advantages and disadvantages. With the goal of designing a general approach that supports verification of coupled models and at the same time fosters a relatively uncomplicated adaptation of existing simulation models implemented in different programming languages, we conclude that the SAM approach is the most suitable and has great potential. This conclusion was supported by demonstrating the usage of the SAM approach in both a very simple and a more complex real-world application. Further development and standardization is required for other simulation services, such as formats of data exchange between component models and component model parameterization. The DEVS-based SAM interoperability approach made it possible to overcome heterogeneity in programming languages while allowing models specified in different modeling formalisms to seamlessly communicate with one another, while keeping the adaptation of existing programming code uncomplicated.

ACKNOWLEDGMENT

By courtesy of SCS, the Society for Modeling and Simulation International, this book chapter partly reproduces contents of the article: Wutzler, T., & Sarjoughian, H. S. (2007). Interoperability among parallel DEVS simulators and models implemented in multiple programming languages. Simulation Transactions, 83, 473-490.

REFERENCES

ACIMS. (2000). DEVS/HLA software. Retrieved September 1st, 2008, from http://www.acims.arizona.edu/SOFTWARE/software.shtml. Arizona Center for Integrative Modelling and Simulation.


ACIMS. (2003). DEVSJAVA modeling & simulation tool. Retrieved September 1st, 2008, from http://www.acims.arizona.edu/SOFTWARE/software.shtml. Arizona Center for Integrative Modeling and Simulation.

Cellier, F., & Kofman, E. (2005). Continuous system simulation. Berlin: Springer.

Cho, Y. K., Hu, X. L., & Zeigler, B. P. (2003). The RTDEVS/CORBA environment for simulation-based design of distributed real-time systems. Simulation Transactions, 79(4), 197–210. doi:10.1177/0037549703038880

Dahmann, J., Salisbury, M., Turrel, C., Barry, P., & Blemberg, P. (1999). HLA and beyond: Interoperability challenges (Paper no. 99F-SIW-073). Presented at the Fall Simulation Interoperability Workshop, Orlando, FL, USA.

DEVS-Standardization-Group. (2008). General info. Retrieved September 1st, 2008, from http://cell-devs.sce.carleton.ca/devsgroup/.

Erl, T. (2005). Service-Oriented Architecture: Concepts, Technology and Design. Upper Saddle River, NJ: Prentice Hall.

Filippi, J. B., & Bisgambiglia, P. (2004). JDEVS: an implementation of a DEVS based formal framework for environmental modelling. Environmental Modelling & Software, 19(3), 261–274. doi:10.1016/j.envsoft.2003.08.016

Fujimoto, R. (1998). Time management in the High-Level Architecture. Simulation: Transactions of the Society for Modeling and Simulation International, 71(6), 388–400. doi:10.1177/003754979807100604

Fujimoto, R. (2000). Parallel and distributed simulation systems. Mahwah, NJ: John Wiley and Sons, Inc.

Hasenauer, H. (2006). Sustainable forest management: growth models for Europe. Berlin: Springer.

IEEE. (2000). HLA framework and rules (Version IEEE 1516-2000). Washington, DC: IEEE Press.

Kaipainen, T., Liski, J., Pussinen, A., & Karjalainen, T. (2004). Managing carbon sinks by changing rotation length in European forests. Environmental Science & Policy, 7(3), 205–219. doi:10.1016/j.envsci.2004.03.001

Kim, Y. J., Kim, J. H., & Kim, T. G. (2003). Heterogeneous simulation framework using DEVS BUS. Simulation Transactions, 79, 3–18. doi:10.1177/0037549703253543

Lake, T., Zeigler, B., Sarjoughian, H., & Nutaro, J. (2000). DEVS simulation and HLA lookahead (Paper no. 00S-SIW-160). Presented at the Spring Simulation Interoperability Workshop, Orlando, FL, USA.

Liski, J., Palosuo, T., Peltoniemi, M., & Sievanen, R. (2005). Carbon and decomposition model Yasso for forest soils. Ecological Modelling, 189(1-2), 168–182. doi:10.1016/j.ecolmodel.2005.03.005

Lombardi, S., Wainer, G. A., & Zeigler, B. P. (2006). An experiment on interoperability of DEVS implementations (Paper no. 06S-SIW-131). Presented at the Spring Simulation Interoperability Workshop, Huntsville, AL, USA.

Mittal, S., & Risco-Martin, J. L. (2007). DEVSML: Automating DEVS execution over SOA towards transparent simulators. In Special Session on DEVS Collaborative Execution and Systems Modeling over SOA, Proceedings of the DEVS Integrative M&S Symposium, Spring Simulation Multiconference, Norfolk, Virginia, USA (pp. 287–295). Washington, DC: IEEE Press.


Mund, M., Profft, I., Wutzler, T., Schulze, E.D., Weber, G., & Weller, E. (2005). Vorbereitung für eine laufende Fortschreibung der Kohlenstoffvorräte in den Wäldern Thüringens. Abschlussbericht zur 2. Phase des BMBF-Projektes "Modelluntersuchungen zur Umsetzung des Kyoto-Protokolls" (Tech. rep., TLWJF, Gotha).

Nabuurs, G. J., Pussinen, A., Karjalainen, T., Erhard, M., & Kramer, K. (2002). Stemwood volume increment changes in European forests due to climate change - a simulation study with the EFISCEN model. Global Change Biology, 8(4), 304–316. doi:10.1046/j.1354-1013.2001.00470.x

Nagel, J. (2003). TreeGrOSS: Tree Growth Open Source Software - a tree growth model component.

Nagel, J., Albert, M., & Schmidt, M. (2002). Das waldbauliche Prognose- und Entscheidungsmodell BWINPro 6.1. Forst und Holz, 57(15/16), 486–493.

Nutaro, J. J. (2005). Adevs. Retrieved Jan 15, 2006, from http://www.ece.arizona.edu/~nutaro/

Palosuo, T., Liski, J., Trofymow, J. A., & Titus, B. D. (2005). Litter decomposition affected by climate and litter quality - testing the Yasso model with litterbag data from the Canadian intersite decomposition experiment. Ecological Modelling, 189(1-2), 183–198. doi:10.1016/j.ecolmodel.2005.03.006

Papajorgji, P., Beck, H. W., & Braga, J. L. (2004). An architecture for developing service-oriented and component-based environmental models. Ecological Modelling, 179(1), 61–76. doi:10.1016/j.ecolmodel.2004.05.013

Peltoniemi, M., Mäkipää, R., Liski, J., & Tamminen, P. (2004). Changes in soil carbon with stand age - an evaluation of a modelling method with empirical data. Global Change Biology, 10(12), 2078–2091. doi:10.1111/j.1365-2486.2004.00881.x

Porté, A., & Bartelink, H. H. (2002). Modelling mixed forest growth: a review of models for forest management. Ecological Modelling, 150, 141–188. doi:10.1016/S0304-3800(01)00476-8

Roxburgh, S. H., & Davies, I. D. (2006). COINS: an integrative modelling shell for carbon accounting and general ecological analysis. Environmental Modelling & Software, 21(3), 359–374. doi:10.1016/j.envsoft.2004.11.006

Sarjoughian, H., & Zeigler, B. (2000). DEVS and HLA: Complementary paradigms for modeling and simulation? Simulation Transactions, 17(4), 187–197.

Schmidt, D. C. (2006). Real-time CORBA with TAO. Retrieved September 5th, 2008, from http://www.cse.wustl.edu/~schmidt/TAO.html.

SUN. (2006). JDK-ORB. Retrieved September 1st, 2008, from http://java.sun.com/j2se/1.5.0/docs/guide/idl/

Vinoski, S. (1997). CORBA - integrating diverse applications within distributed heterogeneous environments. IEEE Communications Magazine, 35(2), 46–55. doi:10.1109/35.565655

Wutzler, T. (2008). Effect of the aggregation of multi-cohort mixed stands on modeling forest ecosystem carbon stocks. Silva Fennica, 42(4), 535–553.

Wutzler, T., & Mund, M. (2007). Modelling mean above and below ground litter production based on yield tables. Silva Fennica, 41(3), 559–574.

Wutzler, T., & Reichstein, M. (2007). Soils apart from equilibrium – consequences for soil carbon balance modelling. Biogeosciences, 4, 125–136.

Wutzler, T., & Sarjoughian, H. S. (2007). Interoperability among parallel DEVS simulators and models implemented in multiple programming languages. Simulation Transactions, 83(6), 473–490. doi:10.1177/0037549707084490


Zeigler, B. P., Praehofer, H., & Kim, T. G. (2000). Theory of modeling and simulation (2nd ed.). New York: Academic Press.

Zeigler, B. P., & Sarjoughian, H. S. (2002). Implications of M&S foundations for the V&V of large scale complex simulation models (Invited paper). In Verification & Validation Foundations Workshop, Laurel, Maryland, USA (pp. 1–51). Society for Computer Simulation. Retrieved from https://www.dmso.mil/public/transition/vva/foundations

Zeigler, B. P., Sarjoughian, H. S., & Praehofer, H. (2000). Theory of quantized systems: DEVS simulation of perceiving agents. Cybernetics and Systems, 31(6), 611–647. doi:10.1080/01969720050143175

KEY TERMS AND DEFINITIONS

Simulation Interoperability: the ability to build a heterogeneous simulator from two or more different simulators. Models can belong to distinct formalisms, and simulation engines can be implemented in multiple programming languages.
DEVS Model Specification: the specification of atomic and coupled models as mathematical structures.
DEVS Abstract Atomic Model Interface: the specification of the operations of the atomic DEVS model in a meta programming language.
Simulator and Coordinator: realizations of the DEVS atomic and coupled simulation protocols to execute atomic and coupled models, respectively.
SAM Approach: an approach for implementing simulation interoperability based on the DEVS model specification and the DEVS atomic model interface.
Model Adapter: a software component of a DEVS simulation engine that maps the DEVS abstract atomic model interface to the implementation-specific atomic model interface and directs the execution of a coordinator of a coupled model.
Model Proxy: a software component of a DEVS simulation engine that maps the implementation-specific atomic model interface to the DEVS abstract atomic model interface.


APPENDIX A: THE TREEGROSS STAND GROWTH MODEL

The TreeGrOSS (Tree Growth Open Source Software) model (Nagel, 2003) is a public domain variant of the BWinPro model (Nagel et al., 2002). According to the classification of Porté and Bartelink (2002) it belongs to the class of non-gap, distance-independent tree models. The empirical model is based on data from growth and yield experiments on about 3500 plots in northern Germany. It uses the potential growth concept (Hasenauer, 2006), which reduces the species- and site-dependent potential relative height growth of a top-height tree, ihrelPot, according to the single tree's competition situation (Eq. A1).

ihrel = ihrelPot + p1 (h100 / h)^p2 (A1)

where the pi are species-specific constants, h100 is the top height of the stand, i.e. the average height of the 100 highest trees, and h is the height of the considered tree. The basal area growth of a tree is estimated by Eq. A2.

ln(ΔaBasal) = p0 + p1 ln(cS) + p2 ln(age) + p3 c66 + p4 c66c + p5 ln(Δt) (A2)

where the pi are species-specific constants, cS is the crown surface area calculated from the diameter and height of the tree and the top height of the stand, age is the tree age, Δt is the time period (usually 5 years), c66 is the competition index (Figure 10), and c66c is an index that increases when the competition situation is relieved, i.e. when neighbouring trees are thinned.
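Taken literally, Eq. A2 predicts basal area growth on a logarithmic scale. The following sketch evaluates it with placeholder coefficients (the pi are species-specific fitted constants in TreeGrOSS and are not reproduced here):

```python
# Illustrative evaluation of the basal area growth equation (Eq. A2):
# ln(delta_a_basal) = p0 + p1 ln(cS) + p2 ln(age) + p3 c66 + p4 c66c + p5 ln(dt)
import math

def basal_area_growth(p, cS, age, c66, c66c, dt=5.0):
    ln_growth = (p[0] + p[1] * math.log(cS) + p[2] * math.log(age)
                 + p[3] * c66 + p[4] * c66c + p[5] * math.log(dt))
    return math.exp(ln_growth)       # back-transform from the log scale

# placeholder coefficients, NOT the fitted species constants of TreeGrOSS
p = (-2.0, 0.6, 0.2, -0.5, 0.3, 1.0)
print(basal_area_growth(p, cS=35.0, age=60.0, c66=0.8, c66c=0.1))
```

With a negative p3, a larger competition index c66 lowers the predicted growth, which matches the intended role of the competition terms in the equation.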

Further, the model was extended by thinning routines based only on information about the sum of basal area and the mean quadratic diameter of the thinned trees. These routines selected trees randomly from a probability distribution of tree diameters. One side of a Gaussian distribution was used, with its mean at the cohort's minimum or maximum diameter for thinning from below or above, respectively, and with a standard deviation chosen so that the expected quadratic mean diameter of the thinned trees was equal to the specified one.
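The half-Gaussian sampling of thinned-tree diameters can be sketched as below; the numbers are illustrative, and in the actual routine the standard deviation is solved for so that the expected quadratic mean diameter matches the specification, rather than given directly:

```python
# Sketch of the diameter-based thinning routine: thinned-tree diameters
# are drawn from one half of a Gaussian anchored at the cohort's minimum
# diameter (thinning from below).
import random

def sample_thinned_diameters(d_min, sigma, n, seed=42):
    rng = random.Random(seed)
    # abs() folds the normal variate onto one side of the mean
    return [d_min + abs(rng.gauss(0.0, sigma)) for _ in range(n)]

def quadratic_mean_diameter(diameters):
    return (sum(d * d for d in diameters) / len(diameters)) ** 0.5

diameters = sample_thinned_diameters(d_min=12.0, sigma=2.0, n=1000)
print(round(quadratic_mean_diameter(diameters), 2))
```

For thinning from above, the same fold would be applied on the other side, anchored at the cohort's maximum diameter.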

The model and the extensions were validated against plot data from permanent sampling inventories of three monospecific stands and two multi-cohort stands within the study region. An example is shown in Figure 11. The TreeGrOSS model performed at least as well as local yield tables, with significant improvements for co-dominant and suppressed cohorts.

The complete time series, which at several stands covered more than 100 years, were kindly provided by the Eberswalde forestry research institute and the Chair of Forest Growth and Timber Mensuration at TU Dresden, and were preprocessed by Mund et al. (2005).


Figure 10. Calculation of the competition index in TreeGrOSS (Nagel 2003, used with permission). At a height of 2/3 (or 66%) of the crown length, all crowns that reach that height are cut. If the crown base is above that height, the cross-sectional area of that tree is taken. The sum of the cross-sectional areas is divided by the stand area

APPENDIX B: THE YASSO SOIL MODEL

The soil carbon model Yasso was designed by Liski et al. (2005) in order to model soil carbon stocks of
mineral soils in managed forests. Figure 12 displays the model structure and the flow of carbon.

The colonization part (Figure 12(a)) describes a delay before decomposers can attack the woody litter compartments and, additionally, describes the composition of the different litter types in terms of the compartments that correspond to the kinetically defined pools. The decomposition part (Figure 12(b))

Figure 11. Comparison of inventoried timber volume from a suppressed beech cohort of the permanent
inventory plot Leinefelde 245 to model predictions by a yield table (Dittmar et al. 1986) and predictions
of the TreeGrOSS model


Figure 12. Flow chart of the Yasso model. a) species dependent part of litter colonization and separation of
litter into chemical compounds b) species independent part of decomposition of chemical compounds

describes the decomposition of the chemical compounds. The fwl pool can be roughly associated with undecomposed litter, the cwl pool with dead wood, and all the other parts with organic matter in soil, including the organic layer. The decay rates depend on mean annual temperature (or, alternatively, effective temperature sum) and on a drought index (the difference between precipitation and potential evapotranspiration during the period from May to September). In the standard parameterization the decay rates of the slower pools are less sensitive to temperature increase than those of the fast pools (humus one: 60%, humus two: 36% of the sensitivity of the fast pools). The model has been tested and successfully applied to boreal forest (Peltoniemi et al., 2004), to litter bag studies in Canada (Palosuo et al., 2005), and, as part of the CO2FIX model, all over Europe (Nabuurs et al., 2002; Kaipainen et al., 2004). In order to simulate multi-species stands, the colonization part was duplicated and parameterized for each tree cohort, and all the duplicates were coupled to the single, species-independent decomposition part.
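The decomposition part can be caricatured as a chain of first-order pools with climate-modified decay rates. The fragment below is an illustrative sketch with invented rates and transfer fraction, not the calibrated Yasso parameterization:

```python
# Illustrative first-order decomposition step in the spirit of the Yasso
# pools: each pool loses carbon at a climate-modified rate, and a fixed
# fraction of the loss is passed on to the next, slower pool (the rest
# is respired as CO2).

def step_pools(pools, base_rates, transfer, climate_modifier, dt=1.0):
    """One annual time step over an ordered chain of pools."""
    new = list(pools)
    for i, (stock, k) in enumerate(zip(pools, base_rates)):
        loss = stock * k * climate_modifier * dt
        new[i] -= loss
        if i + 1 < len(pools):
            new[i + 1] += transfer * loss
    return new

pools = [100.0, 50.0, 200.0]   # e.g. fast litter, slow litter, humus
rates = [0.5, 0.1, 0.01]       # placeholder base decay rates (1/yr)
print(step_pools(pools, rates, transfer=0.2, climate_modifier=1.1))
```

A warmer or wetter climate enters only through the common rate modifier, which is the role the temperature and drought indices play in the text above.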

The soil pools were initialized by spin-up runs with repeated climate data of the last century and average soil carbon inputs. The average soil carbon inputs were derived for each species by simulating the stand growth model over an entire rotation cycle including final harvest (Wutzler & Mund, 2007). Soil carbon inputs for cohorts, i.e. tree groups, in multi-cohort stands were decreased by the proportion of the tree group's basal area to the stand's basal area. In order to account for soil degradation in the past, the slowest pool was reset after the spin-up run so that the sum of the pools matches the carbon stocks obtained by spatial extrapolation of observed carbon stocks using the dominating tree species and site conditions (Wutzler & Reichstein, 2007).
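The spin-up initialization amounts to iterating the annual step with constant average inputs until the pools stop changing. A minimal single-pool illustration (invented numbers; analytically the steady state of such a pool is input divided by decay rate):

```python
# Sketch of a spin-up run: feed a pool with a constant average litter
# input and iterate the annual step until it reaches (near) steady
# state, which then serves as the initial condition of the simulation.

def spin_up(input_rate, decay_rate, tol=1e-9, max_years=100000):
    stock = 0.0
    for _ in range(max_years):
        new_stock = stock + input_rate - decay_rate * stock
        if abs(new_stock - stock) < tol:
            return new_stock
        stock = new_stock
    return stock

# invented numbers; the analytic steady state is input/decay = 40
print(round(spin_up(input_rate=2.0, decay_rate=0.05), 3))
```

Resetting the slowest pool after such a run, as described above, then anchors the total stock to the observed value while keeping the faster pools at their spin-up equilibrium.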


Chapter 6
Experimental Error
Measurement in Monte
Carlo Simulation
Lucia Cassettari
University of Genoa, Italy

Roberto Mosca
University of Genoa, Italy

Roberto Revetria
University of Genoa, Italy

ABSTRACT
This chapter describes the set up step series, developed by the Genoa Research Group on Production
System Simulation at the beginning of the ’80s, as a sequence, through which it is possible at first
statistically validate the simulator, then estimate the variables which effectively affect the different
target functions, then obtain, through the regression meta-models, the relations linking the independent
variables to the dependent ones (target functions) and, finally, proceed to the detection of the optimal
functioning conditions. The authors pay great attention to the treatment, the evaluation and control of the
Experimental Error, under the form of Mean Square Pure Error (MSPE), a measurement which is always
culpably neglected in the traditional experimentation on the simulation models but, that potentially can
consistently invalidate with its magnitude the value of the results obtained from the model.

INTRODUCTION

The canonical modelling approach of many disciplines and, particularly for the subject of this chapter, of many engineering studies, is the one that supplies representations of the phenomenal reality through mathematical propositions whose elaboration gives quantitative data, making possible evaluations of the target system derived from the formulated target function (see Figure 1).
When the reality is excessively complex, however, the attempt to enclose it in the rigid formalism of a series of equations is practically impossible; proceeding in this way can then mean changing its real characteristics due to the introduction of
DOI: 10.4018/978-1-60566-774-4.ch006

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
Experimental Error Measurement in Monte Carlo Simulation

the conceptual simplifications necessary, in any case, to build the model.
The fact that this formulation violates one of the fundamental principles of modelling, which states that "it is the model which should adapt to the target reality and not the reality which must be simplified to be put down in a model", has great importance, since under these conditions the value of the achievable results will range from not very significant to outright distorted, with serious prejudice to their usability for correct system analysis, as unfortunately often happens with numerous models of complex systems acting in the discrete in the presence of stochastic character.
In these cases the simulation tool, which avoids the typical rigidities of the purely analytic model by exploiting the flexibility of logical propositions to compensate for the descriptive limits of the mathematical ones, is the only tool able to provide an effective and efficient representation of the investigated system.
The remarkable power, in terms of adherence to the target reality, of discrete and stochastic simulation models is, however, frequently dissipated in the traditional experimentation phase which, as commonly carried out, is strongly self-limiting and never able to bring out completely what has been developed inside the model. The what-if analysis, indeed, by varying at each experimental act only one or at most a few input variables, produces in output partial and mutually inhomogeneous scenarios, since they relate to single punctual situations and do not result from a univocal matrix of experimental responses (see Figure 2). This gap can be conveniently overcome using planned test techniques (see Figure 3) borrowed from the Design of Experiments and the Response Surface Methodology, through which the experimental responses, obtained by imposing suitable solicitations on the model through values assigned to the independent variables according to pre-organized schemes, are translated into real state equations valid inside a pre-established domain of the target system's operating field.
In the following pages the set-up step series (see Figure 4), developed by the Genoa Research Group on production system simulation at the beginning of the '80s, is presented as a sequence through which it is possible first to statistically validate the simulator, then to estimate the variables which effectively affect the different target functions, then to obtain, through the regression meta-models, the relations linking the independent variables to the dependent ones (the target functions) and, finally, to proceed to the detection of the optimal functioning conditions (Mosca and Giribone, 1982).

Figure 1.

DETERMINATION OF THE
SIMULATION RUN

Introduction

After the model construction and analogical validation (that is, after having shown the model's ability to faithfully describe the investigated reality) it is necessary to achieve the statistical validation. By this is meant the confirmation of the model's ability to suitably treat the stochastic character transferred into it by the target system. To understand the determinant importance of this aspect it is sufficient to observe how, in a


Figure 2.

simulation model, there is always an accumulation of frequency distributions whose effects, as the run progresses, intersect and/or superpose, conditioning the value of the experimental response.
A great part of simulation model builders and/or users, therefore, lacked, and still lack today, the conceptual perception of the problems connected with a correct duration of the simulation run, a duration deeply connected, among other things, with the treatment of the stochastic character and with its effect of invalidating the responses of the built models, even models with great adherence to, and reproductive ability of, the target reality. In spite of this, many "modelling and simulation" texts that have long since become classics warn about the connected risks and suggest useful methodologies for solving the problem in question, which can be summarized as follows: downstream of the analogical validation and before the beginning of the experimental campaign, it is necessary to verify that two essential conditions, called stabilisation conditions, related respectively to the

Figure 3.


Figure 4.

beginning transitory and to the presence of the stochastic character itself, are met.

Duration of the Beginning Transitory or Warm-Up Period

The warm-up period is the initial phase of the run during which the simulated system evolves, inside the model, from its initial zero state of first start-up to the regime (steady-state) operating condition. During this simulated period, which is not representative of the model's normal operating condition, neither the target functions nor the statistically significant parameters are, obviously, accounted for, since the data supplied by the model can be assimilated to situations which, in the system's life, take place at its start-up and, at most, a few other times after particular events (e.g. important maintenance, revamping).

This experimental phase is entirely analogous to the one observed in the life of a newly built system at the time of the passage from its beginning quiet state (for production systems, for example: empty warehouses, stopped machines, resting operators, inactive transportation systems, etc.) to the state of the same system starting up with the unbalances characteristic of every transitory condition.

The time at which the transitory condition ends is normally estimated by expert experimenters on the basis of their feeling for the global behaviour deduced from the given model or from similar ones on which they have already operated.

Moreover, there are also scientific methodologies with which to evaluate the warm-up period; this is done by putting under control pre-determined points of interest of the modelled system, the so-called regeneration points (warehouse stock levels, queue lengths before the machines, etc.); from the study of their behaviour it is possible to estimate, with good approximation, the time of the passage, in the simulated system, from the transitory to the regime condition.
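The regeneration-point idea can be sketched in code. This is only a minimal illustration, not the chapter's procedure: a simple stochastic process (our assumption) stands in for the monitored level, and the function `warmup_end`, its window and its tolerance are hypothetical choices.

```python
import random

def warmup_end(series, window=50, tol=0.1):
    """Estimate the end of the beginning transitory: the first instant at
    which the moving average of the monitored regeneration point differs
    by less than tol (relative) from the moving average one window later."""
    ma = [sum(series[i:i + window]) / window
          for i in range(len(series) - window)]
    for i in range(len(ma) - window):
        later = ma[i + window]
        if later and abs(ma[i] - later) / abs(later) < tol:
            return i
    return None

# Stand-in for a monitored queue or stock level: starts empty (zero state)
# and relaxes toward a regime mean of 10 with stochastic noise.
random.seed(1)
level, series = 0.0, []
for t in range(2000):
    level += 0.02 * (10 - level) + random.gauss(0, 0.05)
    series.append(level)

t_warm = warmup_end(series)
```

Statistics accumulated before `t_warm` would be discarded, as the text prescribes for the warm-up period.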


Simulation Run Length

This is the second essential problem which, still today, after more than forty years of common use of the simulation tool, either has not been perceived by most (it is sufficient to read many of the articles published in specialized journals or to attend the main international events concerning this field) or, among those who have faced it, has been unsatisfactorily solved through approaches based on experience and/or on punctual methodologies, in any case very expensive in terms of machine time and such as to lead to conclusions no different from those empirically reached by the already mentioned skilled experimenters. Among the three mentioned cases the first is necessarily the most dangerous in terms of possible negative impact on the investigated target functions, while the other two limit their effect to an expensive, though almost casual, extension of the experimentation activity.

All this stated, let us now see where the need to correctly determine the simulation run length arises. The presence in the model of frequency distributions taken from the target reality in order to describe it at its best is, as mentioned in paragraph 1, one of the qualifying arguments justifying simulation as an excellent modelling approach: indeed, simulation allows a stochastic reality to be faced with an equally and identically stochastic model.

The essential distinction between the two situations (reality vs. model) is represented by the fact that, while the real system sees the targets assigned to it conditioned only by the background noise through which the different stochastic sources express themselves in it, the model is affected, in its output results, by a double source of uncertainty: the superposition of the noise naturally existing in the real system (not a simple but a composite stochastic character, deriving from both endogenous and exogenous causes) with the one related to the exploitation of the Monte Carlo method, which allows the metabolism of the real system's stochastic character by the simulation model. Explicitly speaking, this means that until the number of casual extractions carried out from each frequency distribution is sufficiently high to make possible undistorted estimates of the statistical parameters of the starting single distributions, the results which can be obtained from the simulation run shall not be in line with those supplied by the real system, being affected by an extra noise, characteristic of each investigated target function, completely ascribable to an insufficient duration of the experimentation phase.

THE ITIM APPROACH TO THE RUN LENGTH PROBLEM

Background

The ITIM approach to the run length problem starts from a fundamental remark shared by all those who, in their study, research and work activity, deal with the DOE and RSM methodologies, having suitably interiorized the meaning and the analytic and descriptive power of these techniques. Following exactly the DOE approach philosophy, the system concerned by the experimentation can, as a first approximation and given the deepening expected in the following, be considered as a black box subjected to the imposition of precise experimental solicitations. Under the impact of these solicitations we want to understand what part of the system behaviour, measured by means of a quantity called total variability, can be ascribed to the effect, on the target function, of each independent variable having an effective conditioning ability on the response, and what part, on the contrary, not being referable singularly or in combination to any of them, represents the ignorance rate, or non-knowledge, of the experimenter about the target system behaviours. The measurement of this ignorance is committed to a quantitative term called Experimental Error, commonly expressed through a statistical quantity called Sum Squares Error (SSE) or, equally, through


the Mean Squares Error (MSE) directly derived from the previous one. Evidently, as in any experimental situation, the higher the SSE percentage compared to the SST total variation, the higher the level of misunderstanding of the investigated system behaviours which can be appreciated through the modus agendi of the independent variables. The above, expressed in other words and extremising, means that, setting the SST value to 100, if the SSE is zero the detected independent variables, with their behaviours explicated by the relevant SSEFF, would allow the studied target function to be completely explained. In the opposite case, that is if the SSE value were 100, the detected independent variables, whose SSEFF would all result null, would have no conditioning ability on the target within the field of the surrounding conditions considered for the investigated system, and the results obtained would represent nothing but the echo of the background noise. The Experimental Error is, then, in all trials, the simulation ones included, a fundamental element of comprehension, so that its knowledge and, as far as possible, its control represent the distinctive element of a valid experimentation plan compared to a plan that is not only insufficient but also potentially dangerous for the lack of transparency and reliability of the results, which by that same error will be strongly affected.

At the end of the ’70s, because of the above mentioned reasons, Mosca and Giribone of the ITIM were required to quantify the error affecting the responses of some big iron and steel plant simulation models on which they were carrying out experimentation campaigns with the Operations Researchers of Italimpianti, one of the most important companies at international level in the design and realisation of big production and infrastructure plants. Contingently, the problem went beyond simple scientific speculation and was economically important for the Company, since the masking of results by the Experimental Error, which could affect the responses of the simulators used by the Offer Engineering to detect the most advantageous plant configurations, would have been translated into cost variations, shown in the economic offers, quantified in millions of dollars.

In a parallel way, the first applications of DOE and RSM, carried out on the already mentioned models, had pointed out the need to use run lengths, estimated at the time by Italimpianti at 2,880 minutes for the warm-up and 1,000,000 minutes for the standard run, incompatible with the experimental campaign period (on average 1,000 runs for each campaign, with a machine run time of 50 minutes per run) necessary to verify the design solutions detected by the Engineering.

By studying the two problems together, they advanced the hypothesis that these could represent the two sides of the same medal: the Experimental Error, linked to the dimension of the samples resulting from the extractions from the frequency distributions in the model, could only decrease with the growth, in the simulated time, of that same sample dimension, under the effect of the resulting improvement of the statistical inference, until annulling itself for a simulation time t tending to infinity, that is, sufficiently high, with stabilisation of the background noise at the value linked to the sole stochastic character endogenous to the modelled target reality (each target function having its own characteristic noise).

Therefore, a methodology through which it was possible to follow the evolution, in the simulated time, of the Experimental Error would have allowed, at each run time, knowledge of the amount of the whole noise afflicting the experimental results; given the impossibility of eliminating the intrinsic stochastic character of the real system, once an error level considered adequate for the single investigated target function had been achieved, the run would have been interrupted at the corresponding time and the punctual value of the searched response extrapolated or interpolated at the detection time provided by the design. To give an example, again with reference to the already mentioned simulations of iron and


steel plants, the application of the new methodology brought, besides the knowledge of the MSPE evolution curve in the simulated time, the determination of a run length, calculated on the most unfavoured target function, equal to a fourth of that normally used in the previous simulations according to the precautionary estimations carried out by the same model builders.

The ITIM Methodology

The reference scheme for the detection of the optimal simulation run length can be articulated as follows (Mosca and Giribone, 1986):

1. Choice of a tentative time t, sufficiently wide, for the simulation run length. Merely as an example, for cascade production systems we can think of a t equal to 12-24 months of operation of the real system.
2. Predisposition of the simulation program with:
◦ assignment to each independent variable of the central value of its variability field
◦ replacement, at each following run, of the trigging seeds of the random numbers ruling the casual extractions
◦ replication of the simulation trial n0 times, as many as the central tests provided by the relevant CCD.

After having established N detection moments of the dependent variable values in the simulated time, all equidistant from each other by an identical Δt, so that t1 = t0 + Δt; t2 = t1 + Δt; …, for each of the n0 runs a file is predisposed containing the N detections relevant to each of the target functions to be investigated. This procedure scheme is equivalent, under the conceptual profile, to N runs of duration ti, with i = 1…N, each replicated n0 times.

3. At the N detection moments of duration ti, for each i, we calculate the two statistical quantities called Sum Squares and Mean Squares of the pure Experimental Error through the relations:

SS_PE(t_i) = \sum_{j=1}^{n_0} ( y_j(t_i) - ȳ(t_i) )^2

MS_PE(t_i) = SS_PE(t_i) / (n_0 - 1)

for each 1 ≤ i ≤ N, being:

• SS_PE(t_i) the Sum Squares of the Experimental Error of the n_0 simulations having duration t_i
• MS_PE(t_i) the corresponding Mean Squares
• y_j(t_i) the response of the target function observed in the j-th run of duration t_i
• ȳ(t_i) the average of the responses observed at the same time t_i in the n_0 simulation runs carried out in parallel.

It should be noticed that:

• since the Experimental Error, by its nature, in a complex system simulation model depends on a higher and higher number of independent causes, it is distributed, according to the central limit theorem, as NID(0, σ²);
• complying with Cochran's Theorem, the expected MS_PE value is just the variance σ² of the Pure Experimental Error.

In other words, the Experimental Error for each target function, if the number n0 of central tests is sufficiently high, assumes, complying with the central limit theorem, a Gaussian distribution of which we calculate the average value ȳ(t_i) and the variance σ²(t_i) under the form of MS_PE(t_i), so that the sequence of the MS_PE(t_i) gives a representation of the Experimental Error evolution over the simulated period. Graphical representation of the N values of the MS_PE(t_i) and interpretation by the experi-


Figure 5.

menter of an interpolating curve whose trend is classically that of a knee curve (see Figure 5): for small values of ti, indeed, the number of extractions from the single frequency distributions is still reduced, so the re-sampled extracted data allow only approximate inferences which, as such, differ greatly from run to run, with an obvious effect on the investigated target function which, for the same ti, supplies, in each of the n0 runs, values even extremely misaligned.

With ti growing, the samples become richer in width and, as a result, the target function values at the same instant ti tend, in the n0 runs, to a more marked homogeneity, the dependence of the single MSPE on the type of target function investigated, as well as on the amount of stochastic character in the real system, being still understood. After having detected the different areas of stability which normally exist in an MSPE evolution curve over time, on temporal widths Δt1, Δt2, etc., which the experimenter can evaluate through the application of a Fisher test on the variances, the run stop moment can be assumed according to the Experimental Error level considered acceptable on the final result (at the condition that it complies with the characteristic background noise of the investigated real system), expressed through:

e_i = ±3 · sqrt( σ²(t_i) )

Then, in the following runs, it will be known beforehand that the responses punctually obtained from the simulator shall be interpreted inside a variability field defined through e_i. To exemplify: if, under particular operating conditions, y1(t_i) = 1000 and σ² = 100, the real target function value shall not be 1000 but y1*(t_i) = 1000 ± 3·sqrt(100), that is

970 ≤ y1*(t_i) ≤ 1030

4. In the case of target functions additive in time, such as the production, to maintain the classic knee trend of the MSPE (a trend which can be easily interpreted visually) it is necessary to carry out a normalisation of each response by dividing its value, at each of the N detection moments t_i, by the same time t_i, according to

y_N(t_i) = y(t_i) / t_i

5. In the case of an investigation carried out simultaneously on different target functions, the optimal run length, complying with the remarks of the previous paragraph


4, will result to be that of the target function attaining the stabilisation phase in the longest simulated time.
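The scheme of points 1-5 can be sketched in code. This is a minimal illustration, not the authors' implementation: the toy daily-production model, the seeds, the acceptance threshold and the name `ms_pe` are our assumptions, and the stop rule replaces the Fisher-test stability analysis with a simple first-crossing check.

```python
import random

def ms_pe(responses):
    """Mean Squares of the Pure Experimental Error at one detection
    moment: `responses` are the n0 replicated values y_j(t_i)."""
    n0 = len(responses)
    ybar = sum(responses) / n0
    return sum((y - ybar) ** 2 for y in responses) / (n0 - 1)

# Points 1-2: n0 = 5 replicated runs of a toy simulator, one detection per
# simulated day; each replication uses a different trigging seed.
n0, N = 5, 200
runs = []
for j in range(n0):
    random.seed(100 + j)
    cum, run = 0.0, []
    for day in range(1, N + 1):
        cum += random.gauss(494, 30)   # daily production "extraction"
        run.append(cum / day)          # point 4: normalisation y_N(t_i)
    runs.append(run)

# Point 3: the MS_PE evolution curve over the N detection moments.
curve = [ms_pe([run[i] for run in runs]) for i in range(N)]

# Simplified stop rule: first detection moment at which MS_PE falls
# below the level considered acceptable for this target function.
stop_at = next((i for i, m in enumerate(curve) if m <= 15.0), None)
```

Because the normalized response is an average over t_i days, the curve decays roughly as 1/t_i, reproducing the knee shape discussed above.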
Methodology Application to a Test Simulator

The physical system which will be used, after modelling through a discrete and stochastic simulator, to show the real possibilities offered to the designer/manager of complex systems by the application of the Experiment Theory to the planning of simulation experimentation campaigns consists of a computer-based production line in which 10 different typologies of operating machines operate (see Figure 6); it has been developed in Simul8 Professional by Simul8 Corporation.

The configuration detected for the attainment of a particular production target, 180,000 pieces produced in 380 days, provides in total for the use of 10 operating machine typologies divided as shown in Table 1.

The operating machines do not undergo significant failures during the production phase since they are maintained daily in the moments

Figure 6.


of line-stop. The working times per piece for each machine are expressed in the form of frequency distributions characterised as shown in the table.

The law of arrival of the raw material is a negative exponential with an average of 8 minutes.

There are, on the contrary, no limits to production resulting from the output warehouse dimension.

For the building of the run curve, after having observed that the decisional degrees of freedom are represented by the sole 4 typologies of machines indicated with the letters A, B, C, D, we consider it opportune to assume a positioning of the 4 decisional variables at the central level of the relevant variability ranges, that is:

A = 8 ; B = 5 ; C = 14 ; D = 7

while the remaining typologies result automatically defined by the lack of degrees of freedom deriving from the choices already made.

By fixing at 5 the number n0 of central replications we obtain, should we decide to carry out the following investigations on the screening and the RSM exploiting factorial designs or central composite designs, even the possibility of their re-use to carry out the screening or the so-called Pure Quadratic Curvature test, which allows the evaluation of the eventual curvature in a model of the 1st order, or to not launch the center points in case of re-use of a central composite design, which is used for the search of a 2nd order link among the independent and dependent variables.

As explained in the previous presentation of the theory at the base of the optimal simulation run period, the steps to carry out on the model are the following:

1. fix:
a. the number of tests to carry out: 5
b. the run length: 780 working days of 8 hours each
c. the detection step ∆t: 1 day
d. the total number r of detection time moments: 780
2. provide for the normalisation of the production function, step by step, obtained by dividing the cumulated production at time ti by the number of days ti.

The outputs obtained from the 5 simulation runs, changing at each new run the random number trigging seeds, are transferred into Microsoft EXCEL tables organized as shown hereunder (see Figure 8).
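The EXCEL sheet of Figure 8 tabulates, for each detection day, the five normalized responses, their average and the corresponding MS_PE; for a single detection moment the computation reduces to the following sketch (the five response values are hypothetical, of the kind tabulated in the sheet):

```python
# Five normalized production responses y_j(t_i) at one detection moment:
y = [493.2, 495.1, 492.8, 494.6, 494.3]

ybar = sum(y) / len(y)                       # average response ybar(t_i)
ss_pe = sum((v - ybar) ** 2 for v in y)      # Sum Squares of the error
ms_pe = ss_pe / (len(y) - 1)                 # Mean Squares, n0 - 1 = 4
```

Repeating this for each of the 780 detection days yields the MS_PE column of the sheet and hence the evolution curve of Figure 7.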

Table 1.

Code | Typology | Min N° | Max N° | Freq. distrib. of working period | Charact. stat. parameters
A | Milling machines | 7 | 9 | Uniform | 3-5
B | Multistep grinding machines | 4 | 6 | Normal | μ=3.8; σ=1.5
C | Nitriding machines | 13 | 15 | Normal | μ=4.5; σ=1
D | Dimensional check | 6 | 8 | Normal | μ=6.8; σ=1.5
E | Lathes | 8 | 8 | Triangular | 3-4-5
F | Balancing machines | 6 | 6 | Uniform | 1-2
G | Hardness check | 5 | 5 | Triangular | 1-2-3
H | Washing | 5 | 5 | Fixed | 2.5
I | Packaging | 4 | 4 | Uniform | 1-2
L | Grinding machines | 7 | 7 | Normal | μ=2.4; σ=0.5
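The working-time distributions of Table 1 are exactly the kind of frequency distributions from which the Monte Carlo extractions are drawn; a sketch for a few machine typologies (parameters from the table, in minutes; the dictionary layout and the truncation of the normal at zero are our illustrative choices):

```python
import random

random.seed(7)

# Per-piece working-time samplers for some typologies of Table 1.
samplers = {
    "A (milling)":  lambda: random.uniform(3, 5),
    "B (grinding)": lambda: max(0.0, random.gauss(3.8, 1.5)),  # no negative times
    "E (lathes)":   lambda: random.triangular(3, 5, 4),        # low, high, mode
    "H (washing)":  lambda: 2.5,                               # fixed time
}

# Monte Carlo estimate of the mean working time of each typology.
means = {name: sum(draw() for _ in range(20_000)) / 20_000
         for name, draw in samplers.items()}
```

With enough extractions the sample means settle near the theoretical ones (4 min for A and E, about 3.8 min for B), which is precisely the "sufficiently high number of extractions" condition discussed in the run-length section.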


Clearly, particularized to the case concerned, the MS_PE formula will be structured as:

MS_PE(t_i) = \sum_{j=1}^{5} ( y_j(t_i) - ȳ(t_i) )^2 / 4

where y_j(t_i) represents the 5 simulator responses in the 5 runs at the moments t_i and ȳ(t_i) the average of these responses.

The example shown hereunder describes the necessary steps to calculate the MS_PE(t_i) (see Figure 8). In the MS_PE(t_i) column it is possible to see the values of the Mean Squares of the Pure Experimental Error which, by Cochran's Theorem, represent the best estimator of the variance σ² of the experimental error, distributed as NID(0, σ²). The MS_PE(t_i) evolution is shown in Figure 7.

The MS_PE(t_i) supplies, therefore, at each t_i, the mean squared deviation of the error. The responses under each t_i being in their turn distributed as a Gaussian (ȳ(t_i), σ²), we can deduce that, once the optimal run duration with variance σ² has been selected, the simulator response for a given target function, y*(t0), shall really be read as one of the possible values included in the interval:

ȳ* - 3·sqrt(MS_PE) ≤ y*(t0) ≤ ȳ* + 3·sqrt(MS_PE)

In correspondence with 380 simulated days the MS_PE on the normalized value of the production is of the order of 5·10^-2; then

σ = sqrt(5·10^-2) ≈ 0.22 pieces

and, then,

±3σ ≈ ±0.7 pieces

a value with a negligible impact on a daily average production of about 494 pieces. The graphic exam reveals a good settlement of the curve already starting from 160-170 simulated days, in correspondence with which the MS_PE value is of the order of 9·10^-2; then

σ = sqrt(9·10^-2) = 0.3 pieces

±3σ ≈ ±0.9 pieces

Figure 7.


from which we can deduce that by doubling the simulation run we would obtain only a paltry gain in terms of reduction of the masking of the normalized production data.

In a purely speculative optic we can observe, by opportunely magnifying the scale on the y axis as shown in Figure 9, a slow descending trend of the MS_PE with a settlement, probably definitive, around a value of 9·10^-3, to which corresponds a σ of about 0.1 pieces. The choice to stop the run at 380 days depends exclusively on the opportunity of not having to make additional runs to attain the cumulated production, whose value, said in passing, in all five launches carried out on the central values widely exceeds the required 180,000 pieces (experimental error included).

Methodology Critical Analysis

What is described above gives rise to at least two questions which can raise doubts about the possibility of generalising the methodology (Mosca and Giribone, 1982). They are:

1. Is there the possibility that the random number sets chosen for the n0 runs can in some way condition, as is obvious, the “stories” that the simulator tells, making the trend of the MSPE curve vary and, as a result, differently placing the instant corresponding to the optimal period?
2. Can the auto-correlation which undoubtedly exists among the successive responses, in the ti time, of each of the n0 runs condition the methodology validity?

Figure 8.
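The first question can be probed empirically by repeating the whole set of n0 central tests with k disjoint seed sets and comparing the resulting MS_PE curves; a sketch on the same toy response used earlier (the model, seeds and k = 5 are our illustrative assumptions):

```python
import random

def ms_pe_curve(seed_base, n0=5, days=200):
    """One set of n0 replicated runs of a toy normalized production
    response; returns the MS_PE value at each detection day."""
    runs = []
    for j in range(n0):
        random.seed(seed_base + j)      # a distinct trigging seed per run
        cum, run = 0.0, []
        for day in range(1, days + 1):
            cum += random.gauss(494, 30)
            run.append(cum / day)
        runs.append(run)
    curve = []
    for i in range(days):
        ys = [run[i] for run in runs]
        ybar = sum(ys) / n0
        curve.append(sum((y - ybar) ** 2 for y in ys) / (n0 - 1))
    return curve

# k = 5 seed sets: early MS_PE values scatter widely across the sets,
# late values all become small and superpose, as in Figure 10.
sets = [ms_pe_curve(1000 * k) for k in range(5)]
early = [c[0] for c in sets]
late = [c[-1] for c in sets]
```

The late portions of the k curves coincide within noise, which is the behaviour the chapter reports as evidence of the independence of the stabilisation instant from the generators.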


Figure 9.

In both cases, to show the groundlessness of the questions proposed, instead of going down the road of difficult and almost certainly non-exhaustive theoretical dissertations, we preferred to base ourselves on experimental analyses and on the interpretative simplicity and solidity of their immediateness. Therefore, with reference to the possible dependence of the MSPE evolution curve on the random numbers ruling the extractions from the frequency distributions in the model according to the Monte Carlo technique (the first of the two remarks), we will show, through the application to a real case, that it is sufficient to run k sets of n0 central tests, with k ≥ 5, changing for each set all the trigging seeds of the random number generators ruling the extractions from the frequency distributions in the model and, therefore, all the random numbers used in each simulation run. Such an operation causes the simulator to tell stories which are, at least initially, different from one another. As we can remark in Figure 10, relevant to 5 series of tests for each target function carried out on the already described model used as guide test case for the whole chapter:

• the 5 MSPE evolution curves of each target function tend to stabilize at the same instant tott;
• the MSPE, and then the variances of the Pure Experimental Error, as we can see from the curve trends, are, for low values of ti, strongly conditioned by the effect of the random numbers and give rise to k different Gaussian distributions for each ti. With the simulated time passing, the k Gaussians tend to superpose up to creating a single normal distribution with variance σ². From this assertion follows the confirmation of the independence of tott from the random number generators, since the behaviour differences in the Experimental Error curves exist only in the beginning temporal moments and not in the following ones, in which the k sets of simulation tests generate MSPE curves which can be absolutely superposed.

As concerns, on the contrary, the possible effect of response autocorrelation on the MSPE evolution curve, in output from the model at the successive times ti, we can show that it results


Figure 10.

non influent. The problem is, conceptually, consistent, since each story told by a simulator is such that the value of each of the j target functions under exam at the same instant ti is undoubtedly affected by what occurred at the previous times, from t0 to ti-1, and affects the amount of the responses at the following times, from ti+1 to tr. Just to exemplify, we can observe that an extraordinary event with very low occurrence probability, which in a production system could be an unusual failure of a fundamental machine, or the combined effect of simultaneous failures on several machines with significant superposition of the machine stop periods, not only would strongly condition the story of that precise run, but could also generate a distorting effect on the estimation of the average value and, then, of the punctual values of SSPE and MSPE in each of the following detections. The set-up methodology for the study of the MSPE evolution presupposes, on the contrary, that the observations in each of the r · n0 fictitious simulations, through which we calculate at each time ti the MSPE values, are independent; otherwise the eventual correlation among the responses of the successive blocks would irremediably invalidate the results. In the conceptualization of the methodology we had therefore thought to eliminate such a risk through the choice of a width of the interval Δt between two successive detections of the target function values sufficiently wide as to ensure that only the first “few” events of the sub-i-th run would result correlated with the last events of the sub-(i-1)-th run, so that, by direct consequence, ȳj(ti) and ȳj(ti-1) are affected by an almost negligible autocorrelation. In other terms, the base idea was to damp, until annulling it almost completely, the unavoidable correlation existing among the successive instants of one same run through the horizontal “cut” of the n0 simulations carried out at the temporal instants ti at which the target function detections are carried out. By effect of this procedure it is as if the experimenter carried out n0 simulations with a t1 period, n0 simulations with a t2 period, …, n0 simulations with a tr period.
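One empirical way to check that the detection interval Δt is wide enough, in the spirit of the horizontal cut just described, is to estimate the lag-1 autocorrelation of the successively detected responses of a run; a minimal sketch on a toy correlated response (the AR(1) process and the widths used are our illustrative assumptions):

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a sequence of detections."""
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

def detect(series, dt):
    """Keep one detection every dt steps (widening the interval)."""
    return series[::dt]

# Toy response with strong step-to-step correlation.
random.seed(3)
x, series = 0.0, []
for _ in range(20_000):
    x = 0.95 * x + random.gauss(0, 1)
    series.append(x)

rho_fine = lag1_autocorr(detect(series, 1))   # consecutive steps
rho_wide = lag1_autocorr(detect(series, 50))  # widened Delta t
```

Widening the detection interval makes successive detections nearly uncorrelated, which is the condition the methodology relies on.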


All that stated, the totally experimental check of the validity of this assumption has been carried out through the “blanked test” method, which provides for carrying out n0 · r distinct simulation launches with periods, by blocks of n0, respectively t1, t2, … tr, without intermediate detections and, as such, totally independent from one another in terms of told stories and, then, of punctual valorisation of the j target functions.

Just to make an example, it is possible to observe that in the case of the simulation guide model we organized, as shown in Figure 12, an experimental campaign providing n0 = 5, tr = 375 days, r = 9; the blanked test method involves the run of 9 groups of 5 simulations each, for a total of 45 simulation runs, having durations included between 35 and 375 days, with detection of the target function values and, then, of the MSPE only at the end of each run.

It is easy to verify that the resulting MSPE curve obtained with the above described technique does not show, starting from the conditions of first stabilisation, statistically remarkable differences compared to those obtained with the proposed methodology (see Figure 11).

THE INDEPENDENT VARIABLE SCREENING

Introduction

This is an operation that, in the simulation trial design methodology, has been placed between the building of the MSPE evolution curve, necessary to determine the run time having the best price/performance ratio in connection with the experimenter's result precision requirements, and the subsequent organisation of the experimental plan in the strict sense of the word, as shown in Figure 4. The Design of Experiments literature, in which the independent variable screening appears under the name of effect analysis (Montgomery, 2005; Mosca and Giribone, 1982), gives to this operation two targets, one conceptual and one utilitarian, each of a certain importance:

• Conceptual target → when we face the study of a complex system, an error which can never be committed, since it is so serious as to prejudge the validity of the whole modelling, is that of not taking into consideration one or more independent variables able to

Figure 11.


affect, with their behaviour, at least one of the target functions of interest in the experimenter's study. As a result, in choosing the set of independent variables to be taken into consideration, great attention must be paid to including not only those that, by experience or knowledge of the system, are expected to affect the j target functions, but also those that, for any reason, may be assumed to have some conditioning effect on the dependent variables.

The need to avoid this risk leads, in the organisation phase of the experimentation, to a growth, sometimes a significant one, in the number of independent variables taken into consideration. Since the number of simulation runs required by the subsequent application of the Response Surface Methodology increases at least with an exponential law (designs of the 2^k series), each additional variable, compared to those really incident on the target function, generates a devastating multiplicative effect on the number of runs to be carried out (Myers and Montgomery, 1995). From all the above, it should be clear how a technique such as screening, which allows the experimenter, at the outset of the experimental plan organisation, to know the capability of each of the k independent variables to affect each of the j target functions and thus to build, by comparison among the same variables, a classification by importance, target by target, represents a cognitive tool on the behaviours of the target system of absolute importance for the experimenter and the designer/manager.

• Utilitarian target → the detection of the independent variables with a low effect on a target function allows, as we will see in the guide example and according to the theorisation of the Design of Experiments, to "kill" the variability of those independent variables which result non significant after the screening operation, by assigning them constant values, chosen within their initial variability range, in the following experimentation. The utilitarian effect of this approach on the number of tests to carry out is evident when the total number of tests necessary to carry out a central composite design (a design for the description of the statistical characteristics, for which see the following paragraph 4) is made explicit, which is:

N = 2^k + nc + 2 ⋅ k

being k, as already shown before, the number of independent variables selected by the experimenter for the achievement of the design.
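As a purely illustrative sketch (the helper function and its values are ours, not the chapter's), the test-count formula above, and the halving of the 2^k factorial core produced by every factor screened out, can be written as:

```python
# Illustrative sketch (assumption: helper written for this passage, not part
# of the original study). Central composite design size: N = 2^k + nc + 2*k,
# where 2^k is the factorial core, nc the centre points, 2*k the axial points.

def ccd_runs(k: int, nc: int) -> int:
    """Total number of tests of a central composite design with k factors."""
    return 2 ** k + nc + 2 * k

# Dropping a non significant variable halves the 2^k factorial core:
runs_4_factors = ccd_runs(4, nc=5)  # 16 + 5 + 8 = 29 tests
runs_2_factors = ccd_runs(2, nc=5)  # 4 + 5 + 4 = 13 tests
```

This is why screening pays for itself: each variable removed before the response-surface phase cuts the factorial core of the follow-up design in half.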

Figure 12.


From all the above follows, obviously, the display of what we have defined as the utilitarian target of the screening operation, which can be so summarized: for each of the initially selected independent variables which results non significant at the screening, the number of tests necessary for a complete description of a given target function is halved compared to that initially assumed.

It is well, therefore, to specify that, when we face problems in which it is necessary to study simultaneously the behaviour of several dependent variables, the advantage offered by the "conceptual target" of the screening technique continues to express itself fully, as in the case of single-dependent problems, since, dependent by dependent, we continue to make ever clearer the role that the single independent variables have on it. "The utilitarian target", on the contrary, is often more difficult to achieve, since an independent variable which results not very significant for the j-th target function can, on the contrary, express a remarkable importance on the (j+1)-th dependent variable. If that is how things stand for every independent variable, so that the situation extends to all the k variables, none can be eliminated from the experimental plan, and the number of simulation launches to carry out will be that corresponding to the N tests of the full central composite design (eventually reduced, if too onerous, through the application of a suitable DOE methodology called "factorial fractioning").

Experimental Statistics References

The importance of an independent variable (or "factor", in the DOE terminology) is not an absolute concept, but is relative to the range of values that the variable can assume, within the variability field pre-fixed for it by the experimenter or the system manager, and to the considered target function.

The manpower resource in a not fully computer-based manufacturing system, it can be affirmed, is undoubtedly an independent variable able to condition the production target function. Therefore, in choosing the number of employees to assign to a particular group of machines of that system, the plant manager has identified, over time and with experience, a variability field inside which he manages to achieve the fixed production targets; after a screening on the production target function, the "employee number" independent variable will almost certainly result little or not at all significant, while it could be significant for the "useful" target function. This is why, we repeat, the range and importance concepts are indissolubly connected. To realize it, it is sufficient to place in the simulation model the number of employees in a contiguous range, levelled downwards, and immediately the position of the variable in the relative importance classification will change (Mosca and Giribone, 1983).

A remarkably important second notice concerning the screening is related to the ability of the methodology (whose exhaustive description is in the DOE texts listed in the bibliography) to analyse not only the capacity of the independent variables, taken singly, to affect the target functions, but also the so-called, and already mentioned, interaction effects of the second, third, …, k-th order, that is, the capacity of the independent variables to affect the target functions in combination between two or more of them.

Obviously, for the purposes of the two methodological targets proposed for the screening, the fact that a variable affects a dependent in single or combined action is not important: in one case or in the other, it will result able to affect a target function and, as such, be important and non eliminable. The experience acquired in the study of complex production and service systems, through the construction of more than three hundred models of different realities, allowed the authors to collect some remarks from which derive some empiric rules which, even within the limits of the non-exhaustive generalisation with which it is necessary to regard these typologies of assumptions, can represent a useful warning


for the experimenter who starts the study of the screening methodology:

• Besides all or part of the effects of the first order, generally some effects of the second order can result significant, rarely enough those of the third order, and practically never those from the fourth order on
• Two independent variables A and B giving rise to significant effects of the first order do not necessarily generate a significant interaction effect
• On the contrary, two independent variables giving rise to non significant first order effects have more than once generated significant interaction effects. This concretely means that, within the assumed variability ranges, the two variables are not able, singly, to affect the dependent, while their combined effect can result, in any case, important.

That stated, since, as we have already mentioned before, for an exhaustive treatment of the theoretical experimental statistics assumptions the reader should refer to the DOE literature mentioned in the bibliography, for an adequate comprehension of the screening methodology it is necessary to recall at least some definitions.

After having remembered that in the factorial designs of the 2^k series each independent variable can assume, in the case of quantitative factors, one of the values included between the specified ends of its range, the main effect of an independent variable A is defined as the entity of the variation produced on a given target function between the average of the values assumed by the dependent variable in correspondence of the high level of A and the average of the values calculated in correspondence of the low level of A (Montgomery, 2005).

Said in other terms, the main effect is simply the measure of the capacity of an independent variable to affect a given dependent when the independent goes from the low level of its range to the high one.

Since to the high/low levels of an independent variable, A for example, are linked the low levels, the high levels and the relevant high/low combinations of the other concerned factors, the physical interpretation of a factor (with the sole exception of the case of two sole independent variables) is anything but elementary for the experimenter. What is determinant, in a relative comparison optic, is the knowledge of the sole effect magnitude, since it is this that reveals whether an effect is important and how important it is, in percentage, compared to the other effects. (In the same reading key lies the algebraic sign of the effect, to which, as a result, it is not possible to attribute any information content.)

Only to make an example, in the case of a 2^2 design the 4 experiment responses can be obtained with:

• A1 (at the first level) and B at the two levels B1 and B2 (then A1→B1 and A1→B2)
• A2 (at the second level) and B at the two levels B1 and B2 (then A2→B1 and A2→B2)

For which, the effect of A representing the variation produced between the response average under A2 and the response average under A1, it can be easily understood that its intelligibility cannot go further, for the experimenter, than the "easy" interpretation of its magnitude. This is, in any case, more than sufficient for the purposes assigned to the screening operation and summarized in the two targets enumerated above.

The interaction effect between two or more independent variables, much more complex to define, expresses the ability of two or more independent variables to affect a given dependent through a combined action of the average responses at the simultaneously high, low and mixed levels (for a theoretical deepening see Montgomery, 2005; Box and Draper, 1987).

Under the strictly methodological profile, the knowledge of the EFFECT size allows to make,


case by case, a classification of the relative importance of the independent variables, taken singly or in multiple combinations, in affecting the dependent ones.

Clearly, if the effect of A is 1000, the effect of B is 1 and the AB interaction effect is 0.5, it is possible to affirm that:

• A is undoubtedly more important than B within the field of the carried-out trial, of the analyzed target function and of the variability ranges assigned to the two independent variables
• With as much reasonable certainty it is possible to affirm the negligibility of B in affecting, directly or in combination with A, the target function, for which reason the B independent variable can be transformed into a constant, by assigning to it one of the pre-assigned values of its variability range.

If, on the contrary, the B effect were 10 and the AB interaction were 5, some doubts could arise about the real negligibility of B.

For these situations it is possible to obtain a statistically reliable analysis through a suitable Variance Analysis table built through the concept of "contrast"; the contrast allows, indeed, to obtain the suitable sums and then mean squares through which to build a statistic summary F0 = MSEFF / MSE, to be compared with a tabled Fisher F, which can be found, on the relevant tables, through the chosen α reliability level and the freedom degrees υ1 = 1 (always, for MSEFF) and υ2 (those of the pure experimental error).

The Pure Experimental Error

In order to understand the role of the ANOVA test and the operations which should be carried out, in organising the experimental plan, to achieve it, it is sufficient to recall the basic relation of the whole DOE philosophy:

SST = SSTRAT + SSE

that is, the total statistic variability of all the observations of a given experimental plan can be divided into two main components:

a) SSTRAT (or SSEFF of the 1st, 2nd, …, k-th order), the Sum of Squares of the Effects, through which it is possible to express the experimenter's capacity to understand how much of the total statistic variability can be attributed to the behaviour of the independent variables
b) SSE, the Sum of Squares of the experimental error, which is nothing but the experimenter's level of ignorance in front of a multitude of causes which intervene to condition the experiment (including its eventual interpretation model), without his managing to explain them in a statistically consistent way. In it are joined the casual effects, such as non systematic measure errors, endogenous and/or exogenous interference effects affecting the dependent variables without the experimenter managing either to detect the triggering causes or to measure the results singly. Once SSE has been estimated, the higher SSE is, the less the independent variables manage to explain the behaviour of the analysed dependent variable. When the trial is translated into a meta-model, SSE can be divided into two components, one relevant to the experimental phase, SSPE, and one relevant to the model, SSLOF. From all the above, for as much as concerns the screening methodology, the possibility to carry out an exhaustive investigation which contemplates, besides the pure effect analysis, also the construction of the ANOVA table requires attaining the knowledge of the trial MSE.
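The decomposition above and the F0 summary can be sketched numerically as follows (the sums of squares, the freedom degrees and the tabled F value are illustrative assumptions, not values from the chapter's case study):

```python
# Illustrative sketch (hypothetical values, not from the chapter's guide model).
# SST = SSTRAT + SSE; Fisher summary F0 = MSEFF / MSE.

def f0_summary(ss_eff: float, df_eff: int, ss_e: float, df_e: int) -> float:
    """F0 statistic comparing an effect mean square with the error mean square."""
    ms_eff = ss_eff / df_eff  # mean square of the effect
    ms_e = ss_e / df_e        # mean square of the experimental error
    return ms_eff / ms_e

# One effect (df1 = 1, as in the text) against an error with df2 = 12:
f0 = f0_summary(ss_eff=360.0, df_eff=1, ss_e=480.0, df_e=12)  # 360 / 40 = 9.0
F_TABLED = 4.75  # tabled F(alpha = 0.05; df1 = 1, df2 = 12)
significant = f0 > F_TABLED
```

If F0 exceeds the tabled value for the chosen α, the effect is declared significant; otherwise it can be pooled into the error, as in the screening procedure described above.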


Screening Suitable Experimental Designs

The screening operation can be carried out in different ways, according to the desired precision level and the number of tests that one wants to carry out on the simulation model:

1. Daniel method: it is used in the case of mono-replicated factorial trials. It makes reference to a normal probability diagram showing in the abscissa the factor effects and in the ordinate the Pk cumulative probability values. The effects resulting reasonably aligned on a straight line are non significant effects, while the misaligned ones are significant; the first are, as a result, distributed as a NID(0, σ^2). Then, calculating the SS of these effects and adding them, Daniel obtains the Sum of Squares of the pure error, whose freedom degrees are equal to the number of non significant effects.
2. Replication of the full factorial design: with the usual formulas we can calculate the SST, SSEFF, SSE and the ANOVA. Obviously, for both cases 1 and 2, if reference is made to a factorial design and the effect analysis gives by itself clear indications on the relative positions and the role of the different independent variables, taken singly or in interaction, the same effect analysis could be considered exhaustive for the screening operation purposes, without making reference to the interpretation through the ANOVA.
3. Center points addition: as from theory, these trials, which are replicated in correspondence of the design centre, allow, besides the achievement of the test on the pure curvature, the measure of the experimental pure error SSPE and then of the MSPE necessary to obtain the F0 summaries for the Fisher test.
4. High level fractioning factorial designs: it is advisable to make use of them when the number of concerned independent variables is high, the simulation experimentation costs/times are high, there exists the reasonable certainty that some of the independent variables will result non significant, and the experimenter is willing to pay a contribution, growing with the reduction of the test number, in terms of effect "confusion". This last statement means that the experimenter will not be able to isolate the single effects: the number representing the effect of a particular factor will contain not only that one but also other factors (Montgomery, 2005).

By making use of the minimal aberration and maximal resolution concepts it will be possible to obtain, fractioning by fractioning, the minimal perturbation on the evaluations.

Through a further trial replication with generators which are copies in negative of the first, it will be possible, moreover, to separate the effects considered most significant and then, as far as possible, to isolate, or to confuse with high order interactions, the main effects and the interactions of the second order, which are, generally, those most representative of the system behaviour.

Methodology Application to the Guide Model

For the achievement of the screening operation on the simulation model used as test-case, we remember that the independent variables affecting the production under the designer's and/or the line manager's control are represented by the four machine typologies:

1. A = milling machines range: 7-9 machines
2. B = multistep grinding machines range: 4-6 machines
3. C = nitriding machines range: 13-15 machines
4. D = dimensional control range: 6-8 machines
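Purely as an illustration of the size of the plan discussed next (the factor names and ranges are taken from the list above; the code itself is our sketch, not the chapter's procedure), the 2^4 = 16 low/high level combinations can be enumerated as:

```python
# Illustrative sketch: enumerate the 2^4 = 16 level combinations for the four
# machine typologies A, B, C, D, at the ends of the ranges given in the text.
from itertools import product

ranges = {"A": (7, 9), "B": (4, 6), "C": (13, 15), "D": (6, 8)}  # (low, high)

# Each run assigns every factor either its low level (index 0) or its high
# level (index 1), i.e. the -1/+1 coded levels of the factorial design.
runs = [
    {name: ranges[name][level] for name, level in zip(ranges, levels)}
    for levels in product((0, 1), repeat=len(ranges))
]
assert len(runs) == 16  # the mono-replicated 2^4 factorial plan
```

Each of the 16 dictionaries is one simulation run of the mono-replicated 2^4 factorial plan that Design-Expert proposes in the text.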


The question that must then be asked in this trial management phase is: are all the four machine typologies, classified with the letters A, B, C, D, within the variability ranges planned for them by the decision-maker, really able to generate variation in the production realized by the plant in a particular period of time?

The expected operative answer is no/yes, and in the affirmative case it is necessary to know which typologies really result influent.

Of the four possible investigation methods listed before, given the low number of the concerned independent variables, we consider three of them:

1. Mono-replicated factorial design with the application of the Daniel methodology
2. Factorial design with replication
3. Factorial design with the addition of 5 center points

Let us remember that the factor effect calculation can normally be carried out using the so-called sign table, from which not only the "contrasts" are deduced, but from which it is also possible to easily obtain both the factor effects and their Sums of Squares through:

EFF = 2 ⋅ (contrast) / (n ⋅ 2^k) = (contrast) / (n ⋅ 2^(k−1))

SSEFF = (contrast)^2 / (n ⋅ 2^k)

being n the number of replicated tests to use in the formulas, the target function being expressed, under each experimental level, through the addition of the obtained responses.

In the case of a 3 independent variable factorial design replicated twice, the effect of A and its Sum of Squares result so configured:

A = (a + ab + ac + abc − (1) − b − c − bc) / (2 ⋅ 2^(3−1))

SSA = (a + ab + ac + abc − (1) − b − c − bc)^2 / (2 ⋅ 2^3)

where each letter group represents the sum of the two results obtained from the corresponding replicated experimental responses.

A small letter shows that the corresponding variable entered the trial at the high level of its variability range, while its absence means that the variable entered at its low level. Then:

• a is the value which is obtained by adding the two responses obtained by soliciting the target system with the factor (independent variable) A placed at the high range level and B and C placed at their low levels
• (1) is the value which is obtained with A, B and C all at the lower level
• abc is the value which is obtained with A, B and C all at the higher level

In the case of a 4 variable factorial design, the contrast consists of 16 terms corresponding to the 16 A, B, C, D experimental level combinations, and the relevant calculations, if achieved without the help of a PC (better if equipped with an experimental statistic software), result expensive in terms of necessary time and generate frequent calculation errors.

We make reference, then, in the applicative example, to Design-Expert 6.0.10, the tool of Stat-Ease, Inc.

In the Factorial section we chose the 4 factor "2 level factorial" design with a single replication, for a total of 2^4 = 16 trials.

The levels for the four variables are those previously shown, that is:

7 ≤ A ≤ 9

4 ≤ B ≤ 6

13 ≤ C ≤ 15


6 ≤ D ≤ 8

The software directly proposes the experimental plan, that is, the levels at which the 4 factors must be placed in the 16 simulation runs which shall be carried out.

Each run, obviously, shall have a duration equal to the time selected after the previous research analysis of the optimal run length. The 16 responses are placed in the "Response 1" column (Figure 13).

Then, under "Evaluation", all the 15 first, second, third and fourth order effects are selected. In the "Response" screen, under Effects → View → Effects List, it is possible to see, for each effect, both its value and its Sum of Squares, as well as the effect contribution % on the total and, under View → Half Normal Graph, an easily interpretable graphic representation on Gaussian probability paper showing in the abscissa the factor effects (see Figure 14 and Figure 15). As it is possible to observe in Figure 15, 12 effects out of 15, that is, all the effects excepting B, D and BD (having values respectively of 5047, 18929 and 5064, with contribution percentages of 6.22%, 87.51% and 6.26%), result placed along a straight line.

This means that, as theorised by Daniel (1959), these effects are negligible and distributed as a NID(0, σ^2), while the significant ones lie off the line, having an average ≠ 0.

Now, by adding up the SS of the non significant effects, we are able to calculate the Experimental Error and use it for the construction of an ANOVA table in B, D and BD for a 4 times replicated 2^2 factorial design, whose experimental plan is shown in Figure 16.

Stat-Ease allows to carry out this operation, which is substantially equivalent to the projection of the beginning 4 factor design into a 2 factor (B and D) factorial design replicated 4 times, simply by acting under

Figure 13.


Figure 14.

Figure 15.


Effects → View → Effects List and clicking on the M column in correspondence of each non significant effect, so as to transform the "M" into "e".

Coming back under ANOVA, the relevant variance analysis table is displayed (Figure 17), in which the freedom degrees are:

• for SST → the total number of observations being 16, 16 − 1 = 15
• for the Model SS → consisting of the 3 terms relevant to B, D and BD, each with 1 freedom degree, then 3 freedom degrees in total
• for the SS of the global Error → 15 − 3 = 12

The ANOVA table confirms, then, the significant character of the B, D and BD variables with respect to the production target function, and the non influence of the remaining ones on the

Figure 16.


Figure 17.

same function, for which they shall be, therefore, fixed at any value of their range, particularly at the central one.

The production target function results, therefore, in the analysed production design and in the shown variability ranges, as affected first, and in a preponderant way, by the variable D, the number of machines for the dimensional check, but also, even if in a less important way, by the variable B, the number of multistep grinding machines, and, in the same way, by the combined effect of B and D. It is interesting to note how, in compliance with what was said above, in spite of D having a very remarkable importance, its interaction effect with B is greatly less important in affecting the production.

The second screening method which is analysed is that of the factorial trial replication, which for its activation requires the execution of a further 16 simulation trials, one for each of the factor combinations detected in the previous design (see Figure 18).

After having carried out the additional runs and obtained the relevant responses, it is necessary to input them in Stat-Ease with a procedure identical to that described in the previous case, with the sole difference of specifying, in the first window, 2 replications instead of 1.

After having re-selected all the 15 effects under Evaluation, go on under Response → Effects → Effects List to obtain the effect magnitude reading, which results about superposable to the previous one and, in any case, with such variations as do not modify the importance classification of the effects (the SSEFF are obviously bigger than the previous ones) and the single effect contribution percentages (see Figure 19).

Under ANOVA, it is possible to observe that the trial replication allows to dispose also of the freedom degrees for the Experimental Pure Error evaluation. From the organised ANOVA table of the experimental plan still results the importance, on the production target function, of the independent variables B, D and BD (see Figure 20).

Said in passing, the freedom degrees are now:

• for SST = 32 − 1 = 31
• for SSMODEL = 15
• for SSE = 31 − 15 = 16, which can all be attributed to the experimental Pure Error

By deselecting, under Effects, the non significant effects from effect "M" to error "e", it is possible to obtain a new ANOVA table in B, D and BD, exploiting the 32 carried-out tests.

The table confirms, obviously, the importance of the two variables and of the 3 effects, and proposes a regression meta-model interpreting the production target function, which we will analyze in detail in the following paragraph, and for which also the freedom degrees necessary to carry out the Lack of Fit Fisher test are available (see Figure 21).

The last of the 3 selected methods to carry out the screening operation is, undoubtedly, to be considered highly profitable in the price/performance relationship since, with only 21 points, those of the first design with the addition of 5 central replications (see Figure 22), it obtains:

1. the same effects and Sums of Squares as the Daniel Method


Figure 18.

2. a measure of the Pure Experimental Error
3. the Pure Quadratic Curvature Test, allowing to verify whether the 1st order model is adequate to describe the situation after the demotion of the non significant factors to error, and to dispose of the freedom degrees for the I and II Fisher tests, respectively concerning the validity of the regressive approach and the model lack of fit.

We remember in this concern that the freedom degrees are respectively:


Figure 19.

Figure 20.


Figure 21.

Figure 22.


• SST = 21 − 1 = 20
• SSEFF = 15
• SSPC = 1
• SSERR = 20 − 15 − 1 = 4

for the case of the 4 factor complete design (see Figure 23), while for the two factor design, after screening (see Figure 24):

• SST = 20
• SSEFF = 3
• SSERR = 20 − 3 = 17, of which SSPE = nc − 1 = 5 − 1 = 4 and SSLoF = 17 − 4 = 13

RESPONSE SURFACE CONSTRUCTION AND OPTIMAL DESIGN CHOICE

Theoretical Fundamentals

The concern of the Experimental Statisticians is the need to detect designs allowing to obtain regression models with which to approximate the real response surface (the natural connection among the independent and dependent variables), whose variance is at the same time the smallest possible but also the steadiest along the investigated field (Myers and Montgomery, 1995). The expression of the Prediction Variance Var(ŷ(x)) of the regression model is:

Var(ŷ(x)) = σ^2 ⋅ x^(m)' (X'X)^(−1) x^(m)
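As an illustrative sketch (ours, not the chapter's), for a coded 2^2 design with interaction the model matrix X has orthogonal columns, so X'X is diagonal and the prediction variance above reduces to a simple sum of squares of the model terms:

```python
# Illustrative sketch: prediction variance x'(X'X)^-1 x for a coded 2^2
# factorial design with interaction. Columns of X: intercept, x1, x2, x1*x2.
runs = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
X = [[1, x1, x2, x1 * x2] for x1, x2 in runs]

def col(j):
    return [row[j] for row in X]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Orthogonality: off-diagonal entries of X'X are zero, diagonal entries are
# N = 4, so (X'X)^-1 = I/4 and Var(yhat(x))/sigma^2 = (1 + x1^2 + x2^2 + (x1*x2)^2)/4.
assert all(dot(col(i), col(j)) == 0 for i in range(4) for j in range(4) if i != j)

def scaled_prediction_variance(x1, x2):
    """x'(X'X)^-1 x, i.e. Var(yhat(x))/sigma^2, exploiting orthogonality."""
    x = (1, x1, x2, x1 * x2)
    return sum(v * v for v in x) / 4.0

# At the design centre only the intercept contributes (1/4); at a factorial
# corner all four terms contribute (4/4 = 1).
```

The diagonal X'X is exactly the orthogonal-character condition discussed next, and the equal diagonal entries are what the text calls the optimal variance property.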

Figure 23.


from which it can be immediately deduced that it depends on:

1. The position of the investigated point in the field, through the positional vector x^(m)
2. The experimental design adopted, through the (X'X)^(−1) matrix, X being a particular matrix, expressed in coded variable terms (−1; +1 corresponding to the low/high level), which takes into account the experimental design type adopted and the presence of higher order effects and interactions
3. The regression model with which the real response surface is approximated, through the variance of the error σ^2, a variance containing both the component which can be attributed to the pure experimental error and that which can be attributed to the lack of fit.

The way of facing the problem underwent significant evolutions in the second half of the twentieth century. For the first order models, whose regression equation is expressed through ŷ = Xb, being b = (X'X)^(−1) ⋅ X'y, starting from the consideration that, if it were possible to reduce the variance of each single bi of the regression equation, it would be

Figure 24.


possible, as a result, also to reduce the variance of the whole ŷ model, the lever has been placed on the two concepts of optimal variance and orthogonal character.

We remember, in passing, that a model is at optimal variance when the variances of all the bi (i = 1…k) are all equal among them and equal to the variance of b0 = σ^2/N, N being the number of experimental levels, eventually replicated, appearing in the X matrix; that a I order design is defined orthogonal when X'X is a diagonal matrix; and, still, that an optimal variance design is always orthogonal, while an orthogonal design may not be at optimal variance, since the X matrix (see for example the case of the central test addition) can respect the orthogonality condition among its column vectors, expressed by the product:

xi' ⋅ xj = 0

even if not all the terms contained in X are placed at the value ±1.

For the II order designs, the minimization of the masking rate achieved by the error on the responses supplied by the model used as predictor, which must be the steadiest possible on the whole investigated area, has been placed before the concept of regression equation variance minimization. Box and Hunter developed for this purpose the concept of rotatability, through which they intended to impose on the Scaled Prediction Variance Vx:

Vx = N ⋅ Var(ŷ(x)) / σ^2

the maintenance of its value at a constant level at least on each of the infinite spheres having a radius equal to the distance between a domain point and the design centre. In the obvious impossibility of obtaining a constant Vx on the whole field, Box and Hunter try to detect experimental designs allowing an equal value of Vx for all the field points placed at the same distance from the design centre, as well as trends of the same Vx as uniform as possible. It can be immediately observed that Vx, being sterilized, compared to Var(ŷ(x)), of the σ^2 variance, depends only on the adopted design type, on the investigated experimental domain point and on a penalisation coefficient which makes it grow as the number of carried-out tests increases. It results, then, useful, in the phase of the choice of the experimental design to adopt, for evaluating which is the most suitable design, also in comparison with the previously exposed characteristics and the requirements of the experimenter. Obviously, once the design is chosen, it will be the σ^2 variance that plays a non secondary role in the final model quality. From all the above, it results how the knowledge of the properties shown before can be a valid guide for the choice of the factorial designs, the fundamental class for the adaptation of the target reality to first order models, and of the spherical/cubic central composite designs for the adaptation with second order models, each more convenient in relation to the expected reliability type and the maximal number of available tests. We want to add also that:

• Factorial designs of the 2^k series, or eventually 2^(k−p) fractionary, for which it is possible to obtain the optimal variance and/or the orthogonal character, allow to operate on independent variables acting in a continuous or discrete way within ranges characterised by a quantitative or qualitative high/low level (on/off), having a link with the target function which can be expressed through a first order regression model, defined linear simple or multiple in relation to the presence of one or more independent variables:

ŷ = b̂0 + b̂1 x1 + b̂2 x2 + b̂12 x1 x2

a link which, translated in geometric terms, expresses itself in the space of k dimensions under the form of hyper-planes which, in case of mixed terms or interactions, become of the twisted type. Each xi is normally coded in −1; +1. The number of points necessary for a single design is equal to:

N = 2^k ⋅ n

being k the number of independent variables and n the number of replications, which generally will result ≥ 2, eventually integrated with a suitable number of tests carried out at the design centre for the protection of particular characteristics of the same design.

• Central composite designs, for which it is possible, by suitably acting, to guarantee rotatability. They allow, for two level variables having the same characteristics as those analysed for the factorial designs, to make clear the link among the independents and the dependents, which can be expressed through second order regression models as:

ŷ = b̂0 + b̂1 x1 + b̂2 x2 + b̂12 x1 x2 + b̂11 x1^2 + b̂22 x2^2


with a required test number equal to

N = 2ᵏ + nc + 2·k

being:

• 2ᵏ the factorial core test number
• nc the added centre points, through which it is possible to obtain the experimental error
• 2·k the so called added axial points, necessary to obtain all the second order model b̂ coefficients

◦ The checks on the experimental error normality, which must be distributed as NID(0, σ²), and on the constant character of the σ² variance must always be carried out downstream of the model construction
◦ The measurement of the experimental error in the MSE form, and its scission into its two components MSPE (Pure Error) and MSLOF (Lack of Fit), is indispensable to carry out the two cascade Fisher tests on the regression and to build, as seen, the MSPE evolution curve. A DOE/RSM software, such as the already mentioned Design Expert, makes possible an assisted choice of the whole experimental path and an almost null duration for the execution of an absolutely insignificant computational mass, graphics included.

In starting up the screening operation we wanted to explore 3 different designs:

1. the mono-replicated 2⁴ factorial in A,B,C,D, for which we have carried out 2⁴ simulation tests and which, seeing the screening results, has been projected into a 2² factorial design replicated 4 times (that is, one 2² in B and D, replicated 4 times), as an effect of the constant degradation obtained by placing the two non-influent variables on the response, A and C, at the central values of their variability ranges
2. the 2⁴ factorial, in A,B,C,D, replicated twice which, as for the previous point 1), is projected into a 2² factorial design replicated 8 times
3. the 2⁴ factorial, in A,B,C,D, mono-replicated with the addition of 5 centre points, giving rise to a 4 times replicated 2² factorial design with the addition of 5 centre points

A zero cost fourth design can be added to the above-mentioned ones for a performance comparison:

4. 2² factorial with 5 centre points

Now we want to analyze the quality of the mathematical relation which can be obtained from the four experimental designs from a cost vs advantage viewpoint, that is, the number of tests carried out related to the response "goodness", meaning with this word both the type of check tests available and the response quality in the investigated field in terms of orthogonality, variance optimality and rotatability as well as, finally, the Confidence Interval width on the average response.

1st design: 2⁴ factorial in A,B,C,D, mono-replicated, which after the screening transforms itself into a 2² factorial in B and D replicated 4 times.

The tests carried out on the model have been 16 in total, according to the scheme of Figure 13.

The analysis can be carried out on √(V̂[ŷ(x)]/σ²), that is, on the standard deviation of the predicted response standardized by σ, which shows values included between a minimum of 0,25 and a maximum of 0,50. This is an index of the quality of the selected experimental design and it gives an idea of how, in the experimental field, the error will afflict the answer by amplifying itself from the


centre, where it is minimal, towards the range ends, where it is maximum.

By observing the graphic which can be obtained from Design Expert (see Figure 25) under Evaluation→Graph→View→Standard Error/Contour, we can notice how the iso-error curve tracks are not perfectly circular. This is because, the assumed regression model being of the 1st order with a mixed term x1x2, the rotatability property is, at least partially, lost, so that V̂[ŷ(x)] is a function not only of the distance from the centre but also of the investigation direction. On the contrary, the variance optimality condition and the orthogonality are maintained, as can be confirmed from the examination of the matrix (X'X)⁻¹, which results to be diagonal and from which is obtained Var(b̂0) = Var(b̂i) = Var(b̂ij) = σ²/16 (see Figure 26).

After having carried out the processing on the design (see Response at the Design Expert section) under Response/Anova, we notice from the ANOVA table that:

• the MSE value is 14144,92
• the regressive approach validity test is passed
• it is not possible, the central tests being absent, to have information about the presence of the Pure Quadratic Curvature in the design centre
• it is not possible to carry out the test on the lack of fit, since we do not have the necessary degrees of freedom. The design is then of the SATURATED type.

It results that the regression equation, which is proposed by the software in coded variables under the form

ŷ = 1,820·10⁵ + 5715,62B + 21127,63D + 5740,38BD

can be represented with the twisted surface which can be obtained under Response→Graphs→View→Response→3D, but nothing can be added about the type of the adapted model, for which some doubts could remain.

It is possible, on the contrary, to calculate the standard error on the average response, that is

Figure 25.


the square root of V̂[ŷ(x)], an error which, as Stat-Ease shows under Response→Graphs and View→Standard Error/3D, fluctuates in the investigated field between a minimum of 29,73 and a maximum of 59,46.

Figure 28 displays the confidence interval on the average response calculated through

ŷ(x₀) ± t(α/2; N−p−1) · √[MSE · x₀'(X'X)⁻¹x₀]

Figure 29 displays the confidence interval (see also Figure 30).

2nd design: 2² replicated 4 times with the addition of 5 centre points.
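The quoted extremes of the standard error can be reproduced with a short sketch (ours; the t quantile is taken from standard tables). For the 1st design, N = 16 runs and the model has p = 3 terms besides the intercept; since X'X = 16·I in coded units, x₀'(X'X)⁻¹x₀ reduces to the sum of the squared model terms divided by 16, giving (to rounding) the minimum 29,73 at the centre and the maximum 59,46 at the corners.

```python
import math

# 1st design: 2^2 factorial in B and D replicated 4 times -> N = 16 runs,
# model terms B, D and BD plus the intercept; MSE as quoted in the chapter.
N, MSE = 16, 14144.92
t_crit = 2.179  # t(alpha/2 = 0.025; N - p - 1 = 12 degrees of freedom), from tables

def leverage(b, d):
    # coded +/-1 levels give X'X = N * I, hence x0'(X'X)^-1 x0 = sum(term^2)/N
    terms = (1.0, b, d, b * d)
    return sum(t * t for t in terms) / N

ses = [math.sqrt(MSE * leverage(b, d)) for b, d in ((0.0, 0.0), (1.0, 1.0))]
half_widths = [t_crit * se for se in ses]
print([round(s, 2) for s in ses])          # standard errors at centre / corner
print([round(h, 2) for h in half_widths])  # 95% confidence half-widths
```

The same two lines, multiplied by the tabled t value, give directly the half-width of the confidence interval on the average response at any point of the design space.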

Figure 26.

Figure 27.


Figure 28.

Figure 29.

The tests taken into consideration now total 21, consisting of the 16 of the previous design plus 5 suitably replicated in correspondence of the design centre. We can then notice that:


Figure 30.

• the standardized standard deviation √(V̂[ŷ(x)]/σ²) has a minimum of 0,2182 at the design centre and a maximum of 0,4848 at the extreme points of the factorial design range (see Figure 30)
• as in the case of the 1st design, the iso-error lines show the loss of rotatability
• the analysis of the matrix (X'X)⁻¹ (see Figure 31) shows that orthogonality is maintained but, as an effect of the presence of the central tests, variance optimality is lost, since Var(b̂0) = σ²/21 ≠ Var(b̂i) = Var(b̂ij) = σ²/16
• the MSPE value is 13173,3 while, disposing now of the necessary degrees of freedom, it is possible to calculate also the MSLOF, which is 21856,04, and to carry out the relevant test, showing a non-significant lack of fit of the chosen model
• the central curvature test shows, on the other hand, a modestly significant character, the result of the presence of the x1x2 term, which induces a faintly distortive effect on the prevailing plane trend of ŷ = f(x1, x2), as preannounced by the 1,5 percent contribution of the curvature effect to the effect totality
• the regression equation in coded variables confirms a twisted plane (see Figure 32), now wholly validated by the Lack of Fit and Pure Quadratic Curvature tests, through the equation:

ŷ = 1,820·10⁵ + 5667,75B + 21128D + 5658,87BD

an equation which, from the analysis of its single terms, results extremely near to the previous one

• the standard error on the average response (see Figure 33) has a minimum at the design


Figure 31.

Figure 32.


Figure 33.

Figure 34.

centre of 25,04 and a maximum, in correspondence of the range ends, of 55,65.

The confidence intervals are shown in Figure 34 and their magnifications in Figure 35, Figure 36 and Figure 37, in which the blue line represents the mean response, the red one the upper bound and the yellow one the lower bound.

3rd design: it descends from the 2⁴ replicated twice, originating a 2², in B and D, replicated 8 times.
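The σ-standardized standard deviations quoted for the various designs can all be rebuilt from √(x₀'(X'X)⁻¹x₀) alone, since for every 2² design considered here, with or without centre points, X'X is diagonal. The following sketch (ours) reproduces, to rounding, the quoted minima at the centre and maxima at the corners.

```python
import math

def std_std_dev(n_fact, n_centre, b, d):
    """sqrt(x0'(X'X)^-1 x0) for a 2^2 design with model terms 1, B, D, BD.

    n_fact runs at the +/-1 corners plus n_centre centre runs make X'X
    diagonal: diag(N, n_fact, n_fact, n_fact), with N = n_fact + n_centre."""
    N = n_fact + n_centre
    terms = (1.0, b, d, b * d)
    diag = (N, n_fact, n_fact, n_fact)
    return math.sqrt(sum(t * t / s for t, s in zip(terms, diag)))

# (factorial runs, centre runs) of the four designs discussed in the text
designs = {"1st": (16, 0), "2nd": (16, 5), "3rd": (32, 0), "4th": (4, 5)}
for name, (nf, nc) in designs.items():
    centre = std_std_dev(nf, nc, 0.0, 0.0)  # minimum, at the design centre
    corner = std_std_dev(nf, nc, 1.0, 1.0)  # maximum, at the range ends
    print(name, round(centre, 4), round(corner, 4))
```

Multiplying each value by √MSE (or √MSPE) of the corresponding design returns the absolute standard errors on the average response discussed in the text.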


Figure 35.

Figure 36.


Figure 37.

Figure 38.


The tests taken into consideration total 32: precisely, the 16 of the 1st design, to which we add a further 16 points replicated at the same points of the previous design. We can notice that:

• the standardized standard deviation has a minimum, at the design centre, of 0,176 and a maximum of 0,353 at the factorial range ends
• the loss of rotatability obviously remains, deriving from the presence of the x1x2 mixed term, while the analysis of the (X'X)⁻¹ matrix confirms both the variance optimality, with Var(b̂0) = Var(b̂i) = Var(b̂ij) = σ²/32, and the orthogonality
• the MSPE value is equal to 15543,81
• the MSLOF value, for whose calculation we now have the necessary degrees of freedom, is 29843,81
• the lack of fit test shows the non-significance of the lack of adaptation of the assumed model to the experimental data, while the information about an eventual presence of curvature is totally missing since, the centre points not having been carried out, the relevant test cannot be performed
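The decomposition behind these values can be made concrete with a synthetic illustration (ours, not the chapter's data): on a layout like that of the 2nd design, the residual sum of squares splits exactly into a Pure Error part, computed within replicates, and a Lack of Fit part, computed between replicate means and fitted values, whose mean squares feed the cascade Fisher test.

```python
import random
random.seed(7)

# Synthetic data: 2^2 corners replicated 4 times plus 5 centre runs,
# true response = a twisted plane plus NID(0, sigma^2) noise (illustrative values).
def true_response(b, d):
    return 1.82e5 + 5700 * b + 21100 * d + 5700 * b * d

points = [(-1, -1), (-1, 1), (1, -1), (1, 1)] * 4 + [(0, 0)] * 5
ys = [true_response(b, d) + random.gauss(0, 115) for b, d in points]

terms = lambda b, d: (1.0, b, d, b * d)
diag = (21, 16, 16, 16)  # X'X is diagonal for this design
bhat = [sum(terms(*p)[j] * y for p, y in zip(points, ys)) / diag[j]
        for j in range(4)]
yhat = lambda p: sum(bj * t for bj, t in zip(bhat, terms(*p)))

groups = {}
for p, y in zip(points, ys):
    groups.setdefault(p, []).append(y)

sse = sum((y - yhat(p)) ** 2 for p, y in zip(points, ys))
sspe = sum(sum((y - sum(g) / len(g)) ** 2 for y in g) for g in groups.values())
sslof = sum(len(g) * (sum(g) / len(g) - yhat(p)) ** 2 for p, g in groups.items())

mspe = sspe / (21 - 5)   # N - m distinct points = 16 degrees of freedom
mslof = sslof / (5 - 4)  # m distinct points - p model terms = 1 degree of freedom
print(round(mslof / mspe, 2))  # Fisher statistic for the lack of fit test
```

Because the true surface here is a twisted plane, the fitted model is adequate and the F statistic stays small; replacing `true_response` with a genuinely curved surface makes the lack of fit emerge.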

Figure 39.


• the detected regression equation (see Figure 40) is, again, almost superimposable on the previous ones and has the form

ŷ = 1,820·10⁵ + 5674,75B + 21129,75D + 5666,19BD

• the standard error on the estimated response, expressed as √V̂[ŷ(x)], has a minimum at the design centre equal to 26,02 and a maximum, in correspondence of the range ends, equal to 52,04 (see Figure 41)

4th design: We decided, finally, to test the margin of significance of a design with a very favourable ratio between number of tests and quality of the obtained responses, and we chose a 2² factorial design in B and D, mono-replicated, with 5 replications carried out at the design centre.

The required tests are, in total, 9 and they are, in this case, obtained from those already used for the first three designs and, then, at zero cost.

We can notice that:

• the standardised standard deviation has a minimum of 0,33 at the design centre and a maximum of 0,92 at the extreme points of the factorial design range (see Figure 42)
• as for all the other designs, rotatability fails, even if moderately
• the (X'X)⁻¹ matrix analysis shows the presence of orthogonality but also the loss of variance optimality caused by the presence of the central tests (see Figure 43)
• the MSE value is 13173,3 which, being calculated on the same central tests as the 2nd design, has the same value. On the contrary, the degrees of freedom are missing to study the MSLOF, while the Pure Quadratic Curvature test shows a not very significant character because of the presence of the x1x2 interaction, as well as of the consequent modification generated on the plane configuration of ŷ = f(x1, x2) without mixed terms.
• The impossibility to carry out the Lack of Fit test would leave, now, some doubts

Figure 40.


Figure 41.

Figure 42.

about the opportunity of adapting with the twisted plane rather than with a 2nd order surface.

• The regression equation in coded variables shows, still, b̂i values very near to those of the equations obtained for the previous designs:

ŷ = 1,819·10⁵ + 5743,5B + 21100D + 5755,5BD

a relation representing in a three-dimensional way the usual twisted plane (see Figure 44)
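The near coincidence of the four fitted equations can be checked directly. The sketch below (ours) collects the coefficients quoted in the text for the four designs and computes the relative spread of each one: all stay below 2 percent.

```python
# Coefficients (intercept, B, D, BD) of the four equations quoted in the text.
equations = {
    "1st": (1.820e5, 5715.62, 21127.63, 5740.38),
    "2nd": (1.820e5, 5667.75, 21128.00, 5658.87),
    "3rd": (1.820e5, 5674.75, 21129.75, 5666.19),
    "4th": (1.819e5, 5743.50, 21100.00, 5755.50),
}

spreads = []
for j, name in enumerate(("b0", "bB", "bD", "bBD")):
    vals = [coeffs[j] for coeffs in equations.values()]
    spread = (max(vals) - min(vals)) / max(vals)  # relative spread across designs
    spreads.append(spread)
    print(name, f"{100 * spread:.2f}% spread")
```

The interaction and B coefficients, the smallest in magnitude, show the largest relative spread, which is consistent with the 4th design paying its reduced number of tests in accuracy.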


Figure 43.

Figure 44.

• The standard error on the average response, intended as √V̂[ŷ(x)], shows a minimum at the centre equal to 38,25 and a maximum, corresponding to the range ends, equal to 106,507 (see Figure 45).

The Confidence Interval is about 3 times wider compared to the second design (2² replicated 4 times with 5 centre points), as was logical to expect from the entity of the experimental error. A reduced number of tests is paid for in accuracy on the final result.

Comparison Among the Four Design Alternatives

Stated that all the four analysed designs managed to detect relations, almost superimposable, between the dependent variable and the two independent ones, what results very different from design to design is the global information content at


Figure 45.

the experimenter's disposal to validate the detected regression model. The 1st design, that is the 2² factorial design replicated 4 times, even if it has the variance optimality and orthogonality properties, showing all the b̂i of the equation with minimal variance in compliance with the number of tests used, allows nothing to be said about the order of the adopted model, it not being possible to carry out either the Lack of Fit test or that on the Pure Curvature; some doubts about the real validity of the 1st order model with interaction would then be totally justified.

By maintaining the same experimental approach, that is, that of the factorial design, also in presence of a whole replication of the n0 original tests, then projected after screening into a 2² design replicated 8 times, it is not possible to obtain a full explanation of all the doubts concerning the adherence of the detected ŷ to the real response surface. The 3rd design, indeed, even if it uses 32 simulation tests, being orthogonal and with optimal variance and allowing also the execution of the Lack of Fit test, has the conceptual defect of using as investigation points only the extreme variability range points, for which either you are sure beforehand of a really 1st order link, even if twisted, or, not having at your disposal information inside the extreme limits, you risk considering as correctly adaptable to the real response surface a model which really is not. Under this profile, of undoubted interest are the two designs (2nd and 4th) using replication at the design centre. If it is true, indeed, that the centre points cause the loss of variance optimality, it is also true that orthogonality is safeguarded and that already with nine experimental tests we can obtain the information on the presence of curvature and on the eventual need of additional supplementary tests for a complete analysis of the exposed test case.

The 21 tests of the 2nd design allow, on the contrary, an exhaustive analysis, since they supply also the degrees of freedom to evaluate the eventual lack of adaptation of the first order model to the experimental plan, enriched by the information about the behaviour of the investigated reality in correspondence of the design centre. In this case the reason is explained for which a first order model, obviously modified by the presence of the x1x2


Figure 46.

mixed term, can be considered valid in presence of a central curvature of modest entity, such as just the deformation, compared to the pure flat trend, which is typical of the so called twisted planes. To confirm this assertion it is sufficient to observe the ANOVA table which can be obtained by imagining to fit the reality described through the 2nd design (16 factorial points and 5 central) with a model including the main effects only.

It is possible to deduce from it that the model is now not able to fit the experimental data, and considerably so, seeing the summary average F0 value and, then, the investigated reality (see Figure 47).

It should still be noted how the global error entity of the MSE, enriched by the contribution of the BD effect Mean Square, masks also the curvature, even if modest, really present and correctly expressed by the model of the 2nd design through the effect of the x1x2 term (see Figure 46).

FINAL CONSIDERATIONS

The integration between Simulation Model, Experimental Design and Response Surface Methodology allows the designer, or plant manager, to draw the following operating conclusions:

• of the 4 typologies of operating machines on whose number we could intervene to reach the target production, set to 180.000 pieces, the milling machines (typology A) and the nitriding machines (typology C) have a non-significant incident capacity; hence, if no further considerations linked to the system operation intervene, we can consider it correct to choose for them an intermediate number between the assumed maximum and minimum, that is A = 8 and C = 14 (the eventual additional machine for each typology can act as a reserve in case of accidental and not foreseeable failures, which we have not considered).


Figure 47.

Figure 48.

• of the other two categories of operating machines, the machines for the dimensional check (type D) result to be important, compared to those of the B type, and absolutely prominent for the achievement of the target, which is overcome, with a little safety margin, 182.000 vs 180.000, in the B = 5 and D = 7 configuration (see Figure 49). Opportunity reasons, therefore, can address the designer/manager to change this


Figure 49.

Figure 50.

mix to B = 4 and D = 8, with a production in the period increased up to about 192.000, or to consider the B = 5 and D = 8 configuration, with a production of about 202.500 pieces.

These data, taken from the graphics obtained from the detected regression equation, must still consider the influence of the error term and, then, be evaluated within the Confidence Interval.
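The quoted production figures can be re-derived from the 2nd design's fitted equation, under our assumption (for illustration only) that the coded variables map the design centre to the B = 5, D = 7 mix with unit steps; the values so obtained agree, within the confidence considerations just mentioned, with the figures quoted above.

```python
# 2nd design's fitted equation in coded variables (from the chapter);
# the mapping coded B = B_machines - 5, coded D = D_machines - 7 is our
# assumption, chosen so that the design centre is the B = 5, D = 7 mix.
def y_hat(b_machines, d_machines):
    b, d = b_machines - 5, d_machines - 7
    return 1.820e5 + 5667.75 * b + 21128.0 * d + 5658.87 * b * d

for mix in ((5, 7), (4, 8), (5, 8)):
    print(mix, round(y_hat(*mix)))  # estimated pieces produced in the period
```

The three evaluations fall within a few hundred pieces of the 182.000, 192.000 and 202.500 figures discussed in the text, the residual differences being well inside the confidence interval width.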


We recall, therefore, how, from the analysis carried out to determine the run length, the experimental pure error, put into evidence through the MSPE, gave values such as to make already acceptable the configuration B = 5 and D = 7. The graphics hereunder attest the satisfaction of the normality condition of the error and the constant character of the σ² variance (see Figure 50, Figure 51 and Figure 52).

Figure 51.

Figure 52.


ACKNOWLEDGMENT

The authors wish to thank the PhD student Carlo Caligaris for the generation of some of the response surfaces of this chapter by using the software MatLab v.14.

REFERENCES

For a better understanding of the theoretical requirements of Experiment Theory and Response Surface Methodology, the study of the following books is advised:

Box, G. E. P., & Draper, N. R. (Eds.). (1987). Empirical Model-Building and Response Surfaces. New York, NY: John Wiley & Sons.

Montgomery, D. C. (Ed.). (2005). Design and Analysis of Experiments. New York: John Wiley & Sons.

Mosca, R., & Giribone, P. (1982). Optimal length in o.r. simulation experiments of large scale production system. In M.H. Hamza (Ed.), IASTED International Symposium on Modelling, Identification and Control (pp. 78-82). Calgary, Canada: ACTA Press.

Mosca, R., & Giribone, P. (1982). An interactive code for the design of the o.r. simulation experiment of complex industrial plants. In M.H. Hamza (Ed.), IASTED International Symposium on Applied Modelling and Simulation (pp. 200-203). Calgary, Canada: ACTA Press.

Mosca, R., & Giribone, P. (1982). An application of the interactive code for design of o.r. simulation experiment to a slabbing-mill system. In M.H. Hamza (Ed.), IASTED International Symposium on Applied Modelling and Simulation (pp. 195-199). Calgary, Canada: ACTA Press.

Mosca, R., & Giribone, P. (1983). O.r. multiple-objective simulation: experimental analysis of the independent variables ranges. In M.H. Hamza (Ed.), IASTED International Symposium on Applied Modelling and Simulation (pp. 68-73). Calgary, Canada: ACTA Press.

Mosca, R., & Giribone, P. (1986). FMS: construction of the simulation model and its statistical validation. In M.H. Hamza (Ed.), IASTED International Symposium on Modelling, Identification and Control (pp. 107-113). Calgary, Canada: ACTA Press.

Mosca, R., & Giribone, P. (1986). Flexible manufacturing system: a simulated comparison between discrete and continuous material handling. In M.H. Hamza (Ed.), IASTED International Symposium on Modelling, Identification and Control (pp. 100-106). Calgary, Canada: ACTA Press.

Mosca, R., & Giribone, P. (1988). Evaluation of stochastic discrete event simulators of F.M.S. In P. Breedveld et al. (Eds.), IMACS Transactions on Scientific Computing: Modelling and Simulation of Systems (Vol. 3, pp. 396-402). Switzerland: Baltzer AG.

Mosca, R., & Giribone, P. (1993). Critical analysis of a bottling line using simulation techniques. In M.H. Hamza (Ed.), IASTED International Symposium on Modelling, Identification and Control (pp. 135-140). Calgary, Canada: ACTA Press.

Myers, R. H., & Montgomery, D. C. (Eds.). (1995). Response Surface Methodology. New York: John Wiley & Sons.


Sincich, T. (Ed.). (1994). A Course in Modern Business Statistics. New York: Macmillan College Publishing Company.

For a wider vision of the relationship between simulation and experiment design applied to complex industrial systems it is possible to make reference to the following articles and author memoirs:

Mosca, R., & Giribone, P. (1982). A mathematical method for evaluating the importance of the input variables in simulation experiments. In M.H. Hamza (Ed.), IASTED International Symposium on Modelling, Identification and Control (pp. 54-58). Calgary, Canada: ACTA Press.

KEY TERMS AND DEFINITIONS

Design of Experiments (DOE): refers to the process of planning the experiment so that appropriate data that can be analysed by statistical methods will be collected, resulting in valid and objective conclusions (Montgomery, 2005).

Factorial Experiment: is an experimental strategy in which factors are varied together, instead of one at a time (Montgomery, 2005).

Experimental Error: is the noise that afflicts the experimental results. It arises from variation that is uncontrolled and generally unavoidable. It is distributed as NID(0, σ²) and its unbiased estimator is the MSE, since E(MSE) = σ².

Response Surface Methodology (RSM): is a collection of statistical and mathematical techniques useful for developing, improving and optimizing processes (Myers and Montgomery, 1995).

Regression Analysis: the statistical techniques used to investigate the relationships among a group of variables and to create models able to describe them.

Central Composite Design (CCD): is the best design to obtain second order regression metamodels.

Confidence Interval for a Parameter: is an interval of numbers within which we expect the true value of the population parameter to be contained at a specified confidence level. The endpoints of the interval are computed based on sample information (Sincich, 1994).

Mean Square Pure Error (MSPE): is an intrinsic characteristic of each experiment and also of each simulation model and is strictly connected to the investigated reality, since it is directly dependent on the overall stochasticity by which this reality is affected.

Chapter 7
Efficient Discrete Simulation
of Coded Wireless
Communication Systems
Pedro J. A. Sebastião
Instituto de Telecomunicações, Portugal

Francisco A. B. Cercas
Instituto de Telecomunicações, Portugal

Adolfo V. T. Cartaxo
Instituto de Telecomunicações, Portugal

ABSTRACT
Simulation can be a valuable tool for wireless communication system’s (WCS) designers to assess the
performance of its radio interface. It is common to use the Monte Carlo simulation method (MCSM),
although this is quite time inefficient, especially when it involves forward error correction (FEC) with
very low bit error ratio (BER). New techniques were developed to efficiently evaluate the performance of
the new class of TCH (Tomlinson, Cercas, Hughes) codes in an additive white Gaussian noise (AWGN)
channel, due to their potential range of applications. These techniques were previously applied using a
satellite channel model developed by Lutz with very good results. In this chapter, we present a simula-
tion method, named accelerated simulation method (ASM), that provides a high degree of efficiency
and accuracy, namely for lower BER, where the application of methods like the MCSM is prohibitive,
due to high computational and time requirements. The present work generalizes the application of the
ASM to a WCS modelled as a stochastic discrete channel model, considering a real channel, where
there are several random effects that result in random energy fluctuations of the received symbols. The
performance of the coded WCS is assessed efficiently, with soft-decision (SD) and hard-decision (HD)
decoding. We show that this new method already achieves a time efficiency of two or three orders of
magnitude for SD and HD, considering a BER = 1 × 10⁻⁴, when compared to MCSM. The presented
performance results are compared with the MCSM, to check its accuracy.

DOI: 10.4018/978-1-60566-774-4.ch007

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

EFFICIENT DISCRETE SIMULATION OF CODED WIRELESS COMMUNICATION SYSTEMS

Modern communications are already part of everyone's life. Technologies such as wireless fidelity (Wi-Fi), universal mobile telecommunications system (UMTS) and its developing long term evolution (LTE), and worldwide interoperability for microwave access (WiMAX) are already familiar. For regions of the globe where these technologies are not yet implemented, new radio interface systems are being deployed using satellites, so as to allow the use of common wireless systems anywhere and anytime, that is, a global communications system.

Whatever the radio systems may be, they all have one aspect in common: prior to implementation, several studies must be done to assess the performance of their radio interface. It is common to use the Monte Carlo simulation method (MCSM) to evaluate the system's performance including some or all of its radio interface components, such as scrambling, coding, modulation, filtering, channel effects and all of its counterparts at the receiver. Depending on the required accuracy, this is usually quite a time consuming task, even when high computational systems are used. This situation gets even worse when systems involve forward error correction (FEC) with very low bit error ratios (BER), typically less than 10⁻⁶.

As an alternative to MCSM, there are several techniques described in the literature which can help to shorten this time consuming task. One of the most efficient techniques is the importance sampling (IS) technique; however this has the disadvantage that it is only applicable to some particular cases, and cannot be generalized for a wider range of applications (Jeruchim, Balaban & Shanmugan, 2000, pp. 710-737). An important contribution to solve this problem has been given by Bian, Poplewell & O'Reilly (1994), who optimised some simulation techniques in order to evaluate the performance of coded communication systems at low BER in an additive white Gaussian noise (AWGN) channel. Based on that work, new techniques were developed to evaluate the performance of block codes in an AWGN channel, in a very efficient way (Cercas, Cartaxo & Sebastião, 1999). They were also applied to a satellite channel model developed by Lutz, Cygan, Dippold, Dolainsky & Papke (1991), obtaining very good agreement with the foreseen upper bounds of some FEC schemes' performance (Sebastião, Cercas & Cartaxo, 2002).

In this chapter, we present a new simulation method named accelerated simulation method (ASM), since it can provide a high degree of efficiency when compared with the MCSM, which increases for very low BER, while maintaining very good accuracy.

This method can be applied to communication systems with hard-decision (HD) and soft-decision (SD) decoding, in a more realistic channel model.

In this chapter, the ASM was applied to compute the performance of the new class of Tomlinson, Cercas, Hughes (TCH) codes. These codes are a class of non-linear block codes that were devised for a wide range of applications, including FEC. These codes exhibit good performance and undertake maximum likelihood soft-decision decoding with a very simple decoder structure, using digital signal processing techniques and correlation in the frequency domain. Furthermore, they can be used simultaneously in code division multiple access (CDMA), channel estimation and synchronization of the receiver, due to the very good correlation properties of their code words (Cercas, Tomlinson & Albuquerque, 1993).

The main contribution of this chapter is the generalization of the ASM method to a more realistic channel model. Real channels are far more complex than AWGN ones, as the signal is also affected by other effects such as multipath components with different delays and other interference phenomena that result in random energy fluctuations of the received symbols, that is, the

signal is affected by fading that can be described by a given statistical model. In this chapter, we consider channels with slow fading (the fading is at least constant over the duration of a symbol) (Simon & Alouini, 2000) and non-selective in frequency (all frequency components of the received signal are affected in a similar way). This assumption is commonly used even for modern wideband systems using orthogonal frequency-division multiplexing (OFDM). With these assumptions, this model can be applied to typical personal and mobile communications systems, taking into account, for example, fading for frequency non-selective channels with shadowing, as well as any type of modulation and coding. This process will be exemplified for a wireless communication system (WCS), with a Rayleigh channel and TCH codes.

Following this introductory section, the remaining text is organized as follows: Section "Simulation of Coded Wireless Communication Systems" describes the role of simulation in the evaluation of the performance of coded WCS and introduces the ASM. Section "Discrete Channel Model" describes the discrete channel model (DCM) for a WCS. Section "Accelerated Simulation Method Description" describes the ASM to be used with SD and HD decoding in WCS. Section "Simulation Results" presents some results to validate the proposed simulation method and section "Conclusions" summarises the main conclusions. Seven appendixes add more detail to some subjects presented along the chapter.

SIMULATION OF CODED WIRELESS COMMUNICATION SYSTEMS

A key issue in the development of new communication systems, prior to their implementation, is the assessment of their performance in a given environment where, supposedly, they will operate. In WCS with FEC, the main parameter characterizing performance is the BER. Its knowledge for SD and HD decoding is very important to design communication systems. Modern communication systems are so complex that the evaluation of their performance can only be assessed by simulation and not by analytical formulation. A classic and widely used simulation method to evaluate the performance of a communication system is the MCSM (Jeruchim, Balaban & Shanmugan, 2000). However, this method requires that all information data is processed throughout all blocks of a system, which may result in a long simulation time, especially for low BER values.

In this case, we are interested in the probability of occurrence of rare events, that is, the occurrence of an error in a very large number of transmitted symbols. In the case of MCSM, all transmitted symbols are simulated, using large computational resources during a long time, without any real advantage. To validate the simulation results with acceptable accuracy, it is necessary to wait until a relevant number of these rare events, that is errors, happen. For example, assume that BER = 1 × 10⁻⁶, i.e., on average, we have a single received bit in error for every million transmitted bits. To achieve an acceptable accuracy we need to simulate 10 to 100 bits received in error, so the MCSM must simulate 10 to 100 million transmitted bits, which must be processed through the entire transmission system, composed of a series of different blocks.

In order to mitigate the simulation time needed to obtain the BER of a WCS, especially for lower error ratios, a general simulation method is presented, called ASM. The main objectives of this method are time efficiency and accuracy of the obtained results.

Generally, we can say that the ASM consists of generating a sequence of symbols, with the length necessary for the whole simulation, in which each symbol can be predicted to be, or not, in error after being transmitted through the channel, according to its known statistics. Then, this method will only simulate the symbols, or groups of symbols such

145
Efficient Discrete Simulation of Coded Wireless Communication Systems

Figure 1. WCS model with a DCM: a → CODER → b → DCM → R → DECODER → â

as code words, that can result in received errors, ignoring all others.

Assuming that we are simulating a coded system that uses block codes (this can easily be generalised to convolutional codes), the generated sequence of symbols containing the positions of the expected errors in the channel is divided in blocks with the length of a code word. Since in a coded system we cannot process individual bits but complete received code words at the decoder, this simple procedure allows us to check if the number of expected errors in a given code word exceeds, or not, the error correcting capacity of the code being used. In the ASM, only those code words with a number of erroneous bits exceeding the error correcting capacity of the code will be simulated, since all others will certainly be corrected and therefore do not need to be processed.

For those code words requiring simulation, an information word is generated and coded, according to the rules of the code being used. Then, its bits are changed in accordance with the positions of errors and non errors in the previously generated sequence of symbols.

This processing can be more or less simple according to the decoding method used. For HD decoding it is only needed to invert the bits (assuming that these symbols are binary) for each position of the code word that was indicated by the previously generated error sequence as an erroneous bit. This modified code word is decoded and the resulting estimated information word is compared with the original information word, to obtain the effective number of bits in error.

If SD decoding is used, the simulation time is longer, since all amplitudes for each bit in a code word need to be generated. Depending on the channel statistics, amplitude samples are generated for the bit positions in error or non error. Similarly to HD decoding, this modified code word is decoded and the resulting estimated information word is compared with the original, to obtain the number of bits in error.

The mentioned objectives of efficiency and accuracy in the ASM are achieved as follows. The efficiency of the ASM is due to three factors: only events in error are simulated, which are rare events for low BER; only code words with a number of erroneous bits exceeding the code's error capacity are simulated; and the description of the WCS is concentrated in the DCM, which is a single system block, as described in the next section. The accuracy of the method also depends on the referred DCM, which must be as close as possible to the real environment.

The characterization of the DCM is then crucial for the ASM and is presented in the following section.

DISCRETE CHANNEL MODEL

The DCM used in the ASM includes all blocks of a transmission chain and it accounts for all physical phenomena of a WCS. In Figure 1 we can see the DCM, which is located between the coding and decoding blocks, in the transmitter and receiver, respectively. a denotes a generated information word (message), b is a transmitted code word corresponding to that information word, R contains the demodulated signal and â is an estimated information word.
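The word-screening rule just described is the core of the ASM. A minimal sketch of it for HD decoding (an illustration only, not the authors' implementation; the per-word error flags are assumed to be already drawn from the generated error-position sequence, whose construction is described in Appendix E):

```python
def needs_simulation(error_flags, ecc):
    """A code word only needs to be simulated when the number of
    channel errors exceeds the error-correcting capacity ecc;
    all other words are certainly corrected by the decoder."""
    return sum(error_flags) > ecc

def hd_channel(code_word, error_flags):
    """For HD decoding it is enough to invert the bits flagged as
    erroneous by the generated error sequence (binary symbols)."""
    return [bit ^ flag for bit, flag in zip(code_word, error_flags)]

# Example with a hypothetical code correcting ecc = 1 error per word:
flags = [0, 1, 0, 0, 0, 1, 0]          # two channel errors in this word
if needs_simulation(flags, ecc=1):
    received = hd_channel([1, 0, 1, 1, 0, 0, 1], flags)
    # received = [1, 1, 1, 1, 0, 1, 1]; it is now sent to the decoder
```

Words whose error count does not exceed ecc are skipped entirely, which is precisely where the method's time saving comes from.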


Figure 2. Discrete channel model used in the ASM: the code word b is PSK-modulated (MOD.) and band-pass filtered (BPF) into X(t); the signal path and the interfering signals I1(t), …, IL(t) each cross a propagation channel, producing Z0(t), …, ZL(t), which are summed into Y(t), the receiver input; thermal noise N(t) is then added and the result band-pass filtered, yielding R(t), which the PSK demodulator (DEMOD.) maps into the vector R

The main objective of the DCM characterization in a WCS is to allow the study and analysis of digital communication systems, namely its coding blocks, interleavers and all the components that help to reduce the number of received information errors. With the DCM it is possible to get the error distribution in the channel for a given propagation model, the amplitude distribution of the received signal including antenna characteristics, thermal noise (modelled as AWGN), interfering sources and the type of modulation used.

Figure 2 represents a generic DCM including all these factors, in which the signal is represented by its low-pass equivalent.

As can be observed in Figure 2, the blocks of the DCM are: modulator (MOD.), demodulator (DEMOD.), band-pass filters (BPF) and a general representation of the propagation channel, which may include a combination of different random effects to better fit the random physical phenomena involved, including a three-dimensional description of the signal.

The meaning of the variables shown in the DCM is as follows: sm(t) is the mth symbol at instant t, corresponding to the modulated vector b (set of information bits or information word), X(t) is a sample of the transmitted signal after band-pass filtering, IL(t) is the Lth interfering signal, ZL(t) is the Lth received signal affected by the propagation channel, Y(t) is a sample of the random signal at the input of the receiver, including signal plus interference, N(t) is a sample of thermal noise with Gaussian distribution and R(t) is a sample of the actual signal received by the demodulator.

Interfering signals can be originated in systems operating in the same frequency band (co-channel interference) (Yacoub, 2000), in which case the use of high order filters helps to mitigate it, or in adjacent frequency bands (adjacent-channel interference) (Ha, 1990).
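Numerically, the DCM behaves as a single block mapping each transmitted symbol to a noisy received amplitude. A rough sketch of this behaviour for BPSK (an assumption-laden illustration, not the model derived in the chapter's appendices: the 0 → +1 / 1 → −1 mapping, the single folded attenuation factor af and the noise parameterisation are all choices made here):

```python
import math
import random

def dcm_sample(bit, Es=1.0, N0=0.1, af=1.0):
    """One received amplitude: antipodal level +/- sqrt(Es), scaled by a
    combined attenuation/fading factor af, plus Gaussian noise of
    variance N0/2 (thermal noise modelled as AWGN)."""
    level = math.sqrt(Es) if bit == 0 else -math.sqrt(Es)
    return level * af + random.gauss(0.0, math.sqrt(N0 / 2.0))

random.seed(0)
r = dcm_sample(1, Es=4.0, af=0.5)
decision = 0 if r > 0.0 else 1   # hard decision against threshold 0
```

Feeding such samples to a decoder, instead of running the whole modulator/channel/demodulator chain sample by sample, is exactly the role the DCM plays in the ASM.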


Figure 3. General model for the propagation channel of a WCS including the effects of transmitting and receiving antennas. With transmitting antenna gain ge(Θ,Φ), the received signal is
Z0(t) = Σn=0..N Apn·Asn·Afn·W(t − ln/c)·exp(−jφn)·gr(Θn,Φn),
and each interfering signal Il(t) contributes, in a similar way,
Zl(t) = Σk=0..K Bpk·Bsk·Bfk·Il(t − lk/c)·exp(−jφk)·gr(Θk,Φk)

Figure 3 represents the propagation channel, including the effects of transmitting and receiving antennas.

The propagation channel accounts for all the physical random phenomena happening in the physical medium, where electromagnetic waves travel from the emitter to the receiver. It is common to assume the mobile radio channel as linear, but time-variant, due to the possible relative random movement of the receiver terminal to the emitter, or even of surrounding objects in a WCS. In the propagation channel the transmitted signal can reach the receiver directly, indirectly or in both ways. Indirectly received signals are due to reflection, diffraction or dispersion of the transmitted signal in surrounding objects such as buildings, trees or even the orography. The received signal is then a summation of all these contributions, each one having its own attenuation, delay and phase shift. Furthermore, each replica of the received signal can also be affected by the movement of a terminal (emitter or receiver) or the movement of surrounding objects, which translates to a Doppler shift.

As shown in Figure 3, we define the parameters that affect the transmitted signal in the propagation channel as the random variables (RV) Ap, As and Af, that are the free space attenuation, the attenuation due to shadowing and the fading, respectively. In a similar way, interfering signals are affected by parameters Bp, Bs and Bf. The first two types of attenuation exhibit a large scale variation in the time domain, while the last one exhibits a small scale variation (Rappaport, 2002, pp. 205-210).

Free space attenuation is assumed deterministic and there are many empirical models used to evaluate it in WCS environments, e.g., (Walfish & Bertoni, 1988; Hata, 1990; Rappaport, 2002, pp. 145-167; Ikegami, Yoshida, Takeuchi & Umehira, 1984; Bertoni, Honcharenko, Macel & Xia, 1994; Parsons, 1992, pp. 90-93; Lee, 1989, Chapter 3; ITU-R, P.341-5, 1999; Feuerstein, Blackard, Rappaport, Seidel & Xia, 1994; Alexander, 1982; Hashemi, 1993b; Molkdar, 1991), amongst others. In appendix A, we present an expression to characterize the free space attenuation.

Attenuation due to shadowing has a stochastic behaviour since, for a fixed transmission distance and carrier frequency, the received signal has a random behaviour due to the movement of surrounding objects and other effects (Suzuki, 1977; Hashemi, 1979; Rappaport, Seidel & Takamizawa, 1991; Yegani & McGillem, 1991; Hashemi, 1993a). In appendix B we present two expressions to characterize the probability density function (PDF) of power and amplitude for this phenomenon.

Attenuation due to fading derives from the existence of multipath components and it also has a stochastic behaviour; however, its variation in the time domain is fast when compared to the attenuation due to shadowing. Variables ln and φn define the length and phase of the nth received multipath component, both for signal and interference. Variables Θ and Φ define azimuth and elevation angles for the transmitting and receiving antennas. The speed of light in vacuum is represented, as usual, by c. Appendix C presents some insight on this subject, together with the derivation of analytical expressions for the propagation channel.

The three types of attenuation just mentioned act on the signal in the propagation channel and are responsible for its random behaviour.

The modulator transforms the baseband signal in a bandpass signal centred on a carrier that can easily propagate in the mentioned propagation channel of the WCS. Only digital modulation is considered, since it presents considerable advantages over analogue modulation, such as improved immunity to noise and fading, ease of multiplexing input data, improved security and the possibility to implement FEC and digital signal processing (DSP) techniques for most of the process. DSP techniques can actually implement the overall process in the emitter and receiver, including modulator, demodulator, filtering and remaining blocks, by the simple use of software.

The most popular modulation techniques used in WCS are linear. In linear modulation the transmitted signal sm(t) changes linearly with the modulating signal. These techniques are spectrally efficient, which is an important property, since the electromagnetic spectrum is already saturated.

A very popular linear digital modulation technique used in WCS is phase shift keying (PSK). In this technique the amplitude remains constant but the phase changes according to the transmitted symbol. Figure 4 shows the basic operations of a PSK modulator and corresponding signals.

Figure 4. PSK modulator and signals: the baseband pulse train b(t) is multiplied by the carrier cos(2πfc·t + φm) to produce sm(t)

The modulated signal at the output of this modulator can be written as:

sm(t) = Re{ b(t) · exp(j2πfc·t) }   (1)

where

b(t) = Σn=−∞..+∞ bn · rect( (t − n·Ts) / Ts )   (2)

and

bn = √(2Es/Ts) · exp(jφm),  φm = 2π(m−1)/M,  m = 1, …, M   (3)

Es is the symbol energy, Ts is the symbol time and φm is the phase associated with symbol m. We also assume that during a symbol interval Ts the number of cycles of the carrier is an integer.

In the receiver, the demodulator is used to recover the baseband signal sent on a carrier through the propagation channel, using the inverse operation. Assuming that the PSK demodulator is coherent and synchronized, then it has information about the carrier frequency and phase of the received signal. Figure 5 shows the schematic of a PSK demodulator using base functions, correlators and a decision circuit (Proakis, 2001, pp. 232-236).

Figure 5. PSK demodulator: the filtered received signal R(t) is correlated with the base functions √(2/Ts)·cos(2πfc·t + φm) and −√(2/Ts)·sin(2πfc·t + φm), integrated over Ts into RI and RQ, and a decision with threshold lD yields the estimated bits b̂n

Each correlator in the PSK demodulator compares the received signal R(t) with each base function f1(t) and f2(t) (Proakis, 2001, p. 172), by multiplication followed by integration over a symbol period Ts, so as to recover the baseband transmitted symbol. After detection, the signal amplitude of each symbol is compared with a threshold value lD so as to estimate the set of information bits b̂n of vector b. This type of decision is known as hard decision (HD). Systems that have a SD decoder following the demodulator don't need to make hard decisions, since the complex signal at the output of the demodulator can be directly fed to it.

Equation (4) characterizes the signal at the output of the DCM. It is deduced in appendix D and assumes PSK modulation, emitting and receiving antenna gains, AWGN noise, interfering signals with a given distribution and the three types of attenuation mentioned: free space attenuation, shadowing and fading.

R = ∫0..Ts { ge(Θ,Φ) · Σn=0..N Re{ Apn·Asn·Afn·b(t)·exp( j2πfc(t − ln/c) ) · exp(−jφn) · exp(j2π·Dn·t) } · gr(Θn,Φn) +
    + Σl=1..L Σk=0..K { Bpk·Bsk·Bfk·Il(t − lk/c) · exp(j2π·Dk·t) · gr(Θk,Φk) } + N(t) } ·
    · [ √(2/Ts)·cos(2πfc·t) − √(2/Ts)·sin(2πfc·t) ] dt   (4)

As shown in appendix D, this can be simplified so as to write the vector of the received signal amplitudes as:

R = √Es · As · Af · [Ra Rb] + N   (5)

where Ra and Rb depend on the base functions used in the correlators. Appendix D also presents a simplified and useful expression for a typical case using binary phase shift keying (BPSK) modulation.

ACCELERATED SIMULATION METHOD DESCRIPTION

In the previous sections the role of simulation to obtain the performance of coded WCS was shown, as well as how to characterize a general DCM, which can be adapted and simplified for a particular case study, to be efficiently used in simulation.

Based on these general principles, it is now possible to present a very efficient algorithm for assessing the performance of any coding scheme in any real environment.

Figure 6. Algorithm for implementing the ASM method in a WCS, applied in the evaluation of the performance of block codes for both HD or SD decoding

The algorithm for implementing the ASM method, as shown in the flowchart of Figure 6, can be summarized as follows:

1) Define the number of erroneous words to be processed for each Eb/N0 value, within the considered range. As in the Monte Carlo method, this is related to the required precision and defines the end of the simulation cycle.
2) Generate the sequence of intervals between consecutive errors L1, L2, L3, …, according to the method described in Appendix E, so as to define the position of erroneous symbols in the channel for the coding, modulation and channel conditions desired.
3) If the block of n symbols corresponding to the code word being processed contains more errors e than the error-correcting capacity of the code ecc, i.e., e > ecc, then a new information word of length k is generated, based on a uniform distribution. This word is coded following the rules of the considered code, resulting in a code word of length n, which is then modulated.
4) If SD decoding is used, the generated code word is replaced by a set of n amplitude samples, Rne or Re, according to the information provided by the L values, noise and fading distributions. The expressions to obtain those amplitude samples are presented in Appendix F.
5) If HD decoding is used, the generated set of n coded and modulated symbols (sometimes also referred to as a code word, for simplicity) has its amplitudes inverted, or not, according to the information of the corresponding L stream. This amplitude inversion corresponds to exchanging "-1" with "+1" and vice-versa.
6) In both cases, the resulting code word is then sent to the decoding block.
7) The output of the decoding block estimates the transmitted information word, which is compared with the generated one. If they differ, the counter for erroneous words is incremented, as well as the counter for erroneous bits, depending on the number of different bits detected. The counter for the total number of information words (or bits) processed is also incremented.
8) This process is repeated until the desired number of erroneous information words, or bits, is reached.

This algorithm is suitable for evaluating the performance of different families of error correcting codes in any desired environment, since all parameters can be specified.

The estimated bit error ratio, BER̂, is then evaluated by:

BER̂ = NIBe / NIB_TOTAL   (6)

where NIBe is the number of information bits in error. The total number of information bits, NIB_TOTAL, is the number of information bits in each word of length k multiplied by the total number of coded words, NCW, and can be expressed as:

NIB_TOTAL = k × NCW = NIBne + NIBe   (7)

where NIBne is the number of information bits without errors, found within information words without errors (IWne) or within information words with errors (IWe):

NIBne = NIBne(IWne) + NIBne(IWe)   (8)

The total number of code words NCW can also be expressed as:

NCW = NCW_{e=0} + NCW_{0<e≤ecc} + NCW_{e>ecc}   (9)
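The eight steps above can be condensed into a loop skeleton for the HD case (an illustrative sketch: `next_error_flags`, `encode`, `decode` and `random_info_word` stand for the error-interval generator of Appendix E, the encoder, the decoder and the uniform message generator, and are hypothetical callables supplied by the user):

```python
def asm_ber_hd(next_error_flags, encode, decode, random_info_word,
               k, ecc, target_error_words=1000):
    """Steps 1-8 for HD decoding: only code words whose channel error
    count exceeds ecc are encoded, corrupted and decoded; all the
    others are counted as error-free (certainly corrected)."""
    err_words = err_bits = total_words = 0
    while err_words < target_error_words:          # steps 1 and 8
        flags = next_error_flags()                 # step 2: one word of flags
        total_words += 1
        if sum(flags) <= ecc:                      # step 3: surely corrected
            continue
        info = random_info_word(k)                 # step 3: new message
        word = encode(info)
        rx = [b ^ f for b, f in zip(word, flags)]  # step 5: invert flagged bits
        est = decode(rx)                           # steps 6 and 7
        diff = sum(a != b for a, b in zip(info, est))
        if diff:
            err_words += 1
            err_bits += diff
    return err_bits / (k * total_words)            # equation (6)
```

With a real encoder/decoder pair this skeleton reproduces equation (6); the SD variant replaces the bit inversion by the Re/Rne amplitude generation of Appendix F.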


where NCW_{e=0} is the number of code words without errors, NCW_{0<e≤ecc} is the number of code words with a number of errors equal to or smaller than the error-correcting capacity of the code, and NCW_{e>ecc} is the number of code words with a number of errors exceeding the error-correcting capacity of the code. This allows us to define the processing advantage (PA), that is, the ratio between the time required to accomplish the simulation using the MCSM and using the ASM, which can have a significant impact on the simulation time and computational resources needed:

PA = NCW / NCW_{e>ecc}   (10)

As already mentioned, this processing advantage increases for high signal-to-noise ratios or very low fading conditions, allowing us to easily evaluate performance results where the application of other methods, like Monte Carlo, would be prohibitive.

As an illustrative example, we consider BPSK modulation, so the binary symbols "0" and "1" will be replaced by the corresponding amplitudes "+1" and "-1", respectively. Figure 7 and Figure 8 depict the ASM algorithms used to evaluate the performance of a given coded system with SD and HD decoding, respectively. Here we assume that a block code of length n with k information bits and an error-correcting capacity of ecc bits is used, so all the basic processing is done in blocks of length n. The upper stream of zeros and ones, divided in blocks of length n of a code word, is the generated stream containing the errors' occurrence positions, according to the method described in Appendix E. Therefore, the presence of a "0" means that the corresponding symbol in the channel was not mistaken by the channel effects considered (noise, fading, etc.) and the presence of a "1" means that the corresponding symbol in the channel is wrong. The distance values L between two consecutive ones were generated by that method, using the appropriate random variables.

Figure 7. ASM algorithm to evaluate the performance of a TCH code with SD decoding

In Figure 7 we can also see a major block named "Re and Rne generator". Its function is to generate the simulated amplitude samples for a coded and modulated block of n symbols in the channel, according to the procedure explained in Appendix F. As can be seen, it generates an Re or Rne sample, depending on whether the corresponding symbol position in the code word being processed was in error ("1" in the upper stream) or not ("0" in the upper stream), respectively. As shown, the generation of these samples is based on the generation of a coded and modulated block of n symbols affected by the considered levels of noise and fading, as well as the previous information.

It is important to note that this procedure is only used for soft-decision decoding. When HD decoding is used, as shown in Figure 8, the process is a lot simpler and quicker. In this case, the lower generated symbol stream will have its symbols inverted whenever they correspond to a "1" in the upper stream, and remain unchanged otherwise, prior to being submitted to the decoding block. It is also important to note that this block is only activated in the simulation chain whenever the number of errors in a given code word exceeds the error-correcting capacity of the code. Since for high signal-to-noise ratios most of the code words will not be in error, this simulation method can achieve a significant processing time advantage relative to the MCSM, without compromising its accuracy.

In Appendix G we derive the probability of error of the channel and the cumulative probability density function (CPDF) for error and non error amplitudes, applied to the case of Rayleigh fading distribution, additive white Gaussian noise and BPSK modulation.

SIMULATION RESULTS

In this section, we illustrate the proposed simulation method for estimating the performance of the TCH family of codes. The obtained results are compared with the MCSM. BPSK modulation is assumed, as well as a non-selective Rayleigh channel with different fading levels. The simulation time was measured and compared in these conditions, for both HD and SD decoding and for several values of BER.

Although different TCH codes have been tested with different channel distributions, including Rice (1958), Nakagami-m (Yacoub, 1999), Lognormal (Bullington, 1977; Braun & Dersch, 1991), Suzuki (which includes both Rice and Lognormal distributions) (Suzuki, 1977) and Weibull (which is frequently used for radio communication channels) (Babich & Lombardi, 2000; Yegani & McGillem, 1991), the performance of this simulation method is illustrated using a simple TCH(16,6,2) block code in a Rayleigh environment.

The results were compared for several fading levels, ranging from a less severe one, e.g., a rural environment, with an average fading of 0dB, to a more severe one, e.g., urban, with an average fading of -20dB. The BER values considered range from 1×10⁻³ to 1×10⁻⁹, since these values are used for most practical applications such as voice, video or data (Tanenbaum, 1998). Each value plotted in the graphs was determined considering 1000

Figure 8. ASM algorithm to evaluate the performance of a TCH code with HD decoding


Figure 9. Performance of a TCH(16,6,2) code in a Rayleigh channel, with average fading of 0dB, using
different simulation methods for both SD and HD decoding

words in error, which guarantees results with high accuracy.

Figure 9 plots the performance of the TCH(16,6,2) code with an average fading of 0dB (rural environment), as a function of the information bit signal to noise ratio Eb/N0. As we can see, for SD decoding the two methods give similar performance results. For HD decoding the results are also similar, with a small deviation from the MCSM. As we should expect, the advantage of SD decoding relative to HD decoding is notable in this type of channel relative to a simple AWGN channel. For BER ≈ 1×10⁻³, 1×10⁻⁶ and 3×10⁻⁸, the coding gain obtained by SD decoding in this Rayleigh channel is G ≈ 6dB, 10dB and 12dB, respectively, while in an AWGN channel the coding gain was G ≈ 2dB, 2.2dB and 2.5dB (Sebastião, 1998).

Figure 10. Performance of a TCH(16,6,2) code in a Rayleigh channel, with average fading of -20dB, using different simulation methods for both SD and HD decoding

Figure 10 plots results for the same code in similar conditions, the only difference being that the average fading is now -20dB (urban environment). Although the range of BER values is now smaller, we can verify that we can draw exactly the same conclusions as in Figure 9, if we just make a shift in the x axis by 20dB. This illustrates the influence of the increased fading.

Figure 11. Comparison of the simulation times needed by different methods to evaluate the performance of a TCH(16,6,2) code in a Rayleigh channel, average fading of 0dB, for both SD and HD decoding

Figure 11 shows a comparison of the simulation times needed by each method, for the simulations shown in Figure 9, i.e., for an average fading of 0dB.

Generally, we can observe that the PA of this method is inversely proportional to the BER, as previously predicted. That advantage is small for high values of BER; however, it is for low values of BER that the ASM method is most useful.

The time advantage is significantly higher when HD decoding is used, since there is no need to generate the Re and Rne values. In this case, the time advantage for BER ≈ 1×10⁻⁴ is 1568 relative to the MCSM, while for SD decoding it is 40. This time advantage further increases for lower BER values. For example, for BER ≤ 1×10⁻⁵ it is greater than 7000 and 350, for HD and SD decoding, respectively.

CONCLUSION

The ASM presented in this chapter provides an efficient way to evaluate the performance of wireless radio communication systems. It is clearly more efficient than the MCSM. In this chapter, the ASM was used to evaluate the performance of a coded system in a given environment, for which the modulation, noise and channel statistics were fully specified, including AWGN noise, fading, shadowing and interference.

The application of this method is based on the DCM of the system, which contains the description of all relevant parameters of the WCS, including modulation, coding and channel statistics reflecting the physical phenomena to model.

This method presents a significant time efficiency, also demonstrated by simulation results, that increases for good channel conditions and high signal-to-noise ratios, especially for very low BER values, where the application of the MCSM gets nearly prohibitive and requires huge computational resources.

Both the ASM and the MCSM were used to evaluate the performance of a TCH(16,6,2) code, for SD and HD decoding receivers, using BPSK modulation, AWGN noise and Rayleigh fading, although other codes and channel models have previously been tested with similar results. This new method achieves a time efficiency of two or three orders of magnitude for SD and HD, respectively, considering a BER = 1×10⁻⁴, when compared to the MCSM.
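The processing advantage of equation (10) can also be anticipated analytically: if channel errors were independent with symbol error probability p, the fraction of code words that really need simulation is a binomial tail, and a rough PA estimate is its reciprocal (a sketch under an independence assumption that a fading channel only approximates):

```python
from math import comb

def expected_pa(n, ecc, p):
    """PA ~ NCW / NCW_{e>ecc}: reciprocal of the probability that a
    code word of n symbols contains more than ecc channel errors."""
    p_exceed = 1.0 - sum(comb(n, e) * p**e * (1.0 - p)**(n - e)
                         for e in range(ecc + 1))
    return 1.0 / p_exceed

# TCH(16,6,2): n = 16 symbols per word, ecc = 2
pa_good_channel = expected_pa(16, 2, 1e-3)   # very large PA at low error ratios
pa_bad_channel = expected_pa(16, 2, 1e-1)    # modest PA when errors are frequent
```

This mirrors the observed behaviour: the better the channel, the fewer words exceed the correcting capacity and the larger the gain over the MCSM.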

Simulation results also showed that both methods Bian, Y., Poplewell, A., & O’Reilly, J. J. (1994).
present similar performance results. Novel simulation technique for assessing coding
system performance. IEE Electronics Letters,
30(23), 1920–1921. doi:10.1049/el:19941297
REFERENCES
Braun, W. R., & Dersch, U. (1991). A physical
Alexander, S. E. (1982). Radio propagation within
mobile radio channel model. IEEE Transac-
buildings at 900 MHz. IEE Electronics Letters,
tions on Vehicular Technology, 40(2), 472–482.
18(21), 913–914. doi:10.1049/el:19820622
doi:10.1109/25.289429
Aulin, T. (1979). A modified model for fading
Bullington, K. (1977). Radio propagation for
signal at a mobile radio channel. IEEE Transac-
vehicular communications. IEEE Transactions
tions on Vehicular Technology, 28(3), 182–203.
on Vehicular Technology, 26(4), 295–308.
doi:10.1109/T-VT.1979.23789
doi:10.1109/T-VT.1977.23698
Babich, F., & Lombardi, G. (2000). Statistical anal-
Cercas, F. A., Tomlinson, M., & Albuquerque, A.
ysis and characterization of the indoor propagation
A. (1993). TCH: A new family of cyclic codes
channel. IEEE Transactions on Communications,
length 2m. International Symposium on Informa-
48(3), 455–464. doi:10.1109/26.837048
tion Theory, IEEE Proceedings, (pp. 198-198).
Bertoni, H. L., Honcharenko, W., Macel, L. R., &
Xia, H. H. (1994). UHF propagation prediction for
wireless personal communications. IEEE Proceed-
ings, 82(9), 1333–1359. doi:10.1109/5.317081

157
Efficient Discrete Simulation of Coded Wireless Communication Systems

Cercas, F. A. B., Cartaxo, A. V. T., & Sebastião, Hata, M. (1990). Empirical formula for propa-
P. J. A. (1999). Performance of TCH codes with gation loss in land mobile radio services. IEEE
independent and burst errors using efficient tech- Transactions on Vehicular Technology, 29(3),
niques. 50th IEEE Vehicular Technology Confer- 317–325. doi:10.1109/T-VT.1980.23859
ence, Amsterdam, Netherlands, (VTC99-Fall),
Haykin, S. (2001). Communication systems (4th
(pp. 2536-2540).
Ed.). Chichester, UK: John Wiley & Sons, Inc.
Clarke, R. H. (1968). A statistical theory of mo-
Helstrom, C. W. (1984). Probability and stochas-
bile radio reception. The Bell System Technical
tic processes for engineers (1st Ed.). New York:
Journal, 47, 957–1000.
MacMillan.
Feuerstein, M. J., Blackard, K. L., Rappaport, T.
Ikegami, F., Yoshida, S., Takeuchi, T., & Ume-
S., Seidel, S. Y., & Xia, H. H. (1994). Path loss,
…delay spread, and outage models as functions of antenna height for microcellular system design. IEEE Transactions on Vehicular Technology, 43(3), 487–489. doi:10.1109/25.312809

French, R. C. (1979). The effect of fading and shadowing on channel reuse in mobile radio. IEEE Transactions on Vehicular Technology, 28(3), 171–181. doi:10.1109/T-VT.1979.23788

Ha, T. T. (1990). Digital satellite communications (2nd ed.). New York: McGraw-Hill.

Hansen, F., & Meno, F. I. (1977). Mobile fading – Rayleigh and lognormal superimposed. IEEE Transactions on Vehicular Technology, 26(4), 332–335. doi:10.1109/T-VT.1977.23703

Hashemi, H. (1979). Simulation of the urban radio propagation channel. IEEE Transactions on Vehicular Technology, 28(3), 213–225. doi:10.1109/T-VT.1979.23791

Hashemi, H. (1993a). Impulse response modelling of indoor radio propagation channels. IEEE Journal on Selected Areas in Communications, 11(7), 967–978. doi:10.1109/49.233210

Hashemi, H. (1993b). The indoor radio propagation channel. Proceedings of the IEEE, 81(7), 943–968. doi:10.1109/5.231342

…hira, M. (1984). Propagation factors controlling mean field on urban streets. IEEE Transactions on Antennas and Propagation, 32(8), 822–829. doi:10.1109/TAP.1984.1143419

ITU-R. (1999). The concept of transmission loss for radio links (Recommendation P.341-5).

Jakes, W. C. (1974). Microwave mobile communications (1st ed.). Washington, DC: IEEE Press.

Jeruchim, M., Balaban, P., & Shanmugan, K. S. (2000). Simulation of communication systems – modelling methodology and techniques (2nd ed.). Amsterdam, The Netherlands: Kluwer Academic.

Lee, W. C. Y. (1989). Mobile cellular telecommunications systems. New York: McGraw-Hill.

Lutz, E., Cygan, D., Dippold, M., Dolainsky, F., & Papke, W. (1991). The land mobile satellite communication channel – recording, statistics, and channel model. IEEE Transactions on Vehicular Technology, 40, 375–386. doi:10.1109/25.289418

Molkdar, D. (1991). Review on radio propagation into and within buildings. IEE Proceedings, 138(11), 61–73.

Papoulis, A. (1984). Probability, random variables, and stochastic processes (2nd ed.). New York: McGraw-Hill.

158
Efficient Discrete Simulation of Coded Wireless Communication Systems

Parsons, J. D. (1992). The mobile radio propagation channel (1st ed.). Chichester, UK: John Wiley & Sons.

Proakis, J. G. (2001). Digital communications (4th ed.). New York: McGraw-Hill.

Rappaport, T. S. (2002). Wireless communications: Principles and practice (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

Rappaport, T. S., Seidel, S. Y., & Takamizawa, K. (1991). Statistical channel impulse response models for factory and open plan building radio communication system design. IEEE Transactions on Communications, 39(5), 794–806. doi:10.1109/26.87142

Rice, S. O. (1958). Distribution of the duration of fades in radio transmission: Gaussian noise model. The Bell System Technical Journal, 37(3), 581–635.

Ross, M. S. (1987). Introduction to probability and statistics for engineers and scientists. Chichester, UK: John Wiley & Sons.

Sebastião, P. J. A. (1998). Simulação eficiente do desempenho dos códigos TCH através de modelos estocásticos [Efficient simulation to obtain the performance of TCH codes using stochastic models]. Master's thesis, Instituto Superior Técnico – Technical University of Lisbon, Lisboa, Portugal.

Sebastião, P. J. A., Cercas, F. A. B., & Cartaxo, A. V. T. (2002). Performance of TCH codes in a land mobile satellite channel. 13th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC 2002), Lisbon, Portugal, (pp. 1675-1679).

Shanmugam, K. S. (1979). Digital and analog communication systems. Chichester, UK: John Wiley & Sons.

Simon, M. K., & Alouini, M. (2000). Digital communication over fading channels. Chichester, UK: John Wiley & Sons.

Suzuki, H. (1977). A statistical model for urban radio propagation. IEEE Transactions on Communications, 25(7), 673–680. doi:10.1109/TCOM.1977.1093888

Tanenbaum, A. S. (2003). Computer networks (4th ed.). Upper Saddle River, NJ: Prentice Hall.

Walfish, J., & Bertoni, H. L. (1988). A theoretical model of UHF propagation in urban environments. IEEE Transactions on Antennas and Propagation, 36, 1788–1796. doi:10.1109/8.14401

Yacoub, M., Baptista, J. E. V., & Guedes, L. G. R. (1999). On higher order statistics of the Nakagami-m distribution. IEEE Transactions on Vehicular Technology, 48(3), 790–794. doi:10.1109/25.764995

Yacoub, M. D. (2000). Fading distributions and co-channel interference in wireless systems. IEEE Antennas & Propagation Magazine, 42(1), 150–159. doi:10.1109/74.826357

Yegani, P., & McGillem, C. D. (1991). A statistical model for the factory radio channel. IEEE Transactions on Communications, 39(10), 1445–1454. doi:10.1109/26.103039


KEY TERMS AND DEFINITIONS

ASM: Accelerated Simulation Method, proposed to reduce the simulation time by some orders of magnitude when compared with traditional methods, like Monte Carlo, while keeping the same precision.

Coding Performance: Performance of codes, usually expressed by its bit error ratio, for a given system and environment, expressed by its channel model.

Efficient Simulation: Simulation that is less time consuming, and if possible with lower computational requirements, when compared with classic simulation methods like Monte Carlo.

FEC: Forward error correction is a method commonly used in most digital data transmission systems to enable correction of some of the data symbols received, due to noise and interference introduced by the channel that separates the emitter from the receiver. This is usually done by an error correcting code that detects and corrects some of the symbols received, without the need for retransmission. These codes always introduce some redundancy to the transmitted symbols, which is then removed at the receiver to estimate the transmitted symbols. The performance of a given error correcting code depends on its characteristics and also on the channel model, so it is often evaluated by simulation.

Hard-Decision Decoding: The decoding process is said to be "hard" when the signal at the input, or output, of the decoder is binary, i.e., a signal with only two quantization levels.

Monte Carlo Simulation Method: A classic method to simulate the performance of a system in which all the required elements and system parameters are included for all time instants. Depending on the system under test, this may result in a complex method requiring high computational resources and a long time to get accurate results.

Radio Propagation Channel: This is the environment in which the radio signal carrying the data information travels from the emitter to the receiver. The radio signal, and therefore the information that it carries, is affected by thermal noise, antenna characteristics and several other effects that depend on that environment, such as fading, shadowing and reflections, which are usually described by a channel model that approximates the physical phenomena.

Soft-Decision Decoding: The decoding process is said to be "soft" when the signal at the input, or output, of the decoder is quantized with more than two levels or is left unquantized.

Discrete Channel Model: The discrete channel model (DCM) maps an alphabet of M input symbols at the input of the channel to N output symbols at its output (N > M), so as to be used in the simulation chain. The DCM is modelled by the probability of each symbol at the input of the channel and by the set of transition probabilities caused by errors that depend on the randomness of the physical phenomena described in the radio propagation channel.

WCS: Wireless Communication System is a system in which the transmitter and the receiver are not physically connected by wires, so the transmission is made using radio waves that propagate in the radio channel and carry the desired information.


NOTATION: SYMBOLS

Lowercase Roman Letters

a - Information word vector.
â - Estimated information word vector.
b - Coded word vector.
b(t) - Input sequence of symbols.
bn - Transmitted symbols.
b̂n(t) - Vector of bits estimated by the demodulator.
c - Velocity of light in vacuum.
d - Distance between transmitting and receiving antennas.
d0 - Reference distance in the far field of the antenna.
e - Error.
eb - Energy of bit.
ecc - Error correcting capacity of the code.
f - Frequency.
fAf(af) - Probability density function of the random variable of the fading, Af.

fAs(as) - Probability density function of the random variable of the shadow attenuation, As.
fc - Transmitted carrier's frequency of the bandpass signal.
fD - Doppler frequency.
fL(l) - Probability density function of the random variable of the interval between consecutive errors, L.
fN(n) - Probability density function of the random variable of the additive white Gaussian noise, N.
fR(r) - Probability density function of the random variable of the received signal, R.
fRe(r) - Probability density function of the random variable of the received signal for values in error, Re.
fRne(r) - Probability density function of the random variable of the received signal for values in non error, Rne.
fΓ(γ) - Probability density function of the random variable of the shadow power variation, Γ.
f1(t), f2(t), fj(t) - Base functions.
g - Error correcting capacity.
ge(Θ, Φ) - Transmitter antenna gain.
gr(Θ, Φ) - Receiver antenna gain.
k - Number of bits in an information word.
lD - Decision threshold.
ln - Length of path n.
m - Modulation symbols.
n - Number of bits in a code word.
ne - Non error.
r - Signal amplitude.
rect[·] - Rectangular function.
sm(t) - Symbol m, corresponding to a set of bits of a coded word, according to the used modulation.
t - Time.
t0 - Reference time.
v - Velocity of terminal station.

Uppercase Roman Letters

BER - Bit error ratio.
BÊR - Estimated bit error ratio.
Dn - Doppler shift.
Eb - Information bit energy.
Es - Symbol energy.
E[N] - Average of noise amplitudes.
E(Af) - Average of the fading.
FL(l) - Cumulative probability density function of random variable L.
FR(r) - Cumulative probability density function of random variable R.
FRe(r) - Cumulative probability density function of random variable Re.
FRne(r) - Cumulative probability density function of random variable Rne.
G - Gain of the coded WCS.
ĨL(t) - Interfering source.
IWe - Information words with errors.
IWne - Information words without errors.
L - Interval between two consecutive errors.
M - Number of constellation symbols.
N - Vector of the noise amplitudes.
NCW - Number of code words.
NIBe - Number of information bits in error.
NIBne - Number of information bits without error.
NIBTOTAL - Total number of information bits.
N(t) - Additive white Gaussian noise samples.
N0 - Noise spectrum density.
N1, N2 - Noise samples.
PA - Processing advantage for the accelerated simulation method.
Pe - Probability of error.
Rl - Amplitude reference level.
Ra - Auxiliary variable.
Rb - Auxiliary variable.
Re - Error samples.
RI - In phase component of the demodulated amplitude.
Rne - Non error samples.

RQ - Quadrature component of the demodulated amplitude.
R(t) - Received signal at the demodulator.
R - Vector with the random amplitudes of the received signal.
R1, R2 - Amplitudes for binary phase shift keying.
T - Received signal time exceeding the reference level.
Ts - Symbol time.
U - Uniform random variable.
W(t) - Transmitted signal.
W̃(t) - Complex envelope of transmitted signal.
X̃(t) - Transmitted signal.
Ỹ(t) - Complex envelope of received signal plus interfering sources at the demodulator.
Z̃L(t) - Complex envelope of interfering signal.
Z0(t) - Received bandpass signal.
Z̃0(t) - Complex envelope of received signal.

Lowercase Greek Letters

αf - Range of values for the random variable Af.
αs - Range of values for the random variable As.
β - Parameter that characterizes the environment of communication systems.
δln - Variation of path length.
γ - Range of values for the random variable Γ.
μΓdB - Average power variation due to shadowing, in dB.
λc - Carrier wavelength of transmitted signal.
σ - Standard deviation of filtered additive white Gaussian noise.
σ²ΓdB - Power variance, due to shadowing, in dB.
σ²N - Noise variance at the correlator's output.
σ²h - Variance of complex gain.
σ²Af - Variance of the Rayleigh probability density function.
τn - Signal delay of path n.
π - Constant given by the ratio between perimeter and diameter of a circumference.
φm - Phase of symbol m.
φn - Carrier phase shift for the nth path.

Uppercase Greek Letters

A - Fading and attenuation due to shadowing that affects the signal.
Af - Fading that affects the signal.
Ap(d0)dB - Free space attenuation at the reference distance, in dB.
Ap(d) - Free space attenuation for the transmitted signal as a function of distance.
Ap - Free space attenuation for the transmitted signal.
As - Attenuation due to shadowing for the transmitted signal.
Bf - Fading for interfering signal.
Bp - Free space attenuation for interfering signal.
Bs - Attenuation due to shadowing for interfering signal.
Γ - Random variable of the signal power variation, due to shadowing.
Θ - Azimuth angle.
Φ - Elevation angle.

Other Letters

ℜ[·] - Real part.


NOTATION: ACRONYMS

ASM - Accelerated simulation method.
AWGN - Additive white Gaussian noise.
BER - Bit error ratio.
BPF - Band-pass filters.
BPSK - Binary phase shift keying.

CDMA - Code division multiple access.
CPDF - Cumulative probability density function.
DCM - Discrete channel model.
DEMOD - Demodulator.
DSP - Digital signal processing.
FEC - Forward error correction.
HD - Hard-decision.
LTE - Long term evolution.
MCSM - Monte Carlo simulation method.
MOD - Modulator.
OFDM - Orthogonal frequency-division multiplexing.
PA - Processing advantage.
PDF - Probability density function.
PSK - Phase shift keying.
RV - Random variable.
SD - Soft-decision.
TCH - Tomlinson, Cercas and Hughes family of codes.
UMTS - Universal mobile telecommunications system.
WCS - Wireless communication systems.
Wi-Fi - Wireless fidelity.
WiMAX - Worldwide interoperability for microwave access.


ENDNOTES

1. Variables p, s, f represent path, shadowing and fading, respectively.
2. Variable B is used to express the interfering signal.
3. This phenomenon is characterized by a slow variation of the received amplitude of the signal during a long period or large distances between transmitter and receiver.
4. This is used to describe the phenomena in which the amplitude and phase of the received signal, or its multi-path components, exhibit fast variations in very short periods of time or small distances between transmitter and receiver, in the order of a few wavelengths.

APPENDICES

Free Space Attenuation
Free space attenuation Ap depends essentially on frequency and distance. Assuming that in a radio com-
munication link the carrier frequency remains approximately constant, we can emphasize its dependency
on distance d between emitter and receiver as:

$$A_p(d) \propto \left(\frac{d}{d_0}\right)^{\beta}$$   (A.1)

where the parameter β depends on the communication channel (Rappaport, 2002, p. 139, table 4.2). Considering d0 as a reference distance in the far field of the antenna, we can express the average of Ap, in dB, as:

$$A_p(d)_{dB} = A_p(d_0)_{dB} + 10 \cdot \beta \cdot \log\left(\frac{d}{d_0}\right)$$   (A.2)

where the reference value $A_p(d_0)_{dB}$ can be obtained by evaluation of the electromagnetic field.
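As an illustration, the log-distance model of equation (A.2) can be sketched in a few lines of Python; the reference attenuation of 30 dB and the environment parameter β = 3 used as defaults here are assumed example values, not taken from the chapter:

```python
import math

def path_loss_db(d, d0=1.0, pl_d0_db=30.0, beta=3.0):
    """Average path-loss attenuation A_p(d) in dB, per the
    log-distance model of equation (A.2).
    d and d0 are in metres; pl_d0_db is A_p(d0) in dB (an assumed
    reference value); beta characterizes the environment."""
    if d < d0:
        raise ValueError("model is valid only in the far field, d >= d0")
    return pl_d0_db + 10.0 * beta * math.log10(d / d0)
```

With these assumed defaults, every decade of distance adds 10·β = 30 dB of attenuation.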

Attenuation Due to Shadowing

This phenomenon occurs due to several physical propagation effects that take place when obstacles
interfere in the normal path of electromagnetic waves. For example, diffraction occurs at the edges of
buildings or similar objects, reflections take place in the terrain and other objects like buildings, disper-
sion may occur in the soil, trees, buildings, bridges, etc., refraction may occur with walls and windows
and absorption may occur in green zones like forests or parks. The coexistence of all these effects may produce a slow (large-scale) or fast (small-scale) variation in time. The random nature of this phenomenon results in a variation of the received signal power, centred on its average value (Hansen & Meno, 1977), which is the corresponding free space attenuation and is deterministic. Its stochastic behaviour can be
characterized by a lognormal PDF, that is, it follows a normal PDF in which its results are expressed
using logarithmic units.
The physical phenomena behind the lognormal distribution include all effects previously identified,
which interfere with electromagnetic waves in a given environment. As a result, electromagnetic waves
travel through different and independent paths until they reach the receiver. Considering that there is a
great number of signal contributions arriving from independent paths, they converge at the receiver, where
they are added, to a signal that can be characterized by a random variable with Gaussian distribution
(central limit theorem) (Helstrom, 1984, pp. 223-227). This is usually represented by the power of the
received signal in decibels (dB). Several authors have related the PDF of the received signal to experimental results (Suzuki, 1977; French, 1979; Hansen & Meno, 1977). According to the latter, the average power of the received signal, γ, follows a lognormal distribution given by:

164
Efficient Discrete Simulation of Coded Wireless Communication Systems

$$f_\Gamma(\gamma) = \frac{10}{\sqrt{2\pi} \cdot \sigma_{\gamma\,dB} \cdot \ln(10) \cdot \gamma} \cdot \exp\left(-\frac{\left(10 \cdot \log_{10}(\gamma) - \mu_{\gamma\,dB}\right)^2}{2\,\sigma_{\gamma\,dB}^2}\right) \cdot u(\gamma)$$   (B.1)

where $\mu_{\gamma\,dB}$ and $\sigma^2_{\gamma\,dB}$ are the average and variance, respectively, of the received signal power affected by shadowing, in dB. The step function u(γ) is used since the lognormal PDF only makes sense for nonnegative values.
Sometimes it is useful to know the amplitude of the received signal just characterized by a power
variation due to shadowing γ. Based on its equation, and using some manipulation, it was possible to
derive an expression for the PDF of the corresponding amplitude of the signal, As, which can be ex-
pressed as:

$$f_{A_s}(a_s) = \frac{20}{\sqrt{2\pi} \cdot \sigma_{\gamma\,dB} \cdot \ln(10) \cdot a_s} \cdot \exp\left(-\frac{\left(10 \cdot \log_{10}(a_s^2) - \mu_{\gamma\,dB}\right)^2}{2\,\sigma_{\gamma\,dB}^2}\right) \cdot u(a_s)$$   (B.2)
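Because the shadowing power variation is Gaussian in dB, sampling the amplitude As of equation (B.2) reduces to drawing a normal variate in dB and converting to linear amplitude. A minimal sketch, with assumed example values for the mean and standard deviation:

```python
import math
import random

def shadowing_amplitude(mu_db=0.0, sigma_db=8.0):
    """Draw one shadowing attenuation sample A_s.
    The received power variation in dB is Gaussian (lognormal in
    linear units): gamma_dB ~ N(mu_db, sigma_db^2), and the linear
    amplitude is a_s = 10**(gamma_dB / 20), since amplitude is the
    square root of power.  mu_db and sigma_db are assumed example
    values, not taken from the chapter."""
    gamma_db = random.gauss(mu_db, sigma_db)
    return 10.0 ** (gamma_db / 20.0)
```

The samples are always positive, consistent with the step function u(as) in (B.2), and their median in linear units is 10^(mu_db/20).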

Attenuation Due to Fading

The phenomenon responsible for this type of attenuation, Af, usually known as fading, is dispersion
of the electromagnetic wave in the surrounding environment which originates multipath propagation.
Fading produces a variation in the amplitude and/or phase of the received signal and can be verified
in the time or frequency domain. This variation can be fast or slow depending on several factors such
as the velocity of the surrounding objects, or even the receiver, number and type of objects and com-
munication distance.
A typical radio propagation channel considers N different paths, each of length ln. Each of these signal replicas arrives at the receiver with delay $\tau_n = l_n/c$. Let us define Afn as the fading associated with the signal
arriving from path n. If we assume that all objects are static, including emitter, receiver and all objects
in the environment, then its effect on the channel is only delay spread with different attenuation factor
and the channel is said to be time invariant. On the other hand, if the different paths vary with time, the
channel is said to be time variant and therefore Afn and τn change with time.
As we can see in the general model for the propagation channel shown in Figure 3, the mathematical
characterization of fading considers the transmission of a bandpass signal with a carrier frequency fc and
complex envelope W (t ) . Since we want to concentrate our attention on the characterization of fading,
we omit by now the attenuation due to the other effects (free space and shadowing), so the bandpass
signal can be written as:

$$W(t) = \Re\left[\tilde{W}(t) \cdot \exp\left(j 2\pi f_c t\right)\right]$$   (C.1)

and the received signal as:

165
Efficient Discrete Simulation of Coded Wireless Communication Systems

$$Z_0(t) = \Re\left[\tilde{Z}_0(t) \cdot \exp\left(j 2\pi f_c t\right)\right]$$   (C.2)

where $\Re[\cdot]$ is the real part operator.
In these conditions, and assuming that all objects are static, the received signal Z0(t) can be written
as:

$$Z_0(t) = \sum_{n=0}^{N} A_{f_n} \cdot W\left(t - \frac{l_n}{c}\right) = \Re\left[\sum_{n=0}^{N} A_{f_n} \cdot \tilde{W}\left(t - \frac{l_n}{c}\right) \cdot \exp\left[j 2\pi f_c \left(t - \frac{l_n}{c}\right)\right]\right]$$   (C.3)

The complex envelope of the lowpass equivalent of this signal, relating wavelength and carrier frequency by $\lambda_c = c/f_c$, can be written as:

$$\tilde{Z}_0(t) = \sum_{n=0}^{N} A_{f_n} \cdot \tilde{W}\left(t - \frac{l_n}{c}\right) \cdot \exp\left(-j 2\pi \frac{l_n}{\lambda_c}\right)$$   (C.4)

Equation (C.4) shows that the phase shift of the carrier from path n is $\varphi_n \equiv 2\pi\, l_n/\lambda_c$. Therefore the delay associated with the nth path is $\tau_n = l_n/c$ and we can rewrite (C.4) as:

$$\tilde{Z}_0(t) = \sum_{n=0}^{N} A_{f_n} \cdot \tilde{W}(t - \tau_n) \cdot \exp(-j\varphi_n)$$   (C.5)

Equation (C.5) clearly shows that the lowpass equivalent of the received signal depends on the at-
tenuation, phase shift and delay associated with each replica of the signal arriving from path n.
A more generic mathematical model can be achieved using the modifications proposed in (Aulin,
1979) for the two dimensional model presented in (Clarke, 1968). This considers a model with the gain
of transmitting or receiving antennas in which the signal can arrive from any direction defined by its
azimuth Θ and elevation angle Φ. This model also considers movement, so the static restriction was
removed.
Therefore, and defining the velocity of the receiver terminal as v, the length variation of path n can
be written as:

$$\delta l_n = -v \cdot (t - t_0) \cdot \cos(\Theta_n) \cdot \sin(\Phi_n)$$   (C.6)

and the complex envelope of the received signal expressed by equation (C.5) can be written as:

$$\tilde{Z}_0(t) = \sum_{n=0}^{N} A_{f_n} \cdot \tilde{W}\left(t - \frac{l_n + \delta l_n}{c}\right) \cdot \exp\left[-j 2\pi \left(\frac{l_n + \delta l_n}{\lambda_c}\right)\right]$$   (C.7)

166
Efficient Discrete Simulation of Coded Wireless Communication Systems

which can be expressed in the form:

$$\tilde{Z}_0(t) = \sum_{n=0}^{N} A_{f_n} \cdot \tilde{W}\left(t - \tau_n + \frac{v \cdot (t - t_0) \cdot \cos(\Theta_n) \cdot \sin(\Phi_n)}{c}\right) \cdot \exp(-j\varphi_n) \cdot \exp\left(j 2\pi \cos(\Theta_n) \cdot \sin(\Phi_n) \cdot \frac{v}{\lambda_c} \cdot (t - t_0)\right)$$   (C.8)

This equation can be simplified using $\tilde{A}_{f_n} = A_{f_n} \cdot \exp(-j\varphi_n)$ and assuming that:

$$\frac{v \cdot (t - t_0) \cdot \cos(\Theta_n) \cdot \sin(\Phi_n)}{c} \ll \tau_n$$   (C.9)

By introducing the frequency shift due to movement or Doppler shift fD, at carrier frequency f = fc,
(Jakes, 1974, p. 20), that is:

$$f_D = \frac{v}{c} \cdot f_c$$   (C.10)

The Doppler shift associated with the nth path can be expressed as:

$$D_n = \cos(\Theta_n) \cdot \sin(\Phi_n) \cdot f_D$$   (C.11)

so we can express the complex envelope of the lowpass equivalent of the received signal as:

$$\tilde{Z}_0(t) = \sum_{n=0}^{N} \tilde{A}_{f_n} \cdot \tilde{W}(t - \tau_n) \cdot \exp\left(j 2\pi D_n t\right)$$   (C.12)

This equation defines the propagation channel, emphasizing the effects of Doppler shift and delay
spread, and it is of major importance for the characterization of the DCM.
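The sum of delayed, attenuated and Doppler-shifted replicas in equation (C.12) translates directly into code. The sketch below evaluates the received complex envelope at one time instant; the function names and the tuple layout of the path list are illustrative choices, not the chapter's notation:

```python
import cmath

def received_envelope(w, t, paths):
    """Complex envelope of the received signal, equation (C.12):
    the sum over paths of A_f_n * W(t - tau_n) * exp(j*2*pi*D_n*t).
    `w` is the transmitted complex envelope as a function of time;
    `paths` is a list of (a_f, tau, doppler) tuples, where a_f is
    the complex path gain (attenuation and phase shift), tau the
    path delay and doppler the frequency shift D_n."""
    return sum(a_f * w(t - tau) * cmath.exp(1j * 2 * cmath.pi * d * t)
               for (a_f, tau, d) in paths)
```

For a single static path with unit gain, zero delay and zero Doppler shift, the received envelope equals the transmitted one, as expected.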

Demodulated Amplitude in a Discrete Channel Model

A general expression for the received, and demodulated, signal in a WCS using a DCM is expressed
by equation (D.1). The demodulator can be represented by a bank of correlators and its associated base
functions to obtain the demodulated signal R(t) (Proakis, 2001, pp. 232-236). For example, in PSK modulation with a modulation index M ≥ 2 there are two base functions.

167
Efficient Discrete Simulation of Coded Wireless Communication Systems

$$R = \int_0^{T_s} \Biggl\{ \Biggl[ g_e(\Theta, \Phi) \sum_{n=0}^{N} \Re\Bigl\{ A_{p_n} \cdot A_{s_n} \cdot A_{f_n} \cdot b(t) \cdot \exp\Bigl(j 2\pi f_c \Bigl(t - \frac{l_n}{c}\Bigr)\Bigr) \cdot \exp(-j\varphi_n) \cdot \exp(j 2\pi D_n t) \Bigr\} \cdot g_r(\Theta_n, \Phi_n)$$
$$+ \sum_{l=1}^{L} \sum_{k=0}^{K} \Re\Bigl\{ B_{p_k} \cdot B_{s_k} \cdot B_{f_k} \cdot \tilde{I}_l\Bigl(t - \frac{l_k}{c}\Bigr) \cdot \exp(j 2\pi D_k t) \Bigr\} \cdot g_r(\Theta_k, \Phi_k) \Biggr] \cdot \Biggl[ \sqrt{\frac{2}{T_s}} \cos(2\pi f_c t) - \sqrt{\frac{2}{T_s}} \sin(2\pi f_c t) \Biggr] \Biggr\} dt + \mathbf{N}$$   (D.1)

In this expression we have:

$$b(t) = \sum_{n=-\infty}^{+\infty} b_n \cdot \mathrm{rect}\left(\frac{t - nT_s}{T_s}\right)$$   (D.2)

and

$$b_n = \sqrt{\frac{2E_s}{T_s}} \cdot \exp(j\phi_m), \qquad \phi_m = \frac{2\pi}{M}(m - 1), \quad m = 1, \ldots, M$$   (D.3)

where M is the modulation index, $\sqrt{2E_s/T_s}$ is the carrier amplitude, Es is the symbol energy and Ts is the symbol time. The energy of an information bit Eb is related to Es by $E_b = E_s/\log_2(M)$. N is a vector containing the random amplitudes of noise at the output of the demodulator after low-pass filtering. Since the noise that entered this filter, assumed linear, is Gaussian, the noise at its output is also Gaussian (Haykin, 2001, p. 56, property 1; Proakis, 2001, p. 234), with $E[\mathbf{N}] = 0$ and $\sigma_N^2 = N_0/2$, where $N_0$ is the power spectral density of the noise N(t) (Haykin, 2001, pp. 318-322).
The complex envelope for the received signal with the DCM shown in Figure 2, will now be deduced
with a few assumptions. First of all, we assume PSK modulation. We also assume non-selective fading in the frequency domain, i.e., $l_n/c \ll T_s$; we further assume that this fading is slow, i.e., $T_s \ll 1/f_D$, and therefore it remains constant for at least a symbol duration. We also assume that the antenna presents a constant gain for all multipath signals involved, we do not consider interfering signals, and the attenuation due to shadowing follows a lognormal PDF.
We also define the amplitude attenuation that affects a given symbol, due to fading and shadowing, with random variables Af and As, respectively. Fading also produces a random variation in the signal phase. In the analysis of communication systems, it is often assumed that random changes in the signal phase are not relevant and can be corrected in the receiver, namely if we use an ideal coherent receiver. On the other hand, if the modulation does not require a coherent receiver, then this random change does not affect the receiver performance (Simon & Alouini, 2000, pp. 15-16) and so it can be neglected. Therefore we

168
Efficient Discrete Simulation of Coded Wireless Communication Systems

can simplify equation (D.1) to:

$$R = \int_0^{T_s} \left\{ \left[ A_s \cdot A_f \cdot \sqrt{\frac{2E_s}{T_s}} \cdot \cos(2\pi f_c t + \phi_m) \right] \cdot \left[ \sqrt{\frac{2}{T_s}} \cos(2\pi f_c t) - \sqrt{\frac{2}{T_s}} \sin(2\pi f_c t) \right] \right\} dt + \mathbf{N}$$   (D.4)

which can be written in the simple form:

$$R = \sqrt{E_s} \cdot A_s \cdot A_f \cdot \left[R_a \;\; R_b\right] + \mathbf{N}$$   (D.5)

where

$$R_a = \cos(\phi_m) - \frac{1}{4\pi f_c}\left(\sin(\phi_m) + \sin(\phi_m + 4\pi f_c T_s)\right)$$   (D.6)

and

$$R_b = \sin(\phi_m) - \frac{1}{4\pi f_c}\left(\cos(\phi_m) + \cos(\phi_m + 4\pi f_c T_s)\right)$$   (D.7)

Assuming that $T_s = N_c/f_c$, where Nc is an integer, and assuming BPSK modulation, we can further simplify equation (D.5) to:

$$\begin{bmatrix} R_1 \\ R_2 \end{bmatrix} = \begin{bmatrix} \sqrt{E_s} \cdot A_s \cdot A_f \\ -\sqrt{E_s} \cdot A_s \cdot A_f \end{bmatrix} + \begin{bmatrix} N_1 \\ N_2 \end{bmatrix}$$   (D.8)

where R1 and R2 are the demodulated amplitudes corresponding to transmitted symbols s1 and s2, respectively. N1 and N2 are random variables with Gaussian distribution with $E[N] = 0$ and $\sigma_N^2 = N_0/2$.
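Equation (D.8) lends itself to a very compact sampler for the demodulated BPSK amplitude of one symbol period. The function below is a sketch with assumed default values for Es and N0:

```python
import math
import random

def demodulated_amplitude(symbol, a_s, a_f, es=1.0, n0=1.0):
    """Demodulated BPSK amplitude per equation (D.8):
    R = +/- sqrt(Es) * As * Af + N, with N ~ Gaussian(0, N0/2).
    `symbol` is +1 for s1 and -1 for s2; a_s and a_f are the
    shadowing and fading amplitudes for this symbol period.
    es and n0 are assumed example values, not the chapter's."""
    noise = random.gauss(0.0, math.sqrt(n0 / 2.0))
    return symbol * math.sqrt(es) * a_s * a_f + noise
```

With vanishing noise density, the returned amplitude approaches the deterministic value ±√Es·As·Af of (D.8).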

Error’s Occurrence Position

The occurrence of errors in the channel is directly related to the power, or amplitude, of the received
signal. In the analysis of positions where errors in the DCM are more likely to occur, we assume that
they correspond to periods for which the amplitude of the received signal r is below a given reference
threshold level Rl and that these periods have a geometric distribution. For this model we sample the
received amplitude of the signal every symbol period, that is, at the symbol frequency, as shown in
Figure 12. Defining T as an independent random variable that accounts for the time in which the signal

169
Efficient Discrete Simulation of Coded Wireless Communication Systems

Figure 12. Sampling process of the received signal amplitude affected by fading.

is above Rl, that is, an integer number n of symbol periods in which that condition is verified, then we can express its probability of occurrence as:

$$\Pr(T = n) = \left[\Pr(r \geq R_l)\right]^n \cdot \Pr(r < R_l)$$   (E.1)

which is a geometric distribution (Ross, 1987).


As shown in Figure 12, the amplitude of the received signal is sampled every Ts and the number of symbols not affected by fading in each interval, i.e., when r > Rl, is given by T = nTs. Therefore, the number of symbols considered depends both on the symbol rate and on the reference level Rl. The example depicted in Figure 12 shows sequences of n = 28, 20 and 13 consecutive symbol periods not affected by fading.
Then, the probability of transmitting a sequence of l - 1 = n sequential symbols in the channel without
error, for example a sequence corresponding to l transmitted zeros, is given by the geometric PDF:

$$f_L(l) = (1 - P_e)^{l-1} \cdot P_e, \qquad l = 1, 2, \ldots$$   (E.2)

where L is the RV defining the interval between errors and Pe is the channel probability of error, some-
times referred to as transition probability. The corresponding CPDF is given by:

$$F_L(l) = \sum_{i=1}^{l} (1 - P_e)^{i-1} \cdot P_e$$   (E.3)

Summing the geometric series gives:

$$F_L(l) = P_e \cdot \frac{1 - (1 - P_e)^l}{1 - (1 - P_e)}$$   (E.4)

which can be simplified to:

$$F_L(l) = 1 - (1 - P_e)^l, \qquad P_e \neq 0$$   (E.5)

and using the analytical transformation method referred to in (Jeruchim, Balaban & Shanmugan, 2000, pp. 377-378), we have

$$(1 - P_e)^L = 1 - U$$   (E.6)

where U is a uniform RV. After some manipulation we can write:

$$L = \frac{\log(1 - U)}{\log(1 - P_e)}$$   (E.7)

Considering that both (1-U) and U are RVs with the same uniform distribution and L only takes
integer values, equation (E.7) can be expressed as:

$$L = \left\lceil \frac{\log(U)}{\log(1 - P_e)} \right\rceil$$   (E.8)

in which $\lceil \cdot \rceil$ denotes the "integer greater than or equal to" (ceiling) operator.
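Equation (E.8) is the core of the accelerated error-position generator: instead of simulating every symbol, one draws the gap to the next error directly. A minimal sketch:

```python
import math
import random

def error_interval(pe):
    """Interval L between consecutive channel errors, drawn by the
    inverse-transform method of equation (E.8):
    L = ceil(log(U) / log(1 - Pe)), with U ~ Uniform(0, 1)."""
    u = random.random()
    while u == 0.0:          # avoid log(0); random() is in [0, 1)
        u = random.random()
    return math.ceil(math.log(u) / math.log(1.0 - pe))
```

Since L follows the geometric PDF of (E.2), its sample mean converges to 1/Pe, which gives a quick sanity check on the generator.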

Amplitude Samples for the Received Signal

Once the positions of the erroneous symbols in the transmitted sequence in the channel are known, we
need to generate R samples with amplitudes in accordance with that condition, that is, samples that are
likely to cause errors (Re), as well as samples that do not cause errors (Rne), for the remaining posi-
tions. These samples are generated according to the corresponding PDF of A = Af × As that models
the amplitudes of the digital channel.
For simplicity, and assuming that s1(t) = +1, the PDFs of the generated samples are given by:

$$f_{R_e}(r) = \Pr\left(A \cdot s_1(t) + N < 0\right) = \frac{f_R(r)}{F_R(0)} \cdot u(-r)$$   (F.1)

and

$$f_{R_{ne}}(r) = \Pr\left(A \cdot s_1(t) + N > 0\right) = \frac{f_R(r)}{1 - F_R(0)} \cdot u(r)$$   (F.2)

where u(r) is the step function.


Assuming that Af and As are statistically independent, the PDF of their product is given by (Helstrom, 1984, pp. 139-147):

$$f_A(a) = \int_0^{\infty} \frac{1}{a_s} \cdot f_{A_s}\left(\frac{a}{a_s}\right) \cdot f_{A_f}(a_s)\, da_s$$   (F.3)

Since A and N are independent RVs, the PDF of the received signal amplitudes R, fR(r), can be
obtained using the convolution of their PDFs, which can be easily evaluated using their characteristic
function (Papoulis, 1984, pp. 155-158).
The probability of error when s1(t) = +1 is given by:

$$F_R(0) = \int_{-\infty}^{0} f_R(r)\, dr$$   (F.4)

The CPDFs of Re and Rne are given, respectively, by:

$$F_{R_e}(a) = \int_{-\infty}^{a} f_{R_e}(r)\, dr$$   (F.5)

and

$$F_{R_{ne}}(a) = \int_0^{a} f_{R_{ne}}(r)\, dr$$   (F.6)

Finally, the desired samples are evaluated by $R_e = F_{R_e}^{-1}(U)$ and $R_{ne} = F_{R_{ne}}^{-1}(U)$ using the transformation method (analytical or empirical) described in (Jeruchim, Balaban & Shanmugan, 2000, pp. 377-380). When sending the other symbol, i.e., s2(t) = -1, the generation of these Re and Rne samples follows exactly the same procedure, except that the samples are multiplied by -1 at the end.
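When the inverse CPDF is not available in closed form, the empirical variant of the transformation method can be used. The sketch below builds a sampler from a set of representative amplitude samples (e.g. of Re or Rne); it is a generic illustration of the empirical approach, not the chapter's exact algorithm:

```python
import random

def make_inverse_cdf_sampler(samples):
    """Empirical inverse-CDF (transformation-method) sampler.
    Sort the representative samples and index the sorted list with
    a uniform variate U, which approximates F^{-1}(U) for the
    empirical CPDF of the samples."""
    sorted_samples = sorted(samples)
    n = len(sorted_samples)

    def draw():
        u = random.random()                # U ~ Uniform(0, 1)
        return sorted_samples[int(u * n)]  # empirical F^{-1}(U)

    return draw
```

Every drawn value belongs to the original sample set, so the generated amplitudes stay within the observed range by construction.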

Application Example for Fading with Rayleigh Distribution and Additive White Gaussian Noise

Let us consider that the amplitude A = Af follows a Rayleigh distribution described by its PDF:

$$f_{A_f}(a_f) = \frac{a_f}{\sigma_h^2} \cdot \exp\left(-\frac{a_f^2}{2\,\sigma_h^2}\right) \cdot u(a_f)$$   (G.1)

where σh is the standard deviation of the propagation channel and u(αf) denotes the step function, in which αf is the range of values for the RV Af.
Similarly, consider that noise samples follow a Gaussian PDF:

$$f_N(n) = \frac{1}{\sqrt{2\pi}\,\sigma} \cdot \exp\left(-\frac{n^2}{2\sigma^2}\right)$$   (G.2)

where σ is the standard deviation of the filtered AWGN and n is the range of values for RV N.
Since the performance of a given WCS is usually expressed by its BER, and this can be written as a function of the ratio between the received energy per information bit Eb and the power spectral density of noise N0, Eb/N0, it is useful to express the RVs as a function of this ratio. By doing so, and considering the normalization $e_b = 1$ [$E_b = 10 \cdot \log_{10}(e_b)$], the standard deviation σ can be expressed as:

$$\sigma = \frac{1}{\sqrt{2 \cdot 10^{\left(E_b/N_0\right)/10}}}$$   (G.3)

where Eb/N0 is expressed in dB.

It is useful to express the fading samples as a function of the average fading, E(Af):

$$E(A_f) = \int_0^{\infty} a_f \cdot f_{A_f}(a_f)\, da_f = \sigma_h \cdot \sqrt{\frac{\pi}{2}}$$   (G.4)

that can also be written as

$$\sigma_h = \frac{10^{\left(E(A_f)_{dB}/10\right)}}{\sqrt{\pi/2}}$$   (G.5)

where $E(A_f)_{dB} = 10 \cdot \log_{10}\left(E(A_f)\right)$. The variance of the Rayleigh PDF is then given by:

$$\sigma_{A_f}^2 = \sigma_h^2 \cdot \left(\frac{4 - \pi}{2}\right)$$   (G.6)

We can now express fR(r) as a function of these variances:

ïìï æ sh2 r 2 ö÷ üï
ïï2s 2 1 + 1 + exp ççç ÷÷ 2p × r + ïï
ïï 2 2 çç 2s + 2s s  ÷÷
4 2 2 ÷ ïï
s sh è h ø ïï
ïï ïï
æ r ö÷ ï
2 ï æ 1 1 ö
÷ ï
ç ç
exp çç- 2 ÷÷ × í
æ ö
çç sh × 2
+ 2 × r ÷÷÷ý
çè 2s ÷ø ïï s 2
r 2
÷÷ ç s 2
sh ÷÷ïï
ïï+ exp ççç h
÷ 2 p × r × erf
çç
çç ÷÷ïï
çç 2s 4 + 2s 2s 2 ÷÷÷
ïï
ïï è h ø çç ( 2
2 × s + sh 2
) ÷÷ïï
÷÷ï
ïï ç
çè ÷÷ïï
ï
î ø þïï × u -r
fR (r ) = ( )
æ ö÷
çç 1 1
çç2s ×
çè
+
s 2 sh2
(
2
× s + sh × 2p ÷÷2
) ÷
÷ø
(G.7)

For BPSK modulation, matrix s has two terms: s1(t) = +1 and s2(t) = -1, for information bits "0" and "1", respectively. Considering that these symbols are equiprobable, and that the decision threshold is assumed to be zero, the probability of error in the channel is obtained from the CPDF of the received signal R, i.e., Pe = FR(0). Substituting equation (G.7) in (F.4) and after some mathematical manipulation, we can express the probability of error as:

P_e = F_R(0) = \frac{1}{2}\left(1 - \frac{\sigma_h}{\sqrt{\sigma^2+\sigma_h^2}}\right)    (G.8)

which agrees with the results presented in (Proakis, 2001, pp. 155-158).
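The closed form (G.8) can also be cross-checked with a brute-force Monte Carlo experiment. The sketch below is illustrative (our own function names; eb = 1, Eb/N0 = 10 dB and E(Af) = 0 dB): it transmits the BPSK symbol +1, scales it by a Rayleigh fading amplitude, adds Gaussian noise, and counts the received samples falling below the zero decision threshold.

```python
import math
import random

def ber_rayleigh_theory(sigma, sigma_h):
    # (G.8): Pe = FR(0) = 1/2 * (1 - sigma_h / sqrt(sigma^2 + sigma_h^2))
    return 0.5 * (1.0 - sigma_h / math.sqrt(sigma ** 2 + sigma_h ** 2))

def ber_rayleigh_montecarlo(sigma, sigma_h, n, rng):
    errors = 0
    for _ in range(n):
        a = math.hypot(rng.gauss(0.0, sigma_h), rng.gauss(0.0, sigma_h))
        r = a * 1.0 + rng.gauss(0.0, sigma)   # BPSK symbol +1, fading, AWGN
        if r < 0.0:                           # decision threshold at zero
            errors += 1
    return errors / n

rng = random.Random(7)
sigma = 1.0 / math.sqrt(2.0 * 10.0)         # Eb/N0 = 10 dB, eb = 1 (G.3)
sigma_h = 1.0 / math.sqrt(math.pi / 2.0)    # E(Af) = 0 dB (G.5)

pe_theory = ber_rayleigh_theory(sigma, sigma_h)
pe_sim = ber_rayleigh_montecarlo(sigma, sigma_h, 200000, rng)
print(pe_theory, pe_sim)
```

The simulated error rate converges to the analytical value, which is exactly the kind of agreement the conventional Monte Carlo reference provides against the accelerated method.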
The CPDFs for Re and Rne under a Rayleigh distribution can be derived from equation (G.7). After some algebraic manipulation:

Figure 13. CPDF FRe(r) for Eb/N0 = 10 dB and three fading values


F_{R_e}(r) = \frac{1 + \operatorname{erf}\!\left(\frac{r}{\sqrt{2}\,\sigma}\right) - \frac{\sigma_h}{\sqrt{\sigma^2+\sigma_h^2}}\, e^{-\frac{r^2}{2(\sigma^2+\sigma_h^2)}} \left[1 + \operatorname{erf}\!\left(\frac{\sigma_h\, r}{\sqrt{2}\,\sigma\,\sqrt{\sigma^2+\sigma_h^2}}\right)\right]}{2\, F_R(0)} \; u(-r)    (G.9)

Similarly, and using equation (G.9), the CPDF for Rne gives:

F_{R_{ne}}(r) = \frac{\frac{\sigma_h}{\sqrt{\sigma^2+\sigma_h^2}}\left\{1 - e^{-\frac{r^2}{2(\sigma^2+\sigma_h^2)}}\left[1 + \operatorname{erf}\!\left(\frac{\sigma_h\, r}{\sqrt{2}\,\sigma\,\sqrt{\sigma^2+\sigma_h^2}}\right)\right]\right\} + \operatorname{erf}\!\left(\frac{r}{\sqrt{2}\,\sigma}\right)}{2\left(1 - F_R(0)\right)} \; u(r)    (G.10)

The expressions for the RVs Re and Rne are simply obtained by inverting the previous equations.
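Since no closed-form inverse of these CPDFs exists, the inversion can be done numerically. The following sketch (our own helper names; bisection over a truncated support) draws Rne samples by solving F_Rne(r) = u for a uniform u, using the closed-form CPDF of the non-error samples:

```python
import math
import random

def cpdf_rne(r, sigma, sigma_h):
    # CPDF of Rne for r >= 0, normalized by 2 * (1 - FR(0)), see (G.10)
    s2 = sigma ** 2 + sigma_h ** 2
    fr0 = 0.5 * (1.0 - sigma_h / math.sqrt(s2))      # (G.8)
    num = (sigma_h / math.sqrt(s2)
           * (1.0 - math.exp(-r * r / (2.0 * s2))
              * (1.0 + math.erf(sigma_h * r
                                / (math.sqrt(2.0) * sigma * math.sqrt(s2)))))
           + math.erf(r / (math.sqrt(2.0) * sigma)))
    return num / (2.0 * (1.0 - fr0))

def sample_rne(sigma, sigma_h, u, hi=50.0):
    # Inverse-transform sampling: solve F_Rne(r) = u on [0, hi] by bisection
    lo = 0.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if cpdf_rne(mid, sigma, sigma_h) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = random.Random(3)
sigma, sigma_h = 0.2236, 0.7979   # Eb/N0 = 10 dB, E(Af) = 0 dB
samples = [sample_rne(sigma, sigma_h, rng.random()) for _ in range(2000)]
print(min(samples), max(samples))
```

Because the CPDF is monotone on its support, bisection always converges; 80 halvings of the interval give far more resolution than the simulation needs.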
Figure 13 and Figure 14 plot the analytical expressions derived for the CPDF of Re, considering Eb/N0 = 10 dB and Eb/N0 = 30 dB, respectively. Figure 13 shows that 99% of the Re samples have amplitudes greater than -1.4 and -1.7 for E(Af) = 0 dB and E(Af) = -15 dB; -30 dB, respectively.
In a similar way, Figure 15 and Figure 16 show the CPDF of the Rne samples considering E(Af) = 0 dB and E(Af) = -20 dB, respectively. Figure 15 shows that 50% of the Rne samples have amplitudes greater than 1.0 and 2.0 for Eb/N0 = 15 dB; 30 dB and Eb/N0 = 0 dB, respectively. About 90% of the samples are lower than 2.0 and 4.0 for Eb/N0 = 15 dB; 30 dB and 0 dB, respectively. The probability of having more samples with small amplitude is then greater. Namely, we can observe that 90% of the

Figure 14. CPDF FRe(r) for Eb/N0 = 30 dB and three fading values


Figure 15. CPDF FRne(r) for E(Af) = 0 dB and three Eb/N0 values

Figure 16. CPDF FRne(r) for E(Af) = -20 dB and three Eb/N0 values

Rne samples are smaller than +0.2, +0.6 and +3.8 for Eb/N0 = 30 dB; 15 dB and 0 dB, respectively. At low Eb/N0 values these samples are more diverse.
From these results we conclude that the Re samples depend more on Eb/N0, while the Rne samples depend more on the average fading E(Af). This fact can be easily explained, since the Gaussian PDF has a greater contribution to the samples that are able to produce errors (Re) and the Rayleigh PDF to those that do not (Rne).
The evaluation of the distribution functions for RVs Re and Rne is a fundamental task when using the ASM
for estimating the performance of block codes with SD decoding. Once they are obtained, it is an easy
task to generate the amplitude samples corresponding to the potential cases of error or non-error as de-
scribed. Since the generated Re and Rne samples play a crucial role in the presented ASM method and its


Figure 17. Theoretical and simulated distribution of the Rne random variable, considering several average fading values

Figure 18. Theoretical and simulated distribution of the Re random variable, considering several average fading values

precision, Figure 17 and Figure 18 show their distribution, using some deduced theoretical expressions
and simulated results. As we can observe, the match is nearly perfect, which is an important indication
for the validation of the simulation results obtainable with this simulation method.


Chapter 8
Teaching Principles of Petri
Nets in Hardware Courses
and Students Projects
Hana Kubátová
Czech Technical University in Prague, Czech Republic

ABSTRACT
The paper presents the principles of using the Petri Net formalism in hardware design courses, especially in the course "Architecture of peripheral devices". Several models and results obtained in individual and group student projects are mentioned. First, the use of the formalism as a modeling tool is presented, proceeding from Place/Transition nets to Coloured Petri nets. Then the possible use of Petri Nets as a hardware specification for direct hardware implementation (synthesized VHDL for FPGA) is described. Implementation and simulation results of three directly implemented models are presented.

DOI: 10.4018/978-1-60566-774-4.ch008

INTRODUCTION

Petri nets (PN) are a well established mechanism for system modeling. They are a mathematically defined formal model, and can be applied to a large variety of systems. PN based models have been widely used due to their ease of understanding, their declarative, logic based and modular modeling principles, and finally because they can be represented graphically. Since Petri nets began to be exploited in the 1960s, many different types of models have been introduced and used. The most popular models are presented in this paper, their main advantages are shown and the differences between them are mentioned. Everything is described with respect to the teaching of formal methods used in digital systems design, especially during the design of peripheral devices of computers, where some asynchronous and parallel actions without any coupling to an internal clock frequency have to be designed and modeled.
This basic knowledge and method of model construction is enlarged and exploited in student group projects. Petri net based models have been used e.g. in the design of a processor or a control system architecture with special properties (e.g. fault-tolerant or fault-secure), hardware-software co-design, computer networks architecture, etc. This has led to the development of PN models in


some Petri net design tools and then the analysis and simulation of the models using these tools. After this high-level design has been developed and validated, it becomes possible, through automatic translation to a Hardware Description Language commonly used in digital hardware design (the VHDL or Verilog design languages), to employ a proper implementation in programmable hardware (FPGA – field programmable gate array, or CPLD – complex programmable logic device).
Our students have been taught and have performed many experiments based on the massive use of FPGA or CPLD development kits. They have practical skills from the 3rd semester of their bachelor studies (Kubátová, 2005; Bečvář, 2006). This enables a custom device to be rapidly prototyped and tested (although an ASIC implementation is also possible). An FPGA version of a digital circuit is likely to be slower than the equivalent ASIC version, due to the regularly structured FPGA wiring channels compared to the ASIC custom logic. However, the easier custom design changes, the possibility of easy FPGA reconfiguration, and relatively easy manipulation make FPGAs a very good final implementation base for our experiments. This is most important due to the strict time schedule of these student group projects – everything must be finished during one semester.
Most models used in the hardware design process are equivalent to the Finite State Machine (FSM) (Adamski, 2001; Erhard, 2001; Gomes, 2001; Uzam, 2001). It is said that the resulting hardware must be deterministic, but we have found real models that are not equivalent to an FSM. Therefore we have concentrated on those models with really concurrent actions, with various types of dependencies (mutual exclusion, parallel, scheduled), and have studied their hardware implementation.
Petri nets are a good platform and tool in the "multiple-level" design process. They can serve as a specification language on all levels of specification, and as a formal verification tool throughout these specification and architectural description levels. The first problem to be solved during the design process is the construction of a good model, which will enable the specification and the further handling and verification of the different levels of this design. Therefore this paper presents such model constructions on the basis of a number of simple examples. These experiments can use the automatic translation process from the PNML language to a synthesized VHDL description that can be easily implemented in an FPGA. We have used a universal VHDL description divided into several blocks. The blocks can be easily modified.
The paper is organized as follows. A brief description of the hardware and software tools used is given in section 2. Section 3 contains the concrete example and the construction of its models by Petri nets of several types. A brief description of the experiments with the dining philosophers' problem, the producer-consumer PN model and a railway critical rail is given in section 4. Section 5 concludes the paper.

TECHNOLOGY AND DESIGN TOOLS

Xilinx and Altera are the two most significant companies in the market of programmable devices. Their products, including circuits, design kits and development systems, are comparable. As Xilinx provides better university support, we have decided to use Xilinx products. ISE 7.1i is the development system in which students design their circuits. Our students have practical skills with the Digilent XCRP design kits (Digilent, 2009), equipped with the CPLD Xilinx CoolRunner XCR3064 (Xilinx, 2009) and the FPGA Xilinx Spartan2E.
The ModelSim simulator is used for simulation. In the introductory and intermediate courses students simulate designs created in Xilinx ISE. In the advanced course, designs are created in HDL Designer and then again simulated in ModelSim.


Figure 1. The Petri net model of two printers working in parallel

Petri net model construction, analysis and simulations are performed with the Petri net software tools Design/CPN (CPN Tools, 2009), JARP (Jarp Petri Nets Analyzer Home Page, 2009) and CPN Tools (Coloured Petri Nets at the University of Aarhus, 2009). Both Microsoft and UNIX based platforms are supported.

PETRI NET DEFINITIONS AND EXAMPLES USED IN COURSES

Petri nets can be introduced in many ways, according to their numerous features and various applications (Girault, 2003). Many attempts have been made to define the principles of the basic types of Petri nets. The way chosen here involves a brief introduction to the basic principles and to the hierarchical construction of the most complicated and widely used Petri net based models used in professional software tools. The example is chosen to explain to our students the necessity of some formalism in hardware design.

Example and its Model

The essential features of Petri nets are the principles of duality, locality, concurrency, graphical and algebraic representation (Girault, 2003). These notions will be presented on a simple model of a handshake used by printers communicating with a control unit that transmits data according to the handshake scheme. The control unit uses the control signal STROBE to signal "data valid" to the target units – printers, receivers. The printers signal "data is printing" to the control unit by ACK signals. After the falling edge of a STROBE signal, all printers must react by the falling edges of their ACK signals to obtain the next portion of data (e.g., a byte). Our Petri net will model the cooperation between only two printers A and B, with one control unit C, see Figure 1.
The following essential conditions and actions have been identified:

• List of conditions:
  ◦ p1: control unit C has a byte prepared for printing
  ◦ p2: control unit C is waiting for signals ACK
  ◦ p3: control unit C is sending a byte and a STROBE signal to printer A
  ◦ p4: printer A is ready to print
  ◦ p5: printer A is printing a byte
  ◦ p6: printer A sends ACK signal
  ◦ p7: control unit C sends STROBE = 0 to A
  ◦ p8: control unit C is sending a byte and a STROBE signal to printer B
  ◦ p9: printer B is ready to print
  ◦ p10: printer B is printing a byte
  ◦ p11: printer B sends ACK signal
  ◦ p12: control unit C sends STROBE = 0 to B


Figure 2. Initial state of the Petri net from Figure 1

• List of actions:
  ◦ t1: control unit C sends STROBE = 1
  ◦ t2: control unit C sends STROBE = 0
  ◦ t3: printer A sends ACK = 1
  ◦ t4: printer A sends ACK = 0
  ◦ t5: printer B sends ACK = 1
  ◦ t6: printer B sends ACK = 0

Separating or identifying passive elements (such as conditions) from active elements (such as actions) is a very important step in the design of systems. This duality is strongly supported by Petri nets. Whether an object is seen as active or passive may depend on the context or on the point of view of the system. The following principles belong to the essential features of Petri nets that express locality and concurrency:

• The principle of duality for Petri nets: there are two disjoint sets of elements: P-elements (places) and T-elements (transitions). Entities of the real world, interpreted as passive elements, are represented by P-elements (conditions, places, resources, waiting pools, channels etc.). Entities of the real world, interpreted as active elements, are represented by T-elements (events, transitions, actions, executions of statements, transmissions of messages etc.).
• The principle of locality for Petri nets: the behavior of a transition depends exclusively on its locality, which is defined as the totality of its input and output objects (pre- and post-conditions, input and output places, …) together with the element itself.
• The principle of concurrency for Petri nets: transitions having a disjoint locality occur independently (concurrently).
• The principle of graphical representation for Petri nets: P-elements are represented by rounded graphical symbols (circles, ellipses, …), T-elements are represented by edged graphical symbols (rectangles, bars, …). Arcs connect each T-element with its locality, which is a set of P-elements. Additionally, there may be inscriptions such as names, tokens, expressions, guards.
• The principle of algebraic representation for Petri nets: for each graphical representation there is an algebraic representation containing equivalent information. It contains the set of places, transitions and arcs, and additional information such as inscriptions.

In contrast to concurrency, there is the notion of conflict. Some transitions can fire independently (e.g. t4 and t6 in Figure 2; only the input places must contain tokens), but there can be Petri nets that model mutual exclusion, see Figure 4. Concurrent transitions behave independently and should not have any impact on each other. Sometimes whether transitions can behave independently depends on the state of the net.
181
Teaching Principles of Petri Nets in Hardware Courses and Students Projects

Figure 3. Concurrency of t3 and t5 transitions: a) after t1 firing both t3 and t5 are enabled, b) after t3 firing t5 still remains enabled

Definition 1. A net is a triple N = (P, T, F) where

• P is a set of places,
• T is a set of transitions, disjoint from P, and
• F ⊆ (P × T) ∪ (T × P) is a flow relation, the set of arcs.

If P and T are finite, the net is said to be finite.
The state of the net is represented by tokens in places. The token distributions in places are called markings. The holding of a condition (which is represented by a place) is represented by a token in the corresponding place. In our example, in the initial state control unit C is prepared to send data (a token in place p1) and printers A and B are ready to print (tokens in places p4 and p9), see Figure 2. A state change or marking change can be performed by firing a transition. A transition "may occur" or "is activated" or "is enabled" or "can fire" if all its input places are marked by a token. Transition firing (the occurrence of a transition) means that tokens are removed from the input places and are added to the output places. The transitions can fire concurrently (simultaneously – independently, e.g. t3 and t5 in Figure 3a) or in conflict (see Figure 4).

Place/Transition Net, Arc-Constant Coloured Petri Net

According to Definition 1 an arc can only be simple – only one token can be transmitted (removed or added) from or to a place by the firing of a transition. Place/transition nets are nets in the sense of Definition 1 together with a definition of arc weights. This can be seen as an abstraction obtained from the more powerful coloured Petri nets by removing the individuality of the tokens. The example derived from the Petri net from Figure 1 is shown in Figure 5. Here the two printers are expressed only by two tokens in one place p4. The condition "all printers are ready" is expressed by two tokens in place p4 and fulfilled by the multiple arc from place p6 to transition t2.
Definition 2. A place/transition net (P/T net) is defined as a tuple NPT = <P, T, Pre, Post> where:


Figure 4. Mutual exclusion of places p5 and p10, transitions t3 and t5 in conflict: a) initial state where t3 and t5 are both enabled, b) the state after t3 firing, where t5 is not enabled

• P is a finite set (the set of places of NPT),
• T is a finite set (the set of transitions of NPT), disjoint from P, and
• Pre, Post ∈ N^|P|×|T| are matrices (the backward and forward incidence matrices of NPT). C = Post − Pre is called the incidence matrix of NPT.

The set of arcs is F := {(p,t) ∈ P × T | Pre[p,t] > 0} ∪ {(t,p) ∈ T × P | Post[p,t] > 0}. This interpretation leads to the alternative definition, which is closer to the graphical representation.
Definition 3. A place/transition net (P/T net) is defined as a tuple NPT = <P, T, F, W>, where:

• (P, T, F) is a net (see Definition 1) with finite sets P and T, and
• W: F → N \ {0} is a function (the weight function).

NPT together with an initial marking m0 is called a P/T net system S = <NPT, m0> or S = <P, T, F, W, m0>. For a net system S = <NPT, m0> the set RS(S) := {m | ∃ w ∈ T*, m0 →w m}, where T* is the set of transition sequences, is the reachability set. FS(S) := {w ∈ T* | ∃ m, m0 →w m} is the set of occurrence-transition sequences (the firing-sequence set) of S. It is sometimes convenient to define the set Occ(S) of occurrence sequences to be the set of all sequences of the form

m0, t0, m1, t1, m2, t2, …, tn−1, mn (n ≥ 1)

such that

mi →ti mi+1 for i ∈ {0, …, n−1}.

The tokens in Figures 2, 3, 4 and 5 are not distinguished from each other. The tokens representing printers A and B are distinguished by their places p4 and p9. A more compact and more natural way is to represent them in one place p4&p9 by individual tokens A and B. Distinguishable tokens are said to be coloured. Colours can be thought of as data types. For each place p, a colour set cd(p) is defined. In our case cd(p4&p9) = {A, B}. For a coloured net we have to specify the colours and, for all places and transitions, particular colour sets (colour domains). Since arc inscriptions may contain different elements or multiple copies of an element, multisets (bags) are used, 'bg'. The bag over a non-empty set A is a function bg: A → N, sometimes denoted as a formal sum ∑a∈A bg(a)'a. The set operations sum and difference are extended to Bag(A) (Girault, 2003).

Figure 5. Place/transition net
183
Teaching Principles of Petri Nets in Hardware Courses and Students Projects

Definition 4. An arc-constant coloured Petri net (ac-CPN) is defined as a tuple

NaC = <P, T, Pre, Post, C, cd>

where

• P is a finite set (the set of places of NaC),
• T is a finite set (the set of transitions of NaC), disjoint from P,
• C is the set of colour classes,
• cd: P → C is the colour domain mapping, and
• Pre, Post ∈ Β^|P|×|T| are matrices (the backward and forward incidence matrices of NaC) such that Pre[p,t] ∈ Bag(cd(p)) and Post[p,t] ∈ Bag(cd(p)) for each (p,t) ∈ P × T. C = Post − Pre is called the incidence matrix of NaC.

In this definition Β is taken as the set Bag(A), where A is the union of all colour sets from C. The difference operator in C = Post − Pre is a formal one here, i.e. the difference is not computed as a value. A marking is a vector m such that m[p] ∈ Bag(cd(p)) for each p ∈ P. The reachability set, firing sequence, net system and occurrence have the same meaning as for P/T nets.

Coloured Petri Net

The construction of Coloured Petri nets (CPN) is discussed in the following examples and figures derived from our original model of parallel printers. The arc-constant CPN in Figure 7 is simply derived from the initial example, with the same meaning of all places and transitions. Places p4 and p9 (and p5 and p10), originally used for distinguishing the two printers, are connected ("folded") into one place, here named p4&p9 (p5&p10). For a transition t, it is necessary to indicate which of the individual tokens should be removed (with respect to its input places). This is done by the inscriptions on the corresponding arcs in Figure 7. Transition t3 can fire if there is an object A in place p4&p9 (and an indistinguishable token in the place p3). When it fires, token A is removed from place p4&p9 and is added to place p5&p10, and an (indistinguishable) token is added to p6. Places p4&p9 and p5&p10 have the colour domain printers = {A, B} denoting printer A and printer B. The control process is modelled by tokens (STROBE). Colour domains are represented by lower case italics near the place symbols in Figure 6. Places p3, p6, p7, p8, p11 and p12 are assumed to hold an indistinguishable token and therefore have the colour domain token = {•}, which is assumed to hold by default. The net from Figure 2 (an ordinary or black-and-white PN) and the net from Figure 6 (a coloured PN) contain the same information and have similar behavior. Only two places are "safe". This CPN is called arc-constant, since the inscriptions on the arcs are constants and not variables.
Colour sets:

• control = {s}
• printers = {A, B}
• constants: s, A, B

The next step will be to simplify the graph structure of the ac-CPN. We will represent the messages "STROBE signal sent to printer A" (stA) and "STROBE signal sent to printer B" (stB), the ACK signal sent from printer A (ackA) and the ACK signal sent from printer B (ackB). We can connect places p3 and p8, p6 and p11, p7 and p12; in Figure 7 they are named by the first name of the connected places. The behaviour of the net is the same. As a new feature of this net, transition t2 has to remove both signals ackA and ackB from place p6. The expression ackA + ackB denotes the set {ackA, ackB}. Transition t3 is enabled if both ackA and ackB are in place p4 and by t3 firing both tokens are removed. Therefore, in the general case, bags (multisets) will be used instead of sets. The transition firing rule for arc-constant CPN can be expressed as follows: all input places must contain at


Figure 6. Arc-constant CPN

least as many individual tokens as specified by the corresponding arcs. The transition firing means that these tokens are removed and added to the output places as indicated by the arc inscriptions. The firing rule for ac-CPN is sketched in Figure 8.
Colour sets:

• control = {s}
• printers = {A, B}
• ack = {ackA, ackB}
• strobe = {stA, stB}
• constants: s, A, B, ackA, ackB, stA, stB

Figure 7. Arc-constant CPN without three places

Figure 8. Firing rule for ac-CPN
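The ac-CPN firing rule just stated (every input place must contain at least the multiset of tokens written on its arc) maps naturally onto Python multisets. The fragment below is an illustrative sketch loosely following the printer example, not a complete model of the net:

```python
from collections import Counter

# Arc inscriptions are multisets (bags) of coloured tokens.
# pre[t][p] / post[t][p]: bag removed from / added to place p when t fires.
pre  = {"t2": {"p6": Counter({"ackA": 1, "ackB": 1})}}
post = {"t2": {"p7": Counter({"stA": 1, "stB": 1})}}

def enabled(marking, t):
    # every input place must contain at least the tokens on its arc
    return all(marking.get(p, Counter())[tok] >= n
               for p, need in pre[t].items()
               for tok, n in need.items())

def fire(marking, t):
    m = {p: Counter(c) for p, c in marking.items()}
    for p, need in pre[t].items():
        m[p] = m[p] - need                 # remove the arc multiset
    for p, add in post[t].items():
        m[p] = m.get(p, Counter()) + add   # add the arc multiset
    return m

m0 = {"p6": Counter({"ackA": 1, "ackB": 1})}
m1 = fire(m0, "t2")
print(enabled(m0, "t2"), dict(m1["p7"]))   # True {'stA': 1, 'stB': 1}
```

With only one token colour per place this degenerates to the P/T-net rule, which is exactly the abstraction step described above.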


Figure 9. Firing rule for CPN

In a coloured Petri net the incidence matrices cannot be defined over Β = Bag(A) as for arc-constant CPNs. The different modes or bindings of a transition have to be represented. These are called colours, and are denoted by cd(t). Therefore the colour domain mapping cd is extended from P to P ∪ T. In the entries of the incidence matrices, for each transition colour a multiset has to be specified. This is formalized by a mapping from cd(t) into the bags of colour sets over cd(p) for each (p,t) ∈ P × T.
Our example expressed by a CPN is shown in Figure 10. The number of places and transitions corresponds to the P/T net in Figure 5, but the expressive power is greater. For each transition a finite set of variables is defined which is strictly local to this transition. These variables have types or colour domains which are usually the colours of the places connected to the transition. In Figure 10 the set of variables of transition t3 is {x, y}. The types of x and y are dom(x) = printers and dom(y) = ack, respectively. An assignment of values to variables is called a binding. Not all possible bindings can be allowed for a correctly behaving net. The appropriate restriction is defined by a predicate at the transition, which is called a guard. Now the occurrence (firing) rule is as follows, see Figure 9, where all places have the colour set cd(p) = objects = {a, b, c}, and the colour domain of all variables is also objects:

1. Select a binding such that the guard holds (associate with each variable a value of its colour), Figure 9b).
2. Temporarily replace the variables by the associated constants, Figure 9c).
3. Apply the firing rule for ac-CPN from Figure 8, as shown in Figure 9d) (remove all appropriate tokens from the input places and add to the output places according to the arc inscriptions).

The firing rule should be understood as a single step from Figure 9a) to d). If the binding x=a, y=b, z=c is selected, then the transition is not enabled in this binding, since the guard is not satisfied. The selection of a binding is local to a transition.
Definition 5. A coloured Petri net (CPN) is defined by a tuple NCPN = <P, T, Pre, Post, C, cd> where

• P is a finite set (the set of places of NCPN),
• T is a finite set (the set of transitions of NCPN), disjoint from P,
• C is the set of colour classes,
• cd: P ∪ T → C is the colour domain mapping, and
• Pre, Post ∈ Β^|P|×|T| are matrices (the backward and forward incidence matrices of NCPN) such that Pre[p,t]: cd(t) → Bag(cd(p)) and Post[p,t]: cd(t) → Bag(cd(p)) are mappings for each pair (p,t) ∈ P × T.
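The three-step occurrence rule above (choose a binding satisfying the guard, substitute constants for variables, then fire as in an ac-CPN) can be sketched as a binding enumeration. The net fragment and guard below are illustrative, not the exact net of Figure 9:

```python
from itertools import product

objects = ("a", "b", "c")     # colour set of the places
variables = ("x", "y")        # variables local to the transition

def guard(b):
    # Illustrative guard: the two variables must be bound to different colours
    return b["x"] != b["y"]

def enabled_bindings(marking):
    # Step 1: enumerate candidate bindings over the colour domains; keep those
    # whose guard holds and whose substituted arc tokens are present (steps 2-3).
    found = []
    for values in product(objects, repeat=len(variables)):
        b = dict(zip(variables, values))
        if guard(b) and marking.count(b["x"]) >= 1 and marking.count(b["y"]) >= 1:
            found.append(b)
    return found

m = ["a", "b"]                # tokens currently in the transition's input place
for b in enabled_bindings(m):
    print(b)
```

Exhaustive enumeration is only practical for small colour domains, which is one reason hardware implementations prefer the arc-constant form, where no binding search is needed.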


Figure 10. CPN model of printers

Figure 11. The dining philosophers PN model

Β can be taken as the set of mappings of the form f: cd(t) → Bag(cd(p)). C = Post − Pre is called the incidence matrix.
The mapping Pre[p,t]: cd(t) → Bag(cd(p)) defines, for each transition colour (occurrence mode) β ∈ cd(t) of t, a bag Pre[p,t](β) ∈ Bag(cd(p)) denoting the token bag to be removed from p when t occurs (fires) in colour β. In a similar way, Post[p,t](β) specifies the bag to be added to p when t occurs (fires) in colour β. The overall effect of the action performed on the transition firing is given by a tuple corresponding to the arcs connected with t.
The colours of the transition can be seen as particular subsets of tuples cd(t) ⊆ Bag(cd(p1)) × … × Bag(cd(p|P|)), i.e., vectors having an entry for each place. But this can be an arbitrary set as well. Effective representations of this set are necessary. The mappings Pre[p,t] and Post[p,t] can be denoted by vectors, projections, functions and terms with functions and variables (see Figure 10).

PETRI NET APPLICATIONS IN STUDENTS PROJECTS

We performed several experiments with the direct implementation of Petri net models in hardware (FPGA). The results were presented in (Kubátová, 2004; Kubátová, 2003, June; Kubátová, 2003, September). These models are briefly described here. They were constructed in software tools (Design/CPN or the JARP editor), and from these tools their unified description in the PNML language (Kindler, 2004) was directly transformed (by our pnml2vhdl compiler) to synthesized VHDL and then into the FPGA bitstream. The non-deterministic selection of one of several enabled transitions was implemented by hardware aids (a linear feedback shift register – LFSR and a counter with preset), (Kubátová, 2003).
We have modeled 5 philosophers who are dining together, Figure 11. The philosophers each have two forks next to them, both of which they need in order to eat. As there are only five forks it is not possible for all 5 philosophers to be eating at the same time. The Petri net shown here models a philosopher who takes both forks simultaneously, thus preventing the situation where some philosophers may have only one fork but are not able to pick up the second fork as their neighbors have already done so. The token in the fork place


(places P1, P2, …, P5) means that this fork is free. The token in the eat place (places P6, P7, …, P10) means that this philosopher is eating.
We also performed experiments with a "producer-consumer" system, Figure 12. Our FPGA implementation used 59 CLB blocks and 47 flip-flops, with a maximum working frequency of 24.4 MHz. (The relatively large number of flip-flops is consumed mainly by the hardware random choice of a transition to be fired.) The maximum input capacity parameter for places (the size of the counter) was set to the value 3. The average occupation of the place called "buffer" (the average number of tokens) during 120 cycles (transition firings) was 1.43 (Kubátová, 2004).
Our real application experiment models a railway with one common critical part – a rail, see Figure 13. The PN model, Figure 14, has the initial marking where tokens are in places "1T" and "3T" (two trains are on rails 1 and 3, respectively), "4F" (the critical rail is free) and "2F" (rail 2 is free). This model has eight places, two places T (train) and F (free) for each rail: a token in the first place means that a train is on this rail (T-places), and a token in the second means that this rail is free (F-places). This was described and simulated in the Design/CPN system and then implemented in the real FPGA design kit ProMoX (Kubátová, 2003, June).

CONCLUSION

This paper describes our practical use of Petri nets in digital hardware design courses. Different levels and types, and practical and concrete styles of modeling, are presented on the basis of simple and clear examples which are understandable to our students from the 8th semester. The example presented here, in which parallel printers are served by a controlling process, was chosen due to its practical presentation and practical iterative construction during the teaching process at the Department of Computer Science and Engineering of the Czech Technical University in Prague.
Our recent and future effort involves the practical and direct implementation of Petri net based models into FPGA based design kits involving some visible inputs and outputs and with respect to quantitative properties: space, time, power and reliability.

ACKNOWLEDGMENT

This research was in part supported by the MSM6840770014 research program.

Figure 12. Producer – consumer model


Figure 13. Railway semaphore model

Figure 14. The PN model of 4 rails

REFERENCES

Adamski, M. (2001). A Rigorous Design Methodology for Reprogrammable Logic Controllers. In Proc. DESDes'01 (pp. 53–60), Zielona Gora, Poland.

Bečvář, M., Kubátová, H., & Novotný, M. (2006). Massive Digital Design Education for large amount of undergraduate students. Proceedings of EWME 2006 (pp. 108–111), Royal Institute of Technology, Stockholm.

Coloured Petri Nets at the University of Aarhus (2009). Retrieved from http://www.daimi.au.dk/CPnets/

Digilent (2009). Retrieved from http://www.digilentinc.com

Erhard, W., Reinsch, A., & Schober, T. (2001). Modeling and Verification of Sequential Control Path Using Petri Nets. In Proc. DESDes'01 (pp. 41–46), Zielona Gora, Poland.

Girault, C., & Valk, R. (2003). Petri Nets for Systems Engineering. Berlin: Springer-Verlag.

Gomes, L., & Barros, J.-P. (2001). Using Hierarchical Structuring Mechanism with Petri Nets for PLD Based System Design. In Proc. DESDes'01, Zielona Gora, Poland.

Jarp Petri Nets Analyzer Home Page (2009). Retrieved from http://jarp.sourceforge.net/

Kindler, E. (Ed.). (2004). Definition, Implementation and Application of a Standard Interchange Format for Petri Nets. In Proceedings of the Workshop of the satellite event of the 25th International Conf. on Application and Theory of Petri Nets, Bologna, Italy.

Kubátová, H. (2003, June). Petri Net Models in Hardware. ECMS 2003 (pp. 158–162), Technical University, Liberec, Czech Republic.

Kubátová, H. (2003, September). Direct Hardware Implementation of Petri Net based Models. Proceedings of the Work in Progress Session of EUROMICRO 2003 (pp. 56–57), Linz, Austria: J. Kepler University – FAW.

Kubátová, H. (2004). Direct Implementation of Petri Net Based model in FPGA. In Proceedings of the International Workshop on Discrete-Event System Design – DESDes'04 (pp. 31–36). Zielona Gora: University of Zielona Gora.

Kubátová, H., & Novotný, M. (2005). Contemporary Methods of Digital Design Education. Electronic Circuits and Systems Conference (pp. 115–118), Bratislava FEI, Slovak University of Technology.
Teaching Principles of Petri Nets in Hardware Courses and Students Projects

Tools, C. P. N. (2009). Retrieved from http://wiki. Xilinx, (2009). Retrieved from http://www.xilinx.
daimi.au.dk/cpntools/cpntools.wiki com/f
Uzam, M., Avci, M., & Kürsat, M. (2001). Digital
Hardware Implementation of Petri Nets Based
Specification: Direct Translation from Safe
Automation Petri Nets to Circuit Elements. In
Proc. DESDes’01, (pp. 25 – 33), Zielona Gora,
Poland.


Chapter 9
An Introduction to
Reflective Petri Nets
Lorenzo Capra
Università degli Studi di Milano, Italy

Walter Cazzola
Università degli Studi di Milano, Italy

ABSTRACT
Most discrete-event systems are subject to evolution during their lifecycle. Evolution often implies the development of new features, and their integration in deployed systems. Taking evolution into account from the design phase on is therefore mandatory. A common approach consists of hard-coding the foreseeable evolutions at the design level. Setting aside the obvious difficulties of this approach, it also pollutes the system's design with details not concerning functionality, which hamper analysis, reuse and maintenance. Petri nets, as a central formalism for discrete-event systems, are not exempt from pollution when facing evolution. Embedding evolution in Petri nets requires expertise, as well as early knowledge of the possible evolutions. The complexity of the resulting models is likely to hinder the consolidated analysis algorithms for Petri nets. We introduce Reflective Petri nets, a formalism for dynamic discrete-event systems. Based on a reflective layout, in which functional aspects are separated from evolution, this model preserves the descriptive effectiveness and the analysis capabilities of Petri nets. Reflective Petri nets are provided with a timed state-transition semantics.

INTRODUCTION

Evolution is becoming a very hot topic in discrete-event system engineering. Most systems are subject to evolution during their lifecycle: think, e.g., of mobile ad-hoc networks, adaptable software, business processes, and so on. Such systems need to be updated, or extended with new features, during their lifecycle. Evolution can often imply a complete system redesign, the development of new features and their integration in deployed systems.
It is widely recognized that taking evolution into account from the system design phase on should be considered mandatory, not only good practice. The design of dynamic/adaptable discrete-event systems calls for adequate modeling formalisms and tools.

DOI: 10.4018/978-1-60566-774-4.ch009

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
An Introduction to Reflective Petri Nets

Unfortunately, the well-established formalisms for discrete-event systems lack features for naturally expressing possible run-time changes to a system's structure. A system's evolution is almost always emulated by directly enriching the original design information with aspects concerning possible evolutions. This approach has several drawbacks:

• not all possible evolutions are foreseeable;
• functional design is polluted by details related to evolutionary design: formal models turn out to be confused and ambiguous, since they do not represent a snapshot of the current system only;
• evolution is not really modeled; it is specified as a part of the behavior of the whole system, rather than as an extension that could be used in different contexts;
• pollution hinders system maintenance and reduces the possibility of reuse.

Petri nets, due to their static layout, suffer from these drawbacks as well when used to model adaptable discrete-event systems. The common modeling approach consists of merging the Petri net specifying the base structure of a dynamic system with information on its foreseeable evolutions. Such an approach pollutes the Petri net model with details not pertinent to the system's current configuration. Pollution not only makes Petri net models complex, hard to read and to manage; it also affects the powerful analysis techniques/tools that classical Petri nets are provided with.
System evolution is an aspect orthogonal to system behavior that crosscuts both system deployment and design; hence it could be subject to separation of concerns (Hürsch & Videira Lopes, 1995), a concept traditionally developed in software engineering. Separating evolution from the rest of a system is worthwhile, because evolution is made independent of the evolving system and the above mentioned problems are overcome. Separation of concerns can be applied to a Petri net-based modeling approach as well. Design information (in our case, a Petri net modeling the system) will not be polluted by non-pertinent details and will exclusively represent current system functionality, without patches. This leads to simpler and cleaner models that can be analyzed without discriminating between what is and what could be system structure and behavior. Reflection (Maes, 1987) is one of the mechanisms that easily permits this separation of concerns.
Reflection is defined as the activity, both introspection and intercession, performed by an agent when doing computations about itself (Maes, 1987). A reflective system is layered in two or more levels (base-, meta-, meta-meta-level and so on) constituting a reflective tower; each layer is unaware of the above one(s). Base-level entities perform computations on the application domain entities, whereas entities on the meta-level perform computations on the entities residing on the lower levels. The computational flow passes from a lower level (e.g., the base-level) to the adjacent level (e.g., the meta-level) by intercepting some events and specific computations (shift-up action), and back when the meta-computation has finished (shift-down action). All meta-computations are carried out on a representative of the lower level(s), called reification, which is kept causally connected to the original level. For details, see Cazzola, 1998.
Similarly to what is done in Cazzola, Ghoneim, & Saake, 2004, the meta-level can be programmed to evolve the base-level structure and behavior when necessary, without polluting it with extra information. In Capra & Cazzola, 2007 we applied the same idea to the Petri nets domain, defining a reflective Petri net model (hereafter referred to as Reflective Petri nets) that separates the Petri net describing a system from the high-level Petri net (Jensen & Rozenberg, 1991) that describes how this system evolves upon the occurrence of some events/conditions. In this chapter we introduce Reflective Petri nets, and we propose a simple


state-transition semantics as a first step toward the implementation of a (performance-oriented) discrete-event simulation engine. With respect to other proposals that recently appeared with similar goals (Cabac, Duvignau, Moldt, & Rölke, 2005; Hoffmann, Ehrig, & Mossakowski, 2005), Reflective Petri nets do not define a new Petri net paradigm; rather, they rely upon a combination of consolidated classes of Petri nets and reflection concepts. This gives the possibility of using existing tools and analysis techniques in a fully orthogonal fashion. The short-term perspective is to integrate the GreatSPN graphical simulation environment (Chiola, Franceschinis, Gaeta, & Ribaudo, 1995) to directly support Reflective Petri nets.
In the rest of the chapter, we briefly present the (stochastic) Petri net classes used for the two levels of the reflective model; then we introduce Reflective Petri nets and the associated terminology, focusing on the (high-level) Petri net component (called framework) realizing the causal connection between the logical levels of the reflective layout; afterwards we provide a stochastic state-transition semantics for Reflective Petri nets; finally we present some related work and draw our conclusions and perspectives. An application of Reflective Petri nets to dynamic workflow design will be presented in the companion chapter (Capra & Cazzola, 2009).

SWN AND GSPN BASICS

Colored Petri nets (CPN) (Jensen & Rozenberg, 1991) are a major Petri net extension belonging to the high-level Petri nets category. For the meta-level of Reflective Petri nets we have chosen Well-formed Nets (WN) (Chiola, Dutheillet, Franceschinis, & Haddad, 1990), a CPN flavor (enriched with priorities and inhibitor arcs) retaining expressive power, characterized by a structured syntax. For performance analysis purposes, we consider Stochastic Well-formed Nets (SWN) (Chiola, Dutheillet, Franceschinis, & Haddad, 1993). SWN are the high-level counterpart of Generalized Stochastic Petri nets (GSPN) (Ajmone Marsan, Conte, & Balbo, 1984), the Petri net class used for the base-level. In other words, the unfolding of a SWN is defined in terms of a GSPN.
This section introduces SWN semi-formally, by an example; the GSPN definition is in large part derived from it. Figure 1 shows the portion of the evolutionary framework (Figure 3) that removes a given node from the base-level PN modeling the system (reified as a WN marking). The removal of a node provokes, as a side effect, the withdrawal of any arcs connected to the node itself. Trying to remove a marked place or a non-existing node causes a restart action. We assume hereafter that the reader has some familiarity with ordinary Petri nets.
A SWN is an 11-tuple (T, P, {C1, …, Cn}, C, W+, W−, H, Φ, Π, M0, λ), where P is the finite set of places and T is the finite set of transitions, P ∩ T = ∅. With respect to ordinary Petri nets, places may contain "colored" tokens of different identity. C1, …, Cn are finite basic color classes. In the example there are only two classes, C1 and C2, denoting the base-level nodes and the different kinds of connections between them, respectively. A basic color class may be partitioned in turn into static subclasses, Ci = ∪k Ci,k.
C assigns to each s ∈ P ∪ T a color domain, defined as a Cartesian product of basic color classes: e.g., tokens staying at place BLreif|Arcs are triplets ⟨n1, n2, k1⟩ ∈ C1 × C1 × C2. A CPN transition actually folds together many elementary ones, so one speaks of instances of a colored transition. In Figure 1, C(t) = C1 for t ≠ delAFromToN; C(delAFromToN) = C1 × C1 × C1 × C2. An instance of delAFromToN is thus a 4-tuple ⟨n1, n2, n3, k1⟩.
A SWN marking M maps each place p to an element of Bag(C(p)). M0 denotes the initial


Figure 1. A Well-Formed Net

marking.
W−, W+ and H assign to each pair (t, p) ∈ T × P an (input, output and inhibitor, respectively) arc function C(t) → Bag(C(p)). Any arc function is formally expressed as a (linear combination of) function-tuple(s) ⟨f1, …, fn⟩, whose components are called class-functions. Each fi is a function C(t) → Bag(Cj), Cj being the color class in the i-th position of C(p), and is called a class-j function. Letting F: ⟨f1, …, fn⟩ and tc: ⟨c1, …, cm⟩ ∈ C(t), then F(tc) = f1(tc) × … × fn(tc), where the operator × denotes the multiset Cartesian product. Each fi is expressed in terms of elementary functions: the only ones appearing in this chapter are the projection Xk (k ≤ m), defined as Xk(tc) = ck, and the constants S and Sj,k, mapping any tc to Cj and Cj,k, respectively.
⟨X2, X3, X4⟩ in Figure 1 (surrounding transition delAFromToN) is a function-tuple whose 1st and 2nd components are class-1 functions, while the 3rd one is a class-2 function: ⟨X2, X3, X4⟩(⟨n1, n2, n3, k1⟩) = 1·n2 × 1·n3 × 1·k1, that is, 1·⟨n2, n3, k1⟩.
Φ associates a guard [g]: C(t) → {true, false} with each transition t. A guard is built upon a set of basic predicates testing equality between projection applications, and membership in a given static subclass. As an example, [X1 = X2 ∨ X1 = X3](⟨n1, n2, n1, k1⟩) = true.
A transition color instance tc ∈ C(t) has concession in M if and only if, for each place p:

(i) W−(t, p)(tc) ≤ M(p),
(ii) H(t, p)(tc) > M(p),
(iii) Φ(t)(tc) = true

(the operators >, ≤, +, − are here implicitly extended to multisets). Π: T → ℕ assigns a priority level to each transition. Level 0 is for timed transitions, while greater priorities are for immediate transitions, which fire in zero time.
tc is enabled in M if it has concession, and no higher-priority transition instance has concession in M. It can fire, leading to M':

∀p ∈ P: M'(p) = M(p) + W+(t, p)(tc) − W−(t, p)(tc)

Finally, λ: T → ℝ+ assigns a rate, characterizing an exponential firing delay, to each timed transition, and a weight to each immediate transition. Weights are used to probabilistically solve conflicts between immediate transitions with equal priority.
The behavior of a SWN model is formally described by a state-transition graph (or reachability graph), which is built starting from M0. As a result of the SWN time representation, the SWN reduced reachability graph, which is obtained by suitably removing those markings (called vanishing) enabling some immediate transitions, is isomorphic to a Continuous Time Markov Chain (CTMC) (Chiola, Dutheillet, Franceschinis, & Haddad, 1993).
Special restart transitions, denoted by the prefix rest, are used in our models once again for modeling convenience (we might always trace them back to the standard SWN definition). While the enabling rule of restart transitions doesn't change, their firing leads a SWN model back to the initial marking.
The class of Petri nets used for the base-level corresponds to the unfolded (uncolored) version of SWN, that is, GSPN (Ajmone Marsan, Conte, & Balbo, 1984). A GSPN is formally an 8-tuple (T, P, W+, W−, H, Π, m0, λ). With respect to the SWN definition, W+, W−, H are functions T × P → ℕ. Analogously, a marking m is a mapping P → ℕ. The definitions of concession, enabling and firing given before are still valid (guards have disappeared), but for replacing F(t, p)(tc) by F(t, p), and interpreting the operators in the usual way.

SWN Symbolic Marking Notion

The particular syntax of SWN color annotations allows system symmetries to be implicitly embedded into SWN models. This way, efficient algorithms can be applied, e.g., to build a compact Symbolic Reachability Graph (SRG) (Chiola, Dutheillet, Franceschinis, & Haddad, 1997), with an associated lumped CTMC, or to launch symbolic discrete-event simulation runs. These algorithms rely upon the notion of Symbolic Marking (SM).
A SM provides a syntactical equivalence relation on ordinary SWN colored markings: two markings belong to the same SM if and only if they can be obtained from one another by means of permutations on color classes that preserve static subclasses.
Formally, a SM M̂ comprises two parts, specifying the so-called dynamic subclasses and the distribution of colored symbolic tokens (tuples built of dynamic subclasses) over places, respectively. Dynamic subclasses define a parametric partition of color classes preserving static subclasses: let Ĉi and si denote the set of dynamic subclasses of Ci (in a given M̂) and the number of static subclasses of Ci, respectively. The j-th dynamic subclass Z_j^i ∈ Ĉi refers to a static subclass, denoted d(Z_j^i), 1 ≤ d(Z_j^i) ≤ si, and has an associated cardinality |Z_j^i|, i.e., it represents a parametric set of colors (we shall consider cardinality-one dynamic subclasses). It must hold:

∀k: 1 ≤ k ≤ si: Σ_{j: d(Z_j^i) = k} |Z_j^i| = |Ci,k|.

The token distribution in M̂ is defined by a


function mapping each place p to a multiset on the symbolic color domain of p, Ĉ(p), obtained by replacing Ci with Ĉi in C(p).
Among several possible equivalent forms, the SM canonical representative (Chiola, Dutheillet, Franceschinis, & Haddad, 1997) provides a univocal representation for SM, based on a lexicographic ordering of the dynamic subclass distribution over places.

REFLECTIVE PETRI NETS

The Reflective Petri nets approach (Capra & Cazzola, 2007) quite strictly adheres to the classical reflective paradigm (Cazzola, 1998). It permits anyone having a basic knowledge of ordinary Petri nets to model a system and, separately, its possible evolutions, and to dynamically adapt the system's model when evolution occurs.
The adopted reflective architecture (sketched in Figure 2) is structured in two logical layers. The first layer, called base-level PN, is represented by the GSPN specifying the system prone to evolve, whereas the second layer, called meta-level, is represented by the evolutionary meta-program; in our case the meta-program is a SWN composed of the evolutionary strategies, which might drive the evolution of the base-level PN. More precisely, in the description below we will refer to the (untimed) carriers of SWN (i.e., WN nets) and GSPN, respectively, according to (Capra & Cazzola, 2007). Considering also the stochastic extension is straightforward, as discussed at the end of the next sub-section.
We realistically assume that several strategies

Figure 2. A Snapshot of the Reflective Layout


Figure 3. A Detailed View of the Framework Implementing the Evolutionary Interface


are possible at a given instant: in such a case one is selected in a non-deterministic way (default policy). Evolutionary strategies have a transactional semantics: either they succeed, or they leave the base-level PN unchanged.
The reflective framework, realized by a WN as well, is responsible for really carrying out the evolution of the base-level PN. It reifies the base-level PN into the meta-level as a colored marking of a subset of places, called the base-level reification, with some analogy to what is proposed in Valk, 1998. The base-level reification is updated every time the base-level PN enters a new state, and is used by the evolutionary meta-program to observe (introspection) and manipulate (intercession) the base-level PN. Each change to the reification will be reflected on the base-level PN at the end of a meta-program iteration; i.e., the base-level PN and its reification are causally connected, and the reflective framework is responsible for maintaining this connection.
According to the reflective paradigm, the base-level PN runs irrespective of the evolutionary meta-program. The evolutionary meta-program is activated (shift-up action), i.e., a suitable strategy is put into action, under two conditions, not mutually exclusive: i) when triggered by an external event, and/or ii) when the base-level PN model reaches a given configuration.
Intercession on the base-level PN is carried out in terms of basic operations on the base-level reification suggested by the evolutionary strategy, called the evolutionary interface, which permits any kind of evolution regarding both the structure and the current state (marking) of the base-level PN.
Each evolutionary strategy works on a specific area of the base-level PN, called its area of influence. A conflict could arise when the changes induced by the selected strategy are reflected back (shift-down action) on the base-level, since the influence area's local state could vary, irrespective of meta-program execution. To avoid possible inconsistency, the strategy must explicitly preserve the state (marking) of this area during its execution. To this aim the base-level execution is temporarily suspended (using priority levels) until the reflective framework has inhibited any changes to the influence area of the selected evolutionary strategy. The base-level PN afterward resumes. This approach favors concurrency between levels and, in perspective, between evolutionary strategies as well.
The whole reflective architecture is characterized by a fixed part (the reflective framework WN) and by a part varying from time to time (the base-level PN and the WN representing the meta-program). The framework hides evolutionary aspects from the base-level PN. This approach permits a clean separation between the evolutionary model and the evolving system model (see the companion chapter (Capra & Cazzola, 2009) for the benefits), which is updated only when necessary. So analysis/validation can be carried out separately on either model, without any pollution.

Reflective Framework

The framework formalization in terms of (S)WN allows us to specify complex evolutionary patterns for the base-level PN in a simple, unambiguous way.
The reflective framework (Figure 3), driven by the content of the evolutionary interface, performs a sort of concurrent rewriting on the base-level PN, suitably reified as a WN marking.
Places with prefix "BLreif"1 belong to the base-level reification (BLreif), while those having prefix "EvInt" belong to the evolutionary interface (EvInt). Both categories of places represent interfaces to the evolutionary strategy sub-model.
While the topology and annotations (color domains, arc functions, and guards) of the framework are fixed and generic, the structure of basic color classes and the initial marking need to be instantiated for setting a link between meta-level and base-level. In some sense they are similar to


formal parameters, which are bound to a given base-level PN.
Let BL0: (P0, T0, W0+, W0−, H0, Π0, m0) be the base-level PN at system start-up. The framework basic color classes are C1: NODE, C2: ArcType. We have Definition 1, where P0 ⊆ Place, T0 ⊆ Tran.
Class ArcType identifies the two types of WN arcs, input/output and inhibitor. Class NODE collects the potential nodes of any base-level PN evolution; therefore it should be set large enough to be considered as a logically unbounded repository. The above partitioning of NODE into singleton static subclasses may be considered as a default choice, which might be further adapted depending on modeling/analysis needs. Symbols pi (tj) denote base-level places (transitions) that can be explicitly referred to in a given evolutionary strategy. Instead, symbols xi (yj) denote anonymous places (transitions) added from time to time to the base-level without being explicitly named. To make the automatic updating of the base-level reification possible (as explained in the sequel), these elements too can be referred to, but only at the net level, by means of WN constant functions.

Base-Level Reification

The color domains for the base-level PN reification are given below.
Definition 2 (Reification Color Domains)

C(p): NODE  ∀p ∈ BLreif \ {BLreif|Arcs}
C(BLreif|Arcs): ARC = NODE × NODE × ArcType

The reification of the base-level into the framework, i.e., its encoding as a WN marking, takes place at system start-up (initialization of the reification), and just after the firing of any base-level transition, when the current reification is updated.
Definition 3 (reification marking) The reification of a Petri net BL: (P, T, W+, W−, H, Π, m0), reif(BL), is the marking:

M(BLreif|Nodes) = Σ_{n ∈ P∪T} 1·⟨n⟩
M(BLreif|Prio) = Σ_{t ∈ T} (Π(t) + 1)·⟨t⟩
M(BLreif|Marking) = Σ_{p ∈ P} m0(p)·⟨p⟩
∀p ∈ P, t ∈ T:
M(BLreif|Arcs)(⟨p, t, i/o⟩) = W−(p, t)
M(BLreif|Arcs)(⟨t, p, i/o⟩) = W+(p, t)
M(BLreif|Arcs)(⟨p, t, h⟩) = H(p, t)
M(BLreif|Arcs)(⟨t, p, h⟩) = 0

The evolutionary framework's colored initial marking (M0) is the reification of the base-level PN at system start-up (reif(BL0)). Place BLreif|Nodes holds the set of base-level nodes; the marking of place BLreif|Arcs encodes the connections between them: the term 2⟨t2, p1, i/o⟩ corresponds to an output arc of weight 2 from transition t2 to place p1.
Transition priorities are defined by the marking of BLreif|Prio: if t2 is associated with priority level k, there will be the term (k + 1)·⟨t2⟩ in BLreif|Prio. The above three places represent the base-level topology: any change operated by the

Definition 1. (Basic Color Classes)

ArcType = {i/o} ∪ {h}
NODE = Place ∪ Tran ∪ {null}, where
Place = {p1, p2, …} (named places) ∪ {x1, x2, …} (unnamed places)
Tran = {t1, t2, …} (named transitions) ∪ {y1, y2, …} (unnamed transitions)


evolutionary strategy to their marking causes a change to the base-level PN structure that will be reflected at any shift-down from the meta-level to the base-level.
The marking of place BLreif|Marking defines the base-level (current) state: the multiset 2⟨p1⟩ + 3⟨p2⟩ represents a base-level marking where places p1 and p2 hold two and three tokens, respectively. At the beginning, BLreif|Marking holds the base-level initial state.
The marking of BLreif|Marking can be modified by the evolutionary strategy itself, causing a real change to the base-level current state immediately after the shift-down action.
Conflicts and inconsistencies due to the concurrent execution of several strategies are avoided by defining an influence area for each strategy; such an influence area delimits a critical region that can be accessed only by one strategy at a time. More details on the influence areas are given in the section about the model semantics.
The meaning of each element of the BLreif interface is summarized in Table 1. Let us only remark that some places of the interface (e.g., BLreif|Arcs) hold multisets, while others (e.g., BLreif|Nodes) logically hold only sets (in such a case the reflective framework is in charge of eliminating duplicates).
As subject to change, the base-level reification needs to preserve a kind of well-definiteness over time. Let m̄ be the support of a multiset m, i.e., the set of elements occurring in m.
Definition 4 (well-defined marking) Let n1, n2: NODE, k: ArcType. M is well-defined if and only if:

• M̄(BLreif|Marking) ⊆ Place ∩ M̄(BLreif|Nodes)
• M̄(BLreif|Prio) ≡ Tran ∩ M̄(BLreif|Nodes)
• if n1 occurs in M̄(BLreif|Arcs) then n1 ∈ M̄(BLreif|Nodes)
• ⟨n1, n2, k⟩ ∈ M̄(BLreif|Arcs) ⇒ ⟨n1, n2⟩ ∈ Place × Tran ∨ (⟨n1, n2⟩ ∈ Tran × Place ∧ k = i/o)

The other way round, a well-defined WN marking provides a univocal representation for the base-level PN.
Definition 5 (base-level mapping) The GSPN bl(M): (P, T, W+, W−, H, Π, m0) associated with a well-defined M is such that: P = Place ∩ M̄(BLreif|Nodes), T = Tran ∩ M̄(BLreif|Nodes), ∀p ∈ P m0(p) = M(BLreif|Marking)(p), ∀t ∈ T Π(t) = M(BLreif|Prio)(t) − 1; finally W−, W+, H are set as in Definition 3 (reading the equations from right to left).
From the definitions above it directly follows that bl(reif(BL)) = BL. By the way, M0 is assumed well-defined. Through the algebraic structural calculus for WN introduced in Capra, De Pierro, & Franceschinis, 2005 it has been verified that well-definiteness is an invariant of the evolutionary framework (Figure 3), and consequently of the whole reflective model. The proof, involving a lot of technicalities, is omitted.
Including the time information of GSPN and SWN in the reflective model is immediate, once we restrict to integer values for transition rates/weights (as if λ were a mapping T → ℕ+). The encoding of transition parameters would then be analogous to that of transition priorities. The BLreif interface (and of course also EvInt) would include an additional place BLreif|Param (EvInt|Param), with domain NODE. A base-level transition t1 with firing rate k would be reified by a token k·⟨t1⟩ on place BLreif|Param.

Evolutionary Framework Behavior

The evolutionary framework WN model implements a set of basic transformations (rewritings) on the base-level PN reification. Its structure is modular, being formed by independent subnets (easily recognizable) sharing the interface BLreif, each implementing a basic transformation.
The behavior associated with the evolutionary framework is intuitive. Every place labeled by the
EvInt prefix holds a (set of) basic transformation


Table 1. The Evolutionary Interface API and the Base-Level Reification Data Structure

Evolutionary Interface (an asterisk means that the marking is a set):
EvInt|newTran*: adds an anonymous transition to the base-level reification.
EvInt|newPlace*: adds an anonymous place to the base-level reification.
EvInt|newNode*: adds a given new node to the base-level reification.
EvInt|FlushP*: flushes out the current marking of a place in the base-level reification.
EvInt|IncM: increments the marking of a place in the base-level.
EvInt|decM: decrements the marking of a place in the base-level.
EvInt|newA: adds a new arc between a place and a transition in the base-level reification.
EvInt|delA: deletes an arc between a place and a transition in the base-level reification.
EvInt|delNode*: deletes a given node in the base-level reification (places must be empty).
EvInt|setPrio: changes the priority of a node in the base-level reification.
EvInt|shiftDown*: instructs the framework to reflect the changes on the base-level.

Reification (an asterisk means that the marking is a set):
BLreif|Nodes*: represents the nodes of the base-level PN.
BLreif|Marking: represents the current marking of the base-level PN.
BLreif|Arcs: represents the arcs of the base-level PN.
BLreif|Prio: represents the transition priorities of the base-level PN.

command(s) issued by the evolutionary strategy Term 2〈 p1 〉 occurring on place EvInt|incM


sub-model. Every time a (multiset of) token(s) is is interpreted as ‘‘increase the current marking
put on one of these places, a sequence of immedi- of place p1 of two units’‘. Many commands of
ate transitions implementing the corresponding the same kind can be issued simultaneously,
command(s) is triggered. A succeeding command e.g. 2〈 p1 〉 + 1〈 p3 〉 on EvInt|incM. Depending on
results in changing the base-level reification, that their meaning, some commands are encoded by
is, the marking of BLreif places. multisets (as in the last examples), while other
The implemented basic transformations are: are encoded by sets. Interface EvInt is described
adding/removing given nodes (EvInt|newNode, on Table 1 and is implemented by the net on
EvInt|delNode), adding anonymous nodes Figure 3.
(EvInt|newPlace, EvInt|newTran), adding/re- In some cases command execution result must
moving given arcs (EvInt|newA, EvInt|delA), be returned back: places whose prefix is Res hold
increasing/decreasing the marking of given places command execution results, e.g., places Res|newP
(EvInt|incM, EvInt|decM), flushing tokens out and Res|newT record references to the last nodes
from places (EvInt|FlushP), finally, setting the that have been added to the base-level reification
priority of transitions (EvInt|setPrio). The color anonymously. Initially they hold a null reference.
domain of each place (either NODE or ARC) As interface places, they can be acceded by the
corresponds to the type of command argument, evolutionary strategy sub-model.
except for EvInt|newPlace, EvInt|newTran, which Single commands are carried out in consistent
are uncolored places. and atomic way, and they may have side effects.
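The encoding of commands as multisets of terms can be mimicked with ordinary Python multisets. Below is a minimal sketch (the class and method names are ours, purely hypothetical, not the chapter's WN implementation) of the BLreif data structure together with a few of the basic transformation commands:

```python
from collections import Counter

class BaseLevelReification:
    """Sketch (hypothetical encoding) of the base-level reification:
    nodes, marking, arcs and priorities mirrored as plain collections."""

    def __init__(self):
        self.nodes = set()        # BLreif|Nodes
        self.marking = Counter()  # BLreif|Marking (a multiset)
        self.arcs = Counter()     # BLreif|Arcs, (place, tran, kind) -> multiplicity
        self.prio = {}            # BLreif|Prio

    def incM(self, terms):
        # EvInt|incM: the term 2<p1> is passed here as {"p1": 2}
        self.marking.update(terms)

    def decM(self, terms):
        self.marking.subtract(terms)

    def newA(self, place, tran, kind, mult=1):
        # EvInt|newA: add an arc of the given kind and multiplicity
        self.arcs[(place, tran, kind)] += mult

    def flushP(self, place):
        # EvInt|FlushP: empty the place completely
        self.marking[place] = 0

r = BaseLevelReification()
r.incM({"p1": 2, "p3": 1})   # the multiset 2<p1> + 1<p3> on EvInt|incM
r.flushP("p3")
print(r.marking["p1"])       # 2
```

Each method changes the reification in one atomic step, loosely matching the consistent-and-atomic execution of single commands described above.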

Let us consider for instance the deletion of an existing node, which is implemented by the subnet depicted (in isolation) in Figure 1. Assume that a token n1 is put in place EvInt|delNode. First the membership of n1 in the set of nodes currently reified as not marked is checked (transition startDelN). In case of a positive check the node is removed, then all surrounding arcs are removed (transition delAfromToN), and last (if n1 is a transition) its priority is cleared (transition clearPriox1). Otherwise the command aborts and the whole meta-model, composed of the reflective framework and the evolutionary strategy, is restarted, ensuring a transactional execution of the evolutionary strategy. A unique restart transition appears in Figure 3, with input arcs having an "OR" semantics.
Different priority levels are used to guarantee the correct firing sequence, also in case many deletion requests (tokens) are present in EvInt|delNode simultaneously. Boundedness is guaranteed by the fact that each token put on this place is eventually consumed.
The other basic commands are implemented in a similar way. Let us only remark that newly introduced base-level transitions are associated with the default priority 0 (encoded as 1).
Priority levels in Figure 3 are relative: after composing the evolutionary framework WN model with the evolutionary strategy WN model, the minimum priority in the evolutionary framework is set greater than the maximum priority level used in the evolutionary strategy.
Any kind of transformation can be defined as a combination of basic commands: for example "replacing the input arc connecting nodes p and t by an inhibitor arc of cardinality three" corresponds to putting the token 〈p, t, i/o〉 on EvInt|delA and the term 3〈p, t, h〉 on place EvInt|newA.
Whoever designs a strategy (the meta-programmer) is responsible for specifying consistent sequences of basic commands, e.g., he/she must take care of flushing the contents of a given place before removing it.

Base-level Introspection. The evolutionary framework includes basic introspection commands. Observation and manipulation of the base-level PN reification are performed by passing through the framework's evolutionary interface, which enhances the safeness and robustness of evolutionary programming. Figure 4 shows (from left to right) the subnets implementing the computation of the cardinality (thereupon the kind) of a given arc, the preset of a given base-level node, and the current marking of a given place (the subnets computing transition priorities, post-sets and inhibitor-sets, and checking the existence of nodes, have a similar structure).
As for the basic transformation commands, each subnet has a single entry-place belonging to the evolutionary interface EvInt and performs atomically. The introspection result is recorded on places having the Res| prefix, accessible by the evolutionary strategy: regarding e.g. preset computation, a possible result (after a token p1 has been put in place EvInt|PreSet) is 〈p1, t2〉 + 〈p1, t3〉, meaning the preset of p1 is {t2, t3} (other results are encoded as multisets). Since the base-level reification could be changed in the meanwhile, every time a new command is issued any previously recorded result about the command's argument is cleared (transitions prefixed by the string "flush").

Figure 4. Basic introspection functions

The Evolutionary Strategy

The adopted model of evolutionary strategy (only highlighted in Figure 2) specifies a set of arbitrarily complex, alternative transformation patterns on the base-level (each denoted hereafter as the i-th strategy, or sti), which can be fired when some conditions (checked on the base-level PN reification by introspection) hold and/or some external events occur.
Since a strategy designer is usually unaware of the details of the WN formalism, we have provided him/her with a tiny language that allows everyone to specify his own strategy in a simple and formal way. As concerns control structures, the language syntax is inspired by Hoare's CSP (Hoare, 1985), enriched with a few specific notations. As concerns data types, a basic set of built-ins and constructors is provided for easy manipulation of nets. The use of a CSP-like language to specify a strategy allows its automatic translation into a corresponding WN model. We will provide some examples of mapping from pieces of textual strategy descriptions into corresponding WN models. In the Petri nets literature there are many examples of formal mappings from CSP-like formalisms (e.g. process algebras) to (HL) Petri nets models (e.g. Best, 1986 and more recently Kavi, Sheldon, Shirazi, & Hurson, 1995), by which we have been inspired.
The evolutionary meta-program scheme corresponds to the CSP pseudo-code2 in Figure 6. The evolutionary strategy as a whole is cyclically activated upon a shift-up, here modeled as an input command. A non-deterministic selection of guarded commands then takes place. Each guard is evaluated on the base-level reification by using "ad-hoc" language notations described in the sequel. Guard true means the corresponding strategy may always be activated, at every shift-up. A guard optionally ends with an input command simulating the occurrence of some external events.
A more detailed view of this general schema in terms of Petri nets is given in Figure 5. Figure 5(a) shows the non-deterministic selection, whereas Figure 5(b) shows the structure of the i-th strategy. Color domain definitions are inherited from the evolutionary framework WN. An additional basic color class (STRAT = st1 ∪ ... ∪ stn) represents the possible alternative evolutions.

Figure 5. Meta-Program Generic Schema: (a) The Strategy Selection Submodel; (b) The Strategy Structure

Focusing on Figure 5(a), we can observe that any shift-up is signaled by a token in the homonym place, and guards (the boxes in the picture, which represent the only non-fixed parts of the net) are evaluated concurrently, according to the semantics of the CSP alternative command. After the evaluation process has been completed, one branch (i.e., a particular strategy) is chosen (transition chooseStrat) among those whose guard was successfully evaluated (place trueEval). By the way, introspection has to be performed with priority over base-level activities, so the lowest priority in Figure 5(a) is set higher than any base-level PN transition when the whole model is built. In case every guard evaluates to false, the selection command is restarted just after a new shift-up occurrence (transition noStratChoosen), avoiding any possible livelock. The occurrence of external events is modeled by putting tokens in particular "open" places (e.g. External|eventk in Figure 5(a)). The idea is that such places should be shared with other sub-models simulating the external event occurrence. If one is simply interested in interactively simulating the reflective model, he/she might think of such places as a sort of buttons to be pushed on request.
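The guarded selection of Figure 5(a) can be sketched in a few lines of Python (function and guard names here are ours, purely illustrative): every guard is evaluated against the current reification and the pending external events, then one branch is picked non-deterministically among those that hold; if none holds, the cycle simply waits for the next shift-up.

```python
import random

def select_strategy(guards, reification, events):
    """Sketch of the strategy-selection step: evaluate all guards
    (place trueEval), then choose one true branch non-deterministically
    (transition chooseStrat); return None when every guard is false
    (transition noStratChoosen), so the cycle restarts at the next
    shift-up."""
    true_eval = [name for name, guard in guards.items()
                 if guard(reification, events)]
    if not true_eval:
        return None
    return random.choice(true_eval)

guards = {
    "st1": lambda reif, ev: "overload" in ev,          # external event
    "st2": lambda reif, ev: len(reif["nodes"]) > 10,   # introspection
    "st3": lambda reif, ev: True,                      # guard 'true'
}
print(select_strategy(guards, {"nodes": set()}, set()))  # st3
```

With an empty reification and no events only the always-true guard holds, so "st3" is returned deterministically; when several guards hold, the choice among them is random, mirroring the non-deterministic CSP alternative command.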

The ith Strategy. The structure of the WN model implementing a particular evolutionary strategy is illustrated in Figure 5(b). It is composed of fixed and programmable (variable) parts, which may be easily recognized in the picture. It realizes a sort of two-phase approach: during the first phase (subnet freeze(«patterni»)) the meta-program sets the local influence area of the strategy, a portion of the base-level Petri net reification potentially subject to changes. This area is expressed as a language "pattern", that is, a parametric set of base-level nodes defined through the language notations, denoted by a colored homonym place in Figure 5(b). The pattern contents are flushed at any strategy activation. A simple isolation algorithm is then executed, which freezes the strategy influence area reification, followed by a shift-down action as a result of which freezing materializes at the base-level PN. The idea is that all transitions belonging to the pattern, and/or able to change the marking of places belonging to it, are temporarily inhibited from firing until the strategy execution has terminated (the place pattern* holds a wider pattern image after this computation).

Figure 6. CSP code for the meta-program scheme

*[shift-up ? sh-up-occurred →
  [
  guard_1; event_1 ? event_1-occurred → strategy_1()
  □
  guard_2 → strategy_2()
  □
  true → strategy_3()
  □
  ...
  ]
]

During the freezing phase the base-level model is "suspended" to avoid otherwise possible inconsistencies and conflicts: this is achieved by forcing the transitions of the freeze(«patterni») subnet to have a higher priority than the base-level PN transitions. The freeze(«patterni») sub-model is decomposed in turn into two sub-models that implement the influence area identification and isolation, respectively. While the latter has a fixed structure, the former might be either fixed or programmable, depending on designer needs (e.g. it might be automatically derived from the associated guard).
After the freezing procedure terminates, the evolutionary algorithm starts (box labeled dostrategyi in Figure 5(b)), and the base-level resumes from the "suspended" state: this is implicitly accomplished by setting no dependence between the priority of dostrategyi subnet transitions (arbitrarily assigned by the meta-programmer) and the priority of base-level PN transitions (in practice: setting the base-level PN lowest priority equal to the priority level, assumed constant, of the dostrategyi subnet). The only forced constraint is that the dostrategyi submodel can exclusively manipulate (by means of the framework's evolutionary interface) the nodes of the base-level reification belonging to the previously computed pattern (this constraint is graphically expressed in Figure 5(b) by an arc between the dostrategyi box and place «patterni»). As soon as the base-level PN enters a new state (marking), the newly entered base-level state is instantaneously reified into the meta-level. This reification does not involve the base-level area touched by the evolutionary strategy, which can continue operating without inconsistency. Before activating the final shift-down (which ends the strategy and actually operates the base-level evolution planned by the strategy) the temporarily isolated influence area is unfrozen in a very simple way.
The described approach is more flexible than a brute-force blocking one (where the base-level is suspended for the whole duration of the strategy) while guaranteeing a sound and consistent system evolution. It better adheres to the semantics and the behavior of most real systems (think e.g. of a traffic control system), which cannot be completely suspended while their evolution is being planned.

Causally Connecting the Base-Level and the Meta-Program

The base-level and the meta-program are (reciprocally) causally connected via the reflective framework.
Shift-up action. The shift-up action is realized for the first time at system start-up. The idea (illustrated in Figure 7) is to connect in a transparent, fully automatic way the base-level PN to the evolutionary framework interface by means of colored input/output arcs drawn from any base-level PN transition to place BLreif|Marking of the base-level reification. Any change of state at the base-level PN provoked by transition firing is instantaneously reproduced on the reification, conceptually maintaining base-level unawareness of the meta-program. The firing of base-level transition t1 results in withdrawing one and two tokens from places p1 and x1, respectively, and in putting one in p2. While token consumption is emulated by a suitable input arc function (〈S p1〉 + 2·〈S x1〉), token production is emulated by an output arc function (〈S p2〉). The complete splitting of class NODE allows anonymous places introduced into the base-level (x1) to be referred to by means of SWN constant functions. The occurrence of transition t1 is signaled to the meta-program by putting one token in the uncolored boundary-place ShUp|shift-up (Figure 5(a)).

Figure 7. Reification implemented at Petri net level

Shift-down action. The shift-down action is the only operation that cannot be directly emulated at the Petri nets (WN) level, but that should be managed by the environment supporting the reflective architecture simulation. This is not surprising; rather, it is a consequence of the adopted choice of a traditional Petri nets paradigm to model an evolutionary architecture. The shift-down action takes place when the homonym uncolored (meta-)transition of the framework (Figure 3) is enabled. This transition has the highest priority within the whole reflective model; its occurrence replaces the current base-level PN with the Petri net described by the current reification, according to Definition 5. After a shift-down the base-level restarts from the (new) base-level initial marking, while the meta-program continues executing from the state preceding the shift-down.
Putting it all together. The behavior of the whole reflective model (composed of the base-level PN, the evolutionary framework interface and the meta-program) between consecutive shift-downs can be represented using a uniform, Petri net-based approach. We are planning to extend the GreatSPN tool (Chiola, Franceschinis, Gaeta, & Ribaudo, 1995), which supports the GSPN and SWN formalisms, to be used as an editing/simulation environment for Reflective Petri nets. For that purpose it should be integrated with a module implementing the causal connection between base-level and meta-program.
The reflective framework, the evolutionary meta-program, and the base-level are separated sub-models, sharing three disjoint sets of boundary places: the base-level reification, the evolutionary interface, and the places holding basic command results. Their interaction is simply achieved through superposition of homonym places. This operation is supported by the Algebra module (Bernardi, Donatelli, & Horvàth, 2001) of GreatSPN.
Following the model composition, absolute priority levels must be set, respecting the reciprocal constraints between components discussed earlier (e.g. the framework's lowest priority must be greater than the meta-program's highest priority). Finally, the whole model's initial marking is set according to Definition 3 as concerns the base-level reification, putting token null in both places Res|newP and Res|newT (Figure 4), and one uncolored token in place startMetaProgram (Figure 5(a)).

Meta-Language Basic Elements

The meta-programming language disposes of four built-in types NAT, BOOL, NODE, ArcType and the Set and Cartesian product (×) constructors. The arc (ARC: NODE × NODE × ArcType), arc with multiplicity (ArcM: ARC × NAT), and marking (Mark: NODE × NAT) types are thus introduced; this way a multiset can be represented as a set. Place, Tran and static subclass names can be used to denote subtypes or constants (in the case of singletons), and new types can be defined on-the-fly by using set operators.
Each strategy is defined in terms of basic actions, corresponding to the basic commands previously described. Their signatures are:

• newNode(Set(NODE)), newPlace(), newTran(), remNode(Set(NODE));
• flush(Set(Place));
• addArc(Set(ArcM)), remArc(Set(Arc));
• incMark(Set(Mark)), decMark(Set(Mark));
• setPrio(Set(Tran)).

A particular version of the repetitive command can be used. Letting Ei be a set (Grammar 1):

*(e1 in E1, ..., en in En)[ «command» ]

makes the instruction «command» be executed iteratively for each e1 ∈ E1, ..., en ∈ En; at each iteration, variables e1, ..., en are bound to particular elements of E1, ..., En, respectively. If Ei is a color (sub-)class, then we implicitly refer to those of its elements that belong to the base-level reification.
The meta-programmer can refer to base-level elements either explicitly, by means of constants, or implicitly, by means of variables. By means of the assignments p=newPlace(), t=newTran(), it is also possible to add unspecified nodes to the base-level, afterwards referred to by variables p, t.
Base-level introspection is carried out by means of simple net-expressions allowing the meta-programmer to specify patterns, i.e., parametric base-level portions meeting some requirements on the base-level's structure/marking. The syntax for patterns and guards is shown in Grammar 1. The symbols pre(n), post(n), inh(n), #p, card(a) denote the pre/post-sets of a base-level PN node n, the set of elements connected to n via inhibitor arcs, the current marking of place p, and the multiplicity of an arc a, respectively. They are translated into introspection commands (Figure 4). A pattern example is:

{p:Place | #p > #p1 and isempty(pre(p) ∩ inh(p))},

where p1 is a constant, and p is a variable.

Table 2. Grammar 1: BNF for language expressions†

Element ::= NODE | Arc
NODE ::= «variable» | «constant» | singleton ( NodeSet )
Arc ::= < NODE , NODE , «arc_type» >
Expression ::= «digit» | BasicExpr
BasicExpr ::= # «place»‡ | card ( Set ) | card ( Arc ) | prio ( «transition» )
Predicate ::= BasicExpr RelOp Expression | kind ( Arc ) EqOp «arc_type» | NODE InExpr | NODE is connected to NODE | isempty ( Set )
RelOp ::= < | > | =
EqOp ::= =\= | =
Set ::= { } | { ArcList } | NodeSet | «static_subclass» | «color_class» | Element | Set SetOp Set
SetOp ::= ∩ | ∪ | \
ArcList ::= Arc | ArcList , Arc
NodeSet ::= { } | { NodeList } | Pattern | AlgOp ( NodeSet ) | NODE
NodeList ::= NODE | NodeList , NODE
AlgOp ::= pre | post | inh
Pattern ::= { «variable» InExpr | Guard }
Guard ::= Predicate | LogOp «variable» InExpr Predicate | not ( Guard )
InExpr ::= ∈ | in «place» | in NodeSet
LogOp ::= exists | foreach
BoolOp ::= and | or

† Terminals are in bold font, non-terminals are in normal font. ‡ Terms in «» represent elements whose meaning can be inferred from the model.

Below is an example of a guard (in the current version of the language quantifiers cannot be nested):

exists t:Tran | isempty(pre(t) ∪ inh(t)).

Having at our disposal a simple meta-programming language, it becomes easier to specify (even complex) parametric base-level evolutions, such as "for each marked place p belonging to the preset of t, if there is no inhibitor arc connecting p and t, add one with cardinality equal to the marking of p", which becomes:

*(p in pre(t)) [ #p>0 and card(<p,t,h>)==0 → addArc(<p,t,h,#p>) ]

The code of the freezing algorithm acting on a precomputed influence area (box isolate(«patterni») in Figure 5(b)), which is one of the fixed parts of the meta-program, is given in Figure 8. All base-level transitions that belong to the pattern, or that can change its local marking (state), are temporarily prevented from firing by adding a new (marked) place to the base-level reification, to which pattern transitions are connected via inhibitor arcs. A shift-down action is then activated to freeze the base-level PN. Unfreezing is simply achieved by removing the artificially introduced inhibitor place at the end of the evolutionary strategy (Figure 5(b)).

Figure 8. CSP Code for the Isolating-Pattern Subnet (Language's Keywords are in Bold)

[
isempty(pattern) → skip
□
not(isempty(pattern)) →
  pattern* = {};
  isolating_pattern = newPlace();
  incMark(<isolating_pattern,1>);
  *(p in Pattern ∩ Place)
    [true → pattern* ∪= pre(p) ∪ post(p)];
  *(t in pattern* ∩ Tran)
    [true → newArc(<isolating_pattern,t,h,1>)];
  shiftDown;
]

A MARKOV-PROCESS FOR REFLECTIVE PETRI NETS

The adoption of GSPN (Ajmone Marsan, Conte, & Balbo, 1984) and SWN (Chiola, Dutheillet, Franceschinis, & Haddad, 1993) for the base- and meta-levels of the reflective layout, respectively, has revealed a convenient choice for two reasons: first, the timed semantics of Reflective Petri nets is in large part inherited from GSPN (SWN); secondly, the symbolic marking representation the SWN formalism is provided with can be exploited to efficiently handle the intriguing question of how to identify equivalences during a Reflective Petri net model's evolution.
In the light of the connection set between base- and meta-levels, the behavior of a Reflective Petri net model between any meta-level activation and the consequent shift-down is fully described in terms of an SWN model, the meta-level PN, including (better, suitably connected to) an uncolored part (the base-level PN). This model will be hereafter denoted the base-meta PN. Hence, we can naturally set the following notion of state for Reflective Petri nets:
Definition 6 (state). A state of a Reflective Petri net is a marking Mi of the base-meta PN obtained by suitably composing the base-level PN (a GSPN) and the meta-level PN (a SWN).
Then, letting t ≠ shiftdown be any transition (color instance) enabled in Mi, according to the SWN (GSPN) firing rule, and Mj be the marking reached upon its firing, we have the labeled state-transition

Mi --λ(t)--> Mj,

where λ(t) denotes a weight, or an exponential rate, associated with t, depending on whether t is timed or immediate.
There is nothing left to do but consider the case where Mf is a vanishing marking enabling the pseudo-transition shift-down: then,

Mf --w=1--> M′0,

M′0 being the marking of the base-meta PN obtained by first replacing the (current) base-level PN with the GSPN isomorphic to the reification marking (once it has been suitably connected to the meta-level PN), then firing shift-down as if it were a normal immediate transition.
Using the same technique for eliminating vanishing states as is employed in the reduced reachability graph algorithm (Ajmone Marsan, Conte, & Balbo, 1984), it is possible to build a CTMC for the Reflective Petri net model.

Recognizing Equivalent Evolutions

The state-transition graph semantics just introduced precisely defines the (timed) behavior of a Reflective Petri net model, but suffers from two evident drawbacks. First, it is highly inefficient: the state description is exceedingly redundant, comprising a large part concerning the meta-level PN, which is unnecessary to describe the evolving system. The second concern is even more critical, and indirectly affects efficiency: there is no way of recognizing whether the modeled system, during its dynamics/evolution, reaches equivalent states. The ability to decide a system's state-transition graph finiteness and strong-connectedness, of course strictly related to the ability to recognize equivalent states, is in fact mandatory for performance analysis: we know that the most important sufficient condition for a finite CTMC to have a stationary solution (steady-state) is to include one maximal strongly connected component. More generally, most techniques based on state-space inspection rely on the ability above.
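On a finite state graph, the connectedness property just mentioned can be tested directly. The sketch below (our own illustration, not part of the framework) checks the special case in which the whole graph forms a single strongly connected component, i.e. every state is mutually reachable from every other:

```python
def single_scc(states, succ):
    """Sketch: test whether a finite state-transition graph is one
    strongly connected component by checking forward and backward
    reachability from an arbitrary start state.  This is the (stronger)
    variant of the sufficient condition quoted in the text, where one
    *maximal* strongly connected component is required."""
    def reachable(start, neighbours):
        seen, stack = {start}, [start]
        while stack:
            for n in neighbours(stack.pop()):
                if n not in seen:
                    seen.add(n)
                    stack.append(n)
        return seen

    pred = {s: set() for s in states}      # reversed edges
    for s in states:
        for n in succ(s):
            pred[n].add(s)
    s0 = next(iter(states))
    return reachable(s0, succ) == states and reachable(s0, pred.__getitem__) == states

# A three-state cycle is strongly connected; a two-state chain is not.
cycle = {"M0": ["M1"], "M1": ["M2"], "M2": ["M0"]}
print(single_scc(set(cycle), cycle.__getitem__))  # True
```

When the check succeeds on the tangible state space, the resulting finite CTMC admits a steady-state distribution.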


Recognizing equivalent evolutions is a tricky question. For example, it may happen that (apparently) different strategies in truth cause equivalent transformations of the base-level PN (the evolving system), which cannot be identified by Definition 6. Yet, the combined effect of different sequences of evolutionary strategies might produce the same effects. Even more likely, the internal dynamics of the evolving system might lead it to reach equivalent configurations. The above question, which falls into a graph isomorphism sub-problem, as well as the global efficiency of the approach, are tackled by resorting to the peculiar characteristic of SWN: the symbolic marking notion (Chiola, Dutheillet, Franceschinis, & Haddad, 1997).
For that purpose, we refer to the following static partition of class NODE:

NODE = Place ∪ Tran,  Place = Namedp ∪ Unnamedp,  Tran = Namedt ∪ Unnamedt,
Namedp = p1 ∪ ... ∪ pk,  Namedt = t1 ∪ ... ∪ tn.

Symbols pi, tj denote singleton static subclasses. Conversely, Unnamedp and Unnamedt are static subclasses collecting all anonymous (i.e., indistinguishable) places/transitions. Behind this there is a simple intuition: while some ("named") nodes, for the particular role they play, preserve their identity during base-level evolution, and may be explicitly referred to during base-level manipulation, others ("unnamed") are indistinguishable from one another. In other words any pair of "unnamed" places (transitions) might be freely exchanged on the base-level PN without altering the model's semantics. There are two extreme cases: Namedp (Namedt) = ∅ and, at the opposite, Unnamedp (Unnamedt) = ∅. The former means all places/transitions can be permuted; with the latter, instead, all nodes are distinct.
It is remarkable that the static partition of class NODE actually used for the base-meta PN is different from the previous one, given that any place of the base-level PN must be explicitly referred to when connecting the base-level PN to the meta-level PN (Figure 7).
The technique we use to recognize equivalent base-level evolutions relies on the base-level reification and the adoption of a symbolic state representation for the base-meta PN that, we recall, results from composing in a transparent way the base-level PN and the meta-level PN. We only have to set as the initial state of the Reflective Petri net model a symbolic marking (M̂0) of the base-meta PN, instead of an ordinary one: any dynamic subclass of Unnamedp (Unnamedt) will represent an arbitrary "unnamed" place (transition) of the base-level PN.
Because of the simultaneous update mechanism of the reification, and the consequent one-to-one correspondence along time between the current base-level PN and the reification at the meta-level, we can state the following:
Definition 7 (equivalence relation). Let M̂i, M̂j be two symbolic states of the Reflective Petri net model. M̂i ≡ M̂j if and only if their restrictions to the reification set of places have the same canonical representative.
Lemma 1. Let M̂i ≡ M̂j. Then the base-level PNs at states M̂i and M̂j are isomorphic.
Consider the very simple example in Figure 9, which depicts three base-level PN configurations at different time instants. The hypothesis is that while symbol t2 denotes a "named" transition, symbols xi and yj denote "unnamed" places and transitions, respectively. Since there are no inhibitor arcs we assume that arcs are reified as tokens (2-tuples) belonging to NODE × NODE. We assume that all transitions have the same priority level, so we disregard the reification of priorities.
We can observe that the Petri nets on the left and in the middle are nearly the same, but for their current marking: we can imagine that they represent a possible (internal) dynamics of the base-level PN. Conversely, we might think of the right-most Petri net as an (apparent) evolution of the base-level PN on the left, in which transition y2 has been replaced by the (new) transition y3, new connections are set, and a new marking is defined.
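Definition 7 can be illustrated with a small brute-force check (our own sketch with a hypothetical reification encoding; the actual SWN algorithm computes canonical representatives symbolically instead of enumerating permutations):

```python
from itertools import permutations

def equivalent(reif_a, reif_b, unnamed):
    """Sketch of Definition 7: a reification is a dict with 'nodes' (set),
    'marking' (list, read as a multiset) and 'arcs' (set of 2-tuples).
    Two reifications are equivalent when some permutation of the
    'unnamed' nodes maps one onto the other; 'named' nodes stay fixed."""
    un_a = sorted(n for n in reif_a["nodes"] if n in unnamed)
    un_b = sorted(n for n in reif_b["nodes"] if n in unnamed)
    named = {n: n for n in reif_a["nodes"] if n not in unnamed}
    if len(un_a) != len(un_b) or set(named) - reif_b["nodes"]:
        return False
    for perm in permutations(un_b):
        m = {**dict(zip(un_a, perm)), **named}
        if (sorted(m[p] for p in reif_a["marking"]) == sorted(reif_b["marking"])
                and {(m[x], m[y]) for x, y in reif_a["arcs"]} == reif_b["arcs"]):
            return True
    return False

# The left-most and middle nets of Figure 9: same arcs, different marking,
# equivalent under the permutation x1<->x2, x3<->x4, y1<->y2.
arcs = {("x1", "t2"), ("t2", "x3"), ("x3", "y1"), ("y1", "x1"),
        ("x2", "t2"), ("t2", "x4"), ("x4", "y2"), ("y2", "x2")}
nodes = {"x1", "x2", "x3", "x4", "y1", "y2", "t2"}
m_hat = {"nodes": nodes, "marking": ["x1", "x4"], "arcs": arcs}
m_hat1 = {"nodes": nodes, "marking": ["x3", "x2"], "arcs": arcs}
print(equivalent(m_hat, m_hat1, {"x1", "x2", "x3", "x4", "y1", "y2"}))  # True
```

Permutations that mix places with transitions fail the arc check automatically, so no explicit typing of the unnamed nodes is needed in this toy version.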

Figure 9. Equivalent Base-Level Petri Net Evolutions

Nevertheless, the three base-level configurations are equivalent, according to Definition 7. It is sufficient to take a look at their respective reifications, which are encoded as symbolic markings (multisets are expressed as formal sums): consider for instance the base-level PNs on the left and in the middle of Figure 9, whose reifications are:

M̂(BLreif|Nodes) = y1 + y2 + t2 + x1 + x2 + x3 + x4,
M̂(BLreif|Marking) = x1 + x4,
M̂(BLreif|Arcs) = 〈x1, t2〉 + 〈t2, x3〉 + 〈x3, y1〉 + 〈y1, x1〉 + 〈x2, t2〉 + 〈t2, x4〉 + 〈x4, y2〉 + 〈y2, x2〉

and

M̂′(BLreif|Nodes) = y1 + y2 + t2 + x1 + x2 + x3 + x4,
M̂′(BLreif|Marking) = x3 + x2,
M̂′(BLreif|Arcs) = 〈x1, t2〉 + 〈t2, x3〉 + 〈x3, y1〉 + 〈y1, x1〉 + 〈x2, t2〉 + 〈t2, x4〉 + 〈x4, y2〉 + 〈y2, x2〉,

respectively. They can be obtained from one another by the following permutation of "unnamed" places and transitions (we denote by a ↔ b the bidirectional mapping: a → b, b → a):

{x1 ↔ x2, x3 ↔ x4, y1 ↔ y2};

thus, they are equivalent.
With similar arguments, the left-most and the right-most Petri nets of Figure 9 are shown to be equivalent. The right-most Petri net's reification is:

M̂″(BLreif|Nodes) = y1 + y3 + t2 + x1 + x2 + x3 + x4,
M̂″(BLreif|Marking) = x1 + x2,
M̂″(BLreif|Arcs) = 〈x1, t2〉 + 〈t2, x3〉 + 〈x3, y1〉 + 〈y1, x1〉 + 〈x2, y3〉 + 〈y3, x4〉 + 〈x4, t2〉 + 〈t2, x2〉.

M̂ and M̂″ can be obtained from one another through the following permutation:

{x2 ↔ x4, y3 ↔ y2}.

The canonical representative for these equivalent base-level PN reifications (i.e., states of the Reflective Petri net model), computed according to the corresponding SWN algorithm, turns out to be M̂.

RELATED WORKS

Although many other models of concurrent and distributed systems have been developed, Petri nets are still considered a central model for concurrent systems with respect to both theory and applications, due to the natural way they


allow one to represent and reason about concurrent active objects which share resources, and their changing states. Despite their modeling power (Petri nets with inhibitor arcs are Turing-equivalent), classical Petri nets are often considered unsuitable for modeling real systems. For that reason, several high-level Petri net paradigms (Colored Petri nets, Predicate/Transition nets, Algebraic Petri nets) have been proposed in the literature (Jensen & Rozenberg, 1991) over the last two decades to provide modelers with a more flexible and parametric formalism able to exploit the symmetric structure of most artificial discrete-event systems.

Modern information systems are more and more characterized by a dynamic/reconfigurable (distributed) topology, and they are often conceived as self-evolving structures, able to adapt their behavior and functionality to environmental changes and new user needs. Evolutionary design is now a widespread practice, and there is a growing demand for modeling/simulation tools that can better support the design phase. Both Petri nets and HLPN are characterized by a fixed structure (topology), so many research efforts have been devoted, especially in the last two decades, to extending Petri nets with dynamic features. A non-exhaustive list of proposals that have appeared in the literature follows.

In Valk, 1978, the author proposed his pioneering work, self-modifying nets. Valk's self-modifying nets introduce dynamism via self-modification. More precisely, the flow relation between a place and a transition is a linear function of the place marking. Techniques of linear algebra used in the study of the structural properties of Petri nets can be adapted to this extended framework. Only simple evolution patterns can be represented using this formalism. Another major contribution of Valk is the so-called nets-within-nets paradigm (Valk, 1998), a multi-layer approach where tokens flowing through a net are in turn nets. In his work, Valk takes an object as a token in a unary elementary Petri net system, whereas the object itself is an elementary net system. So, an object can migrate across a net system. This bears some resemblance to logical agent mobility. Even if in Valk's original proposal no dynamic changes are possible, many dynamic architectures introduced afterward (including in some sense also the approach proposed in this chapter) rely upon his paradigm.

Some quite recent proposals have extended Valk's original ideas. Badouel & Darondeau, 1997 introduces a subclass of self-modifying nets. The considered nets appear as stratified sums of ordinary nets, and they arise as a counterpart to cascade products of automata via the duality between automata and nets. Nets in this class, called stratified nets, cannot exhibit circular dependences between places: the inscription on flow arcs attached to a given place depends at most on the content of places in the lower layers. As an attempt to add modeling flexibility, Badouel & Oliver, 1998 defines a class of high-level Petri nets, called reconfigurable nets, that can dynamically modify their own structure by rewriting some of their components. Boundedness of a reconfigurable net can be decided by calculating its covering tree. Moreover, such a net can be simulated by a self-modifying Petri net. The class of reconfigurable nets thus provides a subclass of self-modifying Petri nets for which boundedness can be decided.

Modeling mobility, both physical and logical, is another active subject of ongoing research. Mobile and dynamic Petri nets (Asperti & Busi, 1996) integrate Petri nets with RCHAM (Reflective Chemical Abstract Machine) based process algebra. In dynamic nets tokens are names for places, an input token of a transition can be used in its postset to specify a destination, and moreover the creation of new nets during the firing of a transition is also possible. Mobile Petri nets handle mobility by expressing the changing configuration of communication channels among processes.

Tokens in Petri nets, even in self-modifying, mobile/dynamic, and reconfigurable nets, are passive, whereas agents are active. To bridge the gap
between tokens and agents, or active objects, many authors have proposed variations on the theme of nets-within-nets. In Farwer & Moldt, 2005, objects are studied as higher-level net tokens having an individual dynamical behavior. Object nets behave like tokens, i.e., they lie in places and are moved by transitions. In contrast to ordinary tokens, however, they may change their state. By this approach an interesting two-level system modeling technique is introduced. Xu, Yin, Deng, & Ding, 2003 proposes a two-layer approach. From the perspective of the system's architecture, it presents an approach to modeling logical agent mobility by using Predicate/Transition nets as a formal basis for the dynamic framework. Reference nets, proposed in Kummer, 1998, are another formalism based on Valk's work. Reference nets are a special high-level Petri net formalism that provides dynamic creation of net instances, references to other reference nets as tokens, and communication via synchronous channels (Java is used as the inscription language).

Some recent proposals have some similarity with the work we are presenting in this chapter or, at least, are inspired by similar aims. In Cabac et al., 2005 the authors present the basic concepts for dynamic architecture modeling (using nets-within-nets) that allows active elements to be nested in arbitrary and dynamically changeable hierarchies and enables the design of systems at different levels of abstraction by using refinements of net models. The conceptual modeling of such an architecture is applied to specify a software system that is divided into a plug-in management system and plug-ins that provide functionality to the users. By combining plug-ins, the system can be dynamically adapted to the users' needs. In Hoffmann et al., 2005 the authors introduce the new paradigm of nets and rules as tokens, where in addition to nets as tokens also rules as tokens are considered. The rules can be used to change the net structure and behavior. This leads to the new concept of high-level net and rule systems, which allows the integration of the token game with rule-based transformations of P/T-systems. The new concept is based on algebraic nets and graph transformation systems. Finally, in Odersky, 2000 the author introduces functional nets, which combine key ideas of functional programming and Petri nets to yield a simple and general programming notation. They have their theoretical foundation in join calculus. Over the last decade an operational view of program execution based on rewriting has become widespread. In this view, a program is seen as a term in some calculus, and program execution is modeled by stepwise rewriting of the term according to the rules of the calculus.

All these formalisms, however, set up new hybrid (high-level) Petri net-based paradigms. While the expressive power has increased, the cognitive simplicity, which is the most important advantage of Petri nets, has decreased as well. In Badouel, 1998 the authors argued that the intricacy of these models leaves little hope of obtaining significant mathematical results and/or automated verification tools in the near future. The approach we are presenting differs from the previous ones mainly because it achieves a satisfactory compromise between expressive power and analysis capability, through a quite rigorous application of classical reflection concepts in a consolidated (high-level) Petri net framework.

CONCLUSION AND FUTURE WORK

Most discrete-event systems are subject to evolution, and need to be updated or extended with new characteristics during their lifecycle. Covering the evolutionary aspects of systems since the design phase has been widely recognized as a crucial challenge. A good evolution has to pass through the evolution of the design information of the system itself. Petri nets are a central formalism for the modeling of discrete-event systems. Unfortunately, classical Petri nets have a static structure, so Petri net modelers are forced to hard-code all the foreseeable evolutions of a system at the design level.
This common practice not only requires modeling expertise, it also pollutes the system's design with a lot of details that do not regard the (current) system functionality, and it affects the consolidated Petri net analysis techniques.

We have faced the problem through the definition of a Petri net-based reflective architecture, called Reflective Petri Nets, structured in two logical levels: the base-level, specifying the evolving system, and the evolutionary meta-program (the meta-level). The meta-program is in charge of observing in a transparent way, then (if necessary) transforming, the base-level PN. With this approach the model of the system and the model of the evolution are kept separated, granting, therefore, the opportunity of analyzing the model without useless details. The evolutionary aspects are orthogonal to the functional aspects of the system.

In this chapter we have introduced Reflective Petri nets, and we have proposed an effective timed state-transition semantics (in terms of a Markov process) as a first step toward the implementation of a (performance-oriented) discrete-event simulation engine for Reflective Petri nets. Ongoing research is moving in different directions. We are planning to extend the GreatSPN tool to directly support Reflective Petri nets, both in the editing and in the analysis/simulation steps. We are investigating other possible semantic characterizations (in terms of different stochastic processes), with the perspective of improving the analysis capability. We are currently using two different formalisms for the base- and meta-levels (ordinary and colored stochastic Petri nets). It might be convenient to adopt the same formalism for both levels, which would give origin to the reflective tower, allowing the designer to model also the possible evolution of the evolutionary strategies.

REFERENCES

Asperti, A., & Busi, N. (1996, May). Mobile Petri Nets (Tech. Rep. No. UBLCS-96-10). Bologna, Italy: Università degli Studi di Bologna.

Badouel, E., & Darondeau, P. (1997, September). Stratified Petri Nets. In B. S. Chlebus & L. Czaja (Eds.), Proceedings of the 11th International Symposium on Fundamentals of Computation Theory (FCT'97) (p. 117-128). Kraków, Poland: Springer.

Badouel, E., & Oliver, J. (1998, January). Reconfigurable Nets, a Class of High Level Petri Nets Supporting Dynamic Changes within Workflow Systems (IRISA Research Report No. PI-1163). IRISA.

Bernardi, S., Donatelli, S., & Horvàth, A. (2001, September). Implementing Compositionality for Stochastic Petri Nets. Journal of Software Tools for Technology Transfer, 3(4), 417–430.

Best, E. (1986, September). COSY: Its Relation to Nets and CSP. In W. Brauer, W. Reisig, & G. Rozenberg (Eds.), Petri Nets: Central Models and Their Properties, Advances in Petri Nets (Part II) (p. 416-440). Bad Honnef, Germany: Springer.

Cabac, L., Duvignau, M., Moldt, D., & Rölke, H. (2005, June). Modeling Dynamic Architectures Using Nets-Within-Nets. In G. Ciardo & P. Darondeau (Eds.), Proceedings of the 26th International Conference on Applications and Theory of Petri Nets (ICATPN 2005) (p. 148-167). Miami, FL: Springer.

Capra, L., & Cazzola, W. (2007, December). Self-Evolving Petri Nets. Journal of Universal Computer Science, 13(13), 2002–2034.

Capra, L., & Cazzola, W. (2009). Trying Out Reflective Petri Nets on a Dynamic Workflow Case. In E. M. O. Abu-Taieh (Ed.), Handbook of Research on Discrete Event Simulation Environments: Technologies and Applications. Hershey, PA: IGI Global.
Capra, L., De Pierro, M., & Franceschinis, G. (2005, June). A High Level Language for Structural Relations in Well-Formed Nets. In G. Ciardo & P. Darondeau (Eds.), Proceedings of the 26th International Conference on Applications and Theory of Petri Nets (p. 168-187). Miami, FL: Springer.

Cazzola, W. (1998, July 20th-24th). Evaluation of Object-Oriented Reflective Models. In Proceedings of the ECOOP Workshop on Reflective Object-Oriented Programming and Systems (EWROOPS'98). Brussels, Belgium.

Cazzola, W., Ghoneim, A., & Saake, G. (2004, July). Software Evolution through Dynamic Adaptation of Its OO Design. In H.-D. Ehrich, J.-J. Meyer, & M. D. Ryan (Eds.), Objects, Agents and Features: Structuring Mechanisms for Contemporary Software (pp. 69-84). Heidelberg, Germany: Springer-Verlag.

Chiola, G., Dutheillet, C., Franceschinis, G., & Haddad, S. (1990, June). On Well-Formed Coloured Nets and Their Symbolic Reachability Graph. In Proceedings of the 11th International Conference on Application and Theory of Petri Nets (p. 387-410). Paris, France.

Chiola, G., Dutheillet, C., Franceschinis, G., & Haddad, S. (1993, November). Stochastic Well-Formed Coloured Nets for Symmetric Modeling Applications. IEEE Transactions on Computers, 42(11), 1343–1360. doi:10.1109/12.247838

Chiola, G., Franceschinis, G., Gaeta, R., & Ribaudo, M. (1995, November). GreatSPN 1.7: Graphical Editor and Analyzer for Timed and Stochastic Petri Nets. Performance Evaluation, 24(1-2), 47–68. doi:10.1016/0166-5316(95)00008-L

Farwer, B., & Moldt, D. (Eds.). (2005, August). Object Petri Nets, Process, and Object Calculi. Hamburg, Germany: Universität Hamburg, Fachbereich Informatik.

Hoare, C. A. R. (1985). Communicating Sequential Processes. Upper Saddle River, NJ: Prentice Hall.

Hoffmann, K., Ehrig, H., & Mossakowski, T. (2005, June). High-Level Nets with Nets and Rules as Tokens. In G. Ciardo & P. Darondeau (Eds.), Proceedings of the 26th International Conference on Applications and Theory of Petri Nets (p. 268-288). Miami, FL: Springer.

Hürsch, W., & Videira Lopes, C. (1995, February). Separation of Concerns (Tech. Rep. No. NUCCS-95-03). Northeastern University, Boston.

Jensen, K., & Rozenberg, G. (Eds.). (1991). High-Level Petri Nets: Theory and Applications. Berlin: Springer-Verlag.

Kavi, K. M., Sheldon, F. T., Shirazi, B., & Hurson, A. R. (1995, January). Reliability Analysis of CSP Specifications Using Petri Nets and Markov Processes. In Proceedings of the 28th Annual Hawaii International Conference on System Sciences (HICSS-28) (p. 516-524). Kihei, Maui, HI: IEEE Computer Society.

Kummer, O. (1998, October). Simulating Synchronous Channels and Net Instances. In J. Desel, P. Kemper, E. Kindler, & A. Oberweis (Eds.), Proceedings of the Workshop Algorithmen und Werkzeuge für Petrinetze (Vol. 694, pp. 73-78). Dortmund, Germany: Universität Dortmund, Fachbereich Informatik.

Maes, P. (1987, October). Concepts and Experiments in Computational Reflection. In N. K. Meyrowitz (Ed.), Proceedings of the 2nd Conference on Object-Oriented Programming Systems, Languages, and Applications (OOPSLA'87) (Vol. 22, p. 147-156), Orlando, FL.

Odersky, M. (2000, March). Functional Nets. In G. Smolka (Ed.), Proceedings of the 9th European Symposium on Programming (ESOP 2000) (p. 1-25). Berlin, Germany: Springer.
Valk, R. (1978, July). Self-Modifying Nets, a Natural Extension of Petri Nets. In G. Ausiello & C. Böhm (Eds.), Proceedings of the Fifth Colloquium on Automata, Languages and Programming (ICALP'78) (p. 464-476). Udine, Italy: Springer.

Valk, R. (1998, June). Petri Nets as Token Objects: An Introduction to Elementary Object Nets. In J. Desel & M. Silva (Eds.), Proceedings of the 19th International Conference on Applications and Theory of Petri Nets (ICATPN 1998) (p. 1-25). Lisbon, Portugal: Springer.

Xu, D., Yin, J., Deng, Y., & Ding, J. (2003, January). A Formal Architectural Model for Logical Agent Mobility. IEEE Transactions on Software Engineering, 29(1), 31–45. doi:10.1109/TSE.2003.1166587

KEY TERMS AND DEFINITIONS

Evolution: Attitude of systems to change layout/functionality.
Dynamic Systems: Discrete-event systems subject to evolution.
Petri Nets: Graphical formalism for discrete-event systems.
Reflection: Activity performed by an agent when doing computations about itself.
Base-Level: Logical level of a reflective model representing the system prone to evolve.
Meta-Level: Logical level of a reflective model representing the evolutionary strategy.
State-Transition Graph: Graph describing the behavior of a system in terms of states and transitions between them.

ENDNOTES

1 Labels taking the form place_name | postfix denote boundary-places.
2 Recall that: i) CSP is based on guarded commands; ii) structured commands are included between square brackets; and iii) the symbols ?, *, and □ denote input, repetition, and alternative commands, respectively.

Chapter 10
Trying Out Reflective Petri Nets
on a Dynamic Workflow Case
Lorenzo Capra
Università degli Studi di Milano, Italy

Walter Cazzola
Università degli Studi di Milano, Italy

ABSTRACT
Industrial/business processes are an evident example of discrete-event systems which are subject to evolution during their life-cycle. The design and management of dynamic workflows need adequate formal models and support tools to handle in a sound way the possible changes occurring during workflow operation. The known, well-established workflow models, among which Petri nets play a central role, are lacking in features for representing evolution. We propose a recent Petri net-based reflective layout, called Reflective Petri nets, as a formal model for dynamic workflows. A localized open problem is considered: how to determine which tasks should be redone, and which ones need not be, when transferring a workflow instance from an old to a new template. The problem is efficiently but rather empirically addressed in a workflow management system. Our approach is formal, may be generalized, and is based on the preservation of classical Petri net structural properties, which permit an efficient characterization of workflow soundness.

DOI: 10.4018/978-1-60566-774-4.ch010

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

INTRODUCTION

Business processes are frequently subject to change due to two main reasons (Aalst & Jablonski, 2000): i) at design time the workflow specification is incomplete due to lack of knowledge; ii) errors or exceptional situations can occur during the workflow execution. These are usually tackled by deviating from the static schema, and may cause breakdowns, reduced quality of services, and inconsistencies.

Workflow management facilitates creating and executing business processes. Most existing Workflow Management Systems, WMS in the sequel (e.g., IBM Domino, iPlanet, Fujitsu iFlow, TeamCenter), are designed to cope with static processes. The commonly adopted policy is that, once process changes occur, new workflow templates are defined and workflow instances are initiated accordingly
from scratch. This over-simplified approach forces tasks that were completed on the old instance to be executed again, even when not necessary. If the workflow is complex and/or involves a lot of external collaborators, a substantial business cost will be incurred.

Dynamic workflow management might be brought in as a solution. Formal techniques and analysis tools can support the development of WMS able to handle undesired results introduced by dynamic change. Evolutionary workflow design is a challenge to which a lot of research effort is currently devoted. A good evolution is carried out through the evolution of the workflow's design information, and then by propagating these changes to the implementation. This approach should be the most natural and intuitive to use (because it adopts the same mechanisms adopted during the development phase) and it should produce the best results (because each evolutionary step is planned and documented before its application).

At the moment evolution is emulated by directly enriching the original design information with properties and characteristics concerning possible evolutions. This approach has two main drawbacks: i) all possible evolutions are not always foreseeable; ii) design information is polluted by details related to the design of the evolved system.

In the research on dynamic workflows, the prevalent opinion is that models should be based on a formal theory and be as simple as possible. In Agostini & De Michelis, 2000 process templates are provided as 'resources for action' rather than strict blueprints of work practices. Maybe the most famous dynamic workflow formalization, the ADEPTflex system (Reichert & Dadam, 1998), is designed to support dynamic change at runtime, putting at our disposal a complete and minimal set of change operations. The correctness properties defined by ADEPTflex are used to determine whether a specific change can be applied to a given workflow instance or not.

Petri nets play a central role in workflow modeling (Salimifard & Wright, 2001), due to their descriptive efficacy, formal essence, and the availability of consolidated analysis techniques. Classical Petri nets (Reisig, 1985) have a fixed topology, so they are well suited to model workflows matching a static paradigm, i.e., processes that are finished or aborted once they are initiated. Conversely, any concerns related to dynamism/evolution must be hard-wired in classical Petri nets and bypassed when not in use. That requires some expertise in Petri net modeling, and might result in incorrect or partial descriptions of workflow behavior. Even worse, analysis would be polluted by a great deal of details concerning evolution.

Separating evolution from (current) system functionality is worthwhile. This concept has been recently applied to a Petri net-based model (Capra & Cazzola, 2007b), called Reflective Petri nets, using reflection (Maes, 1987) as a mechanism that easily permits separation of concerns. A layout formed by two causally connected levels (base- and meta-) is used. The base-level (an ordinary Petri net) is unaware of the meta-level (a high-level Petri net).

Base-level entities perform computations on the entities of the application domain, whereas entities in the meta-level perform computations on the entities residing on the lower level. The computational flow passes from the base-level to the meta-level by intercepting some events and specific computations (shift-up action) and comes back when the meta-computation has finished (shift-down action). Meta-level computations are carried out on a representative of the lower level, called reification, which is kept causally connected to the original level.

With respect to other dynamic Petri net extensions (Cabac, Duvignau, Moldt, & Rölke, 2005; Hoffmann, Ehrig, & Mossakowski, 2005; Badouel & Oliver, 1998; Ellis & Keddara, 2000; Hicheur, Barkaoui, & Boudiaf, 2006), Reflective Petri nets (Capra & Cazzola, 2007b) are not a new Petri net
class; rather, they rely upon classical Petri nets. That gives the possibility of using available tools and consolidated analysis techniques.

We propose Reflective Petri nets as a formal model supporting the design of sound dynamic workflows. A structural characterization of sound dynamic workflows is adopted, based on the preservation of Petri net free-choiceness. The approach is applied to a localized open problem: how to determine which tasks should be redone, and which ones need not be, when transferring a workflow instance from an old to a new template. The problem is efficiently but rather empirically addressed in Qiu & Wong, 2007, according to a template-based schema relying on the concept of bypassable task. Conforming to the same concept, we propose an alternative that allows evolutionary steps to be soundly formalized, and basic workflow properties to be efficiently verified.

As widely agreed (Agostini & De Michelis, 2000), the workflow model is kept as simple as possible. Our approach has some resemblance to Reichert & Dadam, 1998, sharing some completeness/smallness criteria, even if it considerably differs in the management of changes: it neither provides exception handling nor an undoing mechanism for temporary changes; rather, it relies upon a sort of "on-the-fly" validation.

The balance of the chapter is as follows: first we give a few basic notions about Petri nets and workflows; then we sketch a template-based dynamic workflow approach (Qiu & Wong, 2007) adopted by an industrial WMS; finally, we present our alternative based on Reflective Petri nets, using the same application case as in Qiu & Wong, 2007; we conclude by drawing conclusions and perspectives. We refer to the companion chapter (Capra & Cazzola, 2009) for a complete, up-to-date introduction on Reflective Petri nets.

WORKFLOW PETRI NETS

This section introduces the base-level Petri net subclass used in the sequel, with related notations and properties. We refer to Reisig, 1985; Aalst, 1996 for more elaborate introductions.

Definition 1 (Petri net). A Petri net is a triple (P;T;F), in which:

• P is a finite set of places,
• T is a finite set of transitions (P ∩ T = ∅),
• F ⊆ (P × T) ∪ (T × P) is a set of arcs (flow relation)

In accordance with the simplicity assumption (Agostini & De Michelis, 2000), we are considering a restriction of the base-level Petri nets used in Capra & Cazzola, 2009. In the workflow context, it makes no sense to have weighted arcs, because tokens in places correspond to conditions. Consequently, in a well-defined workflow a marking m is a set of places, i.e., m ∈ Set(P). In general a marking is a mapping, m : P → ℕ. Inhibitor arcs and priorities are unnecessary to model the routing of cases in a workflow Petri net.

•x, x• denote the pre- and post-sets of x ∈ P ∪ T, respectively (the set-extensions •A, A•, A ⊆ P ∪ T, will also be used). Transitions change the state of the net according to the following rule:

• t is enabled in m if and only if each place p ∈ •t contains at least one token.
  ◦ if t is enabled in m then it can fire, consuming one token from each p ∈ •t and producing one token for each p ∈ t•

Let PN = (P;T;F), ti ∈ T, ni ∈ T ∪ P, σ = t1, t2, …, tk−1 (possibly σ = ε).
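As an illustration only (this sketch is ours, not part of the chapter), the enabling and firing rules above translate almost literally into code, representing a marking as a set of places — legitimate for the safe nets considered here — and the flow relation F as a set of (source, target) pairs:

```python
def preset(x, F):
    """The pre-set of x: all nodes y with an arc (y, x) in the flow relation F."""
    return {y for (y, z) in F if z == x}

def postset(x, F):
    """The post-set of x: all nodes z with an arc (x, z) in F."""
    return {z for (y, z) in F if y == x}

def enabled(t, m, F):
    """t is enabled in marking m iff every place in the pre-set of t holds a token."""
    return preset(t, F) <= m

def fire(t, m, F):
    """Fire t: consume one token from each input place, produce one in each output place."""
    if not enabled(t, m, F):
        raise ValueError("transition not enabled")
    return (m - preset(t, F)) | postset(t, F)
```

For the tiny sequential net i → t → o, i.e., F = {("i", "t"), ("t", "o")}, firing t in the marking {i} yields the marking {o}.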
BASIC NOTIONS/NOTATIONS

• m1 →t1 m2 if and only if t1 is enabled in m1 and its firing results in m2.
• m1 →σ mk if and only if m1 →t1 m2 →t2 …, mk−1 →tk−1 mk.
• mk is reachable from m1 if and only if ∃σ, m1 →σ mk.
• (PN; m0) is a Petri net with an initial state m0.
• Given (PN; m0), m′ is said reachable if and only if it is reachable from m0.

Behavioral Properties

(Live). (PN; m0) is live if and only if, for every reachable state m′ and every transition t, there exists m″ reachable from m′ which enables t.
(Bounded, safe). (PN; m0) is bounded if and only if for each place p there exists b ∈ ℕ such that for every reachable state m, m(p) ≤ b. A bounded net is safe if and only if b = 1. A marking of a safe Petri net is denoted by a set of places.

Structural Properties

(Path). A path from n1 to nk is a sequence n1, n2, …, nk such that (ni, ni+1) ∈ F, ∀i 1 ≤ i ≤ k−1.
(Conflict). t1 and t2 are in conflict if and only if •t1 ∩ •t2 ≠ ∅.
(Free-choice). PN is free-choice if and only if ∀t1, t2: •t1 ∩ •t2 ≠ ∅ ⇒ •t1 = •t2.
(Causal connection - CC). t1 is causally connected to t2 if and only if (t1• \ •t1) ∩ •t2 ≠ ∅.

Sound Workflow-Nets and Free-Choiceness

A Petri net can be used to specify the control flow of a workflow. Tasks are modeled by transitions; places correspond to tasks' pre/post-conditions. Causal dependencies between tasks are modeled by arcs (and places).

Definition 2 (Workflow-net). A Petri net PN = (P;T;F) is a Workflow-net (hereafter WF-net) if and only if:

• There is one source place i such that •i = ∅.
• There is one sink place o such that o• = ∅.
• Every x ∈ P ∪ T is on a path from i to o.

A WF-net specifies the life-cycle of a case, so it has exactly one input place (i) and one output place (o). The third requirement in Definition 2 avoids dangling tasks and/or conditions, i.e., tasks and conditions which do not contribute to the processing of cases.

If we add to a WF-net PN a transition t* such that •t* = {o} and t*• = {i}, then the resulting Petri net PN̄ (called the short-circuited net of PN) is strongly connected.

The requirements stated in Definition 2 only relate to the structure of a Petri net. However, there is another requirement that should be satisfied:

Definition 3 (soundness). A WF-net PN = (P;T;F) is sound if and only if:

• for every m reachable from state {i}, there exists σ, m →σ {o}
• {o} is the only marking reachable from {i} with at least one token in place o
• there are no dead transitions in (PN;{i}), i.e., ∀t ∈ T there exists a reachable m, m →t m′
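The purely structural requirements just stated (Definition 2, and free-choiceness from the structural properties above) lend themselves to direct mechanical checks. The following is our own illustrative sketch — the representation and helper names are made up, with F again a set of (source, target) arc pairs — not code from the chapter:

```python
def _preset(x, F):
    return {a for (a, b) in F if b == x}

def _postset(x, F):
    return {b for (a, b) in F if a == x}

def _reachable(start, F, reverse=False):
    """All nodes reachable from start along arcs (against the arcs if reverse)."""
    step = _preset if reverse else _postset
    seen, stack = {start}, [start]
    while stack:
        for y in step(stack.pop(), F):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def is_wf_net(P, T, F):
    """Definition 2: unique source i, unique sink o, every node on a path from i to o."""
    sources = [p for p in P if not _preset(p, F)]
    sinks = [p for p in P if not _postset(p, F)]
    if len(sources) != 1 or len(sinks) != 1:
        return False
    i, o = sources[0], sinks[0]
    nodes = set(P) | set(T)
    # x lies on a path from i to o iff x is reachable from i and o is reachable from x
    return nodes <= _reachable(i, F) and nodes <= _reachable(o, F, reverse=True)

def is_free_choice(T, F):
    """Free-choice: transitions whose pre-sets intersect must have identical pre-sets."""
    pre = {t: frozenset(_preset(t, F)) for t in T}
    return all(pre[t1] == pre[t2]
               for t1 in T for t2 in T if pre[t1] & pre[t2])
```

For the sequential WF-net i → t → o both checks succeed, while two transitions sharing only part of their pre-sets (the badly mixed parallelism/conflict situation that the chapter calls confusion) make is_free_choice return False.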

In other words: for any case, the procedure will eventually terminate¹; when the procedure terminates there is a token in place o with all the

other places empty (that is referred to as proper termination); moreover, it should be possible to execute any task by following the appropriate route through the WF-net.

The soundness property relates to the dynamics of a WF-net, and may be considered as a basic requirement for any process. It is shown in Aalst, 1996 that a WF-net PN is sound if and only if (PN̄;{i}) is live and bounded. Despite that helpful characterization, deciding about soundness of arbitrary WF-nets may be intractable: liveness and boundedness are decidable, but also EXPSPACE-hard.

Therefore, structural characterizations of sound WF-nets were investigated (Aalst, 1996). Free-choice Petri nets seem to be a good compromise between expressive power and analysis capability. They are the widest class of Petri nets for which strong theoretical results and efficient analysis techniques do exist (Desel & Esparza, 1995). In particular (Aalst, 1996), soundness of a free-choice WF-net (as well as many other problems) can be decided in polynomial time. Moreover, a sound free-choice WF-net (PN; {i}) is guaranteed to be safe, according to the interpretation of places as conditions.

Another good reason to restrict our attention to workflow models specified by free-choice WF-nets is that the routing of a case should be independent of the order in which tasks are executed. If non-free-choice Petri nets were admitted, then the solution of conflicts could be influenced by the order in which tasks are executed. In the literature the term confusion is often used to refer to a situation where free-choiceness is violated by a bad mixture of parallelism and conflict. Free-choiceness is a desirable property for workflows. If a process can be modeled as a free-choice WF-net, one should do so. Most existing WMS support free-choice processes only. We will admit as base-level Petri nets free-choice WF-nets.

Even though free-choice WF-nets are a satisfactory characterization of well-defined workflow procedures, for which soundness can be efficiently checked, there are non-free-choice WF-nets which correspond to sensible processes. S-coverability (Aalst, 1996) is a generalization of free-choiceness: a sound free-choice WF-net is in fact S-coverable. In general, it is impossible to verify soundness of an arbitrary S-coverable WF-net in polynomial time, that problem being PSPACE-complete. In many practical cases, however, this theoretical complexity significantly lowers, so that S-coverability could be considered as an interesting alternative to free-choiceness.

A TEMPLATE-BASED APPROACH TO DYNAMIC WORKFLOWS

An interesting solution to facilitate an efficient management of dynamic workflows is proposed in Qiu & Wong, 2007. A WMS supporting dynamic workflow change can either directly modify the affected instance, or restart it on a new workflow template. The first method is instance based while the second is template based. The approach we are considering, in accordance with a consolidated practice, falls in the second category, and is implemented in Dassault Systèmes SmarTeam (ENOVIA, 2007), a PLM (Product Lifecycle Management) system including a WMS module. In Qiu & Wong, 2007 workflows are formally specified by Directed Network Graphs (DNG), which can be easily translated into PN.

The idea consists of identifying all bypassable tasks, i.e., all tasks in the new workflow instance that satisfy the following conditions: i) they are unchanged, ii) they have finished in the old workflow instance, and iii) they need not be re-executed.

A task (a transition, in Petri nets) is said unchanged, before and after a transformation of the workflow template, if and only if it represents the same activity (which will always be assumed true) and preserves its input/output connections.

To determine if a task is bypassable when the

instance is transferred to a new template, an additional constraint is needed: all tasks from which there is a path (i.e., that are causally connected) to the task itself must be bypassable in turn. A smart algorithm permits the identification of bypassable tasks: starting from the initial task, which is bypassable by default, only successors of bypassable tasks are considered.

This solution has been implemented in the SmarTeam system, which includes a workflow manager and a messaging subsystem, but no built-in mechanisms to face dynamic workflow change. A set of APIs enables detaching and attaching operations between processes and workflow templates. A process is redone entirely if its template is changed. Workflow change is implemented by an application server, which executes the following steps:

1. Obtain a process instance;
2. Obtain the old and new workflow templates;
3. Attach the new workflow template to the process;
4. Identify and mark the tasks that can be bypassed in the new workflow instance;
5. Initiate the new workflow without redoing the marked tasks.

What appears completely unspecified in Qiu & Wong, 2007 is how to safely operate steps 4 and 5: some heuristics appear to be adopted, rather than a well-defined methodology. No formal tests are carried out to verify the soundness of a workflow instance transferred to the modified template.

AN ALTERNATIVE BASED ON REFLECTIVE PETRI NETS

We propose an alternative to Qiu & Wong, 2007, based on Reflective Petri nets, which allows a full formalization of the evolutionary steps, as well as a validation of changes proposed for the workflow template, by means of a simple Petri nets structural analysis. Validation is accomplished "on-the-fly", i.e., by operating on the workflow reification while change is in progress. Changes are not reflected to the base-level in case of a negative check. With respect to a preliminary version (Capra & Cazzola, 2007a), the evolutionary strategy, as concerns in particular the validation part, is redesigned and some bugs are fixed.

We consider the same application case presented in Qiu & Wong, 2007. A company has several regional branches. To enhance operational consistency, the company headquarters (HQ) standardizes business processes in all branches. A workflow template is defined to handle customer problems. When the staff in a branch encounters a problem, a workflow instance is initiated from the template and executed until completion.

The Petri net specification of the initial template is given in Figure 1. A problem goes through two stages: problem solving and on-site realization. Problem solving involves several tasks, included in a dashed box. When opening a case, the staff reports the case to HQ. When closing the case, it archives the related documents. The HQ manages all instances related to the problem handling process.

In response to business needs, HQ may decide to change the problem handling template. The new template (Figure 2) differs from the original one in two points: a) "reporting" and "problem solving" become independent activities; b) "on site realization" can fail, in which case procedure "problem solving" restarts.

At the Petri net level, we can observe that transition Report is causally-connected to ProductChange in Figure 1, while it is not in Figure 2, and that a new transition has been added in Figure 2 (RealizationRejected) which is in free-choice conflict with OnSiteRealization.

When using Reflective Petri nets, the evolutionary schema has to be redesigned. The new workflow template is not passed as input to the staff of the company branches, but it results from


Figure 1. An instance of a workflow template (begin, end are used instead of i and o)

applying an evolutionary strategy to a workflow instance belonging to the current template. The initial base-level Petri net is assumed to be a free-choice WF-net. No details about the workflow dynamics are hard-wired in the base-level net. Evolution is delegated to the meta-program, which acts on the WF-net reification.

The meta-program is activated when an evolutionary signal is sent in by HQ, or some anomaly (e.g., a deadlock) is revealed by introspection. (Late) introspection is also used to discriminate whether evolutionary commands have been safely applied to the current workflow instance, or they have to be discarded.

Figure 1 depicts the following situation: a workflow instance running on the initial template has received a message from HQ. At the current state (marking) SolutionDesign, a sub-task of


Figure 2. Workflow’s evolution

ProblemSolving, and Report are pending tasks, whereas a number of tasks (e.g., Analysis and CaseOpening) have been completed. The meta-program in that case successfully operates a change on the old template's instance, once it is verified that all paths to any pending tasks are composed only of bypassable tasks.

The workflow instance transferred to the new template is illustrated in Figure 2.

One might think of this approach as instance-based, rather than template-based. In truth it covers both: if the evolutionary commands are in fact broadcast to the workflow's instances, we fall into the latter scheme.

The evolutionary strategy relies upon the notion of adjacency preserving task, which is more general than the unchanged task used in Qiu & Wong, 2007. It is inspired by van der Aalst's


concept that any workflow change must preserve the inheritance relationship between old and new templates (Aalst & Basten, 2002). Let us introduce some notations used in the sequel.

Let PN = (Old_P; Old_T; Old_A) be a base-level WF-net (better, its reification at the meta-level), and PN′ = (P′; T′; A′) be the resulting Petri net after some modifications², Old_N = Old_P ∪ Old_T, N′ = P′ ∪ T′. Symbols x and x̄ refer to a node "preserved" by change, considered in the context of PN and PN′, respectively.

Sets Del_N = Del_P ∪ Del_T, New_N = New_P ∪ New_T, and New_A, Del_A denote the base-level nodes/arcs added to and removed from PN, respectively. We assume that New_A ∩ Del_A = ∅. No other assumptions are made: for example, "moving" a given node across the base-level Petri net might be simulated by first deleting the node, then putting it back with new connections.

As explained in (Capra & Cazzola, 2009), the evolutionary framework (a transparent meta-level component), being in charge of carrying out evolution, rejects a proposed change if it is not consistent with respect to the current base-level reification.

Finally, NO_ADJ and NO_BYPS denote the tasks not preserving adjacency and the non-bypassable tasks, respectively (of course, NO_ADJ ⊆ NO_BYPS). Some of the symbols just introduced will be used as names for the evolutionary strategy parameters.

Definition 4 (adjacent set). Let t be a transition. The set of adjacent transitions At is:

•(•t ∪ t•) ∪ (•t ∪ t•)• \ {t}.

Definition 5 (adjacency preserving task). Let t ∈ Old_T, t̄ ∈ T′. Task t is adjacency preserving if and only if ∀x ∈ Old_T, x ∈ At ⇔ x̄ ∈ At̄, and there exists a bijection φ: •t ∪ t• → •t̄ ∪ t̄• such that

∀x ∈ At ∀y ∈ •t ∪ t•: y ∈ •x ⇔ φ(y) ∈ •x̄ and y ∈ x• ⇔ φ(y) ∈ x̄•

If t is adjacency preserving then all its causality/conflict relationships to adjacent tasks are maintained. A case where Definition 5 holds, and another where it does not, are illustrated in Figure 3 (the black bar denotes a new task; t′ is used instead of t̄). In case (b) the original input connections of t are maintained (output connections are unchanged): if the occurrence of t is made possible by the occurrence of some preceding tasks, the same may happen in the new situation. That is not true in case (c): the occurrence of the new task represents in fact an additional precondition for any subsequent occurrence of t.

Checking Definition 5 is computationally very expensive. However, if useless changes are forbidden, e.g., "deleting a given place p, then adding p′ inheriting from p all connections", or "adding an arc ⟨p, t⟩, then deleting p or t", the check's complexity can be greatly reduced.

Lemma 1 states some rules for identifying a superset Na of the tasks not preserving adjacency. It can be easily translated into an efficient meta-program routine. Almost always it comes to be Na ≡ NO_ADJ.

Lemma 1. Consider the set Na, built as follows:

p ∈ Del_P ⇒ •p ∪ p• ⊆ Na

t ∈ Del_T ⇒ •(•t) ∪ (t•)• ⊆ Na

⟨p, t⟩ ∈ Del_A ∨ ⟨t, p⟩ ∈ Del_A ⇒ •p ∪ p• ⊆ Na

⟨p, t⟩ ∈ New_A ∧ t ∈ Old_N ⇒ {t} ∪ D ⊆ Na, where D = •p ∪ p• if p ∈ Old_N, else D = ∅

Then NO_ADJ ⊆ Na.

The evolutionary meta-program is formalized in Figure 4. The use of a CSP-like syntax (Hoare, 1985; Capra & Cazzola, 2009) makes possible its automatic translation to a high-level Petri net


Figure 3. Definition 5 Illustrated

(the logical meta-level of the Reflective Petri nets layout). The meta-program is activated at any transition of state on the current workflow instance (shift-up), reacting to three different types of events. In the case of deadlock, a signal is sent to HQ, represented by a CSP process identifier. If the current instance has finished, and a "new instance" message is received, the workflow is activated. Instead, if there is an incoming evolutionary message from HQ, the evolutionary strategy starts running.

*[
VAR p, t, n: NODE;
VAR New_P, New_T, Old_N, Del_N, NO_BYPS: SET(NODE);
VAR New_A, Del_A: SET(ARC);
//receiving an evolutionary signal
HQ ? change-msg() -> [
  //receiving the evolutionary commands
  HQ ? New_P; HQ ? New_T; HQ ? New_A; HQ ? Del_A; HQ ? Del_N;
  //getting the WF-net reification
  Old_N = ReifiedNodes();
  //computing the non-bypassable tasks
  NO_BYPS = ccTo(notAdjPres());
  //changing the current reification
  newNode(New_P ∪ New_T); newArc(New_A);
  deleteArc(Del_A); delNode(Del_N);
  //checking the (new) WF-net's well-definiteness
  checkWfNet(); checkFc();
  /*there might be a deadlock, or a non-bypassable task is causally
    connected to a pending one ...*/
  !exists t in Tran, enab(t) or (exists t in Tran ∩ Old_N, enab(t) and
    !isEmpty(ccBy(t) ∩ NO_BYPS)) -> [restart()] //rejecting change
  shiftDown() //reflecting change
]
[]
#end=0 and !exists t in Tran, enab(t) -> [HQ ! notify-deadlock()]
[]
#end=1; HQ ? newInstance-msg() -> [flush(end); incMark(begin)]
]

Just after an evolutionary signal, HQ communicates the workflow nodes/connections to


Figure 4. Workflow’s evolutionary strategy

be removed/added. For the sake of simplicity we assume that change can only involve the workflow's topology. The (super)set of non-bypassable tasks is then computed.

After operating the evolutionary commands on the current workflow reification, Definition 2 and free-choiceness are tested on the newly changed reification. Following this, the strategy checks by reification introspection whether the suggested workflow change might cause a deadlock, or whether there might be any non-bypassable tasks causally-connected to an old task which is currently pending. In either case, a restart procedure takes the workflow reification back to the state before the strategy's activation. Otherwise, change is reflected to the base-level (shift-down). The scheme just described might be adopted for a wide class of evolutionary patterns.

Language keywords and routine calls are in bold. We recall (Capra & Cazzola, 2009) that type NODE represents a (logically unbounded) recipient of base-level nodes, and is partitioned into Place and Tran subtypes. The exists quantifier is used to check whether a net element is currently reified. The built-in routine ReifNodes computes the nodes belonging to the current base-level reification. The routine notAdjPres initializes the set of non-bypassable tasks to Na according to Lemma 1. The routines ccTo and ccBy compute the set of nodes the argument is causally connected to, and the set of nodes that are causally connected to the routine's argument, respectively.
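The causal-connection routines just described are, in essence, forward and backward reachability over the net's arcs, and the strategy's final acceptance test intersects them. The following Python sketch mirrors that logic; the representation (arcs as pairs) and all names are ours, invented for illustration, not the meta-program's actual implementation:

```python
from collections import deque

def _closure(arcs, roots, forward=True):
    """Strict transitive closure of roots along the arcs (BFS);
    roots themselves are included only if reached via a cycle."""
    succ = {}
    for a, b in arcs:
        if not forward:
            a, b = b, a  # walk the arcs backwards
        succ.setdefault(a, set()).add(b)
    seen, frontier = set(), deque(roots)
    while frontier:
        n = frontier.popleft()
        for m in succ.get(n, ()):
            if m not in seen:
                seen.add(m)
                frontier.append(m)
    return seen

def cc_to(arcs, roots):
    """Nodes the roots are causally connected to (descendants)."""
    return _closure(arcs, roots, forward=True)

def cc_by(arcs, roots):
    """Nodes causally connected to the roots (ancestors)."""
    return _closure(arcs, roots, forward=False)

def reject_change(arcs, no_adj, pending):
    """True iff some pending (enabled) old task has a non-bypassable
    task causally connected to it: the strategy then restarts."""
    no_byps = set(no_adj) | cc_to(arcs, no_adj)  # NO_ADJ is a subset of NO_BYPS
    return any(cc_by(arcs, {t}) & no_byps for t in pending)
```

Note that cc_by(t) excludes t itself, so a pending task that is itself non-bypassable (as Report and ProductChange are in the running example) does not by itself force a restart; only non-bypassable tasks strictly preceding a pending one do.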


On-the-Fly Structural Check

The structural changes proposed from time to time to a dynamic workflow can be validated by means of classical Petri nets analysis techniques. Validation is accomplished on the workflow reification "on-the-fly", i.e., while the evolutionary strategy is in progress. Thanks to a restart mechanism, potentially dangerous changes are discarded before they are reflected to the base-level, at the end of a meta-computation.

Routines checkWfNet and checkFc test the preservation of base-level Petri nets well-definiteness (Definition 2) and free-choiceness, respectively. Their calls are located in the meta-program just after the evolutionary commands, which operate on the base-level workflow reification.

[
VAR t, tx: Tran;
VAR p: Place;
*(<p,t> in New_A ∪ Del_A)
[
  exists(p) and exists(t) ->
  [
    *(tx in post(pre(t))/{t})
    [pre(t) <> pre(tx) -> restart();]
  ]
]
]

Free-choiceness preservation, in particular, may be checked in a simple, efficient way. Figure 5 expands the corresponding routine. It works under the following assumptions and principles:

•  the initial base-level Petri net is a free-choice WF-net (conservative hypothesis)
•  variables New_A, Del_A record all the arcs which are added/deleted to/from the base-level reification during the evolutionary strategy's execution; they are cleared at any meta-program activation;
•  the only operations affecting free-choiceness, under a conservative hypothesis, are the addition/removal of an input arc ⟨p,t⟩ (the removal of a node produces as a side-effect the withdrawal of all adjacent arcs, so it is fair with respect to free-choiceness)
•  for each arc ⟨p,t⟩ which has been added/removed we only have to check the free-choiceness between t and every transition sharing with t some input places.

Putting the Strategy to the Test

Let us explain how the strategy works, considering again Figures 1-2, upon receiving the evolutionary commands:

New_P = {}
New_T = {RealizationRejected}
Del_A = {⟨p13, ProductChange⟩}
Del_N = {}
New_A = {⟨p6, RealizationRejected⟩, ⟨p13, Archiving⟩, ⟨RealizationRejected, p5⟩}.

The non-bypassable tasks come to be: {Report, Archiving, ProductChange, OnSiteRealization, CaseClosure}. In the workflow instance running on the modified template (Figure 2), tasks (transitions) Report and ProductChange are pending (enabled) in the current state (marking) m: {p11, p14}. All preexisting completed tasks that are causally connected to one of them can be bypassed, so the new workflow does not have to be restarted from scratch, saving a lot of work.

The approach just described ensures a dependable evolution of workflows, while remaining flexible enough. We have not intended to propose a general solution to the particular problem addressed in Qiu & Wong, 2007. Better policies probably do exist. Rather, we have tried to show that an approach merging consolidated reflection concepts with classical Petri nets techniques can suitably address the criticisms of dynamic workflow change.
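For illustration, the effect of evolutionary commands like those of the running example (New_T = {RealizationRejected}, Del_A = {⟨p13, ProductChange⟩}, ...) can be mimicked on a toy reification, here a dictionary of place/transition/arc sets. This representation and the function name are ours, not the Reflective Petri nets or SmarTeam API; the node-deletion side effect follows the stated principle that removing a node withdraws all of its adjacent arcs:

```python
def apply_commands(net, new_p=(), new_t=(), new_a=(), del_a=(), del_n=()):
    """Apply evolutionary commands to a copy of a toy net reification,
    given as {"P": places, "T": transitions, "A": arcs (pairs)}."""
    out = {"P": set(net["P"]) | set(new_p),
           "T": set(net["T"]) | set(new_t),
           "A": (set(net["A"]) | set(new_a)) - set(del_a)}
    # deleting a node withdraws all of its adjacent arcs as a side effect
    dead = set(del_n)
    out["P"] -= dead
    out["T"] -= dead
    out["A"] = {(a, b) for a, b in out["A"]
                if a not in dead and b not in dead}
    return out

# the commands of the running example, on the nodes involved
net = {"P": {"p5", "p6", "p13"},
       "T": {"ProductChange", "Archiving", "OnSiteRealization"},
       "A": {("p13", "ProductChange")}}
new = apply_commands(net,
                     new_t={"RealizationRejected"},
                     del_a={("p13", "ProductChange")},
                     new_a={("p6", "RealizationRejected"),
                            ("p13", "Archiving"),
                            ("RealizationRejected", "p5")})
```

After the call, the arc ⟨p13, ProductChange⟩ is gone and RealizationRejected with its three new arcs is present, matching the change from Figure 1 to Figure 2.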


Figure 5. Meta-program’s routine checking free-choiceness
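The check performed by the routine of Figure 5 — for each arc ⟨p,t⟩ touched by the last evolutionary step, compare the preset of t with the preset of every transition sharing an input place with it — might be sketched in Python as follows (the arc-pair representation and names are ours, invented for illustration):

```python
def preset(arcs, t):
    """Input places of transition t."""
    return {p for p, x in arcs if x == t}

def free_choice_ok(arcs, touched_arcs, transitions):
    """Check free-choiceness locally, only around the input arcs
    added/removed by the last evolutionary step (cf. Figure 5):
    transitions sharing an input place must have equal presets."""
    for p, t in touched_arcs:
        if t not in transitions:
            continue  # the arc no longer connects existing nodes
        for tx in transitions:
            if tx != t and preset(arcs, tx) & preset(arcs, t):
                if preset(arcs, tx) != preset(arcs, t):
                    return False  # would trigger restart()
    return True
```

For example, with arcs {⟨p1,t1⟩, ⟨p1,t2⟩} the net is free-choice; adding ⟨p2,t1⟩ makes the presets of t1 and t2 differ while still sharing p1, so the check fails for the touched arc ⟨p2,t1⟩.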

Structural Base-Level Analysis. The base-level is guaranteed to be a free-choice WF-net all over its evolution: that makes it possible to use polynomial algorithms to check workflow soundness. In particular, techniques based on the calculus of flows (invariants) are elegant and very efficient. In general they are highly affected by the base-level Petri net complexity. The separation between evolutionary and functional aspects encourages their usage.

For instance, by operating the structural algorithms of the GreatSPN tool (Chiola, Franceschinis, Gaeta, & Ribaudo, 1995), it is possible to discover that both nets depicted in Figures 1-2 are covered by place-invariants. A lot of interesting properties thereby descend: in particular boundedness and liveness, i.e., workflow soundness.

Counter Example

Assume that evolution occurs when the only pending task is OnSiteRealization, i.e., consider as current marking of the net in Figure 1 m′: {p6}. That means, among other things, that tasks ProductChange, VersionMerging and Report have been completed: change in that case is discarded after having verified that there are some non-bypassable tasks which are causally connected to the pending one. If the suggested change were really carried out (reflected) on the base-level, without doing any consistency control, a deadlock would eventually be entered (state {p8}) after the workflow instance continues its run on the modified template. The problem is that m′ is not a reachable state of (PN′; {begin}), but reachability is NP-complete also in live and safe free-choice Petri nets, so it would make no sense checking reachability directly at the meta-program level.

Current Limitations

The proposed reflective model for dynamic workflows suffers from a major conceptual limitation: only the control-flow perspective is considered. Let us shortly discuss this choice. We abstract from the resource perspective because in most workflow management systems it is not possible to specify that several (human) resources are collaborating in executing a task. Even if multiple persons are executing a task, only one is allocated to it from the WMS perspective: the one who selected the work item from the in-basket. In contrast to other application domains such as flexible manufacturing systems, anomalies resulting from locking problems are not possible (it is reasonable to assume that each task will eventually be executed by the person having it in charge). Therefore, from the viewpoint of workflow verification, we can abstract


from resources. However, if collaborative features will be explicitly supported by WMSs (through a tight integration of groupware and workflow technology), then the resource perspective should be taken into account. We partly abstract from the data perspective. Production data can be changed at any time without notifying the WMS. Their existence does not even depend upon the workflow application and they may be shared among different workflows. The control data used to route cases are managed by the WMS. Some of these data are set or updated by humans or applications. Clearly, some abstraction is needed to incorporate the data perspective when verifying a given workflow. The abstraction currently used is the following. Since control data (workflow attributes such as the customer id, the registration date, etc.) are only used for the routing of a case, we model each decision as a non-deterministic choice. If we are able to prove soundness for the situation without workflow attributes, it will also hold for the situation with attributes. Abstracting from triggers and workflow attributes fits in the usage of ordinary Petri nets for the base-level of the reflective model: this is preferable because of the availability of efficient and powerful analysis tools.

CONCLUSION

Industrial/business processes are an example of discrete-event systems which are increasingly subject to evolution during their life-cycle. Covering the intrinsic dynamism of modern processes has been widely recognized as a challenge by designers of workflow management systems. Petri nets are a central model of workflows, but traditionally they have a fixed structure. We have proposed and discussed the adoption of Reflective Petri nets as a formal model for sound dynamic workflows. The clean separation between (current) workflow template and evolutionary strategy on one side, and the use of classical Petri nets notions (free-choiceness) on the other side, make it possible to efficiently check preservation of workflow structural properties, which in turn permit soundness, a major behavioral property, to be checked in polynomial time. All is done while evolution is in progress. An algorithm is delivered to soundly transfer workflow instances from an old to a new template, redoing already completed workflow tasks only when strictly necessary. The approach formalizes and improves a procedure currently implemented in an industrial workflow management system. We are studying the possibility of using even more general structural notions than free-choiceness, in particular S-coverability (Aalst, 1996), that provides in most practical cases a structural characterization of soundness.

REFERENCES

Agostini, A., & De Michelis, G. (2000, August). A Light Workflow Management System Using Simple Process Models. Computer Supported Cooperative Work, 9(3-4), 335–363. doi:10.1023/A:1008703327801

Badouel, E., & Oliver, J. (1998, January). Reconfigurable Nets, a Class of High Level Petri Nets Supporting Dynamic Changes within Workflow Systems (IRISA Research Report No. PI-1163). IRISA.

Cabac, L., Duvignau, M., Moldt, D., & Rölke, H. (2005, June). Modeling Dynamic Architectures Using Nets-Within-Nets. In G. Ciardo & P. Darondeau (Eds.), Proceedings of the 26th International Conference on Applications and Theory of Petri Nets (ICATPN 2005) (pp. 148-167). Miami, FL: Springer.

Capra, L., & Cazzola, W. (2007a, September 26th-29th). A Reflective PN-based Approach to Dynamic Workflow Change. In Proceedings of the 9th International Symposium in Symbolic and Numeric Algorithms for Scientific Computing (SYNASC'07) (pp. 533-540). Timisoara, Romania: IEEE CS.
choiceness) on the other side, make it possible

231
Trying Out Reflective Petri Nets on a Dynamic Workflow Case

Capra, L., & Cazzola, W. (2007b, December). Hoffmann, K., Ehrig, H., & Mossakowski, T.
Self-Evolving Petri Nets. Journal of Universal (2005, June). High-Level Nets with Nets and Rules
Computer Science, 13(13), 2002–2034. as Tokens. In G. Ciardo & P. Darondeau (Eds.),
Proceedings of the 26th International Conference
Capra, L., & Cazzola, W. (2009). An Introduction
on Applications and Theory of Petri Nets (ICATPN
to Reflective Petri Nets. In E. M. O. Abu-Atieh
2005) (pp. 268-288). Miami, FL: Springer.
(Ed.), Handbook of Research on Discrete Event
Simulation Environments Technologies and Ap- Maes, P. (1987, October). Concepts and Ex-
plications. Hershey, PA: IGI Global. periments in Computational Reflection. In N. K.
Meyrowitz (Ed.), Proceedings of the 2nd confer-
Chiola, G., Franceschinis, G., Gaeta, R., & Ribau-
ence on object-oriented programming systems,
do, M. (1995, November). GreatSPN 1.7: Graphi-
languages, and applications (OOPSLA’87) (Vol.
cal Editor and Analyzer for Timed and Stochastic
22, pp.147-156), Orlando, FL.
Petri Nets. Performance Evaluation, 24(1-2),
47–68. doi:10.1016/0166-5316(95)00008-L Qiu, Z.-M., & Wong, Y. S. (2007, June). Dynamic
Workflow Change in PDM Systems. Computers
Desel, J., & Esparza, J. (1995). Free Choice Petri
in Industry, 58(5), 453–463. doi:10.1016/j.com-
Nets (Cambridge Tracts in Theoretical Computer
pind.2006.09.014
Science Vol. 40). New York: Cambridge Univer-
sity Press. Reichert, M., & Dadam, P. (1998). ADEPTflex
- Supporting Dynamic Changes in Workflow
Ellis, C., & Keddara, K. (2000, August). ML-
Management Systems without Losing Control.
DEWS: Modeling Language to Support Dynamic
Journal of Intelligent Information Systems, 10(2),
Evolution within Workflow Systems. Computer
93–129. doi:10.1023/A:1008604709862
Supported Cooperative Work, 9(3-4), 293–333.
doi:10.1023/A:1008799125984 Reisig, W. (1985). Petri nets: An introduction
(EATCS Monographs in Theoretical Computer
ENOVIA. (2007, September). Dassault systèmes
Science Vol. 4). Berlin: Springer.
plm solutions for the mid-market [white-paper].
Retrieved from. http:/www.3ds.com/fileadmin/ Salimifard, K., & Wright, M. B. (2001, November).
brands/enovia/pdf/whitepapers/CIMdata-DS_ Petri Net-Based Modeling of Workflow Systems:
PLM_for_the_MidMarket-Program_review- An Overview. European Journal of Operational
Sep2007.pdf) Research, 134(3), 664–676. doi:10.1016/S0377-
2217(00)00292-7
Hicheur, A., Barkaoui, K., & Boudiaf, N. (2006,
September). Modeling Workflows with Recursive van der Aalst, W. M. P. (1996). Structural Char-
ECATNets. In Proceedings of the Eighth Inter- acterizations of Sound Workflow Nets (Computing
national Symposium on Symbolic and Numeric Science Reports No. 96/23). Eindhoven, the Neth-
Algorithms for Scientific Computing (SYNACS’06) erlands: Eindhoven University of Technology.
(p. 389-398). Timisoara, Romania: IEEE CS.
van der Aalst, W. M. P., & Basten, T. (2002,
Hoare, C. A. R. (1985). Communicating Sequen- January). Inheritance of Workflows: An Approach
tial Processes. Upper Saddle River, NJ: Prentice to Tackling Problems Related to Change. Theo-
Hall. retical Computer Science, 270(1-2), 125–203.
doi:10.1016/S0304-3975(00)00321-2

232
Trying Out Reflective Petri Nets on a Dynamic Workflow Case

van der Aalst, W. M. P., & Jablonski, S. (2000, September). Dealing with Workflow Change: Identification of Issues and Solutions. International Journal of Computer Systems, Science, and Engineering, 15(5), 267–276.

KEY TERMS AND DEFINITIONS

Evolution: attitude of systems to change layout/functionality.
Dynamic Workflows: models of industrial/business processes subject to evolution.
Petri Nets: graphical formalism for discrete-event systems.
Reflection: activity performed by an agent when doing computations about itself.
Workflow Nets: Petri net-based workflow models.
Soundness: behavioral property of a well-defined workflow net.
Structural Properties: properties derived from the incidence matrix of Petri nets.
Free-Choiceness: typical structural property which can be efficiently tested.

ENDNOTES

1. If we assume, as is reasonable in the workflow context, a strong notion of fairness: in every infinite firing sequence, each transition fires infinitely often.
2. We recall that in Reflective Petri nets any evolutionary strategy is defined in terms of basic operations on base-level's elements.


Chapter 11
Applications of Visual
Algorithm Simulation
Ari Korhonen
Helsinki University of Technology, Finland

ABSTRACT
Understanding data structures and algorithms is an integral part of software engineering and elementary
computer science education. However, people usually have difficulty in understanding abstract concepts
and processes such as procedural encoding of algorithms and data structures. One way to improve their
understanding is to provide visualizations to make the abstract concepts more concrete. In this chapter,
we represent a novel idea to promote the interaction between the user and the algorithm visualization
system called visual algorithm simulation. As a proof of concept, we represent an application framework
called Matrix that encapsulates the idea of visual algorithm simulation. The framework is applied by
the TRAKLA2 learning environment in which algorithm simulation is employed to produce algorithm
simulation exercises. Moreover, we discuss the benefits of such exercises and applications of visual
algorithm simulation in general.

INTRODUCTION

Data structures and algorithms are important core issues in computer science education. They are also complex concepts, thus difficult to grasp by novice learners. Fortunately, algorithm animation, visual debugging and algorithm simulation are all suitable methods to aid the learning process. Much research has been carried out to identify the great number of issues that we must take into account while designing and creating effective visualizations and algorithm animation for teaching purposes (Baecker, 1998; Brown & Hershberger, 1992; Fleischer & Kucera, 2001; Gloor, 1998; Miller, 1993). See, for example, the techniques developed for using color and sound (Brown & Hershberger, 1992) or hand-made designs (Fleischer & Kucera, 2001) to enhance the algorithm animations. We argue, however, that these are only minor details (albeit important ones) in the learning process as a whole.

DOI: 10.4018/978-1-60566-774-4.ch011

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
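The abstract's idea of algorithm simulation exercises with automatic feedback can be made concrete with a toy checker: the learner submits the sequence of data-structure states his or her simulation produces, and it is compared against the states generated by executing a reference algorithm. Everything below (a selection-sort reference that records the array after every swap, the prefix-matching score) is invented for illustration and is not the actual TRAKLA2/Matrix implementation:

```python
def reference_states(keys):
    """States of a hypothetical selection-sort reference run:
    the array is recorded after every swap."""
    a, states = list(keys), []
    for i in range(len(a)):
        j = min(range(i, len(a)), key=a.__getitem__)
        if i != j:
            a[i], a[j] = a[j], a[i]
            states.append(list(a))
    return states

def assess(learner_states, keys):
    """Grade a learner's simulation: count the matching prefix of
    states, stopping at the first divergence."""
    ref = reference_states(keys)
    score = 0
    for got, want in zip(learner_states, ref):
        if got != want:
            break
        score += 1
    return score, len(ref)
```

For keys [3, 1, 2] the reference states are [1, 3, 2] and then [1, 2, 3]; a learner who reproduces both scores 2 out of 2, while one who jumps straight to the sorted array scores 0, since the first recorded state already diverges.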

In order to make a real difference here, we should change the point of view and look at the problem from the learner's perspective. How can we make sure the learner actually gets the picture? It is not what the learner sees but what he or she does.

In addition, we argue that no matter what kind of visualizations the teacher has available, the tools cannot compete in their effectiveness with environments in which the learner must perform some actions in order to become convinced of his or her own understanding.

From the pedagogical point of view, for example, a plain tool for viewing the execution of an algorithm is not good enough (C. D. Hundhausen, Douglas, & Stasko, 2002; Naps et al., 2003). Even visual debugging cannot cope with the problem because it is always bound to the actual source code. It is still the system that does all the work and the learner only observes its behavior. At least we should ensure that a level of progress in learning has taken place. This requires an environment where we can give and obtain feedback on the student's performance.

Many ideas and systems have been introduced to enhance the interaction, assignments, mark-up facilities, and so on, including (Astrachan & Rodger, 1998; Brown & Raisamo, 1997; Grillmeyer, 1999; Hansen, Narayanan, & Schrimpsher, 2000; Mason & Woit, 1999; Reek, 1989; Stasko, 1997). On the other hand, the vast masses of students in basic computer science classes have led us into the situation in which giving individual guidance to a single student is impossible even with semi-automated systems. Thus, a kind of fully automatic instructor would be useful, such as (Baker, Boilen, Goodrich, Tamassia, & Stibel, 1999; Benford, Burke, Foxley, Gutteridge, & Zin, 1993; Bridgeman, Goodrich, Kobourov, & Tamassia, 2000; English & Siviter, 2000; Higgins, Symeonidis, & Tsintsifas, 2002; Hyvönen & Malmi, 1993; Jackson & Usher, 1997; Reek, 1989; Saikkonen, Malmi, & Korhonen, 2001). However, the topics of data structures and algorithms are often introduced on a more abstract level than those of basic programming courses. We are more interested in the logic and behavior of an algorithm than its implementation details. Therefore, systems that grade programming exercises are not suitable here. The problem is to find a suitable application framework for a system that is capable of interacting with the user through canonical data structure illustrations on this logical level and giving feedback on his or her performance. The aim is to extend the concept of direct manipulation (Stasko, 1991, 1998) to support not only manipulation of a visualization but also of the real underlying data structures that the visualization reflects. It is a kind of combination of direct manipulation and visual debugging in which the user can debug the data structures through a graphical user interface. Our approach, however, allows the user to manipulate the data structures on two different levels. First, on the low level, the data structures and the data they contain can be altered, for example, by swapping keys in an array. Second, on the higher level, the framework can provide ready-made algorithms that the user can execute during the manipulation process. Thus, instead of swapping the keys, the user can sort the whole array with one command. In addition, the high-level algorithms can be simulated in terms of the low-level operations. Thus, the simulation process can be verified by comparing it to the execution of an actual algorithm. Quite close to this idea comes PILOT (Bridgeman et al., 2000), in which the learner solves problems related to graph algorithms and receives a graphical illustration of the correctness of the solution, along with a score and an explanation of the errors made. However, the current tool covers only graph algorithms, and especially the minimum spanning tree problem. Hence, there is no underlying general purpose application framework that can be extended to other concepts and problem types such as trees, linked lists, and arrays.

In this chapter, we will introduce the concept of algorithm simulation to reach the goal set and fill the gap between visual debuggers and real-

235
Applications of Visual Algorithm Simulation

time algorithm simulation. The idea is to develop BST insertion routine. Now, one faces the ques-
a general purpose platform for illustrating all tion of balancing the tree. First, rotations can be
the common data structure abstractions applied studied on the detailed level by dragging edges
regularly to illustrate the logic and behavior of into the new positions and by letting the system
algorithms. Moreover, the platform is able to al- redraw the tree. Second, after mastering this,
low user interaction in terms of visual algorithm the available rotation commands can be invoked
simulation. As an application, we support exercises directly from menus. Finally, one can choose to
in which automatically generated visual feedback work on AVL trees, create a new tree from the
is possible for algorithm simulation exercises. menu, and use the AVL-tree insertion routine to
We call such a process automatic assessment add new elements into the search tree. In addition,
of algorithm simulation exercises (Korhonen & one can experiment on the behavior of AVL trees
Malmi, 2000). by creating an array of keys, and by dragging the
whole array into the title bar quickly creating a
Organization of this Chapter large example tree. The system inserts the keys
from the array one by one into the tree using the
The next sections first introduce the concept of implemented insertion routine. Moreover, the
visual algorithm simulation and the method of result is not a static tree, but a sequence of states
algorithm simulation exercises. In addition, the of the tree between the single insertions, which
literature survey compares the above with work can be stepped back and forth to examine the
of others. After that, the Matrix algorithm simula- working of the algorithm more closely similar
tion framework is described, and we show how to to that in any algorithm animation system. Thus,
apply the application framework to come up with the vision above extends the concept of algorithm
visual algorithm simulation exercises. Finally, we animation by allowing the user to interact with the
conclude our ideas and addresses some future system in more comprehensive level than that of
research topics. a simple animator panel can provide (step, back,
forward, etc.).

ALGORITHM SIMULATION Visual Algorithm Simulation

The goal of Algorithm Simulation is to further the In visual algorithm simulation, we are interested
understanding of algorithms and data structures in the detailed study of the dynamic behavior of
inductively, based on observations of an algorithm data structures. Sequences of data structures are
in operation. In the following, we first give a vi- directly represented through event procedures.
sion by an example what kind of operations such Each event altering any part of the data structure
a method should provide. is completed by a GUI operation updating the cor-
Consider a student studying basic search trees responding visualization. This paradigm permits
who can perform operations on a binary search the representation of systems at an essentially un-
tree (BST) by dragging new keys into the cor- limited level of detail. Simulation experiments are
rect leaf positions in the tree, thus simulating the performed under the control of an animator entity
BST insertion algorithm. After mastering this, with event process orientation. Model executions
one can switch to work on the conceptual level are guarded by an algorithm or by human interac-
and drag the keys into data structure. The keys tion. However, an algorithm here corresponds to
are inserted into the tree by the pre-implemented a code implemented by a developer – not by the

236
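As a concrete point of reference for the scenario above, the following minimal sketch shows the kind of insertion routine such an exercise simulates. The code is illustrative only; the class and method names are ours, not part of the Matrix API.

```java
// Minimal sketch of the BST insertion routine that such an exercise simulates.
// The class and method names are illustrative, not part of the Matrix API.
public class BstSketch {
    static final class Node {
        int key;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    Node root;

    // Insert a key exactly where a learner would drag it: into the empty
    // leaf position found by comparing keys on the way down.
    void insert(int key) { root = insert(root, key); }

    private static Node insert(Node n, int key) {
        if (n == null) return new Node(key);           // empty position reached
        if (key < n.key) n.left = insert(n.left, key);
        else             n.right = insert(n.right, key);
        return n;
    }

    // In-order traversal: a correctly built BST yields its keys in sorted order.
    String inorder() {
        StringBuilder sb = new StringBuilder();
        inorder(root, sb);
        return sb.toString().trim();
    }

    private static void inorder(Node n, StringBuilder sb) {
        if (n == null) return;
        inorder(n.left, sb);
        sb.append(n.key).append(' ');
        inorder(n.right, sb);
    }
}
```

Dragging a whole array into the title bar, as described above, would then correspond to calling insert once for each key of the array, recording one state per insertion.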
Visual Algorithm Simulation

In visual algorithm simulation, we are interested in the detailed study of the dynamic behavior of data structures. Sequences of data structures are directly represented through event procedures. Each event altering any part of the data structure is completed by a GUI operation updating the corresponding visualization. This paradigm permits the representation of systems at an essentially unlimited level of detail. Simulation experiments are performed under the control of an animator entity with event process orientation. Model executions are guarded by an algorithm or by human interaction. However, an algorithm here corresponds to code implemented by a developer – not by the user itself. Thus, the user does not code anything while simulating algorithms. Instead, the user can execute predefined algorithms or take any other allowed action to alter the data structures by means of simple GUI operations. Actually, the code does not even have to be visible during the simulation. Thus, one can simulate algorithms that do not even exist yet. However, in case we apply this method, for example, for algorithm simulation exercises (as we will do in the next section), there is usually a need to also represent the code that the learner is supposed to follow while solving the exercise (i.e., while simulating the corresponding algorithm in the exercise).

Figure 1. An overview of algorithm animation and visual algorithm simulation. Five cycles of interaction can be identified. A typical Control Interface allows (1) to customize the layout, change the speed and direction of the animation, etc., and (2) to manipulate the data structures by invoking predefined operations. Not only is (3) Direct Manipulation possible, but (4) the changes can also be delivered into the underlying data structures by means of Visual Algorithm Simulation. In addition, (5) even the underlying data structures can be passed to the algorithm as an input by utilizing Visual Algorithm Simulation functionality.

The manipulation process is conceptually the opposite of algorithm animation with respect to the information flow. Where algorithm animation visually delivers the information from the system to the user, direct manipulation delivers the input from the user to the system through a graphical user interface (see (3) in Figure 1). Generally, if human interaction is allowed between the visual demonstration of an algorithm and the user in such a way that the user can directly manipulate the data structure representation, the process is called direct manipulation. However, visual algorithm simulation allows the user to directly manipulate not only the representation but also the underlying implemented data structure (see (4) and (5) in Figure 1).

Matrix is an application framework for algorithm visualization tools that encapsulates the idea of visual algorithm simulation. The system seamlessly combines algorithm visualization, algorithm animation, and visual algorithm simulation, and provides a novel approach for the user to interact with the system.

Simulation Techniques

We do not know of any other system that is capable of direct manipulation in terms of visual algorithm simulation similar to Matrix. Astrachan et al. discuss simulation exercises while introducing the Lambada system (Astrachan & Rodger, 1998; Astrachan, Selby, & Unger, 1996). However, their context is completely different, because the students simulate models of practical applications, partly coded by themselves, and the system is used only for illustrating the use of primitive data structures without much interaction with the user.

On the other hand, GeoWin (Bäsken & Näher, 2001) is a visualization tool for geometric algorithms in which the user can manipulate a set of actual geometric objects (e.g., geometric attributes of points in a plane) through the interactive interface. However, the scope of the system is quite different from that of Matrix. While all the relations between objects in GeoWin are determined by their coordinates in some geometric space, the relations between the underlying object instances in Matrix are determined by their interconnected references.

Closer to Matrix, in this sense, come CATAI (Cattaneo, Italiano, & Ferraro-Petrillo, 2002) and JDSL Visualizer (Baker et al., 1999). They allow the user to invoke some methods on the running algorithm. In terms of method invocations, it is directly possible to access the content of the data structures or to execute a piece of code encapsulated in an ordinary method call. However, Matrix also provides a user interface and an environment for this task in terms of direct manipulation. Thus, Matrix not only allows method invocations, but also provides the facility to simulate an algorithm on a more abstract level by dragging & dropping new input data at any time onto the corresponding data structure.

Moreover, Matrix is designed for students working on a higher level of abstraction than, for example, JDSL Visualizer or AlvisLive! (C. Hundhausen & Brown, 2005). In other words, both of these tools are designed for a programming course to provide interactive debugging tools for educational purposes (Program Visualization and Animation), while the representations in Matrix are intended for a data structures and algorithms course to illustrate and grasp the logic and concepts of data structures and algorithms (Algorithm Visualization and Animation). Of course, both kinds of tools are fit for use.

In Matrix, the user can directly change the underlying data structure on the fly through the graphical user interface. Also, for example, Animal (Rößling, Schüler, & Freisleben, 2000) and Dance (Stasko, 1991, 1998) both have the look and feel of building an algorithm animation by demonstration. In addition, the JAWAA (Pierson & Rodger, 1998) editor is capable of producing scripting language commands that can be animated. However, within these systems the user does not manipulate an actual data structure, but only a visualization of an alleged structure. The system produces the algorithm animation sequence based on the direct manipulation. However, while creating, for example, an AVL tree demonstration, it is the user's concern to keep the tree balanced. In Matrix, several levels of interaction are possible: one can manipulate a tree as with Animal, or it is also possible to invoke an actual insert method for the AVL tree that inserts an element into the appropriate position. The actual underlying structure is updated and the new state of the structure is visualized for the user. Another system that allows the user to simulate algorithms in this sense is PILOT (Bridgeman et al., 2000). However, it is targeted only at graph algorithms. Moreover, the user is only allowed to interact with some attributes of edges and vertices (e.g., change the color) and not with the structure itself.

Finally, Matrix is implemented in Java, which gives more flexibility in terms of platform independence compared to older systems such as Amethyst (Myers, Chandhok, & Sareen, 1988) and UWPI (Henry, Whaley, & Forstall, 1990). Of course, Java has its own restrictions, but Java together with the WWW has given a new impetus to algorithm visualization techniques.

MATRIX SIMULATION FRAMEWORK

Two kinds of functionality are provided for interaction with the Matrix system. First, control over the visualization is required, for example, in order to adjust the amount of detail presented in the display, to navigate through large data structures, or to control the speed and direction of animations. In Figure 1, the Control Interface allows this kind of functionality.
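Animation control of this kind can be sketched as a recorder of discrete states that the control panel steps backward and forward through. The following is an illustrative sketch under our own naming, not actual Matrix code.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not Matrix code) of an animator that records discrete
// states so the animation can be stepped backward and forward.
public class AnimatorSketch {
    private final List<String> states = new ArrayList<>();
    private int current = -1;

    /** Record a new state; every change to the structure ends up here. */
    void record(String state) {
        states.add(state);
        current = states.size() - 1;   // recording moves the cursor to the newest state
    }

    /** Step one state backward (stays at the first state if already there). */
    String backward() {
        if (current > 0) current--;
        return states.get(current);
    }

    /** Step one state forward (stays at the last state if already there). */
    String forward() {
        if (current < states.size() - 1) current++;
        return states.get(current);
    }
}
```

A real animator would store structural snapshots rather than strings, but the stepping logic is the same.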
A considerably large number of systems exist providing miscellaneous sets of functionality for these purposes. For example, Dynalab (Boroni, Eneboe, Goosey, Ross, & Ross, 1996) supports flexible animation control for executing animations forward and backward. On the other hand, Brown (1988) was the first to introduce support for custom input data sets.

Second, some meaningful ways to perform experiments are needed in order to explore the behavior of the underlying structure. Here, Matrix allows the user to change the state of the underlying data structure in terms of direct manipulation (see algorithm simulation in Figure 1). The manipulation events are targeted to the visualization and correspond to the changes mentioned earlier. Again, the display is automatically updated to match the current state after each change. Moreover, all the changes are recorded in order to be reversed through the control interface. Therefore, this second item is virtually our primary contribution and is the type of interaction we mean by algorithm simulation. Of course, both kinds of functionality are needed for exploring the underlying structure.

Figure 2. Arrays, lists, trees and graphs are important fundamental data types, i.e., reusable abstractions regularly used in computer science. The figure above depicts these concepts printed from the Matrix system.

Elements of Visualization

From the user's point of view, Matrix operates on a number of visual concepts, which include arrays, linked lists, binary trees, common trees, and graphs, as depicted in Figure 2. Many of the basic layouts for these representations are based on the algorithms introduced in the literature of information visualization (see, for example, Battista, Eades, Tamassia, & Tollis, 1999). We call the corresponding real underlying data structures Fundamental Data Types (FDT). An FDT is a data structure that has no semantics, i.e., it does not commit to the data it stores, but merely ignores the type of the data.

We distinguish FDTs from Conceptual Data Types (CDT), which also carry semantics about what kind of information is stored in them. Any CDT can be composed of FDTs, however, and thus visualized by the same visual concepts as FDTs. For example, a binary search tree can reuse the binary tree FDT, and thus can be visualized by the same representation.
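The FDT/CDT distinction can be expressed in code roughly as follows. The interfaces below are our own illustrative sketch, not the actual Matrix interfaces: the binary tree FDT stores keys without committing to their meaning, while the binary search tree CDT reuses it and adds the ordering semantics.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the FDT/CDT split; not the actual Matrix interfaces.
@SuppressWarnings("unchecked")
public class FdtCdtSketch {
    /** Fundamental data type: structure only, no commitment to the data. */
    interface Fdt { List<Object> elements(); }

    /** Conceptual data type: adds semantics (what insert and search mean). */
    interface Cdt { void insert(Object key); boolean search(Object key); }

    /** Binary tree FDT: plain linked nodes; ignores the type of the keys. */
    static class BinaryTree implements Fdt {
        Object key;                       // null marks an empty node
        BinaryTree left, right;
        public List<Object> elements() {
            List<Object> out = new ArrayList<>();
            collect(this, out);
            return out;
        }
        private static void collect(BinaryTree n, List<Object> out) {
            if (n == null || n.key == null) return;
            collect(n.left, out);
            out.add(n.key);               // in-order traversal
            collect(n.right, out);
        }
    }

    /** BST CDT: reuses the binary tree FDT, so it is visualized the same way. */
    static class BinarySearchTree implements Cdt {
        final BinaryTree root = new BinaryTree();
        public void insert(Object key) {
            BinaryTree n = root;
            while (n.key != null) {
                n = ((Comparable<Object>) key).compareTo(n.key) < 0 ? n.left : n.right;
            }
            n.key = key;                  // occupy the empty node ...
            n.left = new BinaryTree();    // ... and grow two new empty subtrees
            n.right = new BinaryTree();
        }
        public boolean search(Object key) {
            BinaryTree n = root;
            while (n != null && n.key != null) {
                int c = ((Comparable<Object>) key).compareTo(n.key);
                if (c == 0) return true;
                n = c < 0 ? n.left : n.right;
            }
            return false;
        }
    }
}
```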
Next, we introduce the four basic entities that can all be subjected to changes in the Matrix application framework. All the visual concepts can be assembled by integrating the following parts into a whole. First, a visual container is a graphical entity that corresponds to the overall visual structure and may contain any other visual entities. Second, visual components are parts of visual containers that are connected to each other by visual references. Finally, the visual components can hold another visual container, recursively, or a visual key, which has no internal structure.

For example, in Figure 2, the Tree layout is composed of 9 visual components, each holding a visual key (denoted by the letters A, N, E, X, A, M, P, L, and E). The components are connected to each other by eight visual references to form a tree-like structure drawn from left to right (the default orientation, however, is top to bottom). In addition, one reference is needed to point into the root of the tree. Finally, the frame around the structure corresponds to the visual container that responds to events targeted to the whole structure (for example, in a binary search tree, the user can insert new keys into the structure by dragging & dropping the keys into the frame).

The user can interact with all the entities described above as far as the underlying structure allows the changes. For example, the user may insert or delete components, replace any key with another one (again, a simple drag & drop of any visible key onto the target key is sufficient), or rename keys. On the other hand, the structure can similarly be changed by setting a visual reference to point to another visual component. Finally, the visual container (whole structure) can be attached to a visual component, resulting in a nested visualization of two or more concepts. This is an important feature providing new opportunities to create complex structures such as adjacency lists or B-trees. Matrix does not restrict the level of nesting. Moreover, context-sensitive operations can also be invoked for such composed structures.

Figure 3. Matrix control panel.

Control over the Visualization

Object-oriented modeling seems to offer a natural way to implement these kinds of systems. The entity descriptions encapsulate the label, the structure, and its behavior into one organizational unit. The changes in the data structure can then be illustrated by a sequence of discrete states or by a continuous process. Either the algorithm or the human controlling the model can be the dominant participant. In order to integrate all of this, the animator must continually check for both the state and the control events. These are defined in the following.

The control events are trivially obtained by implementing a control panel in which the control operations are supported. The basic set of control operations is illustrated in Figure 3. The actions these operations take are obvious.

Moreover, the user has several other control operations to perform. These operations influence the layout of the visualization and are implemented by most modern algorithm animation systems. Thus, we only summarize them briefly.
See, for example, A Principled Taxonomy of Software Visualization by Price, Baecker, & Small (1993) for more details.

1. Multiple views: The user can open a structure or part of it in the same or a new window.
2. Multiple layouts: The user can change the layout of the structure.
3. Orientation: The user can change the orientation of the layout (rotate, mirror horizontally, and mirror vertically).
4. Granularity and elision control: The user can minimize a structure or part of it, and hide unnecessary details such as the title, the indices of arrays, the direction of references, back, forward, or cross edges in graphs, and empty subtrees.
5. Labeling: The user can name or rename entities.
6. Security: The user can disable or enable entities in order to allow or prevent them from responding to particular events.
7. Scaling: The user can change the font type or font size.

Finally, the system provides several additional features that can be used to finalize a task. First, the current implementation includes an extensive set of ready-made data structures and algorithms that can be directly summoned from the menu bar. The implemented fundamental data types are the array, the linked list, several tree structures, and graphs. Moreover, many CDTs are also included, such as the Binary Search Tree, 2-3-4-Tree, Red-Black Tree, Digital Search Tree, Radix Search Trie, Binary Heap, AVL Tree, and Splay Tree. In addition, the prototype includes several non-empty structures that are already initialized to contain useful data for simulation purposes, for example, a set of keys in alphabetical or in random order. Second, the produced algorithm animation can be exported in several formats: for example, as a smooth SVG (Scalable Vector Graphics) animation to be embedded into a Web page, or as a series of still images in TeXdraw format (Kabal, 1993) that can be embedded in a LaTeX source file. Matrix also supports text formats to import and automatically create various data structures from edge lists and adjacency lists. Third, the current job can be saved into the file system and loaded back into the Matrix system. Here, the whole animation sequence, i.e., the invoked operations and the states of the underlying data structure, can be stored and restored by means of Java serialization.

Actions on the Underlying Data Structures

The set of state events may vary among the models to be simulated due to the CDTs, in which the conceptual model can be subjected to several operations. On the other hand, there exists only a limited set of actions in the GUI that a user may perform during the execution of a simulation. Thus, the simulation environment should be responsible for mapping these actions to particular state events. The same action might lead to a very different result depending on the model to be simulated. On the other hand, there might be a situation in which some actions are not allowed, and thus the model should be aware of this and throw an exception.
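This mapping responsibility can be sketched as follows; the names are hypothetical, not taken from Matrix. The same generic GUI action yields different state events for different models, and a model may refuse an action by throwing an exception.

```java
// Hypothetical sketch of mapping generic GUI actions to model-specific state
// events; the same action means different things for different models.
public class ActionMappingSketch {
    enum Action { DRAG, DROP }

    /** Each simulated model maps generic GUI actions to its own state events. */
    interface Model {
        String stateEventFor(Action action);
    }

    /** For a conceptual data type such as a search tree, a drop means insert. */
    static class SearchTreeModel implements Model {
        public String stateEventFor(Action action) {
            return action == Action.DROP ? "insert" : "read";
        }
    }

    /** A model that does not allow manipulation signals this with an exception. */
    static class ReadOnlyModel implements Model {
        public String stateEventFor(Action action) {
            throw new UnsupportedOperationException("action not allowed on this model");
        }
    }
}
```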
Naturally, the set of actions above could be targeted at any of the visual objects (visual container, visual component, visual reference, and visual key). Nevertheless, the functionality may differ from object to object. However, the simulation model gives a meta-level proposition for the actions as described above. We discuss this general set of actions here. Furthermore, a number of exceptions are discussed briefly.

Read and Write Operations. From the animator's point of view, the only operations we are interested in are those that in some way manipulate the data structure to be visualized. In procedural programming languages, this manipulation takes the form of assignments. An assignment is the operation that stores the value of an expression in a variable. Thus, a visual component is considered to be the expression above; the variable refers to some other visual component's key attribute. Moreover, we may assign any FDT into the key, because the data structure the visual component points to (represents) is by definition a fundamental data type.

A single drag on an object assigns an internal reference to point into that object. The object could be rewritten to some other place by dropping it on the target object. The object read can be a single piece of data (a datum) as well as, for example, a subtree of a binary search tree. A drop on a visual data object causes the action to be delivered to the component holding the data object. This is because the target object is considered to be the value of an expression, and thus cannot possibly change or replace itself. If the component similarly cannot handle the write operation, the action is forwarded to the container holding the component with the additional information of where the write operation occurred (especially, if the container is an array, the array position component invokes the write operation of the array with the additional information of the actual index where the action took place). Usually, the object read is rewritten as the key of the target object. Thus, we may end up having very complex structures if, for example, we drag a binary tree and drop it into some position of an array, and then again drag this array and drop it into some other tree structure.

Insert Operation. The drag-and-drop action is sometimes referred to as an insert operation. This is especially true in the case of conceptual data types such as binary search trees and heaps. Thus, the insert operation is executed by dragging and dropping the object to be inserted (the source) into a container (the target), for example onto its label at the top of the frame. Thus, the target must implement a special insert method that takes the source to be inserted as an argument. This method is declared in the special interface CDT, which indicates that the visualized structure is a conceptual data type capable of handling such an action. This interface declares other methods, such as delete and search, as well. If the target is not a CDT, the drag & drop action is interpreted as the write action.

Pointer Manipulation. The visual references also have functionality of their own, especially if it is implemented for the underlying structure. However, such pointer manipulation requires a different approach, because changing a reference to point into another object may lead to the whole structure becoming inconsistent. Parts of the whole structure may become unreachable for a moment, until a whole set of pointer operations is performed. Thus, pointer operations are queued to be invoked only when all the corresponding operations are completed. This kind of encapsulation ensures that the user can perform any combination of pointer manipulations on the underlying data structure.
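The queueing idea can be sketched as follows (illustrative code, not Matrix's implementation): pointer changes are recorded and applied only when the whole batch is committed.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch (not Matrix code) of queueing pointer operations so the
// structure is never observed in an inconsistent intermediate state.
public class PointerQueueSketch {
    static class Node {
        final String name;
        Node next;
        Node(String name) { this.name = name; }
    }

    private final Deque<Runnable> pending = new ArrayDeque<>();

    /** Record a pointer change instead of applying it immediately. */
    void setNext(Node from, Node to) {
        pending.add(() -> from.next = to);
    }

    int pendingCount() { return pending.size(); }

    /** Apply the whole batch of recorded pointer changes at once. */
    void commit() {
        while (!pending.isEmpty()) pending.poll().run();
    }
}
```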
Instrumental Operations. The GUI can also provide several instrumental operations that help the user to perform complex tasks more easily. For example, the user can turn the drag & drop facility into swap mode, in which not only is the source structure inserted into the target structure, but also vice versa. Thus, the GUI swaps the corresponding structures without any visible temporary variables. Another example is the insertion of multiple keys with one single GUI operation by dragging & dropping a whole array of keys into the title bar of the target structure visualization. The GUI inserts the keys in the array one at a time.

Other Operations. The default behavior of the drag & drop functionality can be changed. For example, it can be interconnected to the delete operation declared in the interface CDT. In addition, such miscellaneous operations can be declared in a pop-up menu attached to all visual objects. By default, the delete operation is included there. Similarly, almost any kind of additional operation could be implemented. Thus, the actual set of possible simulation operations depends heavily on the procedures allowed by the programmer of the underlying data structure. For example, the AVL tree visualization may include a special set of rotate operations to illustrate the behavior of this particular balanced search tree.

Decoration. There also exist certain predefined decoration operations in the pop-up menus attached to each entity. For example, the label (name) of a component can be enabled, disabled, and set by selecting the proper operation. Note, however, that in this case the label is not necessarily a property of the actual visualized data structure but rather a property of the visualization, and thus does not appear in every visual instance of the underlying structure.

The discussion of extending the simulation functionality from the visualizer's perspective is left out of this chapter.

VISUAL ALGORITHM SIMULATION EXERCISES

One of the main applications for Matrix and algorithm simulation is TRAKLA2, a web-based learning environment dedicated to distributing visual algorithm simulation exercises (Malmi et al., 2004) in data structures and algorithms courses. Some other significant applications for the same framework exist as well (see, e.g., MatrixPro (Karavirta, Korhonen, Malmi, & Stålnacke, 2004) or MVT (Lönnberg, Korhonen, & Malmi, 2004)). However, we have chosen to use TRAKLA2 as an example to illustrate the concept in practice for two reasons. First, it is the main application for which we developed the framework in the first place. Second, several universities in Finland and the US have already adopted the TRAKLA2 system in their courses. Thus, we have very good results showing that the concept also works in practice.

In visual algorithm simulation exercises, the learner is supposed to drag & drop graphical entities such as keys, nodes, and references to new positions on the screen to simulate the operations a real algorithm would do. Consequently, the system performs the corresponding changes on the underlying data structures, and thereafter redraws the display. An example assignment could be "Insert the keys I, P, B, T, R, Q, F, K, X, and U into an initially empty red-black tree." The task is to show how the corresponding data structure evolves during the insertions. The insertions are performed by dragging & dropping the keys from a stream of keys into the target data structure (a binary tree) and selecting a balancing operation, if necessary. Initially, the tree is empty and contains only one node (the root of the tree). Each insertion into an empty node alters the data structure such that the node is occupied by the corresponding inserted key and two new empty subtrees appear below the node.

The exercise appears as an applet (see Figure 4) that portrays individually tailored exercises in which each learner has different initial input data structures that he or she manipulates by means of the available functionality. Actually, an exercise is individually tailored each time the exercise is initialized. This feature has several advantages over traditional homework exercises, as learners can freely collaborate without the temptation to copy answers from each other. In addition, the system is capable of recording the sequence of simulation operations and automatically assessing the exercise by comparing it with another sequence generated by the actual implemented algorithm. Based on this, TRAKLA2 can give immediate feedback to the learner. This supports self-study at any place or time.
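The comparison-based assessment can be sketched as follows. This is a simplified illustration of the idea, not TRAKLA2's actual grading code; one simple policy is to count the matching steps from the beginning of the learner's and the model's state sequences.

```java
import java.util.List;

// Hypothetical sketch of the assessment idea: compare the learner's sequence of
// data structure states against the sequence produced by running the implemented
// algorithm, counting the correct steps from the beginning.
public class AssessmentSketch {
    /** Number of leading steps in which the learner's state matches the model solution. */
    static int correctSteps(List<String> learnerStates, List<String> modelStates) {
        int n = Math.min(learnerStates.size(), modelStates.size());
        int correct = 0;
        for (int i = 0; i < n; i++) {
            if (!learnerStates.get(i).equals(modelStates.get(i))) break;
            correct++;
        }
        return correct;
    }
}
```

In practice the states would be structural snapshots of the simulated data structure rather than strings.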
Figure 4 shows an example exercise in which the learner is to insert (drag and drop) the keys from the given array (the initial data) one by one into the red-black tree. After completing the exercise, the learner can ask the system to grade his or her performance. The feedback received contains the number of correct steps out of the maximum number of steps, which is turned into points relative to the other exercises. The final points are determined by the best points received among all the submissions for a single exercise. The learner can also view the model solution for the exercise as an algorithm animation sequence at any time. The model solution is shown as a sequence of discrete states of the data structures, which can be browsed backwards and forwards in the same way as the learner's own solution.

Figure 4. The TRAKLA2 exercise window (an applet) includes data structures and push buttons to manipulate them in addition to the GUI operations (click, drag & drop, etc.). In addition, the model answer for the exercise can be opened in a separate window.

After viewing the grading results, the learner can either submit the solution or restart the same exercise with different input data. If the model solution has been requested, grading and submission are disabled until the exercise has been reset again.
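This submission rule can be sketched as a small state holder (illustrative only, not TRAKLA2 code):

```java
// Illustrative sketch of the submission rule described above: once the model
// solution has been requested, grading and submission stay disabled until the
// exercise is reset with new input data.
public class SubmissionStateSketch {
    private boolean modelSolutionRequested = false;

    void requestModelSolution() { modelSolutionRequested = true; }

    /** Resetting re-initializes the exercise and re-enables grading. */
    void reset() { modelSolutionRequested = false; }

    boolean canSubmit() { return !modelSolutionRequested; }
    boolean canGrade()  { return !modelSolutionRequested; }
}
```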
Applications of Visual Algorithm Simulation

learning. As reported by Baker et al. (1999), some students have difficulties seeing the systems as complementary tools, and therefore some of them do not use them in the most effective manner.

Traditionally, grading has been based on a final examination, but a variety of weekly homework assignments are also widely used. Thus, automatic assessment of these kinds of assignments has been seen to provide great value to large-scale courses. However, nowadays automatic assessment has a role to play with small student groups as well. For example, learners studying while they have a steady job can benefit a great deal from environments that can be accessed at any place or time.

Implementation of Algorithm Simulation Exercises

From the technical point of view, an exercise is a class that conforms to the particular Exercise interface. Basically, in order to implement the interface, one needs to define a couple of methods. First, one needs a method to determine all the data structures and their names for the visualization during the simulation of the exercise algorithm. These data structures are expected to be fundamental data types, and ready-made visualizations for them should exist. Second, a method to solve the exercise is needed when automatic assessment takes place or the model solution is otherwise requested.

An implementation of such an exercise in an actual learning environment1 is shown in Figure 4. The binary search tree exercise requires a solve method that is equal to the binary search tree insert routine. In addition, the assignment text and a couple of initializing methods (such as one that fills the array with new random data) are needed as well.

The framework is capable of visualizing the exercise by showing the assignment in a text window and by illustrating the structures needed to simulate the algorithm. By invoking the solve method, it is also capable of determining the model solution for the exercise. This model solution is then automatically compared to the submitted answer. The assessing procedure then gives feedback to the student on his or her performance. The process of creating new exercises, however, is left out of the scope of this chapter, as it is explained elsewhere (Malmi et al., 2004).

DISCUSSION

In this chapter, we have introduced a novel application framework for algorithm simulation tools. The aim is to explore the capabilities of algorithm simulation, a novel concept that fosters the interaction between the user and the visualization environments. The framework has been applied to the TRAKLA2 learning environment for data structures and algorithms courses. In addition, we have also paid attention to the process of developing new assignments in order to allow the creation of new exercises by focusing only on the algorithms needed to evaluate the student's answers.

Three main concepts can be identified within the system. First, the system is based on algorithm visualization, in which common reusable visualizations of all the well-known fundamental data types can be attached to any underlying objects within an executable run-time environment. Second, the execution of an algorithm can be memorized in order to animate the running algorithm that manipulates these objects. Third, algorithm simulation allows the user to interact with the system and to change the content of the data structures in terms of direct manipulation.

The benefits of the Matrix framework are summarized in the next section. In addition, we discuss the educational benefits provided by algorithm simulation exercises in terms of automatic assessment. Finally, we address some future research topics that remain to be studied.
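To make the Exercise interface described above concrete, the following is a minimal sketch in Python. TRAKLA2 itself is implemented in Java, and all names below are hypothetical illustrations rather than the framework's actual API: one method declares the named data structures shown during simulation, and a solve method produces the model solution, here equal to binary search tree insertion as in the Figure 4 example.

```python
# Minimal sketch of the idea behind the Exercise interface; hypothetical
# names only, the real TRAKLA2 interface (in Java) differs in detail.
from abc import ABC, abstractmethod
import random

class Exercise(ABC):
    """An exercise declares its data structures and can produce its model solution."""

    @abstractmethod
    def initial_structures(self):
        """Return the named data structures shown during algorithm simulation."""

    @abstractmethod
    def solve(self, structures):
        """Return the model solution, used for automatic assessment or on request."""

def bst_insert(node, key):
    """Ordinary binary search tree insertion, used as the model algorithm."""
    if node is None:
        return {"key": key, "left": None, "right": None}
    side = "left" if key < node["key"] else "right"
    node[side] = bst_insert(node[side], key)
    return node

class BinarySearchTreeExercise(Exercise):
    """Binary search tree insertion exercise (cf. the Figure 4 example)."""

    def initial_structures(self):
        # An initializing method fills the array with new random data.
        return {"keys": random.sample(range(100), 8), "tree": None}

    def solve(self, structures):
        # The solve method equals the binary search tree insert routine.
        root = None
        for key in structures["keys"]:
            root = bst_insert(root, key)
        return root
```

In this sketch, the framework would call `initial_structures` to know what to visualize, and `solve` whenever assessment runs or the student requests the model solution.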
Applications of Visual Algorithm Simulation
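Before turning to the framework's properties, the assessment workflow discussed above (personalized assignments plus automatic comparison against the model solution) can be sketched as follows. This is a deliberately simplified illustration with hypothetical names, not TRAKLA2's actual grading code: each student's input is derived from a stable seed, and the grade counts the leading simulation steps that match the model solution's sequence of states.

```python
# Sketch of automatic assessment with personalized exercises; a hypothetical
# simplification for illustration, not the actual TRAKLA2 grading algorithm.
import random

def personalized_input(student_id, size=8):
    """Every student gets a different assignment, derived from a stable seed."""
    rng = random.Random(student_id)  # deterministic per student
    return rng.sample(range(100), size)

def model_solution_states(keys):
    """Record the state after each step of the model algorithm (here: sorted
    insertion into a list), giving the sequence the answer is compared against."""
    data, states = [], []
    for key in keys:
        data.append(key)
        data.sort()
        states.append(tuple(data))
    return states

def assess(submitted_states, model_states):
    """Grade = number of leading simulation steps that match the model solution."""
    score = 0
    for got, expected in zip(submitted_states, model_states):
        if got != expected:
            break
        score += 1
    return score, len(model_states)
```

A correct submission scores full marks; a student whose first wrong step is step k scores k - 1, the feedback can point at that step, and the exercise can then be retried with different input data.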

Matrix

In Matrix, the fundamental idea to enhance the human–computer interaction is to allow the user to simulate algorithms; thus, he or she specifies the animation in terms of algorithm simulation. In addition, it is also possible to produce user-coded programs by reusing existing components called probes; thus, the framework also provides a set of self-animating components. We summarize the properties of the framework as follows:

1. The system makes a distinction between the implementation of an actual data structure and its visualizations;
2. Manual simulation is possible by interacting with the user interface;
3. One data structure can have many conceptual visualizations;
4. One conceptual visualization can be reused for displaying many different data structures;
5. The framework provides fast prototyping of algorithm simulation applications;
6. The framework enables visualization, algorithm animation, and algorithm simulation of user-made code without restrictions concerning object-oriented design methods; and
7. Many applications already exist, such as TRAKLA2, to automatically assess exercises in a web-based learning environment.

Automatic Assessment

Development of learning environments also requires awareness of what is pedagogically appropriate for students and teachers. In this respect, we have, for example, conducted research that shows that there is no significant difference in the learning results between students doing their homework exercises in a virtual learning environment and students attending a traditional classroom session (Korhonen, Malmi, Myllyselkä, & Scheinin, 2002). In addition, commitment to the course (low attrition) is almost equal in both cases, even though students working in the virtual environment seem to drop the course earlier than students working in the classroom. However, virtual learning environments can provide advantages in terms of Automatic Assessment (AA) which the traditional teaching methods cannot offer.

The principal benefit of AA, based on our experience, is that it enables one to set up enough compulsory exercises for the students and to give them feedback on their work on a large scale. Our experience shows that voluntary exercises do not motivate students enough to reach the required level of skills (Malmi, Korhonen, & Saikkonen, 2002). Compulsory assignments do not provide much more unless we can monitor students' progress regularly and, more importantly, give them feedback on their performance. This would be possible in weekly classroom exercises, if we only had enough classrooms, time, and instructors, which unfortunately is not the case. By means of AA, we can partly solve the feedback problem, even though we do not argue that computerized feedback would generally match expert human feedback. In some cases, however, it does, and we have conducted surveys in which the attitude towards the TRAKLA2 learning environment increases during a course (Laakso et al., 2005). Thus, in the case of simulation exercises, computerized feedback is experienced to be superior to classroom sessions due to the fact that the students can get the feedback immediately and try the exercise again with different input data.

The second benefit of AA is that the computer can give feedback at any time and place. Thus, it applies particularly well to virtual courses, and in fact, it has turned most of our courses partially virtual. However, we do have lectures and classroom sessions as well, but the students do most of their self-study on the Web.

The third benefit is that we can allow the students to revise their submissions freely based on the feedback (Malmi & Korhonen, 2004). By
setting up the assignments in such a way that the pure trial-and-error method is no option, the student is pushed to rethink his or her solution anew, and to make hypotheses about what is wrong. This is one of the conventions that constructivism (Phillips, 2000) suggests we should put into effect. Moreover, it should be noted that in classroom sessions, such a situation is rarely achieved for every student, since the teacher cannot follow and guide the work of each individual at the same time.

The fourth benefit is that we can personalize exercises, i.e., all students have different assignments, which discourages plagiarism and encourages natural co-operation while solving problems. With human teaching resources, it is practically impossible to achieve this in large courses.

Finally, AA tools save time and money. We feel, however, that they represent an extra resource that helps us to teach better. Moreover, they allow us to direct human resources to work which cannot be automated, for instance, personal guidance and feedback for more challenging exercises such as design assignments.

The Future

At the moment, we feel that all the capabilities the framework supports are only a very small subset of all those features possible to include. Many new ideas remain and are discussed very briefly here. Moreover, several open research problems are still to be solved. Thus, in the near future at least the following tasks should be studied:

1. Enhanced recording facilities for algorithm simulation. Especially, being able to exchange algorithm animations among different animation systems is one of the future challenges.
2. Further customization of representations. The current system is aimed at small examples. However, in order to widen the applicability, e.g., to software engineering in general, also large examples must be supported.
3. More research on program animation and simulation capabilities. The current framework does not support the program code to be executed (as in debuggers) with the simulation or animation.
4. Easy creation of new exercises in the TRAKLA2 environment. This includes exercises that can be easily created and incorporated into one's own course web pages or electronic exercise books without the intervention of developers.

The primary research topics include those aiming at completing the work needed to create the electronic exercise book and the learning environment. On the other hand, the scope and content of the new system should provide a generalized platform-independent framework that is capable of visualizing even large examples. The system should be rich enough to provide flexibility to develop the framework and its applications further. For example, elision control is essential, since its value is emphasized especially in large examples. The application framework should also provide tools for developing proper user interfaces for new applications. Nevertheless, the effectiveness of such systems cannot be evaluated until an application is actually implemented. Thus, the effectiveness of the new framework has been beyond the scope of this chapter due to its subjective nature, but should be promoted in future studies. Finally, the framework itself could be developed further to provide an even more flexible environment for software visualization.

Of course, developing the visualization techniques is only half of the whole potential for new research topics. The other side is the evaluation of the applications from the learning point of view. For example, it is an open question whether the TRAKLA2 system should limit the learner from submitting his answers as many times as one
pleases. The first version of TRAKLA did limit the submissions, but the current TRAKLA2 does not. However, the preliminary results (Malmi, Karavirta, Korhonen, & Nikander, 2005) are somewhat mixed, and we have considered limiting resubmissions in the future. This, however, requires more research before we can argue which option leads to better learning.

REFERENCES

Astrachan, O., & Rodger, S. H. (1998). Animation, visualization, and interaction in CS1 assignments. In Proceedings of the 29th SIGCSE technical symposium on computer science education (pp. 317-321), Atlanta, GA. New York: ACM Press.

Astrachan, O., Selby, T., & Unger, J. (1996). An object-oriented, apprenticeship approach to data structures using simulation. In Proceedings of frontiers in education (pp. 130-134).

Baecker, R. M. (1998). Sorting out sorting: A case study of software visualization for teaching computer science. In M. Brown, J. Domingue, B. Price, & J. Stasko (Eds.), Software visualization: Programming as a multimedia experience (pp. 369-381). Cambridge, MA: The MIT Press.

Baker, R. S., Boilen, M., Goodrich, M. T., Tamassia, R., & Stibel, B. A. (1999). Testers and visualizers for teaching data structures. In Proceedings of the 30th SIGCSE technical symposium on computer science education (pp. 261-265), New Orleans, LA. New York: ACM Press.

Bäsken, M., & Näher, S. (2001). GeoWin: A generic tool for interactive visualization of geometric algorithms. In S. Diehl (Ed.), Software visualization: International seminar (pp. 88-100). Dagstuhl, Germany: Springer.

Battista, G. D., Eades, P., Tamassia, R., & Tollis, I. (1999). Graph drawing: Algorithms for the visualization of graphs. Upper Saddle River, NJ: Prentice Hall.

Benford, S., Burke, E., Foxley, E., Gutteridge, N., & Zin, A. M. (1993). Ceilidh: A course administration and marking system. In Proceedings of the 1st international conference of computer based learning, Vienna, Austria.

Boroni, C. M., Eneboe, T. J., Goosey, F. W., Ross, J. A., & Ross, R. J. (1996). Dancing with Dynalab. In 27th SIGCSE technical symposium on computer science education (pp. 135-139). New York: ACM Press.

Bridgeman, S., Goodrich, M. T., Kobourov, S. G., & Tamassia, R. (2000). PILOT: An interactive tool for learning and grading. In Proceedings of the 31st SIGCSE technical symposium on computer science education (pp. 139-143). New York: ACM Press. Retrieved from http://citeseer.ist.psu.edu/bridgeman00pilot.html

Brown, M. H. (1988). Algorithm animation. Cambridge, MA: MIT Press.

Brown, M. H., & Hershberger, J. (1992). Color and sound in algorithm animation. Computer, 25(12), 52-63. doi:10.1109/2.179117

Brown, M. H., & Raisamo, R. (1997). JCAT: Collaborative active textbooks using Java. Computer Networks and ISDN Systems, 29(14), 1577-1586. doi:10.1016/S0169-7552(97)00090-1

Cattaneo, G., Italiano, G. F., & Ferraro-Petrillo, U. (2002, August). CATAI: Concurrent algorithms and data types animation over the internet. Journal of Visual Languages and Computing, 13(4), 391-419. doi:10.1006/jvlc.2002.0230

English, J., & Siviter, P. (2000). Experience with an automatically assessed course. In Proceedings of the 5th annual SIGCSE/SIGCUE conference on innovation and technology in computer science education, ITiCSE'00 (pp. 168-171), Helsinki, Finland. New York: ACM Press.

Fleischer, R., & Kucera, L. (2001). Algorithm animation for teaching. In S. Diehl (Ed.), Software visualization: International seminar (pp. 113-128). Dagstuhl, Germany: Springer.
Gloor, P. A. (1998). User interface issues for algorithm animation. In M. Brown, J. Domingue, B. Price, & J. Stasko (Eds.), Software visualization: Programming as a multimedia experience (pp. 145-152). Cambridge, MA: The MIT Press.

Grillmeyer, O. (1999). An interactive multimedia textbook for introductory computer science. In The proceedings of the thirtieth SIGCSE technical symposium on computer science education (pp. 286-290). New York: ACM Press.

Hansen, S. R., Narayanan, N. H., & Schrimpsher, D. (2000, May). Helping learners visualize and comprehend algorithms. Interactive Multimedia Electronic Journal of Computer-Enhanced Learning, 2(1).

Henry, R. R., Whaley, K. M., & Forstall, B. (1990). The University of Washington illustrating compiler. In Proceedings of the ACM SIGPLAN'90 conference on programming language design and implementation (pp. 223-233).

Higgins, C., Symeonidis, P., & Tsintsifas, A. (2002). The marking system for CourseMaster. In Proceedings of the 7th annual conference on innovation and technology in computer science education (pp. 46-50). New York: ACM Press.

Hundhausen, C., & Brown, J. (2005). What you see is what you code: A "radically-dynamic" algorithm visualization development model for novice learners. In Proceedings of the IEEE 2005 symposium on visual languages and human-centric computing.

Hundhausen, C. D., Douglas, S. A., & Stasko, J. T. (2002, June). A meta-study of algorithm visualization effectiveness. Journal of Visual Languages and Computing, 13(3), 259-290. doi:10.1006/jvlc.2002.0237

Hyvönen, J., & Malmi, L. (1993). TRAKLA – a system for teaching algorithms using email and a graphical editor. In Proceedings of hypermedia in Vaasa (pp. 141-147).

Jackson, D., & Usher, M. (1997). Grading student programs using ASSYST. In Proceedings of the 28th ACM SIGCSE symposium on computer science education (pp. 335-339).

Kabal, P. (1993). TEXdraw – PostScript drawings from TEX. Retrieved from http://www.tau.ac.il/cc/pages/docs/tex-3.1415/texdraw_toc.html

Karavirta, V., Korhonen, A., Malmi, L., & Stålnacke, K. (2004, July). MatrixPro – A tool for on-the-fly demonstration of data structures and algorithms. In Proceedings of the third program visualization workshop (pp. 26-33). Warwick, UK: Department of Computer Science, University of Warwick.

Korhonen, A., & Malmi, L. (2000). Algorithm simulation with automatic assessment. In Proceedings of the 5th annual SIGCSE/SIGCUE conference on innovation and technology in computer science education, ITiCSE'00 (pp. 160-163), Helsinki, Finland. New York: ACM Press.

Korhonen, A., Malmi, L., Myllyselkä, P., & Scheinin, P. (2002). Does it make a difference if students exercise on the web or in the classroom? In Proceedings of the 7th annual SIGCSE/SIGCUE conference on innovation and technology in computer science education, ITiCSE'02 (pp. 121-124), Aarhus, Denmark. New York: ACM Press.

Laakso, M.-J., Salakoski, T., Grandell, L., Qiu, X., Korhonen, A., & Malmi, L. (2005). Multi-perspective study of novice learners adopting the visual algorithm simulation exercise system TRAKLA2. Informatics in Education, 4(1), 49-68.

Lönnberg, J., Korhonen, A., & Malmi, L. (2004, May). MVT — a system for visual testing of software. In Proceedings of the working conference on advanced visual interfaces (AVI'04) (pp. 385-388).
Malmi, L., Karavirta, V., Korhonen, A., & Nikander, J. (2005, September). Experiences on automatically assessed algorithm simulation exercises with different resubmission policies. Journal of Educational Resources in Computing, 5(3). doi:10.1145/1163405.1163412

Malmi, L., Karavirta, V., Korhonen, A., Nikander, J., Seppälä, O., & Silvasti, P. (2004). Visual algorithm simulation exercise system with automatic assessment: TRAKLA2. Informatics in Education, 3(2), 267-288.

Malmi, L., & Korhonen, A. (2004). Automatic feedback and resubmissions as learning aid. In Proceedings of the 4th IEEE international conference on advanced learning technologies, ICALT'04 (pp. 186-190), Joensuu, Finland.

Malmi, L., Korhonen, A., & Saikkonen, R. (2002). Experiences in automatic assessment on mass courses and issues for designing virtual courses. In Proceedings of the 7th annual SIGCSE/SIGCUE conference on innovation and technology in computer science education, ITiCSE'02 (pp. 55-59), Aarhus, Denmark. New York: ACM Press.

Mason, D. V., & Woit, D. M. (1999). Providing mark-up and feedback to students with online marking. In The proceedings of the thirtieth SIGCSE technical symposium on computer science education (pp. 3-6), New Orleans, LA. New York: ACM Press.

Miller, B. P. (1993). What to draw? When to draw? An essay on parallel program visualization. Journal of Parallel and Distributed Computing, 18(2), 265-269. doi:10.1006/jpdc.1993.1063

Myers, B. A., Chandhok, R., & Sareen, A. (1988, October). Automatic data visualization for novice Pascal programmers. In IEEE workshop on visual languages (pp. 192-198).

Naps, T. L., Rößling, G., Almstrum, V., Dann, W., Fleischer, R., & Hundhausen, C. (2003, June). Exploring the role of visualization and engagement in computer science education. SIGCSE Bulletin, 35(2), 131-152. doi:10.1145/782941.782998

Phillips, D. C. (2000). Constructivism in education: Opinions and second opinions on controversial issues. 99th yearbook of the National Society for the Study of Education (Part 1). Chicago: The University of Chicago Press.

Pierson, W., & Rodger, S. (1998). Web-based animation of data structures using JAWAA. In Proceedings of the 29th SIGCSE technical symposium on computer science education (pp. 267-271), Atlanta, GA. New York: ACM Press.

Price, B. A., Baecker, R. M., & Small, I. S. (1993). A principled taxonomy of software visualization. Journal of Visual Languages and Computing, 4(3), 211-266. doi:10.1006/jvlc.1993.1015

Reek, K. A. (1989). The TRY system or how to avoid testing student programs. In Proceedings of SIGCSE '89 (pp. 112-116). New York: ACM Press. doi:10.1145/65294.71198

Rößling, G., Schüler, M., & Freisleben, B. (2000). The ANIMAL algorithm animation tool. In Proceedings of the 5th annual SIGCSE/SIGCUE conference on innovation and technology in computer science education, ITiCSE'00 (pp. 37-40), Helsinki, Finland. New York: ACM Press.

Saikkonen, R., Malmi, L., & Korhonen, A. (2001). Fully automatic assessment of programming exercises. In Proceedings of the 6th annual SIGCSE/SIGCUE conference on innovation and technology in computer science education, ITiCSE'01 (pp. 133-136), Canterbury, UK. New York: ACM Press.

Stasko, J. T. (1991). Using direct manipulation to build algorithm animations by demonstration. In Proceedings of the conference on human factors and computing systems (pp. 307-314), New Orleans, LA.
Stasko, J. T. (1997). Using student-built algorithm animations as learning aids. In The proceedings of the 28th SIGCSE technical symposium on computer science education (pp. 25-29), San Jose, CA. New York: ACM Press.

Stasko, J. T. (1998). Building software visualizations through direct manipulation and demonstration. In M. Brown, J. Domingue, B. Price, & J. Stasko (Eds.), Software visualization: Programming as a multimedia experience (pp. 103-118). Cambridge, MA: MIT Press.

KEY TERMS AND DEFINITIONS

Software Visualization: an active research field in software engineering that uses graphics and animation to illustrate different aspects of software.

Algorithm Visualizations: abstract graphical representations that provide an insight into how small sections of code (algorithms) compute.

Algorithm Animation: the process of visualizing the behavior of an algorithm by abstracting the data, operations, and semantics of the algorithm and creating dynamic graphical views of those abstractions.

Direct Manipulation: a human-computer interaction style which involves continuous representation of objects of interest, and rapid, reversible, incremental actions and feedback (Wikipedia).

Visual Algorithm Simulation: direct manipulation which not only allows actions at the level of the representation, but also alters the real underlying implemented data structures.

Automatic Assessment: a computer-aided grading method for various types of exercises.

Algorithm Simulation Exercises: simulation exercises solved by means of visual algorithm simulation, which are typically graded in terms of automatic assessment.

ENDNOTE

1. at http://svg.cs.hut.fi/TRAKLA2/
Chapter 12
Virtual Reality:
A New Era of Simulation and Modelling
Ghada Al-Hudhud
Al-Ahlyia Amman University, Amman, Jordan

ABSTRACT

The chapter introduces a modern and advanced view of virtual reality systems and their implementations, considering VR systems as tools that can be used to alter the information perceived from the real world and to allow information to be perceived from a virtual world. Virtual reality grounds the main concepts for interactive 3D simulations. The chapter emphasizes the use of 3D interactive simulations through virtual reality systems in order to enable designers to operationalize theoretical concepts for empirical studies. This emphasis takes the form of presenting recent case studies that employ VR systems. The first case emphasizes the role of realistic 3D simulation in a virtual world for the purpose of pipelining the production of complex systems for engineering applications. This requires highly realistic simulations, which involve realism of both object appearance and object behaviour in the virtual world. The second case emphasizes the evolution from the realism of virtual reality towards additional reality. Coupling interactions between virtual and real worlds is an example of using a VR system to allow human operators to interactively communicate with a real robot through the VR system. The robots and the human operators are potentially at different physical places. This allows 3D-stereoscopic robot vision to be transmitted to any or all of the users and operators at the different sites.

VIRTUAL REALITY: OVERVIEW

Virtual reality is commonly known as a 3D projection of a seemingly real image of the mind. This world of 3D projections is believed to alter what the human senses perceive.

Virtual Reality (VR) has become widely known only in the past few years, but VR has a history dating back to the 1950s. VR's roots started as an idea that would improve the way people interacted with computers. In the 1960s, Douglas Engelbart introduced the use of computers as tools for digital display. Inspired by his past experience as a radar
DOI: 10.4018/978-1-60566-774-4.ch012

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

engineer, he proposed that any digital information could be viewed on a screen connected to the computer: data visualisation. In addition, by the early 1960s, communications technology was intersecting with computing and graphics technology. This intersection led to the emergence of virtual reality.

The emergence of VR served a diversity of scientific uses. For example, it is cheaper and safer to train pilots on the ground before a real flight. Flight simulators were one of the earliest applications of virtual reality and were built on motion platforms.

COMPUTER VISUALIZATION AND ANIMATION

Computer visualization is being thought of as a new way for scientists to evaluate and explore their data through an interactive human interface. Because of the VR evolution, advances in graphic display and high-end computing have made it possible to transform data into interactive, three-dimensional images. Hence, all the display and feedback devices that make this possible are termed Virtual Environments. Considering the various possible goals of the virtual environments, each provides an easier way for scientists to interact with computers. Interactivity was the mainstay for scientists, the military, business, and entertainment.

Therefore, the advances in VR systems were also accompanied by an evolution in computer visualization to comply with the demand for interactivity. To this end, scientific visualization identified the imagery concept, that is, transforming the numerical data into images using computer graphics. Scientists can benefit from this conceptual view for compiling the enormous amount of data required in some scientific investigations. Examples of these scientific investigations are fluid dynamics, nuclear reactions, communication models or cosmic explosions.

By the early 1980s, having created many of the special effects techniques for visualised data, scientific visualization moved into animation. Animation was a compelling testament to the value of this kind of imagery into the 1990s. Meanwhile, animation had severe limitations, concerning both cost and lack of interactivity. Once an animation was completed, changes in the data or in the conditions governing an experiment could not alter the responses in the imagery. In this context, VR manifests the experience of transforming the data into 3D images through various display devices. This implies building a computer-generated "virtual" world that possesses the following features: a) realism, b) immersion and c) presence. These features provide support for visualising, hearing or even touching objects inside this computer-generated environment.

Realism

The ever-increasing evolution in computer graphical technologies helped in the rapid change towards using 3D simulations in testing theoretical models. This is important because the lack of the realism factor in 2D simulation leads to the production of an incomplete view of the simulated world. Realism can be defined as geometrical realism and behavioural realism. The former describes to what extent the simulated world has a close appearance to the representation of the real world (Slater, Steed & Chrysanthou, 2002). The more realistic an object's representation is, the more realistic the views the user gets. The latter implies the existence of behavioural signs that indicate interactive responses which cannot be caused by geometric realism, but only by realistic responses and behaviour.

Realism is considered to be important in order to grasp and get a reasonable sense of dimensions, spaces, and interactions in the simulated world. Therefore, for more realistic representations of objects in theoretical models and the virtual world, as well as their behaviours in the simulated world,
the 3D-simulation and visualisation tools are of great importance.

Immersion

The feeling of being there: that is the common description of being immersed in a virtual reality environment. The user switches off all the communication channels with the real world and receives the information from the virtual world. This feeling varies depending on the type of the VR system.

Presence

Unlike the animated scenes in recorded movies, interactive virtual reality systems improve the overall feeling of being inside the simulated world. This is considered of great help, especially for visualising a system's behaviours in the case of theoretical models. This is due to the ability to provide the user with suitable means to fly and walk around the objects, for example to monitor what is happening behind a wall or under a table. In addition, the user can zoom into the scene when a precise local evaluation is needed, or zoom out when a global view is required.

Presence is one of the key outputs of the VE visualization. The reason for this is that presence in the simulated environment plays a major role in designing the experiments in order to integrate all the visual outputs. Both aspects, presence and immersion, require a high level of realism in the simulation and a certain minimum level of quality for display.

WHY VR SYSTEMS

VR is a very expensive technology, and has emerged within institutional contexts: NASA, military researchers, video game manufacturers, universities, software and hardware developers, and entertainment industries. However, whereas VR has been seen as a medium used within physical reality, it is nowadays viewed as an additional reality. VR opens up a broad area of ideas and possibilities, as VR systems support both simulation and modelling.

The nature of virtual reality technology also brings up related issues about simulation. Simulation, as an efficient tool for analysing the properties of a theoretical model, can be practically used for realistic representation of these models. Due to the complexity of designing these theoretical models, simulating the model in a virtual world can reduce this complexity by building and testing the system components separately. This step-by-step building strategy allows the scientists to test and evaluate individual components during the development stages.

In order to benefit most from VR simulations before creating physical implementations, the simulation must give an insight into the operational requirements. Such requirements include the size of the models and the space they move in. Therefore, visualising the simulation of the theoretical model within a virtual environment (VE) can actually be more efficient and functional than within conventional simulations. The realism of the simulation requires real-time simulation tools, i.e. no pre-computations. Loscos et al. (Loscos, Marchal & Meyer, June 2003) described a real-time crowd behaviour simulation that allowed for the rendering of the scene and the individuals, as well as simulating the crowd behaviours. The system described by Loscos et al. is an individual-based model, i.e. it concentrates on the microscopic view of the system, and at the same time it does not use AI techniques. In addition, it allows the emerging characteristics to be empirically defined, tested and explained by considering both quantitative and qualitative outputs. The performance of a model can be evaluated and assessed within different simulated environments. This implies that entities within these theoretical models can be exposed to a variety of different tasks; performance can be
tested in different virtual environments without an excessive amount of development time.


TYPES OF COMMON VR SYSTEMS

Commonly used VR systems vary according to the viewing tools (e.g. lenses, glasses) and display devices that add depth to flat images. A common aspect of VR systems is that they should help the user switch off the communication channels with the real world and perceive only the information received from the virtual world. Examples of these systems are described below.

Head-Mounted Display

The user looks inside a motorcycle-like helmet containing two lenses. Movement tracking is obtained by a device on top of the helmet that reports head-movement signals relative to a stationary tracking device. Datagloves and wands are the most common interface devices used with head-mounted displays. The tracking devices are attached to the glasses or lenses and relay the viewer's position to the computer. The computer re-computes the image to reflect the viewer's new perspective, and then continually updates the simulation according to that new perspective. One disadvantage of head-mounted displays is that they weigh several pounds.

The Binocular Omni Orientation Monitor (BOOM)

The BOOM is a head-mounted system without a helmet. The BOOM's elements are the viewing box and eyeglasses. The user places the forehead against two eyeglasses in the viewing box. To change the perspective on an image, one can grab the handles on the side of the viewing box and move around the image. The control buttons on the handles usually serve as the interface, as can datagloves or other interface devices.

The CAVE

One of the newest, most "immersive" virtual environments is the CAVE. The CAVE is a 10x10x9 foot darkened room in which data are projected as stereoscopic images onto the walls and floor so that they fill the room, providing a greater sense of being immersed in the data. The CAVE possesses two advantages over other VR systems. The first is that no helmets or viewing boxes are needed; only a pair of glasses and a wand is required. The second is its large field of view.

Presagis

Presagis, previously known as MultiGen-Paradigm, is recognised as an advanced VR system for simulation and modelling. Presagis offers Creator, an API and Vega Prime, in addition to other products. Creator is specifically designed for real-time 3D simulation, producing OpenFlight models, see figure 1. Creator speeds up the modelling process and increases productivity by enabling highly detailed 3D models and digital terrains to be created easily and effectively. Vega Prime, with its cross-platform, scalable environment, is a productive toolkit for real-time 3D application development and deployment. Vega Prime provides the framework needed to configure, create, and deploy advanced simulation applications, and, because it allows for the easy integration of new or existing code, it offers cost and time savings by improving re-usability.

For the stereoscopic display of the scene in a full scale VE, a distributed version of the desktop model can be produced through a cluster of six PCs generating six synchronised images. These six images are projected onto an 8x2 metre, 145 degree curved screen (giving a 3Kx1K resolution) using six different projectors. Recall that the number of views and cameras are controllable via
Figure 1. Presagis system architecture: Vega Prime system

the user interface, and synchronously displayed on the screen.


HOW DO VIRTUAL REALITY SYSTEMS PROVIDE A 3D EFFECT

To get a 3D effect, the computer projects stereoscopic images whilst it controls the lenses in the viewing (stereo) glasses in synchronisation with the images being flashed on the screen. VR systems are conceptually similar in that they are based on projecting two slightly different computer-generated images on the screen: one representing the view seen through the right eye, the other the view through the left eye. These two images are then fused by the human brain into one stereo image. Some images are accompanied by sounds mapped to the same data driving the imagery.

A virtual roller coaster ride is much more compelling when you can hear the sounds, feel the air rushing past your face, and hear the riders' screams. But beyond providing ambiance, sounds can reveal fine features in the data that are not easily captured in images, such as speed and frequency. Recently, researchers in the field have worked on feeling the virtual world through exploration, such as the resistance of a weight whilst pulling it up an inclined surface; new input devices such as stair steppers and other tactile devices will allow users to do so.


CURRENT IMPLEMENTATION OF VES

3D simulations not only need to mimic the real physical spaces but also need to support real-time visualisation and monitoring (Tecchia and Chrysanthou, 2001). This is essential for a large scale interactive virtual environment application. A recent example of an interactive virtual environment system was presented in (Cairo, Aldeco & Algorri, 2001), where a virtual museum assistant takes the tourists on a virtual tour and interactively answers questions.

Other examples of current utilisation of full scale 3D semi-immersive simulations include simulating an urban space environment, an interior architectural environment, and simulating behavioural models for training purposes. Examples of recent 3D immersive environments that have been used successfully to develop group behaviour models and perception models are: simulating the movements of crowds (Tang, Wan & Patel, 2003), simulating a fire evacuation system (Li, Tang & Simpson, 2004), intelligent transportation systems (Wan and Tang, 2004), urban behaviour development (Tecchia, Loscos, Conroy & Chrysanthou, 2003), and an intelligent vehicle model for 3D visual traffic simulation (Wan and Tang, 2003). These examples have shown that immersive 3D simulation VEs support training purposes as they feature presence, realism, and immersion, which all add value to the simulation.

Virtual prototypes and product evaluation is another application that exploits virtual environment technologies. It propagates the idea of computer-based 3D simulations of systems with a degree of functional realism. Accordingly, virtual prototypes are used for testing and evaluating specific characteristics of the product design. This can guide the product design from idea to prototype, which helps to address the engineering design concerns of the developer, the process concerns of the manufacturer, the concerns of
the maintainer, and the training and programmatic concerns of the operation. Simulations of these systems are being developed to enable the creation of a variety of realistic operational environments. Virtual prototypes can be tested in this simulated operational environment during all the development stages. Once a virtual prototype is approved, design and manufacturing tradeoffs can be conducted on the virtual prototype to enhance productivity and reduce the time required to develop a physical prototype, or even to go directly to the final product.

Another useful area for real-time semi-immersive simulations is controlling the movements of mobile physical robots and assessing the individual behaviour in a robotic application (Navarro-Serment, Grabowski, Paredis & Khosla, December 2002). An example is presented in (Leevers, Gil, Lopes, Pereira, Castro, Gomes-Mota, Ribeiro, Gonçalves, Sequeira, Wolfart, Dupourque, Santos, Butterfield & Hogg, 1998), which provides a working model of an autonomous environmental sensor, where the robot tours the inside of a building and automatically creates a 3-D telepresence map of the interior space. An example that presents how virtual reality helps in handling the connection of the various tools and the communication between the software and the hardware of the robot is presented in (Rohrmeier, 1997). Simulating such systems requires real-time visualisation, a high level of geometrical realism in representing the world and the moving objects, and support for the specifics of the application. For example, in order for a set of simulated robots, representing a set of autonomous mobile robots, to build their knowledge system they need to perceive the environment. Accordingly, specifying a sensing device and sensing techniques is essential. This cannot be achieved by implementing point-mass objects. Alternatively, the computations need to be dependent on the sensing devices and rely on real data gathered from the world.

To summarise, the use of large scale visualisation tools in recent published work has included the areas of assisting architectural design, urban planning, safety assessment, and virtual prototyping. A large-scale virtual environment has been used in evaluating robotic products and applications, for simulating and testing the hardware whilst producing single-robot applications. Using a 3D full-scale semi-immersive environment for simulating communication and co-operation between multiple robots has not yet been fully covered.


CASE STUDY 1: REALISM OF VIRTUAL REALITY

This section presents a case study of using a VR system to produce a communication model that controls movements and task assignment for a set of multiple mobile robots. The proposed theoretical model is based on multi-agent system (MAS) architectures. The MAS communication model classifies the communications for the set of robots into two levels. The first level is the local communications for interactive movement co-ordination; this communication level adopts the flocking algorithm to mimic animal intelligence. The second level tackles the interactive task assignment and, as a global communication level, ensures keeping the human operator in the loop. The global communication is obtained through a virtual blackboard.

The role of VR in this study was to emphasise the human operator's collaboration with a set of virtual robots through a virtual blackboard. The VR system used for modelling and simulating this model was Presagis, installed at the Virtual Reality Centre at De Montfort University1. Inspired by the available Presagis tools, the model was built in both a desktop version and a full scale immersive version. For both versions, the major anticipations of using 3D simulations were:

•	3D immersive simulations can support realism in representing the simulated world
Figure 2. Rotating sensor

	in order to sense the dimensions and spaces; these terms are referred to as geometrical realism.
•	3D immersive simulations can support the specifics of the simulations, for example simulating the sensors.
•	They allow user interaction with the simulated world at interactive rates, referred to as behavioural presence.
•	3D immersive simulations must allow for real-time visualisation of a large number of robots displayed simultaneously, referred to as scalability.

The realism of this simulation is critical in ensuring that the algorithm implemented during the simulation can be feasibly transferred to a real robotic application. Simulating a MAS model as a geometrical physical model implies that we have to represent the environment and the robots with geometrical shapes, which is termed geometrical realism. At the same time, it is believed that realism is important to gain an increased intuitive understanding of the robots' behaviours inside the VE. Therefore, great attention is paid to the behavioural representation, which is termed behavioural realism.

Realism of Simulated Agent's Virtual Vision System

Another issue that is required in a real-time 3D simulation is the ability to focus on the specifics of the simulation, e.g. simulating the agent's vision system as a laser sensor. This aims at giving a direct link to the implementation of a real-life robot's sensor.

The agent's sensor is simulated as a fixed line segment at a defined angle, centred at the agent's location but starting just outside of the robot's physical space, see figure 2. The length of the line segment determines the laser sensor range. The sensor can rotate 360 degrees, quantized into 18 degree intervals, and therefore it detects one of 20 different locations at each time interval (τ).

Figure 3 shows that at each location the sensor detects only one object. In this case, the sensor will only detect the first one hit (that is object A in figure 3).

The range of the agent's sensor is characterised by the length of this horizontal line segment. The visualisation, whether on a desktop or using the semi-immersive projection, has helped enormously in adjusting the sensor range during the very early development stages. This is aided by visually showing the sensor with colour coding. This horizontal sensor, fired from the agent towards the environment, returns the distance and the intersection point of any detected object.

In the 3D simulation, a user can adjust the influence of the local communications with nearby robots by controlling the characteristics of the sensor in the simulation. Previous implementations of the flocking algorithms in a VE often lacked the tools to simulate a sensor that mimics real world sensors. Therefore, a goal at the very early stages of evaluation involved adjusting the range within which an agent needs to be aware of the surroundings by setting a sensible sensor range, see section 8.4.3. In order to adjust the sensor range, it is important to consider the effect of detecting
Figure 3. The sensor renders green once it hits an object; Object A is detected but B is not

a relatively high number of robots during each sensor sweep, and of visualising the influence of the interaction rules on the agent's progress.

Realism of Robots' Interactions: Behavioural Realism

Behavioural realism involves visualising believable behaviours that imitate real-life reactions between robots, as a result of the interaction between their cognition units (an agent's beliefs and intentions). Recent research hypothesised that behaviour realism within a simulation is important to aid believability (Technical Report, 2004), (Chrysanthou, Tecchia, Loscos & Conroy, 2004).

The creation of a believable representation of robots together with believable behaviours results in a more realistic simulation. Therefore, the main concern was to produce a behaviour model consistent with the physical appearance of the virtual robots, so that the virtual robot could be believed to behave as it would operate in the real world. The proposal aimed at implementing the three different behavioural models, section 8.4, and the main goal of this stage was to quickly evaluate each model, cutting down the time required to change parameters for optimisation purposes and as an illustration to future users.

The evaluation involved running the different models on both the desktop and the semi-immersive VR system. The feedback from the observers was of great importance. For example, when one of the models was demonstrated to groups of students, they expressed the view that robots do not all communicate all the time, which meets our expectation for this model. Moreover, groups of students were able to describe the actions as if they were describing real actions. For example, the virtual robots do feel the objects in the environment: virtual robots do not go through the walls, or through each other. This is because the virtual perception system was modelled properly, allowing objects to feel the other objects, walls and robots. In addition, when running the simulation the students expressed their feeling that the human operator (the author) could speak to the robots and could ask them to go somewhere very specific within the environment. They expressed an impression that the robot-user interaction demonstrates a 'real time response'.

This section has shown that realism does not only mean designing and building physically believable virtual models, but also embodying them with believable and realistic behaviours.
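The perception behaviour the students observed, robots "feeling" walls and each other, rests on the simulated laser sensor described earlier: a 360 degree sweep quantized into 18 degree intervals (20 discrete directions), reporting only the first object hit within its range. A minimal sketch of such a sensor follows; the circle-shaped obstacles and all names are illustrative assumptions, not the chapter's actual implementation.

```python
import math

# Sketch of a rotating ray sensor: 360 degrees in 18-degree steps gives
# 20 discrete directions; each ray reports only the first obstacle hit
# within the sensor range. Obstacles are modelled as circles (cx, cy, r),
# which is an assumption made for this illustration.

SENSOR_STEPS = 20   # 360 / 18 = 20 discrete directions per sweep
STEP_DEG = 18

def first_hit(origin, angle_deg, sensor_range, obstacles):
    """Return (distance, obstacle) for the nearest obstacle hit by the ray,
    or (None, None) if nothing lies within the sensor range."""
    ox, oy = origin
    dx = math.cos(math.radians(angle_deg))
    dy = math.sin(math.radians(angle_deg))
    best_dist, best_obj = None, None
    for (cx, cy, r) in obstacles:
        t = (cx - ox) * dx + (cy - oy) * dy    # centre projected onto the ray
        if t <= 0:
            continue                            # obstacle is behind the agent
        d = math.hypot(cx - (ox + t * dx), cy - (oy + t * dy))
        if d > r:
            continue                            # the ray misses this circle
        hit = t - math.sqrt(r * r - d * d)      # distance to the near surface
        if hit <= sensor_range and (best_dist is None or hit < best_dist):
            best_dist, best_obj = hit, (cx, cy, r)
    return best_dist, best_obj

def sweep(origin, sensor_range, obstacles):
    """One full sweep: the first hit (if any) in each of the 20 directions."""
    return [first_hit(origin, i * STEP_DEG, sensor_range, obstacles)
            for i in range(SENSOR_STEPS)]

# Two obstacles on the same ray (angle 0): only the nearer one is reported,
# mirroring figure 3, where object A masks object B.
hits = sweep((0.0, 0.0), 10.0, [(5.0, 0.0, 1.0), (8.0, 0.0, 1.0)])
dist, obj = hits[0]
```

Because each ray stops at the first intersection, a nearer object masks a farther one on the same bearing, which is exactly the masking behaviour shown in figure 3.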
This helps the user to believe or disbelieve actions and provides suitable tools to interact with the robots.

User's Interaction with the Robots in a VE: Behavioural Presence

Using Presagis, and for both the desktop and the full scale VEs, the user can:

1. 	Interactively choose a path to move and fly around any object in the environment.
2. 	Choose different starting points when issuing commands.
3. 	Initiate new tasks or simply define new targets and monitor, closely, the robots' reactions and changing behaviours.
4. 	Issue a new command for a new team, initialised by, for example, grouping two existing teams together.
5. 	Interactively monitor the robots' responses. This improves the user's impression of being immersed in what is seen.

Unlike the animated scenes in recorded movies, these interactive aspects improve the overall feeling of being inside the simulated world. This implies providing the user with the ability to fly and walk around the objects, for example to monitor what is happening behind a wall or under a table. In addition, the user can zoom into the scene when a precise local evaluation is needed, or zoom out when a global view is required, Figures 6 and 7.

Presence is one of the key outputs of the VE visualization, and presence in the simulated environment played a major role in designing the experiments described below. A key aspect in the experimentation process is to assess robots' behaviour by integrating all the visual outputs. Both aspects, presence and immersion, required a high level of realism in the simulation and a certain minimum level of quality for display.

The VE as a Testbed

The 3D simulation tools, introduced above, have been implemented in two versions: a desktop VE and a semi-immersive projection VE. The former implies displaying the 3D simulation at a desktop size, whereas the latter implies that the simulated world is displayed on a big screen. In addition, when running the visual simulation, all numerical information exchanged between robots, and between robots and the human operator, is recorded. These numerical values represent the paths followed by all robots (x and y positions), the heading angle, the new heading, as well as all the interaction weights.

The Desktop VE Version

The desktop display is used for verifying the built components during most of the design stages. The software environment used for the real-time visual simulations includes a user interface and the programming environment. This simulation interface provides the user with the ability to manage the set up, e.g. the appearance of the complete scene, including the number of view points needed and the level of detail required. There is also complete flexibility of the viewing location and the direction being monitored, and any action can be viewed with multiple views. The interface enables the user to: a) control the dimensions of the environment, i.e. the sizes of the rooms and corridors, and b) define and test, interactively, the size of the target and the target model.

The Immersive Full Scale VE Version

This large scale visualization allows robots' movements and interaction inside the virtual world to be projected on a big screen, enabling multiple users to visualize and assess the simulation at once. Figure 5 shows an abstract version of the VEC at De Montfort University. The space in front of the screen can host up to 20 users at once and allows
Figure 4. A screenshot: photograph taken while running the large scale VR system for a group of students for evaluation

for collaborative analysis to take place between experts from various fields to monitor, discuss and control a simulation. This has been used for group analysis, and for information presentations in the form of demonstrations, see figure 4. The demonstrations, aimed at displaying different models and versions, have been used to obtain feedback from non-experts, i.e. undergraduate students from different disciplines.

Advantages of Immersive Large Scale Simulation

Because of the additional features that can be obtained using a semi-immersive full-scale environment VE, the big advantage over a pure simulation is the building of an environment that mimics, admittedly to a different quality level, the real physical space, see Figure 8. This full-scale visualisation enables us to test different situations with different scales and dimensions, and can enhance the sense of the dimensions and sizes of

Figure 5. Virtual laboratory with different starting points and target positions marked
Figure 6. Viewing robots' interaction; zoom in

objects inside the environment, see figure 9.

The large-scale display also offers the opportunity to visualise multiple views at once, see figure 10. This allowed a quick visual analysis, individually and in groups, of the quality of the robots' actions in the virtual world for this large number of robots, by allowing us to move close to the individual robots in different situations.

Another advantage of using 3D simulation within VEs is the ability to increase the apparent complexity of the scene whilst reducing the time-cost of always rendering the highest details. Since the object representation inside the world does not necessarily imply representing all the details all the time, the simulation in a VE allows the system to render different levels of detail according to the distance from the viewpoint. The higher detail geometry is rendered when objects are close to the eye point, and only lower detail geometry is rendered when objects are further away. It also allows for multiple levels of detail where there

Figure 7. Team members split into two groups; zoom out
Figure 8. The large-scale simulation supports real physical sizes and spaces

would otherwise be abrupt changes.

Visual Evaluation of the Emergent Behaviours

Significant benefits of using interactive 3D simulations only occur if the VE assists the user in achieving objectives more efficiently than other simulations would. The simulation tools used in this work allow the human vision system to assess an agent's behaviour by running the system on a desktop or projecting the scenes onto a large screen, in order to improve the ability to investigate many more design alternatives using different micro structures, which implies testing different agent architectures. Therefore, the evaluation process focuses on visually assessing the quality of the robots' actions, and tests different behavioural models to see whether they meet the user's expectations. Visualising different levels of behaviour under different conditions allows us to define 'what-if' situations, to discuss times when the system fails, limitations on the number of robots, sensor range modifications, etc. It provides us with a way to explain the cases that are considered as bottlenecks. This improves the performance by evaluating the emergent behaviour at the system level, not just at the individual level

Figure 9. Different scales give different representation of the same spaces
Figure 10. Leicester Reality Centre, De Montfort University, displaying multiple views simultaneously

over a number of iterations.

In this respect, the visual assessment considers two main issues. First, to what extent the robots' actions are consistent with their knowledge, where rationality is considered as a measure of the consistency. This is carried out by comparing the numerical and written messages sent by these robots with their observable actions. For example, consider a message of content:

I am agent Ai at position x, y and can see agent Aj 18 degrees to the right within the avoidance zone.

The user expects this agent, when appropriate, to move 18 degrees to the left in order to avoid the detected agent. Second, the quality of the performed actions and whether the decisions made are feasible to perform in real-time; that is, they are guaranteed to behave within a certain time.

Figure 11. Viewing a large number of robots, photograph taken while running the large-scale system
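The rationality check described above, comparing an agent's reported perception with its observed action, can be sketched as a small consistency test. The message encoding, function names and tolerance below are assumptions made for illustration, not the chapter's message format.

```python
# Minimal sketch of the rationality check: the observed turn is compared
# with the turn expected from the agent's own reported perception.

def expected_turn(reported_bearing_deg):
    """An agent reporting a neighbour at +18 deg (to its right) inside the
    avoidance zone is expected to turn -18 deg (to its left), and vice versa."""
    return -reported_bearing_deg

def is_rational(reported_bearing_deg, observed_turn_deg, tol_deg=1.0):
    """True if the observed action is consistent with the reported belief.
    The tolerance is an assumed allowance for numerical noise."""
    return abs(observed_turn_deg - expected_turn(reported_bearing_deg)) <= tol_deg

# Agent Ai reports seeing Aj 18 degrees to the right within the avoidance
# zone, and is then observed turning 18 degrees to the left: consistent.
consistent = is_rational(18.0, -18.0)
# The same report followed by a turn further to the right: inconsistent.
inconsistent = is_rational(18.0, +18.0)
```

In the study this comparison is done by a human observer reading the robots' written messages against the visualised actions; the sketch simply makes the expected-action rule explicit.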
The experiments are run to test the levels of communication: a) local communications for movement coordination using the flocking algorithm described in (Al-Hudhud and Ayesh, 2008), b) global communication for interactive task assignment by the human operator, based on the blackboard technique (Al-Hudhud, 2006), and c) combined local and global communications. These levels of communication are coded into four layers. The layers encode the different levels of behaviour and agent architectures within different models: the Local Communication Model (LC-Model), the Global Communication Model (GC-Model), the Local-Global Communication Model (LGC-Model), and the Follow Wall Model (FW-Model).

Local Communication Model: LC-Model

The LC-Model represents an initial case where robots are expected to only communicate with nearby robots to coordinate movements. Therefore, the only inputs for the robots in this model are the information received via the agent's sensor, and they are not assigned any tasks. The expected behaviour from this model is to obtain the natural behaviour of a flock. The robots are expected to show the reactive behaviour that represents animal intelligence in avoiding each other and moving in a group (see Figure 12).

Running this model results in a lower level of behaviour and allows the reactive components of the robots to be active. This model allows us to test the system's micro structure, i.e. the reactive component of the agent's architecture (see Figure 13).

The tests show a high level of co-ordination, as robots were able to move in groups with signs of natural behaviours. For example, the flock can split up into two smaller flocks to go around both sides of an obstacle and then rejoin once past the obstacle. In addition, a dynamic leadership is also achieved, as any front agent can lead. So, if the leader should get trapped between other robots, any front agent will take over the leadership and the nearest agent will follow. Figure 14 shows a set of robots represented as 3D robots. They move randomly, following the first three rules of the algorithm.

Global Communication Model: GC-Model

For the GC-Model, the ground based controller (human) broadcasts messages, via the keyboard, to all registered robots within the system. These messages include explicit instructions to allocate tasks for a set of robots. When the robots read the message, they are grouped into a team; the concept of team forming. In terms of the flocking rules, each agent only considers the collision avoidance rule. In this context, an agent's internal system sets the weights that control the strength of both the alignment and cohesion forces to zero. The effect of applying global communication in this way is equivalent to the effect of applying a global alignment force, i.e. all robots will align their velocity in the direction of the target position.

According to this model, an agent is expected to activate only the cognitive component of its architecture. This component allows an agent to possess a cognitive knowledge structure, and in this case each agent of the group is supposed to possess a joint mental state that leads to a joint intention, i.e. performing the common goal.

This model is tested by running the system with a specified number of robots (five robots for the first trial), and allowing the GBC to interactively issue a message to the robots using the keyboard as an input device. The robots inside the virtual world receive the message and interpret its contents. The robots first issued messages to the GBC to report their status at each time interval. For example, an agent is happy to perform the task if (T <= C) 2, and it is a little bit "Tired" if (T − C <= 20) 3.
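The weight switches that distinguish the LC- and GC-Models can be sketched as a weighted heading update: separation (collision avoidance) stays active in every model, while the GC-Model zeroes the alignment and cohesion weights and steers by the globally assigned target instead, which acts like a global alignment force. The vector maths, weight values and the fallback status label below are illustrative assumptions, not the chapter's code.

```python
import math

# Sketch of the communication-level switches described above.

def normalise(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n) if n else (0.0, 0.0)

def steering(separation, alignment, cohesion, to_target, weights):
    """Weighted sum of the rule vectors; weights = (w_sep, w_align, w_coh, w_goal)."""
    w_sep, w_align, w_coh, w_goal = weights
    x = (w_sep * separation[0] + w_align * alignment[0]
         + w_coh * cohesion[0] + w_goal * to_target[0])
    y = (w_sep * separation[1] + w_align * alignment[1]
         + w_coh * cohesion[1] + w_goal * to_target[1])
    return normalise((x, y))

LC_WEIGHTS = (1.0, 1.0, 1.0, 0.0)   # local flocking only, no assigned task
GC_WEIGHTS = (1.0, 0.0, 0.0, 1.0)   # collision avoidance + global target

def report_status(T, C):
    """Status report thresholds from the chapter: happy if the estimated
    time T is within the allowed time C; a little bit tired if it overruns
    by at most 20 units. The final fallback label is an assumption."""
    if T <= C:
        return "HAPPY"
    if T - C <= 20:
        return "TIRED"
    return "UNHAPPY"

# Under GC weights the alignment and cohesion inputs have no influence:
# both calls below produce the same heading, straight toward the target.
h1 = steering((0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (0.0, 1.0), GC_WEIGHTS)
h2 = steering((0.0, 0.0), (0.0, 0.0), (0.0, 0.0), (0.0, 1.0), GC_WEIGHTS)
```

Zeroing `w_align` and `w_coh` reproduces the GC-Model behaviour described above: every team member's heading collapses onto the target direction, the equivalent of a global alignment force.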
Figure 12. A set of robots moving within a team, influenced by the cohesion force and the alignment force

Local-Global Communication Model: LGC-Model

According to this model, robots are expected to be able to communicate with each other as well as with the ground based controller (GBC); i.e. all the communication levels are implemented. All robots are supposed to be grouped in teams as a consequence of the GBC issuing a group command. So this model exploits the agent's reactive and cognitive components, represented by the hybrid Belief-Desire-Intention architecture integrated with the blackboard in one architecture.

The test considered the scenario where a set of robots are assigned a task, e.g. they are given a target location, in the upper left in figure 5. On being assigned the virtual task, each of the robots in the group is expected to: a) compute the expected distance to the task, b) estimate the time to finish the task, and c) issue messages to the GBC reporting this information as a status 'HAPPY'. Each agent is expected to report its status depending upon

Figure 13. A team of robots moving forward; the team members split into two teams as they encountered the wall
Figure 14. Viewing robots' interaction in a large scaled model

its dynamically changing beliefs. The reported status is a result of activating the deliberative component of the agent's architecture together with the reactive component.

In other words, an agent uses all the information available, not only that related to the local reactions but also that related to the GBC's commands. On arrival, robots are to circle around the target and continuously modify their weight for the blackboard negotiation rule, in order to avoid getting too close to the target.

Running this model has shown interesting results across the micro and macro levels. Regarding the micro structure, the agent's actions have met the expectations, where the deliberative component in the agent's architecture works synchronously with the reactive component. This test allows us to compare the different behaviours based on different knowledge structures. The emergent behaviour depends on the cognition intelligence as a prerequisite for the deliberation, see the GC-Model. Regarding the macro level structure, the visual tests assist in building the new model that reflects the features of both models in one, namely the LGC-Model.

The visual tests played a major role in evaluating the group communication, as they allow for interactively monitoring the robots' actions in real

Figure 15. Viewing robots' interaction in small-scaled model
Figure 16. Team members are circling around a target position

time. The four layers of communication (three layers for the flocking algorithm plus one layer for the blackboard rule) are all implemented. In addition to the visual outputs, the numerical outputs can also be analysed, for example the interaction weights and positions. This information, together with the visual experiments, allowed us to identify the effect of the cohesion force as a binding force that affects the robots' progress, see section 8.4.1, where the user is able to numerically analyse the weights. This has been significantly useful when the visual test showed slow responses in the agent's progress on combining the flocking rules with the blackboard.

The results of running this model show that the user is able to get comprehensive information from robots in the form of actions and written messages. The user can determine the conditions that increase the chance of completing the task despite difficulties that may arise; for example, when any agent fails to perform the specified task, the other team members will then continue to perform the task. Practically, this has been implemented in the same manner as described in section 8.4.2.

Oscillation State Detection and Follow Wall Mode: FW-Model

The FW-Model resolves the conflict that can arise when an agent needs to reach a specified point that lies behind a barrier or a wall. The 'Wall' problem implies that an agent would turn to avoid the wall, then turn back heading toward the target, so leading to oscillation. This can also happen, for example, when an agent's sensor reads three consecutive similar values for the identity of the detected object within the perception zone of the collision avoidance rule.

The basic idea behind the FW-Model is to let a meta-level component run the world model faster than real time to make predictions of future states. When the meta-level component detects that the system has specified an undesired action, it modifies the decision made to produce a new actual action in order to avoid reaching this state. This model enables the agent to detect the oscillation state and then allows the agent to switch between two modes. The first is the standard LGC-Model, whilst the second is the Follow Wall Mode, or FW-Mode. The FW-Mode is used as a recovery mode and allows the agent to
move smoothly alongside the wall until it can turn again toward the target. Once an agent has passed the oscillation state, it switches back to the standard LGC−Model.

The FW−Mode is implemented as follows. Firstly, each agent performs a pre-collision detection test using its current heading. Secondly, it computes a new heading according to the communication algorithm. The agent then uses the new heading to perform a post-collision detection test before changing its current heading. An agent will not change its heading unless there is an object straight ahead. For the cases when an agent detects three different objects, this model may still fail, as the agent may still oscillate whilst trying to avoid them.

Directed Numerical Analysis

The results obtained from the visual evaluation in the previous section are of great importance in directing the numerical analysis. Instead of blindly analysing the numerical outputs of the robots, one can numerically assess the performance of the local communication unit separately. This, for example, can be done by testing different sensor ranges, as the sensor is the only source of the information that governs the flocking system. In addition, one can test the influence of the flocking weights on team progress towards the target location.

Testing the Appropriateness of Sensor Range SR

As the sensor links an agent to the environment and controls its local interactions via the flocking rules, this section aims at testing the optimum sensor range that allows robots to: a) move and act as a team, and b) minimise the number of frames needed to complete a specified task. The number of frames indicates the number of steps, and consequently the distance an agent moves in order to reach the target. Completing the task means being within a distance equal to double the sensor range from the specified target. When an agent moves within a team towards a target, it activates both the local interaction rules and the external blackboard rule. The local interaction rules require an agent to filter the sensory data according to the minimum separation distance allowed between the robots. Therefore, the separation distance Sd is also considered when adjusting the sensor range, as it controls the influence of the local interactions.

Theoretically, the minimum number of frames required for an agent to reach a target occurs when it detects no objects along its path to the target. This can be achieved by setting the sensor range to zero, which implies switching off the flocking system. Consequently, the number of frames times the step length an agent moves each frame exactly equals the direct distance to the target. However, the fastest route implies that robots move as a set of individuals rather than as team members. It also implies that robots do not interact locally with their surroundings in the environment; they do not align, cohere, or avoid colliding with other objects. On the other hand, using a large sensor range increases the influence of the flocking system. This in turn increases the number of interactions, as each agent moves towards the nearest agent to align with, as well as towards the team centroid to keep itself bound to the team. As a result, the number of frames, or movements, towards the target increases and the agent takes a slower route.

The effect of the minimum distance allowed between objects in the environment, that is the separation distance Sd, is also considered in the experiment. The visual tests have shown that with Sd less than 7 units an agent is happy to slightly bump into things, as this distance is less than the agent’s dimensions (radius, width). In reality, the separation distance must allow an object to turn safely without hitting the detected object. In contrast, higher values of Sd increase the time an agent needs to complete the task. This is partially because with a large separation distance an agent


Figure 17. The smaller sensor range reduces the influence of the flocking system

Figure 18. A larger sensor range increases the influence of the flocking system

may not be able to go through the space between two closely detected objects or squeeze tightly round a corner. For this, an experiment is designed to run the LGC−Model with 20 robots. The user interactively chooses a start point and issues the team task command by giving the position of the specified target location. The robots are to move in a team to reach this target location.

The experiment showed that increasing the sensor range increases the perception zones of the flocking system; i.e. an agent may see more robots and, therefore, the number of interactions increases, see figure 18. This implies more steps towards the target, represented by a larger number of frames, see figure 19.

To summarise, choosing a lower value for the sensor range reduces the influence of the flocking system. This in turn yields many individualists rather than team and group performance; each agent lives its own life and does not try to co-operate with or follow others. Nevertheless, it is essential to maintain the influence of the flocking system by setting a sensor range that keeps the essence of group co-operation in the form of team movements.

Controlling the Interaction Weights of the Agent’s Subsystems

This section tests, with the aid of visual evaluation, how the user can adjust the interaction weights to help accelerate the robots’ progress. For this purpose, controlling the interaction weights is carried out by running the Local Communication Model (LC−Model). The weights associated with the flocking rules are dynamically computed at each time interval depending on the rule’s centroid and on the attributes of each rule; these are the perception zone for the rule and the filtering strategy.

Originally, the flocking system is implemented here with the weights computed in a conventional way, see table 1. Since the collision avoidance weight is given precedence over the other weights, as it is the most important interaction rule, the visual tests concentrated on the influence of the cohesion weight W^{Rb}_{Ai} on the progress of the robots. The visual simulation has been useful at this stage in assessing and evaluating the extent to which varying the cohesion weight allows the robots in the same team to move as a unit. According to the visual evaluation, it was found that the cohesion weight slows the robots’ progress, due to the


Figure 19. The number of movements towards the target increases as a result of a higher influence of the flocking system

high influence of the cohesion force. In addition, it causes overcrowding in the surrounding area, which is used as an indicator for examining the strength of this binding force.

Consider the situation where an agent detects a large number of nearby robots; each of these robots then modifies its velocity to move towards the cohesion centroid. If one of these robots detects a wall and, at the same time, some of the other robots within its avoidance zone, it may become trapped. In this trap situation, a neighbour of this agent (who may not detect the same objects) will be influenced by the trapped agent. In the same manner, the remaining robots will be influenced by the trapped robots as a result of a high cohesion weight. This can become worse if this set of robots is assigned a task to reach a specified target. In this scenario, the trapped agent continuously checks its capability of performing this task using the mechanism described in the GC−Model section. According to this mechanism, the trapped agent may discard its commitment to completing the task. The other robots who detect the trapped agent will be influenced by it, which can still significantly slow their progress. This leads to a longer expected completion time, or even prevents the influenced robots from completing the task.

In this respect, a main goal of analysing the interaction weights is to adjust the cohesion weight in order to avoid these impacts of a high cohesion weight without losing the benefits of its supportive role in the team performance. Therefore, the starting point was to test the conventional implementation of the weights in flocking algorithms; the values are shown in table 1 for the alignment weight W^{Ra}_{Ai} and the cohesion weight W^{Rb}_{Ai}. For this implementation, the cohesion weight is computed as the inverse of the distance to the cohesion centroid C^{Rb}_{Ai} if C^{Rb}_{Ai} falls within the avoidance range; otherwise it is set equal to one. This implies that the cohesion force is mostly inversely proportional to the distance to the centroid. The weight becomes bigger very quickly as the centroid position falls outside the avoidance range (Sd), whilst it does not become very small within the avoidance range.

In order to numerically assess the dominance of the cohesion weight in situations where the robots do not detect any avoidance cases, the LC−Model was used. Accordingly, these interaction weights are shown in figure 20. The bar graph shows the weights that control the strength of the interaction forces on an agent, according to the values shown in table 1, over the first 200 frames of the simulation. Points of high cohesion weight in figure 20 imply that an agent will be highly influenced by the nearby robots, and by monitoring them the trap problem can be observed.

To overcome the trap problem, W^{Rb}_{Ai} is modified in the following manner. Within the cohesion


Table 1. The values of the set of interaction weights

  Alignment W^{Ra}_{Ai}:           1/C^{Ra}_{Ai} if C^{Ra}_{Ai} < Sd;   1 if C^{Ra}_{Ai} > Sd
  Cohesion W^{Rb}_{Ai}:            1/C^{Rb}_{Ai} if C^{Rb}_{Ai} < Sd;   1 if C^{Rb}_{Ai} > Sd
  Collision avoidance W^{Rg}_{Ai}: 1 if C^{Rg}_{Ai} < Sd;               1 if C^{Rg}_{Ai} > Sd

Figure 20. The interaction weights over the first 200 frames. The cohesion weight dominates the interaction weights whenever the avoidance weight is zero. The unmodified cohesion weight values are shown in table 1

range, W^{Rb}_{Ai} is inversely proportional to the square of the distance to C^{Rb}_{Ai}; elsewhere it is inversely proportional to the distance to C^{Rb}_{Ai}, as in table 2.

Figure 21 shows the effect of modifying the way the cohesion weight is computed, both in reducing the influence of the cohesion force and in maintaining the essence of team performance. This can reduce the expected completion time when assigning a task to these robots. The Local Global Communication Model (LGC−Model) is used to examine the efficiency of the modified weight of the cohesion rule in terms of completion time (in frames).
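The original and modified weight schedules for the cohesion rule (tables 1 and 2) can be sketched as follows. This is a minimal Python sketch; the function names and the numeric value chosen for Sd are illustrative assumptions, not from the chapter:

```python
SD = 7.0  # separation distance Sd; illustrative value, not from the chapter


def cohesion_weight_original(c):
    """Table 1: inverse of the distance c to the cohesion centroid within
    the avoidance range Sd, otherwise set equal to one."""
    return 1.0 / c if c < SD else 1.0


def cohesion_weight_modified(c):
    """Table 2: inverse square of the distance within Sd, inverse of the
    distance elsewhere, so the binding force decays quickly inside the
    avoidance range and stays small outside it, reducing the trap problem."""
    return 1.0 / c ** 2 if c < SD else 1.0 / c
```

For a centroid distance of 2 units, for example, the original schedule gives a cohesion weight of 0.5 while the modified one gives 0.25, illustrating the weaker binding force within the avoidance range.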


Table 2. The interaction weights, modified

  Alignment W^{Ra}_{Ai}:           1/C^{Ra}_{Ai} if C^{Ra}_{Ai} < Sd;       1 if C^{Ra}_{Ai} > Sd
  Cohesion W^{Rb}_{Ai}:            1/(C^{Rb}_{Ai})^2 if C^{Rb}_{Ai} < Sd;   1/C^{Rb}_{Ai} if C^{Rb}_{Ai} > Sd
  Collision avoidance W^{Rg}_{Ai}: 1 if C^{Rg}_{Ai} < Sd;                   1 if C^{Rg}_{Ai} > Sd

Figure 21. The interaction weights over the first 200 frames, with the cohesion weight modified according to the values shown in table 2

Detecting Locations of Complexity within the Environment

A novel aspect of the numerical analysis of the interaction weights was the ability, by analysing the cohesion weights, to extract ideas about the structure of the environment. This implies that a human operator can form a rough idea about the locations of complexity by comparing the cohesion weights with the results of analysing the teams’ x, y positions in terms of their means and standard deviations.

By considering that the communication model can be deployed as a communication channel to a higher-level human controller, who could physically exist in a different location, the human controller can analyse the information sent by a set of robots so as to detect the locations of complexity in the environment. This is carried out by viewing the distribution of the cohesion weights


while performing a specified task by running the LGC−Model. The experiment is designed to run the simulation by launching a set of five robots in the simulated environment from a start point, at the front of the screen, and assigning them a task: to reach the target position shown at the top left of the same figure. The numerical outcomes of the experiment consist of all the cohesion weights and the (x, y) positions.

The graph in figure 22 shows the distribution of the cohesion weights along the frames. In this graph, low values of the cohesion weight indicate a large distance to the cohesion centroid. A small cohesion weight may represent robots that are moving away from each other in an attempt to avoid an obstacle. It may also imply that the detected robots are not close to each other and may be splitting up, depending on their positions relative to the obstacle and to each other, in order to avoid one or more obstacles. On the other hand, high values of the cohesion weights imply a reduced distance to the cohesion centroid, which indicates that the detected robots are close to each other. Therefore, the graph in figure 22 can be considered a way to extract the locations of difficulties encountered in the environment.

Figure 22. The distribution of the cohesion weights associated with the cohesion forces for each frame

In addition, the test showed that the x, y positions can be fruitfully analysed together with the distribution of the cohesion weights to give a more explicit structure of the environment. Therefore, the x, y positions sent by the robots to the GBC are used to plot the deviations from the mean positions. During each frame, the mean positions of the robots, and how far each robot individually is from the mean, are calculated.

The graph in figure 23 shows the deviation of the robots’ positions from the mean, for the task used to analyse the cohesion weights above. The deviation follows five different routes and shows the times at which obstacles were encountered, as high deviations, and the times of open areas, as low deviations. For example, the times and locations at which the four rows of chairs shown in figure 5 were encountered can be extracted. This can also be useful in assessing the robots’ ability to autonomously organise themselves by maintaining their positions with respect to their neighbours. This implies that these robots were capable of maintaining


their positions during each frame in order to act as a team on their way towards the target, which supports the survivability aspect of long-term tasks. This level of autonomy is currently in high demand in robotic applications. This example has demonstrated the following issues:

1. The set of robots are able to efficiently avoid each other while completing the task within a team, on account of the collision avoidance rule and the social relations represented by the alignment and cohesion forces.
2. The high level of autonomy these robots have whilst moving towards the target, being capable of self-organizing and position-maintaining in order to control their motion, is considered efficient in controlling a large number of robots and supports the scalability objectives.

The numerical assessment was strongly directed by the visual assessment. This numerical analysis can be deployed successfully to analyse the cohesion weights and can draw out an abstract picture of the environment by integrating the ideas from both figure 24 and figure 22, defining an explicit structure of the environment and the locations of complexity.

A comparison between the performance of the 3D simulator using the above-described VR system and that of a set of real multiple robots has been carried out. The set of real robots was interactively assigned the same task as the virtual robots in the above-described experiment. The x, y positions of the real robots are plotted in figures 25 and 26. These plots exhibit similar behaviour for the simulated robots (figures 23 and 24) and the real robots (figures 25 and 26). This result emphasizes the role of virtual environments in pipelining the production of complex systems.

CASE STUDY 2: FROM REALISM TO ADDITIONAL REALITY

This case study presents the implementation of a 3D virtual environment (VE) as an interactive collaboration environment between a team of robots.

Figure 23. The individual deviations from the mean positions for the set of 5 robots
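The per-frame analysis behind figures 23 and 24, that is, the mean position of the team and each robot's deviation from it, can be sketched as follows. This is a minimal NumPy sketch; the array layout and the function name are illustrative assumptions, not from the chapter:

```python
import numpy as np


def deviations_from_mean(positions):
    """positions: array of shape (frames, robots, 2) with the x, y
    positions sent by the robots. Returns (dev, avg) where dev[f, r] is
    the Euclidean distance of robot r from the team's mean position at
    frame f (as in figure 23) and avg[f] is the average deviation at
    frame f (as in figure 24)."""
    centroids = positions.mean(axis=1, keepdims=True)    # (frames, 1, 2)
    dev = np.linalg.norm(positions - centroids, axis=2)  # (frames, robots)
    return dev, dev.mean(axis=1)
```

High values of the per-frame average mark moments when the team spread out, for example to pass between obstacles, while low values mark open areas.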


Figure 24. The average standard deviation for the same group, the set of 5 robots

Figure 25. The deviation from the mean positions, two flocking robots

This team can include human operators and physical robots, all at different physical locations. The use of the VE as a virtual blackboard and a virtual world has been shown to be useful in allowing the human operator to: a) exchange information regarding the robots’ environment, b) observe the emergent behaviour of the robots through a stereoscopic projection of the images sent by the robots, c) interact with and monitor the robots in real time, and d) visually assess the robots’ behaviour and interactively issue a relevant decision through the virtual blackboard.

Recently, research in the area of communication and control has emphasized combining both reality and realism for controlling a set of robots physically existing in a different location (Turner, Richardson, Le Blanc, Kuchar, Ayesh, & Al Hudhud, 2006). This implies that a set of properties can be achieved by integrating various tools together to create a testbed network for the next generation of robot telepresence.

The above-described research aims to be able to launch a manned spacecraft, twice in a defined period of time, with a skeleton crew of technicians


Figure 26. The average standard deviation; two flocking robots

to operate it. Many current UMVs (unmanned vehicles), instead of requiring a small operating crew, can require a large team of operators and be more expensive to run than the equivalent manned version, thus defeating some of the purposes of having UMVs in the first place. Also, in the last 15 years there has been rapid growth in the study of semi-emergent behaviour when many autonomous robots operate in a closed environment. It is thus imperative, if we require emergent behaviour from UMVs, that we change the way telepresence operates.

RoboViz (Turner et al., 2006) is a small step towards considering robotic autonomous engines as connected computational engines that have to be steered and that also have local communication modes towards each other, as well as to and from command centres and operators. This way of describing heterogeneous computing elements, connected with diverse networking standards and intended to be computationally steered by human operators, has many similarities to traditional e-Science computational steering systems. This work highlights two main dimensions: a) first, simulation results for multiple communication levels of a heterogeneous agent-based system of human operators and robots, and b) second, an advanced experimentation study for testing a real implementation of multiple command centres and, accordingly, describing the individual parts of the testbed being used through the CompuSteer project (Turner et al., 2006).

A Collaborative User and Robot Environment Network Testbed

This integration between the virtual and real networks features key practical aspects. First, an increased level of presence and immersion within the simulation helps in gaining an increased level of understanding of the robots’ behaviour inside their environment. Second, the virtual network enables real-time user–robot interaction by observing and monitoring the robots’ actions and analysing the information sent by the robots to a human operator.

The proposed implementation of the virtual network combines the realism and reality of the robot-based communication modes and the corresponding behaviours. This can be of great importance for the ability to control real physical robots from different physical locations, allowing for easy fault detection and for real-world problems to be tested and seen in an easy and understandable
277
Virtual Reality

Figure 27. The robots receive the user’s commands as well as messages from other robots via an ethernet connection

manner. The following sections present a study of this kind of implementation, including the response time for multiple connections, building a testbed for these multiple connections, and the choices made for this testbed.

Loss, Latency and Multiple Connections

The networking involved within a telepresence robotic interface has two key components: the control network, predominately sending instructions from an operator to the robot, and the information network, transmitting data, often predominately video (especially when considering the required bandwidth), from the robot back to an operator. Given reliable communication channels, and dedicated links with known, if not actually removed, latency, the command-and-control system within this telepresence is tightly controlled and becomes an easy-to-understand loop.

We can now consider the time delays involved in this ideal system. Video and diagnostics information from the robot is transmitted over a time delay, δv, and after a response time from the operator, δu, an instruction is sent to the robot over a further network time delay, δc, and acted on by the robot after interpretation with a final delay, δr. This cycle repeats, with trained operators controlling the robot with an extended reaction time of

rt = δv + δu + δc + δr

A trained operator can use this kind of simple system with a high degree of efficiency, even experiencing extra input, for example robotic haptic information (transmission of force/touch direct to vibration/force feedback devices), over quite a range of latency times. As latency times increase, efficiency decreases, but this is in some respects a controlled degradation of service. An issue arises when extra influences occur that can make the control or robotic steering operation difficult or even impossible, which we will now list and consider.

The first influence we will describe is variable latency. It is well known that, given a fixed latency, even a very large one, for example from the Moon to Earth (up to 2 seconds time delay with


direct visibility, but rising to 10 seconds if a data relay satellite is used), over a training period, preferably off-line, an experienced operator can reliably control an on-line robot at a constant level of efficiency. This is unlikely to approach the efficiency of an operator with direct control, but what is important is that there is a constant percentage efficiency value. Networks are not usually built like this (unless, for example, they are dedicated switching systems [dark fiber]) and their latency will fluctuate over time. Now the extended reaction time is a function of time t,

rt(t) = δv(t) + δu + δc(fn(t)) + δr

These fluctuations in latency make the task an extra degree harder for the operator, potentially drastically reducing the amount of useful work achievable. A simple approximation is to model these fluctuations with Gaussian or Poisson distributions with known mean values, but this may not take into account burst temporal delays, as networks have non-homogeneous effects.

A second influence of a standard network is the probability of loss and corruption (e.g. data convolution and additive noise) within the transmitted data, which is a natural occurrence. Again, for an operator this introduces uncertainty and thus reduces performance. So for each of the networks we have probabilities p of corruption or loss for a particular element of size s, which again is time dependent,

p_s(v(t)), p_s(c(t))

Latency and loss can be modelled by a generic corruption/delay, possibly by threshold filtering the convolution of the transmitted stream t with a point spread function p′ and adding noise n; as networks are non-linear in (packet) data recovery, this is only a crude approximation:

s = t ⊗ p′ + n

If s is very different from t, i.e. above some threshold T, then the resulting data can be unrecoverable:

Fail: {|s − t| > T}

A third problem is the fact that a single operator controlling only one robot is inefficient, and the introduction of multiple robot control is essential. This aids efficiency, as well as allowing new joint collaborative actions to occur between the robots that may not have been possible otherwise.

Finally, we would like to add specialist expertise and guidance for the operator, involving other users, each monitoring the progress as well as being able to act, take over control, or simply suggest alternative courses of action. This brings together a requirement for a set of multiple command-and-control centres with multiple robots over standard networks, and a need for a third network to be introduced that connects the command centres together, creating a shared collaborative space.

Technical Choices for Building a Testbed Communication System

To consider these issues, a real robot network with a set of real telepresence operators is proposed, distributed using different packet-style systems, each considering the requirements for multi-way communication, loss and latency.

UDP/IP

The network connections will use UDP/IP to transmit continual streams with no concern for full error correction and recovery, by inherently not allowing packet resending. In a real-time system, hearing or seeing data which should have been transmitted some time previously is, if missed, possibly too late to be useful. As the temporal latency


increases for a packet, a practical solution is to introduce larger-than-average packet buffers at the receiving end, which will also increase the mean latency value.

There is a continual battle of trade-offs between mean latency and the amount of packet loss. Increasing the mean latency, by having a large packet buffer, reduces the number of packets lost, but also increases the reaction time rt for the system. There is a difference of opinion as to whether the data stream consisting of commands should be error corrected for missing packets, e.g. TCP/IP, or whether even simple command streams can have lossy influences that are acceptable.

Multicast Transmission

Any system should be scalable, and the usual unicast (one-to-one) communication systems have an O(n²) scaling issue as the number of robots and/or the number of command centres increases. Using multicast, the aim is for a single packet to be optimally transmitted to a range of addresses, meaning that along any specific connection no more than one copy will need to be transmitted. This is essential to cope with a large increase in the number of robots, operators and/or command centres.

Compression Standards

All data streams that do not need to be transmitted faithfully, for example video and audio, can also be lossily compressed. For a 25 Mb/s connection (a low guaranteed average within academia), with, say, 20-30 video streams, each stream should aim to be well under 1 Mb/s in order to accommodate the outgoing network as well as include audio and other data streams. Compression can consider all forms of quantisation: temporal (number of frames per second), spatial (x–y resolution of the image) and, within the compression codec, quantisation in frequency (for example in Discrete Cosine Transform or wavelet transform codecs).

Intra-Robot Communication

The language required for robot-to-robot requests needs to be formal in structure. This allows for analysis, prediction, and behavioural modelling. It also allows localised communication, for when two robots physically (or visually) see each other and can pass intentions, behaviours and states of being.

Semi-Autonomous Robot Behaviour

If the arrival of transmitted data cannot be guaranteed, nor the exact response times to requests, commands cannot be at the microscopic, detailed level. Thus, robots also need semi-autonomous behaviour for two main reasons: first, so that they can continue to operate safely (negotiate doors, walls, etc.) when there is no direct assistance, and second so that, by working together in teams, the emergent behaviour can allow functionality that is too complex for a single operator to manipulate. The system currently used is based on flocking rules and known blackboard intents.

Steering Architecture

With this system every command centre will receive all the video streams (possibly in stereoscopic projection, if required and available) from all the robots, as well as from the other command centres. This gives a specified experience of latency, and thus efficiency, in controlling the robots. Each robot can be considered a computational engine, deciding its next actions based on both higher-level tasks from the command centres and low-level information from its environment.

Multiple users can try to steer the same robot, which, given temporal latency issues, can occur no matter how well practiced the operators are. The robot needs to take a decision based on the multiple, possibly contradictory, instructions, as well as, of course, the local environment. For example


this means that two instructions, one to turn right and the other to turn left to avoid a hole, will not result in the robot carrying out the average of the two instructions. Diagnostic status information returning from the robot indicates the current course of action back to the operators, and thus, with communication across command centres, difficulties can be alerted.

The users still need protocols to operate, but as the command centres are video- and audio-linked, inter-command-centre steering can be done (initially by shouting). This is not ideal, but for the first generation of the testbed it has the advantages of procedural optimisation, as well as allowing layers of computational blocking to be added at a later stage.

Required Facilities at Command Centres

All the standard benefits of an Access Grid (Collaborative Working Environment) structure are expected within any command centre (which may not be the case in the future). This helps to increase the level of presence experienced by the users, so that other command centres appear in similar co-located spaces, as well as co-located with the spaces inhabited by the robots. This includes:

1. Large life-size projected screens (for full-size viewing)
2. Directionless remote microphones, so there are no cables or even radio-tags required
3. An echo cancellation system, so headphones are not needed
4. Shared data sets as well as the media streams, so environments can be considered integrated

A final issue to be considered is that of stereoscopic projection, which is available in a subset of conference rooms. This allows 3D-stereoscopic robot vision to be transmitted to any or all of the users and operators, as well as data sets - for example building schematics - to be shared in 3D at different sites.

SUMMARY

This chapter presented an overview of current virtual reality systems. It also introduced a new VR system, Presagis. Presagis allows for interactive, full-scale immersive simulations without helmets and with a variety of tracking devices.

The chapter also introduced future implementations of virtual reality systems: pipelining the production of complex systems, and telepresence. These areas are currently in high demand. The case studies presented in this chapter address these future implementations, and their results have been shown to be fruitful. On the one hand, the realism of VR simulations helps in pipelining systems production, as it can cut the time required for physical implementations. On the other hand, it can be useful in advanced applications for the purpose of gaining additional reality, as seen in case study 2, the telepresence example.

REFERENCES

Al-Hudhud, G. (2006, July 10-12). Visualising the emergent behaviour of a multiagent communication model. In Proceedings of the 6th International Conference on Recent Advances in Soft Computing (pp. 375-381). Canterbury, UK: University of Kent.

Al Hudhud, G., & Ayesh, A. (2008, April 1-4). Real time movement coordination technique based on flocking behaviour for multiple mobile robots system. In Proceedings of the Swarming Intelligence Applications Symposium. Aberdeen, UK.

281
Virtual Reality

KEY TERMS AND DEFINITIONS

Pipelining Software Production: For the purpose of categorising the development processes of complex software systems. There is a high demand for the use of virtual models in producing the system as part of the production pipeline.

Behavioural Realism: Using virtual reality to model theoretical models supports the realism of

visualized emergent behaviours inside the virtual world. These behaviours should be observer-convenient and possess a high level of realism.

Intra-Robot Communication: Robot-robot communications.

Stereoscopic View: A projection of the images sent by robots through human-robot communications.

ENDNOTES

1. Thanks to De Montfort University, Leicester, UK for providing the facilities and support for this study.
2. C is the expected time for task completion, while T represents the elapsed time since the agent started the task.
3. T − C is the time elapsed since the agent started to encounter problems in performing a specified task.
4. A unit is used with respect to the environment dimensions; in other words, if the dimensions of the environment are 1700 units × 800 units, the separation distance is 7 units.
5. An operator is defined here as a closely connected hands-on controller (note there could be more than one), and a user is defined as any human with the ability to take control directly or indirectly.
6. In fact, many operators may be required to operate one robot.


Chapter 13
Implementation of a
DES Environment
Gyorgy Lipovszki
Budapest University of Technology and Economics, Hungary

Istvan Molnar
Bloomsburg University of Pennsylvania, USA

ABSTRACT
In this chapter the authors describe a program system that implements a Discrete Event Simulation
(DES) development environment. The simulation environment was created using the LabVIEW graphi-
cal programming system; a National Instruments software product. In this programming environment,
the user can connect different procedures and data structures with “graphical wires” to implement a
simulation model, thereby creating an executable simulation program. The connected individual objects
simulate a discrete event problem. The chapter describes all simulation model objects, their attributes
and methods. Another important element of the discrete event simulator is the task list, which has also
been created using task type objects. The simulation system uses the “next event simulation” technique
and refreshes the actual state (attribute values of all model objects) at every event. The state changes
are determined by the entity objects, their input, current content, and output. Every model object can
access (read) all and modify (write) a selected number of object attribute values. This property of the
simulation system provides the possibility to build a complex discrete event system using predefined
discrete event model objects.

GENERAL INTRODUCTION OF THE DES SIMULATOR

Implementation of a DES can be realized in different ways (Kreutzer, 1986). To simplify the procedure and to avoid difficulties related to computer science subjects (language definition, syntax and semantics discussion, translation, etc.), in this chapter the authors decided to remain focused on those issues which are closely related to the DES and the simple implementation of the DES computational model. Doing so, the authors present a DES extension of the industry standard programming system LabVIEW.

DOI: 10.4018/978-1-60566-774-4.ch013

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

The LabVIEW DES simulation is the numerical computation of a series of discrete events described with the help of a set of blocks with given characteristics (the framework system of simulation processes, which can be described with discrete stochastic state variables).

• With the help of the framework system, a simulation of processes can be realized. The processes are driven by several independent, parallel events, between which information exchange is possible.
• Within the framework system, the linear, parallel and feedback connection of objects is possible in optional quantity and depth.
• The framework system is designed in an object-oriented manner. Both the blocks with different characteristics serving the process simulation and the entities carrying information are designed in the same way.
• The programs prepared with the help of the framework system can be saved as subroutines, too. These subroutines can be reused in the simulation program an arbitrary number of times, with arbitrary input parameters.

The simulation framework operates in the following structure:

Why is it Necessary to Extend the LabVIEW Program System with Discrete Event Objects (DEO)?

The LabVIEW program system was developed for measuring, analysis and display of data generated in discrete time in equidistant (time) steps. It has numerous procedures which make the measurement of data easier by using a triggering technique based on a given time or signal level. For the data analysis there is a wide-scope procedure library at disposal, which makes it possible to realize detailed signal analysis either in the time or in the frequency range. As can be seen, the program system is able to execute sample processing of continuous signals with the help of digital computers. It also contains procedures which are able to handle elements of queues (creating queue objects, placing a new element into the queue, removing an element of a queue, and finally deleting a queue object). These procedures and a few others which help parallel programming (Notifier and Semaphore Operations, or Occurrences) are objects and procedures which can be used to create (put together) a discrete event simulation model.

A new research and development effort has been started to make LabVIEW capable of creating new object types, which make it possible to generate discrete event simulation models of arbitrary complexity in an easy way.

What is Needed to Create a Discrete Event Simulator (DES)?

Realizing a DES, there are a series of basic functionalities and elements which must be implemented to ensure a modular program structure. When the basic elements were implemented, object-oriented data structures and object-related methods were used, fully utilizing the possibilities and features of the host, LabVIEW.

Simulation Time and Event Handling

The most important step in a DES system is to establish the event objects (Task) with given data, the storage of the data according to given conditions, and the removal of them from the system when given events have already taken place. The individual event objects have to be properly stored and organized according to given conditions using another object (called TaskList). When different DES systems are implemented, different event lists (differing both in number and in tasks) can be used. In the DES system realized in LabVIEW, there is only one such list, which contains all the events (events' data) of the simulation model in


Figure 1. General block diagram of simulation system in LabVIEW

ascending order of time. Each new event is put into its proper place within the event objects list, keeping the time in ascending order.

Random Number Generators

In the LabVIEW DES system, the operation of numerous random number generator types has to be provided to fulfill the different timing tasks in the simulation model. The operation of random number generators in LabVIEW is ensured by the random number generator of uniform distribution between 0 and 1. This random number generator is the base of the Constant, Uniform, Exponential, Normal, Triangle and User Defined random number generator types. At a call, the random number generators generate random numbers in accordance with the parameters of the given distribution type. If needed, further random number generator types can be implemented and included into the set of model elements developed previously.
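The two mechanisms described above, a single task list kept in ascending time order and a family of distribution generators built on a uniform(0, 1) base, can be sketched in ordinary code. The following Python fragment is an illustrative sketch only, not the authors' LabVIEW implementation; the names `TaskList`, `schedule`, `next_event` and `exponential` are invented for this example:

```python
import heapq
import math
import random

def exponential(mean, u=random.random):
    # Inverse-transform sampling: an exponential variate derived
    # from the uniform(0,1) base generator, as the text describes.
    return -mean * math.log(1.0 - u())

class TaskList:
    """One list holding every scheduled event, ordered by time."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps insertion order at equal times

    def schedule(self, time, action):
        heapq.heappush(self._heap, (time, self._seq, action))
        self._seq += 1

    def next_event(self):
        return heapq.heappop(self._heap)  # always the earliest event

# Next-event technique: jump straight to the earliest scheduled event.
tasks = TaskList()
tasks.schedule(5.0, "machine finishes")
tasks.schedule(2.0, "source launches entity")
now, _, action = tasks.next_event()
print(now, action)  # 2.0 source launches entity
```

Keeping the list heap-ordered means the earliest event is always retrieved in logarithmic time, which is the usual design choice for a single global event list in a next-event simulator.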
Types of Discrete Event Objects

Figure 2. Characteristics of task object

The most important elements in a DES simulator are the base objects fulfilling different tasks. The base objects make it possible to build up different simulation models. There are five base objects in the LabVIEW-DES simulator, which provide the foundations of more complex DES objects with complicated operations. These base objects provide the tools for generation, receiving, temporary


storage, movement delay and the removal of Entity objects representing the movement of material and information. Besides the Entity objects ensuring the operation of the DES system, there are other objects as well, which create Entity objects according to an order given by a time distribution; these objects are called Source objects. The admittance and temporary storage of Entity objects are done by the Buffer objects, which are able to store Entities temporarily, although only with a finite storage capacity. The Machine object is able to postpone/delay the advancement of an Entity according to a given time distribution. This capability enables modelling the different time needed for operations in real systems. Those entities which have already completed their tasks can be (have to be) removed from the simulation model; the Entity will be passed to a Sink type object, which removes the Entity from the objects list and deletes it to avoid further tasks being executed with it.

Figure 3. Structure of TaskList object and its characteristics

The object names in the LabVIEW DES system are composed using two name elements. Each object has a specific, individual name, while all objects have a name parameter that shows which other objects contain it. The naming convention of Entity objects enables tracing and administration. The Full Name of the object is made up from the Name and the so-called Container Object Name; it shows which other object or objects contain the given Entity object. As a consequence, the base objects handling the discrete events have to possess a Container Object Name parameter, which receives the data from a Container object. The Container object has only one task: to ensure the value of the Container Object Name for the basic objects.

The objects described above are called DES Objects. The DES models in a simulation system are made up of these objects.

Connecting Discrete Event Objects

The input and output of DES model objects have to be connected according to the structure of the simulation model. This is the way to construct the network of objects which ensures the fulfilling of a specific task with the simulation model. In the LabVIEW graphical programming system, each base object of a DES model has a single input and a single output, which can (have to) be connected with a data exchange line of "identical type" according to its data type. The framework system executing the simulation supplies each DES object of the model with an ID Number, then determines which output is connected to which input. Also, each DES basic object has one data (Entity) input and one data (Entity) output. If a data


output channel is connected to the input channel of more than one DES object, then the state of the DES objects (occupied or not occupied) determines to which input channel of which DES object the Entity will be forwarded. In case each DES object is ready to receive the incoming Entity, the (physical) order of the object definition determines to which DES object the Entity will be forwarded. Besides this "wired" connection mode, we can select, with the help of supplementary objects, an input channel according to an optional strategy, ensuring a flexible selection of DES objects, i.e. of the processing direction.

Figure 4. Characteristics of DES object

Output Statistical Data of Discrete Event Objects

Each DES base object has an input and an output parameter set (record). The input parameters define the operation of DES objects. The output parameters provide statistical data about the operation of the given object depending on its particular DES object type.

Data provided by Source objects (Figure 18):

• TimeNow
• Time of Next Launching
• Time Step
• Index of ObjectList
• Content Object Index
• Number of Created Objects
• Status Time Parameter (duration (time) in Idle, Busy, Blocked, Failed, Paused state)
• Status of Source (the possible states are: Idle, Busy, Blocked, Failed, Paused)
• Full Name

Data provided by Buffer objects (Figure 19):

• TimeNow
• Index of ObjectList
• Content of Buffer
• Indexes of Content Objects
• Used Buffer Capacity (in percentage)
• Status Time Parameters (duration (time) in Idle, Busy, Blocked, Failed, Paused state)
• Status of Buffer (the possible states are: Idle, Busy, Blocked, Failed, Paused)
• Full Name

Data provided by Machine objects (Figure 20):

• TimeNow
• Time of Next Launching
• Time Step
• Index of ObjectList
• Content Object Index
• Utilization (in percentage)
• Histogram of States
• Pie Chart of States


• Status Time Parameters (duration (time) in Idle, Busy, Blocked, Failed, Paused state)
• Status of Machine (the possible states are: Idle, Busy, Blocked, Failed, Paused)
• Full Name

Data provided by Sink objects (Figure 21):

• TimeNow
• Index of ObjectList
• Last Content Object List
• Number of Sinked Objects
• Status of Sink (the possible states are: Idle, Busy, Blocked, Failed, Paused)
• Full Name

Figure 5. Structure of ObjectList object and its characteristics

Each DES object possesses a two-dimensional dynamic array variable, the elements of which are string variables (User Attribute). Arbitrary data belonging to specific DES objects, emerging during the execution of the simulation program, can be stored in these cells and referred to by indexes as necessary. This ensures that each DES object is able to collect information as needed to define statistical data, which can be continuously evaluated during the execution of the simulation program or at the end of the simulation.

Example 1: The Model of the M/M/1 System in LabVIEW

The M/M/1 system model in the LabVIEW DES system can be created by connecting the four basic objects. This has to be done by mapping the elements of the system model into LabVIEW objects (Figure 14, Figure 15 and Figure 16). The element ensuring the Entity generation in the system model can be represented in the LabVIEW DES as a Source object. Similarly, the queue can be represented as a Buffer object, the service operation as a Machine object, and the removal of an Entity as a Sink object. The next step, as described in the section Connecting Discrete Event Objects (see also Figure 16), is the creation of a running model by connecting the DES objects.

In order to complete the numerical computations, all the objects have to be provided with the record consisting of the input parameter values, and with the so-called output parameter record, which ensures the display of the result (the statistical data). All this has to be done through the panel


of the LabVIEW program (see also the section Structure and Tasks of the Source Object).

System Model Mapping

A system model based on discrete events can be built with objects which fulfill the basic tasks and functionalities of a general simulation system. Such basic types are the Source type objects, which issue the Entity objects necessary for the functioning of the simulator. After executing the defined task, the Entity objects are extinguished by the Sink type objects. The intermediate storage of Entity objects is realized using Buffer objects, while the delay of Entity data stream processing, which represents the working process in a model, is realized using Machine objects. The simulation models depicted in Figure 14 to Figure 17 can be created using the above four basic object types. Subsequently, the four basic object types, their characteristics and use will be discussed.

Structure and Tasks of the Source Object

The Source object issues Entity objects at intervals determined by different function types.

During its operation, the Source object sends the generated Entity objects to the object connected to its output. In case the latter is not ready to receive the Entity, the Source object changes its state into a Blocked state at the next active computational period. In a blocked state, there is a repeated attempt to send the Entity to the object connected to the Source object's output.

It is possible to connect more than one object input to a Source output. In such cases, the Source object attempts to assign the Entity object in the definition order of the connected objects. Arbitrary Entity generation strategies can be achieved with the help of the connected output Source objects' indexes. It is compulsory to define the input parameter record of the Source object.

The input and output parameters of the object are shown in Figure 6.

Structure and Tasks of the Sink Object

The Sink object extinguishes (swallows) the Entity objects.

During its operation, the Sink object determines whether the received Entity objects contain other Entity objects. The operation continues recursively until it does not find any more objects containing further Entity objects. It puts all the indexes of the Entity objects "explored" (all the children objects) into a stack memory and deletes them from the object list, until only the Entity remains that was received through the input channel. Finally, the Sink object deletes this object from the object list as well.

Deleting the object from the object list means that its name is extinguished (the name becomes a text type constant of zero length).

There is no output defined for the Sink object.

It is compulsory to connect the input channel of the Sink object, as it is compulsory to define its parameter record.

The input and output parameters of the object are shown in Figure 7.

Structure and Tasks of the Buffer Object

The Buffer object ensures a temporary storage of the Entity objects (Entities) until the object connected behind the Buffer object in the material or information flow is able to fulfill the predefined task.

During its operation, the Buffer object tries to send the received object input immediately to the object connected as output. If it fails, then it stores it temporarily.

It is possible to connect more than one object input to the Buffer object output. In such cases, the Buffer object attempts to assign the Entity


Figure 6. The input and output parameters of the object

Figure 7. The input and output parameters of the Sink object

object to its output in the definition order of the connected objects.

An arbitrary strategy can be defined with the help of the object indexes of those objects which are connected to the output of the Buffer object.

It is compulsory to activate the input channel of the Buffer object, as it is compulsory to define its parameter record.

The input and output parameters of the object are shown in Figure 8.

The parameter sets and their setup are demonstrated with the help of the Buffer object example (as a DES object), which has both input and output channels. The base objects with a structure similar to that described above have a similar set-up, except that in some cases the input data channel (Source), while in other cases the output data source (Sink) function is missing, although the channel physically is at disposal.

Buffer objects have the following input parameters (Figure 8):

• Container Object Name: String variable which contains the name of the container object of the Buffer object
• Buffer Object Name: Contains the name of its own Buffer object
• Buffer Object Input Parameters: The input parameters of the Buffer object (Capacity, Enabled Status of Buffer, Type of Queue)
• Input Channel: A record of two integer type values, which provides in its Data variable the name of that Entity object which is waiting for the Buffer object to take it from the connected object. If the value of Data is zero or negative, it means that the Input Channel is empty, i.e., it does not contain any data. If the Data value is positive, it is the index value of a real Entity object. The Channel Index variable provides the information about the channel number from which the Entity arrived.

The output parameters of the Buffer objects are as follows (Figure 8):

• Index of ObjectList: A positive integer type value, which provides the object index value in the ObjectList/DES Objects vector.
• Full Name: A String variable which stores the name of the container object of the Buffer object as well as the own name


Figure 8. The input and output parameters of the Buffer object

of the Buffer object.
• Buffer Object Output Parameters: The output parameters of the Buffer object (TimeNow, Content of Buffer, Index of Content Objects, Status of Buffer, Command in Simulation, Full Name, Status Time Parameters, Used Buffer Capacity [in percentage])
• Output Channel: A record consisting of two integer type data, giving in its Data variable the index of that Entity object which is waiting for the Buffer object to pass it over to the connected object. If the value of Data is zero or negative, it means that the channel is empty and does not contain any data. If the value of Data is positive, it marks a real Entity index.

The Channel Index variable provides information on the channel number through which the Entity has arrived.

The structure of the base objects Source, Machine, Sink and Container is similar to that of the Buffer object described previously. The connections of the objects are demonstrated in Figure 16, where an object named "Source 1" realizes the creation of the Entity.

The next figures (Figure 9 to Figure 13) demonstrate the input parameter values of the M/M/1 DES Model.

The Source object generates Entities, named "Apple", from the starting time of the simulation (First Creation Time) with the help of an EXPONENTIAL(Mean) distribution function. The Source object does not limit the number of Entities issued, since the value of Maximum Batch Size is infinite (Inf. = Infinite). The other values used as input parameters bear their input values in case of other functions. When the user of the program chooses the CONSTANT(Value) distribution, Value can be dynamically changed during the simulation, which makes it possible to define arbitrary distribution functions. The Enabled Status of Source state selection combo box supports the selection of three possible object states (Enabled, Failed, Paused). In Enabled status, the DES object executes all computations. In Failed status, it does not execute any, since this state shows that there is an error. While in Paused status, the device is in an excellent condition, but it does not execute any computation; rather, it waits to start work to complete the task of a (parallel working) DES object which is currently in "failed" state.

The input parameters of the Buffer object (Figure 10) include the number of Entities that can be stored by the object (Capacity), as well as the type of the object, and the Type of the Queue, which is one of the three possible choices in the combo box form. The Buffer types that can be chosen are First in First Out, Last in First Out and Using Entity Priority (where the Entity leaves the storage according to the values stored in the System Attribute parameters of the same Entity).

The Enabled Status of Buffer status can be chosen out of three possible object statuses (Enabled, Failed, and Paused) using a combo box. The meaning of the statuses is the same as described under Source objects.
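The three queue disciplines offered by the Buffer object (First in First Out, Last in First Out, and Using Entity Priority), together with its finite Capacity, can be illustrated with a short sketch. This is hypothetical Python written for illustration, not the LabVIEW implementation; the class and parameter names are invented:

```python
from collections import deque

class Buffer:
    """Finite-capacity store with a selectable Type of Queue."""
    def __init__(self, capacity, queue_type="FIFO"):
        self.capacity = capacity
        self.queue_type = queue_type
        self.items = deque()

    def put(self, entity, priority=0):
        if len(self.items) >= self.capacity:
            return False  # storage is finite; the caller must retry later
        self.items.append((priority, entity))
        return True

    def get(self):
        if not self.items:
            return None
        if self.queue_type == "FIFO":
            return self.items.popleft()[1]   # First in First Out
        if self.queue_type == "LIFO":
            return self.items.pop()[1]       # Last in First Out
        # "Using Entity Priority": remove the entity whose stored
        # priority attribute is highest, regardless of arrival order.
        best = max(self.items, key=lambda pe: pe[0])
        self.items.remove(best)
        return best[1]

b = Buffer(capacity=2, queue_type="LIFO")
b.put("first"); b.put("second")
print(b.get())  # second
```

A rejected `put` models the blocking behaviour described in the text: when the store is full, the sending object keeps the Entity and retries at a later computation cycle.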


Figure 9. Input parameters of the Source object

Figure 10. The input parameters of the Buffer Object

Structure and Tasks of the Machine Object

The Machine object executes operations of specified time duration, determined by different function types, on Entity objects.

During its operation, the Machine object can delay the advance of Entity objects arriving at its input by an operation time. The operation time can be defined with the help of different distribution functions. When the operation time is over, the Machine object tries to hand over the Entity object


Figure 11. The input and output parameters of the Machine object

Figure 12. The input parameters of the Machine object

to the model object connected to its output channel. If it fails, the status becomes Blocked, and it will attempt to "get rid of the processed Entity" at each computation cycle. While the Machine object is in blocked status, it does not receive any other Entity object through its input channel.

More than one object input can be connected to the Machine object output. In such cases, the Machine object attempts to pass over the Entity object in the defined order of the connected objects.

An arbitrary output strategy can be realized with the help of the indexes of the objects connected to the Machine object's output.

It is compulsory to activate the input channel of the Machine object, as it is compulsory to define its parameter record.

The input and output parameters of the object are shown in Figure 11.

There is a Distribution parameter among the input parameters of the Machine object (Figure 12). It is a data element of combo box type, out of which values can be chosen as follows:

• CONSTANT (Value)
• EXPONENTIAL (Mean)
• NORMAL (Mean, Deviation)


• TRIANGLE (Minimum, Usual, Maximum)
• UNIFORM (Minimum, Maximum)

It can be seen from the list above that the distributions use specific numerical values as input parameters; this way the input parameters can be adjusted easily and precisely to the given distribution types.

The Enabled Status of Machine combo box allows a choice out of three possible object statuses (Enabled, Failed, Paused). The meaning of the statuses is the same as described under Source objects.

Figure 13. Input parameters of the Sink Object

The Sink object (Figure 13) has only one input parameter value, which gives the status of the object for the simulation model.

The status of DES objects in a simulation model can be dynamically changed (i.e. during the simulation run, too). This way, in case of a predefined or examined operational case, object statuses can be defined either manually or with the help of a program (see Figures 14 and 15).

Figure 16 demonstrates the connections of the model elements with the help of the so-called Diagram Panel, which connects the input and output data channels by means of graphical programming. This program segment demonstrates how specific DES objects receive the input parameters, and how the connection of specific objects can be realized.

Figure 17 shows the picture of such a Front Panel, which can be easily realized using the output parameter data of specific DES objects. The output data of some objects appear in a single data structure, in a record (also called a "cluster" in LabVIEW), as Figures 18-21 show.

Figure 14 shows the results of an M/M/1 model as displayed after executing it for a specified time. It can be seen on the left side of the figure that during the simulation the Source object generated 972 entities. In the square symbolizing the Sink object, there is a numeric value displayed under the sign Produced. This is where information is delivered about the management and exit of 967 Entities from the simulation process. The difference between the number of generated and terminated entities are those entities which have stayed in some of the objects even after the simulation has ended. Information on this is shown on the numeric display seen under the display of the Sink object, below the Work in Process Objects title.

There is also other information displayed on the Front Panel: the simulated time, Real Time (sec), of the simulation model, as well as the simulation execution time on the given computer, furthermore

Figure 14. System Model of M/M/1

Implementation of a DES Environment

Figure 15. Block diagram of M/M/1 System

the quotient of both values, i.e., the speed factor of the simulation. This is shown in the Simulation Time/Real Time display.
The values of the Status of xxx combo box variables shown in Figure 17 can be set, preceding the simulation run, to the values needed by the operational conditions. In the case of this simulation model, the duration of the simulation model run can be manually adjusted, too. Before the simulation is started, the selected values and their parameters can also be initialized using the combo box option of Distribution and with the help of the Mean Time xxx knob. The adjustment shown in the case of both the Source and the Machine object is an EXPONENTIAL(9) distribution, which means an exponential distribution with an average value of 9.0.
The following figure shows the records of the output parameters of the objects in the M/M/1 example after the end of the simulation run with predefined duration.
Figure 18 shows the output parameters of the Source object. This object initializes and generates, using random number generators, those Entity objects which pass through the different objects of the simulation model and wait in them for events.
The display TimeNow shows that the simulation is at time unit 9322 (which was the time duration of the simulation).
The variable Time of Next Launching contains the current time (given in simulation time units) at which the Source object has to generate (launch) the next Entity (this will not happen, since the simulation is already finished). The TimeStep

Figure 16. (M/M/1) Model Diagram Panel of Example 1


Figure 17. (M/M/1) Model Front Panel of Example 1

value shows, according to the selected distribution function, how big the last time step was. This also means that the last Entity was generated at that time (Time of Next Launching – TimeStep).
The Index of ObjectList displays the index of the Source object in the ObjectList object, while the display Content Object Index right beside it shows that there is no generated object in the Source object at this time (none that it could not yet pass to the input of the connected object). The switch Entity at Output would be set to the ON state if there were any Entity in the Source object. At this moment there is none; this is why the switch is in the OFF state. The Number of Created Objects display shows how many Entities have been created by the Source object until this point of time.
The vector named Status Time Parameters can be found on the right-hand side of Figure 18. The right-hand side labels show the time duration value of the given vector element in its different states. It is in the Idle state all through the operation of the Source object.
There is one exception: when the Source object is not able to pass over (within the same simulation step) the Entity generated at a previously specified time to the object connected to the data output (in our case, the Buffer object). This can only happen if the object connected to the output of the Source object is not able to receive the new Entity offered to it. In this example this can happen only when


Figure 18. Output parameters of the Source Object

the Buffer object is full, i.e., it contains Entities equal to its capacity. Then the Source object gets into Blocked status, and the In Blocked State element of the Status Time Parameters vector shows, in time units, how long the Source object spent in the Blocked status.
The desirable state from the point of view of a discrete simulation model is to minimize or abolish the blocked state(s).
The Source object can never fall into the In Busy State during the simulation model run, since the basic principle of its operation is that it does not execute any operation on Entities. The time duration of the Source object in disabled state is shown by the displayed time In Failed State, while the time duration in stand-by status is shown by the In Paused State vector element.
The Status of Source combo box defines the state of the Source object at a given simulation time (TimeNow). The possible states of the object are as follows: Idle, Busy, Blocked, Failed and Paused. We have already described the meaning of each of these states.
The Command in Simulation combo box defines the execution state of the simulation program. The possible states are as follows:

• Initiation: This is the program state in which the DES objects of the model automatically receive their individual ID Numbers. The ID number will be used as the identification index in the ObjectList.
• Clear Inputs: This is the program state in which the inputs and outputs of each DES object are filled up with zero values. In this state, the input and output of the object do not have an index marked with a positive integer number.
• Calculate: This is the program state in which the simulation program runs with the help of the connected objects bearing identification indexes. The simulation system examines the state of each object at every event time and, if needed, executes a state change. There are a number of such state changes defined by procedures. The similarity of these different procedures


is that the change in a given object state (for example, receiving a new Entity at the input channel) affects the object(s) connected to it, and thus the state of the latter has to be changed, too. In order to manage the Entity traffic comfortably, each Entity, like all other DES objects, has its own state variable values. Entities are in the Idle state when they are waiting at the output data channel of one of the DES objects to be "taken" by one of the objects connected to the output. Entities are in the Busy state when one of the objects has already taken them; their Index of Container Object will show the DES object they are in.
The field Full Name displays the DES object's full name. This includes the name of the Container type object to which the Source object belongs, as well as the object's own name. The variable follows the method usually used with family names, i.e., each DES object has a "family name", called Container Full Name, and in addition to this its own name. Figure 18 demonstrates this by showing that such a Container object has been connected to the ContainerName input of the Source object named Source1. The Full Name of this Container object is Main # Model, and its own Full Name is Main # Model # Source1.
Figure 19 shows the output parameters of the Buffer object. The object is able to store temporarily those Entity objects which it receives at its input. At the time of their arrival, the Buffer object tries to send the arrived Entity objects through its output channel. If it fails, it stores them.
The display TimeNow also shows that the time of the simulation is 9322 simulated time units (the end of the simulation). The display Content of Buffer shows how many Entities are in the Buffer object at the given moment. The simulation presents, in the Index of Content Objects string variable, all the indexes of those Entities in the ObjectList variable which are in the Buffer object at this point of time.
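The "family name" scheme described above is straightforward to express in conventional code. The snippet below is only an illustrative sketch of the naming rule, not part of the LabVIEW DES library; the separator string and the helper name are our assumptions, chosen to reproduce the Main # Model # Source1 example from the text.

```python
SEPARATOR = " # "  # assumed separator, matching names such as "Main # Model # Source1"

def full_name(container_full_name: str, own_name: str) -> str:
    """Compose a DES object's Full Name from its container's Full Name
    (the "family name", Container Full Name) and the object's own name."""
    return container_full_name + SEPARATOR + own_name

model_name = full_name("Main", "Model")         # the Container: "Main # Model"
source_name = full_name(model_name, "Source1")  # the Source:    "Main # Model # Source1"
```

Composing names this way means a single string search is enough to find every object belonging to a given container, which is how name-based lookups in the ObjectList can stay simple.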

Figure 19. Output parameters of the Buffer Object


The Status of Buffer and the Command in Simulation combo boxes contain (display) identical selection values, as already described for the Source object. Their tasks are also the same as described above. The Status of Buffer display shows the actual state of the Buffer object (Idle, Busy, Blocked, Failed, Paused). The Command in Simulation display combo box shows the current operation of the running simulation program (Initiation, Clear Inputs, Calculate).
The vector elements of Status Time Parameters show the duration of each of the states of the Buffer object (In Idle State, In Busy State, In Blocked State, In Failed State, In Paused State). While in operation, the Buffer object can be in only four states: Idle, Blocked, Failed and Paused. State Idle means, in the case of the Buffer object, that it does not contain any Entity object. The Buffer object is in state Blocked if it cannot pass over the Entity objects it contains to the DES object connected to its output. The state Failed means that the Buffer object is disabled: it cannot receive Entities at its input and cannot pass over the Entities it contains at its output. State Paused means that every unit of the Buffer object is ready for operation; it is a security standby of the simulation model. Its function is to substitute a failed Buffer object that operates in parallel (connected to the same input and output).
The output parameters of the Machine object are demonstrated in Figure 20. The object does

Figure 20. Output parameters of the Machine Object


not allow the Entity object which it has read from its input channel to pass on for a given time (TimeStep).
The field Full Name displays the full name of the DES object. The field TimeNow displays that the time corresponds to time unit 9322 of the simulation (the ending time of the simulation). The time (measured in simulation time units) in the variable Time of Next Launching shows when the Machine object has to release the Entity it stores. The value TimeStep shows the size of the last time step according to the selected distribution function. This also means that the Machine object released the last Entity at the time (Time of Next Launching – TimeStep). The Index of ObjectList displays the object index of the Machine object in the ObjectList object. Right beside it, the field Content Object Index displays the particular Entity object which is in the Machine object at this point of time.
The Entity at Input switch is in the on (upper) position because an Entity is waiting in the input data channel to be read by the Machine object.
The Entity at Output switch would be in the on (upper) position if an Entity waited in an output channel for another object linked to the Machine object to take it over.
The Utilization (%) instrument indicates the magnitude of the (useful) utilization of the Machine object. This numerical value is calculated as the time spent in the Busy state divided by the full simulation time until the moment of calculation.
On the right-hand side of Figure 20 is the vector with the name Status Time Parameters, the right-hand side of which indicates the time the given vector element spent in each state.
The Machine object is

• In Idle State if no Entity object is found in it;
• In Busy State if an Entity object is found in it;
• In Blocked State if it has finished the operation (the time duration of which is specified by the TimeStep parameter) but is unable to pass the Entity object on to the DES object linked to its output, because that DES object is unable to receive the Entity object;
• In Failed State, giving the time spent by the Machine object in disabled state;
• In Paused State, giving the time spent in readiness state.

The Histogram State and the Pie Chart State give the same information on the vector as the Status Time Parameters, only in a different graphical form.
The Status of Machine combo box indicates the state of the Machine object at the given simulation moment (TimeNow). The possible states of the object are Idle, Busy, Blocked, Failed and Paused, as already explained. The Command in Simulation combo box indicator shows the actual operation performed by the simulation program (Initiation, Clear Inputs and Calculate).
The output parameters of the Sink object are presented in Figure 21. This object receives those Entity objects with which all necessary tasks have been completed, and exits the terminated Entities from the simulation system.
In the indicator called Full Name, the full name of the DES object is given. The TimeNow indicator shows at which simulation time (moment) the execution is. The Last Content Object Index shows the index of the Entity object which was the last in the Sink object. The Number of Sinked Objects shows the number of Entity objects that have exited until then.
The Entity at Input switch is in the off (lower) position because there is no Entity waiting in the input data channel to be read by the Sink object.
The Status of the Sink combo box indicates the state of the Sink object at the given moment. The possible states of the object are Idle, Busy,


Figure 21. Output parameters of the Sink Object

Blocked, Failed and Paused, as already explained. The Command in Simulation combo box indicates the actual operation carried out by the simulation program (Initiation, Clear Inputs and Calculate).
The input parameters of the simulation program are to be set, and the execution of the program completed until the given time, with the help of the value adjustment and selector elements shown in Figure 17. After the completion of the simulation program, the accumulated statistical values, which were gathered during the simulation run (the execution of the simulation program), are delivered. Obviously, during the execution of the program one can follow the change of the simulation results and the actual values (maximum, minimum) of special interest. For this, the LabVIEW program provides numerous indicator objects, with the help of which any type of display is easily accomplished.

The Structure and Task of the Entity Object (Moving Object)

This object can be used as a freely usable task-performing object. The structure of the DES object allows it to be either a model or a working object. The type of the DES object determines what type of task is performed by the given object.
In the LabVIEW DES system, only Source type objects generate Entity objects. In regard to programming technique, the programmer is able to create an Entity object independently, but the handling and the organization of its operation entail so many interlinked tasks that it is not recommended.
Entity objects can also be material goods with defined characteristics (e.g., materials in production or logistics processes) or persons with different qualifications, whose turnover or servicing has to be managed, scheduled, etc. within the given conditions.
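The M/M/1 example walked through above (a Source with EXPONENTIAL(9) inter-arrival times feeding a Buffer, a Machine with EXPONENTIAL(9) service times, and a Sink that terminates entities) can be mimicked with a few lines of conventional, text-based event-driven code. The sketch below is only an illustration of the same logic, not part of the LabVIEW DES library; the function name and return layout are ours. It reports the counts the Front Panel displays: generated entities, sinked entities, work in process, and Machine utilization (Busy time divided by total time).

```python
import heapq
import random

def simulate_mm1(mean_interarrival, mean_service, horizon, seed=42):
    """Event-driven M/M/1: Source -> Buffer -> Machine -> Sink.
    Returns (generated, sinked, work_in_process, utilization)."""
    rng = random.Random(seed)
    seq = 0                       # tie-breaker for the event heap
    events = [(rng.expovariate(1.0 / mean_interarrival), seq, "arrival")]
    queue = 0                     # entities waiting in the Buffer
    machine_busy = False
    busy_time = 0.0               # accumulated Machine Busy time
    last_t = 0.0
    generated = sinked = 0
    while events:
        t, _, kind = heapq.heappop(events)
        clipped = min(t, horizon)
        if machine_busy:          # account for the Busy interval just elapsed
            busy_time += clipped - last_t
        last_t = clipped
        if t >= horizon:          # simulation horizon reached
            break
        if kind == "arrival":     # Source launches a new Entity, schedules the next one
            generated += 1
            queue += 1
            seq += 1
            heapq.heappush(events,
                           (t + rng.expovariate(1.0 / mean_interarrival), seq, "arrival"))
        else:                     # Machine finished: the Entity exits via the Sink
            sinked += 1
            machine_busy = False
        if queue and not machine_busy:   # Machine pulls the next Entity from the Buffer
            queue -= 1
            machine_busy = True
            seq += 1
            heapq.heappush(events,
                           (t + rng.expovariate(1.0 / mean_service), seq, "departure"))
    wip = generated - sinked      # entities still inside the Buffer or the Machine
    return generated, sinked, wip, busy_time / horizon
```

With mean times of 9.0 and a horizon of 9322 time units, a run of this sketch produces on the order of a thousand generated entities, comparable to the 972 generated and 967 sinked entities reported on the Front Panel above (the exact numbers depend on the random streams used).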


Several other examples can be found for the appearance and role of the Entity objects in models. On the whole, this object is a simulation object that enters the simulation process at a given physical point, is obliged to wait during its progress, and finally exits the simulation process at a given physical point.
The progress of the Entity through the simulation system results in numerous relations (statistical information) with those objects that it met during its progress. The Entity objects progressing through the simulation system also gather valuable unique statistical data about the system through which they pass.

Example 2: Joining of Several Output Points (Operation Join)

Example 2 presents objects which are able to select, based on an algorithm, one Entity object out of a series of Entity objects generated by multiple sources. The characteristics and structure of the Join object will be presented next.

The Structure and Task of a Join Object

A Join object has several input channels and one output channel; moreover, it can select, with one of its input parameters (Join Input Channel Index), the input channel through which it receives the Entity object.
The Join object possesses more than one input channel (a vector of input channels). The Join object tries to forward the selected Entity object that arrived at its input channel immediately to the object linked to the output channel. If that is unsuccessful, it temporarily stores the Entity. The entities leave the Join object based on the First In First Out strategy.
The output channel of the Join object can be linked not only to one, but to several object input channels. In this case, the Join object tries to forward the Entity object in its output channel in the order in which the linked objects have been defined. Any type of output strategy can be accomplished with the help of the indexes of the objects linked to the output channel of the Join object.
It is mandatory to link up the input channel of the Join object and to define the record of the input parameter.
Input and output parameters of the object are shown in Figure 22.
Example 2 presents in Figure 23 the operation of the Join object. The Entity series emitted by two identically structured lines, each consisting of Source, Buffer and Machine objects, are put to the input channels of the object Join1. The object (here) chooses with a 50-50% probability between the two possible input channels and takes the Entity object found at that channel to the operational object called Machine3.

Example 3: Selection between Several Input Channels Based on Given Rules (Select Operation)

Example 3 further expands the list of helpful objects and introduces an object which realizes an algorithm-based distribution of Entity objects among multiple outputs. The characteristics and

Figure 22. The input and output parameters of the Join object


Figure 23. Example 2 of the LabVIEW DES Diagram Panel


structure of the Select object will be presented next.

The Structure and Task of the Selector Object

The Selector object has more than one output channel and one input channel; moreover, it can select, with one of its input parameters (Selector Channel Index), the output channel to which it will send the Entity object.
The Selector object has more than one output channel (a vector of output channels). The Selector object tries to forward the selected Entity object that arrived at its input channel immediately to the object linked to the output channel. If that is unsuccessful, it temporarily stores the Entity. The stored entities leave the Selector object based on the First In First Out strategy.
It is obligatory to link one model object to each of the output channels of the Selector object. Any type of output strategy can be accomplished with the help of the indexes of the objects linked to the output channels of the Selector object.
It is mandatory to link up the input channel of the Selector object and to define the record of the input parameter.
Input and output parameters of the object are shown in Figure 24.
Example 3 shows in Figure 25 the operation of the Selector object. The Entities output by the Source object situated at the input channel (left side of the figure) are forwarded by the Selector object to one of its two output channels, defined by an index which can be given as an input parameter.
There are two identically structured lines, consisting of Buffer, Machine and Sink objects, linked to the output channels of the Selector object, in which the processing of the Entity series continues. The Selector object (here) chooses with a 50-50% probability between the two possible output channels.

Example 4: Placing More than One Entity in Another Entity (Pack Process) and Unpacking More than One Entity from Another Entity (UnPack Process)

Example 4 presents two new objects, which are able to pack Entity objects into other Entity objects using the Pack object, and later unpack them using the UnPack object at a certain point of time during the simulation. The characteristics and structure of the Pack and UnPack objects will be presented next.

The Structure and Task of the Pack Object

The Pack object has two input channels and one output channel. Through the first input channel (Channel_0) the objects to be packed arrive. At the same time, through the second input channel (Channel_1) the package (box) object also arrives. Packaging is done in such a way that first the package (box) object has to arrive,

Figure 24. Input and output parameters of the Selector object


Figure 25. Example 3 of the LabVIEW DES Diagram Panel


and then the object waits until the required amount of Entities to be packaged arrives. The object places the Entities to be packaged into the package (box) object and then sends off the package (box) object thus modified.
The Pack object is a specially formed Join object, which has two input channels (a vector of input channels). In the object, the task of packaging is done by a sequential logic program, which manages the input and output channels of the Pack object based on the required operations. This object required the creation of switches authorizing the input and output channels, and therefore in this version of the LabVIEW DES program the same solution was also applied to all other model objects.
For the task of channel defining and handling of the Pack object, the following input parameter is available: Packed Quantity Request, a positive integer defining the number of packaged Entities.
More than one object input channel can be linked to the output channel of the Pack object. In such a case, the Pack object tries to forward the packaged Entity objects that are in its output channel in the order of the definition of the linked objects. Any type of output strategy can be accomplished with the help of the indexes of the objects linked to the output channel of the Pack object.
It is mandatory to link up all the input channels of the Pack object and to define the record of the input parameter.
Input and output parameters of the object are shown in Figure 26.

Structure and Task of the UnPack Object

The UnPack object has one input channel and two output channels. Through the first output channel (Channel_0) the unpackaged objects get out, while through the second output channel (Channel_1) the package (box) object leaves. During unpacking, first a package (box) object has to arrive, and then the object unpacks the Entities in the package object and temporarily stores them. It continually removes the unpacked entities through its first output channel (Channel_0) and, when all of them have been removed, it disposes of the package (box) object through its second output channel (Channel_1).
The UnPack object is a specially formed Selector object with two output channels (a vector of output channels). It tries to forward the arrived packaged entities to the object linked to the selected output channel. If this is unsuccessful, it will temporarily store the unpacked Entities and the package Entity.
There are no input parameters available for the tasks of defining and handling the output channels of the UnPack object.
Entities that are stored leave the UnPack object based on the First In First Out strategy. To each of the output channels of the UnPack object one model object can be linked. Any type of output strategy can be accomplished with the help of the indexes of the objects linked to the output channels of the UnPack object.
It is mandatory to link up all the output channels of the UnPack object and to define the record of the output parameter.
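Under the rules just stated (a box must arrive first; packing completes once Packed Quantity Request entities have arrived; unpacking releases the items first and the emptied box last), the Pack/UnPack pair can be sketched in a few lines. This is an illustrative reconstruction, not the LabVIEW implementation: the class, method names, and dictionary layout are our assumptions.

```python
from collections import deque

class Pack:
    """Sketch of the Pack behavior: wait for a box on one input channel and
    for N items on the other, place the items into the box, then emit the box."""
    def __init__(self, packed_quantity_request):
        self.n = packed_quantity_request  # the Packed Quantity Request parameter
        self.items = deque()              # items waiting to be packed (FIFO)
        self.box = None                   # the package (box) object, once arrived

    def receive_item(self, item):
        self.items.append(item)
        return self._try_pack()

    def receive_box(self, box):
        self.box = {"box": box, "content": []}
        return self._try_pack()

    def _try_pack(self):
        # Packing completes only when a box is present AND enough items arrived.
        if self.box is not None and len(self.items) >= self.n:
            packed, self.box = self.box, None
            packed["content"] = [self.items.popleft() for _ in range(self.n)]
            return packed                 # the modified box is sent to the output
        return None

def unpack(packed):
    """Sketch of the UnPack behavior: the items leave first (Channel_0, FIFO),
    then the emptied box leaves (Channel_1)."""
    return packed["content"], packed["box"]
```

Note how the box acts as a gate: items received before the box simply accumulate, mirroring the statement above that packaging cannot start until a package object arrives.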

Figure 26. Input and output parameters of the Pack object


Input and output parameters of the object are shown in Figure 27.
Example 4 shows in Figure 28 the operation of the Pack and UnPack objects. The Entity series input by two identically structured lines, each consisting of Source, Buffer and Machine objects, are put to the input channels of the object called Pack1. The Entity objects to be packed are directed to the first input channel, while the package objects to the second input channel. Packaging cannot start until a package object arrives. The input parameter of the Pack object is the number of objects to be packaged into one package.
The outputting and processing elements of the package and of the Entity objects to be packaged are shown on the left-hand side of Figure 28. A basic programming capability of LabVIEW allows any number of LabVIEW icons to be merged into one subroutine. Thus, the subroutine shown in the lower part of Figure 28 can fulfill the same task as the Entity handling system composed of various elements in the upper part.
In this simple example, the packaged objects are immediately unpacked with the help of an UnPack object; then the packaged objects and the packages in which they were packaged are processed with separate Sink objects.

COMMON CHARACTERISTICS OF THE OBJECTS OF THE SIMULATION SYSTEM (GLOBAL VARIABLES)

The simulation system consists of several system variables, which must be accessible to all objects and methods. The simulation objects obtain information from these variables about the state and state changes of all objects in the system, including their own. The globally usable list elements of the ObjectList and TaskList objects and their characteristics will be presented next.

Global Variables of ObjectList

During simulation, each element of the global variable ObjectList, shown in Figure 5, can be reached by any of the procedures. The simulation model is in fact built in the ObjectList global variable, in the DES Objects vector, in such a way that the type and input parameters of each DES object in the simulation model are given (as was done in the case of the Buffer object shown in Figure 8). With the help of the graphical programming capability of the LabVIEW program, the DES object link to be examined is created. By using the indicator records of LabVIEW DES, or any other type of indicator provided in the LabVIEW system, the presentation of the calculated quantities is planned and realized. Then the execution of the simulation program follows, where the various DES objects read out information on the actual state of any other DES object and, if specified conditions are fulfilled, change it. Consequently, the ObjectList global variable is a status storage system, where the states and state changes are generated by the running simulation program over time; it represents the system behavior.
A few variables performing administrative tasks appear among the attributes of the ObjectList variable (Figure 29), which provide values to each DES Object for its functioning. The ObjectList variables are the following:

Figure 27. Input and output parameters of the UnPack object


Figure 28. Example 4 Panel of LabVIEW DES Diagram


Figure 29. Values of ObjectList Global Variable after the running of the M/M/1 model

• Index of Last Model Object provides, in the DES Objects vector, the index value for which there is no operating (existing) Entity object.
• The Maximum Index ObjectList provides the maximum index number of elements that the DES Objects vector can still store. (The DES Objects vector can dynamically change its size.) This is necessary because during the simulation significant time can be saved if the DES Objects vector is not replaced.
• Input # for Inputless Objects: each of the DES Objects has an identical structure. To differentiate the Source object type, which has no input, this constant is used.
• Separator of Name: the character used for separating the individual name components.
• Identity Number is an integer which increases its value by one each time a new DES object is created.
• Possible Number of Objects in Model indicates how many DES Objects are used for the execution of the program at the given moment of the simulation model run. It is possible that these DES Objects fill the DES Objects vector with gaps.
• ObjectList Enabled is a logical value; if it is FALSE, it does not allow changing the values of the DES Objects vector.
• # of Rows of SYSTEM Attributes: the maximum row index of the storage elements of the individual system attributes belonging to the DES Object and used by the simulation system.
• # of Columns of SYSTEM Attributes: the maximum column index of the storage elements of the individual system attributes belonging to the DES Object and used by the simulation system.
• # of Rows of USER Attributes: the maximum row index of the storage elements of the individual parameters belonging to the DES Objects in the simulation system, which can be programmed by the user.
• # of Columns of USER Attributes: the maximum column index of the storage elements of the individual parameters belonging to the DES Objects in the simulation system, which can be programmed by the user.
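The administrative variables listed above can be pictured as a small registry. The sketch below is a drastically simplified, hypothetical model of the ObjectList global variable, written only to make the bookkeeping concrete: the field names follow the text, but the record layout and method names are our own invention, not the LabVIEW data structures.

```python
class ObjectList:
    """Minimal sketch of the ObjectList global variable: a growable vector of
    DES object records plus the administrative counters described above."""
    def __init__(self, name_separator=" # "):
        self.des_objects = []       # the DES Objects vector (grows dynamically)
        self.identity_number = 0    # Identity Number: +1 for every created object
        self.separator_of_name = name_separator   # Separator of Name
        self.enabled = True         # ObjectList Enabled

    def register(self, object_type, own_name, container_full_name=""):
        """Create a record for a new DES object; return its index in the vector.
        The Identity Number serves as the object's individual ID."""
        if not self.enabled:
            raise RuntimeError("ObjectList disabled: DES Objects vector is read-only")
        self.identity_number += 1
        full_name = (container_full_name + self.separator_of_name + own_name
                     if container_full_name else own_name)
        self.des_objects.append({"id": self.identity_number,
                                 "type": object_type,
                                 "full_name": full_name})
        return len(self.des_objects) - 1
```

A usage example, mirroring the naming scheme shown earlier: `ObjectList().register("SOURCE", "Source1", "Main # Model")` returns index 0 and stores the Full Name "Main # Model # Source1".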


Because of programming technique reasons, the simulation system stores the attributes of the DES Objects vector not in one single object, but in globally reachable vector elements, one per attribute. The vector attributes of the DES Objects, the maximum index number of which is given in the Maximum Index of ObjectList global variable, are the following:

• ShiftMemory (All Inputs Are Zero).vi is a global variable serving in the simulation system for the mapping of the linkages between the individual DES Objects; it gives a TRUE logical value if the value of all inputs is zero.
• ShiftMemory (All Inputs Arrived).vi is a variable serving in the simulation system for the mapping of the linkages between the individual DES Objects. It contains a logical value indicating that the related DES Object information has arrived at all the input channels of the given DES Object.
• ShiftMemory (Borning Time of Object).vi is a floating point value which indicates at what moment in the simulation the given DES Object was created and from which moment on it participates in the operation of the model. For the main components of the simulation model, this parameter value is 0.0. This parameter is used first of all to determine the age of the Entity type DES Objects; from it, the time duration of the pass-through and residence can be determined.
• ShiftMemory (Connected Input Object Indexes).vi is a string list that indicates which other objects are linked to the input channel of the given DES Object. The list stores the indexes of the DES Objects linked to the input channel, where the position on the list indicates the sequential number (index) of the input.
• ShiftMemory (Connected Output Object Indexes).vi is a string list that indicates which other objects are linked to the output channel of the given DES Object. The list stores the indexes of the DES Objects linked to the output channel, where the position on the list indicates the sequential number (index) of the output.
• ShiftMemory (Entry Time into Container Object).vi is a floating point variable indicating when the given object entered a DES Object which stores it or delays its progress during the simulation.
• ShiftMemory (First Calculation).vi is a logical variable which indicates that during the simulation no operation has yet been performed with the given object. Its value is set to TRUE when the first operation has been accomplished.
• ShiftMemory (Full name).vi is a string variable that contains the full name of the DES Object. This is used first of all during searches based on object names.
• ShiftMemory (Index of Container Object).vi is an integer type variable that indicates the index of the DES Object in which the given DES Object (usually an Entity) is presently located.
• ShiftMemory (Indexes of Content Objects).vi is a string variable that indicates, in an object list, the indexes of those objects which can be found in the given object (belong to it) at that moment.
• ShiftMemory (Input).vi is a vector type variable that indicates which Entity type DES Objects (identified using indexes) are to be found in the given DES Object's input channel.
• ShiftMemory (Output).vi is a vector type variable that indicates which Entity type DES Objects (identified using indexes) are to be found in the given DES Object's output channel.

Implementation of a DES Environment

•	Shiftmemory (Object Type).vi is a combo box type variable, which indicates the type of the given DES Object. The following types are possible: EMPTY, CONTAINER, ENTITY, SOURCE, BUFFER, MACHINE, SINK, JOIN, SELECTOR, PACK, and UNPACK.
•	ShiftMemory (Status of Input Channel).vi is a logic variable, which indicates whether traffic is allowed in the input channels of the DES Object. It is logically TRUE if the channel is closed and logically FALSE if the channel is open.
•	ShiftMemory (Status of Output Channel).vi is a logic variable, which indicates whether traffic is allowed in the output channels of the DES Object. It is logically TRUE if the channel is closed and logically FALSE if the channel is open.
•	ShiftMemory (Status Object).vi is a combo box type variable that indicates the state in which the DES Object is. Possible states include: IDLE, BUSY, BLOCKED, FAILED, and PAUSED.
•	ShiftMemory (System Attributes).vi is a structure containing vector type integer variable values storing system attributes belonging to given DES Objects. The user of the program can freely access the system attributes and can expand them.
•	ShiftMemory (Terminate Time).vi is a floating point variable indicating when the given DES Object completes the operation actually performed.
•	ShiftMemory (Time in Statuses).vi is a floating point vector type variable indicating how much time the given DES Object has spent until that simulation moment in various states (Idle, Busy, Blocked, Failed, Paused).
•	ShiftMemory (Time of Tasks).vi is a record type vector containing the values of the Time of task, TaskNumber, and Index of Sender Object that were created during the simulation run.
•	ShiftMemory (User Attributes).vi is a structure containing matrix type string variables storing user characteristics of given DES Objects. With the help of the user characteristics, the user of the program may link (with the help of a program) values that form the base of arbitrary statistical calculations.
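Taken together, the ShiftMemory globals describe one record per DES Object. The following Python sketch shows such a record; the field names mirror the VIs above, but the concrete types and the single-structure packaging are assumptions, since the original stores these values in separate LabVIEW global VIs:

```python
from dataclasses import dataclass, field

OBJECT_TYPES = ("EMPTY", "CONTAINER", "ENTITY", "SOURCE", "BUFFER",
                "MACHINE", "SINK", "JOIN", "SELECTOR", "PACK", "UNPACK")
STATES = ("IDLE", "BUSY", "BLOCKED", "FAILED", "PAUSED")

@dataclass
class DESObjectRecord:
    """One DES Object as seen through the ShiftMemory globals."""
    full_name: str                          # ShiftMemory (Full name).vi
    object_type: str = "EMPTY"              # ShiftMemory (Object Type).vi
    status: str = "IDLE"                    # ShiftMemory (Status Object).vi
    borning_time: float = 0.0               # ShiftMemory (Borning Time of Object).vi
    terminate_time: float = 0.0             # ShiftMemory (Terminate Time).vi
    input_closed: bool = False              # ShiftMemory (Status of Input Channel).vi
    output_closed: bool = False             # ShiftMemory (Status of Output Channel).vi
    inputs: list = field(default_factory=list)            # ShiftMemory (Input).vi
    outputs: list = field(default_factory=list)           # ShiftMemory (Output).vi
    container_index: int = -1               # ShiftMemory (Index of Container Object).vi
    content_indexes: list = field(default_factory=list)   # Indexes of Content Objects
    time_in_statuses: dict = field(
        default_factory=lambda: dict.fromkeys(STATES, 0.0))  # Time in Statuses
    system_attributes: list = field(default_factory=list)    # System Attributes
    user_attributes: dict = field(default_factory=dict)      # User Attributes

    def age(self, time_now: float) -> float:
        # The Borning Time is used "first of all" to compute an Entity's age.
        return time_now - self.borning_time
```

The ObjectList of a model is then simply a vector of such records, addressed by object index.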
Global Variables of TaskList

During simulation, all components of the TaskList global variable shown in Figure 30 can be reached from any process. During the simulation run, the given DES Objects place the Tasks needed for the operation of the DES Model in the components of the vector consisting of Task Objects records. To speed up the program execution, the program places a new event into the first free place of the Task Objects vector; the order of executing the events in time is then determined by a special program, which determines the order in a way that no data movement occurs in the Task Objects vector during this time.

The components of the TaskList global variable are the following (Figure 30):

•	Maximum Index in TaskList indicates the maximum number of events (Tasks) that can be stored in the Shiftmemory (Time of Tasks).vi vector.
•	Index of Next Free Place on the Time of Tasks is an integer type index value that indicates the index of the next free space in the ShiftMemory (Time of Tasks).vi vector.
•	TimeNow is a floating point variable that indicates the actual time (moment) of the running simulation program.
•	Delimiter of TaskList is a string type variable that indicates the string value used in the Task list as a separator character string.


Figure 30. Values of the TaskList global variable during the run of the M/M/1 model

•	Empty Value in TaskList is a floating point variable that allows filling up the time value of not yet used tasks in the ShiftMemory (Time of Tasks).vi vector. Values that do not occur are used (e.g., negative values).
•	Command in Simulation is a combo box indicator that indicates the actual operation effected during the simulation program (Initiation, Clear Inputs and Calculate).
•	TaskList Enabled? is a logic type variable. If it is FALSE, it does not allow changing the vector values of the ShiftMemory (Time of Tasks).
•	Calculation Blocked? is a logic type variable, which plays the role of allowing the uninterrupted run of the simulation program.
•	TaskList is a string type variable that ensures the chronological operation of the elements of the ShiftMemory (Time of Tasks).vi vector. The TaskList string itself contains the indices of the Tasks in execution order.

The global variables listed here and in Table 1 can be expanded with any elements, but the processing of the information of the new variables and their (re)programming in the given DES basic components have to be accomplished.

Processes of the simulation system are shown in Table 1.
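The bookkeeping described by these components, where a new Task always takes the first free place of the Time of Tasks vector and a separate index list keeps the execution order so that no record ever has to move, can be sketched in Python as follows (an illustrative reconstruction of the mechanism, not the LabVIEW code; the names are invented):

```python
# Sketch of the TaskList mechanism: tasks live in a fixed vector, new
# tasks take the first free slot, and execution order is kept as a list
# of indices so the records themselves never move.
EMPTY = -1.0  # "Empty Value in TaskList": a time value that cannot occur

class TaskList:
    def __init__(self, max_index=16):
        # Each record: (time_of_task, task_number, sender_index)
        self.tasks = [(EMPTY, 0, 0)] * max_index
        self.next_free = 0        # Index of Next Free Place
        self.order = []           # TaskList: task indices in execution order

    def insert(self, time, task_number, sender):
        i = self.next_free
        self.tasks[i] = (time, task_number, sender)
        # Keep `order` sorted by task time without moving any record.
        pos = 0
        while pos < len(self.order) and self.tasks[self.order[pos]][0] <= time:
            pos += 1
        self.order.insert(pos, i)
        # Advance to the next free slot.
        while self.next_free < len(self.tasks) and self.tasks[self.next_free][0] != EMPTY:
            self.next_free += 1
        return i

    def pop_next(self):
        i = self.order.pop(0)
        record = self.tasks[i]
        self.tasks[i] = (EMPTY, 0, 0)            # free the slot again
        self.next_free = min(self.next_free, i)  # first free place may move back
        return record
```

Popping always returns the task with the smallest time, even though the tasks were inserted into arbitrary free slots.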
CONCLUSION AND POSSIBILITIES FOR FURTHER DEVELOPING THE SIMULATION PROGRAM

The directions, which seem essential for any future simulation application development, are discussed next. These elements were not implemented, because they were not indispensable for the models used in an educational environment.


Table 1.

Object handling functions
•	Clear Content of Objects.vi: Clearing the content of the ObjectList
•	Delete an Object by Index.vi: Deletion of an object from the ObjectList based on its index
•	Delete an Object Index from Content Objects List.vi: Deletes one object index from those stored in the object
•	Delete Packed Objects.vi: Deletes all elements of the packed Entity object
•	Get User Attribute.vi: Giving the user attribute value
•	Has Object Arrived.vi: Examines if there is a DES Object in the input channel
•	Index of Object by Name.vi: Based on the full name, searches the index of the object in the ObjectList
•	Input Channel Enable.vi: Allows/disables the data flow of the given input channel
•	Input ChannelS Enable.vi: Allows/disables the data flow of the input channels
•	Insert an Object into ObjectList: Creates a new object in the ObjectList
•	Insert Index AS FIRST into Content Objects List by Priority.vi: Inserts a new index before the indexes of the objects stored in the object
•	Insert Index AS LAST into Content Objects List.vi: Inserts a new index after the indexes of the objects stored in the object
•	Last Index of Content Objects List.vi: Gives the last of those object indices which are indexes of objects stored in an object

Entity Operations
•	Entity is Idle.vi: The function examines the state of the Entity object
•	Entity System Attribute Setting.vi: Sets the value of the system variable assigned to the Entity

Task management functions
•	Clear Content of TaskList.vi: Deletes the content of the TaskList
•	Continue of Simulate.vi: Examination of the continuation of the simulation
•	Delete a Task by Index.vi: Deletion of a Task from the TaskList object based on its index
•	Insert a Task into TaskList.vi: Insertion of a Task into the TaskList object
•	Task at Index.vi: Parameters of the Task appearing at the given index of the TaskList object

Distribution functions
•	Exponential Distribution.vi: Exponential distribution function
•	Empirical Distribution.vi: Empirical distribution function
•	Normal Distribution.vi: Normal distribution function
•	Triangle Distribution.vi: Triangle distribution function
•	Uniform Distribution.vi: Uniform distribution function

Other processes
•	Choose.vi: Selects one channel out of two with a given probability
•	Next Empty Place Index in ObjectsList.vi: Determining the index of the next empty space in the ObjectList
•	Object Parameters.vi: Provides the parameters of the object that is at the given index of the ObjectList
•	Piechart.vi: Draws a pie chart


•	Real Output Channels Series (Selector Object).vi: Accomplishes the merging of the physical and logic channels
•	Set System Attribute.vi: The process of setting the values of the system variables of the given object
•	Set User Attribute.vi: The process of setting the user values of the given object
•	Status Histogram.vi: Draws a status histogram
•	Input Connection Control.vi: Determines the index of the object linked to the input channel
•	Used Objects.vi: Provides the map of the objects used in the ObjectList
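The distribution VIs listed above supply the random inter-arrival and service times that drive models such as the M/M/1 example. Exponential sampling, for instance, is commonly implemented by inverse transform; a Python sketch of that technique (the LabVIEW block diagram itself is not reproduced here):

```python
import math
import random

def exponential_sample(mean: float, rng: random.Random) -> float:
    """Inverse-transform sampling: if U ~ Uniform(0, 1), then
    -mean * ln(U) is exponentially distributed with the given mean."""
    u = rng.random()
    while u == 0.0:               # guard against log(0)
        u = rng.random()
    return -mean * math.log(u)

# A Source object would draw from this to schedule the next Entity arrival:
rng = random.Random(42)
samples = [exponential_sample(2.0, rng) for _ in range(100_000)]
print(sum(samples) / len(samples))   # close to the requested mean of 2.0
```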

When the simulation applications demand statistical calculations or new object definition during runtime, these can be achieved with the definition of new object attributes and/or with the extension of the object's functionality.

Including Further Statistical Calculations in the DES Objects

The statistical calculations built into the objects of the LabVIEW DES system can be extended without any difficulties with further calculations. To accomplish this (e.g., any statistical analysis), it is not necessary to interfere with the objects already completed; the most effective process is to "surround" an already existing basic object with the new requirements, controlling the input and output channels of the Entity objects' flows and using the information measured and stored by the Entity objects. To improve the system, graduate students have already completed new objects with which the choice of input and output channels can be accomplished with a given distribution.

Creation of New Attributes and Incorporation of their Effects into the DES Objects

The creation of all new attributes present in the DES Object (the creation of a vector corresponding to the data type of the attribute) can easily be accomplished. The user has to provide for its initialization and its tasks in the DES Object with new program code segments. Generally, the object attributes are read in the Calculate state of the simulation system and they are given new values in this state after certain calculations.

The attributes storing the position, speed, acceleration, etc., and the position of the axis of the Entity object's rotation are already built into the present version of the simulation system. There was no possibility in the present version, however, to service these new features with algorithms.

Creation of New DES Objects

If the aim is to create a new DES Object for a completely new problem, it is always worthwhile to examine what type of system theoretical links can be used to create the new object by using the existing DES Object(s). In such a case, the tasks of the new DES Object have to be programmed in the three run states (Initiation, Clear Inputs, Calculate) in a way that the Entity flow and the changes in the state of the new DES Object are continually stored, based on accomplished tasks, in the object attributes.

If the user aims to create a fully new, previously non-existent DES object, it is worthwhile to study the previously created basic objects.
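The contract described above, where every DES Object programs its tasks in the three run states, can be sketched as a small base class. This is a hypothetical Python illustration; class, method, and attribute names are invented, since the original objects are LabVIEW VIs:

```python
class BasicDESObject:
    """Sketch of the contract: every DES Object serves three run states."""

    def __init__(self, name):
        self.name = name
        self.attributes = {}
        self.input_buffer = []

    def initiation(self):
        """'Initiation' state: set the starting attribute values."""
        self.attributes["entities_seen"] = 0

    def clear_inputs(self):
        """'Clear Inputs' state: release the input channels."""
        self.input_buffer = []

    def calculate(self, time_now, arriving_entities):
        """'Calculate' state: read the attributes, process the Entity flow,
        and store the state changes back into the attributes."""
        self.input_buffer = list(arriving_entities)
        self.attributes["entities_seen"] += len(self.input_buffer)
        return self.input_buffer      # hand the Entities to the output channel


class CountingMachine(BasicDESObject):
    """A "surrounding" extension: collects an extra statistic without
    modifying the already completed basic object (cf. the statistical
    extension described above)."""

    def calculate(self, time_now, arriving_entities):
        out = super().calculate(time_now, arriving_entities)
        if out:
            self.attributes["last_arrival_time"] = time_now
        return out
```

The subclass leaves Initiation and Clear Inputs untouched and only wraps the Calculate state, which matches the recommended way of adding statistics around an existing basic object.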


REFERENCES

Brown, P. J. (1980). Writing Interactive Compilers and Interpreters. Chichester, UK: John Wiley & Sons.

Hilen, D. (2000). Taylor Enterprise Dynamics (User Manual). Utrecht, The Netherlands: F&H Simulations B.V.

Johnson, G. W. (1994). LabVIEW Graphical Programming. New York: McGraw-Hill.

Kelton, W. D., Sadowski, R. P., & Sadowski, D. A. (1998). Simulation with Arena. Boston: McGraw-Hill.

Kheir, N. A. (1988). Systems Modeling and Computer Simulation. Basel, Switzerland: Marcel Dekker Inc.

Kreutzer, W. (1986). System Simulation: Programming Styles and Languages. Reading, MA: Addison Wesley Publishers.

Law, A. M., & Kelton, W. D. (1991). Simulation Modeling and Analysis. San Francisco: McGraw-Hill.

Ligetvári, Zs. (2005). New Object's Development in DES LabVIEW (in Hungarian). Unpublished Master's thesis, Budapest University of Technology and Economics, Hungary.

Lönhárd, M. (2000). Simulation System of Discrete Events in Delphi (in Hungarian). Unpublished Master's thesis, Budapest University of Technology and Economics, Hungary.

NI LabVIEW Developer Team. (2007). LabVIEW 8.5 User Manual. Austin, TX: National Instruments.

Rohl, J. S. (1986). An Introduction to Compiler Writing. New York: Macdonald and Jane's.

KEY TERMS AND DEFINITIONS

Entity: The Entity objects represent the movement of material and information in the simulation system.
Buffer: The Buffer object ensures temporary storage of Entity objects until the connected object behind the Buffer object in the material or information flow is able to fulfill the predefined task.
Machine: The Machine object executes operations of specified time duration determined by different function types of Entity objects.
Sink: The Sink object extinguishes (swallows) the Entity objects.
Source: The Source object issues Entity objects at intervals determined by different function types.
ObjectList: The ObjectList contains the predefined discrete event model objects of the simulation model.
TaskList: The TaskList contains the list of executing events in time.


Chapter 14
Using Simulation Systems for Decision Support
Andreas Tolk
Old Dominion University, USA

ABSTRACT
This chapter describes the use of simulation systems for decision support in support of real operations, which is the most challenging application domain in the discipline of modeling and simulation. To this end, the systems must be integrated as services into the operational infrastructure. To support discovery, selection, and composition of services, they need to be annotated regarding technical, syntactic, semantic, pragmatic, dynamic, and conceptual categories. The systems themselves must be complete and validated. The data must be obtainable, preferably via common protocols shared with the operational infrastructure. Agents and automated forces must produce situation adequate behavior. If these requirements for simulation systems and their annotations are fulfilled, decision support simulation can contribute significantly to the situational awareness up to cognitive levels of the decision maker.

DOI: 10.4018/978-1-60566-774-4.ch014

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

INTRODUCTION

Modeling and simulation (M&S) systems are applied in various domains, such as

•	supporting the analysis of alternatives,
•	supporting the procurement of new systems by simulating them long before first prototypes are available,
•	supporting the testing and evaluation of new equipment by providing the necessary stimuli for the system being tested,
•	training of new personnel working with the system, and many more.

The topic of this chapter is one of the most challenging applications for simulation systems, namely the use of simulation systems for decision support in general, and particularly in direct support of operational processes. In other words, the decision maker is directly supported by M&S applications, helping with

•	"what-if" analysis for alternatives,
•	plausibility evaluation for assumptions of other party activities,
•	consistency checks of plans for future operations,
•	simulation of expected behavior based on the plan, triggering real world observations for continuous comparison (are we still on track),
•	managing uncertainty by simulating several runs faster than real time and displaying variances and connected risks,
•	trend simulation to identify potentially interesting developments in the future based on current operational developments, and additional applications that support the meaningful interpretation of current data.

While current decision support systems are focused on data mining and data presentation, which is the display of snap-shot information (historical developments are captured in most cases in the form of static trend analyses and display curves, creating a common operating picture), simulation systems display the behavior of the observed system (creating a common executable model). This model can be used by the decision maker to manipulate the observed system "on the fly" and use it not only for analysis, but also to communicate the results very effectively to and with partners, customers, and supporters of his efforts. As stated by van Dam (1999) during his lecture at Stanford: "If a picture is worth a 1000 words, a moving picture is worth a 1000 static ones, and a truly interactive, user-controlled dynamic picture is worth 1000 ones that you watch passively." That makes simulation very interesting for managers and decision makers, encouraging the use of decision support simulation systems. Another aspect is that of complex systems: non-linearity and multiple connections. In order to understand and evaluate such systems, traditional tools of operational research and mathematics have to be increasingly supported by the means of modeling and simulation. The same is true for decisions in complex environments, such as the battlefield of a military decision maker or the stock market for an international investment broker.

To this end, the simulation system must be integrated into operational systems as a decision support service. In order to be successful, not only must the technical challenges of integrating discrete and other simulation technologies into operational IT systems be solved; it is also required that the simulation system fulfills additional operational and conceptual requirements as well. Simulation systems are more than software. Simulation systems are executable models, and models are purposeful abstractions of reality. In order to understand if a simulation system can be used for decision support, the concepts and assumptions derived to represent real world objects and effects in a simplified form must be understood. The conceptualization of the model's artifacts is as important as the implementation details of the simulation. As stated in Tolk (2006): interoperability of systems requires composability of models!

The author gained most of his experience in the military sector, integrating combat M&S into Command and Control (C2) systems. The development of the Levels of Conceptual Interoperability Model (LCIM), capturing the requirement for alignment on various levels to support decision support, is a direct result of the experiences of integrating M&S services as web-services into service-oriented C2 systems (Tolk et al., 2006). It is directly related to the recommendations found in the North Atlantic Treaty Organization (NATO) Code of Best Practice for C2 Assessment (NATO, 2002), which was compiled by a group of international operational research experts in support of complex C2 analysis. It was also influenced by the recommendations of the National Research Council (2002, 2006), as using simulation for procurement decisions or for analysis and using this analysis for decision support are closely related topics.
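One of the decision support use cases listed above, managing uncertainty by running many replications faster than real time and displaying variances and connected risks, can be sketched as follows. This is a hypothetical illustration; the three-phase delay model and all parameters are invented for the example:

```python
import random
import statistics

def simulate_mission_duration(rng):
    """Hypothetical single run: three sequential phases whose durations
    are uncertain, here modeled as triangular distributions (in hours)."""
    phases = [(2.0, 3.0, 6.0), (1.0, 1.5, 4.0), (0.5, 1.0, 2.0)]  # (low, mode, high)
    return sum(rng.triangular(low, high, mode) for low, mode, high in phases)

def what_if(runs=1000, seed=1):
    """Faster-than-real-time replications: summarize variance and risk."""
    rng = random.Random(seed)
    durations = [simulate_mission_duration(rng) for _ in range(runs)]
    return {
        "mean": statistics.mean(durations),
        "stdev": statistics.stdev(durations),
        "risk_over_8h": sum(d > 8.0 for d in durations) / runs,
    }

summary = what_if()
print(summary)  # the decision maker sees spread and risk, not a single number
```

Displaying the spread and the exceedance probability, rather than one point estimate, is exactly what distinguishes this mode of decision support from a static common operating picture.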


Furthermore, the growing discipline of agent-directed simulation (ADS) is very helpful in providing new insights and methods (Oren et al., 2000). ADS consists of three distinct yet related areas that can be grouped under two categories. First, agent simulation (or simulation for agents), that is, the simulation of systems that can be modeled by agents in engineering, human and social dynamics, military applications, and so on. Second, agents for simulation, which can be grouped under two sub-categories: agent-based simulation, which focuses on the use of agents for the generation of model behavior in a simulation study; and agent-supported simulation, which deals with the use of agents as a support facility to enable computer assistance by enhancing cognitive capabilities in problem specification and solving.

The vision of using simulation systems in general, and discrete event simulation systems in particular, for decision support is that a decision maker or manager can utilize an orchestrated set of tools to support his decision using reliable simulation systems implementing agreed concepts using the best currently available data. It does not matter if the decision support system is used in the finance market, where the stock market is simulated on a continuous basis, always being adjusted and calibrated by the real stock data, or if it is used to support a traffic manager in guiding a convoy through a traffic jam during rush hour to the airport while constantly being updated by the recent traffic news. The technologies described here support the military commander in making decisions based on the best intelligence and surveillance data available from a sensor, as well as the surgeon using a detailed model of the human body in preparation of a risky surgery. While the application fields are significantly different, the underlying engineering methods are not.

The chapter will start by presenting the relevant work, focusing on the special insights from the military domains before generalizing them for other applications. The main part is built by enumerating and motivating the requirements for simulation systems when being used for decision support, as identified by the National Science Foundation and related organizations. Finally, some examples are given and current developments are highlighted.

RELEVANT WORK

The area of related and relevant work regarding decision support systems in general and the use of simulation systems for decision support in particular is huge. A book chapter can never suffice for a complete explanation. Therefore, the focus of this section is to highlight some of the most influential works leading to the formulation of the requirements for simulation systems. Additional information is contained in the section giving examples of decision support simulations in this chapter.

The need for using simulation systems in addition to traditional decision support systems is best derived from the work documented in the NATO Code of Best Practice for C2 Assessment (NATO, 2002). After having operated under more or less fixed strategic and doctrinal constraints for several decades, in which NATO and the Warsaw Pact faced each other in a perpetual lurking position, NATO suddenly faced a new operational environment for their decisions when the Warsaw Pact broke apart. While in the old order the enemy was well known, down to the equipment, strategy, and tactics, the new so-called "operations other than war" and "asymmetric operations" were characterized by uncertainty, incompleteness, and vagueness. At the same time, developments in information technology allowed the efficient distribution of computing power in the form of loosely coupled services. Consequently, the idea was to use an orchestrated set of operational tools, all implemented as services that can be loosely coupled in case of need, to support the decision maker with analysis and evaluation means in an area defined by uncertainty, incompleteness, and vagueness regarding the available information.


Figure 1. Command and Control Improvements

In order to measure improvement in this domain, the value chain of Net Centric Warfare was introduced; see among others (Alberts and Hayes, 2003):

•	Data is factual information. The value chain starts with Data Quality, describing the information within the underlying C2 systems.
•	Information is data placed into context. Information Quality tracks the completeness, correctness, currency, consistency, and precision of the data items and information statements available.
•	Knowledge is the procedural application of information. Knowledge Quality deals with procedural knowledge and information embedded in the C2 system, such as templates for adversary forces, assumptions about entities such as ranges and weapons, and doctrinal assumptions, often coded as rules.
•	Finally, Awareness Quality measures the degree of using the information and knowledge embedded within the C2 system. Awareness is explicitly placed in the cognitive domain.

C2 quality is improved by an order of magnitude when a new level of quality is reached in this value chain. Figure 1 depicts this. C2 quality is improved by these developments as follows:

•	Data quality is characterized by stand-alone developed systems exchanging data via text messages, as used in most C2 systems. Having the same data available at the distributed locations was the first goal to reach.
•	By the introduction of a common operational picture, data is put into context, which evolves the data into information. The collaborating systems using this common operational picture result in an order of magnitude of improvement of the Command and Control quality, as decision makers share this common information.


As stated before: a picture is worth a 1,000 words.

•	The next step, which is enabled by service-oriented web-based infrastructures, is the use of simulation services for decision support. Simulation systems are the prototype for procedural knowledge, which is the basis for knowledge quality. Instead of just having a picture, an executable simulation system can be used.
•	Finally, by using intelligent software agents to continually observe the battle sphere, apply simulations to analyze what is going on, monitor the execution of a plan, and do all the tasks necessary to make the decision maker aware of what is going on, C2 systems can even support situational awareness, the level in the value chain traditionally limited to pure cognitive methods.

Traditional decision support systems enable information quality, but they need the agile component of simulation in order to support knowledge quality as well. In other words, numerical insight into the behavior of complex systems, as provided by simulations, is needed in order to understand them.

In order to support the integration of decision support simulations, it is necessary to provide them as services. However, this task is not limited to the technical challenges of providing a web service or a grid service; the documentation of the service and the provided functionality is also essential to enable the discovery, selection, and composition of this service in support of an operational need. The papers by Tosic et al. (2001) and Srivastava and Koehler (2003) summarize the state of the art of service composition. Pullen et al. (2005) show the applicability for M&S services. Additionally, what is needed are annotations. Annotations give meaning to services by changing them into semantic web services. The reader is referred to Agarwal et al. (2005) and Alesso and Smith (2005) for more information on this topic.

In order to identify what information is needed to annotate operational M&S services, the Levels of Conceptual Interoperability Model (LCIM) was developed. The closest application to the topic of this book chapter is documented by Tolk et al. (2006). The LCIM exposes layers of abstraction that are often hidden: the conceptualization layer leading to the model, the implementation layer leading to the simulation, and the technical questions of the underlying network. Each layer is tightly connected with different aspects of interoperation. We are following the recommendation given by Page and colleagues (Page et al., 2004), who suggested defining composability as the realm of the model and interoperability as the realm of the software implementation of the model. Taking into account the technical challenge of integrating networks and protocols, the following three categories for annotations emerge:

•	Integratability contends with the physical/technical realms of connections between systems, which include hardware and firmware, protocols, networks, etc.
•	Interoperability contends with the software and implementation details of interoperations; this includes the exchange of data elements via interfaces, the use of middleware, mapping to common information exchange models, etc.
•	Composability contends with the alignment of issues on the modeling level. The underlying models are purposeful abstractions of reality used for the conceptualization being implemented by the resulting systems.


semantics belong to the interoperability realm. observing system. Figure 2 shows the three prem-
In the pragmatic layer, the information exchange ises that need to be supported by the annotations
elements are grouped into business objects with describing the M&S services. The system – or the
a common context. Annotations on the dynamic M&S service – is herein described by its properties
layer capture the processes invoked and the system that are grouped into propertied concepts (the basic
state changes taking place when business objects simulated entities and attributes), the processes
are exchanged between systems. Finally, the rel- (the behavior of simulated entities and how their
evant constraints and assumptions are captured attributes change), and constraints (assumptions
in the conceptual layer, which completes the constraining the values of the attributes and the
composability realm. behavior of the system).
The LCIM supports a structured way to
annotate M&S services. Dobrev et al. (2007) • The first premise is that the observing sys-
show how this model can be used to support tem has a perception of the system to be
interoperation in general applications. Zeigler understood. This means that the proper-
and Hammonds (2007) use it to compare it with ties and processes must be observable and
their ideas on using ontological means in support perceivable by the observing system. The
of interoperation. It was furthermore applied for properties used for the perception should
the Department of Defense, the Department for not significantly differ in scope and reso-
Homeland Security, The Department of Energy, lution from the properties exposed by the
and NATO. These annotations are necessary system under observation.
requirements to allow discovery, selection, and • The second premise is that the observing
composition of services. system needs to have a meta-model of the
These annotations should be interpreted as a observed system. The meta-model is a
machine understandable version of the underlying description of properties, processes, and
conceptual model of the M&S service. Robinson constraints of the expected behavior of the
(2008) defines the conceptual model as “a non- observed system. Without such a model of
software specific description of the simulation the system, understanding is not possible.
model that is to be developed, describing the • The third premise is the mapping between
objectives, inputs, outputs, content, assumptions, observations resulting in the perception
and simplifications of the model.” He furthermore and meta-models explaining the observed
points out that there is a significant need to agree properties, processes, and constraints.
on how to do develop conceptual models and
capture information formally. What is needed In other words, machine understanding is the
in support of composable services is therefore selection process of the appropriate meta-model
to capture objectives, inputs, outputs, content, to explain the observed properties, processes,
assumptions, and simplifications of the model in and constraints. This corresponds to the selection
the technical, syntactical, semantic, pragmatic, of appropriate M&S services to support a deci-
dynamic, and conceptual category. The discipline sion. The properties and propertied concepts are
of model-based data engineering (Tolk and Diallo, described by syntax, semantic, and pragmatic an-
2008) is a first step into this direction. notations, processes by dynamic annotations, and
To understand why these annotations are so im- constraints by conceptual annotations capturing
portant, it is necessary to understand how machines objectives, inputs, outputs, content, assumptions,
gain understanding. Zeigler (1986) introduced a and simplifications in addition to implementation
model for understanding a system within another details and technical specifications. No matter

Using Simulation Systems for Decision Support

Figure 2. Premises for System’s Understanding

if these annotations are used to discover, select, and orchestrate M&S functionality as operational services or if they are used by intelligent agents to communicate their use, they are necessary for every application beyond the traditional system developments that are often intentionally not reused in their requirements. This section can therefore also serve as a guideline for what is needed to annotate legacy systems that shall be integrated into a net-centric and service-oriented environment to contribute to a system of systems.

Table 1 can be used as a checklist to ensure that all information is captured or obtainable for a candidate simulation system for decision support.

All this related work sets the frame for describing M&S services to support their discovery and orchestration for integration as an orchestrated set of tools into the operational infrastructure used by the decision maker. The following section will describe the requirements for the simulation systems themselves in more detail.

Table 1. Checklist points for decision support simulation annotations

Annotation Categories: • Integratability • Interoperability • Composability
Levels of Interoperation: • Technical • Syntactic • Semantic • Pragmatic • Dynamic • Conceptual
System Characteristics: • Properties • Concepts • Processes • Constraints
Conceptual Model Characteristics: • Objectives • Inputs • Outputs • Content • Assumptions • Simplifications
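The checklist in Table 1 lends itself to a machine-readable form. The following Python sketch is purely illustrative (the class, field, and service names are invented and belong to no particular standard), but it indicates how the conceptual model characteristics and levels of interoperation could be annotated and then used to filter candidate M&S services during discovery:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceAnnotation:
    """Machine-readable annotation of an M&S service (illustrative only)."""
    name: str
    # Conceptual model characteristics after Robinson (2008)
    objectives: list = field(default_factory=list)
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    assumptions: list = field(default_factory=list)
    simplifications: list = field(default_factory=list)
    # Levels of interoperation for which documentation exists
    levels: set = field(default_factory=set)

# All six levels must be documented before composition is even considered.
REQUIRED_LEVELS = {"technical", "syntactic", "semantic",
                   "pragmatic", "dynamic", "conceptual"}

def is_candidate(service: ServiceAnnotation, needed_outputs: set) -> bool:
    """A service qualifies for selection only if every level of
    interoperation is annotated and it exposes all outputs the
    decision at hand requires."""
    return (REQUIRED_LEVELS <= service.levels
            and needed_outputs <= set(service.outputs))

# Usage: a fully annotated, hypothetical traffic simulation service.
traffic = ServiceAnnotation(
    name="TrafficFlowSim",
    objectives=["estimate fuel demand per station"],
    outputs=["demand_per_station"],
    assumptions=["closed world: drivers refuel inside the town"],
    levels=set(REQUIRED_LEVELS),
)
```

In a real service-oriented environment such annotations would live in a standardized registry format rather than in application code; the point is only that discovery and selection reduce to checks against declared metadata of exactly the categories Table 1 lists.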


REQUIREMENTS FOR SIMULATION SYSTEMS

This section will explain the necessary requirements for simulation systems when they are to be used as decision support systems. These requirements may not be sufficient for all application domains, so additional application domain expertise is needed for informed selection. While the focus in the last section was annotation, the focus here will be the content and completeness of the simulation system.

This section will start with general requirements for all simulation systems to be applied to decision support and will finish with additional requirements in the case a federation of simulation systems is to be applied, which is the more likely scenario. As the NATO Code of Best Practice (NATO, 2002) points out: it is highly unlikely that one tool or simulation system will be able to deal with all questions describing the sponsor’s problem; the use of an orchestrated set of tools should be the rule.

This section extends and generalizes the findings documented in Tolk (1999) and referenced in NRC (2002). While the principal results are still valid, the developments of recent years, in particular in the domain of agent-based models in support of behavior modeling and of computer generated forces, contributed significantly to solutions in challenging areas that need to be incorporated. The section on current developments in this chapter will focus on these developments in more detail.

Modeling of Relevant System Characteristics

Models are purposeful abstractions from reality. This means that they simplify some things, leave others out, use assumptions, etc. When using a simulation system as a decision support simulation, it is crucial that all relevant system characteristics are captured. This includes all aspects of the system: modeled entities (properties and concepts), modeled behavior and interactions (processes), and modeled constraints. The reason is trivial: if something important is not part of the model, it cannot be considered for the analysis, nor can it be part of the recommended solution.

The artifacts used for documentation of the system (and annotation) during the conceptualization phase should capture the necessary information. As defined by Robinson (2008) in his overview work on conceptual modeling, the characteristics of a conceptual model are objectives, inputs, outputs, content, assumptions, and simplifications. A practical way to accomplish this task has been captured in the contributions of Brade (2000), which will be addressed in the section on verification and validation.

Example: A simulation system shall be used to support the decision of where to install additional gas stations in a town. It models the cars used in this town, the behavior of the car drivers, and the gas stations already in use within this town. The idea is to use simulation-based optimization to find out how many new gas stations should be built and where.

In order to be able to use the simulation system, additional system characteristics may have to be captured, such as

• Under which circumstances are drivers willing to go to neighboring towns to buy gas to fill up their cars? (Assumption that drivers in the town will use gas stations in this town)
• How will the competition react? Will they build new stations? Will they close down stations? (Assumption that only the company conducting the study actively changes the gas supply infrastructure)
• Are there additional influences that are relevant, such as the overall driving behavior based on current average oil prices?


(Assumption that decision rules used by simulated entities follow a closed world assumption)

Even if this is not implemented, the simulation can still be used in support of analysis, but the expert must be very well aware of what the simulation system simulates and how. In other words, an awareness of the assumptions and constraints affecting the validity of the simulation results is necessary.

In summary, it is essential that the simulation system can support the decision to be made by ensuring that all concepts, properties, processes, and constraints identified to be relevant in the problem specification process are implemented. The NATO Code of Best Practice (NATO, 2002) gives guidance for the problem specification process. The conceptual models used for the simulation development document the respective characteristics of the simulation.

Ability to Obtain All Relevant Data

Closely related is the second premise that must be fulfilled: the relevant data needed for the simulation system initialization and execution must be obtainable. Even if a simulation system is complete in describing all concepts, properties, processes, and constraints, the model can be practically useless if the necessary data to drive these models cannot be provided. The quality of the solution is driven by the quality of the model and the quality of the data.

The NATO Code of Best Practice (NATO, 2002) gives guidance with respect to obtaining data and ensuring the necessary quality of data. Among the identified factors for good data are the reliability of sources and the accuracy of data. Additional factors are the costs to obtain data, how well the data is documented, if and how the data have been modified, etc.

Another aspect that increases in importance in the area of net centricity and service-oriented architectures is the alignment of protocols for data storage and exchange in operational systems and decision support simulation systems. The optimal case is that decision support simulation systems and the embedded operational system use the same data representation. If this is not the case, data mediation may be a possible solution to mapping the existent operationally available data to the required initialization and input data. However, it must be pointed out that data mediation requires that the mapping of data be complete, unambiguous, and precise. To this extent, Model-based Data Engineering was developed and successfully applied (Tolk and Diallo, 2008).

An aspect unique to M&S services is the need that modeled data are conceptually connected to operationally available data. As models are abstractions of reality, some data may be “academic” abstractions that theoretically are constructible, but are difficult to observe or to obtain. In particular, statistical measures of higher order, such as using a negative polynomial bivariate intensity probability distribution function to model the movement of entities as a fluid, often make perfect sense when developing the model, but may be very hard to feed with real world data.

Example: A simulation system shall be used to support a decision maker with evacuation decisions during a catastrophic event (Muhdi, 2006). Most evacuation models currently used are flow-based models. The data available in a real emergency, however, is discrete, describing exit obstacles, individuals, and other data that need to be converted into this model (and potentially mapped back in support of creating elements of a plan that needs to be shared using the operational infrastructure).

In summary, it is essential that data needed by the model can be obtained and mediated. The data will be used to initialize the simulation systems and as input data during execution.
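The mediation step described above can be illustrated with a minimal sketch (all field names are hypothetical; real mediation follows the Model-based Data Engineering process cited above). Operational records are translated into the representation the simulation expects, and the translation is rejected unless it is complete:

```python
# Illustrative data mediation between an operational data model and a
# simulation's initialization format. The mapping table and field names
# are invented for this example.

FIELD_MAP = {               # operational name -> simulation name
    "unit_id": "entity_id",
    "pos_lat": "latitude",
    "pos_lon": "longitude",
}

SIM_REQUIRED = {"entity_id", "latitude", "longitude"}

def mediate(record: dict) -> dict:
    """Translate one operational record into simulation attributes.
    Fail loudly if the result is incomplete, because silently dropped
    initialization data would invalidate the simulation results."""
    out = {FIELD_MAP[k]: v for k, v in record.items() if k in FIELD_MAP}
    missing = SIM_REQUIRED - out.keys()
    if missing:
        raise ValueError(f"mediation incomplete, missing: {sorted(missing)}")
    return out
```

The deliberate failure on missing attributes reflects the requirement stated above: a mapping that is not complete, unambiguous, and precise must not be used to feed a decision support simulation.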


Validation and Verification of Model, Simulation, and Data

Validation and verification are processes to determine the simulation’s credibility. They deal with answering questions such as “Does the simulation system satisfy its intended use? Can the simulation system be used to evaluate specific questions? How close does the simulation system come to reality?” In other words, validation and verification are the processes of determining if a simulation is correct and usable to solve a given problem.

The US Department of Defense defined validation and verification for military use in their M&S instruction (DoD, 1996). Validation is the process of determining the degree to which a model or simulation is an accurate representation of the real world from the perspective of the intended uses. Verification is the process of determining that a model or simulation implementation accurately represents the developer’s conceptual description and specifications. In other words, validation determines if the right thing is coded while verification determines if the thing is coded right. Validation determines the behavioral and representational accuracy; verification determines the accuracy of transformation processes.

There are many papers available dealing with the necessity to validate and verify models and simulations before using them for decision making. The interested reader is pointed to the overview of methods and tools provided by Balci (1998) and several specific papers by Sargent (1999, 2000, 2007). The work of Brade (2000) making practical recommendations regarding artifacts was already mentioned in a previous section.

It seems to be obvious that simulation systems designed to be used as decision support simulation systems must be validated and verified. This is true for the models, the simulations, and the data. If this is not the case, the results will not be credible and reliable and as such not applicable to support decisions.

It is not trivial but is at least possible to accomplish verification and validation for physical processes and models. However, the simulated entities and processes are not limited to such physical processes. Cognitive processes and decision models need to be modeled as well. Moya and Weisel (2008) point out the resulting challenges.

Example: To show the necessity of verification and validation, two examples of simulation failures in operational environments are given that are directly applicable to decision support simulation systems as well.

Simulation in Testing: During Operation Iraqi Freedom, Patriot missiles shot down two allied aircraft and targeted another. On March 23, 2003, the pilot and co-pilot aboard a British Tornado GR4 aircraft that was shot down by a U.S. Patriot missile died. On April 2, 2003, another Patriot missile downed a U.S. Navy F/A-18C Hornet which was flying a mission over Central Iraq. The evaluation report identified that one of the causes of these failures stemmed from using an invalid simulation to stimulate the Patriot’s fire control system during its testing.

Simulation in Engineering: Another catastrophic event in spring 2003 was the Columbia disaster. The space shuttle had been damaged by foam debris during takeoff. NASA engineers decided, based on their professional judgment, that the damage would not endanger the shuttle when returning to earth. They were wrong and the shuttle broke apart when entering the atmosphere, killing the crew and throwing the shuttle program significantly back. What is of interest for the readers of this chapter is that the simulation available to the experts predicted the disaster, but the results were not deemed reliable and credible by the experts. Obviously they were mistaken.

In summary, it is necessary to only make use of validated and verified models and data for decision support simulation systems. It is essential that the decision maker is supported with reliable and credible information.


Creating Situation Adequate Behavior

One of the most challenging premises is to fulfill the requirement for situation adequate behavior. This premise addresses the behavior of simulated entities, which is represented by the processes of the system characteristics. The premise has a very practical side and a resulting challenge. Many simulation systems used in other application domains, in particular for training and testing, also require that the simulated entities behave as they would in the real world. If this behavior is connected with human decision making, it is quite often humans in the loop making the decision.

A typical military computer assisted exercise comprises not only the training audience, but also soldiers representing the subordinates, partners, and superior commands, as well as the opposing forces. To ensure that soldiers “train as they fight,” the units are commanded by military experts. The simulation computes the movement, the attrition, the reconnaissance, and other processes that are based on physical aspects. It is more the rule than the exception that more soldiers are needed to support the simulation system than are trained in an event. The use of agents to generate the orders is mandatory for decision support; otherwise the manpower would increase to the point of no longer being practical or feasible.

Example: If training on the brigade level is conducted, approximately 800 orders have to be created in order to drive a simulation model. Taking into account that not only the orders for the brigade are needed, but also for the neighboring units and – last but not least – the orders for the enemy increases this number by a factor of four to six, resulting in 3,000 to 5,000 orders to be created for just one alternative. This is accomplished by a group of 500 to 600 soldiers. As this many personnel can never be supported by a brigade headquarters that wants to use the simulation for decision support, the majority of these orders must be generated by means of behavioral representation in modeling and simulation.

In summary, intelligent software agents representing human behavior in simulation systems must ensure that the simulated entities behave correctly. Scripted and rule driven approaches are not sufficient. The conference on behavioral representation in modeling and simulation (BRIMS) is a good source of current research and proposed solutions. Yilmaz et al. (2006) give a good overview of such use of agents in serious games as well as in simulation systems.

Additional Issues When Using Federations of Simulation Systems

The first four premises must be fulfilled by every simulation system that will be used for decision support. However, as pointed out several times in this chapter, the application of an orchestrated set of tools in order to evaluate all relevant aspects of a model is the rule. If several simulation systems need to be used to provide the required functionality, some concerns need to be addressed that are unique to federations of simulation systems.

The main challenge is to orchestrate simulations not only regarding their execution, but also to conceptually align them to ensure that the federation delivers a consistent view to the decision maker fulfilling all requirements that have been captured. The LCIM can support this challenge. A simulation federation in itself is a complex system of systems. Current simulation interoperability standards are not sufficient to support the necessary consistency. Besides several publications by the author in this domain, this view is shared by many other experts in the field, as Zeigler and Hammonds (2007) show in their survey. Yilmaz (2007) proposed the use of meta-level ontology relations to measure conceptual alignment.

The objective of these alignments is to harmonize the three elements essential for simulation result consistency, which are the concepts underlying the simulated entities (resolution and


structure), the internal decision logic used to generate the behavior of the simulated entities, and the external measure of performance used to evaluate the accomplishment. If this is not the case, the results will be counter-intuitive at best, and inconsistent and wrong at worst. As shown in Muguira and Tolk (2006), even if all federates are validated and correct, the federation may still expose structural variances, making the result unusable for decision support.

Example: The triangle of concepts, internal decision logic, and external evaluation logic must be harmonized regarding all three aspects, or structural variance can result in non-credible results.

• Concepts and decision logic: Simulation A represents a fish swarm as a cuboid; simulation B uses a statistical distribution within a bowl. If the decision logic of simulation A is used to support a decision in simulation B, the decision is based on the wrong assumptions and is likely to be wrong.
• Concepts and evaluation logic: If the measure of merit requires inputs not exposed by the federation, or if the structure and resolution are significantly different in the federated simulation systems, the evaluation is wrong.
• Decision and evaluation logic: One of the most observed reasons for strange behavior in the results of federations is that the measure of merit used for the evaluation and the measure of merit used to optimize the decisions internally are not harmonized. If the decision logic aims to maximize the amount of fish captured in each event and the evaluation logic checks if the overall regeneration of fish is ensured as well, it is likely that structural variances will occur.

In summary, it must be ensured that the simulation systems are not only coupled and technically correct (based on currently available simulation interoperability standards), but that they are aligned regarding concepts, internal decision logic, and external evaluation logic as well.

Summarizing all five premises dealt with in this chapter, the following enumeration lists the questions that need to be answered to ensure that the requirements are fulfilled:

• Are all concepts having a role in solving the problem identified and simulated in the simulation?
• Are the properties used to model the propertied concepts in the necessary resolution and the necessary structure?
• Are all identified processes (entity behavior and overarching processes) modeled?
• Are the assumptions and constraints identified for the operational challenge to be decided upon reflected appropriately by the simulation system?
• Can operational data and authoritative data sources provide all data needed for the initialization of the simulation system?
• Can operational data provide all data needed as input data during the execution of the simulation system?
• Do the operational infrastructure and the decision support simulation system share the same data model, or – if this is not the case – can model-based data engineering be applied to derive the necessary mediation functions? Are possible semantic losses resulting from the mapping acceptable?
• Is the data obtainable in the structure and resolution (and accuracy) needed, or – if this is not the case – can the data be transformed into the required format?
• Are all potential M&S services and simulation systems validated and verified?
• Are the data validated and verified?
• Is the behavior of all simulated entities situation adequate?
• In case of personnel intensive simulation systems, can the human component


be replaced with intelligent software agents to produce the required decisions (or can it be ensured that always enough persons are available to support the application)?
• Are the represented concepts (simulated entities) sufficient to produce the properties needed for the measures of merit of the decision logic and the evaluation logic?
• Are the measures of merit used for the internal decision logic aligned with the external evaluation logic?

This list builds the core of questions the developer of decision support simulation systems must be able to answer positively. Additional application specific questions are likely and need to be captured for respective development or integration projects as requirements.

EXAMPLES OF DECISION SUPPORT SIMULATION APPLICATIONS

The previous sections dealt with the necessary annotation for M&S services and the requirements for simulation systems when being used for decision support. This section gives some selected references to examples of using simulation for decision support. While these examples are neither complete nor exclusive, they do show that decision support simulation is already applied in various fields.

Kvaale (1988) describes the use of simulation systems in support of design decisions for a new generation of fast patrol boats. This application is the traditional use of simulation in support of the procurement process: alternatives are simulated and compared using a set of agreed-to measures of merit. Although this application is not driving the support using operational data directly obtained from operational systems, it is one of the first journal papers describing the use of simulation systems for decision support.

Everett (2002) describes the design of a simulation model to provide decision support for the scheduling of patients waiting for elective surgery in the public hospital system. The simulation model presented in this work can be used as an operational tool to match hospital availability with patient need. To this end, patients nominated for surgery by doctors are categorized by urgency and type of operation. The model is then used to simulate necessary procedures, available resources, resulting waiting time, and other decision parameters that are displayed for further evaluation. Therefore, the model can also be used to report upon the performance of the system and as a planning tool to compare the effectiveness of alternative policies in this multi-criteria decision health-care environment.

Truong et al. (2005) present another application domain for decision supporting use of simulation: fisheries policy and management decisions in support of optimizing a harvesting plan for the fishing industry. As in many application areas, the behavior of fish and the effects of harvesting are not fully understood, but can be captured in sufficient detail to build a simulation that reflects the known facts. This enables simulation-based optimization using the simulation to obtain quasi-objective function values of possible alternatives, in the example particular fishing schedules. This idea is applicable in similar environments with uncertain and imprecise data that exposes some trends that can be captured in simulations.

Power and Sharda (2007) summarized related ideas recently in their work on model-driven decision support systems. Following their definition, model-driven decision support systems use algebraic, decision analytic, financial, simulation, and optimization models to provide decision support. Like this chapter, they use optimization models, decision theory, and other means of operational analysis and research as an orchestrated set of tools in which simulation is embedded in an aligned way.
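The simulation-based optimization pattern behind the fisheries example can be sketched in a few lines: the simulation serves as a quasi-objective function that is evaluated, with replications to tame stochastic noise, for every candidate alternative. The harvest dynamics below are invented purely for illustration and have no relation to the actual model of Truong et al. (2005):

```python
import random

def simulate_yield(quota, seed):
    """Toy stochastic 'simulation': total catch over ten seasons under a
    fixed quota, where the stock regenerates after each harvest and
    collapses if overfished. Dynamics invented for illustration only."""
    rng = random.Random(seed)
    stock, total = 100.0, 0.0
    for _ in range(10):                    # ten simulated seasons
        catch = min(quota, stock)
        total += catch
        stock = max(0.0, (stock - catch) * rng.uniform(1.4, 1.6))
    return total

def best_quota(candidates, replications=20):
    """Evaluate each alternative by averaging replicated simulation runs
    (the quasi-objective function) and return the quota with the
    highest mean total yield."""
    def mean_yield(q):
        return sum(simulate_yield(q, s) for s in range(replications)) / replications
    return max(candidates, key=mean_yield)
```

A moderate quota that leaves the stock enough room to regenerate outperforms both no harvesting and immediate depletion, which is exactly the kind of trade-off the simulation surfaces for the decision maker.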


Decision support systems, as well as the use of simulation systems, have a relatively long history in the military domain. An example is given by Pohl et al. (1999) who present the results of a project sponsored by the Defense Advanced Research Projects Agency (DARPA). The Integrated Marine Multi-Agent Command and Control System (IMMACCS) is a multi-agent, distributed system. It is designed to provide a common tactical picture as discussed earlier in this chapter and is an early adopter of the agent-based paradigm for decision support. Between 1999 and 2004, the Office for Naval Research (ONR) sponsored a series of workshops on decision support systems in the United States. Furthermore, Wilton (2001) presented an overview of decision support simulation ideas integrated with C2 devices for the training of soldiers.

Management related military applications are regularly discussed at the annual International Command and Control Research and Technology Symposia (ICCRTS), which features a special track on decision support. The work presented here is often focused on cognitive aspects of sense-making and aims more at increasing the shared situational awareness than on a common technical framework. Many principles are not limited to the military domain but are applicable to all forms of agile organizations without fixed external structures. An example is the analysis of requirements of cognitive analysis to support C2 decision support system design by Potter et al. (2006).

The books edited by Tonfoni and Jain (2002), Phillips-Wren and Jain (2005), and Phillips-Wren et al. (2008) are valuable references for examples of using means of artificial intelligence and intelligent software agents in support of decision making using simulation systems. The use of ontological means to ensure composability of models and interoperability of simulations is the topic of several additional publications.

CURRENT DEVELOPMENTS

As with the previous section of this chapter, it is extremely difficult to decide which of the current developments should be highlighted, as every development in the discipline of M&S improves the usability of resulting systems for decision support. The focus of the contributions in this section is therefore relatively narrow. As before, the idea is not to be restrictive but to give examples.

The military community used the Simulation Interoperability Workshops to work on the development of a technical reference model (TRM) for coupling of command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) with M&S systems (Griffin et al., 2002). Figure 3 shows a generalization of the model, as already recommended by Tolk et al. (2008).

The model focuses on data exchange requirements and categories. The data is categorized as:

• simulation specific management data unique to the decision support simulation system,
• operational initialization data describing the data needed for initialization of both systems describing concepts, properties, processes, and constraints,
• dynamic exchange of operational data describing information that captures the input and output data of both worlds during execution, and
• operational system specific management data unique to the IT infrastructure used by the decision maker.

Unfortunately, the standardization work on the TRM was never completed, so that besides the final report of the study group and several contributing workshop papers no standard in support of embedding decision support simulations into operational IT infrastructures exists. Work in


Figure 3. Generalization of the C4ISR Technical Reference Model

this domain would be very helpful to the M&S community.

As pointed out before, the US Department of Defense is working on a series of strategies and standards to enable net-centric operations. Another standard developed under the roof of the Simulation Interoperability Standardization Organization (SISO), the Base Object Model (BOM) Standard, is currently being evaluated to be used for the registration of M&S services. The standard is defined in two documents, the “Base Object Model (BOM) Template Standard” and the “Guide for Base Object Model (BOM) Use and Implementation” (SISO, 2006). The first document provides the essential details regarding the makeup and ontology of a BOM; the companion document gives examples and best practice guidelines for using and implementing the new standard. In summary, the BOM standard provides a standard to capture the artifacts of a conceptual model. Furthermore, it can be used to design new simulation systems as well as to integrate legacy simulations. The conceptual model elements defined by the BOM standard contain descriptions of concepts, properties, and processes. The description is not only static, but the interplay is captured in the form of state machines as well. The BOM template is divided into five categories and reuses successful ideas of the current simulation interoperability standard “High Level Architecture” (IEEE 1516) and supports:

• Model Identification by associating important metadata with the BOM. Examples include the author of the BOM, the responsible organization, security constraints, etc.
• Conceptual Model Definition by describing patterns of interplay, state machines representing the aspects of the conceptual model, entity types, and event types.
• Model Mapping by defining what simulated entities and processes represent what elements of the conceptual model.


• Object Model Definition by recording the necessary implementation details (objects, interactions, attributes, parameters, and data types as defined by IEEE 1516)
• Additional Supporting Tables in the form of notes and lexicon definitions.

The BOM standard has successfully been applied in several US military research projects. Outside the US Department of Defense, its use has not yet been documented sufficiently to speak of a broadly accepted standard. The potential, however, is impressive, as shown by Searle and Brennan (2006) in their educational notes for NATO.

As mentioned at the beginning of this section, many other developments are of high interest to decision support simulation developers. The increasing use of agent-directed simulation is one aspect. The human behavior representation in M&S is another. Complex systems in knowledge-based environments (Tolk and Jain, 2008) are another domain of interest, in particular how to cope with uncertainties or how to apply ontological means in support of complex system interoperation. Enumerating all interesting fields lies beyond the scope of this chapter.

In summary, the developer of decision support simulation systems or the engineer tasked with the integration of simulation systems for operational decision support must follow developments in all levels of interoperation: from technical innovations enabling better connectivity (such as optical memories or satellite based internet communications) via improvements in the interoperability domain (such as new developments in the domain of semantic web services) to conceptual questions (including standardizing artifacts in machine understandable form). As systems developed for this domain need to be highly reliable and credible, the engineer needs not only to be highly technically competent, but also needs to follow the code of ethics of the profession, as wealth – and sometimes even survival – will depend on the work and efforts produced.

SUMMARY

Decision support of operational processes is the most challenging application for simulation systems. In all other application domains, the necessity for credible and reliable results is lower than for real world operation decision support. While in all other domains there is always the chance to react and counteract to insufficient M&S functionality, a wrong recommendation in support of real world operations can lead to significant financial trouble or even the loss of lives.

This chapter summarized the requirements for simulation systems when being used for such applications. It showed the necessary annotation to allow the discovery, selection, and orchestration of M&S systems as services in service-oriented environments. It also listed the premises for simulation system functionality, focusing on completeness of concepts, properties, processes, and constraints, obtainability of data, validation and verification, and the use of means of knowledge management. The current developments continue to close gaps so that the use of simulation in the context of operational decision support will soon enable support to even the cognitive levels of group decision making and common situational awareness.

REFERENCES

Agarwal, S., Handschuh, S., & Staab, S. (2005). Annotation, Composition and Invocation of Semantic Web Services. Journal of Web Semantics, 2(1), 1–24.

332
Using Simulation Systems for Decision Support

Alberts, D. S., & Hayes, R. E. (2003). Power to the Edge: Command and Control in the Information Age. Department of Defense Command and Control Program, Information Age Transformation Series. Washington, DC.

Alesso, H. P., & Smith, C. F. (2005). Developing Semantic Web Services. Wellesley, MA: A.K. Peters, Ltd.

Balci, O. (1998). Verification, Validation, and Testing. In J. Banks (Ed.), Handbook of Simulation. New York: John Wiley & Sons.

Brade, D. (2000). Enhancing M&S Accreditation by Structuring V&V Results. Proceedings of the Winter Simulation Conference (pp. 840-848).

Department of Defense. (1996). Modeling & Simulation Verification, Validation, and Accreditation (US DoD Instruction 5000.61). Washington, DC: Author.

Dobrev, P., Kalaydjiev, O., & Angelova, G. (2007). From Conceptual Structures to Semantic Interoperability of Content (LNCS Vol. 4604, pp. 192-205). Berlin: Springer-Verlag.

Everett, J. E. (2002). A decision support simulation model for the management of an elective surgery waiting system. Journal for Health Care Management Science, 5(2), 89–95. doi:10.1023/A:1014468613635

Family of Standards for Modeling and Simulation (M&S) High Level Architecture (HLA): (a) IEEE 1516-2000 Framework and Rules; (b) IEEE 1516.1-2000 Federate Interface Specification; (c) IEEE 1516.2-2000 Object Model Template (OMT) Specification. IEEE Press.

Griffin, A., Lacetera, J., Sharp, R., & Tolk, A. (Eds.). (2002). C4ISR/Sim Technical Reference Model Study Group Final Report (C4ISR/Sim TRM) (SISO Reference Document 008-2002-R2). Simulation Interoperability Standards Organization, Orlando, FL.

Kvaale, H. (1988). A decision support simulation model for design of fast patrol boats. European Journal of Operational Research, 37(1), 92–99. doi:10.1016/0377-2217(88)90283-4

Moya, L., & Weisel, E. (2008). The Difficulties with Validating Agent Based Simulations of Social Systems. Proceedings Spring Multi-Simulation Conference, Agent-Directed Simulation, Ottawa, Canada.

Muguira, J. A., & Tolk, A. (2006). Applying a Methodology to Identify Structural Variances in Interoperations. Journal for Defense Modeling and Simulation, 3(2), 77–93. doi:10.1177/875647930600300203

Muhdi, R. A. (2006). Evaluation Modeling: Development, Characteristics and Limitations. Proceedings of the National Occupational Research Agenda (NORA) (pp. 87-92).

National Research Council. (2002). Modeling and Simulation in Manufacturing and Defense Acquisition: Pathways to Success. Washington, DC: Committee on Modeling and Simulation Enhancements for 21st Century Manufacturing and Defense Acquisition, National Academies Press.

National Research Council. (2006). Defense Modeling, Simulation, and Analysis: Meeting the Challenge. Washington, DC: Committee on Modeling and Simulation for Defense Transformation, National Academies Press.

North Atlantic Treaty Organization. (2002). NATO Code of Best Practice for Command and Control Assessment (Rev. ed.). Washington, DC: CCRP Press.

Ören, T. I., Numrich, S. K., Uhrmacher, A. M., Wilson, L. F., & Gelenbe, E. (2000). Agent-directed simulation: challenges to meet defense and civilian requirements. In Proceedings of the 32nd Conference on Winter Simulation, Orlando, Florida, December 10–13. San Diego, CA: Society for Computer Simulation International.
Page, E. H., Briggs, R., & Tufarolo, J. A. (2004). Toward a Family of Maturity Models for the Simulation Interconnection Problem. In Proceedings of the Spring Simulation Interoperability Workshop. New York: IEEE CS Press.

Phillips-Wren, G., Ichalkaranje, N., & Jain, L. C. (Eds.). (2008). Intelligent Decision Making – An AI-Based Approach. Berlin: Springer-Verlag.

Phillips-Wren, G., & Jain, L. C. (Eds.). (2005). Intelligent Decision Support Systems in Agent-Mediated Environments. The Netherlands: IOS Press.

Pohl, J. G., Wood, A. A., Pohl, K. J., & Chapman, A. J. (1999). IMMACCS: A Military Decision-Support System. DARPA-JFACC 1999 Symposium on Advances in Enterprise Control, San Diego, CA.

Potter, S. S., Elm, W. C., & Gualtieri, J. W. (2006). Making Sense of Sensemaking: Requirements of a Cognitive Analysis to Support C2 Decision Support System Design. Proceedings of the Command and Control Research and Technology Symposium. Washington, DC: CCRP Press.

Power, D. J., & Sharda, R. (2007). Model-driven decision support systems: Concepts and research directions. Journal for Decision Support Systems, 43(3), 1044–1061. doi:10.1016/j.dss.2005.05.030

Pullen, J. M., Brunton, R., Brutzman, D., Drake, D., Hieb, M. R., Morse, K. L., & Tolk, A. (2005). Using Web Services to Integrate Heterogeneous Simulations in a Grid Environment. Journal on Future Generation Computer Systems, 21, 97–106. doi:10.1016/j.future.2004.09.031

Robinson, S. (2008). Conceptual modelling for simulation Part I: definition and requirements. The Journal of the Operational Research Society, 59, 278–290. doi:10.1057/palgrave.jors.2602368

Sargent, R. G. (1999). Validation and verification of simulation models. Proceedings of the Winter Simulation Conference, 1, 39–48.

Sargent, R. G. (2000). Verification, Validation, and Accreditation of Simulation Models. Proceedings of the Winter Simulation Conference (pp. 50-59).

Sargent, R. G. (2007). Verification and validation of simulation models. Proceedings of the Winter Simulation Conference (pp. 124-137).

Searle, J., & Brennan, J. (2006). General Interoperability Concepts. In Integration of Modelling and Simulation (pp. 3-1 – 3-8) (Educational Notes RTO-EN-MSG-043). Neuilly-sur-Seine, France.

Simulation Interoperability Standards Organization (SISO). (2006). The Base Object Model Standard; SISO-STD-003-2006: Base Object Model (BOM) Template Standard; SISO-STD-003.1-2006: Guide for Base Object Model (BOM) Use and Implementation. Orlando, FL: SISO Documents.

Srivastava, B., & Koehler, J. (2003). Web Service Composition – Current Solutions and Open Problems. Proceedings ICAPS 2003 Workshop on Planning for Web Services.

Tolk, A. (1999). Requirements for Simulation Systems when being used as Decision Support Systems. Proceedings Fall Simulation Interoperability Workshop, I, 29–35. IEEE CS Press.

Tolk, A., & Diallo, S. Y. (2008). Model-Based Data Engineering for Web Services. In R. Nayak et al. (Eds.), Evolution of the Web in Artificial Intelligence Environments (pp. 137-161). Berlin: Springer.

Tolk, A., Diallo, S. Y., & Turnitsa, C. D. (2008). Implied Ontological Representation within the Levels of Conceptual Interoperability Model. International Journal of Intelligent Decision Technologies, 2(1), 3–19.
Tolk, A., Diallo, S. Y., Turnitsa, C. D., & Winters, L. S. (2006). Composable M&S Web Services for Net-centric Applications. Journal for Defense Modeling and Simulation, 3(1), 27–44. doi:10.1177/875647930600300104

Tolk, A., & Jain, L. C. (Eds.). (2008). Complex Systems in the Knowledge-based Environment. Berlin: Springer-Verlag.

Tonfoni, G., & Jain, L. C. (Eds.). (2003). Innovations in Decision Support Systems. Australia: Advanced Knowledge International.

Tosic, V., Pagurek, B., Esfandiari, B., & Patel, K. (2001). On the Management of Compositions of Web Services. In Proceedings Object-Oriented Web Services (OOPSLA).

Truong, T. H., Rothschild, B. J., & Azadivar, F. (2005). Decision support system for fisheries management. In Proceedings of the 37th Conference on Winter Simulation (pp. 2107-2111).

van Dam, A. (1999). Education: the unfinished revolution. ACM Computing Surveys, 31(4). doi:10.1145/345966.346038

Wilton, D. (2001). The Application of Simulation Technology to Military Command and Control Decision Support. In Proceedings Simulation Technology and Training Conference (SimTecT), Canberra, Australia.

Yilmaz, L. (2007). Using meta-level ontology relations to measure conceptual alignment and interoperability of simulation models. In Proceedings of the Winter Simulation Conference (pp. 1090-1099).

Yilmaz, L., Ören, T., & Aghaee, N. (2006). Intelligent agents, simulation, and gaming. Journal for Simulation and Gaming, 37(3), 339–349. doi:10.1177/1046878106289089

Zeigler, B. P. (1986). Toward a Simulation Methodology for Variable Structure Modeling. In Elzas, Oren, & Zeigler (Eds.), Modeling and Simulation Methodology in the Artificial Intelligence Era.

Zeigler, B. P., & Hammonds, P. E. (2007). Modeling & Simulation-Based Data Engineering: Introducing Pragmatics into Ontologies for Net-Centric Information Exchange. New York: Academic Press.

KEY TERMS AND DEFINITIONS

Decision Support Systems: are information systems supporting operational (business and organizational) decision-making activities of a human decision maker. The DSS shall help decision makers to compile useful information from raw data and documents that are distributed in a potentially heterogeneous IT infrastructure, personal or educational knowledge that can be static or procedural, and business models and strategies to identify and solve problems and make decisions.

Decision Support Simulation Systems: are simulation systems supporting operational (business and organizational) decision-making activities of a human decision maker by means of modeling and simulation. They use decision support system means to obtain, display and evaluate operationally relevant data in agile contexts by executing models using operational data, exploiting the full potential of M&S and producing numerical insight into the behavior of complex systems.

Integratability: contends with the physical/technical realms of connections between systems, which include hardware and firmware, protocols, networks, etc. If two systems can exchange physical data with each other in a way that the target system receives and decodes the submitted data from the sending system, the two systems are integrated.
Interoperability: contends with the software and implementation details of interoperations; this includes the exchange of data elements via interfaces, the use of middleware, and mapping to common information exchange models. If two systems are integrated and the receiving system can not only decode but understand the data in a way that is meaningful to the receiving system, the systems are interoperable.

Composability: contends with the alignment of issues on the modeling level. The underlying models are purposeful abstractions of reality used for the conceptualization being implemented by the resulting systems. If two systems are interoperable and share assumptions and constraints in a way that the axioms of the receiving system are not violated by the sending system, the systems are composable.

Conceptual Modeling: is the process of defining a non-software-specific formal specification of a conceptualization building the basis for the implementation of a simulation system (or another model-based implementation), describing the objectives, inputs, outputs, content, assumptions, and simplifications of the model. The conceptual model is a bridge between the real world observations and the high-level implementation artifacts.

Validation and Verification: are processes to determine the simulation credibility. Validation is the process of determining the degree to which a model or simulation is an accurate representation of the real world from the perspective of the intended uses. Validation determines the behavioral and representational accuracy. Verification is the process of determining that a model or simulation implementation accurately represents the developer's conceptual description and specifications. Verification determines the accuracy of transformation processes.

Model-Based Data Engineering: is the process of applying documented and repeatable engineering methods for data administration (i.e. managing the information exchange needs, including source, format, context of validity, fidelity, and credibility), data management (i.e. planning, organizing and managing of data, including defining and standardizing the meaning of data and of their relations), data alignment (i.e. ensuring that data to be exchanged exist in all participating systems, focusing on data provider/data consumer relations), and data transformation (i.e. the technical process of mapping different representations of the same data elements to each other), supported by a common reference model.

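The three system relations defined above – integratability, interoperability, and composability – form a ladder in which each level presupposes the one below it. As a purely illustrative sketch (the class, attribute names and checks below are hypothetical stand-ins, not drawn from any standard), the layering can be expressed as successive predicates:

```python
# Illustrative only: attribute names and checks are hypothetical stand-ins
# for the network, data and modeling levels described in the definitions above.

class System:
    def __init__(self, protocol, data_model, assumptions):
        self.protocol = protocol             # physical/technical connection
        self.data_model = data_model         # information exchange model
        self.assumptions = set(assumptions)  # modeling assumptions/constraints

def integrated(sender, receiver):
    """Integratability: the receiver can receive and decode the sender's data."""
    return sender.protocol == receiver.protocol

def interoperable(sender, receiver):
    """Interoperability: integrated, and the decoded data is meaningful."""
    return integrated(sender, receiver) and sender.data_model == receiver.data_model

def composable(sender, receiver):
    """Composability: interoperable, and no axiom of the receiver is violated."""
    return (interoperable(sender, receiver)
            and sender.assumptions <= receiver.assumptions)
```

Two systems that share a protocol and a common information exchange model but disagree on their modeling assumptions would pass the first two checks and fail the third.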
Chapter 15
The Simulation of Spiking Neural Networks
David Gamez
Imperial College, UK

ABSTRACT
This chapter is an overview of the simulation of spiking neural networks that relates discrete event
simulation to other approaches and includes a case study of recent work. The chapter starts with an
introduction to the key components of the brain and sets out three neuron models that are commonly used
in simulation work. After explaining discrete event, continuous and hybrid simulation, the performance
of each method is evaluated and recent research is discussed. To illustrate the issues surrounding this
work, the second half of this chapter presents a case study of the SpikeStream neural simulator that
covers the architecture, performance and typical applications of this software along with some recent
experiments. The last part of the chapter suggests some future trends for work in this area.

INTRODUCTION

In recent years there has been a great deal of interest in the simulation of neural networks to test our theories about the brain, and these models are also being used in a wide variety of applications ranging from data mining to machine vision and robot control. In the past the majority of these simulations were based on the neurons' average firing rate and there is now a growing interest in the development of more biologically realistic spiking models, which present their own challenges and are well suited to discrete event simulation.

This chapter starts with some background information about the operation of neurons and synapses in the brain and sets out some of the reasons why simulation plays an important role in neuroscience research. Next, some common neural models are examined and the chapter moves on to look at the differences between continuous simulation, discrete event simulation and the emerging hybrid approach. The issues in this area are then illustrated with a more detailed look at the SpikeStream neural simulator that covers its architecture, performance, typical

DOI: 10.4018/978-1-60566-774-4.ch015
Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
Figure 1. Two connected neurons

applications and recent experiments. Finally, the chapter concludes with some likely future directions for work in this area.

NEURAL SIMULATION

Neurons and the Brain

The main information-processing elements in the human brain are 80 billion cells called neurons that are organized into a highly complex structure of nuclei and layers that process different types of information.1 Neurons send signals to each other along fibres known as axons, which connect to the dendrites of other neurons at junctions called synapses, as shown in Figure 1.

The body of each neuron is surrounded by a cell membrane that separates the nucleus and intracellular fluid from the extracellular fluid outside the cell. The cell membrane is crossed by porous channels that allow ions, such as sodium and potassium, to be exchanged with the extracellular fluid. Since these ions have a positive or negative charge, their movement through the cell membrane alters the voltage across it, and neurons actively manage this voltage by pumping ions across the cell membrane so that the voltage remains at around -70 millivolts – a value known as the resting potential.

Neurons communicate by sending pulses of electrical energy along their axons that are known as spikes. When a spike reaches the dendrite of another neuron it increases that neuron's voltage, and if the voltage passes a threshold of about -50 millivolts, then the neuron 'fires' and emits a pulse or spike of electrical activity that is transmitted to other neurons. Since axons and dendrites have different lengths, these spikes take different amounts of time to propagate between neurons, and once a neuron has fired there is a period known as the refractory period in which it is unresponsive or less responsive to incoming spikes (see Figure 2). In the human brain each neuron fires at around 1-5 Hz and sends spikes to between 3,000 and 13,000 other neurons (Binzegger, Douglas and Martin, 2004), leading to approximately 10²² spike events per second.

The axon of one neuron connects to the dendrite of another at a junction known as a synapse. When
Figure 2. Change in the voltage across a neuron’s cell membrane over time

a spike reaches a synapse, the change in voltage causes a chemical called a neurotransmitter to be released, which crosses the synaptic junction and changes the voltage of the receiving neuron. Depending on the amount and type of neurotransmitter, the spike from one neuron can have a large or small effect on another, and in many neuron models this is expressed by saying that each synapse has a particular weight. Although there are a number of different learning mechanisms in the brain, most research on neural networks focuses on models that use changes in the synapses' weights to store information. One of the most widely used methods for learning weights in spiking networks is called spike time dependent plasticity (STDP). This algorithm increases the weight of a synapse when a spike arrives just before the firing of a neuron, because it is assumed that the spike has contributed to the neuron's firing. When a spike arrives just after the neuron has fired, the weight is decreased because the synapse has not contributed to the neuron's activity.

A comprehensive theory about how the brain works should be able to explain how the interactions between billions of neurons produce human behaviour. As part of the research in this area a lot of anatomical work has been carried out on humans and animals in order to understand the large scale connections in the brain, and studies of brain-damaged patients have revealed a great deal about the specialized processing of different brain areas. The detailed behaviour of individual neurons has also been measured using in vitro neuron preparations, and electrodes have been implanted into the brains of living animals and humans to record from individual neurons as they interact with their environment. However, whilst electrodes have good spatial and temporal resolution, it is only possible to use a few hundred at a time, which severely limits the amount of information that can be gathered by this method.

In recent years, a great deal of progress has been made with the development of non-invasive scanning systems, such as PET or fMRI, that provide an almost real time picture of brain activity. These technologies have helped us to understand the specializations and interactions of different brain areas, but they are limited by the fact that their
maximum spatial resolution is about 1 mm³, which represents the average activity of approximately 50,000 neurons, and their maximum temporal resolution is of the order of 1 second.

This limited access to neurons in the brain has led many researchers to work on large scale neural simulations, and in the longer term it is hoped that this type of model will be able to predict the behaviour of neural tissue and greatly increase our understanding of the brain's functions. Many neural simulations have also been carried out in engineering and computer science as part of work on machine learning, computer vision and robotics.

Neuron and Synapse Models

In the past the majority of neural simulations modelled neurons' average firing rates and propagated these firing rates from neuron to neuron at each time step. Although this type of model is not particularly biologically realistic, it can be used for brain-inspired models of populations of neurons – see, for example, Krichmar, Nitz, Gally and Edelman (2005) – and it has been used extensively in machine learning and robotics. More recently, an increase in computer power and the recognition that spike timing plays an important part in the neural code (Maass and Bishop, 1999) have motivated more researchers to simulate spiking neurons.

The most detailed spiking neural models are typically based on an approach put forward by Hodgkin and Huxley (1952), who treated the cell membrane as a capacitor and interpreted the movement of ions through the channels as different currents. In the standard Hodgkin-Huxley model there is a sodium channel, Na+, a potassium channel, K+, and an unspecific leakage channel that has resistance R. The rate of change of the voltage, V, across a neuron's membrane is given by equation (1):

C dV/dt = −Σ_k I_k(t) + I(t)    (1)

where C is the capacitance, I(t) is the applied current at time t, and Σ_k I_k is the sum of the ionic currents across the cell membrane. Since the resistance of the leakage channel is independent of the voltage, its conductance, g_L, can be expressed as the inverse of the resistance (2):

g_L = 1/R    (2)

The conductance of the other channels depends on the voltage, V, and changes over time. When all the channels are open they transmit with maximum conductance g_Na or g_K. However, some of the channels are usually blocked and the probability that a channel is open is described by the variables m, n and h, where m and h control the Na+ channels and n controls the K+ channels. The behaviour of the variables m, n, and h is specified by equations (3), (4) and (5):

dm/dt = α_m(V)(1 − m) − β_m(V)m    (3)

dn/dt = α_n(V)(1 − n) − β_n(V)n    (4)

dh/dt = α_h(V)(1 − h) − β_h(V)h    (5)

where α and β are empirical functions of the voltage that were adjusted by Hodgkin and Huxley to fit their measurements of the squid's giant axon (see Table 1). Putting all of this together, the sum of the three current components is expressed in equation (6):

Σ_k I_k = g_Na m³h(V − E_Na) + g_K n⁴(V − E_K) + g_L(V − E_L)    (6)
Table 1. Parameters of the Hodgkin-Huxley equations (adapted from Gerstner and Kistler, 2002, p. 36)

Parameter	Value
α_m	(2.5 − 0.1V) / [exp(2.5 − 0.1V) − 1]
α_n	(0.1 − 0.01V) / [exp(1 − 0.1V) − 1]
α_h	0.07 exp(−V/20)
β_m	4 exp(−V/18)
β_n	0.125 exp(−V/80)
β_h	1 / [exp(3 − 0.1V) + 1]
E_Na	115 mV
E_K	−12 mV
E_L	10.6 mV
g_Na	120 mS/cm²
g_K	36 mS/cm²
g_L	0.3 mS/cm²
C	1 μF/cm²
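Equations (1)–(6), together with the Table 1 values, fully specify the single-compartment model, and a minimal forward-Euler integration can be sketched as follows (the applied current, step size, duration and initial gating values are illustrative assumptions rather than values from the chapter):

```python
import math

# Hodgkin-Huxley parameters from Table 1 (V in mV relative to rest, t in ms)
E_NA, E_K, E_L = 115.0, -12.0, 10.6   # reversal potentials (mV)
G_NA, G_K, G_L = 120.0, 36.0, 0.3     # maximum conductances (mS/cm^2)
C = 1.0                               # membrane capacitance (uF/cm^2)

# Empirical rate functions alpha and beta from Table 1
def a_m(v): return (2.5 - 0.1 * v) / (math.exp(2.5 - 0.1 * v) - 1)
def a_n(v): return (0.1 - 0.01 * v) / (math.exp(1 - 0.1 * v) - 1)
def a_h(v): return 0.07 * math.exp(-v / 20)
def b_m(v): return 4 * math.exp(-v / 18)
def b_n(v): return 0.125 * math.exp(-v / 80)
def b_h(v): return 1 / (math.exp(3 - 0.1 * v) + 1)

def simulate(i_app=10.0, duration=50.0, dt=0.01):
    """Integrate equations (1)-(6) with the forward Euler method.

    i_app (uA/cm^2), duration (ms), dt (ms) and the initial gating values
    are illustrative assumptions, not values given in the chapter.
    """
    v, m, n, h = 0.0, 0.05, 0.32, 0.6   # approximate resting state
    trace = []
    for _ in range(int(duration / dt)):
        # Equation (6): sum of the sodium, potassium and leakage currents
        i_ion = (G_NA * m**3 * h * (v - E_NA)
                 + G_K * n**4 * (v - E_K)
                 + G_L * (v - E_L))
        # Equation (1): membrane voltage update
        v += dt * (-i_ion + i_app) / C
        # Equations (3)-(5): gating variable updates
        m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
        n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
        h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
        trace.append(v)
    return trace
```

With a sustained input current of around 10 μA/cm², this sketch produces a train of action potentials roughly 100 mV above the resting potential, qualitatively matching the spiking behaviour described in the text.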

where E_Na, E_K and E_L are parameters based on the empirical data and given in Table 1. The complex morphology of neurons' axons and dendrites is modelled by treating them as a series of interconnected compartments whose electrical properties are calculated at each time step, and more recent models include a number of different types and distributions of ion channels and the details of individual synapses. This type of modelling is very computationally expensive and it can only be carried out for a small number of neurons on a single computer, or using supercomputers for larger simulations.

A less accurate, but more computationally efficient approach is to treat each neuron as a point and use internal variables to represent the membrane potential and other attributes. One common point neuron model is the Spike Response Model (Gerstner and Kistler, 2002; Marian, 2003), which has three components: a leaky integrate and fire of the weights of incoming spikes, an absolute refractory period in which the neuron ignores incoming spikes, and a relative refractory period in which it is harder for spikes to push the neuron beyond its threshold potential. The resting potential of the neuron is zero and when it exceeds the threshold the neuron fires and the contributions from previous spikes are reset to zero. The voltage V_i at time t for a neuron i that last fired at t̂_i is given by equation (7):

V_i(t) = Σ_j Σ_f w_ij e^(−(t − t_j^(f))/τ_m) − e^((n − (t − t̂_i))/m) H′(t − t̂_i)    (7)

where w_ij is the synaptic weight between i and j, τ_m is the membrane time constant, f is the last firing time of neuron j, m and n are parameters controlling the relative refractory period, and H′ is given by equation (8):

H′(t − t̂_i) = ∞, if 0 ≤ (t − t̂_i) < ρ; 1, otherwise    (8)

in which ρ is the absolute refractory period. The Spike Response Model is particularly good for event-based simulation because its equations can be used to calculate the voltage across the cell membrane retrospectively after a spike has been received.

A second widely used point neuron model was put forward by Izhikevich (2003). In this model
the voltage V of a neuron at time t with a time step ts is given by equations (9), (10) and (11):

V_{t+ts} = V_t + ts(0.04V_t² + 5V_t + 140 − u_t + I)    (9)

u_{t+ts} = u_t + a(bV_{t+ts} − u_t)    (10)

if (V_t >= 30 mV), then V_t = c and u = u + d    (11)

where u is a membrane recovery variable representing the sodium and potassium currents, and a, b, c and d are parameters that are used to tune the behaviour of the neuron. Izhikevich claims that this model can reproduce the behaviour of a number of different types of neuron, but it has the disadvantage that it is more complicated to use with discrete event simulation. Both the Spike Response Model and Izhikevich's model are typically used in conjunction with basic synapses that pass on their weight to the neuron when they receive a spike, possibly with some learning behaviour.

Continuous Simulation of Spiking Neural Networks

Continuous simulation is also known as time-driven or synchronous simulation and its defining feature is that updates to the model are driven by the advance of a simulation clock.2 At each time step the states of all the neurons and synapses are calculated, spikes are transmitted to other neurons, the time step is advanced and then the process is repeated again. These steps in a continuous simulation are summarized below:

•	Initialize each component of the model.
•	Advance the time step by a fixed amount. Time steps of 0.1 ms or 1 ms are typically used with spiking neural networks.
•	Propagate spikes generated in the previous time step to the appropriate synapses and neurons, possibly with a delay.
•	Update the states of all the neurons and synapses.
•	Return to step 2 or terminate the simulation if the maximum amount of time has been reached.

One of the main computational costs of continuous simulation is the update of all the neurons and synapses at each time step, which scales linearly with the number of updates and depends on the complexity of the models. In continuous simulation the propagation of spikes with different delays can be handled using a circular buffer3 that requires a store and retrieve operation in memory for each spike, and this has a cost that scales linearly with the number of spikes. For a network of N neurons firing F times per second with an average connectivity per neuron P, the cost per second of simulated time is given in equation (12):4

(C_N × N)/ts + (C_S × N × P)/ts + C_P × F × N × P    (12)

where C_N is the cost of updating a neuron, C_S is the cost of updating a synapse, C_P is the cost of propagating a single delayed spike, and ts is the time step duration in seconds. The first part of (12) is the cost of updating all of the neurons at each time step, which is likely to be high for complex neuron models. The central part is the cost of updating all of the synapses at each time step, and although synapse models are generally much less complex than neuron models, these synapse updates can incur a considerable cost because there can be up to 10,000 synapses per neuron in a biologically realistic simulation. The last part of (12) is the cost of propagating the spikes and passing the synapses' weights to the neurons. The cost of propagation, C_P, will be low
if a circular buffer is used for the delays and the spikes' weights are simply added to the neurons' voltages. However, if the network activity or connectivity is high, then there will be a large number of spike events to process each second, and the rate of propagating the spikes will be limited by the speed of the computer network if the simulation is distributed across several machines.

One of the main advantages of continuous simulation is that it is generally easier to implement than discrete event simulation - especially when working in parallel across multiple machines - and it is good for neural models that generate spontaneous activity, such as the Hodgkin-Huxley and Izhikevich models discussed earlier. Continuous simulation can also be equally or more efficient than discrete event simulation when there are a large number of events, although it is often much less efficient when events are sparse or the neuron model is complicated, because of the updates to all the neurons and synapses at each time step.

The main disadvantage of the continuous approach is that the resolution of the time step limits the performance and accuracy of the simulation. Continuous simulations with a large time step will model a given amount of time faster than simulations with a small time step, but larger time steps can substantially alter the behaviour of the network by changing whether spikes arrive inside or outside the neurons' refractory periods.5 Furthermore, the simultaneous transmission of spikes at each time step leads to an unnatural clustering at low time step resolutions that affects the learning behaviour.6 Against these problems, Brette et al. (2007) argue that since neurons have no effect on each other within the minimum spike transmission delay, it should be possible to produce accurate simulations by setting the time step to this value. However, to make this work without loss of accuracy, the spikes within each time step have to be sorted and dealt with in an event-based manner.

The continuous approach is often used to simulate spiking neural networks based on Hodgkin-Huxley models that would be difficult to handle using discrete event simulation. A good example of this type of work is the Blue Brain project (Markram, 2006), which is attempting to produce a biologically accurate model of a single cortical column of 10,000 neurons interconnected with 30 million synapses. This project is simulating these neurons on an IBM Blue Gene supercomputer containing 8192 processors and 2 TB of RAM - a total of 22 × 10¹² FLOPS (22 teraflops) of processing power. The first simulation of a rat's cortical column was carried out in 2006 and it is currently running at about two orders of magnitude slower than real time. The main objective of this project is to reproduce the behaviour of in vitro rat tissue, and so the simulation is not connected to sensory input and it has not been used to control the behaviour of a real or virtual robot. Continuous simulation has also been used by Izhikevich and Edelman (2008) to create an anatomically realistic neural network based on Izhikevich's model with one million neurons and half a billion synapses. This simulation ran sixty times slower than real time and was carried out on a Beowulf cluster with 60 3GHz processors and a total of 90 GB RAM.
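To make the clock-driven update concrete, the following sketch advances a population of Izhikevich (2003) point neurons by updating every neuron at every time step, using simple Euler integration with a 1 ms step and the standard regular-spiking parameter values (a = 0.02, b = 0.2, c = -65, d = 8). It illustrates the continuous approach only; it is not code from any of the simulators discussed in this chapter, and the population size and input currents are invented for the example.

```python
# Continuous (clock-driven) simulation step for Izhikevich (2003) neurons.
# Every neuron is updated on every time step, whether or not spikes arrive -
# this is the per-step cost that the continuous approach always pays.

def step_izhikevich(v, u, inputs, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Advance all neurons by one time step (dt in ms); return who fired."""
    fired = []
    for i in range(len(v)):
        # Euler integration of the two Izhikevich state equations.
        v[i] += dt * (0.04 * v[i] ** 2 + 5.0 * v[i] + 140.0 - u[i] + inputs[i])
        u[i] += dt * a * (b * v[i] - u[i])
        if v[i] >= 30.0:          # spike: reset the neuron and record the event
            v[i] = c
            u[i] += d
            fired.append(i)
    return fired

# Two neurons: one driven by a constant input current of 10, one undriven.
v, u = [-65.0, -65.0], [-13.0, -13.0]
all_spikes = []
for step in range(1000):     # 1000 steps of 1 ms = 1 second of simulated time
    all_spikes += step_izhikevich(v, u, [10.0, 0.0])
```

With these parameters the driven neuron fires regularly while the undriven one settles at its resting potential, which shows why clock-driven updating wastes work on quiet neurons when activity is sparse.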


neurons, with a typical simulation following the steps outlined below.

1. Add initial spike events to the event queue sorted by the time at which they are scheduled to occur.
2. Extract the spike event with the lowest time from the front of the event queue.
3. Advance the simulation clock to the time specified by the event.
4. Update the states of the neurons and synapses that are affected by the spike.
5. If any of the updated neurons fire, insert appropriate spike events into the queue.
6. Return to step 2 or terminate simulation if the queue is empty or if the maximum time has elapsed.

With neuron models that generate spontaneous activity, such as Izhikevich (2003), neuron firing can happen independently of incoming spikes and the simulation engine has to predict when each neuron will fire and handle these predictions as separate events. Whilst spike arrivals are events that definitely happen once they have been added to the event queue, predictions about neurons' firing times are uncertain because spike arrivals can advance or retard the predicted times. These uncertain events are typically handled by adding them to a second event queue and events in this second queue are made certain when no earlier spike events are scheduled to occur. This approach is similar to three phase activity scanning (Banks, Carson II, Nelson and Nicol, 2005) and the algorithm is given below.7

1. Add initial spike events to the spike event queue.
2. Predict the times at which the neurons will spontaneously fire and add these events to the second event queue.
3. If the next spike event is scheduled to occur earlier than the next neuron firing event:
4. Extract the spike event with the lowest timing from the front of the spike event queue.
5. Advance the simulation clock to the time specified by the event.
6. Update the states of the neurons and synapses that are affected by the spike.
7. Update the predicted firing times of the neurons that are affected by the spike.
8. Return to step 3 or terminate simulation if the maximum time has elapsed or there are no events left to process in both queues.
9. Else if no spike events are scheduled to occur earlier than the next neuron firing event:
10. Extract the neuron firing event with the lowest timing from the front of the second queue.
11. Add spikes generated by this event to the spike event list.
12. Update the predicted firing time of the neuron.
13. Return to step 3 or terminate simulation if the maximum time has elapsed or there are no events left to process in both queues.

If each neuron fires at rate F and connects on average to another P neurons, then the firing of a neuron generates P × F spike events that have to be inserted into the first queue in the correct order with a cost CQ1 for each event. When the spikes from a neuron are processed, the predicted firing times of all neurons connected to the firing neuron have to be revised in the second queue, with each access and update to the neurons in the second queue having a cost CQ2. The processing of each spike event also triggers the updates of the associated synapse and neuron, which incurs costs CS and CN. Putting this together, the total cost of discrete event simulation of N neurons for one second of simulated time is given by equation (13):8

F × N × P × (CN + CS + CQ1 + CQ2)    (13)
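The first, six-step loop above can be sketched with a binary heap as the event queue (Python's heapq). This is an illustrative reconstruction rather than code from any of the simulators discussed in this chapter, and the single per-neuron input weight is a simplification invented for the example.

```python
import heapq

def simulate(initial_events, connections, weights, threshold=1.0, max_time=100.0):
    """Event-driven simulation sketch. initial_events is a list of
    (time, neuron) spike arrivals; connections maps each neuron to
    (target, delay) pairs; weights gives the charge one arriving
    spike adds to each neuron (a simplifying assumption)."""
    queue = list(initial_events)
    heapq.heapify(queue)                      # step 1: events sorted by time
    voltage = {n: 0.0 for n in connections}
    fired = []
    while queue:                              # step 6: stop when queue is empty
        t, neuron = heapq.heappop(queue)      # step 2: earliest spike event
        if t > max_time:                      # step 6: maximum time elapsed
            break                             # step 3: the clock is implicit in t
        voltage[neuron] += weights[neuron]    # step 4: update affected neuron
        if voltage[neuron] >= threshold:      # step 5: insert new spike events
            voltage[neuron] = 0.0
            fired.append((t, neuron))
            for target, delay in connections[neuron]:
                heapq.heappush(queue, (t + delay, target))
    return fired
```

Note that the clock never ticks through quiet periods: it jumps straight from one event time to the next, which is where the efficiency with sparse events comes from.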


Since the time resolution of the events can be specified precisely without incurring a performance penalty, discrete event simulation avoids many of the accuracy problems that are associated with the continuous approach. When there are sparse events (i.e. N and/or P are low) discrete event simulation can be very efficient, but when event rates are high, the management of the event queues can become a major performance bottleneck - for example, a simulation of a network of 10,000 neurons firing at 5 Hz with 10,000 connections per neuron has to handle a billion events per simulated second. Although there are data structures in which the enqueue and dequeue operations take constant average time (Brown, 1988; Claverol, Brown and Chad, 2002), the cost of each operation in these types of queue can be high, and with simpler queue implementations, such as binary heaps, the cost of the enqueue and dequeue operations varies with the logarithm of the number of events. Event-based simulation has the further disadvantage that it can be difficult to distribute the event queues over multiple computers.

One of the better known event-based spiking simulators is MVASpike, which is a simulation library written in C++ that comes with a number of different neuron models and STDP learning. This simulator can be controlled using a high level scripting language, such as Python, and it was used by Tonnelier, Belmabrouk and Martinez (2007) to model a network with 1000 neurons and 10⁶ connections. A second discrete event simulator is SpikeSNNS, which is based around a single event queue and was used by Marian (2003) to model the development of eye-hand coordination in a network with several hundred neurons. Other work in this area has been carried out by Mattia and Del Guidice (2000), who developed an efficient event-driven algorithm based around a discrete set of ordered delays and demonstrated it on a network of 15,000 neurons that ran approximately 150 times slower than real time. There is also the research of Reutimann, Giugliano and Fusi (2003), who extended the event-driven strategy to efficiently handle the noisy background activity that is typically found in biological networks.

Hybrid Strategies

Over the last few years a number of simulators have emerged that are based on a hybrid approach, which combines some of the benefits of continuous and discrete event simulation and avoids many of their limitations. The most common hybrid strategy is to make the update of the synapses and possibly the neurons event driven, and to maintain a continuous simulation clock that delays the spikes and avoids the inefficiencies of event queues. The algorithm for this type of hybrid strategy is as follows.

1. Initialize each component of the model.
2. Advance the time step by a fixed amount. Time steps of 0.1 ms or 1 ms are typically used with spiking neural networks.
3. Propagate spikes generated in the previous time step, possibly with a delay.
4. Update the states of the neurons and synapses that are affected by the spikes.
5. Return to step 2 or terminate simulation if the maximum amount of time has been reached.

Hybrid strategies often work by dividing the network into a number of neuron groups that can be distributed across several machines, and the synchronization between the different groups is achieved through the exchange of spike lists at each time step.

The computational cost of the hybrid strategy consists of the cost of updating each neuron and synapse that receives a spike, CN and CS, and the cost of propagating each delayed spike CP. For a network of N neurons firing F times per second with an average connectivity per neuron P, the update cost per second of simulated time is given by equation (14):


F × N × P × (CN + CS + CP) + Cts/ts    (14)

where Cts is the cost of advancing the time step and ts is the time step resolution in seconds. The advance of the time step has a very small cost and on a single computer the cost of spike propagation is also reasonably low since the delays in spike transmission can be efficiently managed using a circular buffer. However, when the network is distributed across several machines the exchange of spike lists can consume a large amount of processing power and network bandwidth, especially when there are multiple delays. This problem can be reduced by managing the delays of incoming spikes locally on each machine and spike list buffering can also be used to improve the efficiency of message exchange.9

When the connectivity and/or neuron firing rate is low the event-driven update of the neurons leads to substantial performance gains, but with biological firing rates and levels of connectivity, the neurons become updated at the same rate as they would in a continuous simulation, or even at a greater rate. Event-driven neuron updates are also not possible with neuron models that generate spontaneous activity, such as those of Hodgkin-Huxley and Izhikevich. Since synapses receive events at a much lower rate than neurons, it is generally advantageous to make the update of the synapses event driven, which can reduce the simulation cost by several orders of magnitude. The hybrid approach still suffers from the accuracy problems of continuous simulation, but since hybrid simulation is more efficient, the time step can be set to a high resolution without incurring a major performance penalty, and it is also possible to use an event-based approach within each time step to deliver events with a higher precision than the time step resolution.10

SpikeNET was one of the first hybrid neural simulators to be developed and it could simulate 400,000 neurons and 19 million connections in real time with a time step resolution of 1ms and a firing rate of 1Hz (Delorme and Thorpe, 2003). Whilst the SpikeNET architecture was influential, the free version was not widely used because of a number of critical limitations, such as a lack of spike delay and a poor user interface.11 A second well-known hybrid simulator is NEST, which has been under development for a number of years and has been used for biologically inspired simulations by a number of researchers. Simulation runs are set up in NEST using a proprietary scripting language and it can simulate a network of 100,000 neurons and 1 billion connections with an average firing rate of 2.5Hz at approximately 400 times real time using 8 processors (Morrison et al., 2005). A third example of a hybrid simulator is SpikeStream, which works along broadly similar lines to SpikeNET and NEST and will now be covered in more detail.

THE SPIKESTREAM NEURAL SIMULATOR

Introduction

SpikeStream is a hybrid neural simulator that was developed as part of the CRONOS project to build a conscious robot. It has good performance and a considerable amount of effort was put into the development of a good graphical interface that enables users to view the neuron activity as the simulation runs and fine tune the parameters of the model. SpikeStream was designed for robotic work and it can exchange streams of spikes with external devices over a network. The key features of SpikeStream are as follows:

• Written in C++ using Qt for the graphical user interface.
• Database storage.
• Parallel distributed operation.
• Sophisticated visualisation, editing and monitoring tools.
• Modular architecture.


• Variable delays.
• Dynamic synapses.
• Dynamic class loading.
• Live operation.
• Spike exchange with external devices over a network.

This overview of SpikeStream starts with its architecture and graphical interface and then moves on to its performance and communication with external devices. Some application areas and recent experiments are also covered along with some information about the SpikeStream release under the terms of the GPL license.

Architecture

SpikeStream is built with a modular architecture that enables it to operate across an arbitrary number of machines and allows third party applications to make use of its editing, archiving and simulation functions. The main components of the architecture are a number of databases, the graphical SpikeStream Application, programs to carry out simulation and archiving functions, and dynamically loaded neuron and synapse classes.

Databases

SpikeStream is organized around a number of databases that hold information about the network model, patterns and devices. This makes it easy to launch simulations across a variable number of machines and provides a great deal of flexibility in the creation of connection patterns. The SpikeStream databases are as follows:

• Neural Network. Each neuron has a unique ID and connections between neurons are stored as a combination of the presynaptic and postsynaptic neuron IDs. The available neuron and synapse types along with their parameters are also held in this database.
• Patterns. Holds spatiotemporal patterns that can be applied to the network for training or testing.
• Neural Archive. Stores archived neuron firing patterns. Each archive contains an XML description of the network and data in XML format.
• Devices. The devices that SpikeStream can exchange spikes with over a network.

These databases are edited by SpikeStream Application and used to set up the simulation run. They can also be edited by third party applications - for example, to create custom connection patterns or neuron arrangements - without affecting SpikeStream's ability to visualize and simulate the network.

Graphical User Interface

An intuitive graphical user interface has been written for SpikeStream with the following features (see Figure 3):

• Editing. Neuron and connection groups can be created and deleted.
• 3D Visualisation. Neuron and connection groups are rendered in 3D using OpenGL and they can be rotated, selectively hidden or shown, and their individual details displayed. The user can drill down to information about a single synapse or view all of the connections simultaneously.
• Simulation. The simulation tab has controls to start and stop simulations and vary the speed at which they run. Neuron and synapse parameters can be set, patterns and external devices connected and noise injected into the system.
• Monitoring. Firing and spiking patterns can be monitored and variables, such as a neuron's voltage, graphically displayed.
• Archiving. Archived simulation runs can be loaded and played back.
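Since each connection is stored as a combination of the presynaptic and postsynaptic neuron IDs, a spike can be represented compactly by packing the two IDs into a single integer, which is the general idea behind the compressed spike messages described in the next section. The sketch below is an illustration of that idea only: the 16-bit field width is an assumption, not a detail of the SpikeStream database or message format.

```python
# Pack a (presynaptic, postsynaptic) neuron ID pair into one integer so that
# a single number identifies a unique synapse. The field width is assumed.

ID_BITS = 16  # assumed width: supports up to 65,536 IDs per field

def pack_spike(pre_id, post_id, bits=ID_BITS):
    assert 0 <= pre_id < (1 << bits) and 0 <= post_id < (1 << bits)
    return (pre_id << bits) | post_id

def unpack_spike(spike, bits=ID_BITS):
    # Recover the pair: high bits are the presynaptic ID, low bits the
    # postsynaptic ID.
    return spike >> bits, spike & ((1 << bits) - 1)
```

Packing the pair keeps each spike message to a single machine word, which matters when millions of spikes per second are exchanged between processes.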


Figure 3. SpikeStream graphical user interface. The numbers indicate the following features: (1) Dialog
for monitoring the firing of neurons in a layer, (2) Dialog for monitoring variables inside the neurons, such
as the calcium concentration and voltage, (3) 3D network view, (4) Simulation controls, (5) Dialog for
viewing and setting the noise in the network, (6) Dialog for viewing and setting neuron parameters.

Much more information about the graphical features of SpikeStream can be found in the manual that is included with the release.

Simulation Engine

The SpikeStream simulator is based on the SpikeNET architecture and it consists of a number of processes that are launched and coordinated using PVM, with each process modelling a group of neurons using the hybrid approach. The spikes exchanged between neurons are a compressed version of the presynaptic and postsynaptic neuron IDs, which enables each spike to be uniquely routed to a class simulating an individual synapse, and variable delays are created using a circular buffer.

Unlike the majority of neural simulation tools, SpikeStream can operate in a live mode in which the neuron models are calculated using real time instead of simulation time. This enables SpikeStream to control robots that are interacting


Table 2. Test networks

              Small network    Medium network    Large network
Neurons       4,000            10,000            19,880
Connections   321,985          1,999,360         19,760,878

with the real world and to process input from live data sources, such as cameras and microphones. Although SpikeStream is primarily a hybrid simulator, it can update all of the neurons and/or synapses at each time step to accommodate neuron models that generate spontaneous activity.

Archiver

During a simulation run, the firing patterns of the network can be recorded by SpikeStream Archiver, which stores lists of spikes or firing neurons in XML format along with a simple version of the network model.

Neuron and Synapse Classes

Neuron and synapse classes are implemented as dynamically loaded libraries, which makes it easy to experiment with different neuron and synapse models without recompiling the whole application. Each dynamically loadable class is associated with a parameter table in the database, which makes it easy to change parameters during a simulation run. The current distribution of SpikeStream includes neuron and synapse classes implementing the Spike Response Model and Brader, Senn and Fusi's (2007) STDP learning rule.

Performance

The performance of SpikeStream was measured using three test networks put forward by Brette et al. (2007). These networks contained 4,000, 10,000 and 20,000 neurons that were randomly interconnected with a 2% probability (see Table 2), and they were divided into layers to enable them to be distributed across multiple machines. The Spike Response Model was used in these tests along with a basic synapse model.

At the beginning of each simulation run the networks were driven by random external current until their activity became self sustaining and then their performance was measured over repeated runs of 300 seconds. The first two networks were tested on one and two Pentium IV 3.2 GHz machines connected using a megabit switch with time step values of 0.1 and 1.0 ms. The third network could only be tested on two machines because its memory requirements exceeded that available on a single machine. All of the tests were run without any learning, monitoring or archiving.12

The results from these tests are plotted in Figure 4, which shows the amount of time taken to simulate one second of biological time for each test network. In this graph the performance difference between 0.1 and 1.0 ms time step resolution is partly due to the fact that ten times more time steps were processed at 0.1 ms resolution, but since SpikeStream is a hybrid simulator, the processing of a time step is not a particularly expensive operation. The performance difference between 0.1 and 1.0 ms time step resolution was mainly caused by changes in the networks' dynamics that were brought about by the lower time step resolution, which reduced the average firing frequency of the networks by the amounts given in Table 3.

The differences in average firing frequency shown in Table 3 suggest that the relationship between real and biological time needs to be combined with other performance measures for event-based simulators. To address this issue, the number of spikes processed in each second of real


Figure 4. Time taken to compute one second of biological time for one and two machines using time
step resolutions of 0.1 and 1 ms

time was also measured and plotted in Figure 5. This graph shows that SpikeStream can handle between 800,000 and 1.2 million spike events per second on a single machine and between 1.2 million and 1.8 million spike events per second on two machines for the networks that were tested. Figure 4 and Figure 5 both show that the performance increased when the processing load was distributed over multiple machines, but with network speed as a key limiting factor, multiple cores are likely to work better than multiple networked machines.

Most of the performance measurements for other simulators that are cited by Brette et al. (2007) are for different neuron and synapse models, and so they cannot be meaningfully compared with the SpikeStream results. The only results that are directly comparable are those for NEST, which are given by Brette et al. (2007, Figure 10B) for two machines. On the 4,000 neuron network NEST takes 1 second to compute 1 second of biological time when the synapse delay is 1 ms and 7.5 seconds to compute 1 second of biological time when the synapse delay is 0.125 ms. Compared with this, SpikeStream takes either 14 or 30 seconds to simulate 1 second of biological time, depending on whether the time step resolution is 1.0 or 0.1 ms, and these SpikeStream results are independent of the amount of delay.

A second point of comparison for the performance of SpikeStream is SpikeNET. The lack of a common benchmark makes comparison diffi-

Table 3. Average firing frequencies in simulation time at different time step resolutions

Time step resolution    Small network    Medium network    Large network
0.1 ms                  109 Hz           72 Hz             40 Hz
1.0 ms                  79 Hz            58 Hz             30 Hz
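The time-step dependence shown in Table 3 comes from the clock-driven half of the hybrid scheme. As a reminder of how that scheme fits together, here is a minimal sketch of a hybrid loop in which only the neurons that actually receive a spike are updated, and per-connection delays are handled with a circular buffer of the kind described earlier. The network, weights and delays are invented for the example; this is not SpikeStream code.

```python
class DelayBuffer:
    """Circular buffer that holds spikes until their delivery time step."""
    def __init__(self, max_delay):
        self.slots = [[] for _ in range(max_delay + 1)]
        self.now = 0
    def schedule(self, item, delay):
        self.slots[(self.now + delay) % len(self.slots)].append(item)
    def advance(self):
        # Deliver everything due at the current step and move the cursor on.
        due, self.slots[self.now] = self.slots[self.now], []
        self.now = (self.now + 1) % len(self.slots)
        return due

def run_hybrid(n_steps, connections, initial_firing, threshold=1.0, max_delay=4):
    """connections maps a neuron to (target, weight, delay-in-steps) tuples."""
    buf = DelayBuffer(max_delay)
    voltage = {}
    history = []
    fired = list(initial_firing)               # neurons firing in the first step
    for _ in range(n_steps):                   # fixed time step (clock-driven)
        for pre in fired:                      # propagate last step's spikes
            for post, weight, delay in connections.get(pre, []):
                buf.schedule((post, weight), delay)
        fired = []
        for post, weight in buf.advance():     # event-driven part: only the
            voltage[post] = voltage.get(post, 0.0) + weight  # neurons that
            if voltage[post] >= threshold:                   # receive a spike
                voltage[post] = 0.0                          # are updated
                fired.append(post)
        history.append(fired)
    return history
```

Because scheduling a delayed spike is a single append into the buffer slot for its delivery step, delays cost no more than immediate delivery, which is why the circular buffer avoids the enqueue and dequeue overheads of a sorted event queue.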


Figure 5. Number of spikes processed per second of real time for one and two machines using time step
resolutions of 0.1 and 1 ms

cult, but Delorme and Thorpe (2003) claim that SpikeNET can simulate approximately 19.6 million spike events per second, whereas SpikeStream can handle a maximum of 1.2 million spike events per second on a single PC for the networks tested (see Figure 5). This measurement for SpikeNET was obtained using a substantially slower machine, and its performance would probably be at least 40 million spike events per second today.

External Devices

One of the unique features of SpikeStream is that it can pass spikes over a network to and from external devices, such as cameras and real and virtual robots, in a number of different ways:

• Synchronized TCP. Spikes are exchanged with the device at each time step. SpikeStream and the external device only move forward when they have both completed their processing for the time step.
• Loosely synchronized UDP. Spikes are sent and received continuously to and from the external device with the rate of the simulation determined by the rate of arrival of the spike messages.
• Unsynchronized UDP. Spikes are sent and received continuously from the external device. This option is designed for live work with robots.

The main external device that has been used and tested with SpikeStream is the SIMNOS virtual robot (see Figure 6), which is simulated in soft real time using NVIDIA PhysX and has an internal structure based on the human musculoskeletal system (Gamez, Newcombe, Holland and Knight, 2006).13 Visual data, muscle lengths and joint angles are encoded by SIMNOS into spikes that are sent across a computer network to SpikeStream, where they are used to directly fire neurons or change their voltage. SIMNOS also receives muscle length data from SpikeStream in the form of spiking neural events, which are used


Figure 6. SIMNOS virtual robot. The lines are the virtual muscles whose lengths are sent as spikes to
SpikeStream. The outlines of spheres with arrows are the joint angles, which are also sent as spikes to
SpikeStream

to control the virtual robot. Together SIMNOS and SpikeStream provide an extremely powerful way of exploring sensory and motor processing and integration.

Applications

SpikeStream's performance and flexibility make it suitable for a number of applications:

• Biologically-inspired robotics. Spiking neural networks developed in SpikeStream can be used to process sensory data from real or virtual robots and generate motor patterns. A good example of this type of work is that carried out by Krichmar et al. (2005) on the Darwin series of robots.
• Genetic algorithms. The openness of SpikeStream's architecture makes it easy to write genetic algorithms that edit the database to create new neural networks and automatically run simulations to identify which network is best at performing a specific task.
• Models of consciousness and cognition. Dehaene and Changeux (2005) and Shanahan (2008) have built models of consciousness and cognition based on the brain that could be implemented in SpikeStream. A comprehensive review of this type of work can be found in Gamez (2008a).
• Neuromorphic engineering. SpikeStream's dynamic class loading architecture makes it easy to test neuron and synapse models prior to their implementation in silicon. Initial work has already been done on enabling SpikeStream to read and write address event representation (AER) events, which would enable it to be integrated into AER chains, such as those developed by the CAVIAR Project.
• Teaching. Once installed SpikeStream is well documented and easy to use, which makes it a good tool for teaching students


Figure 7. Experimental setup with the eye of SIMNOS in front of a red and blue cube. Spikes sent from
the network control the pan and tilt of SIMNOS’s eye and spikes containing red or blue visual informa-
tion are received from SIMNOS and used to stimulate neurons corresponding to the location of the red
or blue data in the visual field

about biologically structured neural networks and robotics.

In a recent set of experiments carried out by Gamez (2008b) SpikeStream was used to simulate a network with 17,544 neurons and 698,625 connections that controlled the eye movements of the SIMNOS virtual robot. The experimental setup is shown in Figure 7.

The network was organized into ten layers whose overall purpose was to direct SIMNOS's eye towards 'positive' red features of its environment and away from 'negative' blue objects. To carry out this task it included an 'emotion' layer that responded differently to red and blue stimuli and neurons that learnt the association between motor actions and visual input. These neurons were used to 'imagine' different eye movements and select the ones that were predicted to result in a positive visual stimulus. The connections between the layers are shown in Figure 8.

The first part of the experiments was a training phase in which the network learnt the association between motor output and visual input. During this training phase spontaneous activity in Motor Cortex changed the position of SIMNOS's eye, copies of the motor signals were sent from Motor Integration to Red/Blue Sensorimotor, and the synapse classes on these connections used Brader et al.'s (2007) rule to learn the association between an eye movement and red and blue visual input.

After training, Motor Cortex moved SIMNOS's eye around at random until a blue object appeared in its visual field. This switched the network into its offline 'imagination' mode, in which it generated motor patterns and 'imagined' the red or blue visual input that was associated with these potential eye movements. This process continued until it 'imagined' a red visual stimulus that positively stimulated Emotion. This removed the inhibition, and SIMNOS's eye was moved to look at the red stimulus. Full details about the experiments are given in Gamez (2008b).

This work was inspired by other simulations of the neural correlates of consciousness, such as Dehaene and Changeux (2005) and Shanahan (2008), and it shows how spiking neural networks can model cognitive characteristics, such as imagination and emotion, and control eye movements in an 'intelligent' way. In the future more sophisticated versions of this network might be able to teach themselves new behaviours by 'imagining' different motor actions and choosing to execute the one that has the most positive effect on their emotion systems.

Release

SpikeStream is available for free download under the terms of the GPL license. The current (0.1) release has 25,000 source lines of code,14 full source code documentation, a mailing list for SpikeStream users, and a comprehensive 80


Figure 8. Neural network with SIMNOS eye. Arrows indicate connections within layers, between layers
or between the neural network and SIMNOS

page manual. SpikeStream is also available pre-installed on a virtual machine, which works on all operating systems supported by VMware and can be run using the free VMware Player. More information about the release is available at the SpikeStream website.

FUTURE TRENDS

The measurement of living neurons is likely to be a problem for many years to come, and so neural simulations will continue to play an important role in neuroscience, where they provide a valuable way of testing theories about the brain. As computer power increases it should be possible to simulate networks with millions of neurons based on Hodgkin-Huxley models, and this type of work is likely to be carried out using the continuous approach. Although point neurons are less biologically realistic than Hodgkin-Huxley models, they are much easier to simulate and understand, and over the next few years there is likely to be much more research that models millions and possibly billions of point neurons using the hybrid approach.

Large simulations are rarely needed in less biologically inspired applications, such as machine learning and robotics, where relatively small numbers of neurons can be used to identify patterns or control a robot arm - for example, see Tani (2008). For this type of research there will be an increasing need for more user-friendly simulators, such as SpikeStream, that can exchange data with external devices and allow the user to adjust the behaviour of the network as it runs.

Over the next few years there is likely to be significant progress with the development of large scale neural models in silicon. One example of this type of work is the Neurogrid system, which


uses analogue silicon circuits to emulate neurons' ion-channels and routes the spikes digitally.15 Neurogrid is still under development and it should eventually be able to model a million neurons and six billion synapses (Silver, Boahen, Grillner, Kopell and Olsen, 2007). Another significant hardware project is SpiNNaker, which is attempting to simulate a billion spiking neurons in real time using a large array of power-efficient processors (Furber, Temple and Brown, 2006).

In the longer term it is hoped that it will become possible to validate biologically inspired neural models using data from real brains. The low spatial and/or temporal resolution of our current measurement techniques makes this almost impossible to carry out at present, although it is possible to ground simulation work using data recorded from in vitro neuron preparations. One of the major problems with the validation of brain-inspired neural systems is that the activity of even the most basic natural neural network is driven by stimulation from the environment and used to control the behaviour of an organism. This will make the use of real or virtual robots an important part of the testing of neural models in the future.

CONCLUSION

This chapter has explained why simulation plays an important role in neuroscience research and out-

are a number of ways in which this issue can be substantially overcome, and a new generation of simulators are emerging that can model large numbers of neurons and synapses using the hybrid approach.

ACKNOWLEDGMENT

Many thanks to Owen Holland for feedback, support and advice about this work. The interface between SIMNOS and SpikeStream was developed in collaboration with Richard Newcombe, who designed the spike conversion methods, and I would also like to thank Renzo De Nardi and Hugo Gravato Marques for many useful suggestions and discussions. This research was funded by the Engineering and Physical Science Research Council Adventure Fund (GR/S47946/01).

REFERENCES

Banks, J., Carson, J. S., II, Nelson, B. L., & Nicol, D. M. (2005). Discrete event simulation (4th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.

Binzegger, T., Douglas, R. J., & Martin, K. A. C. (2004). A quantitative map of the circuit of cat primary visual cortex. The Journal of Neuroscience, 24(39), 8441-8453. doi:10.1523/JNEUROSCI.1400-04.2004

Brader, J. M., Senn, W., & Fusi, S. (2007).
networks in computer science and engineering. Learning real-world stimuli in a neural network
Although the continuous simulation approach has with spike-driven synaptic dynamics. Neural
accuracy and performance limitations, the nature Computation, 19(11), 2881–2912. doi:10.1162/
and complexity of the Hodgkin-Huxley equations neco.2007.19.11.2881
make continuous simulation the best choice for this
Brette, R., Rudolph, M., Carnevale, T., Hines,
type of model. Discrete event simulation can be
M., & Beeman, D., Bower et al. (2007). Simula-
an efficient and accurate way of simulating point
tion of networks of spiking neurons: A review of
neurons, but it can be complicated to implement
tools and strategies. Journal of Computational
and suffers from performance issues when there
Neuroscience, 23, 349–398. doi:10.1007/s10827-
are a large number of events. Whilst the hybrid
007-0038-6
approach also suffers from inaccuracies, there

355
Brown, R. (1988). Calendar queues: A fast O(1) priority queue implementation for the simulation event set problem. Communications of the ACM, 31(10), 1220–1227. doi:10.1145/63039.63045

CAVIAR project website (n.d.). Available at http://www.imse.cnm.es/caviar/.

Claverol, E., Brown, A., & Chad, J. (2002). Discrete simulation of large aggregates of neurons. Neurocomputing, 47, 277–297. doi:10.1016/S0925-2312(01)00629-4

CRONOS project website (n.d.). Available at www.cronosproject.net.

Dehaene, S., & Changeux, J.-P. (2005). Ongoing spontaneous activity controls access to consciousness: A neuronal model for inattentional blindness. Public Library of Science Biology, 3(5), 910–927.

Delorme, A., & Thorpe, S. J. (2003). SpikeNET: An event-driven simulation package for modeling large networks of spiking neurons. Network: Computation in Neural Systems, 14, 613–627. doi:10.1088/0954-898X/14/4/301

Diesmann, M., & Gewaltig, M.-O. (2002). NEST: An environment for neural systems simulations. In V. Macho (Ed.), Forschung und wissenschaftliches rechnen. Heinz-Billing-Preis, GWDG-Bericht.

Furber, S. B., Temple, S., & Brown, A. D. (2006). High-performance computing for systems of spiking neurons. In Proceedings of the AISB'06 Workshop on GC5: Architecture of Brain and Mind, (Vol. 2, pp. 29-36). Bristol: AISB.

Gamez, D. (2007). SpikeStream: A fast and flexible simulator of spiking neural networks. In J. M. de Sá, L. A. Alexandre, W. Duch & D. P. Mandic (Eds.), Proceedings of ICANN 2007, (Vol. 4668, pp. 370-79). Berlin: Springer Verlag.

Gamez, D. (2008a). Progress in machine consciousness. Consciousness and Cognition, 17(3), 887–910. doi:10.1016/j.concog.2007.04.005

Gamez, D. (2008b). The development and analysis of conscious machines. Unpublished doctoral dissertation, University of Essex, UK. Available at http://www.davidgamez.eu/mc-thesis/

Gamez, D., Newcombe, R., Holland, O., & Knight, R. (2006). Two simulation tools for biologically inspired virtual robotics. Proceedings of the IEEE 5th Chapter Conference on Advances in Cybernetic Systems (pp. 85-90). Sheffield, UK: IEEE.

Gerstner, W., & Kistler, W. (2002). Spiking neuron models. Cambridge, UK: Cambridge University Press.

Haydon, P. (2000). Neuroglial networks: Neurons and glia talk to each other. Current Biology, 10(19), 712–714. doi:10.1016/S0960-9822(00)00708-9

Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology, 117, 500–544.

Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14, 1569–1572. doi:10.1109/TNN.2003.820440

Izhikevich, E. M., & Edelman, G. M. (2008). Large-scale model of mammalian thalamocortical systems. Proceedings of the National Academy of Sciences of the United States of America, 105, 3593–3598. doi:10.1073/pnas.0712231105

Krichmar, J. L., Nitz, D. A., Gally, J. A., & Edelman, G. M. (2005). Characterizing functional hippocampal pathways in a brain-based device as it solves a spatial memory task. Proceedings of the National Academy of Sciences of the United States of America, 102(6), 2111–2116. doi:10.1073/pnas.0409792102
Maas, W., & Bishop, C. M. (Eds.). (1999). Pulsed neural networks. Cambridge, MA: The MIT Press.

Marian, I. (2003). A biologically inspired computational model of motor control development. Unpublished MSc thesis, University College Dublin, Ireland.

Markram, H. (2006). The Blue Brain project. Nature Reviews. Neuroscience, 7, 153–160. doi:10.1038/nrn1848

Mattia, M., & Del Guidice, P. (2000). Efficient event-driven simulation of large networks of spiking neurons and dynamical synapses. Neural Computation, 12, 2305–2329. doi:10.1162/089976600300014953

Morrison, A., Mehring, C., Geisel, T., Aertsen, A., & Diesmann, M. (2005). Advancing the boundaries of high-connectivity network simulation with distributed computing. Neural Computation, 17, 1776–1801. doi:10.1162/0899766054026648

MVASpike [computer software] (n.d.). Available from http://mvaspike.gforge.inria.fr/.

NEST [computer software] (n.d.). Available from http://www.nest-initiative.org.

Newcombe, R. (2007). SIMNOS virtual robot [computer software]. More information available from www.cronosproject.net.

NVIDIA PhysX [computer software] (n.d.). Available from http://www.nvidia.com/object/nvidia_physx.html.

Reutimann, J., Giugliano, M., & Fusi, S. (2003). Event-driven simulation of spiking neurons with stochastic dynamics. Neural Computation, 15, 811–830. doi:10.1162/08997660360581912

Shanahan, M. P. (2008). A spiking neuron model of cortical broadcast and competition. Consciousness and Cognition, 17(1), 288–303. doi:10.1016/j.concog.2006.12.005

Silver, R., Boahen, K., Grillner, S., Kopell, N., & Olsen, K. L. (2007). Neurotech for neuroscience: Unifying concepts, organizing principles, and emerging tools. The Journal of Neuroscience, 27(44), 11807–11819. doi:10.1523/JNEUROSCI.3575-07.2007

SpikeSNNS [computer software] (n.d.). Available from http://cortex.cs.may.ie/tools/spikeNNS/index.html.

SpikeStream [computer software] (n.d.). Available from http://spikestream.sf.net.

Tani, J., Nishimoto, R., & Paine, R. W. (2008). Achieving "organic compositionality" through self-organization: Reviews on brain-inspired robotics experiments. Neural Networks, 21(4), 584–603. doi:10.1016/j.neunet.2008.03.008

Tonnelier, A., Belmabrouk, H., & Martinez, D. (2007). Event-driven simulations of nonlinear integrate-and-fire neurons. Neural Computation, 19, 3226–3238. doi:10.1162/neco.2007.19.12.3226

VMware Player [computer software] (n.d.). Available from http://www.vmware.com/products/player/.

Wheeler, D. A. (n.d.). SLOCCount [computer software]. Available from http://www.dwheeler.com/sloc/.

KEY TERMS AND DEFINITIONS

Axon: When a neuron fires it sends a voltage spike along a fibre known as an axon, which connects to the dendrites of other neurons at a junction called a synapse.

Continuous Simulation: An approach to simulation in which updates to the model are driven by the advance of a simulation clock.

Dendrite: Each neuron has a large number of fibres called dendrites that receive spikes from other neurons. The axon of one neuron connects
to the dendrite of another at a junction called a synapse.

Discrete Event Simulation: An approach to simulation in which updates to the model are event-driven instead of clock-driven. This type of simulation typically works by maintaining a queue of events that are sorted by the time at which they are scheduled to occur.

Hybrid Simulation: An approach to simulation in which a continuous simulation clock is maintained and updates to the model are event-driven.

Neuron: A cell in the brain that carries out information processing. There are approximately 80 billion neurons in the human brain.

Spike: A pulse of electrical voltage sent from one neuron to another to communicate information.

SpikeStream: Software for the hybrid simulation of spiking neural networks.

STDP (Spike Time Dependent Plasticity): A learning rule in which the weight of a synapse is increased if a spike arrives prior to the firing of a neuron, and decreased if the spike arrives after the firing of a neuron.

Synapse: A junction between the axon of one neuron and the dendrite of another.

ENDNOTES

1. The brain also contains 100 billion cells called glia, which have traditionally been thought to play a supporting role. However some people, such as Haydon (2000), have suggested that glia might contribute to the brain's information processing.
2. Although true continuous simulation can only be carried out on an analogue system, the term "continuous simulation" is commonly used to refer to the approximation of a continuous system using a digital computer and an arbitrarily small time step.
3. A circular buffer is commonly implemented as an array combined with an index that advances at each time step and returns to zero when it reaches the end of the array. At each time step all of the spikes at the current index position are applied to the network and new spikes are delayed by inserting them at a position in the array that is an appropriate number of steps ahead of the current index.
4. This calculation is based on Brette et al. (2007).
5. The effect of the time step resolution on network behavior can be seen in Table 3.
6. The clustering of spikes at time steps affects STDP learning because it can be ambiguous which spikes come before or after the firing of a neuron when several arrive at the same time step.
7. See Mattia and Del Guidice (2000) for a more detailed discussion of this approach.
8. This calculation is based on Brette et al. (2007).
9. These strategies are used by the NEST hybrid simulator - see Morrison, Mehring, Geisel, Aertsen and Diesmann (2005).
10. See Morrison et al. (2005) for more on this approach.
11. However, SpikeNET was developed into a commercial image recognition product.
12. Full details can be found in Gamez (2007).
13. SIMNOS was developed by Newcombe (2007).
14. This was calculated using Wheeler's SLOCCount software.
15. This makes Neurogrid a true continuous simulation since the properties of the transistors in the silicon are used to emulate the behaviour of neurons' ion channels without the need for variables that are stored in digital format and advanced in discrete time steps.


Chapter 16
An Integrated Data Mining
and Simulation Solution
Mouhib Alnoukari
Arab Academy for Banking and Financial Sciences, Syria

Asim El Sheikh
Arab Academy for Banking and Financial Sciences, Jordan

Zaidoun Alzoabi
Arab Academy for Banking and Financial Sciences, Syria

ABSTRACT
Simulation and data mining can provide managers with decision support tools. However, the heart of data mining is knowledge discovery, as it provides skilled practitioners with the power to discover relevant objects and the relationships that exist between these objects, while simulation provides a vehicle to represent those objects and their relationships. In this chapter, the authors propose an intelligent DSS framework based on data mining and simulation integration. The main output of this framework is the increase of knowledge. Two case studies are presented. The first is on car market demand simulation; the simulation model was built using neural networks to get the first set of prediction results, and the data mining methodology used is ANFIS (Adaptive Neuro-Fuzzy Inference System). The second case study demonstrates how data mining and simulation can be applied to assure quality in higher education.

DOI: 10.4018/978-1-60566-774-4.ch016

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

INTRODUCTION

Data mining techniques provide people with new power to research and manipulate the existing large volume of data. A data mining process discovers interesting information from the hidden data that can either be used for future prediction and/or for intelligently summarizing the details of the data (Mei, and Thole, 2008).

On the other hand, simulation is a powerful technique for systems representation, because it provides a concise way for knowledge encapsulation. Simulation can be used effectively to support managers in decision making, especially in situations characterized by uncertainty. Simulation can provide realistic models for testing real-world decision making scenarios (what-if scenarios), and comparing alternative decisions in order to choose the best solution affecting company's success, by
enhancing profitability, market share, and customer satisfaction.

Simulation methodologies, such as what-if analysis, can provide the engine to analyze company's policy changes; for example, adding new tellers to a bank, adding a new airline route, or changing the number of machines in a job shop (Better, Glover, and Laguna, 2007).

Using data mining can help recalibrate system simulation models in many real world applications, as it provides the insights gleaned from the hidden and interesting data patterns.

This chapter is organized as follows: the next section presents the data mining and business intelligence techniques used in conjunction with simulation; different experiences on the integration of simulation and data mining are then presented; next, we propose an intelligent DSS framework based on data mining and simulation integration; finally, the proposed framework is validated using two case studies, one on car market demand simulation and one on applying data mining in assuring quality in higher education.

Introduction to Data Mining and Business Intelligence

It is noted that the number of databases keeps growing rapidly because of the availability of powerful and affordable database systems. Millions of databases have been used in business management, government administration, scientific and engineering data management, and many other applications. This explosive growth in data and databases has generated an urgent need for new techniques and tools that can intelligently and automatically transform the processed data into useful information and knowledge, which provides enterprises with a competitive advantage and a working asset that delivers new revenue, and enables them to better service and retain their customers (Stolba, and Tjoa, 2006).

In 1996, the Organization for Economic Co-operation and Development (OECD) redefined "knowledge-based economies" as "economies which are directly based on the production, distribution and use of knowledge and information" (Weiss, Buckley, Kapoor, and Damgaard, 2003). According to this definition, Data Mining and Knowledge Management, and more generally Business Intelligence, should be foundations for building the knowledge economy. Business Intelligence is an umbrella term that combines architectures, tools, data bases, applications, practices, and methodologies (Turban, Aronson, Liang, and Sharda, 2007; Cody, Kreulen, Krishna, and Spangler, 2002). Weiss et al. (2003) defined Business Intelligence as the "combination of data mining, data warehousing, knowledge management, and traditional decision support systems".

Business Intelligence is becoming vital for many organizations, especially those which have extremely large amounts of data (Shariat, and Hightower, 2007; Kerdprasop, and Kerdprasop, 2007). Organizations such as Continental Airlines have seen investment in Business Intelligence generate increases in revenue and cost savings equivalent to a 1000% return on investment (ROI) (Zack, Rainer, and Marshall, 2007). The measure of any business intelligence solution is its ability to derive knowledge from data. The challenge is met with the ability to identify patterns, trends, rules, and relationships from volumes of information too large to be processed by human analysis alone.

Decision makers depend on detailed and accurate information when they have to make decisions. Business Intelligence can provide decision makers with such accurate information, and with the appropriate tools for data analysis (Jermol, Lavrac, and Urbanci, 2003; Negash 2004; Lau, Lee, Ho, and Lam, 2004). It is the process of transforming various types of business data into meaningful information that can help decision makers at all levels get deeper insight into the business (Power, 2007; Girija, and Srivatsa 2006; Gartner Group, 1996).
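The what-if analysis mentioned above, such as adding new tellers to a bank, can be illustrated with a toy simulation. The model, the workload and all numbers below are invented for illustration and are not taken from the chapter or from Better, Glover, and Laguna (2007):

```python
def average_wait(arrival_times, service_time, num_tellers):
    """Toy bank model: average customer wait for a given number of tellers."""
    free_at = [0.0] * num_tellers             # when each teller next becomes free
    total_wait = 0.0
    for arrival in sorted(arrival_times):
        teller = free_at.index(min(free_at))  # earliest available teller
        start = max(arrival, free_at[teller])
        total_wait += start - arrival         # customer waits if all are busy
        free_at[teller] = start + service_time
    return total_wait / len(arrival_times)

# What-if scenario: the same workload under alternative staffing decisions.
arrivals = [0, 1, 2, 3, 4, 5, 6, 7]           # one customer per minute
wait_two = average_wait(arrivals, service_time=3, num_tellers=2)
wait_three = average_wait(arrivals, service_time=3, num_tellers=3)
```

Comparing `wait_two` and `wait_three` is the kind of alternative-decision comparison the chapter describes: for this particular workload the three-teller scenario eliminates waiting entirely, so the simulation quantifies what the extra teller buys.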
Any Business Intelligence application can be divided into the following three layers:

1. Data layer, responsible for storing structured and unstructured data for decision support purposes. Structured data is usually stored in Operational Data Stores (ODS), Data Warehouses (DW), and Data Marts (DM) (Alnoukari, and Alhussan 2008). Unstructured data are handled using Content and Document Management Systems (Baars, and Kemper 2007). Data are extracted from operational data sources, e.g. SCM, ERP, CRM, or from external data sources, e.g. market research data, and are transformed and loaded into the DW by ETL (Extract, Transform and Load) tools.
2. Logic layer, which provides the functionality to analyze data and provide knowledge. This includes OLAP (OnLine Analytical Processing) and data mining.
3. Access layer, realized by some sort of software portal (Business Intelligence portal).

According to the Wikid Hierarchy (Steele, 2002), intelligence is defined as "knowledge that has been evaluated and relevant insights and understandings extracted". In this context, the intelligence process is "about making informed assessments of what will happen, when and how", so it relies on how to manage knowledge.

Knowledge can be created and consumed through various types of activities such as: conversation with one another, searching of information in huge databases, just-in-time learning, and continuous education (Girija, and Srivatsa, 2006). Business Intelligence is a good environment in which 'marrying' business knowledge with data mining could provide better results.

Knowledge can enrich data by making it "intelligent", and thus more manageable by data mining (Graco, Semenova, and Dubossarsky, 2007). It considers expert knowledge as an asset that can provide data mining with guidance for the discovery process. Put simply, "data mining cannot work without knowledge". A knowledge management system must address the following three elements: people, business rules, and technology (Girija, and Srivatsa, 2006). Business intelligence is an important technology that can help decision makers take precious decisions quickly. Business intelligence includes automated tools for analyzing and mining huge amounts of data (Knowledge Discovery in Databases), thus transforming data into knowledgeable information (Weiss, Buckley, Kapoor, and Damgaard, 2003; Anand, Bell, and Hughes, 1995).

Data mining is the search for relationships and distinct patterns that exist in datasets but are "hidden" among the vast amount of data (Turban, Aronson, Liang, and Sharda, 2007; Jermol, Lavrac, and Urbanci, 2003). Data mining can be effectively applied to many areas (Alnoukari, and Alhussan 2008; Watson, Wixom, Hoffer, Lehman, and Reynolds, 2006), including marketing (direct mail, cross-selling, customer acquisition and retention), fraud detection, financial services (Srivastava, and Cooley, 2003), inventory control, fault diagnosis, credit scoring (Shi, Peng, Kou, and Chen, 2005), network management, scheduling, and medical diagnosis and prognosis. There are two main sets of tools used for data mining (Corbitt, 2003): discovery tools (Chung, Chen, and Nunamaker, 2005; Wixom, 2004), and verification tools (Grigori, Casati, Castellanos, Sayal, and Shan, 2004). Discovery tools include data visualization, neural networks, cluster analysis and factor analysis. Verification tools include regression analysis, correlations, and predictions.

Data mining applications are characterized by the ability to deal with the explosion of business data and accelerated market changes; these characteristics help provide powerful tools for decision makers, and such tools can be used by business users (not only statisticians) for analyzing huge amounts of data for patterns and trends.
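The three-layer split described above can be sketched end to end in miniature: an ETL step feeding a tiny warehouse (data layer), an OLAP-style roll-up (logic layer), and a result a portal would display (access layer). The records and field names below are invented for illustration, not taken from the chapter:

```python
# Data layer: operational records, e.g. exported from an ERP or CRM source.
operational_source = [
    {"region": "north", "product": "A", "amount": "120"},
    {"region": "north", "product": "B", "amount": "80"},
    {"region": "south", "product": "A", "amount": "200"},
]

def etl(records):
    """Extract the records, Transform them (type cleaning), Load the warehouse."""
    warehouse = []
    for record in records:                    # extract
        row = dict(record)
        row["amount"] = float(row["amount"])  # transform: fix types
        warehouse.append(row)                 # load
    return warehouse

# Logic layer: a minimal OLAP-style roll-up along one dimension.
def rollup(warehouse, dimension):
    totals = {}
    for row in warehouse:
        totals[row[dimension]] = totals.get(row[dimension], 0.0) + row["amount"]
    return totals

# Access layer: the figures a Business Intelligence portal would present.
sales_by_region = rollup(etl(operational_source), "region")
```

Changing the `dimension` argument, e.g. to `"product"`, gives the other roll-up a portal user might drill into; a real stack would of course use a database and dedicated ETL and OLAP tools rather than dictionaries.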
An Integrated Data Mining and Simulation Solution

Consequently, data mining has become a research knowledge that is embodied in a data set, as it may
area with increasing importance and it involved discover a lot of patterns that are irrelevant for the
in determining useful patterns from collected data analyzer. Refinement and variable changes would
or determining a model that fits best on the col- be necessary for mining that runs to acquire the
lected data (Fayyad, Shapiro, and Smyth, 1996; required knowledge.
Mannila, 1997; Okuhara, Ishii, and Uchida, 2005). Data mining process contains different steps
Different classification schemes can be used to including: data cleansing, data integration (into
categorize data mining methods and systems data warehouse), data mining, pattern evaluation,
based on the kinds of databases to be studied, and pattern presentation (Alnoukari, and Alhus-
the kinds of knowledge to be discovered, and the san, 2008).
kinds of techniques to be utilized (Lange, 2006; Data mining can use two types of data (Painter,
Smith, 2005). Erraguntla, Hogg, and Beachkofski, 2006): con-
A data mining task includes pre-processing, the tinuous data, and categorical (discrete) data.
actual data mining process and post-processing. Data mining algorithms are separated into
During the pre-processing stage, the data mining two main classes (Garcia, Roman, Penalvo, and
problem and all sources of data are identified, and Bonilla, 2008; Morbitzer, Strachan, and Simpson,
a subset of data generated from the accumulated 2004):
data. To ensure quality the data set is processed
to remove noise, handle missing information and 1. Supervised algorithms, or the predictive
transformed it to an appropriate format (Nayak, algorithms, where the object is to detect pat-
and Qiu, 2005). A data mining technique or a tern in present data using a learning stage.
combination of techniques appropriate for the type Prediction target is a special attribute (named
of knowledge to be discovered is applied to the label). The prediction model is based on
derived data set. The last stage is post-processing encoding the relationship between the label
in which the discovered knowledge is evaluated and the other attributes, in order to predict
and interpreted. new, unlabeled attribute. If the label is dis-
Data mining techniques is a way to extract pat- crete, the algorithm is named classification.
terns (knowledge) from data (Alnoukari, Alzoabi, If it’s continuous, the algorithm is named
and Hanna, 2008; Morbitzer, Strachan, and Simp- regression.
son, 2004). We are using the terms knowledge and 2. Unsupervised algorithms, or the descriptive
pattern interchangeably as patterns inside the huge algorithm, where the object is to detect pat-
databases reflect what-so-called “organizational tern in present data using a learning stage.
tacit knowledge”. Knowledge could be one of These algorithms belong to knowledge
two states: explicit or tacit (Nonaka, Toyama, and discovery modeling. Example of such al-
Konno, 2000). Explicit knowledge is objective, gorithms is association rules.
explicitly stated, easy to transfer, communicate,
and codify. On the other hand tacit knowledge is Different data mining techniques are used in
subjective, implicit in human’s minds, and dif- conjunction with simulation data analysis (Mor-
ficult to transfer, communicate and codify. The bitzer, Strachan, and Simpson, 2004; Painter,
patterns that are discovered in the databases using Erraguntla, Hogg, and Beachkofski, 2006):
data mining -or any other methodology- reflect
organization-wide processes, procedures, and • Association analysis discovers associa-
behaviors that are not easy to explain or be fig- tion rules that occur together in a given
ured out. Data mining can’t extract all available data set. Association mining provides the

362
An Integrated Data Mining and Simulation Solution

techniques needed for “basket analysis”, (Kuhlmann, Vetter, Lubbing, and Thole, 2005;
where the customer is provided with a set Mei, and Thole, 2008), software project manage-
of correlated items that she/he may pur- ment (Garcia et al., 2008), life cycle cost (Painter,
chase or set of services she/he would like Erraguntla, Hogg, and Beachkofski, 2006), and
to have. others.
• Classification analysis discovers the CAE-Bench is one of the most famous simu-
rules that are relevant to one particu- lation management tools (it is currently used at
lar variable. Classification methods in- BMW crash department) which applies the data
clude Bayesian Networks (BN), Artificial mining methods on the data resultant from car
Neural Networks (ANN), Decision Trees crash simulation (Kuhlmann, Vetter, Lubbing,
for Classifications (CARD, CHAID), and and Thole, 2005).
Multiple Discriminate Analysis (MDA). CAE-Bench stores data about models as com-
• Clustering analysis is the most suitable plete vehicles. Such models are processed as finite
data mining technique that is used for the element models (FE models) made up by about
analysis of simulation results. It attempts to 500.000 independent nodes and elements (called
discover the set of groups in a data set in a input deck). Each input deck (a car composed
way that maximizes the similarity between of about 1200 parts) is disassembled in order to
items in the same group, and minimizes the analyze the geometry of its parts. Data about parts
similarity between items in any two differ- and their geometry analysis are then provided
ent groups (Mei, and Thole, 2008). It dif- into a data mining working environment (named
fers from classification in the way that no SCAI-DM). Data preparation is the essential
assumptions are made about the number step for data mining as it costs about 80-90% of
of groups (classes), or any other structure. the total efforts. The data need to be processed,
Clustering methods include hierarchical cleaned, checked for consistency, and combined
clustering, and neural net-based methods. in order to be processed by the data mining tool
(Figure 1).
Classical data mining considers the existence After completing data preparation step, simi-
of data in a repository (such as data bases). Once larity analysis and data reduction are conducted
data is generated, it will be considered as a static in order to provide the main attribute for data
set that can be analyzed. This means that data will mining. Attribute selection method was used in
not be updated or modified. order to reduce the vast amount of geometrical
Dynamic data mining features solve the static modifications to a small number of similar ones.
issues of classical data mining application by Decision tree method was also employed in order to
analyzing the processes in action (Better, Glover, demonstrate the influence of design modification
and Laguna, 2007). These features provide capa- on a range of values, and leading to meaningful
bilities of handling uncertainty, and find solutions results.
to complex problems. The last step was deployment of data mining
reports, and importing them into CAE-Bench
simulation system in order to let them accessible
DATA MINING AND SIMULATION: to other users.
PREVIOUS WORKS Mei and Thole (2008) described the use of
data mining algorithms (especially clustering
Combining data mining and simulation tech- algorithm) to measure the scatter of parallel
niques was applied on different area: car crash simulation results to explain the instability na-

363
An Integrated Data Mining and Simulation Solution

Figure 1. CAE-Bench simulation tool. SCAI-DM working environment is taking its input from disas-
sembled parts data and geometry based meta data of the engineer’s working environment adapted from
(Kuhlmann, Vetter, Lubbing, and Thole, 2005)

ture of parallel crash simulation, and the amount of scatter in the results. The use of data mining algorithms was chosen to avoid the time-consuming task of numerical crash simulation, as the simulation results change from one run to another even though the simulation parameters are identical. They also proposed solutions for the difficulties of direct clustering algorithms, especially the non-deterministic nature of the clustering assignment, the huge amount of simulation data, and the huge number of data object comparisons needed for time dependency analysis.

Painter et al. (2006) combined simulation, data mining, and knowledge-based techniques in order to support decisions related to aircraft engine maintenance in the US Department of Defense. They developed a methodology for Life-Cycle Cost that combines discrete event simulation with data mining techniques to develop and maintain cost estimation parameters. Simulation output is used as an input for data mining analysis to understand and measure system behavior, especially sub-system interactions and factors affecting life-cycle metrics. The process involves simulating the fleet of engines for a set of planned operational scenarios, to collect maintenance decisions and their cost implications over the overall service life. Multiple simulation runs are conducted to account for the stochastic nature of maintenance events and decisions. Simulation outputs and cost history data are then mined in order to find the relationships that best characterize the Life-Cycle Cost implication of each maintenance decision. Such parameter relationships need periodic refinement (a learning phase) in order to characterize changes in environments, maintenance practices, reliability of engine components, and other factors.

Simulation can provide results that cannot be obtained through any maintenance history data collection system. Once data is generated, data mining techniques are used to provide the key patterns and parameter relationships. This can


be used to define models, from which Life-Cycle Cost can be determined parametrically, in order to provide more accurate decisions. The main objective of the data mining module is to determine the drivers affecting Life-Cycle Cost. The data mining techniques used for such objectives are regression, classification, and clustering.

Data mining methods (especially association rules) were also applied to software project simulation (Garcia et al., 2008). Software project management is affected by different factors; the main factors are quality, effort, duration and cost, and these affect managers' decisions about a project. The main issue when managers have to take decisions about a project is that they have to consider a great number of variables and the relations between them. Simulation of software projects using dynamic models provides a good tool to recognize the impact of these software project management variables and the relations between them.

A Software Project Simulator (SPS) provides managers with a powerful tool that enables them to try different policies and take decisions according to the results obtained. Different types of analysis can be done using an SPS at different stages (a priori analysis, project monitoring, and post-mortem analysis). An SPS based on dynamic models enables managers to set simulation environment parameters and functions. Dynamic models provide the ability to express restrictions among variables that change over time; these restrictions are the baseline for relationship analysis among different project factors. Although an SPS based on dynamic models is able to manage the large number of project parameters, it may not be able to find the best parameter combinations for a specified situation, as the number of possible combinations is huge.

The use of data mining techniques such as association rules, machine learning, and evolutionary algorithms could help in analyzing the influence of some management policy factors on some project attributes. The number of generated patterns is as high as the number of possible parameter combinations, but we only concentrate on patterns with high-confidence rules; these patterns effectively affect decision making. Garcia et al. (2008) proposed a refinement method, based on discretisation of the continuous attributes, for obtaining stronger rules, which has been successfully applied in early software size estimation. The association rules were applied between several project management factors and attributes, such as software quality, project duration, and costs.

Data mining methods were also applied to enhance simulation optimization methods (Better, Glover, and Laguna, 2007; Huyet, 2006; Fang, Sheng, Gao, and Iyer, 2006). Such a methodology can produce knowledge on system behavior (especially the high-performance behaviors) and analyze efficient solutions. Such an approach is also effective in the search for optimal values of input parameters in complex and uncertain situations, and avoids the "black box" effect of many optimization methods.

The methodology proposed by Huyet (2006) consists of collecting the solutions generated by optimization and evaluated by simulation. Then the learning method is applied to the generated data set. Finally, the optimization method is used to generate new parameter combinations in order to constitute more efficient solutions.

The approach proposed by Better et al. (2007) is based on a dynamic data mining module which provides the inputs to the simulation module and the optimization engine. The input data are the relevant variables, attributes, and rules that govern the simulation module, which itself interacts with the optimization engine to evaluate different scenarios in order to choose the optimal one. The optimization engine provides the means for the dynamic data mining module to produce classification, clustering, and feature selection. The objective of the optimization engine is to maximize the performance of the firm according to pre-specified measures, such


as market share, profit, sales revenue, etc. The objective can be expressed as a single performance measure, calculated as a combination of more than one measure.

Most of the previous research focused on the use of data mining methods on the huge data produced by simulation tools. Little research discussed the reciprocal case, where we can demonstrate the use of simulation in grey related analysis for data mining purposes. Grey related analysis is a decision-making technique that can be used to deal with uncertainty in the form of fuzzy data (Wu, Olson, and Dong, 2006). Wu et al. (2006) used Monte Carlo simulation to measure the impact of fuzzy decision tree models (using categorical data) compared to those based on continuous data. Monte Carlo simulation is useful to provide the experiments needed to verify data mining algorithms. It can be used as a way to present the output of a data mining model: it can present the overall picture of the dispersion of the results obtained, and provide probability explanations for decision makers.

A Proposed Intelligent DSS Framework Integrating Simulation and Data Mining

Detailed simulation can result in large data sets, and exploring such data sets with traditional techniques is difficult for users. Data mining techniques can help in extracting patterns, associations, and anomalies from large data sets. Integrating simulation and data mining can provide decision makers with more powerful tools that support them in situations characterized by uncertainty, and give them the power to find hidden patterns (organizational tacit knowledge) that can be used for future prediction and/or for intelligently summarizing the data. The main output of such integration is the increase of knowledge.

Our proposed DSS framework is based on the integration of simulation and data mining (Figure 2). Simulation data output is transferred into an enterprise data warehouse using the ETL (Extract, Transform, and Load) steps. The data warehouse can produce different data marts, multidimensional cubes, or simple aggregated data. Data mining techniques can then be applied intelligently on these different data sets to facilitate the extraction of meaningful information and knowledge from such huge data sets.

Most of the previous works applied data mining techniques on traditional databases. Such a step can take considerable time, especially for databases which contain many object relationships. Our proposed model is based on the creation of an enterprise data warehouse in order to facilitate the transition into OLAP applications. This can help in producing a more intelligent system where data mining methods can be applied effectively.

The creation of the enterprise data warehouse is mainly based on the simulation data preparation. Data preparation is usually an important step, and can cost a considerable part of the total effort. The data need to be processed, cleaned, checked for consistency, and combined in order to be processed by the data mining tool.

The data mining stage can help in recalibrating system simulation models in many real-world applications, as it provides the insights gleaned from the hidden and interesting data patterns. Data mining can produce a huge number of data patterns and pattern relationships. Pattern evaluation gives the system the ability to analyze all these patterns and relationships and to retain only the meaningful ones. This step will produce more knowledge and enhance the overall company's knowledge.
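The core flow of the framework (simulation output passing through ETL into an enterprise warehouse, then aggregated into simple data marts for mining) can be sketched as follows. This is a minimal illustration only: the table, column, and scenario names are hypothetical, and an in-memory SQLite database stands in for the enterprise warehouse.

```python
import sqlite3

# Hypothetical simulation output: one record per simulated run.
sim_output = [
    {"run": 1, "scenario": "A", "cost": "120.5"},
    {"run": 2, "scenario": "A", "cost": "98.0"},
    {"run": 3, "scenario": "B", "cost": None},      # dirty record
    {"run": 4, "scenario": "B", "cost": "143.25"},
]

def etl(records, conn):
    """Extract simulation records, transform (clean and typecast), load into the warehouse."""
    conn.execute("CREATE TABLE runs (run INTEGER, scenario TEXT, cost REAL)")
    cleaned = [(r["run"], r["scenario"], float(r["cost"]))
               for r in records if r["cost"] is not None]   # transform: drop incomplete rows
    conn.executemany("INSERT INTO runs VALUES (?, ?, ?)", cleaned)
    conn.commit()

def scenario_data_mart(conn):
    """A simple aggregated 'data mart': average cost per scenario."""
    cur = conn.execute(
        "SELECT scenario, AVG(cost) FROM runs GROUP BY scenario ORDER BY scenario")
    return dict(cur.fetchall())

conn = sqlite3.connect(":memory:")
etl(sim_output, conn)
print(scenario_data_mart(conn))   # {'A': 109.25, 'B': 143.25}
```

A data mining step would then read from such aggregated views rather than from the raw operational records, which is the point the framework makes about moving from traditional databases to a warehouse/OLAP layer.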


Figure 2. A proposed intelligent framework based on the integration of simulation and data mining

CASE STUDY I: CAR MARKET DEMAND SIMULATION

Automotive manufacturing is a market where the manufacturer does not interact with the consumer directly, yet a fundamental understanding of the market, the trends, the moods, and the changing consumer tastes and preferences is fundamental to competitiveness. The information gathered in order to produce the car market demand simulation is the following (Alnoukari, and Alhussan, 2008):

• Supply chain process (sales, inventory, orders, production plan).
• Manufacturing information (car configurations/packages/options codes and description).
• Marketing information (dealers, business centers, etc.).
• Customers' trends information (websites web-activities).

An enterprise data warehouse was built to hold web data, inventory data, car demand data and sales data, to better analyze and predict car sales, manage car inventory and plan car production. Sales and marketing managers are interested in better leveraging data in support of the enterprise goals and objectives. Managers envision an analytical environment that will improve their ability to support planning and inventory management, incentives management, and ultimately production planning, in addition to enabling them to meet the expectations of their decision-making process, supported by appropriate data and trends. Regardless of functional boundaries and the type of analysis needed, their requirements focus on improving access to detailed data and on more consistent, more integrated information. Having a data warehouse that combines online and offline behavioral data for decision-making purposes is a strategic tool which business users can leverage to improve sales demand forecasting, improve model/trim level mix planning, adjust body model/trim level mix with inventory data, and reduce days on lot.

The main goal for the data mining phase was to get some initial positive results on prediction and to measure the prediction score of different data sources using the findings of correlation studies. The data mining solution starts by processing Stock/Sale/Order data, web data, dealer data and GAQ (Get a Quote) data, and storing them in a data warehouse named "Vehicle Demand Data Warehouse". The data warehouse was designed to support analyses that aim to use web data, inventory data, car demand data and sales


data to better analyze and predict car sales, manage car inventory and plan car production. Inventory managers, brand managers and sales managers are demanding more metrics for analyzing the past (i.e. inventory) and predicting the future (i.e. vehicle sales, mix and launches). These metrics need to be delivered seamlessly, with high quality and in a timely fashion. This will get managers closer to better understanding consumer demands and better managing the customer relationship.

The main steps for providing the Slow Turn Simulation/Launch Simulation/Prediction (we will use the term STS/LS/Prediction) are the following (Figure 3):

• Receiving and validating the data sources.
• ETL of the data sources to the data warehouse.
• Processing the data warehouse to generate the required data marts.
• Building the required data marts and OLAP cubes.
• Generating the reports.
• Delivering the analysis in the form of Excel workbooks and PowerPoint slides.

The data sources used for this solution are the following:

• The "Stock/Sales/Orders" data sources are snapshots at the (exact) date of the data.
• The "Production Plan" data source is a list of model quantities that are planned to be produced for a specific period.
• The "TOPs" data source is structured data which is used to map all car configurations under production.
• The "SPOT" data source is used to list dealers, in addition to their rating and their geographic information.
• The "Web Activity" data source is used to track all user web hits/requests on the customer's websites.

The ETL was designed to transform the heterogeneous data source formats into a flat file format, in order to load it as a bulk insert and gain higher performance. The data warehouse maintains the following dimensions:

• The TOPs dimension is used to map all car configurations under production. This dimension maps vehicles' "Franchise/Year/Model/Package/Option" to their "Codes/Descriptions/OptionTypes/DefaultOptions".
• The Geographic dimension is used to map "ZipCode" to "Zone/Business Center/State".
• The Dealer dimension is used to map dealers.

The data warehouse also maintains the following fact tables: "CarConfig" is used to store "Build and Price" configurations made on the company's web-sites. "Stock/Sales/Order" are used to store orders, sales and stock of the company's vehicles. "Production Plan" is used to store the production plans the company intends to achieve.

Data in a warehouse is typically updated only at certain points in time (weekly/monthly/yearly in our case). In this way, a tradeoff exists between the correctness of the data and the substantial effort required to bring the data into the warehouse. Data warehouses also provide a great deal of opportunity for performing data mining tasks such as classification and summarization. Typically, updates are collected and applied to the data warehouse periodically in a batch mode (e.g., during the night). Then, all patterns derived from the warehouse by some data mining algorithm have to be updated as well.

Processing is the stage where data is processed, summarized and aggregated in order to create the required reporting data marts. The processing stages are:


Figure 3. Slow Turn Simulation/Launch Simulation/Prediction Pipeline

• TOPs stage: generates the reporting version of the TOPs, in order to have a centralized mapping schema that provides the latest version of the TOPs.
• CarConfig stage: generates the reporting CarConfig data mart (this data mart serves a number of reports for launch analysis, where the study is only for customers' trends on web "Build and Price" activities).
• Inventory stage: generates the reporting inventory data mart. The inventory data is a snapshot of the inventory stock on the cutoff date.
• Order stage: generates the reporting order data mart. The order data is a snapshot of the dealer orders on the cutoff date.
• Sale stage: generates the reporting sale data mart that covers the studied period.
• Final stage: generates the final reporting data mart that combines all the data marts created previously in order to serve the Slow Turn Simulation/Launch Simulation/Prediction reporting needs.

As a result of the processing pipeline, a number of data marts are created to cover the reporting needs over time. Data marts are data stores which are subordinated to the data warehouse (Fong, Li, and Huang, 2003; Jukic, 2006; Sen, and Sinha, 2005; Vaduva, and Vetterli, 2001). The main data marts created are the Slow Turn Simulation/Launch Simulation data marts and the Prediction data mart. Slow turn simulation is not performed against the data marts directly; instead, it is performed against OLAP cubes built above the Slow Turn data mart. Such OLAP cubes provide analysts with aggregations, drilldowns, and slicing/dicing of data (Youzhi, and Jie, 2004).

The last stage in our solution pipeline is the reporting stage. It uses the data stored in the STS/LS/Prediction data marts, and the OLAP cubes created for slow turn simulation, in order to finally arrive at a strategic data reporting solution which supports the entire organization's needs for information at the highest possible degree of flexibility and customizability, allowing it to address changeful reporting needs such as STS/LS/Prediction analysis and other types of analysis. A powerful business intelligence and reporting solution should have the capabilities to handle the complexities and diversity of data information in the data warehouse and data marts (Ester, Kriegel, Sander, Wimmer, and Xu, 1998; Fong, Li, and Huang, 2003).

The approach focuses on being able to deliver a large number of multi-dimensional pivot reports in short time frames based on the incremental STS/LS/Prediction data marts. The core engine for delivering the STS/LS/Prediction multi-dimensional pivot reports is the report generator. It is designed to be able to read from multiple data warehouse schemas (a simple ad-hoc query, a data mart, and OLAP cubes). The reporting layer contains report metadata, which is XML-based metadata used to store the report templates, and report request metadata, which is XML-based metadata used to store the reporting parameters for each report request made by the customer (report name, start date, end date, franchise, family, etc.).

A report metadata object library is used to encapsulate the reporting metadata repository and present the interface to read and manipulate it. The launch simulation report generator is the core reporting automation engine. It drives the whole reporting process, coordinates the jobs of the internal and underlying components, and delivers the final reports. This component is primarily made to automate the generation of a huge number of reports requested on a regular basis (daily/weekly/monthly/yearly). It encapsulates the pivot reporting component, the PowerPoint generating component, and the workflow component.

The reports deployed cover a wide range of STS/LS. Such reports include:

• Models web car configurations through business centers: early indications of the customers' interest in newly launched vehicles through business centers.
• Models web car configurations, open dealer orders, production plan, K-stock and sales comparison: gives a comparison analysis between "what the customer wants" (web car configuration) and "what the supply chain is providing", by comparing inventory, order, sale and production plan with the model web car configurations.
• Models slow turn analysis: identifies vehicles that do not sell fast (and also helps in knowing the vehicles that sell fast) by identifying inventory slow-turning options or option combinations and studying dealer awareness of such issues; analyzing performance by determining slow or fast turning options or option combinations; analyzing inventory by validating the results of the performance analysis and determining the options or option combinations having stock problems; analyzing sales by validating the results of the performance analysis and determining the need for drilling down to more detailed analysis; and analyzing orders by studying dealer awareness of the results of the performance analysis.

The car market demand simulation model was built using neural networks to get the first set of prediction results. The training data subset was taken from April 2002 till June 2003, the test subset was from July 2003 till September 2003, and the evaluation subset was from October 2003 till November 2003 (Figure 4). After evaluating the first prediction method, we tried another method based on linear regression without using separated weeks. This method provided more accurate results for the first week, but the predictions for the following weeks were unacceptable.

A new car market demand simulation model was then built using a data mining methodology named ANFIS (Adaptive Neuro-Fuzzy Inference System), which is a combination of fuzzy logic and neural networks: values are clustered in fuzzy sets, membership functions are estimated during training, and neural networks are used to estimate the weights. The results obtained were more accurate, and this method was adopted in our solution, as the MAPE errors do not exceed 10%.
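The evaluation described above can be illustrated with a small sketch that splits a demand series chronologically into training, test, and evaluation subsets and scores predictions with MAPE. The data and the naive mean-based predictor below are made up for illustration; the chapter's actual models were neural networks and ANFIS.

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical weekly demand series, split chronologically as in the case study
# (training period, then test period, then evaluation period).
weekly_demand = [100, 104, 98, 110, 107, 112, 105, 115, 109, 118, 114, 120]
train, test, evaluation = weekly_demand[:8], weekly_demand[8:10], weekly_demand[10:]

# Naive stand-in model: predict every future week as the mean of the training period.
prediction = sum(train) / len(train)          # 106.375

test_mape = mape(test, [prediction] * len(test))
eval_mape = mape(evaluation, [prediction] * len(evaluation))
print(round(test_mape, 1), round(eval_mape, 1))   # → 6.1 9.0
```

A real model would replace the constant predictor with per-week forecasts, but the split-then-score discipline (never evaluating on the data used for training) is the same one the case study applies.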


Figure 4. Car market demand simulation results using neural networks

CASE STUDY II: APPLYING DATA MINING AND SIMULATION IN ASSURING QUALITY IN HIGHER EDUCATION

Quality assurance has been the focus of all higher education institutes for a decade. This has been driven by different forces, such as the growing awareness of the need for quality assurance systems in different disciplines, pressure from government entities towards standardization, customer demand for better performance, and the need for more organizational efficiency and excellence (Gatfield, Barker, and Graham, 1999). It has been emphasized through international recognition of the importance of assuring quality in higher education systems in order to facilitate student and instructor mobility, the need for joint and dual degree programs, and cross-cultural programs. In addition, it has been driven by the need of universities to emphasize strategic planning in order to stay competitive in a highly turbulent and fast-improving industry.

The emergence of business intelligence, data mining and simulation provides this type of industry with a great opportunity to invest in the huge amount of data and information universities accumulate through their lifetime, to discover opportunities and potential risks, and to act immediately and progressively to assure quality in their products and hence to stay competitive.

Sources of Data in the Arab International University (AIU)

The university has a couple of information systems that help in gathering data from different sources, all combined in the data warehouse. A data mining system empowered with simulation power will then represent the data in a way


that can help visualize all strengths, weaknesses and possibly threats and opportunities. In the following, we summarize all the information systems used in AIU.

Quality Assurance Automated System (QAAS)

The system was developed internally in the AIU in order to integrate the computerized academic system with quality assurance concepts, to provide the management with a decision support system that helps in the effectiveness and efficiency of the decision making process. The system allows teachers to do the following:

1. Enter the study plan of a specific course, with every chapter to be covered in every week or class.
2. Enter the text book used throughout the course.
3. Enter all references (including electronic ones) used for every chapter or topic to be covered.
4. Enter the methodologies that will be used for every chapter.
5. Specify the outcomes of the course.
6. Connect chapters (topics) with every outcome.

In every class, the system allows the instructor to enter the following:

1. Attendance.
2. Chapter(s) covered in the class.
3. Methodology used: lecture, seminar, team work, field study, etc.

The system then provides the management with many reports, such as:

1. Plan completion (how many chapters covered divided by the chapters planned).
2. Punctuality of teachers (the time the attendance is taken is recorded).
3. Performance of the students in the subject as compared to their general performance in the previous semesters (to be explained in the following sections).
4. Absence ratios.
5. Performance indicator of the teacher in terms of student feedback.
6. Number of methodologies used throughout the course as compared to the planned ones.

Academic System

The academic system is meant to record all academic movements of students throughout their academic lifecycle, such as admission, registration, financial management, examinations, library activities, etc. In addition, the system contains invaluable demographic, geographic, and past educational history of the students. The system was supplied by an external provider.

Human Resource System

This system was developed internally in AIU to provide management with data about academic and administrative staff: demographic, financial, historical, etc.

Examples of reports generated by the data mining system based on the previous information systems: combining data from all systems can provide the university senior management with reports such as:

1. Number of credits selected by students in every semester: this can help in predicting the graduation year of the student along with the projected GPA. Moreover, these reports help in combining information about the students' level of English (the teaching language in the university) with the current AGPA and the number of credit hours given to the student


to check the effectiveness of the academic advising of the university.
2. Correlation of students' performance in different subjects: this can help in the design of the curriculum as well as in the academic advising. For example, the correlation of marks between two business subjects (Organizational Behavior and Business Ethics), neither of which is a prerequisite for the other, was 0.4, showing that students who have chosen OB before BE are performing better in OB. This helps the academic advisors as well as the students to select subjects accordingly.
3. The relation between instructors' performance and the universities they graduated from: this report helps in the recruitment process of new instructors.
4. Correlation between instructors' performance and the level of payment, incentives and rewards they get: the correlation between instructors' performance taken from QAAS and their financial status helps to achieve the payment-for-performance the university has as one of its mottos in dealing with its employees.
5. Categorizing subjects according to the students' marks: this report helps in matching the internship target for a student with his/her academic background, so that he/she is assigned the right supervisor as well as the right industry in which to conduct the internship.
6. Correlation between students' overall performance in different subjects and the feedback from the industry in which they conducted the internship: this helps all faculties to review the so-called competency-based learning, so that all curricula are tailored to the market need.
7. Correlation of students' level of English and overall performance: this report helped a lot in evaluating the current system used in the English Language Center in the university and resulted in major changes in the system.

Figure 5 shows the correlation between students' accumulative GPA and their credit hours. It shows that students with higher accumulated hours are getting a higher accumulative GPA. On the other hand, it was very clear that there is a strong correlation between students' level of English and accumulative GPA, as shown in Figure 6.

CONCLUSION

In this chapter, we proposed an intelligent DSS framework based on the integration of data mining and simulation. This framework builds on different previous works that tried to combine simulation with data mining and knowledge discovery. However, most of these works applied data mining methods on simulation data sets stored in traditional databases. The main contribution of our work is the use of data warehousing in order to move into an OLAP solution and provide more intelligent results.

Simulation data output is transferred into an enterprise data warehouse using the ETL steps. The data warehouse can produce different data marts, multidimensional cubes, or simple aggregated data. Data mining techniques can be applied intelligently on these different data sets to facilitate the extraction of meaningful information and knowledge from such huge data sets. The main output of such a framework is the increase of knowledge.

Our intelligent DSS framework has been validated using two case studies, the first one on car market demand simulation. The simulation model was built using neural networks to get the first set of prediction results. The data mining methodology used was ANFIS (Adaptive Neuro-Fuzzy Inference System), a combination of fuzzy logic and neural networks: values are clustered in fuzzy sets, membership functions are estimated during training, and neural networks are used to estimate the weights. The second case study was built in order


Figure 5. Business Administration students clustered according to their accumulative GPA and their credit hours

Figure 6. Relationship between accumulative GPA and English level of Business Administration students

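The correlations behind Figures 5 and 6 can be illustrated with a small Pearson correlation sketch. The student records below are invented for illustration; the real analysis ran over the university's data warehouse.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical student records: accumulated credit hours vs. accumulative GPA.
credit_hours = [12, 30, 45, 60, 75, 90, 105, 120]
gpa          = [2.1, 2.4, 2.3, 2.8, 2.9, 3.1, 3.0, 3.4]

r = pearson(credit_hours, gpa)
print(round(r, 2))   # → 0.96, a strong positive correlation, as in Figure 5
```

The same function applied to (English level, GPA) pairs would quantify the second relationship the text reports; clustering (as in Figure 5) would then group students by these two attributes rather than merely correlating them.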

to apply data mining and simulation in assuring quality in higher education.

REFERENCES

Alnoukari, M., & Alhussan, W. (2008). Using data mining techniques for predicting future car market demand. International Conference on Information & Communication Technologies: From Theory to Applications, IEEE Conference, Syria.

Alnoukari, M., Alzoabi, Z., & Hanna, S. (2008). Applying adaptive software development (ASD) agile modeling on predictive data mining applications: ASD-DM methodology. International Symposium on Information Technology, Malaysia.

Anand, S. S., Bell, D. A., & Hughes, J. G. (1995). The Role of Domain Knowledge in Data Mining. CIKM'95, Baltimore, MD, (pp. 37–43).

Baars, H., & Kemper, H. G. (2007). Management Support with Structured and Unstructured Data: An Integrated Business Intelligence Framework. Information Systems Management, 25, 132–148. doi:10.1080/10580530801941058

Better, M., Glover, F., & Laguna, M. (2007). Advances in analytics: Integrating dynamic data mining with simulation optimization. IBM Journal of Research and Development, 51(3/4).

Chung, W., Chen, H., & Nunamaker, J. F., Jr. (2005). A Visual Framework for Knowledge Discovery on the Web: An Empirical Study of Business Intelligence Exploration. Journal of Management Information Systems, 21(4), 57–84.

Cody, F., Kreulen, J. T., Krishna, V., & Spangler, W. S. (2002). The Integration of Business Intelligence and Knowledge Management. IBM Systems Journal, 41(4), 697–713.

Corbitt, T. (2003). Business Intelligence and Data Mining. Management Services Magazine, November 2003.

Ester, M., Kriegel, H. P., Sander, J., Wimmer, M., & Xu, X. (1998). Incremental clustering for mining in a data warehousing environment. Proceedings of the 24th VLDB Conference, New York.

Fang, X., Sheng, O. R. L., Gao, W., & Iyer, B. R. (2006). A data-mining-based prefetching approach to caching for network storage systems. INFORMS Journal on Computing, 18(2), 267–282. doi:10.1287/ijoc.1050.0142

Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P. (1996). From data mining to knowledge discovery in databases. AI Magazine, 37–54.

Fong, J., Li, Q., & Huang, S. (2003). Universal data warehousing based on a meta-data modeling approach. International Journal of Cooperative Information Systems, 12(3), 318–325. doi:10.1142/S0218843003000772

Garcia, M. N. M., Roman, I. R., Penalvo, F. J. G., & Bonilla, M. T. (2008). An association rule mining method for estimating the impact of project management policies on software quality, development time and effort. Expert Systems with Applications, 34, 522–529. doi:10.1016/j.eswa.2006.09.022

Gartner Group. (1996, September). Retrieved November 12, 2005, from http://www.innerworx.co.za/products.htm

Gatfield, T., Barker, M., & Graham, P. (1999). Measuring Student Quality Variables and the Implications for Management Practices in Higher Education Institutions: An Australian and International Student Perspective. Journal of Higher Education Policy and Management, 21(2). doi:10.1080/1360080990210210


Girija, N., & Srivatsa, S. K. (2006). A Research Study - Using Data Mining in Knowledge Base Business Strategies. Information Technology Journal, 5(3), 590–600. doi:10.3923/itj.2006.590.600

Graco, W., Semenova, T., & Dubossarsky, E. (2007). Toward Knowledge-Driven Data Mining. ACM SIGKDD Workshop on Domain Driven Data Mining (DDDM2007), (pp. 49-54).

Grigori, D., Casati, F., Castellanos, M., Sayal, U. M., & Shan, M. C. (2004). Business Process Intelligence. Computers in Industry, 53, 321–343. doi:10.1016/j.compind.2003.10.007

Huang, Y. (2002). Infrastructure, query optimization, data warehousing and data mining for scientific simulation. Thesis, University of Notre Dame, Notre Dame, IN.

Huyet, A. L. (2006). Optimization and analysis aid via data-mining for simulated production systems. European Journal of Operational Research, 173, 827–838. doi:10.1016/j.ejor.2005.07.026

Jermol, M., Lavrac, N., & Urbanci, T. (2003). Managing business intelligence in a virtual enterprise: A case study and knowledge management lessons learned. Journal of Intelligent & Fuzzy Systems, 14, 121–136.

Jukic, N. (2006). Modeling strategies and alternatives for data warehousing projects. Communications of the ACM, 49(4), 83–88. doi:10.1145/1121949.1121952

Kerdprasop, N., & Kerdprasop, K. (2007). Moving data mining tools toward a business intelligence system. Enformatika, 19, 117–122.

Kuhlmann, A., Vetter, R. M., Lubbing, C., & Thole, C. A. (2005). Data mining on crash simulation data. Proceedings Conference MLDM 2005, Leipzig, Germany.

Lange, K. (2006). Differences between statistics and data mining. DM Review, 16(12), 32–33.

Lau, K. N., Lee, K. H., Ho, Y., & Lam, P. Y. (2004). Mining the web for business intelligence: Homepage analysis in the internet era. Database Marketing & Customer Strategy Management, 12, 32–54. doi:10.1057/palgrave.dbm.3240241

Mannila, H. (1997). Methods and problems in data mining. Proceedings of International Conference on Database Theory, Delphi, Greece.

Mei, L., & Thole, C. A. (2008). Data analysis for parallel car-crash simulation results and model optimization. Simulation Modelling Practice and Theory, 16, 329–337. doi:10.1016/j.simpat.2007.11.018

Morbitzer, C., Strachan, P., & Simpson, C. (2004). Data mining analysis of building simulation performance data. Building Services Engineering Research and Technology, 25(3), 253–267.

Nayak, R., & Qiu, T. (2005). A data mining application: analysis of problems occurring during a software project development process. International Journal of Software Engineering and Knowledge Engineering, 15(4), 647–663. doi:10.1142/S0218194005002476

Negash, S. (2004). Business Intelligence. Communications of the Association for Information Systems, 13, 177–195.

Nonaka, I., Toyama, R., & Konno, N. (2000). SECI, Ba and leadership: a unified model of dynamic knowledge creation. Long Range Planning, 33, 5–34. doi:10.1016/S0024-6301(99)00115-6

Okuhara, K., Ishii, H., & Uchida, M. (2005). Support of decision making by data mining using neural system. Systems and Computers in Japan, 36(11), 102–110. doi:10.1002/scj.10577


Painter, M. K., Erraguntla, M., Hogg, G. L., & Beachkofski, B. (2006). Using simulation, data mining, and knowledge discovery techniques for optimized aircraft engine fleet management. Proceedings of the 2006 Winter Simulation Conference, (pp. 1253-1260).

Power, D. J. (2007). A Brief History of Decision Support Systems. DSSResources.com. Retrieved from http://DSSResources.COM/history/dsshistory.html, version 4.0.

Remondino, M., & Correndo, G. (2005). Data mining applied to agent based simulation. Proceedings of the 19th European Conference on Modelling and Simulation.

Sen, A., & Sinha, A. P. (2005). A comparison of data warehousing methodologies. Communications of the ACM, 48(3), 79–84. doi:10.1145/1047671.1047673

Shariat, M., & Hightower, R. Jr. (2007). Conceptualizing Business Intelligence Architecture. Marketing Management Journal, 17(2), 40–46.

Shi, Y., Peng, Y., Kou, G., & Chen, Z. (2005). Classifying Credit Card Accounts for Business Intelligence and Decision Making: A Multiple-Criteria Quadratic Programming Approach. International Journal of Information Technology & Decision Making, 4(4), 581–599. doi:10.1142/S0219622005001775

Smith, W. (2005). Applying data mining to scheduling courses at a university. Communications of the AIS, 16, 463–474.

Srivastava, J., & Cooley, R. (2003). Web Business Intelligence: Mining the Web for Actionable Knowledge. INFORMS Journal on Computing, 15(2), 191–207. doi:10.1287/ijoc.15.2.191.14447

Steele, R. D. (2002). The New Craft of Intelligence: Personal, Public and Political. VA: OSS International Press.

Stolba, N., & Tjoa, A. M. (2006). The relevance of data warehousing and data mining in the field of evidence-based medicine to support healthcare decision making. Enformatika, 11, 12–17.

Turban, E., Aronson, J. E., Liang, T. P., & Sharda, R. (2007). Decision Support and Business Intelligence Systems (8th Ed.). Upper Saddle River, NJ: Pearson Prentice Hall.

Vaduva, A., & Vetterli, T. (2001). Metadata management for data warehousing: an overview. International Journal of Cooperative Information Systems, 10(3), 273. doi:10.1142/S0218843001000357

Watson, H. J., Wixom, B. H., Hoffer, J. A., Lehman, R. A., & Reynolds, A. M. (2006). Real-Time Business Intelligence: Best Practices at Continental Airlines. Journal of Information Systems Management, 7–18.

Weiss, S. M., Buckley, S. J., Kapoor, S., & Damgaard, S. (2003). Knowledge-Based Data Mining. SIGKDD ’03, Washington, DC, (pp. 456–461).

Wixom, B. H. (2004). Business Intelligence Software for the Classroom: Microstrategy Resources on the Teradata University Network. Communications of the Association for Information Systems, 14, 234–246.

Wu, D., Olson, D. L., & Dong, Z. Y. (2006). Data mining and simulation: a grey relationship demonstration. International Journal of Systems Science, 37(13), 981–986. doi:10.1080/00207720600891521

Youzhi, X., & Jie, S. (2004). The agent-based model on real-time data warehousing. Journal of Systems Science & Information, 2(2), 381–388.

Zack, J., Rainer, R. K., & Marshall, T. E. (2007). Business Intelligence: An Analysis of the Literature. Information Systems Management, 25, 121–131.


KEY TERMS AND DEFINITIONS

Data Mining (DM): Is the process of discovering interesting information from the hidden data that can either be used for future prediction and/or intelligently summarizing the details of the data (Mei and Thole, 2008).

Business Intelligence (BI): Is an umbrella term that combines architectures, tools, databases, applications, practices, and methodologies (Turban, Aronson, Liang, and Sharda, 2007). It is the process of transforming various types of business data into meaningful information that can help decision makers at all levels gain deeper insight into the business.

Data Warehouse (DW): Is a physical repository where relational data are specially organized to provide enterprise-wide, cleansed data in a standardized format (Turban, Aronson, Liang, and Sharda, 2007).

Knowledge Management (KM): Is the acquisition, storage, retrieval, application, generation, and review of the knowledge assets of an organization in a controlled way.

Decision Support System (DSS): Is an approach (or methodology) for supporting decision making. It uses an interactive, flexible, adaptable computer-based information system especially developed for supporting the solution to a specific nonstructured management problem (Turban, Aronson, Liang, and Sharda, 2007).

Adaptive Neuro-Fuzzy Inference System (ANFIS): Is a data mining methodology based on a combination of fuzzy logic and neural networks: values are clustered in fuzzy sets, membership functions are estimated during training, and neural networks are used to estimate weights (Alnoukari, Alzoabi, and Hanna, 2008).

Quality Assurance (QA): Is all those planned and systematic actions necessary to provide adequate confidence that a product or service will satisfy given requirements for quality.
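The ANFIS entry above describes a concrete mechanism: inputs are mapped into fuzzy sets by membership functions, and rule outputs are combined with weights that are tuned during training. As a rough, self-contained illustration of that forward pass (not from the chapter; the two-rule setup and all parameter values are hypothetical), a one-input Sugeno-style inference step can be sketched in Python:

```python
import math

# Toy forward pass in the spirit of the ANFIS definition: Gaussian
# membership functions assign each input a degree of membership in a
# fuzzy set, and the rule weights (fixed here; in real ANFIS they are
# estimated by training) are combined as a normalized weighted average.

def gaussian(x, center, sigma):
    """Degree of membership of x in a fuzzy set centered at `center`."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def anfis_forward(x, rules):
    """rules: list of (center, sigma, weight) tuples; returns the output."""
    strengths = [gaussian(x, c, s) for c, s, _ in rules]
    total = sum(strengths)
    return sum(w * st for (_, _, w), st in zip(rules, strengths)) / total

rules = [(0.0, 1.0, 0.0),   # "low" fuzzy set  -> rule output 0
         (1.0, 1.0, 1.0)]   # "high" fuzzy set -> rule output 1
print(anfis_forward(0.5, rules))  # -> 0.5 (halfway between both sets)
```

A real ANFIS would additionally cluster the training data to place the fuzzy sets and adjust centers, widths and weights by training, as the definition indicates.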


Chapter 17
Modeling and Simulation of
IEEE 802.11g using OMNeT++
Nurul I. Sarkar
AUT University, Auckland, New Zealand

Richard Membarth
University of Erlangen-Nuremberg, Erlangen, Germany

ABSTRACT
Due to the complex nature of computer and telecommunication networks, it is often difficult to predict
the impact of different parameters on system performance especially when deploying wireless networks.
Computer simulation has become a popular methodology for performance study of computer and
telecommunication networks. This popularity results from the availability of various sophisticated and
powerful simulation software packages, and also because of the flexibility in model construction and
validation offered by simulation. While various network simulators exist for building a variety of network
models, choosing a good network simulator is very important in modeling and performance analysis
of wireless networks. A good simulator is one that is easy to use; more flexible in model development,
modification and validation; and incorporates appropriate analysis of simulation output data, pseudo-
random number generators, and statistical accuracy of the simulation results. OMNeT++ is becoming
one of the most popular network simulators because it has all the features of a good simulator. This
chapter aims to provide a tutorial on OMNeT++ focusing on modeling and performance study of the
IEEE 802.11g wireless network.

INTRODUCTION

The use of discrete event simulation packages as an aid to modeling and performance evaluation of computer and telecommunication networks, including wireless networks, has grown in recent years (Bianchi, 2000; Fantacci, Pecorella, & Habib, 2004; Tickoo & Sikdar, 2003). This popularity is due to the availability of sophisticated simulation packages and low-cost powerful personal computers (PCs), but also because of the flexibility in rapid model construction and validation offered by simulation. A detailed discussion of simulation methodology, in general, can be found in (Carson II, 2004; Law

DOI: 10.4018/978-1-60566-774-4.ch017

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

& Kelton, 2000). More specifically, Pawlikowski (1990), in a comprehensive survey of problems and solutions suited for steady-state simulation, mentioned the relevance of simulation techniques for modeling telecommunication networks. While various network simulators (both open source and commercial) exist for building a variety of network models, selecting an appropriate network simulation package for a particular application is not an easy task. For selecting an appropriate network simulator, it is important to have knowledge of the simulator tools available along with their strengths and weaknesses. It is also important to ensure that the results generated by the simulators are valid and credible. Sarkar and Halim (2008) classified and compared various network simulators to aid researchers and developers in selecting the most appropriate simulation tool.

We have looked at a number of widely used network simulators, including ns-2 (Network simulator 2, 2008) and OPNET Modeler (OPNET Technologies). Ns-2 is a popular network simulator among the network research community which is available for download at no cost. However, ns-2 is difficult to use and has a steep learning curve. A tutorial contributed by Marc Greis (www.isi.edu/nsnam/ns/tutorial/index.html) and the continuing evolution of the ns documentation have improved the situation, but ns-2’s split-programming model remains a barrier to many developers. OPNET is a commercial package which has a comprehensive model library, a user-friendly graphical user interface (GUI), and customizable presentation of simulation results. However, OPNET is a very expensive package, even though it is offered under university academic programs. OPNET IT Guru is available at no cost for educational use, but it has very limited functionality. The motivation for using OMNeT++ (OMNeT++, 2008) as a network simulator in our study is that it offers the combined advantages of ns-2 and OPNET.

The remainder of this chapter is organized as follows. A review of the literature, including the strengths and weaknesses of OMNeT++, is presented first. We then outline the system requirements and installation procedure. A brief overview of the INET Framework (a framework for OMNeT++ containing models for several Internet protocols) is presented next. A tutorial on OMNeT++ focusing on developing, configuring and running simulation models is then presented. The network performance and test results are presented, followed by brief conclusions.

STRENGTHS AND WEAKNESSES OF OMNET++

The strengths and weaknesses of OMNeT++ are highlighted below.

Strengths: The main strengths of OMNeT++ are the GUI, the object inspectors for zooming into component level and displaying the state of each component during simulation, the modular architecture, as well as a configurable and detailed implementation of modules and protocols. OMNeT++ is an open source software package allowing users to change the source code to suit their needs. It supports a variety of operating systems (OSs) such as Linux and MS Windows. Another advantage of OMNeT++ is that it can be integrated with other programs. For example, it is possible to embed OMNeT++ in an application which creates a network model, simulates it and produces results automatically. OMNeT++ builds on small modules that can be reused and combined into more complex modules. Thus a hierarchy can be generated and different levels of abstraction can be realized. Object inspection is a useful feature offered by OMNeT++. For example, the current state of each module, its parameters, and statistics can be viewed at any time during simulation experiments. This feature allows us to observe the data flow and node communications. OMNeT++ can store simulation results in a file that can be analyzed later using various tools supported by OMNeT++.

Weaknesses: OMNeT++ is slow due to its long simulation run times and high memory consumption. Developing a model using OMNeT++ requires more than just dragging and dropping components on a workspace. The simulation model needs to be developed in two different applications and configured in a text file. This can be inconvenient, in particular for inexperienced users.

OMNET++: A REVIEW OF LITERATURE

OMNeT++ is an object-oriented discrete event simulator written in C++ (Varga, 1999, 2001). It is a non-commercial, open source software package developed by Andras Varga at the Technical University of Budapest in 1992. It is primarily designed for simulation tasks, especially for performance modeling of computer and data communication networks. It is relatively easy to integrate new modules or alter current implementations of modules within its object-oriented architecture. More details about OMNeT++ can be found at the OMNeT++ homepage (www.hit.bme.hu/phd/varga/omnetpp.htm).

Examples of commonly used open source network simulators are ns-2, GloMoSim (GloMoSim, 2001), and AKAROA (Ewing, Pawlikowski, & McNickle, 1999; Pawlikowski, Yau, & McNickle, 1994); commercial network simulators include OPNET (www.opnet.com), QualNet Developer (Scalable Network Technologies, 2007), and NetSim (Tetcos, 2007). A brief overview of various popular network simulators with their relative strengths and weaknesses can be found in (N. I. Sarkar & Halim, 2008; N. I. Sarkar & Petrova, 2005).

While commercial network simulators support a wide range of protocols, simulators released under open source tend to be more specialised in one specific protocol (e.g., ns-2 supports TCP/IP). OMNeT++, however, uses a different approach and offers dual licensing. The source code is released as open source and is available for download at no cost, whereas the commercial version, called OMNEST (OMNEST, 2007), requires a license. As mentioned earlier, OMNeT++ has a GUI with its unique object inspectors for zooming into components and displaying the current state of each component at any time during the simulation. Other simulators such as OPNET provide a GUI to set up and control simulations only, and ns-2 has no GUI at all (Varga, 2001). Furthermore, OMNeT++ provides detailed protocol level description which is particularly useful for teaching, research and development (Varga, 1999). Like OPNET, OMNeT++ can also be used for performance evaluation of larger enterprise networks, as it has a modular architecture allowing module reuse at different levels of abstraction. This modular architecture allows extending the functionality of OMNeT++ without changing it. Another advantage of OMNeT++ is the automatic generation of scenarios, as no proprietary binary formats are used to store models. However, one has to be careful when using network simulators. Cavin et al. (2002) demonstrate that the same model can produce different results (both quantitative and qualitative) when run under different network simulators. Thus, the simulator characteristics need to be examined and matched with the model to be simulated. The level of detail of a simulation also plays an important role in network performance evaluation. Heidemann et al. (2001) argue that a trade-off between detailed but slow simulations and simulations lacking details and possibly leading to incorrect results needs to be considered. Protocols can be modeled in OMNeT++ at a high level of detail, but this requires extensive resources. However, OMNeT++ supports parallel and distributed simulations, running not only on a multiprocessor system but also in a network of distributed hosts. Hence, OMNeT++ can be used as an alternative tool for network modeling and performance evaluation.

SYSTEM REQUIREMENTS AND SETUP

OMNeT++ supports commonly used OSs such as MS Windows, Linux, UNIX and Macintosh. For most UNIX systems, the GNU development tools (gcc, g++) are available for system implementation. For Mac OS X, the development tools are normally shipped on a CD with the main distribution. When installing on Mac OS X, it is important to install Apple’s X11 in order to run graphical Linux/UNIX programs. For Linux and other UNIX systems, the development tools need to be installed. For MS Windows, various development tools such as Cygwin, MinGW and Microsoft Visual C++ can be used to build the system. OMNeT++ relies on programs such as perl that need to be installed before installing OMNeT++. In particular, software for the GUI (tcl/tk, blt), XML support (libxslt), documentation generation (doxygen) and image processing (ghostscript, ImageMagick, graphviz, and giftrans) needs to be installed separately. For Mac OS X, package management tools such as Fink (http://www.finkproject.org/) or MacPorts (http://www.macports.org/) can be used for this purpose. We used MacPorts for downloading, building and installing most of the packages automatically. However, giftrans was not in the repository, and it was therefore built manually and put in a directory where it could be found by OMNeT++. This was done using the following commands:

bash# sudo port install tcl tk blt libxslt doxygen ghostscript ImageMagick graphviz
bash# gcc -O2 -g -Wall -o giftrans giftrans.c
bash# mv giftrans /opt/local/bin/

OMNeT++ is available for download at no cost under MS Windows, where the package includes all required third party tools so that no further software is needed. The package is also available for download for various Linux distributions like Debian, and it is possible to install OMNeT++ with a single command. The OMNeT++ installation procedure is outlined next.

INSTALLATION

Once all the required software packages are installed on a system, OMNeT++ can then be installed. Since OMNeT++ is an open source software package, the main distribution form is a tarball of its source code. OMNeT++ was compiled after downloading and unpacking the tarball. We specify the correct path to the libraries installed by MacPorts (which is /opt/local/...), and for Mac OS X we also have to inform the compiler that the libraries are compiled and loaded dynamically. The following lines were added in configure.user:

# File: configure.user
CFLAGS="-Wno-long-double -I/usr/include/malloc -fPIC"  # other options like -g and -O2 can be added
LDFLAGS="-bind_at_load"
SHLIB_LD="g++ -dynamiclib -undefined dynamic_lookup"
SO_LIB_SUFFIX=".dylib"
TK_CFLAGS="-I/opt/local/include -fwritable-strings"
TK_LIBS="-L/opt/local/lib -ltk8.4 -ltcl8.4"
BLT_LIBS="-L/opt/local/lib -lBLT"

Next we extend the environment variables to include the libraries required by OMNeT++. This is done by editing the .bashrc file in the user’s home directory, replacing /path/to/omnetpp/ with the actual directory containing OMNeT++:

export PATH=$PATH:/path/to/omnetpp/bin
export LD_LIBRARY_PATH=/path/to/omnetpp/lib:/path/to/omnetpp/INET/bin
export TCL_LIBRARY=/opt/local/lib/tcl8.4


export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:/path/to/omnetpp/lib

These changes take effect for command lines started after the file has been modified. So, either open a new command line or enter source ~/.bashrc in the command line to get the new environment variables.

OMNeT++ was compiled using the ./configure and make commands. This can be done inside Apple’s X11 environment or from a normal command line while X11 runs and the DISPLAY environment variable is set correctly (e.g. export DISPLAY=":0.0"):

bash# export MACOSX_DEPLOYMENT_TARGET=10.4
bash# ./configure
bash# make

The ./configure script checks whether all requirements for OMNeT++ are fulfilled and displays messages if anything is missing, as follows:

WARNING: The configuration script could not detect the following packages:
MPI (optional) Akaroa (optional)
...
Your PATH contains /path/to/omnetpp/bin. Good!
Your LD_LIBRARY_PATH is set. Good!
TCL_LIBRARY is set. Good!

We can see from the above warning message that two packages (MPI and Akaroa), which are optional packages for parallel simulation, have not been found by the configuration script. More details about Akaroa can be found at www.cosc.canterbury.ac.nz/research/RG/net_sim/simulation_group/akaroa/.

INET FRAMEWORK

OMNeT++ provides a simulation kernel and some basic utilities for queues, random number generators, and simulation output statistics only. It does not provide network components or modules for simulations. However, the INET Framework (developed by the user community) and the Mobility Framework (specialised in mobile nodes and protocols) provide various modules that can be used for developing simulation models. The INET Framework can be downloaded from the OMNeT++ homepage (www.hit.bme.hu/phd/varga/omnetpp.htm) and installed on a machine easily. In the inetconfig file, the ROOT parameter is adjusted to the directory of the framework before one can build it using the ./makemake and make commands. The INET Framework is used here for modeling and simulation of IEEE 802.11g networks.

MODELING THE NETWORK

OMNeT++ Architecture

In OMNeT++, the most important elements beside events are modules. There are two different types of modules: (1) simple modules; and (2) compound modules. Simple modules are implemented in C++ and represent the active components of OMNeT++ where events occur and model behaviours are defined. Compound modules are a composition of other simple or compound modules, and are used as containers to structure a model. An example of a compound module is an 802.11 network interface card (NIC), represented in OMNeT++ as Ieee80211Nic (Figure 1). The Ieee80211Nic consists of three simple modules, namely Ieee80211Mac, Ieee80211Mgmt, and Ieee80211Radio.

Unlike simple modules, compound modules are not written in C++. Instead, they use the NEtwork Description (NED) language, a simple language to describe the topology of the simulation model.
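To give a feel for the language, a compound module in the OMNeT++ 3.x-era NED style looks roughly like the following sketch (module, gate and parameter names here are illustrative only, not taken from the chapter's actual source files):

```ned
// Illustrative NED sketch (OMNeT++ 3.x-style syntax; hypothetical names)
module WirelessNode
    parameters:
        numApps: numeric const;    // configurable from the ini file
    gates:
        in: radioIn;               // receives frames from the channel
    submodules:
        nic: Ieee80211Nic;         // e.g. the compound NIC of Figure 1
    connections nocheck:
        radioIn --> nic.radioIn;
endmodule
```

A compound module like this is then instantiated, possibly many times, inside a network definition, which is itself just a top-level compound module.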


Figure 1. Compound module Ieee80211Nic

The .ned file(s) can either be loaded dynamically or compiled statically into the binary of the INET Framework. The actual simulation model in OMNeT++ is called a ‘network’, which is a compound module. To configure a network, one can either specify the parameters for modules in .ned files or in the omnet.ini configuration file. The advantage of using the configuration file is to define simulation runs with various parameter values. This is particularly useful when we run simulations from a script or batch file.

OMNeT++ provides two types of simulation data outputs: scalar and vector. A scalar holds one single value of the simulation output (e.g., the number of packets received). Vectors, however, store series of time-value pairs over a certain period, such as response times during the simulation. One can add a new statistic by modifying simple modules to log the data of interest and recompile

Figure 2. Description of an ad hoc network in OMNeT++


the framework. These statistics are stored in files at the end of the simulation and can be analysed later using the plove and scalars programs provided by OMNeT++, or other statistical tools such as R (http://www.r-project.org/), Octave (http://www.octave.org/) or MATLAB.

Developing a Simulation Model

We develop simulation models (using the INET Framework) for IEEE 802.11g networks. The following INET Framework modules are used: ‘MobileHost’ for an ad hoc network, ‘ChannelControl’ for keeping track of the station positions and providing an area for station movement, and ‘FlatNetworkConfigurator’ for configuration of IP addresses and routing tables for simple networks. Each of the modules that we use in the simulation was imported from a .ned file. The list of modules that the INET Framework supports can be found in “documentation/index.html”. We also develop a compound module called Adhoc80211 containing the following four elements described below. The complete description of an ad hoc network is shown in Figure 2.

1) Parameters: Module variables which are used to configure submodules.
2) Submodules: Instances of other modules. One can assign values to the parameters of submodules. We define a number of instances of the MobileHost which are stored in an array called host. The number of instances is determined by a module parameter, which allows us to develop simulation models with several stations by changing the numHosts parameter. We also define a variable called display for the modules to display in the GUI. More details about module configuration can be found in the OMNeT++ user manual (doc/manual/usman.html).
3) Gates: Connection points of a module.
4) Connections: Define a connection between two gates or submodules. For example, two

Figure 3. AdhocSimple.ned loaded in gned


wireless stations are connected through a NIC. OMNeT++ checks whether all gates are connected properly to the network.

OMNeT++ also provides a graphical tool to build compound modules, called gned. Figure 3 shows our ad hoc network module loaded in gned. However, as the syntax of .ned is straightforward, it is faster to model it directly in a text editor.

Configuring a Simulation Model

The simulation model is configured and parameter values are assigned to the network (adhoc80211) and its submodules before running simulations. Figure 4 shows the network configuration file (omnet.ini). It sets the general configurations for OMNeT++ and loads all .ned files in the current directory. The network for the simulation is adhoc80211.

Figure 4. Configuration file (omnet.ini)

Next we define a couple of simulation runs as shown in Figure 5. These runs are mainly used for executing simulation experiments with different parameters. For example, in Run 1 we assign numHosts to 2, to create two wireless stations at the start of the simulation. In Run 2, no values are assigned, so that a user can enter the desired number of stations at the system prompt.

Figure 5. Simulation runs defined

OMNeT++ provides two environments for simulations: Tkenv (GUI) and Cmdenv (non-GUI). Tkenv is used for developing a model or for demonstration purposes, and Cmdenv is used for fast simulation. The behaviour of the environments can be specified as follows:

[Cmdenv]
express-mode = no
[Tkenv]
;default-run=1

Figure 6 shows the detailed simulation parameter setting for an 802.11 ad hoc network. Wildcards are used to assign values to one variable of different instances. While one asterisk matches only one instance in the hierarchy (e.g. *.playgroundSizeX matches adhoc80211.


Figure 6. Simulation parameter setting

playgroundSizeX, but not adhoc80211.test.playgroundSizeX), two asterisks match all variables in the hierarchy. For example, **.debug matches every debug variable in each module and its submodules. The parameters that need to be set can be found in the documentation of each module, where all unassigned parameters of the module and its submodules are listed.

The parameters for the wireless local area network (WLAN), ARP, IP, TCP, and station mobility are set. Stations are moved within the defined playground and all stations send ping messages to station[0] for an infinite time (i.e. the simulation will never stop, as events will never run out). A simulation time limit can be defined to stop the simulation. For example, sim-time-limit (elapsed time in the simulation) or cpu-time-limit (elapsed real time) can be defined in the general section of the configuration file to stop simulations. The last section of the configuration file is for output


vectors, which can be enabled or disabled. By default all output vectors (time-value pairs) are enabled and stored in memory. This consumes a lot of memory and slows down simulations. Thus, it is recommended to disable all output vectors except the ones that we are interested in. This can be achieved by enabling all output vectors of the pingApp in station 1 and disabling all other vectors. Instead of setting output vectors globally, it is also possible to set them up in each of the modules individually.

[OutVectors]
**.host[1].pingApp.**.enabled = yes
**.enabled = no

Running a Simulation Model

Figure 7 shows a simple framework in which we develop and execute various simulation models under OMNeT++ to study WLAN performance. The INET Framework provides wireless modules for developing simulation models to be compiled and executed in OMNeT++.

Figure 7. A framework for developing and executing simulation models in OMNeT++

While the simulation model defines the network to be simulated, the model is configured by a configuration file which is also loaded at the start of the simulator. Finally, the simulation results are stored in output files for later analysis.

A simulation model can be run by calling the INET binary from the configuration folder. However, it is more convenient to run simulations by calling the following script and making it executable (e.g., chmod +x file):

../bin/INET $*

After executing the script, one of the runs defined in Figure 5 can be selected. After selecting Run 1, two windows are opened: the main window of OMNeT++ and another window containing our


simulated network. As shown in Figure 8, the toolbar has buttons for running and stopping the simulation. The simulation speed can also be adjusted using the ‘slider’ button (next to the ‘stop’ button). Instead of running a simulation, one can also step through the simulation from event to event using the ‘step’ button. Two wireless stations are communicating by sending ping messages (Figure 8). The details of host[0] in Figure 8 can be found by double clicking on it (Figure 9).

Figure 8. Adhoc80211 network simulation with 2 stations pinging each other

Figure 10 shows the main window of OMNeT++, which can be used to control simulations. For example, three lines between the toolbar and the timeline display simulation statistics such as the number of events and events per second. The main window contains the simulation log (message, event and debugging information). On the left of the window is a tree displaying the simulation events. This is very useful information if one observes the internal state of each module. While all log messages are displayed in the log window, the output of the ping command is displayed on the console and the summary is stored to files (Figure 11).

When Run 2 (Figure 5) is selected, a simulated network with N = 10 wireless stations as shown in Figure 12 is opened. The host[2] sends a ping packet to the other nine hosts including host[0].

ANALYSIS OF RESULTS

For evaluation and interpretation of experimental results, it is useful to compare results obtained from more than one set of simulation outputs with different parameters. We used the same number of events in each output vector by limiting the number of pings to 50, and have two runs by changing the packet length from 56 to 112 bytes. We set the **.pingApp.count variable to 50 and change the packet length between the runs. This was achieved by defining extra simulation runs in the omnetpp.
Figure 8. Adhoc80211 network simulation with 2 stations pinging each other

389
Modeling and Simulation of IEEE 802.11g using OMNeT++

Figure 9. Details of host[0] submodule

ini. While scalars from multiple runs are saved to obtained from the scalar file. For graphical pre-
the same file, OMNeT++ overwrites the vector sentation of the above statistics, scalars included
files of previous runs. However one can specify in OMNeT++ can be used to display the statistics
the output files for each run and save them to disk in bar graphs.
independently. Figure 13 illustrates the addition
of two runs in the omnetpp.ini.
To continue the simulation run until 50 ping PERFORMANCE EVALUATION
messages, one can select Call finish() for all
modules in the main simulation window menu The 802.11b implementation of OMNeT++’s INET
to write the output scalars and vectors to disk. To Framework is used as a reference. The 802.11g
plot vector files, OMNeT++ provides a tool called implementation in OMNeT++ (www.omnetpp.
‘plove’ where different vector files could be loaded org/listarchive/msg08494.php) is adapted here,
simultaneous and plotted on a single graph. replacing 802.11b support of the INET Frame-
Figure 14 compares the round trip times (RTTs) work which requires a recompilation of the INET
of two ping simulations using plove. One can Framework. The performance of a typical 802.11g
observe that the RTT for 112 bytes ping packet is network is evaluated by measuring the network
longer than that of 56 bytes. The mean, min, max, mean throughput which is basically the same as an
and standard deviation of RTT can be calculated access point (AP) throughput without RTS/CTS.
from all values contained in the vector file or We initially consider an 802.11b network with N

390
Modeling and Simulation of IEEE 802.11g using OMNeT++

Figure 10. Main window of OMNeT++ illustrating simulation runs

= 1 station sending a large volume of data to an with a few stations. The maximum throughput
AP. We then observe the network throughput by is achieved with N = 5 stations and N = 7 to 10
increasing the number of wireless stations (i.e., stations for 802.11b and 802.11g, respectively.
increased traffic). Figure 15 shows an 802.11b The lower backoff times as a result of concur-
network with seven stations and an AP where all rent access to the wireless channel contributing
stations are sending data to the AP. Likewise, the to less wastage of channel bandwidth and hence
impact of increasing the number of stations on an achieving higher throughput for N >1 user. The
802.11g throughput was investigated. number of collisions increases and the collisions
To simulate 802.11g, we made some changes take over the positive backoff effect leading to
to the omnetpp.ini by adding parameters as shown throughput decline at N > 5 stations for 802.11b
in Figure 16. and N > 10 stations for 802.11g networks. Our
Each simulation experiment was run for five simulation results are in accordance with the
minutes of simulation time (up to 20 minutes in work of other network researchers (Choi, Park,
real time depending on number of stations). The & Kim, 2005).
experimental results for the throughput perfor- Another observation is that the achieved net-
mance of 802.11b and 802.11g are summarized in work throughputs are a lot lower compared to
Table 1. We observe that the maximum throughput the data rate. As shown in Table 1, the maximum
is not achieved with N= 1 wireless station, but achieved efficiencies are 51% and 47% for 802.11b


Figure 11. Output of ping message

Figure 12. Adhoc80211 network simulation with 10 stations (hosts)


Figure 13. Run 3 and 4 are added in the omnetpp.ini

Figure 14. Comparison of RTT from two ping messages


Figure 15. IEEE 802.11b network with an AP and seven wireless stations

Figure 16. IEEE 802.11g parameters in the omnetpp.ini

and 802.11g, respectively. The network (channel) efficiency is computed as follows: Efficiency = (network mean throughput / data rate) × 100%. Our simulation results show that more than 50% is protocol overhead for 802.11g networks. For an IEEE 802.11b (11 Mbps) network, the protocol overhead is about 49%. These results are in accordance with the real network performance measurements reported in (Broadcom, 2003).


Table 1. Impact of increasing the number of stations on the 802.11b (11 Mbps) and 802.11g (54 Mbps) network throughput

Number of nodes   IEEE 802.11b                        IEEE 802.11g
                  Throughput (Mbps)  Efficiency (%)   Throughput (Mbps)  Efficiency (%)
1                 5.11               46.45            20.26              37.52
3                 5.39               49.00            23.67              43.83
5                 5.61               51.00            25.52              47.26
7                 5.53               50.27            25.56              47.33
10                5.35               48.64            25.56              47.33
20                4.94               44.91            24.90              46.11
30                4.46               40.55            22.89              42.39
40                4.12               37.45            21.27              39.39
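The efficiency columns in Table 1 follow directly from the throughput columns and the stated formula, Efficiency = (network mean throughput / data rate) × 100%. As a quick sanity check (not part of the original study), the following Python snippet recomputes the efficiencies from throughput values transcribed from Table 1:

```python
def efficiency(throughput_mbps: float, data_rate_mbps: float) -> float:
    """Channel efficiency as defined in the chapter:
    (network mean throughput / data rate) x 100%."""
    return throughput_mbps / data_rate_mbps * 100.0

# Throughputs (Mbps) transcribed from Table 1, keyed by number of nodes;
# data rates are 11 Mbps for 802.11b and 54 Mbps for 802.11g.
table1 = {
    1: (5.11, 20.26), 3: (5.39, 23.67), 5: (5.61, 25.52), 7: (5.53, 25.56),
    10: (5.35, 25.56), 20: (4.94, 24.90), 30: (4.46, 22.89), 40: (4.12, 21.27),
}
for nodes, (tput_b, tput_g) in table1.items():
    print(nodes, round(efficiency(tput_b, 11), 2), round(efficiency(tput_g, 54), 2))
```

For N = 5 stations, for example, this reproduces the reported 51.00% (802.11b) and 47.26% (802.11g).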

CONCLUSION AND FUTURE WORK

Stochastic discrete event simulation has become popular as a network modeling and performance analysis tool. This chapter provided a walk-through tutorial on OMNeT++, a discrete-event network simulator that can be used for modeling and performance evaluation of computer and telecommunication networks. The tutorial focused on installation of OMNeT++, modelling the network, running simulations, and analysing results. The performance test results of a typical IEEE 802.11g network were also presented. The models built under OMNeT++ were validated using empirical measurements from wireless laptops and access points for IEEE 802.11g. A good match between simulation and measurement results for N = 1 to 3 stations validates the OMNeT++ simulation models (N. I. Sarkar & Lo, 2008).

In summary, we want to stress the importance of using a good simulator for modeling and performance analysis of wireless communication networks. OMNeT++ offers more flexibility in model construction and validation, and incorporates appropriate analysis of simulation output data.

There are several ways in which one can contribute to the emerging area of network simulation and modelling. The following research problems have been suggested as an extension to the work presented in this chapter: (1) to investigate the impact of backoff mechanisms on the throughput of 802.11g; and (2) to run simulations in multiple replications in parallel (MRIP). This can be achieved by developing network models using OMNeT++ and interfacing them with Akaroa II (Ewing et al., 1999).

REFERENCES

Bianchi, G. (2000). Performance analysis of the IEEE 802.11 distributed coordination function. IEEE Journal on Selected Areas in Communications, 18(3), 535–547. doi:10.1109/49.840210

Broadcom. (2003). IEEE 802.11g: The new mainstream wireless LAN standard. Retrieved May 23, 2007, from http://www.54g.org/pdf/802.11g-WP104-RDS1

Carson, J. S., II. (2004, December). Introduction to modeling and simulation. Paper presented at the 2004 Winter Simulation Conference (pp. 1283-1289).


Choi, S., Park, K., & Kim, C. (2005). On the performance characteristics of WLANs: Revisited. Paper presented at the 2005 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems (pp. 97–108).

Ewing, G., Pawlikowski, K., & McNickle, D. (1999, June). Akaroa 2: Exploiting network computing by distributed stochastic simulation. Paper presented at the European Simulation Multiconference (ESM’99), Warsaw, Poland (pp. 175-181).

Fantacci, R., Pecorella, T., & Habib, I. (2004). Proposal and performance evaluation of an efficient multiple-access protocol for LEO satellite packet networks. IEEE Journal on Selected Areas in Communications, 22(3), 538–545. doi:10.1109/JSAC.2004.823437

GloMoSim. (2007). GloMoSim manual. Retrieved April 20, 2007, from http://pcl.cs.ucla.edu/projects/glomosim/GloMoSimManual.html

Law, A. M., & Kelton, W. D. (2000). Simulation modelling and analysis (3rd ed.). New York: McGraw-Hill.

Network Simulator 2. (2008). Retrieved September 15, 2008, from www.isi.edu/nsnam/ns/

OMNEST. (2007). Retrieved September 15, 2007, from www.omnest.com

OMNeT++. (2008). Retrieved September 15, 2008, from http://www.omnetpp.org/

Pawlikowski, K. (1990). Steady-state simulation of queuing processes: A survey of problems and solutions. ACM Computing Surveys, 22(2), 123–170. doi:10.1145/78919.78921

Pawlikowski, K., Yau, V. W. C., & McNickle, D. (1994). Distributed stochastic discrete-event simulation in parallel time streams. Paper presented at the Winter Simulation Conference (pp. 723-730).

Sarkar, N. I., & Halim, S. A. (2008, June 23-26). Simulation of computer networks: Simulators, methodologies and recommendations. Paper presented at the 5th IEEE International Conference on Information Technology and Applications (ICITA’08), Cairns, Queensland, Australia (pp. 420-425).

Sarkar, N. I., & Lo, E. (2008, December 7-10). Indoor propagation measurements for performance evaluation of IEEE 802.11g. Paper presented at the IEEE Australasian Telecommunications Networks and Applications Conference (ATNAC’08), Adelaide, Australia (pp. 163-168).

Sarkar, N. I., & Petrova, K. (2005, June 27-30). WebLan-Designer: A web-based system for interactive teaching and learning LAN design. Paper presented at the 3rd IEEE International Conference on Information Technology Research and Education, Hsinchu, Taiwan (pp. 328-332).

Scalable Network Technologies. (2007). QualNet Developer. Retrieved April 20, 2007, from http://www.qualnet.com/products/developer.php

OPNET Technologies. (2009). Retrieved January 20, 2009, from www.opnet.com

Tetcos. (2007). Products. Retrieved April 20, 2007, from http://www.tetcos.com/software.html

Tickoo, O., & Sikdar, B. (2003). On the impact of IEEE 802.11 MAC on traffic characteristics. IEEE Journal on Selected Areas in Communications, 21(2), 189–203. doi:10.1109/JSAC.2002.807346

Varga, A. (1999). Using the OMNeT++ discrete event simulation system in education. IEEE Transactions on Education, 42(4), 1–11. doi:10.1109/13.804564

Varga, A. (2001, June 6-9). The OMNeT++ discrete event simulation system. Paper presented at the European Simulation Multiconference (ESM’01), Prague, Czech Republic.


KEY TERMS AND DEFINITIONS

IEEE 802.11g: The IEEE 802.11g is the high-speed wireless LAN standard with a maximum data rate of 54 Mbps operating at 2.4 GHz. The IEEE 802.11g is backward compatible with the IEEE 802.11b.

OMNeT++: OMNeT++ is an object-oriented discrete event network simulator. It is an open source software package primarily designed for simulation and performance modeling of computer and data communication networks.

Computer Simulation: Computer simulation is a methodology that can be used for performance study of computer and telecommunication networks. It allows greater flexibility in model construction and validation.

INET Framework: The INET Framework (specialised in mobile nodes and protocols) provides various modules for developing simulation models. The INET Framework can be downloaded from www.hit.bme.hu/phd/varga/omnetpp.htm.


Chapter 18
Performance Modeling of IEEE
802.11 WLAN using OPNET:
A Tutorial
Nurul I. Sarkar
AUT University, New Zealand

ABSTRACT
Computer simulation is becoming increasingly popular among computer network researchers for per-
formance modeling and evaluation of computer and telecommunication networks. This popularity is due
to the availability of various sophisticated and powerful simulators, and also because of the flexibility in
model construction and validation offered by simulation. While various network simulators (both open
source and commercial) exist for modeling and performance evaluation of communication networks,
OPNET is becoming a popular network simulator as the package is available to academic institutions at
no cost, especially OPNET IT Guru. This chapter aims to provide a tutorial on OPNET focusing on the
simulation and performance modeling of IEEE 802.11 wireless local area networks (WLANs). Results
obtained show that OPNET provides credible simulation results close to a real system.

INTRODUCTION

The IEEE 802.11 is one of the most popular WLAN technologies in use today worldwide. This popularity results from the simplicity in operation, low cost, high speed, and user mobility offered by the technology. Computer simulation is becoming one of the most important tools for performance modeling and evaluation of telecommunication networks. It is often used to verify analytical models and to generalize propagation measurement results. Although a real network testbed allows maximum integrity for performance testing and prediction, it is, however, more economical to use simulation for performance evaluation purposes. Moreover, simulation can be performed at a very early stage of the system design and can therefore be very helpful in the design process. However, simulation can never be as accurate as a real system, and there are intrinsic drawbacks that network researchers/developers need to be aware of when using network simulators.

DOI: 10.4018/978-1-60566-774-4.ch018

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

There are several issues that need to be considered when selecting a network simulation package for simulation studies, for example, the use of reliable pseudo-random number generators, an appropriate method for analysis of simulation output data, and the statistical accuracy of the simulation results (i.e., desired relative precision of errors and confidence interval). These aspects of credible simulation studies are recommended by leading network researchers (Law & Kelton, 2000; Pawlikowski, Jeong, & Lee, 2002; N. I. Sarkar & Halim, 2008; Schmeiser, 2004).

OPNET (OPNET Technologies, 2008) is becoming one of the most popular network simulators as the package is available to academic institutions at no cost under the OPNET academic program. It contains numerous models of commercially available network elements, and has various real-life network configuration capabilities. This makes the simulation of a real-life network environment close to reality. However, network researchers are often reluctant to use this package because they may not be aware of its potential strengths and also because of the lack of a good tutorial on wireless network simulation using OPNET. To overcome this problem we provide a walk-through tutorial on OPNET focusing on the modeling and performance evaluation of IEEE 802.11 WLANs. This tutorial may be useful to both undergraduate and postgraduate students and to professionals who are interested in using a credible network simulator for wireless network simulations.

The remainder of this document is organized as follows. We first provide a review of literature on network simulations, including OPNET. We then highlight strengths and weaknesses of OPNET. A tutorial on modelling and simulation of 802.11 WLAN using OPNET is provided. The simulation results are presented for a realistic network scenario. Finally, the chapter concludes with a brief summary and direction for future work.

LITERATURE REVIEW

Computer network design and implementation involves the interaction of various networking devices including servers, network interface cards (NICs), switches, routers, and firewalls. In most cases it is ineffective with respect to time, money, and effort to test the performance of a live network. Computer network simulators are often used to evaluate the system performance without building a real network. However, the operation of a network simulator relies on various stochastic processes, including random number generators. Therefore, the accuracy of simulation results and model validation is an important issue. A main concern in wireless network simulations, or any simulation effort, is to ensure a model is credible and represents reality. If this cannot be guaranteed, the model has no real value and cannot be used to answer the desired questions (McHaney, 1991; Sargent, 2004). For selecting an appropriate network simulator for a particular application, it is important to have good knowledge of the simulator tools available, along with their strengths and weaknesses.

However, selecting the right level of detail for the simulation is a non-trivial task (Heidemann et al., 2001). For example, when simulating a large company intranet, the elements in the network are so complex that it would be a large overhead to simulate each single instruction being executed on a node. This, however, may be necessary when simulating a wireless sensor network, where energy consumption is one of the main concerns for the system developer (Shnayder, Hempstead, Chen, Allen, & Welsh, 2004).

While some simulation tools are multi-protocol and can be used for multiple purposes, another class of simulation tools can be highly specialized and used for specific purposes. This is why research groups in different networking areas (cellular networks, ad-hoc networks, sensor networks, IP networks) have developed simulators to meet specific requirements with respect to the level of detail. But even


within one area of network research the level of detail can make a significant difference in terms of simulation speed and memory consumption (N. I. Sarkar & Halim, 2008).

Modeling and simulation of wireless networks can be challenging, and a network developer must be aware of the inaccuracy of the results a simulation tool can produce. According to Takai et al. (2001), the physical layer is usually the least detailed modelled layer in network simulations, although the wireless physical layer is more complex than the wired physical layer and may affect the simulation results drastically. Physical layer parameters such as received signal strength, path loss, fading, interference and noise computation, and preamble length greatly influence WLAN performance. Therefore, simulation results may differ from real testbed evaluation both qualitatively and quantitatively (Kurkowski, Camp, & Colagrosso, 2005). Although Lucio et al. (2008) did not find such differences in their evaluation between OPNET and ns-2 (Fall & Varadhan, 2008), most other authors could observe that changing the network simulator can result in completely different conclusions. Some recommendations have been proposed to run a hybrid simulation/emulation based network evaluation which simulates the physical layer and executes the more abstract layers of the protocol stack. However, this would add a huge amount of complexity and could lead to scalability problems when dealing with a large number of wireless nodes.

Although drawing a conclusion from a simulation run has to be done very carefully, network developers are rarely independent from using simulators. Network simulation packages offer a rich environment for the rapid development, performance evaluation and deployment of networks. Perhaps qualitative comparisons between two competing architectures or protocols are more important than quantitative results, since the latter are not necessarily reliable and the use of different simulators might be appropriate (Haq & Kunz, 2005; Takai et al., 2001). If a major design decision has to be made and different simulators suggest different conclusions, implementing a real testbed would be appropriate.

To avoid misleading simulation results it is important to select an appropriate level of detail for the simulation runs. Depending on the networking area and the layer for which new algorithms or protocols are developed, the selection of an appropriate simulator is also vital. Therefore, network researchers and developers should use an appropriate simulation package for their simulation tasks.

STRENGTHS AND WEAKNESSES OF OPNET

OPNET has an easy to use GUI, and therefore it is easy to develop a network model for simulations. A skilled user can develop a complex model containing many different hierarchical layers within a short period of time; in particular, the interoperability of wide area networks (WANs) and LANs can be modelled efficiently. Although OPNET IT Guru has limited functionality, it supports various servers, routers and other networking devices with different manufacturer specifications, which is adequate for developing (small scale) models of real-world network scenarios. It is easy to evaluate the scalability of enterprise networks and the effects of changing server capacities and switching to a different network equipment provider. Other features of OPNET include a GUI interface, a comprehensive library of network protocols and models, source code for all models, and graphical presentation of simulation results. More importantly, OPNET has gained considerable popularity in academia as it is being offered free of charge to academic institutions. That has given OPNET an edge over ns-2 in both the marketplace and academia.

OPNET supports a huge amount of low level detail (CPU speed of servers, number of cores, CPU utilisation, etc.), but the effect of these remains uncertain. Even the effects of changes made to protocol behaviour (e.g. QoS scheduling) are difficult to observe in the simulation. Modeling the effect of low level protocol modification is generally very difficult to achieve. Whereas other OPNET products are likely to allow the development of one's own network nodes and protocol specifications, the academic version of IT Guru does not. Therefore, OPNET appears to be more suitable for high level network simulations (e.g. evaluating the scalability of a network architecture) and less suitable for low level protocol performance evaluation. The main weakness lies in the abstraction from the real world. Choosing the appropriate level of detail is vital for the accuracy of network simulations.

TUTORIAL

This section provides a walk-through tutorial on performance modeling of IEEE 802.11 WLANs using OPNET IT Guru. Although OPNET Modeler has more functionality than IT Guru, the process of developing models is very similar in both cases. This tutorial covers how to create a new model from scratch and a scenario, setting up simulation parameters, running simulations and getting simulation results.

The Workbench and the Workflow

Figure 1 shows an empty workbench for a new project. An infrastructure can easily be developed by dragging networking devices and components onto the workbench from the object palette. Figure 2 lists the eight toolbar objects and their brief description/function. These objects allow

Figure 1. An empty workbench


verification of links between nodes. Hierarchical networks can be developed using subnets.

In OPNET simulation models, both the ‘application configuration’ and ‘profile configuration’ are often used. The application configuration is used to define all the services offered by the network. Examples of services are FTP, database, and HTTP. These services can be assigned to network nodes and/or servers, thus defining which elements are acting as service providers. The profile configuration is used to define the kind of services/applications a network node is using. Assigning a profile to a network node allows a specific node to act as a client of those services.

Once the client and server roles are defined for the network nodes, one has to select the properties of the network of interest. Examples of node specific properties are response time and active connections. Global properties such as global throughput or delay can also be selected. After creating the network topology, assigning server and client roles, and selecting the network parameters of interest, we can run the simulation.

Figure 3 shows a framework in which OPNET models can be developed and run. OPNET supports protocol modification at various levels as shown in Figure 3. The main components of the framework are described next.

Creating a Model

After executing OPNET one can either create a new model or open an existing model. A new project can be created by selecting ‘new’ from the file menu. After entering a project name and scenario name, one can choose the network topology by selecting “Create Empty Scenario”. Next, we choose the scale of the model where nodes can easily be placed. In this tutorial we describe how to develop an office network. The pre-selected size of an office is 100m x 100m. We select the network technology, such as wireless_lan and wireless_lan_adv, for wireless network modeling. Figure 4 shows a screenshot of the startup wizard review.

Figure 2. Tool bar objects and their function


Figure 3. A framework for developing a model using OPNET

Figure 4. The review screen of the Startup Wizard

Creating the First Scenario

OPNET allows modeling a network with multiple scenarios. This is particularly useful for comparing network behaviour under different network topologies. Therefore it makes sense to create a new scenario for each network topology of a particular project. After creating a new model, the first (empty) scenario (“Scenario 1”) is created implicitly, and nodes are added in Scenario 1. By selecting the “wireless_lan_adv” palette, nodes are added into the workbench by drag and drop. Similarly, the Server is added into the workbench by selecting wlan_server_adv (fix). One can easily assign a custom name to each component on the workbench by right clicking on it and selecting the ‘Set Name’ option from the context menu.

The mobile nodes can easily be added to Scenario 1 using the rapid configuration tool, for example, by selecting Topology from the Rapid Configuration menu and creating an “unconnected net”. Figure 5 shows a screenshot of mobile nodes rapid configuration. In this example, the node model is “wlan_wkstn_adv” and the node type is


Figure 5. Mobile nodes rapid configuration

Figure 6. Developing a simple wireless network model

“mobile”. In the number field one can enter the desired number of nodes for rapid configuration (in this case 23). The placement and size of the network can remain at their default values. The network topology (after rapid configuration) is shown in Figure 6.

Because of WLAN modeling, there is no need to define a connection between the nodes. Both Application and Profile Configuration objects are included in the model to make the network active and to run applications (Figure 6). After setting up the attributes of Application Configuration (Figure 7), the value of the “Application definitions” attribute is then set up as shown in Figure 8.

As shown in Figure 8, various services (applications) can be selected to run on the network


Figure 7. Application Configuration attributes menu

Figure 8. Assigning a service behaviour

for performance evaluation. For example, http (light web browsing) and heavy ftp services are selected to study the impact of these services on network performance. By creating two new rows in the “Application Definitions” table and assigning them to ftp and http, we now have two network services available to the nodes. We then set up the Server attributes and assign services to the Server (Figure 9).

Next, we configure a Profile object so that network users (clients) can access these services. More detail about configuring a Profile object can be found in the OPNET manual (www.opnet.com). We only need one Profile definition because all

Figure 9. Assigning services to the Server

Figure 10. Setting up client profile


Figure 11. Assigning a client profile to multiple nodes

Figure 12. Selecting network performance metrics

clients in the network will behave the same way. The Profile Configuration is shown in Figure 10. By editing profile attributes, two rows are created in the applications table for the ftp and http services. Figure 11 illustrates the process of assigning a client profile to multiple nodes. This is achieved by right clicking on a node (client) and selecting “Similar Nodes”.
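Conceptually, the configuration built so far keeps three pieces of state: services defined once in the Application Configuration, one profile referencing those services, and that profile assigned to every client while the server offers the services. The relationships can be sketched in plain Python (an illustration only, not an OPNET API; all names are made up):

```python
# Application Configuration: the services the network offers.
applications = {"ftp": {"load": "heavy"}, "http": {"load": "light web browsing"}}

# Profile Configuration: which applications a client runs. One profile
# suffices because all clients behave the same way.
profiles = {"office_user": ["ftp", "http"]}

# Role assignment: the server supports the services; each client gets the profile.
server = {"name": "wlan_server", "supports": set(applications)}
clients = [{"name": f"node_{i}", "profile": "office_user"} for i in range(23)]

# Consistency check: every application in a client's profile must be
# offered by some server.
for client in clients:
    for app in profiles[client["profile"]]:
        assert app in server["supports"]
print(f"{len(clients)} clients share one profile with {len(applications)} services")
```

The point of the sketch is the separation of concerns: services are defined once, profiles reference them by name, and nodes only carry a role (server or a profile), which is exactly why assigning the profile to “Similar Nodes” is enough to configure all clients at once.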


Figure 13. Setting up simulation statistics

We have now created the first network Scenario 1 and configured both the server and clients. Notice that we have not assigned any traffic flow between two nodes yet. This is handled implicitly by OPNET.

Simulation Parameter Setup

Before executing a simulation model one has to select the desired performance metric(s). This can be achieved by right clicking on a node and selecting “Choose Individual Statistic” from the menu. It is likely to take longer simulation time to produce results if you choose to collect more performance metrics. We are most interested in obtaining the overall network throughput and the packet delay that a node experiences. Figure 12 shows a screenshot of the selection of network performance metrics.

Running the Simulation

A simulation model can be executed by clicking on the Run button on the tool bar. At this stage one can also specify the Seed value for the random number generation used in the simulator. This is particularly useful for experimenting with the impact of the seed on random number generation and consequently on simulation results. We used the default seed. The length of a simulation run can easily be modified by changing the duration field. We ran the simulation for 15 minutes of simulation time to obtain results in steady state. After the completion of simulation runs, results can be viewed by clicking on the “view graphs and tables of the collected statistics” button on the toolbar.

Figure 13 shows a screenshot of setting up simulation statistics. One can access the statistics by selecting them from the tree (left hand side). By


selecting “average” from the drop down menu at the bottom of the screen, one can obtain the average network throughput and packet delay.

Figure 14 shows the simulation results. The graph on the top represents the network throughput (bps) whereas the bottom graph represents the packet delay for node 15.

Adding a Second Scenario and Making Changes to the Protocol

After simulating a network model for a particular scenario, it is useful to observe the network performance with another scenario. In OPNET, an existing scenario can easily be duplicated and altered to create a new network scenario. One can easily switch between scenarios by pressing CTRL+n, where n is the number of the scenario. We created a second scenario called Scenario 2 and changed the data rate from 1 Mbps to 11 Mbps. We ran Scenario 2 and compared the result with that of Scenario 1. Figure 15 compares the results obtained from Scenarios 1 and 2. The graph on the top represents Scenario 1 whereas the bottom graph represents Scenario 2. It is observed that the packet delay for Scenario 2 is slightly lower than that of Scenario 1. This is because the network under Scenario 2 operates at 11 Mbps whereas Scenario 1 operates at 1 Mbps.

RESULTS AND ANALYSIS

Study 1: Effect of Increasing the Number of Nodes on IEEE 802.11 MAC Delay

To study the impact of nodes on IEEE 802.11 media access control (MAC) delay, we simulated seven scenarios with varying numbers of

Figure 14. Graphical presentation of simulation results


nodes. The summary of results is shown in Figure 16.

As shown in Figure 16, the network generally experiences high MAC delay when a large number of nodes are contending for access to the wireless channel. After a start-up phase the MAC delay is in the range of 10-20 ms depending on the number of nodes on the network. It is observed that the MAC delay with N = 2 nodes is slightly higher than in the network with N = 3 nodes. Also, the network with 5 nodes outperforms the 2-node network as far as MAC delay is concerned. The lower backoff times for concurrent access to the wireless channel contribute to less wastage of channel bandwidth and hence achieve lower MAC delay for a small number of nodes. The long-term averages of this value are roughly the same for the scenarios with 2 nodes and 10 nodes. However, networks with more than 10 nodes have an increased MAC delay that is proportional to the number of nodes.

The network behaviour (Figure 16) can be explained further with respect to the backoff time of the CSMA/CA algorithm that is used in the IEEE 802.11 protocol. The backoff time is a random amount of time that a wireless node waits after an unsuccessful transmission attempt as well as after sending a packet. This random value changes depending on the number of nodes that are currently active in the network (Cali, Conti, & Gregori, 2000). If there are only a few (fewer than 6) nodes in the wireless network, the backoff time is lower than in the same scenario with more nodes. Choi et al. (2005) observed similar results when measuring the behaviour of the throughput of wireless LANs.

Study 2: Impact of Increasing Wireless Node Density on IEEE 802.11g Throughput

In this study we focus on the throughput performance of an IEEE 802.11g infrastructure network. In the simulation model an office network of 35m×15m area is used. Figure 17 shows a snapshot of the OPNET wireless network model with one AP and 30 wireless stations. A generic wireless station model was configured as the IEEE 802.11g AP as well as the wireless stations.
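The contention behaviour discussed in Study 1 can be made concrete with a small sketch. The following Python fragment is our own illustration (it is not part of the chapter's OPNET models, and the function names are hypothetical); it uses the standard IEEE 802.11b DCF backoff parameters CWmin = 31 and CWmax = 1023 slots:

```python
import random

CW_MIN = 31    # minimum contention window in slots (802.11b DCF)
CW_MAX = 1023  # maximum contention window in slots

def contention_window(retries):
    """Contention window after `retries` consecutive failed attempts.
    The window doubles on each collision (binary exponential backoff)
    until it saturates at CW_MAX."""
    return min((CW_MIN + 1) * 2 ** retries - 1, CW_MAX)

def backoff_slots(retries):
    """A station defers for a random number of idle slots drawn
    uniformly from [0, CW]."""
    return random.randint(0, contention_window(retries))

# Successive retries give windows of 31, 63, 127, 255, 511, 1023 slots.
```

With more contending stations, collisions become more frequent, stations climb to higher backoff stages, and the average deferral grows, which is consistent with the MAC delay increase observed in Figure 16.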

Figure 15. Mean packet delay versus simulation time (Scenarios 1 and 2)


Figure 16. IEEE 802.11 MAC delay versus simulation time

Figure 18 shows a screenshot of the AP configuration and simulation parameter settings. The transmit power of the AP was set to 32 mW, which is close to that of the D-Link (DWL-2100) AP. Another important parameter is the packet reception power threshold (same as RSS); it was set to -88 dBm. This allows the wireless AP to communicate with wireless stations even when signals are weak, a common scenario in an obstructed office environment. Other parameters such as data rate, channel setting, and the frequency spectrum were set to the default values for 802.11g. The packet length threshold was set to 2,346 bytes (a realistic figure for wireless Ethernet networks).

The data rate, Tx power, RSS threshold, packet length threshold, and segmentation threshold are the same as for the AP. All wireless stations communicate using identical half-duplex wireless radios based on the 802.11g AP. The capture effects, transmission errors due to interference


Figure 17. OPNET representation of fully connected network with one AP and 30 wireless stations

and noise in the system, and hidden and exposed station problems are not considered in the simulation model. Streams of data packets arriving at stations are modeled as CBR with an aggregate mean packet generating rate of λ packets/s. The WLAN performance is studied under steady-state conditions.

As shown in Figure 19, the 802.11g AP throughput decreases with node density. For example, the network mean throughputs are 8.6, 3.2, 2.0, 1.3, 0.9, and 0.7 Mbps for N = 2, 10, 15, 20, 25 and 30, respectively. Because of the higher contention delays and backoff, the amount of data transmitted by a source station to a particular destination decreases as N increases; consequently, network mean throughput decreases. The main conclusion is that the number of active nodes (i.e., station density) has a significant effect on the throughput of an IEEE 802.11g infrastructure network.

SIMULATION ACCURACY AND MODEL VALIDATION

OPNET is a well-known commercial network simulation package and is becoming more and more popular in academia as the package is available to academic institutions at no cost (OPNET Technologies, 2008). Like ns-2, OPNET is a credible network simulator which has been tested by numerous network engineers and researchers worldwide (Banitsas, Song, & Owens, 2004; Chang, 1999; Chow, 1999; Green & Obaidat, 2003; Salah & Alkhoraidly, 2006; Zhu, Wang, Aweya, Oullette, & Montuno, 2002).

Lucio et al. (2008) have tested the accuracy of network simulation and modelling using both the OPNET Modeler and ns-2 by comparing the simulation results with the experimental results obtained from a live network. Based on the modelling of CBR and FTP sessions, the authors concluded that both the ns-2


Figure 18. IEEE 802.11g AP configuration

and OPNET Modeler perform well and provide very similar results.

Before evaluating the system performance it was necessary to verify that the simulation models represent reality. First, the OPNET models were validated through indoor propagation measurements from wireless laptops and access points for IEEE 802.11b and 802.11g networks. A good match between simulation and real measurement results for N = 2 to 4 stations validates the simulation models (N. I. Sarkar & Lo, 2008; N. I. Sarkar & Sowerby, 2006; Siringoringo & Sarkar, 2009). Second, the function of individual network components and their interactions was checked. Third, OPNET results were compared with the results obtained from ns-2 (Fall & Varadhan, 2008) and a good match between the two sets of results validated our models (N. I. Sarkar & Lo, 2008; N. I. Sarkar & Sowerby, 2009). The simulation results presented in this chapter were also compared with the work of other network researchers to ensure the correctness of the simulation model (Heusse, Rousseau, Berger-Sabbatel, & Duda, 2003; Ng & Liew, 2007; Nicopoliditis, 2003; Schafer, Maurer, & Wiesbeck, 2002).
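A simple way to quantify the "good match" between two sets of results (for example, OPNET output versus ns-2 output or testbed measurements) is a per-point relative error check. The sketch below is our own illustration: the OPNET throughputs are the values reported in Study 2, while the reference values are hypothetical stand-ins for a second result set:

```python
def relative_error(sim, ref):
    """Per-point relative error (in percent) between two result sets."""
    return [abs(s - r) / r * 100.0 for s, r in zip(sim, ref)]

def within_tolerance(sim, ref, tol_percent):
    """Cross-validation check: every simulated point must lie within
    tol_percent of the corresponding reference value."""
    return all(e <= tol_percent for e in relative_error(sim, ref))

# Mean throughputs (Mbps) from Study 2 for N = 2, 10, 15, 20, 25, 30:
opnet = [8.6, 3.2, 2.0, 1.3, 0.9, 0.7]
# Hypothetical reference results (e.g., from ns-2 or live measurements):
reference = [8.4, 3.3, 2.1, 1.25, 0.92, 0.68]
```

A check such as `within_tolerance(opnet, reference, 10.0)` formalizes the informal "good match" criterion used during model validation.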


Figure 19. IEEE 802.11g throughput versus simulation time

CONCLUSION

This chapter provided a tutorial on modeling and simulation of IEEE 802.11 networks using OPNET. The tutorial covered how to create a new model from scratch: creating the first scenario, setting up simulation parameters, running simulations and obtaining results. The level of detail offered by OPNET IT Guru suggests that this particular simulator is suitable for performance modeling of both wired and wireless networks. The chapter provided a review of the literature on simulation in general and OPNET in particular, focusing on the strengths and weaknesses of OPNET.

The chapter demonstrated the performance modeling of IEEE 802.11 by measuring MAC delay. The general characteristic of the protocol is evident from the simulation results, i.e., the MAC delay increases with a larger number of nodes. The models built using the OPNET simulator were validated using propagation measurements from wireless laptops and access points for an IEEE 802.11b/g WLAN. A good match between OPNET simulation results and real measurements was reported.

The chapter stresses the importance of using a good simulator for the performance modeling of wireless networks. OPNET offers flexibility in model construction and validation, and incorporates appropriate analysis of simulation output data and statistical accuracy of the simulation results. Without these features, a simulation model would be useless since it will


produce invalid results. An implementation of a new MAC protocol for wireless multimedia applications is suggested as an extension to the work reported in this chapter.

ACKNOWLEDGMENT

We would like to thank Christoph Lauer for setting up and conducting simulation experiments.

REFERENCES

Banitsas, K. A., Song, Y. H., & Owens, T. J. (2004). OFDM over IEEE 802.11b hardware for telemedical applications. International Journal of Mobile Communications, 2(3), 310–327.

Cali, F., Conti, M., & Gregori, E. (2000). IEEE 802.11 protocol: design and performance evaluation of an adaptive backoff mechanism. IEEE Journal on Selected Areas in Communications, 18(9), 1774–1786. doi:10.1109/49.872963

Chang, X. (1999). Network simulation with OPNET. Paper presented at the 1999 Winter Simulation Conference, Phoenix, AZ (pp. 307-314).

Choi, S., Park, K., & Kim, C. (2005). On the performance characteristics of WLANs: revisited. Paper presented at the 2005 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems (pp. 97–108).

Chow, J. (1999). Development of channel models for simulation of wireless systems in OPNET. Transactions of the Society for Computer Simulation International, 16(3), 86–92.

Fall, K., & Varadhan, K. (2008). The ns manual. The VINT project. Retrieved February 10, 2008, from http://www.isi.edu/nsnam/ns/doc/

Green, D. B., & Obaidat, M. S. (2003). Modeling and simulation of IEEE 802.11 WLAN mobile ad hoc networks using topology broadcast reverse-path forwarding (TBRPF). Computer Communications, 26(15), 1741–1746. doi:10.1016/S0140-3664(03)00043-4

Haq, F., & Kunz, T. (2005). Simulation vs. emulation: Evaluating mobile ad hoc network routing protocols. Paper presented at the International Workshop on Wireless Ad-hoc Networks (IWWAN 2005).

Heidemann, J., Bulusu, N., Elson, J., Intanagonwiwat, C., Lan, K., & Xu, Y. (2001). Effects of detail in wireless network simulation. Paper presented at the SCS Multiconference on Distributed Simulation (pp. 3–11).

Heusse, M., Rousseau, F., Berger-Sabbatel, G., & Duda, A. (2003, March 30-April 3). Performance anomaly of 802.11b. Paper presented at IEEE INFOCOM'03 (pp. 836-843).

Kurkowski, S., Camp, T., & Colagrosso, M. (2005). MANET simulation studies: the incredibles. ACM SIGMOBILE Mobile Computing and Communications Review, 9(4), 50–61. doi:10.1145/1096166.1096174

Law, A. M., & Kelton, W. D. (2000). Simulation modelling and analysis (3rd ed.). New York: McGraw-Hill.

Lucio, G. F., Paredes-Farrera, M., Jammeh, E., Fleury, M., & Reed, M. J. (2008). OPNET Modeler and Ns-2: comparing the accuracy of network simulators for packet-level analysis using a network testbed. Retrieved June 15, from http://privatewww.essex.ac.uk/~gflore/

McHaney, R. (1991). Computer simulation: a practical perspective. San Diego, CA: Academic Press.


Ng, P. C., & Liew, S. C. (2007). Throughput analysis of IEEE 802.11 multi-hop ad hoc networks. IEEE/ACM Transactions on Networking, 15(2), 309-322.

Nicopoliditis, P., Papadimitriou, G. I., & Pomportsis, A. S. (2003). Wireless networks. Hoboken, NJ: John Wiley & Sons.

OPNET Technologies. (2008). Retrieved September 15, 2008, from www.opnet.com

Pawlikowski, K., Jeong, H.-D. J., & Lee, J.-S. R. (2002). On credibility of simulation studies of telecommunication networks. IEEE Communications Magazine, 40(1), 132–139. doi:10.1109/35.978060

Salah, K., & Alkhoraidly, A. (2006). An OPNET-based simulation approach for deploying VoIP. International Journal of Network Management, 16, 159–183. doi:10.1002/nem.591

Sargent, R. G. (2004, December). Validation and verification of simulation models. Paper presented at the 2004 Winter Simulation Conference (pp. 17-28).

Sarkar, N. I., & Halim, S. A. (2008, June 23-26). Simulation of computer networks: simulators, methodologies and recommendations. Paper presented at the 5th IEEE International Conference on Information Technology and Applications (ICITA'08), Cairns, Queensland, Australia (pp. 420-425).

Sarkar, N. I., & Lo, E. (2008, December 7-10). Indoor propagation measurements for performance evaluation of IEEE 802.11g. Paper presented at the IEEE Australasian Telecommunications Networks and Applications Conference (ATNAC'08), Adelaide, Australia (pp. 163-168).

Sarkar, N. I., & Sowerby, K. W. (2006, November 27-30). Wi-Fi performance measurements in the crowded office environment: a case study. Paper presented at the 10th IEEE International Conference on Communication Technology (ICCT), Guilin, China (pp. 37-40).

Sarkar, N. I., & Sowerby, K. W. (2009, April 4-8). The combined effect of signal strength and traffic type on WLAN performance. Paper presented at the IEEE Wireless Communication and Networking Conference (WCNC'09), Budapest, Hungary.

Schafer, T. M., Maurer, J., & Wiesbeck, W. (2002, September 24-28). Measurement and simulation of radio wave propagation in hospitals. Paper presented at the IEEE 56th Vehicular Technology Conference (VTC2002-Fall) (pp. 792-796).

Schmeiser, B. (2004, December). Simulation output analysis: A tutorial based on one research thread. Paper presented at the 2004 Winter Simulation Conference (pp. 162-170).

Shnayder, V., Hempstead, M., Chen, B., Allen, G., & Welsh, M. (2004). Simulating the power consumption of large-scale sensor network applications. Paper presented at the 2nd International Conference on Embedded Networked Sensor Systems (pp. 188-200).

Siringoringo, W., & Sarkar, N. I. (2009). Teaching and learning Wi-Fi networking fundamentals using limited resources. In J. Gutierrez (Ed.), Selected readings on telecommunications and networking (pp. 22-40). Hershey, PA: IGI Global.

Takai, M., Martin, J., & Bagrodia, R. (2001, October). Effects of wireless physical layer modeling in mobile ad hoc networks. Paper presented at MobiHOC, Long Beach, CA (pp. 87-94).

Zhu, C., Wang, O. W. W., Aweya, J., Oullette, M., & Montuno, D. Y. (2002). A comparison of active queue management algorithms using the OPNET Modeler. IEEE Communications Magazine, 40(6), 158–167. doi:10.1109/MCOM.2002.1007422


KEY TERMS AND DEFINITIONS

IEEE 802.11: A family of wireless local area network (LAN) standards. The IEEE 802.11b is the wireless LAN standard with a maximum data rate of 11 Mbps operating at 2.4 GHz. The IEEE 802.11a is the high-speed wireless LAN with a maximum data rate of 54 Mbps operating at 5 GHz. The IEEE 802.11g is backward compatible with the IEEE 802.11b, with a maximum data rate of 54 Mbps operating at 2.4 GHz.

NIC: NIC stands for Network Interface Card, which is the hardware interface that provides the physical link between a computer and a network.

GUI: GUI stands for Graphical User Interface. Most modern operating systems provide a GUI, which enables a user to use a pointing device, such as a computer mouse, to provide the computer with information about the user's intentions.

OPNET IT GURU: OPNET is a discrete-event, object-oriented, general-purpose network simulator (commercial simulation package). OPNET IT GURU is a smaller version of OPNET Modeler which is available at no cost under the OPNET academic program.


Chapter 19
On the Use of Discrete-Event Simulation in Computer Networks Analysis and Design
Hussein Al-Bahadili
The Arab Academy for Banking & Financial Sciences, Jordan

ABSTRACT

This chapter presents a description of a newly developed research-level computer network simulator, which can be used to evaluate the performance of a number of flooding algorithms in ideal and realistic mobile ad hoc network (MANET) environments. It is referred to as MANSim. The simulator is written in the C++ programming language and consists of four main modules: network, mobility, computational, and algorithm modules. This chapter describes the philosophy behind the simulator and explains its internal structure. The new simulator can be characterized as a process-oriented discrete-event simulator using a terminating simulation approach and a stochastic input-traffic pattern. In order to demonstrate the effectiveness and flexibility of MANSim, it was used to study the performance of five flooding algorithms, namely: pure flooding, probabilistic flooding, LAR-1, LAR-1P, and OMPR. The simulator demonstrates excellent accuracy, reliability, and flexibility to be used as a cost-effective tool in analyzing and designing wireless computer networks in comparison with analytical modeling and experimental tests. It can be learned quickly and it is sufficiently powerful, comprehensive, and extensible to allow investigation of a considerable range of problems of complicated geometrical configuration, mobility patterns, probability density functions, and flooding algorithms.

DOI: 10.4018/978-1-60566-774-4.ch019

INTRODUCTION

System designers use performance evaluation as an integral component of the design effort. Figure 1 describes the general role of simulation in design. The designer relies on the simulation model to provide guidance in choosing among alternative design choices, to detect bottlenecks in system performance, or to support cost-effective analysis. As part of this process, the designer may use the simulation output to modify the system abstraction, model, or implementation as opposed to the

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

Figure 1. The role of simulation in validating a design model

system itself. The simulation output may also be used to include detail that may not have been considered in the previous abstraction, model, or implementation, for example to collect additional or alternative types of data (Sinclair 2004, Law & Kelton 2000).

Another important use of simulation is as a tool to help validate an analytical approach to performance evaluation. In an analytical approach, the system model is implemented as a set of equations. The solution to these equations captures in some way the behavior of the model and thus, optimistically, of the system itself. Analytical modeling often requires simplifying assumptions that make the results suspect until they have been confirmed by other techniques, such as simulation. Figure 2 illustrates the role of simulation in validating results from an analytical model. The system models A and B in Figure 2 may actually be identical, or they may be quite different (Nutaro 2007, Chung 2004).

Computer simulation is widely used in investigating the performance of existing and proposed systems in many areas of science, engineering, operations research, and management science, especially in applications characterized by complicated geometries and interaction probabilities, and for dealing with system design in the presence of uncertainty (Banks et al. 2005). This is particularly true in the case of computer systems and computer networks (Forouzan 2007, Stallings 2005, Tanenbaum 2003). In order to study a system using simulation, first some features from the system are abstracted, which are believed to be

Figure 2. The role of simulation in validating an analytical model


significant in determining its performance. This abstraction is called the system model. Next, the model is implemented by writing a computer program whose execution mimics the behavior of the model. Data collected during the simulation program's execution are used to compute estimates of the performance of the original system. The accuracy of these estimates depends on the fidelity of the model and the way in which the measurements are taken from the simulation programs (Sinclair 2004).

The principal objective of this work is to introduce the reader to the principles of computer simulation and how it can be used in computer networks analysis and design. Simulation models can be classified according to different criteria; however, in this work, we consider three widely-used criteria in classifying the different types of simulation models, and these are: time-variation of the state of the system variables, simulation termination procedure, and input-traffic pattern. According to the first criterion, simulation models can be classified into continuous-valued, discrete-event, or a combination of the two. Discrete-event simulation can be implemented using one of three methodologies: event-driven, process-oriented, and distributed simulation. Process-oriented simulation is a well-approved and powerful tool for evaluating the performance of computer networks in comparison with analytical modeling and experimental tests.

The process-oriented model is used in developing a computer network simulator, namely, MANSim (Al-Bahadili 2008), which can be used to investigate the performance of a number of flooding algorithms in MANETs. The simulator is written in the C++ programming language and consists of four main modules: network, mobility, computational, and algorithm modules. In order to demonstrate the effectiveness and flexibility of MANSim, it was used to study the performance of five flooding algorithms, namely: pure flooding (Bani-Yassein 2006), probabilistic flooding (Scott & Yasinsac 2004, Sasson et al. 2003), location-aided routing scheme 1 (LAR-1) (Ko & Vaidya 2000), LAR-1-probabilistic (LAR-1P) (Al-Bahadili et al. 2007), and the optimal multipoint relaying (OMPR) (Jaradat 2009, Qayyum et al. 2002). The results demonstrated that simulation is an excellent cost-effective tool that can be used in analyzing and designing wireless computer networks in comparison with analytical modeling and experimental tests.

The rest of the chapter is organized as follows: The application of computer simulation is presented in Section 2. In Section 3, the main challenges to computer simulation are discussed. Section 4 provides a detailed discussion of the different classes of simulation models. Examples of simulation languages and network simulators and their main features are presented in Sections 5 and 6, respectively. A description of the MANSim network simulator is given in Section 7. The different wireless environments and the computed parameters that are implemented in MANSim are defined in Sections 8 and 9, respectively. Section 10 presents an example of simulation results obtained from MANSim. Finally, in Section 11, conclusions are drawn and recommendations for future work are pointed out.

THE APPLICATION OF COMPUTER SIMULATION

The application of computer simulation can potentially improve the quality and effectiveness of the system model. In general, modeling and simulation can be considered as a decision support tool in many areas of science, engineering, management, accounting, etc. It provides a more economical and safer option for learning from potential mistakes - that is to say, it can reduce cost and risk, and improve the understanding of the real-life systems that are being investigated. It also can be used to investigate and analyze the performance of the system model under extreme working environments.
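As a concrete instance of using simulation alongside an analytical model (the use illustrated in Figure 2), the following Python sketch (our own example, not part of MANSim) estimates the mean packet delay in a single-server M/M/1 queue via the Lindley recursion, so that it can be compared against the closed-form result T = 1/(mu - lambda):

```python
import random

def simulate_mm1_delay(lam, mu, n_packets, seed=1):
    """Estimate the mean time a packet spends in an M/M/1 queue
    (waiting plus service) by simulating n_packets arrivals using
    the Lindley recursion."""
    rng = random.Random(seed)
    wait = 0.0    # queueing delay of the current packet
    total = 0.0   # accumulated time in system
    for _ in range(n_packets):
        service = rng.expovariate(mu)
        total += wait + service
        interarrival = rng.expovariate(lam)
        # The next packet waits for whatever work remains on arrival.
        wait = max(0.0, wait + service - interarrival)
    return total / n_packets

# Analytical mean delay for a stable M/M/1 queue: T = 1 / (mu - lam)
```

With lam = 0.5 and mu = 1.0 packets per unit time, the analytical mean delay is 2.0, and the simulation estimate approaches it as the number of simulated packets grows; a large disagreement would signal an invalid model on one side or the other.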


Computer simulation translates some aspects of the physical world into a mathematical model (description) followed by regenerating that model on a computer – which can be used instead of performing an actual physical task. For instance, simulations are widely used to evaluate the performance of routing protocols in wireless ad hoc networks characterized by the presence of noise and high node mobility (Tseng et al. 2002), to measure packet delay in data networks (Fusk et al. 2003), and to simulate TCP/IP applications (Ahmed & Shahriari 2001). In addition, computer modeling and simulation can be used as a computer network learning tool (Asgarkhani 2002).

A quick review of some of the projects that are employing computer simulation reveals various applications, such as:

• Training people to perform complex tasks.
• Designing better computer chips.
• Providing better weather forecasts.
• Performing predictions of the global economy.
• Studying social interaction.
• Analyzing financial risks.
• Compiling complex corporate plans.
• Designing complex computer networks.

Simulation modeling and analysis is one of the most frequently used operations research techniques. When used judiciously, simulation modeling and analysis makes it possible to (Maria 1997):

• Improve communication – e.g. standardizing the treatment of experimental data and the description of results to the various interested research groups all over the world.
• Build a knowledge base of quantitative information.
• Characterize the data mathematically, by parameter fitting - rather than continually referring to the raw data.
• Validate a process - to increase understanding of the process.
• Study reproducibility - to determine those factors to which the process is sensitive and quantify their effect.
• Study process economics - to perform cost-effective and reliable analyses for different operational scenarios.
• Reduce experimental costs - to reduce trial-and-error experimentation over the long term.
• Study process optimization - to perform 'what-if' scenarios and investigate different possibilities, especially extreme operating environments which may be difficult to experiment with.

However, there are some pitfalls which need to be carefully considered in simulation. Simulation can be a time-consuming and complex exercise, from modeling through output analysis, that necessitates the involvement of resident experts and decision makers in the entire process. Following is a checklist of pitfalls to be considered.

• Unclear objective.
• Using simulation when an analytic solution is appropriate.
• Invalid model.
• Simulation model too complex or too simple.
• Erroneous assumptions.
• Undocumented assumptions. This is extremely important and it is strongly suggested that assumptions made at each stage of the simulation modeling and analysis exercise be documented thoroughly.
• Using the wrong input probability distribution.
• Replacing a distribution (stochastic) by its mean (deterministic).
• Using the wrong performance measure.
• Bugs in the simulation program.


• Using standard statistical formulas that assume independence in simulation output analysis.
• Initial bias in output data.
• Making one simulation run for a configuration.
• Poor schedule and budget planning.
• Poor communication among the personnel involved in the simulation study.

CHALLENGES TO COMPUTER NETWORKS SIMULATION

It is generally unfeasible to implement computer network algorithms before valid tests have been performed to evaluate their performance. It is clear that testing such implementations with real hardware is quite hard, in terms of the manpower, time, and resources required to validate the algorithm and measure its characteristics in the desired realistic environments. External conditions also can affect the measured performance characteristics. The preferred alternative is to model these algorithms in a detailed simulator and then perform various scenarios to measure their performance for various patterns of realistic computer network environments (e.g., connection media, node densities, node mobility, radio transmission range, transmission environment, size of traffic, etc.).

The main challenge to simulation is to model the process as close as possible to reality; otherwise it could produce entirely different performance characteristics from the ones discovered during actual use. In addition, the simulation study must carefully consider four major factors while conducting credible simulation for computer network research. The simulation study must be (Kurkowski et al. 2006):

• Repeatable: The same results can be obtained for the same scenario every time the simulation is repeated.
• Unbiased: The results must not be biased to show a high performance for specific scenarios.
• Realistic: The scenarios used for tests must be of a realistic nature.
• Statistically sound: The execution and analysis of the experiment must be based on mathematical principles.

CLASSIFICATION OF SIMULATION MODELS

Simulation models can be classified according to several criteria, for example (Sinclair 2004, Roeder 2004, Hassan & Jain 2003):

• Time variation of the state of the system variables.
(1) Continuous-valued simulation
(2) Discrete-event simulation
• Simulation termination.
(1) Terminating simulation
(2) Steady-state simulation
• Input traffic pattern.
(1) Trace-driven simulation
(2) Stochastic simulation

In what follows a description is given for each of the above simulation models.

Continuous-Valued versus Discrete-Event Simulation

Simulation models can be categorized according to the state of the system variables (events) into two categories; these are: continuous-valued and discrete-event simulations, which are described below.

Continuous-Valued Simulation

In a continuous-valued simulation, the values of the system states change continuously with


the system states change continuously with time. The various states of the system are usually represented by a set of differential-algebraic equations, differential equations (either partial or ordinary), or integro-differential equations. The simulation program solves all the equations, and uses the numbers to change the state and output of the simulation. Originally, these kinds of simulations were actually implemented on analog computers, where the differential equations could be represented directly by various electrical components such as op-amps. By the late 1980s, however, most “analog” simulations were run on conventional digital computers that emulate the behavior of an analog computer.

To implement a continuous-valued model as a simulation program on digital computers, the equations representing the system are usually approximated by difference equations. In such simulations, time typically advances in fixed increments or clock ticks. When time is incremented, the simulation program computes new values for all system state variables. Often iteration is required to converge to a “correct” solution for the new values. If the time increment is too large, state variable values may not converge, or may violate some model-specific constraint between successive clock ticks. If this occurs, a simulator may try reducing the size of the clock tick and recomputing the new values. Continuous-valued simulation is widely used in many areas of science and engineering, such as chemical process simulation, physical process simulation, electrical and electronic circuit simulation, control system simulation, etc. (Nutaro 2007).

Discrete-Event Simulation

Discrete-event simulation deals with system models in which changes happen at discrete instances in time, rather than continuously. For example, in a model of a computer communication network, the arrival of a message at a router is a change in the state of the model; the model state in the interval between successive message arrivals remains constant. Since nothing of interest happens to the model between these changes, it is not necessary to observe the model’s behavior (its state) except at the time a change occurs (Zeigler 2003).

Discrete-event simulations usually fit into one of three categories (Sinclair 2004):

i. Event-driven simulation
ii. Process-oriented simulation
iii. Distributed simulation

In this section we present a detailed description of process-oriented simulation, because it is the major focus of this chapter; but to understand how it works, we need to begin with a discussion of event-driven simulation. A distributed simulation model, in turn, is implemented as a set of processes that exchange messages to control the sequencing and nature of changes in the model state. The advantage of this approach is that it is a natural and perhaps the only practical way to describe a discrete-event simulation model to be executed in parallel (Nutaro 2004 & Nutaro 2003).

i. Event-driven simulation

An event is a change in the model state. The change takes zero time; i.e., each event is the boundary between two stable periods in the model’s evolution (periods during which the state variables do not change), and no time elapses in making the change. The model evolves as a sequence of events. To describe the evolution of the model, we need to know when the events occur and what happens to the model at each event.

The heart of an event-driven simulation is the event set. This is a set of (event, time) pairs, where event specifies a particular type of state change and time is the point in simulation time at which the event occurs. The event set is often implemented as a list, maintained in time-sorted order. The first entry (the head of the list) has an event time that is less than or equal to the event


times of all other events in the list. An event-driven simulation also maintains a simulated time clock, the value of which is the time of the most recent event that has been processed.

The basic operation of an event-driven simulation with the event set implemented as a sorted list is as follows (Sinclair 2004):

1. Set the simulation clock to 0; place a set of one or more initial events in the event list, in time-sorted order.
2. Fetch the event E consisting of the ordered pair (E.type, E.time) at the head of the event list; if the event list is empty, terminate the simulation.
3. Set the simulation time to E.time. If E.time is greater than the maximum simulation time specified for the execution of the simulation model, terminate the simulation.
4. Use the event identifier E.type to select the appropriate event-processing code, called an event service routine.
5. Execute the selected code. During this execution, an event may update system information held in global data structures, and it may cause new events E’ (with E’.time ≥ E.time) to be inserted in the event list. Note that it does not change the simulation time.
6. At the completion of execution of the event service routine, go to step 2.

Often, an empty event list at step 2 indicates an error in the design of the simulation. Any simulation that is intended to describe the steady-state behavior of a system will, on average, add one event to the event set for each event removed from the event set. However, the length of the event list may vary during simulation execution.

The basic idea of the representation of an event can be extended to one that consists of a triple (E.type, E.time, E.info), where E.info is information that is specific to the particular event type. It might be a single value such as an identifier of a job that requires processing at a CPU, or it might be a pointer to a structure containing several related pieces of information. The simulation driver, which is the part of the simulator responsible for maintaining the event list and checking for termination conditions based on simulation time, does not interpret the information in E.info; this is the responsibility of the event service routine.

The diagram in Figure 3 represents the overall structure of a simple event-driven simulation execution. The boxes labeled execute event i represent event service routines, which may be implemented as individual procedures or as part of a general event-processing module (or both). As noted above, an event service routine may cause zero or more new events to be inserted on the event list. If an event service routine inserts a new event on the event list, the type of the new event may be different from the event associated with the service routine. A key point is that events never change the simulation time directly; they can only affect simulation time by the creation of new events which are inserted on the event list.

Event-driven simulation is completely general; any discrete-event simulation may be implemented with this approach. However, it is not “user-friendly” in some respects. This is because each event stands alone: events by definition maintain no context because they only exist for zero simulation time. Process-oriented simulation provides an approach that allows related state changes to be combined in the context of a process.

ii. Process-oriented simulation

In process-oriented simulation, sets of related event types are grouped together in a process, which is similar to the concept of a process in an operating system. A process consists of a body of code, the resources (primarily memory) allocated to that code for its execution, and its state: the current point of execution in the code and the values of the variables accessed by the code. A process typically exists over a non-zero interval of simulation time, and may be active for the entire


Figure 3. Event-driven simulation

duration of the simulation. A process can execute and then be suspended pending the passage of a specified interval of simulation time or pending some interaction with another process.

The simulation driver is considerably more complicated than in the event-driven case. In addition to managing the event list, it must provide functions for:

•	Process creation and activation/scheduling
•	Process suspension
•	Process termination
•	Interprocess communication/synchronization

Readers with operating system experience can see the similarity between the simulation driver and process management in a conventional operating system. Indeed, process-oriented simulators could be written using the process management facilities available under an operating system such as UNIX, but the overhead in process-level context switching is too large to make this practical for many situations. Instead, the simulation processes are usually lightweight threads within the context of a single UNIX-style process (Peek et. al. 2001).

We can take either of two viewpoints in structuring processes. A process can model some resource, such as a CPU, a disk, a channel, or a communications network. Jobs or customers that request service from these resources are represented as data structures that are passed from one process to another. Alternatively, jobs can be represented by processes, and resources in the system are described by shared data structures to which the jobs have access. In other words, the state of each component is represented by one or more global variables. The model is usually easier to implement with one of these approaches than the other, depending on how much the user can take advantage of the ability to group related events into a single process. The choice may also depend on the process interaction features provided by the simulation language.

The general structure of a process-oriented simulation is shown in Figure 4. The event list has been simplified; essentially, there is only one type


Figure 4. Process-oriented simulation

of event – start or wake up a process – eliminating the need for a field that identifies the event type. The E.info field mentioned above becomes the process ID field. It may point to a process descriptor in a table of descriptors for all active processes, or it may be the process descriptor itself. In this context, a process descriptor is an object that contains all of the information necessary to determine the state of a process whose execution is currently suspended. When the simulation driver gets the next event, it uses the process descriptor to find and restore the environment or context of that process and then starts the process at the point at which it suspended.

The process environment includes the process stack, register contents (including the program counter, stack pointer, and processor status word), and other process-specific information. In creating a process, the simulation driver must allocate memory for the process stack and initialize various register values. The per-process stack allocation is potentially a limitation in the use of process-oriented simulation.

As Figure 4 indicates, a process that has been activated can be suspended in basically two ways. First, the process can simply delay itself for a fixed interval of simulation time, in which case its descriptor is reinserted into the event list at the proper place. Second, the process can suspend itself for an indefinite period of time. For example, the process may wait on a semaphore. The process descriptor can then only be reinserted into the event list due to the action of some other process, such as signaling a semaphore. Not indicated in Figure 4 is the possibility that a process will simply terminate and disappear.

The choice of process-oriented simulation over event-driven simulation is not one of modeling power but of implementation convenience. Process-oriented implementations group related events into a single process, using the normal flow of control within the execution of the process to


define the sequencing of the events. Since the events within a process occur at different simulation times, the process exists over an interval of simulation time, unlike an event, which occurs at a single instant of time and then is gone. The ease of implementation and debugging with process-oriented simulation compared to event-driven simulation is gained at a cost of some additional overhead due to the need to switch contexts when a process suspends.

It is accurate to say that all discrete-event simulations are event-driven. Process-oriented simulation cloaks the basic event mechanism in higher-level features that make writing, debugging, and understanding the simulation model easier, while distributed simulation replaces the event set completely with a mechanism that uses time-stamped messages that semi-autonomous parts of the simulation model exchange. In both cases, the user’s view of the simulation model implementation and its execution is substantially different than in event-driven simulation.

Terminating versus Steady-State Simulation

Based on the simulation terminating criteria, computer simulation can be classified into two categories (Hassan & Jain 2003):

Terminating Simulation

A terminating simulation is used to study the behavior of a system for a well-defined period of time or number of events. Examples of such simulations may include:

•	Evaluating the performance of a new TCP protocol stack only during the office hours of 9 AM to 5 PM. The simulation is terminated after 8 hours of simulated time.
•	Evaluating the performance of a new scheme for downloading 100 specific objects from a popular website. The simulation is terminated as soon as the last of the 100 objects is downloaded.

Steady-State Simulation

A steady-state simulation is used to investigate the steady-state behavior of a system. In such simulations, the simulation continues until the system reaches a steady state; otherwise the simulation results can be significantly different from the true results. Examples of such simulations include:

•	Measuring the long-term packet loss rate in a congested router.
•	Evaluating the average network reachability in a MANET.

In such simulations, the simulation must be continued until the system reaches a steady state; otherwise the mean packet-loss rate in the first example, or the average network reachability in the second example, fluctuates as the system goes through the transient phase.

Trace-Driven versus Stochastic (Synthetic) Simulation

Computer and related systems are most often simulated using one of two types of input traffic patterns; these are (Hassan & Jain 2003):

Trace-Driven Simulation

Trace-driven simulation is an important tool in many areas. The idea is that model inputs are derived from a sequence of observations made on a real system. A classic example from computer engineering involves the design of new cache memory organizations. The designer uses a computer system, either in hardware or in software, to record the time-ordered sequence of memory accesses generated during the execution of some program(s). This ordered set, called the trace or address trace, contains the address of each memory


reference and the type of access: read or write. Then the designer uses the trace as input to a trace-driven simulation implementation of a model of a cache memory system, and from the model’s outputs determines parameters of interest, such as the miss ratio, in judging the merits of the design.

In computer network trace-driven simulations, traces of packet arrival events (arrival time, packet size, etc.) are first captured from an operational network using a performance measurement tool (e.g., tcpdump). These traces are then processed to be converted into a suitable format, and used as input traffic for the simulation to produce the performance results.

The main advantage of trace-driven simulation is that the model inputs are real world; they are not approximations or “guesstimates” whose accuracy may be questionable, and the same trace can be used to evaluate and compare the performance of several algorithms and schemes. To achieve more credibility, many performance analysts use trace-driven simulation to evaluate the performance of a new algorithm. A major disadvantage is that this approach is not applicable to all types of systems. Limitations also exist if the system to be modeled invalidates the trace in some way. In the example of trace-driven simulation of a cache, the address trace generated by a uniprocessor may be misleading if applied to a model of a cache in a multiprocessor system.

Stochastic (Synthetic) Simulation

In stochastic simulation, the system workload or the model input is characterized by various probability distributions: Poisson, exponential, On/Off, self-similar, etc. During the simulation execution, these distributions are used to produce random values which are the inputs to the simulation model. Most computer network simulation studies rely heavily on stochastic models in generating the input traffic pattern. In MANETs, a stochastic model is used to calculate node locations within the network area, direction of movement, probability of reception, etc. The stochastically generated input traffic never matches 100% with the actual traffic observed on a given practical network.

SIMULATION LANGUAGES

A computer simulation language describes the operation of a simulation model on a computer. A simulation model can be implemented in a general-purpose programming language (e.g., C++, C#, Java, etc.) or in a language tailored specifically to describing simulation models (see below). The main features of simulation languages are (Rorabaugh 2004, Maria 1997):

•	Provide a run-time environment.
•	Are developed with low-level programming languages such as the C programming language.
•	Provide a library of simulation-oriented functions to assist the user in implementing the simulation model.
•	Include mechanisms for managing the simulation program execution and collecting data.
•	Support parallel and distributed computing systems.
•	Support both event and process-oriented simulations.
•	Combine related events into a single process, which makes the design, implementation, and, perhaps most importantly, debugging of the simulation model much easier.
•	Have the advantage that users are likely to find many of their concepts familiar because of prior knowledge of operating system design and parallel programs.

Simulation languages can handle both major types of simulation, continuous and discrete-event, though more modern languages can handle combinations of them. Most languages also have a graphical interface and at least simple statistical gathering capability for the analysis of the results.
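As an illustration of the stochastic (synthetic) approach described above, the following sketch synthesizes a packet-arrival trace by drawing exponentially distributed inter-arrival times, i.e., a Poisson arrival process. The code is illustrative only; it is not taken from any of the simulators or languages discussed in this chapter, and the function name and parameters are the author’s own.

```python
import random

def synthetic_arrivals(rate, horizon, seed=1):
    """Synthesize packet arrival times over [0, horizon] by drawing
    exponentially distributed inter-arrival gaps (a Poisson process)."""
    rng = random.Random(seed)        # seeded, so the run is repeatable
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate)   # inter-arrival gap ~ Exp(rate)
        if t > horizon:
            break
        arrivals.append(t)
    return arrivals

# e.g., a 100-second trace averaging 2 packets per second
times = synthetic_arrivals(rate=2.0, horizon=100.0)
```

Replacing `expovariate` with draws from an On/Off or heavy-tailed distribution changes the traffic model without touching the rest of the simulation; as noted above, however, such synthetic traffic never matches the traffic observed on a real operational network exactly.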


An important part of discrete-event languages is the ability to generate pseudo-random numbers and varieties of probability distributions.

Discrete-event simulation languages view the model as a sequence of random events, each causing a change in state. Examples are: AutoMod, eM-Plant, Rockwell Arena, GASP, GPSS, SimPy (an open-source package based on Python), SIMSCRIPT II.5 (a well-established commercial compiler), Simula, Java Modelling Tools (an open-source package with a graphical user interface), Poses++, etc.

Continuous simulation languages view the model essentially as a set of differential equations. Examples include: Advanced Continuous Simulation Language (ACSL) (supports textual or graphical model specification), Dynamo, SimApp (simple simulation of dynamic systems and control systems), Simgua (a simulation toolbox and environment that supports Visual Basic), Simulation Language for Alternative Modeling (SLAM), VisSim (a visually programmed block diagram language), etc.

Hybrid simulation languages handle both continuous and discrete-event simulations. Examples of such languages include: AMESim (a simulation platform to model and analyze multi-domain systems and predict their performance), the AnyLogic multi-method simulation tool (supports system dynamics, discrete-event simulation, and agent-based modeling), Modelica (an open-standard object-oriented language for modeling of complex physical systems), EcosimPro Language (EL) (continuous modeling with discrete events), Saber-Simulator (simulates physical effects in different engineering domains, e.g., hydraulic, electronic, mechanical, thermal, etc.), Simulink, SPICE (analog circuit simulation), the Z simulation language, Scilab (contains a simulation package called Scicos), XMLlab (simulations with XML), Flexsim 4.0 (powerful interactive software for continuous and discrete-event flow simulation), and Simio software (for continuous, discrete-event, and agent-based simulation).

NETWORK SIMULATORS

There are a number of computer network simulators that have been developed throughout the years to support computer network analysis and design. Some of them are of general-purpose use and others are dedicated to simulating particular types of computer networks. A typical network simulator can provide the programmer with the abstraction of multiple threads of control and inter-thread communication. Functions and protocols are described either by a finite-state machine, native programming code, or a combination of the two. A simulator typically comes with a set of predefined modules and a user-friendly graphical user interface (GUI). Some network simulators even provide extensive support for visualization and animation.

There are also emulators such as the NIST Network Emulation Tool (NIST NET). By operating at the IP level, it can emulate the critical end-to-end performance characteristics imposed by various wide area network (WAN) situations or by various underlying subnetwork technologies in a lab test-bed environment. Computer network simulators can also be classified as academic or commercial simulators (Chang 1999).

Some examples of academic simulators include:

•	REAL: REAL is a simulator for studying the dynamic behavior of flow and congestion control schemes in packet-switched data networks. Network topology, protocols, data, and control parameters are represented by a Scenario, which is described using NetLanguage, a simple ASCII representation of the network. About 30 modules are provided which can exactly emulate the actions of several well-known flow control protocols.
•	INSANE: INSANE is a network simulator designed to test various IP-over-ATM algorithms with realistic traffic loads derived from empirical traffic measurements.


Its ATM protocol stack provides real-time guarantees to ATM virtual circuits by using rate-controlled static priority (RCSP) queuing. A protocol similar to the real-time channel administration protocol (RCAP) is implemented for ATM signaling. A Tk-based graphical simulation monitor can provide an easy way to check the progress of multiple running simulation processes.

•	NetSim: NetSim is intended to offer a very detailed simulation of Ethernet, including realistic modeling of signal propagation, the effect of the relative positions of stations on events on the network, the collision detection and handling process, and the transmission deferral mechanism. But it cannot be extended to address modern networks.
•	Maisie: Maisie is a C-based language for hierarchical simulation, or more specifically, a language for parallel discrete-event simulation. A logical process is used to model one or more physical processes; the events in the physical system are modeled by message exchanges among the corresponding logical processes in the model.

Other examples also include ns-2 (ns Network Simulator), VINT, U-Net, USC TCP-Vegas test-bed, TARVOS, NCTUns, WiredShark, Harvard simulator, and Network Workbench.

As to commercial simulators, examples include BONeS, COMNET III, and OPNET. BONeS DESIGNER provides lots of building blocks, modeling capabilities, and analysis tools for the development and analysis of network products, protocols, and system architectures. With its recently released ATM Verification Environment (AVE), it is specifically targeted at ATM architectural exploration and hardware sizing. COMNET III, a graphical, off-the-shelf package, lets you quickly and easily analyze and predict the performance of networks ranging from simple LANs to complex enterprise-wide systems (CACI). Starting with a library of network objects, with each COMNET III object representing a real-world object, COMNET III’s object-oriented framework and GUI give the user the flexibility to try an unlimited number of “what if” scenarios.

For maximum effectiveness, a simulation environment should be modular, hierarchical, and take advantage of the graphical capabilities of today’s workstations. The Optimized Network Engineering Tool (OPNET) is an object-oriented simulation environment that meets all these requirements and is the most powerful general-purpose network simulator. OPNET provides a comprehensive development environment for the specification, simulation, and performance analysis of communication networks. A large range of communication systems, from a single LAN to global satellite networks, can be supported. OPNET’s comprehensive analysis tool is especially ideal for interpreting and synthesizing output data. A discrete-event simulation of the call and routing signaling was developed using a number of OPNET’s unique features, such as the dynamic allocation of processes to model virtual circuits transiting through an ATM switch. Moreover, its built-in Proto-C language support provides it the ability to realize almost any function and protocol.

THE MANET SIMULATOR (MANSIM)

This section describes a process-oriented discrete-event MANET simulator, namely MANSim (Al-Bahadili 2008). In fact, this section describes the philosophy behind the simulator, gives a brief outline of its history, explains its internal structure, and describes its use in computer network teaching and research. MANSim is especially developed to simulate and evaluate the performance of a number of flooding algorithms for MANETs; it is a research-level simulator, available to the academic community under a no-cost license.

According to the classification criteria discussed in Section 4, MANSim can be characterized as:


•	Process-oriented discrete-event simulator
•	Terminating simulation
•	Stochastic input traffic pattern

It is written in the C++ language, and it consists of four major modules:

(1) Network module (Geometrical configuration)
(2) Mobility module
(3) Computational module
(4) Algorithm module

In what follows, a description is given of each of the above modules.

Network Module (Geometrical Configuration)

The network module is concerned with the distribution of mobile nodes within the network area. MANSim simulates two types of node distribution, these are:

Regular-Grid Nodes Distribution

In a regular-grid node distribution configuration, the network is considered as a regular grid where nodes are placed at each intersection of the grid, as illustrated in Figures 5 and 6. For this configuration, two node degrees are considered, namely the 4-node degree and the 8-node degree. In a 4-node degree (Figure 5), each node is allowed to communicate directly with its vertical and horizontal neighbors, and the radio transmission range of the node covers the one-hop neighbor in each direction. In an 8-node degree (Figure 6), nodes are also allowed to communicate with the diagonal neighbors.

Figure 5. Regular-grid nodes distribution (4-node degree)

The regular-grid configuration is quite simplistic, but it is useful for calculating benchmark analytical results for some computed network parameters for a specific network condition. These benchmark analytical results can be used to validate the simulation results. However, a more realistic configuration is required, one that may consider random (non-regular) node distribution and produce variable node degrees.

Random Node Distribution

In a random node distribution configuration, the nodes are randomly placed on the X×Y network area, as illustrated in Figure 7. They are placed according to a particular probability distribution function (PDF), such as the linear distribution, Poisson’s distribution, etc. For example, in a linear distribution, the x and y positions of a node are calculated as follows:

x = X·ξ (1)

y = Y·ξ (2)

Where X and Y are the length and width of the network area, and ξ is a random number uniformly selected between 0 and 1 (0≤ξ<1). Two nodes i and j are considered to be connected, or neighbors, if the Euclidean distance between these two nodes (r) is less than or equal to the radio transmission range of the node (R), where r is given by:


Figure 6. Regular-grid nodes distribution (8-node degree)

r = √[(xi − xj)² + (yi − yj)²] (3)

One important point that must be carefully considered when using random node distribution is to make sure that, initially, each node within the network has at least one neighboring node.

Figure 7. Random node distribution

Mobility Module

One of the main characteristics of MANETs is the mobility of their nodes. Mobility can be simulated using different mobility patterns (models). One of the most widely-used patterns is the random-walk mobility pattern, in which the direction of movement for a mobile node is randomly chosen from an appropriate PDF. In most applications, a node is allowed to move with equal probability in any direction within the geographical area of interest, i.e., the direction is sampled randomly from a uniform PDF.

The random walk mobility pattern is simulated as follows: each node is allowed to move around randomly within the network area during the simulation. The movement pattern of a node is simulated by generating a direction (θ), a speed (u), and a time interval (τ), which is also referred to as a pause time. The direction is sampled from a uniform PDF between 0 and 2π, thus

θ = 2πξ (4)

Nodes are either allowed to move with a pre-assigned average speed (uav), i.e., u=uav, with a pre-assigned maximum speed (umax), i.e., u=umax, or with a node speed sampled randomly between 0 and umax (i.e., u=ξumax).

In order to consider node mobility, a simulation time (Tsim) must be set up; it is divided into a number of time intervals (nIntv), which allows the pause time to be calculated as:

τ = Tsim/nIntv (5)

The distance traveled by the node is calculated as

d = uτ (6)

Then, the new node location at time t+τ is calculated by:

x(t+τ) = x(t) + d cos(θ) (7)

y(t+τ) = y(t) + d sin(θ) (8)
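Equations (4) to (8) translate directly into code. The following sketch is illustrative Python, not MANSim’s actual C++ implementation; it performs a single random-walk move for one node, sampling the direction per Equation (4) and the speed between 0 and umax, and it omits the boundary check discussed next:

```python
import math
import random

def random_walk_step(x, y, u_max, tau, xi=random.random):
    """Move a node for one pause-time interval tau using the
    random-walk mobility pattern (Equations (4)-(8))."""
    theta = 2.0 * math.pi * xi()       # Eq. (4): theta = 2*pi*xi
    u = u_max * xi()                   # speed sampled in [0, u_max)
    d = u * tau                        # Eq. (6): distance d = u*tau
    return (x + d * math.cos(theta),   # Eq. (7): new x position
            y + d * math.sin(theta))   # Eq. (8): new y position

# one move of a node starting at (50, 50)
nx, ny = random_walk_step(50.0, 50.0, u_max=2.0, tau=5.0)
```

Calling this once per pause time τ = Tsim/nIntv (Equation (5)) for every node reproduces the movement pattern described above.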


Where x(t), y(t) and x(t+τ), y(t+τ) are the old and new locations of the node, respectively. This new node location must be checked to be within the network area; if it is not (i.e., the node leaves the network area), there are different ways to bring the node back into the network. In this model, a reduced-weight approach is used to ensure that the node remains within the network area.

In the reduced-weight approach, the node is kept moving in the same direction, but the distance traveled (d) is reduced by multiplying it by a descending weight until the new location falls within the network area (i.e., d = d·ω). The weight ω is given by:

ω = (Imax − k)/Imax (9)

An appropriate value for Imax is between 2 and 10, and k is set to zero and is incremented by 1 each time the node travels outside the network area. However, other mobility patterns can be easily implemented in MANSim.

Computational Module

Many computational models start a simulation from a single source node positioned at the center of the network area, or from a single source node randomly selected within the network area. The simulation is repeated S times, i.e., the source node is assumed to transmit S request messages. The results obtained from these simulations are averaged to give average values for the computed parameters. The results reflect the average behavior with regard to this particular source node, but they may not well reflect the average behavior of other nodes within the network.

A major feature of the MANSim computational module is that it does not randomly pick a node and use it as a fixed source node. Instead, a loop is performed using all nodes within the network as source nodes; then the computation of the network parameters is performed sequentially over all nodes, except the source node, as destination nodes. The computed parameters for each source node are averaged over (n−1), and then these averaged values are averaged again over (n). In other words, the computed parameters are averaged over (n(n−1)). In this case, the computed parameters may well represent the average behavior of any of the nodes within the network.

Due to the probabilistic approach, in order to enhance the accuracy of the solution, the computation is repeated, in an inner loop, for each source and destination node for a number of runs, i.e., each source is allowed to initiate S request messages. Once again, the computed parameters are averaged over S. However, it has been found that with a small number of runs the solution converges to a more stable solution, and for networks having no probabilistic behavior, i.e., pt=1, S has no effect on the computed parameters and can be set to 1.

As has been mentioned earlier, in order to consider node mobility, a simulation time (Tsim) is set. It is divided into a number of time intervals (nIntv), which yields a time interval or pause time (τ). The calculation is repeated, in an outer loop, for nIntv, and the results obtained for the computed parameters are averaged over nIntv. In general, it has been found that to obtain an adequate network performance, the pause time must be carefully chosen so that the distance traveled by the node, during the location update interval, is less than the radio transmission range of the source node. For non-mobile nodes (fixed nodes), nIntv has no effect on the computed parameters and can be set to 1. Figure 8 outlines the computational module of the MANSim simulator.

Algorithm Module

In this module, the flooding optimization algorithm is implemented. In MANSim, until now, five flooding algorithms have been implemented; these are:
these are:

433
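The boundary handling of Eq. (9) can be sketched in a few lines. The following fragment is our own illustration, not MANSim code (MANSim itself is written in C++, and the function name and direction representation here are assumptions): it retries the move with the weight ω = (Imax - k)/Imax for k = 0, 1, ..., Imax until the new location falls inside the square network area.

```python
import math

def move_with_reduced_weight(x, y, d, angle, size, i_max=5):
    """Advance a node a distance d in direction `angle` (radians); if the
    new point falls outside the [0, size] x [0, size] network area, shrink
    d by the weight w = (i_max - k) / i_max (Eq. 9) until it fits."""
    dx, dy = math.cos(angle), math.sin(angle)
    for k in range(i_max + 1):
        w = (i_max - k) / i_max          # k = 0 tries the full distance (w = 1)
        nx, ny = x + d * w * dx, y + d * w * dy
        if 0.0 <= nx <= size and 0.0 <= ny <= size:
            return nx, ny
    return x, y                          # w reached 0: the node stays in place
```

With Imax = 5 a node at x = 900 m heading right by 200 m in a 1000 m area overshoots for the weights 1, 0.8 and 0.6, and is finally placed at x = 980 m with w = 0.4.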
On the Use of Discrete-Event Simulation in Computer Networks Analysis and Design

Figure 8. Computational module of the MANSim simulator

• Pure flooding (Al-Bahadili & Jaradat 2007, Jaradat 2007)
• Probabilistic flooding (Al-Bahadili & Jaradat 2007, Jaradat 2007)
• Location-aided routing scheme 1 (LAR-1) algorithm (Al-Bahadili et al. 2007, Al-Thaher 2007)
• A hybrid LAR-1 and probabilistic (LAR-1P) algorithm (Al-Bahadili et al. 2007, Al-Thaher 2007)
• Optimal multipoint relaying (OMPR) algorithm (Al-Bahadili et al. 2009, Jaradat 2009)

Discussion of the algorithms is beyond the scope of this chapter; details on each of the above algorithms can be found in the references listed next to each of them. In addition, further discussion of these algorithms can be found in (Bani-Yassein et al. 2006, Ko & Vaidya 2000, Tseng et al. 2002, Qayyum et al. 2002). However, they all have a number of algorithm-dependent procedures to calculate:

• A node reception index vector iRec(i). This index is initialized with zero values and updated when a node receives a broadcast message. This occurs if the receiving node is within the radio transmission range of the transmitting node, and no error occurs during data transmission due to noise interference. Each time a node i successfully receives a request, the index iRec(i) is incremented by 1, where i represents the node ID. This index is used to calculate network parameters such as the average duplicate reception (ADR) and the reachability (RCH).
• A node retransmission index vector iRet(i). This index is initialized with zero values, except for the source and destination nodes, and updated each time the node succeeds in retransmitting the broadcast message. The node index iRet(i) is set to 1 if node i retransmits a received message. This index is used to calculate the number of retransmissions (RET) within the network.
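The two index vectors can be maintained with a simple breadth-first broadcast loop. The sketch below is our own illustration of the bookkeeping, not MANSim code (MANSim is written in C++); it implements pure flooding over a unit-disk connectivity model, and for simplicity only the source node is pre-marked in iRet.

```python
import math
from collections import deque

def pure_flood(nodes, src, radius, p_c=1.0, rand=None):
    """One pure-flooding broadcast from node `src` over nodes given as
    (x, y) positions.  i_rec[i] counts successful (possibly duplicate)
    receptions at node i; i_ret[i] is 1 if node i (re)transmitted.
    p_c is the probability that a transmission within range is received."""
    rand = rand or (lambda: 0.0)         # deterministic default: always receive
    n = len(nodes)
    i_rec, i_ret = [0] * n, [0] * n
    queue = deque([src])
    i_ret[src] = 1                       # the source's initial transmission
    while queue:
        t = queue.popleft()
        tx, ty = nodes[t]
        for i, (x, y) in enumerate(nodes):
            if i != t and math.hypot(x - tx, y - ty) <= radius and rand() < p_c:
                i_rec[i] += 1            # successful reception (duplicates counted)
                if not i_ret[i]:         # first copy: pure flooding retransmits once
                    i_ret[i] = 1
                    queue.append(i)
    return i_rec, i_ret
```

On a chain of three nodes 100 m apart with a 150 m radius, all three transmit once, and the middle node records two receptions (one duplicate), which is exactly the information ADR and RET are computed from.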
WIRELESS NETWORK ENVIRONMENTS

In MANSim, two types of network environments can be simulated; these are:

1. Noiseless (error-free) environment: A noiseless (error-free) environment represents an ideal network environment, in which it is assumed that all data transmitted by a source node is successfully and correctly received by a destination node. It can be characterized by the following axioms or assumptions, which are still part of many MANET simulation studies, despite the increasing awareness of the need to represent noisy features (Kotz et al. 2004):
   i. The world is flat
   ii. All radios have equal and circular radio transmission range
   iii. Communication link symmetry
   iv. Perfect link
   v. Signal strength is a simple function of distance.
2. Noisy (error-prone) environment: A noisy (error-prone) environment represents a realistic network environment, in which the received signal differs from the transmitted signal due to various transmission impairments, such as (Kotz et al. 2004):
   i. Wireless signal attenuation (patt)
   ii. Free space loss (pfree)
   iii. Thermal noise (ptherm)
   iv. Atmospheric absorption (patm)
   v. Multipath effect (pmult)
   vi. Refraction (pref)

All of these impairments are represented by a generic name, noise, and the environment is called a noisy environment. For modeling and simulation purposes, the noisy environment can be described by introducing a probability function, referred to as the probability of reception (pc). It is defined as the probability that wirelessly transmitted data survives being lost and is successfully delivered to a destination node despite the presence of all or any of the above impairments. Thus, pc can be calculated as:

pc = patt ∙ pfree ∙ ptherm ∙ patm ∙ pmult ∙ pref (10)

For a noiseless environment pc is set to unity.
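Equation (10) treats the impairments as independent survival probabilities, so pc is simply their product, and a reception event within radio range becomes a Bernoulli trial with success probability pc. A minimal sketch (our own, with illustrative names, not MANSim's C++ interface):

```python
import random

def reception_probability(p_att, p_free, p_therm, p_atm, p_mult, p_ref):
    """Eq. (10): pc is the product of the per-impairment survival
    probabilities; a noiseless environment corresponds to every factor
    (and hence pc) being 1."""
    pc = p_att * p_free * p_therm * p_atm * p_mult * p_ref
    assert 0.0 <= pc <= 1.0
    return pc

def is_received(pc, rng=random):
    """Bernoulli trial: does a transmission within range survive the noise?"""
    return rng.random() < pc
```

Setting every factor to 1 recovers the noiseless case pc = 1, in which is_received always succeeds.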
COMPUTED PARAMETERS

MANSim calculates a number of network parameters to evaluate the performance of the implemented algorithms. These parameters were recommended by the IETF to judge the performance of flooding algorithms (Ko & Vaidya 2000, Tseng et al. 2002):

(1) Number of retransmissions (RET). The percentage of nodes that retransmit the broadcast message. It is calculated as the number of nodes that actually retransmit the broadcast message divided by the total number of nodes within the network (n).
(2) Average Duplicate Receptions (ADR). The number of times the same broadcast message is received by each node within the network.
(3) Reachability (RCH). The probability of delivering a broadcast message between a source and a destination node within the network. It is calculated as the number of nodes that can be reached from a certain source node divided by n.
(4) Average hop count (AHC). The average number of network segments on the route from the source to the destination node.
(5) Saved rebroadcast (SRB). The percentage reduction in the number of retransmissions. It is calculated by dividing the difference between the number of nodes receiving the broadcast message and the number of nodes that actually retransmitted the message by the number of nodes receiving the broadcast message.
(6) Disconnectivity (DIS). The probability of failing to deliver the broadcast message to a certain destination node within the network. It is calculated as 1-RCH.
(7) Average latency (LAT). The interval from the time the broadcast was initiated to the time the last node finished its rebroadcasting.

In addition, MANSim can be used to investigate the effect of a number of input network parameters, such as:

(1) Node density (nd). The number of nodes per unit area (nd=n/A), where A is the network area (A=X×Y).
(2) Node mobility or node speed (u). Nodes are assumed to move with either an average speed (uav), a maximum speed (umax), or a randomly selected speed.
(3) Node transmission radius (R), which represents the area that can be covered by a certain node.
(4) Retransmission probability (pt). The probability of retransmitting a received broadcast message.
(5) Reception probability (pc). The probability of a broadcast message being successfully received by a destination node located within the transmission range of the source node.
(6) Simulation time (Tsim). The total simulation time.

SIMULATION RESULTS

The discrete-event simulator MANSim has been used in a number of research studies to evaluate the performance of flooding algorithms, such as those mentioned in Section 7. It has proved to be an efficient and flexible simulation tool that can be used to provide insight into the performance of these algorithms in ideal and realistic MANET environments.

In this work, we present results for one scenario, in which we compare the performance of six broadcast algorithms in noiseless and noisy environments. These algorithms are:

(1) Pure flooding
(2) Probabilistic flooding with fixed retransmission probability (pt=0.8)
(3) Probabilistic flooding with dynamic retransmission probability
(4) LAR-1 algorithm
(5) LAR-1P algorithm with fixed retransmission probability (pt=0.8)
(6) OMPR algorithm.

The performance is compared in terms of three parameters: RET, ADR, and RCH. The input parameters for this scenario are summarized in Table 1.

The simulation results for RET, ADR, and RCH are shown in Figures 9, 10, and 11, respectively. Since the main objective of this work is to demonstrate the use of discrete-event simulation in computer networks analysis and design, the simulation results are only briefly discussed here; a detailed discussion can be found in the references listed next to each algorithm or in (Al-Bahadili et al. 2009, Jaradat 2009).

The main points concluded from this scenario can be summarized as follows:

• The probabilistic flooding algorithm always achieves the highest possible RCH, but at the same time it introduces low savings in RET and ADR when compared with the other algorithms.
• The LAR-1 and LAR-1P algorithms present the highest reduction in RET and ADR, but at the same time they provide the lowest RCH.
• The OMPR algorithm presents a moderate
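Given the per-node index vectors iRec and iRet from the algorithm module, parameters (1), (2), (3), (5) and (6) reduce to simple counting. A sketch (our naming, not MANSim's; whether the source's initial transmission counts as a retransmission is a modeling choice, and here it is excluded):

```python
def flood_metrics(i_rec, i_ret, src):
    """RET, ADR, RCH, DIS, SRB from the reception/retransmission indices.
    RET and RCH are fractions of the n network nodes; ADR is the mean
    number of receptions per node; DIS = 1 - RCH; SRB compares the nodes
    reached with the nodes that actually retransmitted."""
    n = len(i_rec)
    reached = sum(1 for r in i_rec if r > 0)
    retrans = sum(i_ret) - i_ret[src]    # exclude the source's first transmission
    ret = retrans / n
    adr = sum(i_rec) / n
    rch = reached / n
    dis = 1.0 - rch
    srb = (reached - retrans) / reached if reached else 0.0
    return ret, adr, rch, dis, srb
```

For the four-node chain example (i_rec = [1, 2, 1, 0], i_ret = [1, 1, 1, 0], source 0) this yields RET = 0.5, ADR = 1.0, RCH = 0.75, DIS = 0.25 and SRB = 1/3.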
reduction in RET and ADR when compared with probabilistic flooding (fixed and dynamic pt), LAR-1, and LAR-1P. It performs better than probabilistic flooding and worse than LAR-1 and LAR-1P for various network noise levels and node speeds. However, the RCH it achieves is higher than that of the LAR-1 and LAR-1P algorithms.

Table 1. Input parameters

Geometrical model: Random node distribution
Network area: 1000×1000 m
Number of nodes (n): 100 nodes
Transmission radius (R): 200 m
Average node speed (u): 5 m/sec
Probability of reception (pc): 0.5 to 1.0 in steps of 0.1
Simulation time (Tsim): 300 sec
Pause time (τ): 30 sec

Figure 9. Variation of RET with pc for various algorithms

Since the main objective of using flooding optimization is to achieve cost-effective reachability, which means the highest possible reachability at a reasonable cost, in this work cost is measured in terms of RET and ADR. The results obtained demonstrate that the OMPR algorithm provides an excellent performance, as it achieves the best cost-effective reachability for various network noise levels and node speeds,
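For reproducibility, the Table 1 inputs can be collected into a single configuration object. A sketch (names ours; MANSim itself is a C++ program and does not expose this interface):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScenarioInputs:
    """Input parameters of the example scenario (Table 1)."""
    area_x_m: float = 1000.0            # network area width
    area_y_m: float = 1000.0            # network area height
    n_nodes: int = 100
    radius_m: float = 200.0             # transmission radius R
    speed_mps: float = 5.0              # average node speed u
    pc_values: tuple = (0.5, 0.6, 0.7, 0.8, 0.9, 1.0)  # swept reception prob.
    t_sim_s: float = 300.0              # total simulation time Tsim
    pause_s: float = 30.0               # pause time tau

    @property
    def n_intervals(self) -> int:
        """nIntv: number of pause-time intervals in the simulation time."""
        return round(self.t_sim_s / self.pause_s)
```

With these values the outer loop of the computational module runs over nIntv = 300/30 = 10 location-update intervals.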

Figure 10. Variation of ADR with pc for various algorithms

Figure 11. Variation of RCH with pc for various algorithms


as compared to probabilistic flooding (fixed and dynamic pt) and to the LAR-1 and LAR-1P algorithms.

Figures 9, 10, and 11 demonstrate that the OMPR algorithm provides an excellent network RCH in a noisy environment when compared with LAR-1 and LAR-1P, at a significant reduction in RET and ADR. For example, for fixed nodes and pc=0.5, it achieves an RCH of 47.1% compared with 33.9% and 23.8% for LAR-1 and LAR-1P (pt=0.8), respectively. This is achieved at a cost of 11% RET compared with 6.6% and 4.2% for LAR-1 and LAR-1P (pt=0.8), respectively.

The figures also show that the probabilistic and OMPR algorithms provide almost comparable performance in noiseless and low-noise environments (pc>0.8). In terms of network reachability, however, the probabilistic approach outperforms the OMPR algorithm in a noisy environment. For example, for mobile nodes with u=5 m/sec and pc=0.5, the OMPR algorithm achieves a reachability of only 34.6%, while for the same environment the probabilistic approach achieves over 85%. But the probabilistic approach achieves this high network reachability at a very high cost of RET (≈68%) and ADR (≈3.5 duplicate receptions per node), compared with RET=10.3% and ADR=0.587 for the OMPR algorithm.

CONCLUSION

This chapter presented a description of a newly developed research-level computer network simulator, MANSim. It can be used to evaluate the performance of a number of flooding algorithms in ideal and realistic MANETs.

The main conclusions of this chapter can be summarized as follows:

• The simulator demonstrates excellent accuracy, reliability, and flexibility in analyzing and designing wireless computer networks in comparison with analytical modeling and experimental tests, as it allows investigating various network and operation environments with minimum cost and effort, for example various node densities, node speeds, node radio transmission ranges, noise levels, etc.
• The simulator can be learned quickly, and it is sufficiently powerful, comprehensive, and extensible to allow investigation of a considerable range of complicated problems.
• Due to the flexible modular structure of the simulator, it can be easily modified to process other flooding algorithms, mobility models, probability density functions, etc.
• Many of its modules are designed for easy re-use with other simulators.

For future work, it is highly recommended to implement more flooding algorithms, realistic mobility patterns, variable radio transmission ranges, and various probability distribution functions for node distribution within the network area or for sampling the direction of movement of mobile nodes. Furthermore, we suggest modifying the computational model to allow computing the power conservation achievable using variable radio range adjustment methodologies.

REFERENCES

Ahmed, Y., & Shahriari, S. (2001). Simulation of TCP/IP Applications on CDPD Channel. M.Sc Thesis, Department of Signals and Systems, Chalmers University of Technology, Sweden.

Al-Bahadili, H. (2008). MANSim: A Mobile Ad Hoc Network Simulator. Personal Communication.
Al-Bahadili, H., Al-Basheer, O., & Al-Thaher, A. (2007). A Location Aided Routing-Probabilistic Algorithm for Flooding Optimization in MANETs. In Proceedings of the Mosharaka International Conference on Communications, Networking, and Information Technology (MIC-CNIT 2007), Jordan.

Al-Bahadili, H., Al-Zoubaidi, A. R., Al-Zayyat, K., Jaradat, R., & Al-Omari, I. (2009). Development and performance evaluation of an OMPR algorithm for route discovery in noisy MANETs. To be published.

Al-Bahadili, H., & Jaradat, Y. (2007). Development and performance analysis of a probabilistic flooding in noisy mobile ad hoc networks. In Proceedings of the 1st International Conference on Digital Communications and Computer Applications (DCCA2007), (pp. 1306-1316), Jordan.

Al-Thaher, A. (2007). A Location Aided Routing-Probabilistic Algorithm for Flooding Optimization in Mobile Ad Hoc Networks. M.Sc Thesis, Department of Computer Science, Amman Arab University for Graduate Studies, Jordan.

Asgarkhani, M. (2002). Computer modeling and simulation as a learning tool: A preliminary study of network simulation products. Christchurch Polytechnic Institute of Technology (CPIT), Christchurch, New Zealand.

Bani-Yassein, M., Ould-Khaoua, M., Mackenzie, L., & Papanastasiou, S. (2006). Performance analysis of adjusted probabilistic broadcasting in mobile ad hoc networks. International Journal of Wireless Information Networks, 13(2), 127–140. doi:10.1007/s10776-006-0027-0

Banks, J., Carson, J. S., & Nelson, B. L. (1996). Discrete-Event System Simulation (2nd Ed.). Upper Saddle River, NJ: Prentice Hall.

Chang, X. (1999). Network simulation with OPNET. In P. A. Farrington, H. B. Nembhard, D. T. Sturrock, & G. W. Evans (Eds.), Proceedings of the 1999 Winter Simulation Conference, (pp. 307-314).

Chung, A. C. (2004). Simulation and Modeling Handbook: A Practical Approach. Boca Raton, FL: CRC Press.

Forouzan, B. A. (2007). Data Communications and Networking (4th Ed.). Boston: McGraw-Hill.

Fusk, H., Lawniczak, A. T., & Volkov, S. (2001). Packet delay in models of data networks. ACM Transactions on Modeling and Computer Simulation, 11(3), 233–250. doi:10.1145/502109.502110

Hassan, M., & Jain, R. (2003). High Performance TCP/IP Networking: Concepts, Issues, and Solutions. Upper Saddle River, NJ: Prentice-Hall.

Jaradat, R. (2009). Development and Performance Analysis of Optimal Multipoint Relaying Algorithm for Noisy Mobile Ad Hoc Networks. M.Sc Thesis, Department of Computer Science, Amman Arab University for Graduate Studies, Jordan.

Jaradat, Y. (2007). Development and Performance Analysis of a Probabilistic Flooding in Noisy Mobile Ad Hoc Networks. M.Sc Thesis, Department of Computer Science, Amman Arab University for Graduate Studies, Jordan.

Ko, Y., & Vaidya, N. H. (2000). Location-Aided Routing (LAR) in mobile ad hoc networks. Wireless Networks, 6(4), 307–321.

Kotz, D., Newport, C., Gray, R. S., Liu, J., Yuan, Y., & Elliott, C. (2004). Experimental evaluation of wireless simulation assumptions. In Proceedings of the 7th ACM International Symposium on Modeling, Analysis, and Simulation of Wireless and Mobile Systems, (pp. 78-82).

Kurkowski, S., Camp, T., & Colagrosso, M. (2006). MANET simulation studies: The current state and new simulation tools. Department of Mathematics and Computer Sciences, Colorado School of Mines, CO.

Law, A., & Kelton, W. D. (2000). Simulation Modeling and Analysis (3rd Ed.). Boston: McGraw-Hill Higher Education.

Maria, A. (1997). Introduction to modeling and simulation. In S. Andradottir, K. J. Healy, D. H. Withers, & B. L. Nelson (Eds.), Proceedings of the 1997 Winter Simulation Conference, (pp. 7-13).

Nutaro, J. (2003). Parallel Discrete Event Simulation with Application to Continuous Systems. PhD Thesis, University of Arizona, Tucson, Arizona.

Nutaro, J. (2007). Discrete event simulation of continuous systems. In P. Fishwick (Ed.), Handbook of Dynamic System Modeling. Boca Raton, FL: Chapman & Hall/CRC.

Nutaro, J., & Sarjoughian, H. (2004). Design of distributed simulation environments: A unified system-theoretic and logical processes approach. Journal of Simulation, 80(11), 577–589. doi:10.1177/0037549704050919

Peek, J., Todin-Gonguet, G., & Strang, J. (2001). Learning the UNIX Operating System (5th Ed.). Sebastopol, CA: O'Reilly & Associates.

Qayyum, A., Viennot, L., & Laouiti, A. (2002). Multipoint relaying for flooding broadcast messages in mobile wireless networks. In Proceedings of the 35th Hawaii International Conference on System Sciences, (pp. 3866-3875).

Roeder, T. M. K. (2004). An Information Taxonomy for Discrete-Event Simulations. PhD Dissertation, University of California, Berkeley, CA.

Rorabaugh, C. B. (2004). Simulating Wireless Communication Systems: Practical Models in C++. Upper Saddle River, NJ: Prentice-Hall.

Sasson, Y., Cavin, D., & Schiper, A. (2003). Probabilistic broadcast for flooding in wireless mobile ad hoc networks. In Proceedings of the Wireless Communications and Networking Conference (WCNC '03), 2(16-20), 1124-1130.

Scott, D., & Yasinsac, A. (2004). Dynamic probabilistic retransmission in ad hoc networks. In Proceedings of the International Conference on Wireless Networks, (pp. 158-164).

Sinclair, J. B. (2004). Simulation of Computer Systems and Computer Networks: A Process-Oriented Approach. George R. Brown School of Engineering, Rice University, Houston, Texas, USA.

Stallings, W. (2005). Wireless Communications and Networks (2nd Ed.). Upper Saddle River, NJ: Prentice-Hall.

Tanenbaum, A. (2003). Computer Networks (4th Ed.). Upper Saddle River, NJ: Prentice Hall.

Tseng, Y., Ni, S., Chen, Y., & Sheu, J. (2002). The broadcast storm problem in a mobile ad hoc network. Wireless Networks, 8, 153–167. doi:10.1023/A:1013763825347

Zeigler, B. P. (2003). DEVS today: Recent advances in discrete event-based information. In Proceedings of the 11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, (pp. 148-162).

KEY TERMS AND DEFINITIONS

Discrete-Event Simulation: In discrete-event simulation the operation of a system is represented as a chronological sequence of events. Each event occurs at an instant in time and marks a change of state in the system.

Continuous-Valued Simulation: In a continuous-valued simulation, the values of the system
states change continuously with time. The various states of the system are usually represented by a set of algebraic, differential, or integro-differential equations. The simulation program solves the equations and uses the results to change the state and output of the simulation.

Process-Oriented Simulation: A type of simulation that allows related state changes to be combined in the context of a process.

Event-Driven Simulation: A type of simulation that allows the system model to evolve as a sequence of events, where an event represents a change in the model state. The change takes zero time; i.e., each event is the boundary between two stable periods in the model's evolution (periods during which the state variables do not change), and no time elapses in making the change.

Trace-Driven Simulation: An important tool in many simulation applications in which the model's inputs are derived from a sequence of observations made on a real system.

Stochastic Simulation: In stochastic simulation the system workload or the model input is characterized by various probability distributions, e.g., Poisson, exponential, On/Off, self-similar, etc. During the simulation execution, these distributions are used to produce random values which are the inputs to the simulation model.

Simulation Language: A computer language that describes the operation of a simulation model on a computer.

Network Simulator: A software tool developed to support computer networks analysis and design. Some network simulators are of general-purpose use and others are dedicated to simulating particular types of computer networks.

MANSim: An academic, research-level computer network simulator, which can be used to evaluate, analyze, and compare the performance of a number of flooding algorithms in ideal and realistic MANET environments. It is written in the C++ programming language, and consists of four main modules: network, mobility, computational, and algorithm modules.

MANET: A MANET, which stands for mobile ad hoc network, is defined as a collection of low-power wireless mobile nodes forming a temporary wireless network without the aid of any established infrastructure or centralized administration.

Flooding Algorithm: A flooding algorithm is an algorithm for distributing material to every part of a connected network. Flooding algorithms are used in systems such as Usenet and peer-to-peer file sharing systems, and as part of some routing protocols. There are several variants; most work roughly as follows: each node acts as both a transmitter and a receiver, and each node tries to forward every message to every one of its neighbors except the source node. This results in every message eventually being delivered to all reachable parts of the network.

Distributed Simulation: In a distributed simulation the model is implemented as a set of processes that exchange messages to control the sequencing and nature of changes in the model states.

Terminating Simulation: A terminating simulation is used to study the behavior of a system for a well-defined period of time or number of events.

Steady-State Simulation: A steady-state simulation is used to investigate the steady-state behavior of a system, where the simulation continues until the system reaches a steady state. Otherwise the simulation results can be significantly different from the true results.

Chapter 20
Queuing Theory and
Discrete Events Simulation
for Health Care:
From Basic Processes to Complex
Systems with Interdependencies
Alexander Kolker
Children’s Hospital and Health Systems, USA

ABSTRACT
This chapter describes applications of the discrete events simulation (DES) and queuing analytic (QA)
theory as a means of analyzing healthcare systems. There are two objectives of this chapter: (i) to il-
lustrate the use and shortcomings of QA compared to DES by applying both of them to analyze the same
problems, and (ii) to demonstrate the principles and power of DES methodology for analyzing both simple
and rather complex healthcare systems with interdependencies. This chapter covers: (i) comparative
analysis of QA and DES methodologies by applying them to the same processes, (ii) effect of patient
arrival and service time variability on patient waiting time and throughput, (iii) comparative analysis
of the efficiency of dedicated (specialized) and combined resources, (iv) a DES model that demonstrates
the interdependency of subsystems and its effect on the entire system throughput, and (v) the issues and
perspectives of practical implementation of DES results in health care setting.

DOI: 10.4018/978-1-60566-774-4.ch020

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

INTRODUCTION: SYSTEM-ENGINEERING METHODS FOR HEALTHCARE DELIVERY

Modern healthcare has achieved great progress in developing and manufacturing medical devices, instruments, equipment and drugs to serve individual patients. This was achieved mainly by focusing public and private resources on research in the life sciences, as well as on the design and development of medical clinical and imaging devices.

At the same time, relatively little technical talent and material resources have been devoted to improving the operations, quality and productivity of the overall health care system as an integrated delivery system. According to the joint report of the National Academy of Engineering and Institute of Medicine (Reid et al, 2005), the cost of this collective inattention, and of the failure to take advantage of the methods that have provided quality and productivity breakthroughs in many other sectors of the economy, is enormous.

In this report, system-engineering methods have been identified that have transformed the quality, safety and productivity performance of many other large-scale complex industries, e.g. telecommunications, transportation, manufacturing (Reid et al, 2005). These system-engineering methods could also be used to improve the efficiency of health care delivery as a patient-centered integrated system.

Ryan (2005) summarized system-engineering principles for healthcare. A system is defined as a set of interacting, interrelated elements (subsystems), objects and/or people, that form a complex whole that behaves in ways that these elements acting alone would not. Models of a system enable one to study the impact of alternative ways of running the system, i.e. alternative designs, different configurations and management approaches. This means that system models enable one to experiment with systems in ways that cannot be used with real systems.

The models usually include a graphic representation of the system, which is a diagram showing the flow of items and resources. A mathematical description of the model includes objective functions, interrelationships and constraints. The components of the mathematical model can be grouped into four categories: (i) decision variables that represent possible options; (ii) variables, parameters and constants, which are inputs into the model; (iii) the objective functions, which are the output of the model; and (iv) constraints and logic rules that govern operation of the system.

Large systems are usually deconstructed into smaller subsystems using natural breaks in the system. The subsystems are modeled and analyzed separately, but they should be reconnected back in a way that recaptures the most important interdependencies between them. Lefcowitz (2007) observed, for example, that '…maximization of the output of the various subsystems should not be confused with maximizing the final output of the overall system'. Similarly, Goldratt (2004, p. 211) states that 'a system of local optimums is not an optimum system at all; it is a very inefficient system'.

Analysis of a complex system is usually incomplete and can be misleading without taking into account subsystems' interdependency (see section 3.3). Analysis of a mathematical model using analytic or computer algorithmic techniques reveals important hidden and critical relationships in the system, which can be leveraged to find out how to influence the system's behavior in a desired direction.

The elements included in the system model and the required information depend on the problem to be solved. For the output of the model to be useful, the model must mimic the behavior of the real system.

According to the already mentioned report by the National Academy of Engineering and Institute of Medicine (Reid et al, 2005), the most powerful system-analysis methods are Queuing Theory and Discrete Event Simulation (DES).

Both methods are based on principles of operations research. Operations research is the discipline of applying mathematical models of complex systems with random variability, aimed at developing justified operational business decisions. It is widely used to quantitatively analyze characteristics of processes with random demand for services, random service time and available capacity to provide those services. Operations research methodology is the foundation of modern management science.

Health care management science is applied to the various aspects of patient flow, capacity planning, and allocation of material assets and human resources to improve the efficiency of healthcare delivery.

For the last 40 years, hundreds of articles have been published that demonstrate the power and benefits of using management science in
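As a generic preview of the event-driven approach this chapter develops (the sketch below is ours, not one of the author's healthcare models), a single-server queue with Poisson arrivals and exponential service can be simulated in a few lines and checked against the closed-form M/M/1 result Wq = λ/(μ(μ - λ)):

```python
import random

def mm1_mean_wait(lam, mu, n_customers, seed=1):
    """Discrete-event simulation of an M/M/1 queue via the Lindley
    recursion: each customer starts service at max(arrival time, time the
    server frees up).  Returns the mean time spent waiting in the queue."""
    rng = random.Random(seed)
    t_arrival = 0.0
    server_free_at = 0.0
    total_wait = 0.0
    for _ in range(n_customers):
        t_arrival += rng.expovariate(lam)        # Poisson arrivals
        start = max(t_arrival, server_free_at)   # wait if the server is busy
        total_wait += start - t_arrival
        server_free_at = start + rng.expovariate(mu)
    return total_wait / n_customers
```

For λ = 0.5 and μ = 1 the analytic mean wait is 1.0; a long simulation run comes close to it, and the residual gap illustrates the sampling error that closed-form QA formulas avoid.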
Queuing Theory and Discrete Events Simulation for Health Care

healthcare. Several reviews have appeared that specifically examine DES applications in healthcare, such as Jun et al (1999), Carter (2002) and the recent review by Jacobson et al. (2006) that provides new updates reported since 1999.

In contrast to these reviews, which mainly listed DES publications without much discussion of how and why the models actually work and deliver, the objective of this chapter is to present a detailed description of the 'inner workings' of DES models for some healthcare processes, starting from basic applications and proceeding to rather advanced models.

It is also a goal to demonstrate the fundamental advantage of DES methodology over queuing analytic (QA) models. This is demonstrated by comparative analysis of both DES and QA applied to the same problems.

The focus of this chapter is DES of the various aspects of random and non-random patient flow variability and its effect on process performance metrics. Using concrete examples and scenarios, it is demonstrated how simple DES models help to gain understanding of the basic principles that govern patient flow with random and non-random variability. It is further demonstrated how more advanced DES models have been used to study how system behavior (output) changes with changes in the input data and/or process parameters.

QUEUING ANALYTIC (QA) MODELS AND DISCRETE EVENT SIMULATION (DES)

Queuing Analytic Models: Their Use and Limitations

The term 'queuing theory' is usually used to denote a set of analytic techniques, in the form of closed mathematical formulas, that describe properties of processes with random demand and supply (waiting lines, or queues). Queuing formulas are usually applied to a number of pre-determined simplified models of the real processes for which analytic formulas can be developed.

Weber (2006) writes that '…There are probably 40 (queuing) models based on different queue management goals and service conditions…' and that it is easy '…to apply the wrong model' if one does not have a strong background in operations research.

Development of tractable analytic formulas is possible only if the flow of events in the system is a steady-state Poisson process. By definition, this is an ordinary stochastic process of independent events with a constant parameter equal to the average arrival rate of the corresponding flow. Time intervals between events in a Poisson flow are always exponentially distributed, with the average inter-arrival time equal to the inverse of the Poisson arrival rate. Service time is assumed to follow an exponential distribution or, rather rarely, a uniform or Erlang distribution. Thus, processes with Poisson arrivals of events and exponential service time are Markov stochastic processes with discrete states and continuous time.

The most widely used queuing models, for which relatively simple closed analytical formulas have been developed, are specified as the M/M/s type (Hall, 1990; Lawrence and Pasternak, 1998; Winston and Albright, 2000). (M stands for Markov, since a Poisson process is a particular case of a stochastic process with no 'after-effect', or no memory, known as a continuous-time Markov process.) These models assume an unlimited queue size that is served by s providers.

Typically, M/M/s queuing models allow calculating the following steady-state characteristics:

• probability that there are zero customers in the system
• probability that there are K customers in the system
• the average number of customers waiting in the queue
• the average time the customers wait in the queue
• the average total time the customer spends in the system ('cycle time')
• utilization rate of servers, i.e. the percentage of time the server is busy

As more complexity is added to the system, the analytic formulas become less and less tractable. Analytic formulas are available that include, for example, limited queue size, customers leaving the system after waiting a specified amount of time, multiple queues with different average service times and different provider types, different service priorities, etc. (Lawrence and Pasternak, 1998; Hall, 1990).

However, the use of these cumbersome formulas, even built into Excel spreadsheet functions (Ingolfsson et al, 2003) or tables (Hillier and Yu, 1981; Seelen et al, 1985), is rather limited because they cannot capture the complexity of most healthcare systems of practical interest.

The assumptions that allow deriving most queuing formulas are not always valid for many healthcare processes. For example, several patients sometimes arrive in the Emergency Department at the same time (several people injured in the same auto accident), and/or the probability of new patient arrivals could depend on the previous arrivals when the ED is close to its capacity, or the average arrival rate varies during the day, etc. These possibilities alone make the arrival process non-ordinary and non-stationary with after-effect, i.e. a non-Poisson process for which queuing formulas are not valid. Therefore, it is important to properly apply statistical goodness-of-fit tests to verify that the null-hypothesis that actual arrival data follow a Poisson distribution cannot be rejected at some level of significance.

An example of a conclusion from a goodness-of-fit statistical test that is not convincing enough can be found, for instance, in Harrison et al (2005). The authors tried to justify the use of a Poisson process by using a chi-square goodness-of-fit test. They obtained test p-values in the range from 0.136 to 0.802 for different days of the week. Because the p-values were greater than the 0.05 level of significance, they failed to reject the null-hypothesis of a Poisson distribution (accepted the null-hypothesis).

On the other hand, the fundamental property of a Poisson distribution is that its mean value is equal to its variance (squared standard deviation). However, the authors' own data indicated that the mean value was not even close to the variance for at least four days of the week. Thus, the use of a Poisson distribution was not actually convincingly justified for the patient arrivals. Apparently, the chi-square test p-values were not large enough to accept the null-hypothesis with high enough confidence (alternatively, the power of the statistical test was likely too low).

Despite its rather limited applicability to many actual patient arrival patterns, a Poisson process is widely used in operations research as a standard theoretical assumption because of its mathematical convenience (Gallivan, 2002; Green, 2006; Green et al, 1991; McManus et al, 2003).

The use of QA theory is often recommended to solve many pressing hospital problems of patient flow and variability, such as calculating needed nursing resources and the number of beds and operating rooms (IHI, 2008; Litvak, 2007; McManus et al, 2004; Haraden et al, 2003). However, such a recommendation ignores some serious practical limitations of QA theory for hospital applications. D'Alesandro (2008) summarized why QA theory is often misplaced in hospitals.

Some authors try to make queuing formulas applicable to real processes by fitting and calibration. For example, in order to use queuing formulas for a rather complex ED system, Mayhew and Smith (2008) made a significant process simplification by presenting the workflow as a series of stages. The stages could include initial triage, diagnostic tests, treatment, and discharge. Some patients experienced only one stage while others went through more than one. However, '…what constitutes a
'stage' is not always clear and can vary…and where one begins and ends may be blurred' (Mayhew and Smith, 2008). The authors assumed Poisson arrivals and exponential service time but then used the actual service time distribution for 'calibration' purposes. Moreover, the authors observed that exponential service time for the various stages '…could not be adequately represented by the assumption that the service time distribution parameter was the same for each stage'. In the end, all the required calibrations, adjustments and fitting to the actual data made the model lose its main advantage as a queuing model: its analytical simplicity and transparency. On the other hand, all the queuing formulas' assumptions and approximations still remained.

Therefore many complex healthcare systems with interactions and interdependencies of the subsystems cannot be effectively analyzed using analytically derived closed formulas.

Moreover, queuing formulas cannot be directly applied if the arrival flow contains a non-random component, such as scheduled arrivals (see sections 2.2.5, 2.3 and 3.2). Therefore, in order to use analytic queuing formulas, the non-random arrival component should first be eliminated, leaving only the random arrival flow for which QA formulas could be used (Litvak, 2007).

Green (2004) applied an M/M/s model to predict delays in a cardiac and thoracic surgery unit with mostly elective scheduled surgical patients, assuming a Poisson pattern of their arrivals. The author acknowledged that this assumption could result in an overestimate of delays. In order to justify the use of the M/M/s model the author argued that some '…other factors are likely to more than compensate for this'. However, it was not clear what those factors are and how much they could compensate for the overestimated delays.

Still, despite their limitations, QA models have some place in operations research for application to simply structured steady-state processes if the Poisson arrival and exponential service time assumptions are accurate enough.

A number of specific examples that illustrate the use of simple QA models and their limitations are presented in the next sections.

Flu Clinic: Unlimited Queue Size with Steady State Operation

A small busy clinic provides flu shots during a flu season on a walk-in basis (no appointment necessary). The clinic has two nurses (servers). The average patient arrival rate is 54 patients per hour, with about the same number of elderly patients and all others. Each shot takes on average about 2 min.

Usually there are quite a few people in the queue waiting for a shot. In order to reduce the waiting time and the number of people in the queue, the staff conducted a brainstorming session. It was proposed to have one nurse perform flu shots only for the most vulnerable elderly patients, and another nurse perform shots for all others. The staff began developing a pilot project plan to test this new operation mode on the clinic floor.

However, the clinic's manager decided first to test this idea using principles of management science and operations research. The manager assumed that analytical queuing formulas would be applicable in this case.

For the current operation mode, the following M/M/s analytical model with unlimited queue size can be used.

Random patient arrivals are assumed to be a Poisson process with the total average arrival rate λ=54 pts/hr, average flu shot time τ=2 min, and number of servers N=2.

The final (steady-state) probability that there are no patients in the system, p0, is calculated using the formula (Hall, 1990; Green, 2006)

p0 = [ Σ(n=0 to N−1) a^n/n! + a^N/(N!·(1−ρ)) ]^−1    (2.1)

where

a = λ·τ,  ρ = a/N

The average number of patients in the queue, Lq, is

Lq = a^(N+1)·p0 / [N·N!·(1−ρ)^2]    (2.2)

and the average time in the queue, t, is (Little's formula)

t = Lq/λ    (2.3)

Substituting N=2, λ=54 pts/hr, and τ=2 min=0.033 hrs into formulas (2.1) to (2.3), we get an average Lq = 7.66 patients and an average waiting time, t, of about 8.5 min. The clinic's average utilization is 90%.

In practice, an Excel spreadsheet is usually used to perform calculations using queuing formulas.

The new proposed system that is supposed to perform better will consist of two dedicated patient groups that form two separate queues, one for elderly patients and another for all others (two separate subsystems with N=1). The arrival rate for each patient group is going to be λ = 54/2 = 27 pts/hr. Parameter: ρ = a = λ·τ = 27 pts/hr · 0.0333 hr = 0.9.

Using formulas (2.1) to (2.3), we get the average number of patients in each queue Lq = 8.1. Thus, the total queue for all patients will be 2·8.1 = 16.2 patients.

The average waiting time in the queue will be about 8.1/27 = 0.3 hours = 18 min. Thus, in the proposed 'improved' process the average waiting time and the number of patients in the queue will be about twice those in the original process.

It should be concluded that the proposed improvement change, which might look reasonable on the surface, does not stand the scrutiny of quantitative analysis.

The idea of separating one random patient flow with two servers into two separate random flows, each with one dedicated server, does not result in improvement because two separate servers cannot help each other if one of them becomes overworked for some time due to a surge in patient volume caused by random variability of the patient flow.

This is an illustration of the general principle that a combined random work flow with unlimited queue size and no leaving patients is more efficient than separate work flows with the same total work load (see also section 2.3).

An example of using discrete events simulation (DES) to analyze the same process without resorting to analytical formulas will be given in section 2.

Flu Clinic: Unlimited Queue Size with Non-Steady-State Operation

The manager of the same clinic decided to verify that the clinic would operate smoothly enough with a new team of two less experienced nurses who would work only a little slower than the previous one. The average time to make a shot will be about 2.5 min (instead of the average 2 min for the more experienced staff, as in section 2.1.1). The manager reasoned that because this difference is relatively small it would not practically affect the clinic operation: the number of patients in the queue and their waiting time on a typical working day could be only a little higher than in the previous case, or practically not different at all.

The manager plugged the average service time 2.5 min and the same arrival rate 54 pts/hr into the M/M/s queuing calculator. However the calculator returned no number at all, which means that no solution could be calculated. Why is that?

An examination of Equation (2.1) shows that if ρ becomes greater than 1 the last term in the sum becomes negative, and the calculated p0 also becomes negative, which does not make sense.
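Formulas (2.1) to (2.3) are straightforward to script. The sketch below is our own illustration (the function name and structure are not from the chapter); it computes both the current pooled M/M/2 operation and the proposed pair of dedicated M/M/1 queues, and guards against the ρ ≥ 1 case just described:

```python
from math import factorial

def mms_queue(lam, tau, N):
    """Steady-state M/M/s metrics per formulas (2.1)-(2.3).
    lam: arrival rate (per hour), tau: mean service time (hours), N: servers."""
    a = lam * tau          # offered load, a = lambda * tau
    rho = a / N            # server utilization
    if rho >= 1:
        raise ValueError("rho >= 1: no steady state, the queue grows indefinitely")
    # (2.1) probability of an empty system
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(N))
                + a**N / (factorial(N) * (1 - rho)))
    # (2.2) average number of patients waiting in the queue
    Lq = a**(N + 1) * p0 / (N * factorial(N) * (1 - rho)**2)
    # (2.3) Little's formula: average wait in the queue, hours
    t = Lq / lam
    return Lq, t, rho

# Current mode: one pooled queue served by two nurses (M/M/2)
Lq, t, rho = mms_queue(54, 2 / 60, 2)
print(f"Pooled M/M/2: Lq={Lq:.2f}, wait={t*60:.1f} min, utilization={rho:.0%}")

# Proposed mode: two dedicated M/M/1 queues, 27 pts/hr each
Lq1, t1, _ = mms_queue(27, 2 / 60, 1)
print(f"Each M/M/1: Lq={Lq1:.2f}, wait={t1*60:.1f} min; total queue={2*Lq1:.1f}")
```

Run as-is, this gives Lq ≈ 7.67 patients and a wait of about 8.5 min for the pooled clinic (the chapter's 7.66 reflects rounding τ to 0.033 hr), versus Lq = 8.1 per queue (16.2 total) and an 18 min wait for the split scheme. Calling `mms_queue(54, 2.5/60, 2)` raises the ρ ≥ 1 error, mirroring the queuing calculator that 'returned no number at all'.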

(If ρ is equal to 1, the term becomes indeterminate and the calculations cannot be carried out at all.) The average service time in this example is only slightly higher than it was in the previous case. However, this small difference made the parameter ρ greater than 1 (ρ=1.125 > 1). This explains why the calculations cannot be done using this value.

Queuing analytical formulas with unlimited queue size are applicable only to steady-state processes, i.e. to established processes whose characteristics do not depend on time. The steady-state condition is possible only if ρ < 1; otherwise the above formulas are not applicable and the queue grows indefinitely.

In section 2.2.2 it will be demonstrated how DES methodology easily handles this situation and demonstrates the growth of the queue.

Flu Clinic: Time-Varying Arrival Rates

In the previous section 2.1.2 the parameters of the queuing system (average arrival rate 54 patients per hour and average shot time 2.5 min) made the patient flow a non-steady-state one for which a QA model with unlimited queue size could not be used.

However, the clinic's manager realized that the average patient arrival rate varies significantly during the day, and that 54 patients per hour was actually a peak arrival rate, from noon to 3 pm. In the morning hours from 8 am to 10 am the arrival rate was lower, 30 patients/hour. From 10 am to noon it was 40 patients/hour, and in the afternoon from 3 pm to 6 pm it was about 45 patients/hour.

Thus, the manager calculated the average arrival rate over these time periods of the day. It turned out to be (30+40+54+45)/4 = 42.25 patients/hour. He/she plugged this number into the queuing calculator (along with the average time to make a shot, 2.5 min), and obtained the average number of patients in the queue Lq = 6.1 patients and an average waiting time of about 8.6 min. Because the calculator produced some numbers, the manager concluded that this clinic process will be in a steady-state condition and that the waiting time and the number of patients in the queue are acceptable.

But is it a correct conclusion? Recall that QA models assume that a Poisson arrival rate is constant during a steady-state time period (Hall, 1990; Lawrence et al, 2002; Green, 2006). If it is not constant, such as in this case, QA results could be very misleading. The wait time will be significantly greater in the mid-day period (and/or the steady-state condition will be violated). At the beginning and at the end of the day, though, the wait time will be much smaller. Because the arrival rate enters non-linearly in the exponential term of the Poisson distribution formula, the arrival rate cannot be averaged first and then substituted into the exponential term. (For non-linear functions, the average value of a function is not equal to the function of the average values of its arguments.)

As Green (2006) stated, '…this illustrates a situation in which a steady-state queuing model is inappropriate for estimating the magnitude and timing of delays, and for which a simulation model will be far more accurate'.

It is tempting, as a last resort, to save the use of the QA model by dividing the day into time periods in which the arrival rate is approximately constant. Then a series of M/M/s models is constructed, one for each period. This approach is called SIPP (stationary independent period-by-period) (Green, 2006; Green et al, 1991).

If we apply this approach, the following results can be obtained:

• Time period 8 am to 10 am: Lq = 0.8 patients in the queue, waiting time 1.6 min
• Time period 10 am to noon: Lq = 3.8 patients in the queue, waiting time 5.7 min
• Time period noon to 3 pm: no steady-state solution
• Time period 3 pm to 6 pm: Lq = 13.6 patients in the queue, waiting time 18.1 min

Notice how these results differ from those based on averaging the arrival rate over the entire day.

However, this SIPP patch applied to QA models was found to be unreliable (Green, 2006; Green et al, 2001). This is because in many systems with time-varying arrival rates, the time of peak congestion significantly lags the time of the peak in the arrival rate (Green et al, 1991). These authors developed a modification called Lag-SIPP that incorporates an estimation of this lag. This approach has been shown to often be more effective than simple SIPP (Green, 2006).

Even so, this does not make the application of QA models easier if there are many time periods with different constant arrival rates, because many different M/M/s models need to be constructed accordingly to describe one process.

It will be demonstrated in section 2.2.4 how a DES model easily and elegantly handles this situation with a time-varying arrival rate.

ICU Waiting Time

This problem is presented in (Litvak, 2007; Weber, 2006) to demonstrate how QA can be used to compare the patient average waiting time to get into an ICU with 5 beds and with 10 beds, when the patient average arrival rate is 1 per day and 2 per day, accordingly. The average length of stay in the ICU is 2.5 days. It is assumed that the length of stay in the ICU is exponentially distributed and, of course, that patient arrival is a Poisson process.

In order to apply QA formulas an additional assumption should be used: that the length of stay follows an exponential distribution with the above average value.

Using the M/M/s model it is easy to calculate that the average waiting time for the 10-bed ICU is 0.43 hours, and that for the 5-bed ICU is 3.1 hours. Average ICU utilization is 50%.

Thus, the waiting time for the larger unit is about 7 times shorter. Notice, however, that this result is valid only for exponential service and inter-arrival times. If other distributions with the same averages are used we should get a different result.

For example, the length of stay could be in the range from 2 to 3 days with the average 2.5 days, and be described by a triangle distribution. Or the length of stay could follow a long-tailed lognormal distribution, also with the same average of 2.5 days and a standard deviation of, say, 2 days (these values would correspond to log-normal parameters 3.85 and 0.703).

Thus, QA does not distinguish between different distributions with the same averages. This is a serious limitation of QA.

It will be demonstrated in section 2.2.7 how easy it is to use DES for different length of stay distributions with the same average.

DES Models: Basic Applications

In contrast to queuing formulas, DES models are much more flexible and versatile. They are free from assumptions about the particular type of the arrival process (Poisson or not), as well as the service time (exponential or not). They can be used for combined random and non-random arrival flows. The system structure (flow map) can be complex enough to reflect a real system structure, and custom action logic can be built in to capture the real system behavior.

At the same time it should be noted that building a complex realistic simulation model sometimes requires a significant amount of time for custom logic development, debugging, model validation, and input data collection.

However, a good model is well worth the effort because it becomes a powerful and practically the only real tool for quantitative analysis of complex hospital operations and decision-making.

Many currently available simulation software packages (ProcessModel, ProModel, Arena, Simul8, and many others) provide a user-friendly interface that makes the effort of building a realistic simulation model not more demanding than

the effort of making simplifications, adjustments and calibrations to develop a rather complex but inaccurate queuing model.

Swain (2007), Abu-Taieh et al (2007), Hlupic (2000) and Nikoukaran (1999) provided reviews and comparative studies of dozens of commercially available simulation packages.

A DES model is a computer model that mimics the dynamic behavior of a real process as it evolves with time, in order to visualize and quantitatively analyze its performance. The validated and verified model is then used to study the behavior of the original process and to identify ways for its improvement (scenarios) based on some improvement criteria. This strategy is significantly different from the hypothesis-based clinical testing widely used in medical research (Kopach-Konrad et al, 2007).

DES models track entities moving through the system at distinct points of time (events). A detailed record is kept of all processing times and waiting times. Then the system's statistics for entities and activities are gathered.

To illustrate how a DES model works step by step, let's consider a very simple system that consists of a single patient arrival line and a single server. Suppose that the patient inter-arrival time is uniformly (equally likely) distributed between 1 min and 3 min. Service time is exponentially distributed with the average 2.5 min. (Of course, any statistical distributions or non-random patterns can be used instead.) A few random numbers sampled from these two distributions are shown in Table 1.

Table 1.

Inter-arrival time, min    Service time, min
2.6                        1.4
2.2                        8.8
1.4                        9.1
2.4                        1.8

Let's start our example simulation at time zero, t=0, with no patients in the system. We will be tracking any change or event that happens in the system. A summary of what is happening in the system is given in Table 2.

Table 2.

Event #    Time    Event that happened in the system
1          2.6     1st patient arrives. Service starts; it should end at time 4
2          4       Service ends. Server waits for a patient
3          4.8     2nd patient arrives. Service starts; it should end at time 13.6. Server was idle 0.8 min
4          6.2     3rd patient arrives. Joins the queue waiting for service
5          8.6     4th patient arrives. Joins the queue waiting for service
6          13.6    2nd patient (from event 3) service ends. 3rd patient at the head of the queue (first in-first out) starts service; it should end at time 22.7
7          22.7    Patient #4 starts service, and so on.

These simple but tedious logical and numerical event-tracking operations (the algorithm) are suitable, of course, only for a computer. However, they illustrate the basic principles of any discrete events simulation model, in which discrete events (changes) in the system are tracked as they

occur over time. In this particular example, we were tracking events at the discrete points in time t=2.6, 4.0, 4.8, 6.2, 8.6, 13.6, 22.7.

Once the simulation is completed for any length of time, another set of random numbers from the same distributions is generated, and the procedure (called a replication) is repeated. Usually multiple replications are needed to properly capture the system's variability. In the end, the system's output statistics are calculated, e.g. the average patient and server waiting time, its standard deviation, the average number of patients in the queue, the confidence intervals and so on.

In this example, only two patients out of four waited in the queue. Patient 3 waited 13.6−6.2=7.4 min and patient 4 waited 22.7−8.6=14.1 min, so the simple average waiting time for all four patients is (0+0+7.4+14.1)/4=5.4 min. Notice, however, that the first two patients did not wait at all while patient 4 waited 2.6 times longer than the average. This illustrates that the simple average could be rather misleading as a performance metric for highly variable processes without some additional information about the spread of the data around the average (a so-called flaw of averages; see also the concluding remarks for section 3.1).

Similarly, the simple arithmetic average of the number of waiting patients (average queue length) is 0.5. However, a more informative metric of the queue length is the time-weighted average, which takes into account the length of time each patient was in the queue. In this case it is (1·7.4+1·14.1)/22.7=0.95. Usually the time-weighted average is a better performance metric of the system than the simple average.

DES models are capable of tracking hundreds of individual entities, each with its own unique set of attributes, enabling one to simulate the most complex systems with interacting events and component interdependencies.

Typical DES applications include: staff and production scheduling, capacity planning, cycle time and cost reduction, throughput capability, resource and activity utilization, and bottleneck finding and analysis. DES is the most effective tool to perform quantitative 'what-if' analysis and play out different scenarios of the process behavior as its parameters change with time. This simulation capability allows one to make experiments on the computer, and to test different options before going to the hospital floor for actual implementation.

The basic elements (building blocks) of a simulation model are:

• Flow chart of the process, i.e. a diagram that depicts the logical flow of a process from its inception to its completion
• Entities, i.e. items to be processed, e.g. patients, documents, customers, etc.
• Activities, i.e. tasks performed on entities, e.g. medical procedures, exams, document approval, customer check-in, etc.
• Resources, i.e. agents used to perform activities and move entities, e.g. service personnel, equipment, nurses, physicians
• Entity routings that define the directions and logical conditions of flow for entities

Typical information usually required to populate the model includes:

• Quantity of entities and their arrival times, e.g. periodic, random, scheduled, daily pattern, etc. There is no restriction on the arrival distribution type, such as the Poisson distribution required by the QA formulas
• The time that the entities spend in the activities, i.e. service time. This is usually not a fixed time but a statistical distribution. There is no restriction to the exponential service time distribution that is typically required by the QA formulas
• Capacity of each activity, i.e. the max number of entities that can be processed concurrently in the activity
• The maximum size of input and output queues for the activities

• Resource assignments: their quantity and scheduled shifts

Analysis of patient flow is an example of the general dynamic supply and demand problem. There are three basic components that should be accounted for in such problems: (i) the number of patients (or any items) entering the system at any point of time, (ii) the number of patients (or any items) leaving the system after spending some time in it, and (iii) the capacity of the system, which defines the number of items that can be processed concurrently. All three components affect the flow of patients (items) that the system can handle (the system's throughput). A lack of proper balance between these components results in the system's over-flow and gridlock. DES methodology provides invaluable means for analyzing and managing the proper balance.

It will be demonstrated in the following sections that even simple DES models have a significant advantage over QA models. To illustrate this advantage side by side, DES methodology will be applied to the same processes that have been analyzed using QA in the previous sections 2.1.1 to 2.1.4.

Comparative Analysis of QA and DES Methodologies

Unlimited Queue Size with Steady-State Operation

Let's consider the flu clinic that was analyzed using QA in section 2.1.1.

The DES model structure is presented in Figure 1. It simply depicts the arriving patient flow connected to a Queue, then coming to the flu clinic (a box called Flu_Clinic), and then exiting the system. These basic model elements are simply dragged down from the palette and then connected to each other.

Figure 1. Layout of the simulation model of the flu clinic. Information on the panel indicates the patient arrival type (Periodic) that repeats on average every E(1.111) min (E stands for exponential distribution)

The next step is to fill in the process information: patients arrive periodically, one patient at a random exponentially distributed time interval with the

average inter-arrival time 60 min/54 = 1.111 min, E(1.111), as indicated in the arrival data panel in Figure 1; this corresponds to a Poisson arrival rate of 54 patients per hour (E stands for exponential distribution). In the Flu_Clinic data panel the capacity input was 2 (two patients served concurrently by two nurses), and the service time was exponentially random with the average value 2 min, E(2). This completes the model set-up.

The model was run for 300 replications, at which point a statistically stable simulation output was reached. Multiple replications capture the variability of patient arrivals and service times. Results are presented in Figure 2.

Figure 2. Unlimited queue size with steady-state operation. Average number of patients in the queue (top) and average waiting time in the queue (bottom)

It is seen that the number of patients in the queue steadily increases until a steady-state operation (plateau) is reached. The average steady-state number of patients in the queue is 7.7, with some small fluctuations around this average (top plot).

The average waiting time is presented in Figure 2 (bottom plot). Similarly, the average steady-state waiting time of 8.45 min was reached with some variations around the average. The average steady-state utilization is 89.8%.

Thus, we received practically the same results with the DES model as with the QA model, using about the same effort. As an additional bonus with DES, though, we could watch how fast a steady-state operation was reached (a so-called warm-up period).
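For readers who want to see the event-tracking algorithm of Table 2 in code, the sketch below (a minimal hand-rolled simulator of our own, not one of the commercial packages named earlier) runs an event-driven simulation of the two-nurse clinic with E(1.111) min inter-arrival and E(2) min service times; over a long run it should land close to the steady-state averages quoted above (about 8.5 min wait and 7.7 patients in queue):

```python
import heapq
import random
import statistics

def simulate_clinic(lam_per_hr, mean_service_min, servers, n_patients, seed):
    """Event-driven FIFO queue with several servers (the Table 2 algorithm,
    generalized): each patient starts service at arrival or when the earliest
    server frees up, whichever is later."""
    rng = random.Random(seed)
    mean_gap = 60.0 / lam_per_hr          # mean inter-arrival time, min
    free_at = [0.0] * servers             # min-heap of server-free times
    t, waits = 0.0, []
    for _ in range(n_patients):
        t += rng.expovariate(1.0 / mean_gap)        # Poisson arrivals
        start = max(t, heapq.heappop(free_at))      # FIFO: earliest free server
        waits.append(start - t)
        heapq.heappush(free_at, start + rng.expovariate(1.0 / mean_service_min))
    return statistics.mean(waits)

# Average over several replications; starting empty, but the warm-up bias
# is negligible over such a long run.
reps = [simulate_clinic(54, 2.0, 2, 200_000, seed) for seed in range(5)]
wq = statistics.mean(reps)                # mean wait in queue, min
lq = (54 / 60.0) * wq                     # Little's formula: Lq = lambda * Wq
print(f"DES estimate: wait ~ {wq:.2f} min, Lq ~ {lq:.2f}")  # theory: 8.53 min, 7.67
```

The estimate fluctuates from run to run, which is exactly why the chapter averages over many replications before reporting steady-state figures.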

Unlimited Queue Size with Non-Steady-State Operation

In section 2.1.2 the QA model with the average service time 2.5 min could not produce results because a steady-state operation did not exist.

Using the same DES model described in the previous section, we simply plug this average service time into the Flu_Clinic data panel, making it E(2.5) min, and run the simulation model. Results are given in Figure 3.

Figure 3. Unlimited queue size with non-steady-state operation. Average number of patients in the queue (top) and average waiting time in the queue (bottom)

These plots demonstrate how the patient queue (top plot) and the waiting time (bottom plot) grow with clinic operation time. The plots show no apparent trend toward a steady-state regime (plateau). The growth goes on indefinitely with time.

This example also illustrates an important principle of 'unintended consequences'. Intuition that is not supported by objective quantitative analysis says that a small change in the system input (service time from the average 2 min to 2.5 min) would result in a small change in the output (a small increase in the number of waiting patients and their waiting time). For some systems this is indeed true. Systems in which the output is always directly proportional to the input are called linear systems. However, there are quite a few systems in which this simple reasoning breaks down: a small change in the value of the system's input parameter(s) results in a dramatic change (even a qualitative change) in the system's outcome (behavior), e.g. from a steady-state regime to a non-steady-state regime. Such systems are called non-linear or complex systems, despite the fact that they can consist of only a few elements.

Limited Queue Size with 'Impatient' Patients Leaving the System

An unlimited queue size is not always a good model of real systems. In many cases patients wait in a waiting lounge that usually has a limited number of chairs (space). QA models designated M/M/s/K are available that include a limited queue size, K


(Green, 2006; Lawrence and Pasternak, 1998; Hall, 1990). However, the analytic formulas become very cumbersome. If the QA model includes some patients that leave the system after waiting some time in the queue, the analytic formulas become almost intractable.
In contrast, DES models easily handle the limited queue size and patients leaving before the service began ('impatient' patients).
To illustrate, we use the same DES model as in 2.2.1 with only a slight modification to include a new limited queue size and 'impatient' patients. Suppose that the queue size limit is 10 (the max number of chairs or beds) and patients leave after waiting 10 min (of course, these could be any numbers, including statistical distributions). We put 10 in the field 'Input queue size', and draw a routing 'renege after 10 min'. The new model is now ready to go. Simulation results are presented on Figure 4: the average number of patients in the queue (top plot) and the average waiting time in the queue (bottom plot).
The difference between an unlimited queue size (section 2.2.2) and a limited one with leaving patients is significant. The plots suggest that a limited queue size results in a steady-state solution (plateau). (It could be proved that a steady-state solution always exists if the queue size is limited.) The steady-state average number of patients in the queue is about 4.5 (top plot) and the average waiting time is about 5.6 min (bottom plot).
However, the model's statistics summary also shows that 10% to 11% of patients are lost because they did not stay in the queue more than 10 min. Thus, this simple DES model gives a lot of valuable information and serves as a powerful tool to find out how to better manage the flu-clinic.

Arrivals with Time-Varying Poisson Arrival Rate

In section 2.1.3 it was discussed why the use of a QA model with a Poisson time-varying patient arrival rate is rather unreliable. Let's see how easy it is to use a DES model to address the same problem.
The DES model structure (layout) for a time-varying arrival rate is the same as was used in section 2.2.1. The only difference is a different arrival routing type: instead of periodic arrival with the random inter-arrival time, an input daily-pattern arrival panel should be used. We use one day of the week, and input 60 patients from 8 am to 10 am (30 pts/hour * 2); 80 patients from 10 to noon (40 pts/hour * 2); 162 patients from noon to 3 pm (54 pts/hour * 3); and 135 patients from 3 pm to 6 pm (45 pts/hour * 3). The model of the entire day (from 8 am to 6 pm) is ready to go.
The following simulation results are obtained (compare with the approximated QA SIPP model results from section 2.1.3):

Time period 8 am to 10 am: Lq = 0.6 patients in the queue, waiting time 0.84 min
Time period 10 am to noon: Lq = 2.26 patients in the queue, waiting time 2.75 min
Time period noon to 3 pm: Lq = 14.5 patients in the queue, waiting time 15.3 min
Time period 3 pm to 6 pm: Lq = 20 patients in the queue, waiting time 25.5 min

It is seen that the QA SIPP model over-estimates the queue at the beginning of the day and under-estimates the queue at the end of the day. Of course, only a DES model can provide results for the time period noon to 3 pm.

Mixed Patient Arrivals: Random and Scheduled

We frequently deal with a mixed patient arrival pattern, i.e. some patients are scheduled to arrive at a specific time while some patients arrive unexpectedly. For example, some clinics accept


Figure 4. Limited queue size with 'impatient' patients leaving the system
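The renege logic behind the Figure 4 model can be sketched in plain code. This is not the chapter's simulation package, just an approximation in which a patient who would wait longer than the limit leaves at once, and the waiting area (queue_cap chairs) is checked on arrival; all parameter names are our own.

```python
import heapq
import random
from collections import deque

def flu_clinic(mean_iat, mean_svc, servers, queue_cap, max_wait, n, seed=1):
    """FIFO queue with a finite waiting area (balking) and impatient
    patients (reneging when the projected wait exceeds max_wait).
    Returns (served, balked, reneged) counts."""
    rng = random.Random(seed)
    free_at = [0.0] * servers
    heapq.heapify(free_at)
    pending = deque()                  # service-start times of waiting patients
    served = balked = reneged = 0
    t = 0.0
    for _ in range(n):
        t += rng.expovariate(1.0 / mean_iat)
        while pending and pending[0] <= t:   # these have started service by now
            pending.popleft()
        if len(pending) >= queue_cap:        # no free chair: patient is lost
            balked += 1
            continue
        soonest = heapq.heappop(free_at)
        start = max(t, soonest)
        if start - t > max_wait:             # would wait too long: reneges
            reneged += 1
            heapq.heappush(free_at, soonest)
            continue
        heapq.heappush(free_at, start + rng.expovariate(1.0 / mean_svc))
        if start > t:
            pending.append(start)
        served += 1
    return served, balked, reneged
```

With a 10-chair queue and a 10-min limit, the fraction of balked plus reneged patients plays the role of the 10% to 11% lost patients reported in the text.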

patients who made an appointment, but also accept urgent random walk-in patients. Operating room suites schedule elective surgeries while suddenly a trauma patient arrives and an emergency surgery is required. Such a mixed arrival pattern with a different degree of variability requires special treatment.
QA models should not be used if the arrival flow contains a non-random component, i.e. it is not Poisson random. Let's illustrate what happens if this principle is violated.
Suppose that there is one operating room (OR) and there are six scheduled surgeries for a day, at 7 am, 10 am, 1 pm, 4 pm, 7 pm, 10 pm. On this day six random emergency patients also arrived with the average inter-arrival time 4 hours, i.e. E(4) hours. The total number of patients for one day is 12.
If a QA model is applied assuming that all 12 patients are random arrivals, then we would get the arrival rate 12 pts/24 = 0.5 pts per hour. Using the average surgery time 1 hour, E(1) hours, we get the average number of patients in the queue, Lq = 0.5, waiting time in queue, Wq = 1 hour, and time in the system, Ws = 2 hours.
Now, let's use a simple DES model with two arrival flows, one random, E(4) hours, and another one with six scheduled patients, as indicated on Figure 5.
Simulation length was 24 hours. The average number of patients in the queue was Lq = 0.3, waiting time in the queue Wq = 0.33 hours, and time in the system Ws = 0.55 hours.
Notice how badly the QA model over-estimated the time: almost by a factor of 3 for waiting time in the queue, and almost by a factor of 4 for the time in the system!
Thus, QA models cannot account accurately enough for arrival variability that is lower than Poisson variability. There are some approximate QA formulas that include a coefficient of variation of the service time distribution, but only for one server (Green, 2006).


Figure 5. Mixed patient arrivals: random and scheduled. Two arrival flows, one random, E(4) hours, and
one with six patients scheduled at 7 am, 10 am, 1 pm, 4 pm, 7 pm, 10 pm, as indicated on the panel
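The two-flow setup of Figure 5 can be mimicked in a few lines: merge the fixed appointment times with a Poisson walk-in stream and feed a single OR. This is a sketch using the numbers from the text (six cases at 7 am to 10 pm, E(4) h emergencies, E(1) h surgeries), not the panel-driven model itself.

```python
import random

def mixed_arrivals_day(seed=1):
    """One OR, six scheduled cases plus Poisson emergencies; mean wait (hrs)."""
    rng = random.Random(seed)
    scheduled = [7.0, 10.0, 13.0, 16.0, 19.0, 22.0]   # appointment times, hours
    walkins, t = [], 0.0
    while True:
        t += rng.expovariate(1.0 / 4.0)               # E(4) h inter-arrivals
        if t >= 24.0:
            break
        walkins.append(t)
    free_at, waits = 0.0, []
    for a in sorted(scheduled + walkins):             # single OR, FIFO
        start = max(a, free_at)
        free_at = start + rng.expovariate(1.0)        # E(1) h surgery
        waits.append(start - a)
    return sum(waits) / len(waits)

print(mixed_arrivals_day())
```

Because half the arrivals are perfectly regular, replications of this model give waits well below the all-Poisson M/M/1 prediction, which is the factor-of-3 gap discussed above.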

Effect of Added Variability on Process Flow and Delay

Let's now demonstrate how additional arrival variability, service time variability, and/or both would affect the throughput and waiting time in the system. We will be using a simple DES model similar to the model presented on Figure 1.
We consider five scenarios with consecutively added, step-by-step patient flow variability:

• Scenario 1. No variability at all. Each patient arrives exactly every 2 hours. Service time (surgery) is exactly 2 hours.
• Scenario 2. There is arrival variability, i.e. a Poisson flow with the average arrival rate 0.5 pts/hr (average inter-arrival time is 2 hours, E(2) hrs). No service time variability; it is exactly 2 hours.
• Scenario 3. No arrival variability. There is service time variability with the average service time 2 hours, E(2) hours.
• Scenario 4. There are both types of variability: Poisson arrival with the average arrival rate 0.5 patients per hour (average inter-arrival time 2 hours), and service time variability with the average service time 2 hours, E(2).
• Scenario 5. Poisson arrival variability with the average arrival rate 0.5 patients per hour (average inter-arrival time 2 hours). Service time variability is log-normally distributed with the distribution mean value 2 hours and standard deviation 4 hours (these values correspond to the log-normal parameters: location = -0.112 and scale = 1.27).

Results for 24 hours simulation time are summarized in Figure 6. It follows from this table that as the variability steps are added to the process, patient throughput decreases, the overall waiting time increases and utilization decreases.
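The five scenarios can be scripted directly. The following single-server harness is our own illustrative sketch (it is not the Figure 1 model), using the lognormal parameters quoted above:

```python
import random

def run(arrival, service, hours=24.0, seed=1):
    """Single-server run; arrival()/service() draw successive durations (hrs).
    Returns (number of arrivals within the horizon, their mean wait)."""
    rng = random.Random(seed)
    t, free_at, waits = 0.0, 0.0, []
    while True:
        t += arrival(rng)
        if t >= hours:
            break
        start = max(t, free_at)
        free_at = start + service(rng)
        waits.append(start - t)
    return len(waits), (sum(waits) / len(waits) if waits else 0.0)

fixed = lambda d: (lambda rng: d)                         # no variability
expo = lambda m: (lambda rng: rng.expovariate(1.0 / m))   # Poisson-type
logn = lambda mu, sig: (lambda rng: rng.lognormvariate(mu, sig))

scenarios = {
    "1 no variability":    (fixed(2.0), fixed(2.0)),
    "2 arrival only":      (expo(2.0), fixed(2.0)),
    "3 service only":      (fixed(2.0), expo(2.0)),
    "4 both":              (expo(2.0), expo(2.0)),
    "5 lognormal service": (expo(2.0), logn(-0.112, 1.27)),
}
for name, (a, s) in scenarios.items():
    print(name, run(a, s))
```

Scenario 1 is fully deterministic: 11 patients, zero wait. Averaged over many seeds, the waits grow as the variability steps are added, reproducing the qualitative pattern of Figure 6.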


Figure 6. Five DES scenarios with consecutively added variability. Simulation performed for a 24-hour period

At the same time, the variability distribution should not be characterized only by a single parameter, such as its coefficient of variation (CV). The overall shape of the variability distribution also plays a big role. For example, the coefficient of variation for the lognormal service time distribution (CV = 4/2 = 2) is greater than that for the exponential distribution (CV = 1). Nonetheless, this did not result in an increase of the wait time, as would follow from an approximated queuing formula (Allen, 1978; Green, 2006). Only DES can accurately account for the effect of the distribution shape and skewness.

ICU Waiting Time

Analysis of the ICU waiting time considered in section 2.1.4 using QA could be done using the same model as in the previous section (Figure 1). We simply use capacity 5 or 10, accordingly. Let's start with the exponential distribution of the length of stay with the average 2.5 days to compare with the QA results (section 2.1.4). Let's also use the average inter-arrival time as 1 day or 0.5 days, accordingly.
The following steady-state DES waiting times were obtained: 0.43 hours for the 10-bed ICU and 2.94 hours for the 5-bed ICU, accordingly. These are practically the same results as for QA (section 2.1.4).
Now, let's see how different distributions with the same average length of stay affect the waiting time. Recall that QA cannot answer such practically important questions at all, and it is valid only for the exponential distribution or, at best, for distributions with a coefficient of variation close to 1 (Green, 2006).
For example, the triangle distribution limited between 2 days and 3 days with the average 2.5 days results in:

• for the 10-bed ICU the average waiting time is 0.27 hours, while for the 5-bed unit it is 1.72 hours. Notice how significantly different these values are from the exponential length of stay with the same average.

Similarly, for the lognormal distribution with the same average 2.5 days and a standard deviation of, say, 2 days, we get:


• for the 10-bed ICU the average waiting time is 0.35 hours, while for the 5-bed unit it is 2.46 hours.

Thus, QA is severely limited in that it cannot account for different distributions of service time, and it always produces the same result if the same average is used, regardless of the effect of different distributions with the same average.

Emergency and Elective Surgeries: Dedicated OR vs. Combined Service OR

In this section we will discuss the use of a simple DES model to address an issue that caused a controversy in the literature on healthcare improvement. If patient flow into operating rooms consists of both elective (scheduled) and emergency (random) surgeries, is it more efficient to reserve dedicated operating rooms (OR) separately for elective and emergency surgeries, or to perform both types of surgeries in any available OR?
Haraden et al (2003) recommend that hospitals that want to improve patient flow should designate separate ORs for scheduled and unscheduled (emergency) surgeries. The authors state that in this arrangement '…Since the vast majority of surgeries is scheduled, most of the OR space should be so assigned. Utilization of the scheduled rooms becomes predictable, and wait times for unscheduled surgery become manageable'. The authors imply that this statement is self-evident, and provide no quantitative analysis or any other justification for this recommendation.
On the other hand, Wullink et al (2007) built a DES model of the OR suite for the large Erasmus Medical Center hospital (Rotterdam, The Netherlands) to quantitatively test scenarios of using dedicated ORs for emergency and elective surgeries vs. combined use of all ORs for both types of surgeries. These authors concluded that based on the DES model results '…Emergency patients are operated upon more efficiently on elective ORs instead of a dedicated emergency ORs. The results of this study led to closing of the emergency OR in this hospital'.
In contrast to the unsupported recommendation of Haraden et al (2003), Wullink et al (2007) presented specific data analysis to support their

Figure 7. Emergency and Elective surgeries: dedicated OR vs. combined service OR


conclusions: combined use of all ORs for both types of surgery results in a reduction of the average waiting time for emergency surgery from 74 min to 8 min.
In this section we present a simple generic DES model to address the same issue and verify the literature results. For simplicity, we consider an OR suite with two rooms, OR1 and OR2. Patient flow includes both emergency (random) and scheduled patients.
Let's first consider the situation when the majority of surgeries are emergency (random) ones. We assume a Poisson emergency patient arrival with the average inter-arrival time 2 hours, i.e. E(2) hours (Poisson arrival rate 0.5 pts/hr). Four elective surgeries are assumed to be scheduled three days a week, on Tuesday, Wednesday and Thursday, at 8 am, 10 am, 1 pm and 3 pm. Both emergency and elective surgery duration is assumed to be random with the average value 2 hours, i.e. E(2) hours. (Wullink et al, 2007 used a mean case duration of 2.4 hours for elective and 2.1 hours for emergency surgeries.)
Using these arrival and service time data, let's consider two scenarios.
Scenario 1: two ORs, one dedicated only for elective surgeries (OR1), and another dedicated only for emergency surgeries (OR2), as shown on Figure 7. If the dedicated OR is not available, then the new patient waits in the Queue area until the corresponding dedicated OR becomes available.
Scenario 2: also two ORs. However, both emergency and elective patients go to any available OR, as indicated on Figure 8. If both ORs are not available, then the new patient waits in the Queue area until one of the ORs becomes available.
Scenario 1 model layout: two ORs, one dedicated only for elective surgeries (OR1), and another dedicated only for emergency surgeries (OR2). The scheduled arrival pattern is indicated on the panel.
Simulation was run for 4 days (96 hours), Monday to Thursday (on Friday there were no scheduled elective surgeries), using 300 replications. DES results for these two scenarios are given in Figure 9.
Examination of the results is instructive. While the number of elective surgeries is the same for both scenarios, the number of performed emergency

Figure 8. Emergency and Elective surgeries: dedicated OR vs. combined service OR


Figure 9. Simulation results: dedicated ORs vs. combined ORs. Most surgeries are emergency ones. Simulation performed for a 4-day (96-hour) time period

                                              Dedicated ORs            Combined ORs
Characteristic                             elective   emergency     elective   emergency
Average number of surgeries                   12        41.3           12        47.3
Average waiting time in the system, hrs        0.7       7.1            1.3       1.06
Average number of patients in Queue area       0.1       3.76           0.16      0.59
Average OR utilization, %                     24.2      85.8                62.7

surgeries is higher for the combined OR scenario. The average waiting time for elective surgery increases about 2 times for the combined OR scenario (from 0.7 to 1.3 hours); however, the average waiting time for emergency surgery drops dramatically, from about 7 hours down to 1 hour! (Compare this dramatic drop with the Wullink et al (2007) result.) The average number of patients waiting for emergency surgery is significantly lower for the combined OR scenario. The dedicated elective OR is under-utilized (~24%) while the dedicated emergency OR is highly utilized (~85%), resulting in a significant increase in waiting time for emergency surgeries. OR utilization in the combined OR scenario is a rather healthy 63%.
Scenario 2 model layout: two ORs. Both emergency and elective patients go to any available OR. Information on the panel indicates the capacity of the ORs (2) and the average surgery (service) time E(2) hr.
Now, let's consider the situation when the majority of surgeries are elective.
We have the same two scenarios with two ORs (dedicated and combined) with the average surgery duration 2 hours, E(2) hours. However, this time 6 daily elective surgeries are scheduled Monday to Thursday (no Fridays) at 7 am, 9 am, 11 am, 1 pm, 3 pm, 5 pm. Emergency (random) surgeries are less frequent, with the average inter-arrival time 6 hours, E(6), i.e. Poisson arrival rate about 0.167 patients per hour.
Simulation results for 96 hours (4 days), 300 replications, are given in Figure 10.
Notice that in this case the average waiting time for combined ORs drops about 3 to 4 times both for emergency and elective patients, as does the average number of patients in the Queue area.
Overall, these DES results support the conclusions of Wullink et al (2007) that performing emergency surgeries in the combined OR scenario is more effective than in the reserved dedicated emergency OR. These authors provided a detailed instructive discussion on why the dedicated OR scenario performs worse, especially for emergency surgeries, while intuitively it seems that it should perform better, as Haraden et al (2003) assumed.
Wullink et al (2007) pointed out that besides reserving OR capacity for emergency surgery arrivals, ORs need to reserve capacity to cope with the variability of surgery duration. In the combined OR scenario, reservation might be shared


Figure 10. Simulation results: dedicated ORs vs. combined ORs. Most surgeries are elective. Simulation performed for a 4-day (96-hour) time period

to cope with unexpected long case duration and emergency surgery, whereas the dedicated scenario does not offer the opportunity to use the overflow principle (compare this with the two simplified scenarios considered earlier in section 2.1.1 using the QA model).
On top of that, a dedicated OR scenario may cause queuing of emergency surgeries themselves because of their random arrival time. If emergency surgeries were allocated to all available ORs (combined scenario), then it would be possible to perform them simultaneously, thereby reducing the waiting time.
Wullink et al (2007) acknowledge that '…interrupting the execution of the elective surgical case schedule for emergency patients may delay elective cases. However, inpatients are typically admitted to a ward before they are brought to the OR. Although delay due to emergency arrivals may cause inconvenience for patients, it does not disturb processes in the OR'.
As DES modeling indicates, the delay in scheduled cases in the combined ORs (if the majority of surgeries are emergency) is usually not too dramatic (e.g. from 0.7 to 1 hour), while the reduction of waiting time for emergency surgeries is very substantial (from 74 min to 8 min according to Wullink's DES model, or from about 7 hours to 1 hour according to our simplified DES model with generic input data described in this section).
If the majority of surgeries are scheduled ones, then there is not much delay in the combined ORs at all, both for scheduled and emergency surgeries.
Examples presented in Section 2 illustrate a general fundamental principle: the lower the variability in the system (both arrival and service), the lower the delays (see also Green, 2006). In other words, lowering variability is the key to improving patient flow and reducing delays and waiting times.
One of the root causes of why intuition usually fails to account for the effect of variability even in very simple systems is a general human tendency to avoid the complications of uncertainty in decision making by turning it into certainty. The average procedure time or the average number of arrived patients is typically treated as if they were fixed values, ignoring the variability around these averages. This ignorance often results in erroneous conclusions. DES models, however, naturally handle complex variability using statistical distributions with multiple replications.
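The two routing rules compared in this section are easy to contrast in a toy model. The sketch below uses the generic inputs of this section (E(2) h emergency inter-arrivals, twelve Tuesday-Thursday electives, E(2) h durations); it is not the Wullink et al (2007) model, and all names are our own.

```python
import random

def or_suite(dedicated, hours=96.0, seed=1):
    """Two ORs, elective + emergency flow; mean waits (hrs) per class.

    dedicated=True : OR1 takes only electives, OR2 only emergencies.
    dedicated=False: either free OR takes the next patient."""
    rng = random.Random(seed)
    # Tue-Thu (days 1-3) electives at 8 am, 10 am, 1 pm, 3 pm.
    electives = [d * 24.0 + h for d in range(1, 4) for h in (8, 10, 13, 15)]
    emergencies, t = [], 0.0
    while True:
        t += rng.expovariate(0.5)                 # E(2) h inter-arrivals
        if t >= hours:
            break
        emergencies.append(t)
    arrivals = sorted([(a, "elective") for a in electives] +
                      [(a, "emergency") for a in emergencies])
    if dedicated:
        pools = {"elective": [0.0], "emergency": [0.0]}   # one OR per class
    else:
        shared = [0.0, 0.0]                               # pooled ORs
        pools = {"elective": shared, "emergency": shared}
    waits = {"elective": [], "emergency": []}
    for a, kind in arrivals:
        pool = pools[kind]
        i = min(range(len(pool)), key=pool.__getitem__)   # soonest-free OR
        start = max(a, pool[i])
        pool[i] = start + rng.expovariate(0.5)            # E(2) h surgery
        waits[kind].append(start - a)
    return {k: sum(v) / len(v) for k, v in waits.items()}

print(or_suite(dedicated=True))
print(or_suite(dedicated=False))
```

With the dedicated rule the emergency OR runs at an offered load of 1.0 (0.5/hr times 2 hr), so its queue grows badly, while pooling shares the slack, which is the same mechanism behind the Figure 9 results.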


DES MODELS: ADVANCED APPLICATIONS

In this section more advanced features of DES models will be presented, such as custom-built action logic to capture fine details of system behavior, conditional and alternate routing types, multiple entity entries, multiple scheduled arrivals, and highly skewed service time distributions that accurately reflect real data rather than assuming some hypothetical data distribution.
It will also be demonstrated how a specific problem statement leads to simulating different 'what-if' scenarios to address practically relevant issues for the ED and ICU (sections 3.1 and 3.2), as well as interdependencies of patient flow for the ED, ICU, OR and floor nursing units (NU) (section 3.3).

DES of Emergency Department Patient Flow: Effect of Patient Length of Stay on ED Ambulance Diversion

Emergency Department (ED) ambulance diversion due to 'no available beds' status has become a common problem in most major hospitals nationwide. A diversion status due to 'no available ED beds' is usually declared when the ED census is close to or at the ED bed capacity limit. The ED remains in this status until beds become available when patients are moved out of the ED (discharged home, expired, or admitted into the hospital as inpatients). The percent of time when the ED is on diversion is one of the important ED performance metrics, along with the number of patients in the queue in the ED waiting room, or the ED patient waiting time. ED diversion results in low quality of care, dissatisfaction of patients and staff, and lost revenue for hospitals.
Patients' length of stay (LOS) in the ED is one of the most significant factors that affect the overall ED throughput and ED diversion (Blasak et al, 2003; Gunal and Pidd, 2006; Garcia et al, 1995; Miller et al, 2003; Simon et al, 2003). There are generally two major groups of patients with different LOS distributions: (i) patients admitted as inpatients into the hospital (OR, ICU, floor nursing units), and (ii) patients stabilized, treated and discharged home. Mayhew and Smith (2008) also recognized a key difference between these two groups.
In order to effectively attack the problem of ED diversion reduction, the LOS of these two groups should be quantitatively linked to ED diversion. Then the target LOS limits can be established based on ED patient flow analysis.
A number of publications are available in which the importance of having an ED LOS target was discussed. Kolker (2008) provided a detailed analysis of the literature.
One instructive article published recently by Mayhew and Smith (2008) evaluates the consequences of the 4-hour LOS limit mandated by the UK National Health Services for the UK hospitals' Accident & Emergency Departments (A&ED). One of the main conclusions of this work was '…that a target should not only be demanding but that it should also fit with the grain of the work on the ground… Otherwise the target and how to achieve it becomes an end in itself'. Further, '…the current target is so demanding that the integrity of reported performance is open to question'. This work vividly illustrated the negative consequences of administratively mandated LOS targets that have not been based on objective analysis of the patient flow and an A&ED capability to handle it.
Despite a considerable number of publications on the ED patient flow and its variability, there is not much in the literature that could help to answer a practically important question regarding the target patient LOS: what should it be, and how should it be established in order to reduce ED diversion to an acceptable low level, or to prevent diversion at all? Therefore, a methodology that could quantitatively link the patient LOS limits and ED performance metrics would have considerable practical value.
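The diversion metric (percent of time the census sits at the bed capacity limit) can be computed with a small event loop. The sketch below uses hypothetical Poisson arrivals and exponential LOS for brevity; the chapter's model instead uses empirical arrival records and fitted LOS distributions.

```python
import heapq
import random

def percent_diversion(mean_iat, mean_los, capacity, horizon, seed=1):
    """Fraction of time the ED census sits at its bed capacity (sketch).

    Arrivals that find a full ED are diverted and never enter."""
    rng = random.Random(seed)
    departures = []                       # heap of discharge times
    census, t, at_cap = 0, 0.0, 0.0
    next_arrival = rng.expovariate(1.0 / mean_iat)
    while t < horizon:
        nxt = min(next_arrival, departures[0] if departures else float("inf"))
        nxt = min(nxt, horizon)
        if census == capacity:
            at_cap += nxt - t             # on diversion during [t, nxt)
        t = nxt
        if t >= horizon:
            break
        if departures and departures[0] == t:
            heapq.heappop(departures)     # discharge frees a bed
            census -= 1
        else:                             # arrival event
            if census < capacity:
                census += 1
                heapq.heappush(departures,
                               t + rng.expovariate(1.0 / mean_los))
            # else: ambulance diverted; census stays at the cap
            next_arrival = t + rng.expovariate(1.0 / mean_iat)
    return at_cap / horizon
```

An action-logic layer like the one described below does essentially this bookkeeping inside the simulation package and writes the percentage to the output file.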


Description of the ED Patient Flow Model

Kolker (2008) described the entire ED system. It included a fast-track lane, minor care, trauma rooms and the main patient beds area. Total ED capacity was 30 beds.
Because the objective of this work was simulating the effect of patient LOS on diversion for the entire ED, the detailed model layout was significantly simplified. A widely recognized guideline in DES modeling is to keep the model as simple as possible while capturing the simulation objectives (Jacobson et al, 2006). This was reiterated by Dearie et al (1976), who stressed the importance of capturing only relevant performance variables when creating a simple but not necessarily the most complete model. Following this guideline, a simplified model is presented on Figure 11.
There are two modes of transportation by which patients arrive into the ED, indicated on Figure 11: walk-in and ambulance. When the ED patient census hit the ED bed capacity limit (total 30 beds), an ambulance was bounced back (diverted), as indicated on Figure 11. Ambulance diversion continued until the time when the ED census dropped below the capacity limit. An action logic code was developed that tracked the percentage of time when the census was at the capacity limit. It was reported as percent diversion in the simulation output file.
All simulation runs start at week 1, Monday, at 12 A.M. (midnight). Because the ED was not empty at this time, the Monday midnight patient census was used as the simulation initial condition on January 1, 2007: the ED was pre-filled with 15 patients.
Each patient in the arrival flow was characterized by its week number, day of week, and admitting time on the record, as indicated on the panel on Figure 11. The following descriptive attributes (also indicated on the panel on Figure 11) were assigned to each patient on the arrival schedule to properly track each patient's routing and statistics in the simulation action logic:

Figure 11. Simplified ED structure used to simulate limiting length of stay for patients discharged home
and patients admitted to the hospital


• Mode of transportation: (i) walk-in, (ii) ambulance.
• Disposition: (i) admitted as inpatient, (ii) discharged home.

Arrived patients take available free beds, reducing free ED capacity. Discharged patients (released home or admitted as inpatients) moved out of the simulation system according to their disposition conditional routings. The patients' flow 'in and out' of the ED formed a dynamic supply and demand balance.
The total number of patients included in the simulation was 8411 for the two-month period from January 1 to February 28, 2007. This number of patients was representative enough to make the results valid for subsequent months and years. (Mayhew and Smith (2008) used a three-month 2002 database to calibrate the queuing model; however, the total number of patients was not given.)

Overall Simulation Approach and LOS Distribution Density Functions

The critical element of the dynamics of the supply and demand balance was the time that the patients spent in the ED. This time was fitted by continuous LOS distribution density functions, separately for patients admitted as inpatients and patients discharged home.
The best fit distributions were identified using the Stat:Fit module built into the simulation package: the admitted inpatients' best fit LOS was log-logistic, while the best fit LOS for patients discharged home was Pearson 6 (Johnson et al, 1994). These distributions were built into the simulation action logic.
Because these LOS distributions represent a combination of many different steps of the patient move through the entire ED process from registration to discharge (including both value-added and non-value-added steps and delays), there is no simple interpretation: these are simply the best analytical fits used to represent actual patient LOS data.
Random numbers drawn from these distributions were used to perform multiple replications in each simulation run. It was identified in 'cold' runs that about 100 replications were needed for each simulation in order to get a stable outcome.
Because the objective was to quantify the effect of the LOS limits (both for discharged home patients and admitted inpatients) on the percent diversion, the LOS limits were used as two simulation parameters.
The overall simulation approach was based on a full factorial design of experiments (DOE) with two factors (parameters) at six levels each, imposed on the original (baseline) LOS distribution functions. The response function was the simulated percent diversion. Imposing LOS limits (parameters) on the original (baseline) LOS distribution functions means that no drawn random LOS value higher than the given limiting value was allowed in the simulation run. Therefore, the original LOS distribution densities had to be recalculated for each simulation run as functions of the LOS limits (parameters).
One might be tempted to assume that if a randomly drawn LOS number was higher than the given LOS limit value, this number should be made equal to the LOS limit. However, such an approach would result in a highly skewed simulation output, because a lot of LOS numbers would be concentrated at the LOS limit value.
Instead, a concept of conditional distribution density should be used. If a random LOS number was in the interval from 0 to LOSlim, this number was used for a simulation replication. However, if a random LOS number was outside the interval from 0 to LOSlim, this number was not used, and the next random number was generated until it was in the given interval. This procedure generated a new restricted random variable that is conditional on being in the interval from 0 to LOSlim.
Given the original LOS distribution density, f(T)orig, and the limiting value, LOSlim, the conditional LOS distribution density function


of the new restricted random variable, f(T)new, will be:

f(T)new = f(T)orig / ∫₀^LOSlim f(T)orig dT,  if T ≤ LOSlim

f(T)new = 0,  if T > LOSlim.

The conditional distribution density f(T)new depicted on Figure 12 (bottom panel, dotted bold line) is a function of both the original distribution density and the simulation parameter LOSlim (the upper integration limit of the denominator integral). These denominator integrals were preliminarily calculated and then approximated by 3rd-order polynomials that were built into the simulation action logic (Kolker, 2008).
The model's adequacy check was performed by running the simulation of the original baseline patients' arrival. The model's predicted percent diversion (~23.7%) and the reported percent diversion (21.5%) are close enough (in the range of a few percentage points). Thus, the model captures the dynamic characteristics of the ED patients' flow adequately enough to mimic the system's behavior, and to compare alternatives ('what-if' scenarios).
Along with the percent diversion calculation, a plot of ED census as a function of time (hours/weeks) was also simulated (Kolker, 2008). This instructive plot visualizes the timing when the ED census hits the capacity limit, and therefore ED diversion had to be declared. The plot also illustrated that at some periods of time (mostly late night time) the ED was actually at a low census.
A full factorial computer design of experiments (DOE) was performed with two factors: LOSlim(home) for discharged home patients and LOSlim(adm) for patients admitted into the hospital. Each factor had six levels. The simulated percent diversion was the response function.
A summary of results is presented on Figure 13. It follows from this plot that several combinations

Figure 12. Distribution density function and imposed LOS limit
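The "draw again until it lands in the interval" procedure described above is ordinary rejection sampling from the conditional density f(T)new = f(T)orig / Forig(LOSlim). A sketch with a stand-in lognormal LOS (the chapter's actual fits were log-logistic and Pearson 6; they are not reproduced here):

```python
import random

def truncated_sample(draw, los_limit, rng, max_tries=10000):
    """Draw from a LOS distribution conditioned on 0 < LOS <= los_limit.

    Redrawing until the value lands in the interval realizes the
    conditional density f_new(T) = f_orig(T) / F_orig(los_limit)."""
    for _ in range(max_tries):
        x = draw(rng)
        if 0.0 < x <= los_limit:
            return x
    raise RuntimeError("limit is too deep in the tail for rejection sampling")

# Stand-in for a fitted LOS density: lognormal with mean of about 5 hours.
draw = lambda r: r.lognormvariate(1.5, 0.5)
rng = random.Random(1)
sample = [truncated_sample(draw, 6.0, rng) for _ in range(1000)]
assert max(sample) <= 6.0          # no draw exceeds the imposed LOS limit
```

Clipping draws to the limit instead (x = min(x, limit)) would pile probability mass at exactly LOSlim, which is the skew the text warns about.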


of the parameters LOS lim (home) and LOS lim (adm) would result in a low percent diversion.

For example, if LOS lim (home) stays at 5 hours (the lower curve), then LOS lim (adm) could be about 6 hours with a practically negligible diversion of about 0.5%. Notice that Clifford et al (2008) established a goal of 6 hours for ED LOS for inpatients to eliminate ambulance diversion; this metric is considered exceptional if fewer than 5% of patients exceed the limit. Any other combination of LOS lim (home) and LOS lim (adm) could be taken from the graph to estimate a corresponding expected percent diversion.

Thus, simulation helped to establish a quantitative link between an expected percent diversion and the limiting values of LOS. It also suggested reasonable targets for the upper limits LOS lim (home) and LOS lim (adm).

(Figure 12 legend: (a) thin solid line: original LOS (top panel); bold vertical line: imposed LOS limit of 6 hrs; (b) re-calculated restricted LOS: bold dotted line.)

Analysis of the LOS pattern in the study hospital indicated that a significant percentage of ED patients stayed much longer than the LOS targets suggested by the simulation. For example, ~24% of patients of the study hospital exceeded the LOS lim (adm) of 6 hours, and ~17% of patients exceeded the LOS lim (home) of 5 hours. These long over-target LOS for a significant percentage of patients were a root cause of ED closure and ambulance diversion.

The established LOS lim targets could be used to better manage the daily patient flow. The actual current LOS is tracked for each individual patient. If the current LOS for a particular patient is close to the target limiting LOS lim, a corrective action should be implemented to expedite the move of this patient.

Multiple factors could contribute to a looming delay over the target LOS, such as delayed lab results or X-ray / CT, a consulting physician who is not

Figure 13. Summary plot representing simulated % diversion as a function of the two parameters LOS lim (home) and LOS lim (adm)
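The response surface summarized in Figure 13 comes from a full factorial design, which is straightforward to organize in code. In this sketch the response function is a made-up stand-in for a single DES run, and the factor levels are assumptions (the chapter does not list them):

```python
from itertools import product

def simulate_diversion(los_home, los_adm):
    # Stand-in for one simulation run returning percent diversion;
    # a made-up smooth surface used purely to illustrate the DOE layout.
    return max(0.0, 30.0 - 3.0 * los_home - 2.0 * los_adm)

home_levels = [3, 4, 5, 6, 7, 8]    # LOS lim (home) levels, hours (assumed)
adm_levels = [5, 6, 7, 8, 9, 10]    # LOS lim (adm) levels, hours (assumed)

# Full factorial: every combination of the two factors (6 x 6 = 36 runs).
response = {(h, a): simulate_diversion(h, a)
            for h, a in product(home_levels, adm_levels)}
```

Each of the 36 (home, adm) combinations gets one simulated percent-diversion value, which is exactly the grid a summary plot like Figure 13 is drawn from.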


available, no beds downstream on the hospital floor (ICU) for admitted patients, etc. Analyzing and prioritizing the factors contributing to the over-target LOS lim is an important task.

Notice that the average LOS that is frequently reported as one of the ED patient flow performance metrics cannot be used to manage the daily patient flow. In order to calculate the average LOS, the data should be collected retrospectively for at least a few dozen patients. Therefore, it would be too late to take corrective actions to expedite the move of a particular patient if the average LOS becomes unusually high (whatever 'high' means). In contrast, if the established upper limiting LOS lim targets were not exceeded for the great majority of patients, it would guarantee a low ED percent diversion, and the average LOS would be much lower than the upper limiting LOS lim.

Marshall et al (2005) and de Bruin et al (2007) also discussed the shortcomings of reporting LOS only as averages (the flaw of averages) for skewed (long-tailed) data (see also the illustrative example in section 2.2).

EDs of other hospitals differ by their patient mix, their LOS distribution and their bed capacity. However, the overall simulation methodology presented here will be the same regardless of the particular hospital ED. Such a general methodology would be more practically useful for other EDs than some predetermined, generalized 'one size fits all' target values. The negative consequences of the 'one size fits all' approach were summarized by Mayhew and Smith (2008): '…the practicality of a single target fitting all A&ED will come under increasing strain'.

DES of Intensive Care Unit (ICU) Patient Flow: Effect of Daily Load Leveling of Elective Surgeries on ICU Diversion

An Intensive Care Unit (ICU) is often needed for patients' care. Demand for ICU beds comes from emergency, add-on and elective surgeries. Emergency and add-on surgeries are random and cannot be scheduled in advance. Elective surgeries are scheduled ahead of time. However, they are often scheduled into the daily block-time driven mostly by physicians' priorities. (Daily block time is the time in the operating room that is allocated to a surgeon or a group of surgeons on particular days of the week to perform a particular type of surgical service.) Usually elective surgery scheduling does not take into account the competing demand for ICU beds from the emergency and add-on cases.

Because of the limited capacity of ICU beds, such a disconnection often results in Emergency Department (ED) closure (diversion) for patients due to no beds in the ICU. This is a typical example of a system bottleneck caused by the interdependency and competing demands between the material (patient) flows in a complex system: the upstream problem (ED closure) is created by the downstream problem (no ICU beds) (see section 3.3).

Usually two types of variability affect the system's patient flow: natural process flow variability and scheduled (artificial) flow variability (Litvak et al, 2001; Litvak and Long, 2000; Haraden et al, 2003).

Patients can be admitted into the ICU from the Emergency Department (ED), other local area hospitals, inpatient nursing units, and/or operating rooms (OR). Patients admitted into the ICU from the ED, other local area hospitals, and inpatient nursing units are primary contributors to the natural random flow variability because the timing of these admissions is not scheduled in advance and is unpredictable.


Admissions into the ICU from the OR include emergency, add-on, and elective surgeries. Elective surgeries (cases) are defined as surgeries that could safely be delayed for the patient for at least 24 hrs (and usually much longer).

Emergency and add-on surgeries also contribute to the natural process flow variability. Because this type of variability is statistically random, it is beyond hospital control. It cannot be eliminated (or even much reduced). However, some of its statistical characteristics can be predicted over a long period of time, which could help to manage it.

Elective surgeries that require post-operative admission into the ICU contribute to the scheduled (artificial) flow variability. Elective surgery scheduling is driven mostly by the individual priorities of the surgeons and their availability for other commitments (teaching, research, etc). This variability is usually within hospital management control, and it could be reduced or eliminated by proper management of the scheduling system.

It is possible to manage the scheduling of the elective cases in a way that smooths (or daily load levels) the overall patient flow variability. Daily load leveling would reduce the chances of excessive peak demand for the system's capacity and, consequently, would reduce its diversion.

There are quite a few publications in which the issues of smoothing surgical schedules and ICU patient flow are discussed. Kolker (2009) provided a detailed analysis of the literature. Nonetheless, there is not much in the literature that could help a scheduler to directly answer an important question: what maximum number of elective surgeries per day should be scheduled, along with the competing demand from emergency surgeries, in order to reduce ICU diversion to an acceptably low level, or to prevent diversion at all?

Therefore, a methodology that could quantitatively link the daily number of elective surgeries and ICU patient flow throughput (or diversion) would have considerable practical value.

Description of the ICU Patient Flow Model

The layout of the ICU model of the study hospital is represented on Figure 14.

The entire ICU system includes four specialized ICU units: Cardio ICU (CIC), Medical ICU (MIC), Surgical ICU (SIC) and Neurological ICU (NIC). The capacity (number of beds) of each ICU unit was: CIC=8, MIC=10, SIC=21 and NIC=12. The total ICU capacity was 51 beds.

All simulation runs start at week 1, Monday, at 12 A.M. (midnight). Because the ICU was not empty at this time, the Monday midnight patients' census was used as the simulation initial conditions: CIC=8, MIC=9, SIC=20 and NIC=12.

Patients admitted into each ICU unit formed an entity arrival flow. The week number, day of the week and admitting time characterize each patient in the arrival flow. Each discharged patient is also characterized by the week number, day of the week and discharge time.

Patient flow 'in and out' formed a dynamic supply and demand balance (supply of ICU beds and demand for them by patients). If there was no free bed at the time of admission in the particular primary ICU unit, then the patient moved into other ICU units using alternative-type routings (depicted by the thin lines between the units, Figure 14).

Patient moves followed the action logic that simulated the hospital's established rules for dealing with overcapacity of the ICU units:

• if no beds are available in CIC, move to SIC
• if no beds are available in MIC, move to CIC; else move to SIC; else move to NIC
• if no beds are available in NIC, move to CIC; else move to SIC

The panel indicates an example of a scheduled SIC arrival: week number: 18; day: Tuesday; time: 2:00 pm. Patient attributes (bottom of the panel):


Figure 14. DES layout of the ICU patient flow model
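The alternative routings depicted in Figure 14 (and listed in the bullet points above) map directly onto a small dispatch function. The sketch below is a hypothetical stand-in for the simulation's action logic; the free-bed bookkeeping and the empty SIC overflow list are assumptions (the chapter lists no overflow targets for SIC):

```python
# Overflow order for each primary ICU unit, following the stated rules.
OVERFLOW = {
    'CIC': ['SIC'],
    'MIC': ['CIC', 'SIC', 'NIC'],
    'NIC': ['CIC', 'SIC'],
    'SIC': [],   # assumption: no overflow targets listed for SIC
}

def route(primary, free_beds):
    # Return the unit that receives the patient: the primary unit if it
    # has a free bed, otherwise the first overflow unit with one.
    # None means no candidate unit has a bed (a diversion candidate).
    for unit in [primary] + OVERFLOW[primary]:
        if free_beds.get(unit, 0) > 0:
            return unit
    return None
```

For example, with CIC and MIC full, a CIC-bound patient is routed to SIC, and a NIC-bound patient with every candidate unit full yields None.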

adm_from = OR (Operating Room); pt_type = els (elective scheduled).

When the patient census of the ICU system hit or exceeded a critical limit, an ICU diversion was declared due to 'no ICU beds'. The critical limit in the study hospital was defined as the number of occupied beds that is two beds less than the total capacity, i.e. 49 beds. The two 'extra' beds were left as a buffer in anticipation of more admissions coming soon.

Diversion status was kept until the time when the ICU census dropped below the critical limit. An action logic code was developed that tracked the percentage of time when the census was at or exceeded the critical limit. It was reported as the percent diversion in the simulation output file.

The following descriptive attributes were assigned to each patient on the arrival schedule to properly track each patient's routing and statistics in the simulation action logic:

• Patient type attribute: elective surgery (els) or emergency surgery (ems).
• Patient 'admitted from' attribute: emergency department (ED), operating room / recovery room (opr), external hospital, rehabilitation, any floor nursing unit.

The total number of admitted patients included in the ICU simulation model was 1847 during the 18-week period (about four months' worth of data). The total number of elective cases was about 21% of all ICU admissions for the 18-week period.
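The percent-diversion bookkeeping described above (the fraction of time the census sits at or above the 49-bed critical limit) reduces to a weighted time fraction. A simplified sketch; the piecewise-constant census log format is an assumption, not the chapter's actual output file:

```python
def percent_diversion(census_log, capacity=51, buffer_beds=2):
    # census_log: list of (duration_hours, census) segments during which
    # the census stayed constant.  Diversion is declared while
    # census >= capacity - buffer_beds (49 beds in the study hospital).
    critical = capacity - buffer_beds
    total = sum(d for d, _ in census_log)
    diverted = sum(d for d, c in census_log if c >= critical)
    return 100.0 * diverted / total
```

For instance, 10 hours at a census of 48, then 5 hours at 49 and 5 hours at 51, gives 10 of 20 hours at or above the critical limit, i.e. 50% diversion.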


Because elective cases in the study hospital were scheduled by block-time for the same days of different weeks, the weekly data were drilled down to analyze their variation for the same days of different weeks, i.e. from one Monday to another Monday of the following week, from one Tuesday to another Tuesday, and so on.

It follows from these data (Kolker, 2009) that there was significant variability in the scheduling practice of elective surgeries from one Monday to another Monday, from one Tuesday to another Tuesday, and so on. For example, 8 cases were scheduled on Monday 6/5, 2007, while only 2 cases were scheduled on Monday 6/26, 2007, and only 1 case was scheduled for Monday 9/18, 2007. A similar picture was observed for other days of the week and for the other ICU units: NIC, MIC, CIC. This highly uneven scheduling practice resulted in straining the ICU system on busy days and underutilizing the system on light days. The overall variability of the schedule was quantified as the standard deviation of the daily number of elective cases over the entire period.

A model adequacy check was performed using the original baseline patient arrival database. The model's predicted percent diversion for different time periods (from 1 month to 4 months long) was then compared with the actual percent diversion. The latter was reported by the ED as the percent of time when the ED was closed to ambulances due to 'no ICU beds'. It could be concluded (Kolker, 2009) that the model captures the dynamic characteristics of the ICU patient flow adequately enough (within 1 to 2 percent of the actually reported values) to mimic the system's behavior and to compare alternative ('what-if') scenarios.

Along with the percent diversion calculation, a plot of ICU census as a function of time (hrs/weeks) was also simulated (Kolker, 2009). The plot visualizes the timing when the ICU census hits or exceeds the critical limit, and therefore ICU diversion had to be declared. The plot also illustrates that at some periods of time the ICU is actually underutilized, having a low census. These peaks and valleys of the census impose a significant strain on the effectiveness of the ICU operations.

Once the model was checked for its adequacy, it was used with enough confidence to simulate 'what-if' scenarios. The approach was to actively manipulate only the day and time of elective surgeries (leaving the timing of all emergency and add-on surgeries unchanged). The objective was to quantify the effect of the elective surgeries schedule smoothing (or, equivalently, daily leveling, or daily capping) on the ICU diversion.

The first 'what-if' scenario was: what would the percent diversion be if not more than 5 elective surgeries per day were scheduled for SIC (cap 5 cases) and not more than 4 elective surgeries were scheduled for NIC (cap 4 cases)?

For SIC, three 'extra' elective surgery patients were moved from Monday 6/5 to other Mondays, such as 6/26, 7/10, and 8/7. One 'extra' elective surgery patient was moved from 6/19 to 8/21. Similarly, two 'extra' elective surgery patients were moved from Tuesday 6/13 to Tuesdays 6/27 and 9/5, as illustrated on Figure 15. Similar moves were performed for Wednesdays, Thursdays, and Fridays, as well as for NIC. As a result of these moves, new, smoother schedules were obtained. Notice that the standard deviations of the new schedules were much lower than those of the original schedules (Kolker, 2009).

Simulation runs for the new smoothed schedules resulted in a much-reduced diversion: about ~3.5%. The simulated census clearly indicated that the critical census limit was exceeded less frequently and for shorter periods of time. This is the reason for the reduced diversion compared to the original un-smoothed schedule. Notice that the total number of elective surgeries remains the same. Not a single surgery was dropped; rather, 'extra' surgeries were

472
Queuing Theory and Discrete Events Simulation for Health Care

Figure 15. Diagram of the move of the number of elective surgeries for the daily level (cap) 5 cases
(Mondays and Tuesdays shown)
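The case moves diagrammed in Figure 15 follow a simple greedy capping rule: push 'extra' cases from over-cap days to the next later same-weekday slot with spare capacity. The helper below is a hypothetical illustration, not the chapter's scheduler; the optional max_shift parameter mirrors the 'not further than 2 weeks apart' restriction discussed in the text:

```python
def level_daily_load(counts, cap, max_shift=None):
    # counts: elective-case counts for the same weekday across successive
    # weeks (e.g. consecutive Mondays).  Extra cases above `cap` are moved
    # to the next later week with spare capacity; max_shift limits how many
    # weeks ahead a case may be bumped (None = unlimited).
    out = list(counts)
    for i in range(len(out)):
        while out[i] > cap:
            stop = len(out) if max_shift is None else min(len(out), i + 1 + max_shift)
            for j in range(i + 1, stop):
                if out[j] < cap:
                    out[j] += 1
                    out[i] -= 1
                    break
            else:
                break  # no later week within reach has room; the extras stay
    return out
```

For example, consecutive Mondays with 8, 2, 5 and 1 cases level to 5, 5, 5 and 1 under a cap of 5, with the total number of surgeries unchanged; a tight max_shift can leave an over-cap day untouched, which is exactly the trade-off behind the 'within two weeks' scenarios.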

re-scheduled from the busy days to lighter later days to make the overall schedule for the same time period smoother.

Is it possible to lower diversion further? The next 'what-if' scenario was: what would the percent diversion be if not more than 4 elective surgeries per day were scheduled, both for SIC and NIC (cap 4 cases)?

A similar move of the 'extra' surgeries from the days on which the daily level of 4 surgeries was exceeded to later, lighter days resulted in a new schedule that was even smoother than the previous one with the leveling limit of 5 surgeries per day. The standard deviation was reduced by ~40% for Mondays, and by ~35% for Tuesdays, Wednesdays, and Thursdays, respectively, vs. the baseline original schedule. This should have helped to further reduce diversion. Indeed, the simulation runs demonstrated that the diversion dropped down to the level of ~1.5%.

Is the predicted low ICU diversion of ~1.5%, with the SIC and NIC daily leveling of not more than 4 elective surgeries per day, an acceptable solution? Technically, it is. However, some 'extra' surgeries would have been bumped to more than 2 months apart, e.g. from early June to early August (Figure 15). The problem is that not all patients could wait that long even though the surgery is elective. Also, from the practical standpoint, daily leveling of not more than 4 surgeries per day is sometimes too restrictive.

Therefore, a next series of 'what-if' scenarios was considered: is it possible to get a low diversion of about ~1% by bumping 'extra' cases to block-time days that are not further than 2 weeks apart (Dexter et al, 1999)? It was also considered increasing the daily leveling back to


5 elective surgeries per day in order to make the limit less restrictive.

The elective schedule with the additional restriction 'not more than two weeks apart' is less smooth. It has a higher standard deviation (1.59) than the schedule without the restriction, for which the standard deviation was 1.42. Notice that the original un-smoothed schedule had the highest standard deviation, 1.97.

Simulation runs of the 'what-if' scenario corresponding to the restricted 'within two weeks' schedule and SIC daily leveling of 5 elective surgeries resulted in an ICU diversion of only ~8%. This is a relatively small gain compared to the original baseline un-smoothed schedule with ~10.5% diversion. This small reduction of the diversion was a reflection of the lower smoothness (higher standard deviation) of this schedule. Thus, load leveling to 5 elective surgeries per day with 'extra' cases bumped within 2 weeks apart was not effective enough alone.

In order to reduce the percent diversion back to low single digits while still keeping daily leveling at 5 elective surgeries per day and moving 'extra' cases within two weeks apart, an additional factor was considered. This factor was a more rigorous implementation of ICU admission and discharge criteria. It was suggested that patients with a likely LOS of less than 24 hrs be excluded from ICU admission and moved to a regular nursing unit instead (see section 3.3). This scenario resulted in a significant reduction of ICU diversion, down to about ~1%.

There is a trade-off between these two scenarios. From the practical standpoint, the higher level-loading elective schedule (5 surgeries per day) would be easier to implement than the lower level-loading one (4 surgeries per day) because the former is less restrictive. However, the former assumes that the ICU admission criteria/exclusions are rigorously applied, while the latter does not require exclusions from the current ICU admission practice.

One more valuable application of this DES model could be determining a more appropriate allocation of the number of beds between the CIC, SIC, MIC, and NIC units, compared to the current historical allocation.

Once the DES model is validated, it becomes a powerful tool for hospital operations decision-making.

DES of the Entire Hospital System Patient Flow: Effect of Interdependencies of ED, ICU, Operating Rooms (OR) and Floor Nursing Units (NU)

It was discussed in the Introduction that large complex hospital systems or multi-facility clinics are usually deconstructed into smaller subsystems or units. Most published DES models focus on the separate analysis of these individual units (Jacobson et al, 2006). However, according to the principles of analysis of complex systems, these separate subsystems (units) should be reconnected back in a way that captures the most important interdependencies between them. DES models that capture the interaction of major units in a hospital, and the information that can be obtained from analyzing the system as a whole, can be invaluable to hospital planners and administrators. This section specifically illustrates a practical application of this system-engineering principle.

DES models of the ED and ICU patient flow have been described separately in detail in sections 3.1 and 3.2. It is well known that these subsystems are not stand-alone units; they are closely interdependent, as are the Operating Rooms (OR) and floor nursing units (NU).

A high-level patient flow map (layout) of the entire hospital system is shown on Figure 16. One output of the ED model for patients admitted into the hospital (ED discharge) now becomes an ICU, OR and NU input. About


62% of admitted patients were taken into operating rooms (OR) for emergency surgery, about 28% of admitted patients moved directly into the ICU, and about 10% of patients were admitted from the ED into floor nursing units.

The OR suite consisted of 12 interchangeable operating rooms used both for ED emergency and scheduled surgeries. There were four daily scheduled OR admissions at 6 am, 9 am, 12 pm and 3 pm, Monday to Thursday (there were no scheduled surgeries on Fridays and weekends). The best fit of the emergency surgery duration was found to be a Pearson 6 distribution. Elective surgery duration depends on the surgical service type, such as general surgery, orthopedics, neuro-surgery, etc. For the simplicity of this model, elective surgery duration was weighted by each service's percentage. The best fit of the overall elective surgery duration was found to be a Johnson SB distribution (Johnson et al, 1994).

About 14% of post-surgery patients were admitted from the OR into the ICU (direct ICU admission), while 86% were admitted into floor NU. However, some patients (about 4%) were readmitted from floor NU back to the ICU (indirect ICU admission from the OR).

Patient length of stay (LOS) in the NU was assumed to be in the range from 1 day to 10 days, with a most likely value of 5 days, represented by a triangle distribution. NU overall capacity was 420 beds. At the simulation start, the NU was pre-filled with a starting census of 380 patients (see also sections 3.1 and 3.2).

The baseline DES resulted in an ED diversion of about 24%, an ICU diversion of about 11% (see sections 3.1 and 3.2), and a floor NU diversion of about 14.6%.

If the limiting ED LOS had aggressive targets of 5 hours for patients discharged home and 6 hours for patients admitted to the hospital (see section 3.1), then ED diversion became practically negligible (less than 0.5%). However, because of the interdependencies of patient flows, ICU diversion increased to 12.5% and floor NU diversion remained about the same, 14.9%. Thus, aggres-

Figure 16. DES layout of a high-level patient flow map of the entire hospital system
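The branching percentages on this flow map imply a rough steady-state split of ED-admitted patients across the units. The back-of-the-envelope check below is plain arithmetic, not the DES model itself; in particular, treating the ~4% NU-to-ICU readmissions as a fraction of the NU population is an assumption made only for illustration:

```python
# Branching fractions for ED-admitted patients, taken from the text:
p_or, p_icu, p_nu = 0.62, 0.28, 0.10   # ED disposition: OR / ICU / floor NU
p_or_to_icu = 0.14                     # post-surgery patients admitted OR -> ICU
p_nu_readmit = 0.04                    # NU patients readmitted back to ICU (assumed fraction)

via_or = p_or * p_or_to_icu                       # indirect ICU load through the OR
nu_population = p_nu + p_or * (1 - p_or_to_icu)   # direct NU plus post-surgery NU
icu_fraction = p_icu + via_or + p_nu_readmit * nu_population
```

Under these assumptions roughly 39% of ED-admitted patients eventually occupy an ICU bed, which is one way to see why ED targets, OR scheduling and ICU capacity cannot be tuned independently.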


sive process improvement in one subsystem (the ED) resulted in a worsening situation in other interrelated subsystems (mostly the ICU).

If, instead of the above aggressive limiting ED LOS, a more relaxed target is used, say, an LOS of not more than 6 hours for patients discharged home and not more than 10 hours for patients admitted to the hospital, then the simulated ED diversion becomes about 7%, ICU diversion about 11.1%, and floor NU diversion remains practically unchanged at 14.8%. While ED diversion is now worse, it is still better than the baseline level by about a factor of 3. At the same time, this less aggressive ED target LOS did not, at least, make the ICU and floor NU diversion much worse.

Thus, from the entire hospital system standpoint, the primary focus of process improvement activities should be on the ED, and then on the floor NU followed by the ICU. At the same time, the ED patient target LOS reduction program should not be too aggressive, and it should be closely coordinated with those for the floor NU and ICU. Otherwise, even if the ED reports significant progress in its patient LOS reduction program, this progress will not translate into improvement of the overall hospital system patient flow. This illustrates one of the fundamental principles of the Theory of Constraints (Goldratt, 2004).

On the other hand, if the ICU policy for declaring diversion is changed from 49 occupied beds to the full capacity of 51 occupied beds, then the ICU reported diversion will drop down to ~9.6%, leaving the diversions of all other units in the system unchanged. This illustrates that a reported performance metric (percent of time at full capacity, or some other) depends not only on physical capacity and patient flow variability but also on hospital administrative policy.

Of course, many other scenarios could be analyzed using the DES model to find out how to improve the entire hospital system patient flow rather than each separate hospital department.

PRACTICAL IMPLEMENTATION OF DES RESULTS IN HEALTHCARE: ISSUES AND PERSPECTIVES

As has already been pointed out and supported by an extensive literature review (Jun et al, 1999; Jacobson et al, 2006), DES modeling as a methodology of choice in healthcare has been extensively used to quantitatively analyze healthcare delivery processes and operations and to help decision-makers justify business decisions. However, for DES to reach its full potential as the key methodology in healthcare, the results of simulation must be practically implemented. Unfortunately, the number of publications that report successful implementation of DES results is much lower than the number of publications that report successful development of DES models. For example, in a survey of two hundred papers reporting the results of DES studies in healthcare, only 16 reports of successful implementations were identified (Jacobson et al, 2006). Fone (2003) found that Operational Research has had limited success in the implementation of results in practice in the healthcare setting. Sachdeva et al (2006) also state that '…Healthcare Operational Research (OR) has had limited success in achieving an adequate level of acceptance by stakeholders, particularly physicians, leading to implementation of results'.

The likelihood of successful implementation, or effectiveness, of a DES project (E) can be related to two major factors:

(i) the likelihood that the DES model adequately captures all relevant features of the process (system) and is properly verified and validated against actual clinical data, i.e. the technical quality of the model (TQ), and
(ii) the likelihood of acceptance of the results of the DES study by the key stakeholders and decision-makers (A), i.e., E = TQ * A.


A similar relationship, in which the acceptance term (A) also included accountability, was presented by Mercy Medical Center (2007).

Most reported DES studies focus on TQ, and the likelihood of developing a good verified and validated DES model is high (although not guaranteed). However, if the likelihood of acceptance of the model results is low (for whatever reason), then the overall success and impact of the DES project, measured by the effectiveness of its implementation, will also be low regardless of the technical merits of the DES model itself.

A number of recommendations have been developed to increase the likelihood of successful DES results implementation. Some of these recommendations include (Jacobson, 2006): the system being simulated actually needs a decision, the DES project must be completed before a deadline, the data used for simulation are credible, and, most importantly, the key stakeholders and the decision-maker must actively participate in the project.

Lowery (1996) noted that involvement of upper management in the project is critical for its success. Litvak (2007) argues that if hospital executives are not involved in the process (queuing theory or simulation), the analysts could do their calculations but they would not be used for decision-making.

Carter and Blake (2004) published an interesting summary of their experience in the practical implementation of simulation projects. They used four projects to highlight the practical lessons of applying operations research methodologies in health care: '…Experience suggests that OR techniques can be successfully applied in the health care setting. The secret is to understand the unique nature of the health care business and its impact on models, decision makers, and the development of implementation policies'. Further, '…decision making in hospitals is characterized by multiple players; …incorporating the objectives of all decision makers is vital in this environment'.

Thus, the procedure and methodology of applying DES require decision-makers to work closely with the simulation analyst to provide details of the system, often for the first time. As a result, the decision-makers are likely to gain a new perspective on the relationship between the available resources and the capability of the system (Jacobson et al, 2006).

The experience of the author of this chapter has also shown that at least two main conditions are needed: (i) the stakeholder (project owner) must have a genuine incentive for process improvement and realize that there is no real alternative to DES modeling, and (ii) a dedicated and flexible data analyst must participate in the project. The data analyst must not only have full access to the raw data stored in various databases but also be qualified enough to convert these databases into a data file (usually Excel) with fields that match the fields developed by the simulation specialist who designed the model. Thus, a successful DES project truly takes teamwork by the professionals.

Rakich et al (1991) studied the effect of DES on management development. The authors concluded that conducting a simulation study not only develops decision-making skills, but also forces one to recognize the implications of system changes. It was noted that if decision-makers developed their own DES models, implementation occurred much more frequently. Mahachek (1992) noted that one of the significant barriers to the implementation of DES is the decision-makers' perception that '…simulation is an additional layer of effort rather than an organizer of all your current efforts'. According to Lowery (1996), DES projects start as a means of documenting assumptions, organizing the decision-making process and identifying potential problem areas. She is quite right when she passionately writes that '…It is amazing how much time is spent in the planning process (especially in meetings) arguing over the differences in opinion, where these differences are due to disagreements over assumptions never actually acknowledged…. While disagreements may still ensue over the content of the assumptions, the arguments be-


come focused when the assumptions… are in front of all participants'.

Healthcare has a culture of rigid division of labor. This functional division does not effectively support methodologies that cross the functional areas, especially if they assume significant changes in traditional relationships (Kopach-Konrad et al, 2007). Furthermore, the role and status of DES professionals in healthcare delivery is not usually well defined, which sometimes causes skepticism and fear.

Relatively few health care administrators are equipped to think analytically about how health care delivery should function as a system, or to appreciate the relevance of system-engineering methods in healthcare. Even fewer are equipped to work with engineers to apply these tools (Reid et al, 2005). Thus, it is often difficult for many administrators to appreciate the contributions of the DES approach to health care delivery process analysis. On the other hand, DES professionals often have little, if any, education in health care delivery. This sometimes results in a lack of clinically relevant factors related to patient care in the DES model, '…which are at the heart of physician decision-making' (Sachdeva et al, 2006).

This underscores the importance of considering social and communication issues in the acceptance and implementation of any socio-technical system (Kopach-Konrad et al, 2007).

One approach proposed to overcome some of these issues was applied to a pediatric ICU (Sachdeva et al, 2006). It includes a combination of 'hard' and 'soft' operations research. The initial simulation model of pediatric ICU patient flow was modified based upon input from the active participation of physicians working in this ICU. During the interviews with stakeholders, cognitive mapping was chosen as the soft OR approach. Cognitive maps attempt to capture the beliefs, values, and expertise of stakeholders through structured interviews. Cognitive mapping was used for two main purposes: (i) to assist in the identification of issues that were previously not captured using traditional DES models, and (ii) to compare results from the outcomes research, hard and soft OR, to enhance greater buy-in and acceptance by the key stakeholders.

The results of cognitive mapping helped not only to identify new issues not captured by hard OR but also supported many of the results from hard OR that were counter-intuitive to pre-existing beliefs.

Thus, the results from this study support the view that a combination of hard and soft OR allows a greater level of understanding, leading to acceptance and willingness to implement DES results. This is consistent with the recommendations of the UK Engineering and Physical Science Research Council (EPSRC) (2004) regarding soft OR and its application in healthcare: '…It has always been one of the main characteristics of OR to seek opportunities to integrate soft and hard methods' (EPSRC, 2004).

CONCLUSION

This chapter covers applications of the most widely used system-engineering methods, such as queuing analytic theory and DES models, to healthcare. It demonstrates the power of DES models for the analysis of patient flow with variability and subsystem interdependency.

Many health care organizations are making serious strategic decisions such as new construction to expand capacity, merging with other hospitals
was acknowledged that there are factors called etc., without using system engineering and, par-
‘soft’ that are difficult to model in an objective ticularly, DES modeling analysis to evaluate an
unambiguous way. Therefore, soft OR was used impact of these decisions.
to capture concerns that could not be captured However, DES models and management sci-
using traditional DES methodology. Cognitive ence principles are widely used in other indus-

478
Queuing Theory and Discrete Events Simulation for Health Care

tries, and demonstrate a great value in providing Blasak, R., Armel, W., Starks, D., & Hayduk,
important insights into operational strategies and M. (2003). The Use of Simulation to Evaluate
practices. Hospital Operations between the ED and Medical
Complex dynamics of delivery of care pro- Telemetry Unit. In S. Chick, et al (Ed.), Proceed-
cesses makes them an important area of applica- ings of the 2003 Winter Simulation Conference
tion of DES modeling and management science (pp. 1887-1893). Washington, DC: IEEE.
methodologies to help identify both trends in
Carter, M. (2002). Health Care Management,
capacity needs and the ways to use existing ca-
Diagnosis: Mismanagement of Resources. Opera-
pacity more efficiently.
tion Research / Management Science (OR/MS) .
At the same time, it is acknowledged that one
Today, 29(2), 26–32.
of the major current challenges is a relatively
low acceptance of DES results by the medical Carter, M., & Blake, J. (2004). Using simulation
community and hospital administration. Practical in an acute-care hospital: easier said than done. In
implementation of DES results in health care set- M. Brandeau, F. Sainfort, &W. Pierskala, (Eds.),
tings is sometimes rather slow and difficult. Operations Research and Health Care. A Hand-
There are various reasons for this situation, book of Methods and Applications, (pp.192-215).
both technical and psychological. Some of them Boston: Kluwer Academic Publisher.
have been discussed in section 4.
Clifford, J., Gaehde, S., Marinello, J., Andrews,
Nonetheless, more and more healthcare orga-
M., & Stephens, C. (2008). Improving Inpatient
nizations have started to recognize the value and
and Emergency Department Flow for Veterans.
predictive power of system engineering and DES
Improvement report. Institute for Healthcare
models through concrete and realistic examples.
Improvement. Retrieved from http://www.IHI.
The fast changing landscape of the healthcare
org/ihi
industry will help promote the organizational
changes needed for adoption improvement rec- D’Alesandro, J. (2008). Queuing Theory Mis-
ommendations based on DES models. Hopefully placed in Hospitals. Management News from the
this chapter contributes a little toward achieving Front, Process Improvement. PHLO. Posted Feb
this goal. 19, 2008 at http://phlo.typepad.com
de Bruin, A., van Rossum, A., Visser, M., & Koole,
G. (2007). Modeling the Emergency cardiac in-
REFERENCES
patient flow: an application of queuing theory.
Abu-Taieh, E., & El Sheikh, A. R. (2007). Com- Health Care Management Science, 10, 125–137.
mercial Simulation Packages: A Comparative doi:10.1007/s10729-007-9009-8
Study. International Journal of Simulation, 8(2), Dearie, J., & Warfield, T. (1976, July 12-14). The
66–76. development and use of a simulation model of an
Allen, A. (1978). Probability, statistics and queu- outpatient clinic. Proceedings of the 1976 Sum-
ing theory, with computer science applications. mer computer Simulation Conference, Simulation
New York: Academic Press Council, Washington, DC, (pp. 554-558).


Dexter, F., Macario, A., Traub, R., Hopwood, M., & Lubarsky, D. (1999). An Operating Room Scheduling Strategy to Maximize the Use of Operating Room Block Time: Computer Simulation of Patient Scheduling and Survey of Patients’ Preferences for Surgical Waiting Time. Anesthesia and Analgesia, 89, 7–20. doi:10.1097/00000539-199907000-00003

Engineering and Physical Sciences Research Council (EPSRC). (2004). Review of Research Status of Operational Research (OR) in the UK. Swindon, UK. Retrieved from www.epsrc.ac.uk

Fone, D., Hollinghurst, S., & Temple, M. (2003). Systematic review of the use and value of computer simulation modeling in population health and health care delivery. Journal of Public Health Medicine, 25(4), 325–335. doi:10.1093/pubmed/fdg075

Gallivan, S., Utley, M., Treasure, T., & Valencia, O. (2002). Booked inpatient admissions and hospital capacity: mathematical modeling study. British Medical Journal, 324, 280–282. doi:10.1136/bmj.324.7332.280

Garcia, M., Centeno, M., Rivera, C., & DeCario, N. (1995). Reducing Time in an Emergency Room via a Fast-track. In C. Alexopoulos, et al (Ed.), Proceedings of the 1995 Winter Simulation Conference (pp. 1048-1053). Washington, DC: IEEE.

Goldratt, E., & Cox, J. (2004). The Goal (3rd Ed., p. 384). Great Barrington, MA: North River Press.

Green, L. (2004). Capacity Planning and Management in Hospitals. In M. Brandeau, F. Sainfort, & W. Pierskala (Eds.), Operations Research and Health Care. A Handbook of Methods and Applications (pp. 15-41). Boston: Kluwer Academic Publisher.

Green, L. (2006). Queuing Analysis in Healthcare. In R. Hall (Ed.), Patient Flow: Reducing Delay in Healthcare Delivery (pp. 281-307). New York: Springer.

Green, L., Kolesar, P., & Soares, J. (2001). Improving the SIPP approach for staffing service systems that have cyclic demands. Operations Research, 49, 549–564. doi:10.1287/opre.49.4.549.11228

Green, L., Kolesar, P., & Svoronos, A. (1991). Some effects of non-stationarity on multi-server Markovian queuing systems. Operations Research, 39, 502–511. doi:10.1287/opre.39.3.502

Green, L., Soares, J., Giglio, J., & Green, R. (2006). Using Queuing Theory to Increase the Effectiveness of Emergency Department Provider Staffing. Academic Emergency Medicine, 13, 61–68. doi:10.1111/j.1553-2712.2006.tb00985.x

Gunal, M., & Pidd, M. (2006). Understanding Accident and Emergency Department Performance using Simulation. In L. Perrone, et al (Ed.), Proceedings of the 2006 Winter Simulation Conference (pp. 446-452). Washington, DC: IEEE.

Hall, R. (1990). Queuing Methods for Service and Manufacturing. Upper Saddle River, NJ: Prentice Hall.

Haraden, C., Nolan, T., & Litvak, E. (2003). Optimizing Patient Flow: Moving Patients Smoothly Through Acute Care Settings [White paper 2]. Institute for Healthcare Improvement Innovation Series, Cambridge, MA.

Harrison, G., Shafer, A., & Mackay, M. (2005). Modeling Variability in Hospital Bed Occupancy. Health Care Management Science, 8, 325–334. doi:10.1007/s10729-005-4142-8

Hillier, F., & Yu, O. (1981). Queuing Tables and Graphs (pp. 1-231). New York: Elsevier.

Hlupic, V. (2000). Simulation Software: A Survey of Academic and Industrial Users. International Journal of Simulation, 1(1), 1–11.


IHI - Institute for Healthcare Improvement. (2008). Boston, MA: Author. Retrieved from http://www.ihi.org/IHI/Programs/ConferencesAndSeminars/ApplyingQueuingTheorytoHealthCareJune2008.htm

Ingolfsson, A., & Gallop, F. (2003). Queuing ToolPak 4.0. Retrieved from http://www.bus.ualberta.ca/aingolfsson/QTP/

Jacobson, H., Hall, S., & Swisher, J. (2006). Discrete-Event Simulation of Health Care Systems. In R. Hall (Ed.), Patient Flow: Reducing Delay in Healthcare Delivery (pp. 210-252). New York: Springer.

Johnson, N., Kotz, S., & Balakrishnan, N. (1994). Continuous Univariate Distributions (Vol. 1). New York: John Wiley & Sons.

Jun, J., Jacobson, H., & Swisher, J. (1999). Application of DES in health-care clinics: A survey. The Journal of the Operational Research Society, 50(2), 109–123.

Kolker, A. (2008). Process Modeling of Emergency Department Patient Flow: Effect of Patient Length of Stay on ED Diversion. Journal of Medical Systems, 32(5), 389-401. doi:10.1007/s10916-008-9144-x

Kolker, A. (2009). Process Modeling of ICU Patient Flow: Effect of Daily Load Leveling of Elective Surgeries on ICU Diversion. Journal of Medical Systems, 33(1), 27-40. doi:10.1007/s10916-008-9161-9

Kopach-Konrad, R., Lawley, M., Criswell, M., Hasan, I., Chakraborty, S., Pekny, J., & Doebbeling, B. (2007). Applying Systems Engineering Principles in Improving Health Care Delivery. Journal of General Internal Medicine, 22(3), 431–437. doi:10.1007/s11606-007-0292-3

Lawrence, J., Pasternak, B., & Pasternak, B. A. (2002). Applied Management Science: Modeling, Spreadsheet Analysis, and Communication for Decision Making. Hoboken, NJ: John Wiley & Sons.

Lefcowitz, M. (2007, February 26). Why does process improvement fail? Builder-AU by Developers for developers. Retrieved from www.builderau.com.au/strategy/projectmanagement/

Litvak, E. (2007). A new Rx for crowded hospitals: Math. Operation management expert brings queuing theory to health care. American College of Physicians-Internal Medicine-Doctors for Adults, December 2007, ACP Hospitalist.

Litvak, E., & Long, M. (2000). Cost and Quality under managed care: Irreconcilable Differences? The American Journal of Managed Care, 6(3), 305–312.

Litvak, E., Long, M., Cooper, A., & McManus, M. (2001). Emergency Department Diversion: Causes and Solutions. Academic Emergency Medicine, 8(11), 1108–1110.

Lowery, J. (1996). Introduction to Simulation in Healthcare. In J. Charness & D. Morrice (Eds.), Proceedings of the 1996 Winter Simulation Conference (pp. 78-84).

Mahachek, A. (1992). An Introduction to patient flow simulation for health care managers. Journal of the Society for Health Systems, 3(3), 73–81.

Marshall, A., Vasilakis, C., & El-Darzi, E. (2005). Length of stay-based Patient Flow Models: Recent Developments and Future Directions. Health Care Management Science, 8, 213–220. doi:10.1007/s10729-005-2012-z

Mayhew, L., & Smith, D. (2008). Using queuing theory to analyse the Government’s 4-h completion time target in Accident and Emergency departments. Health Care Management Science, 11, 11–21. doi:10.1007/s10729-007-9033-8

McManus, M., Long, M., Cooper, A., & Litvak, E. (2004). Queuing Theory Accurately Models the Need for Critical Care Resources. Anesthesiology, 100(5), 1271–1276. doi:10.1097/00000542-200405000-00032

McManus, M., Long, M., Cooper, A., Mandell, J., Berwick, D., Pagano, M., & Litvak, E. (2003). Variability in Surgical Caseload and Access to Intensive Care Services. Anesthesiology, 98(6), 1491–1496. doi:10.1097/00000542-200306000-00029

Mercy Medical Center. (2007). Creating a Culture of Improvement. Presented at the Iowa Healthcare Collaborative Lean Conference, Des Moines, IA, August 22. Retrieved from www.ihconline.org/toolkits/LeanInHealthcare/IHALeanConfCultureImprovement.pdf

Miller, M., Ferrin, D., & Szymanski, J. (2003). Simulating Six Sigma Improvement Ideas for a Hospital Emergency Department. In S. Chick, et al (Ed.), Proceedings of the 2003 Winter Simulation Conference (pp. 1926-1929). Washington, DC: IEEE.

Nikoukaran, J. (1999). Software selection for simulation in manufacturing: A review. Simulation Practice and Theory, 7(1), 1–14. doi:10.1016/S0928-4869(98)00022-6

Rakich, J., Kuzdrall, P., Klafehn, K., & Krigline, A. (1991). Simulation in the hospital setting: Implications for managerial decision making and management development. Journal of Management Development, 10(4), 31–37. doi:10.1108/02621719110005069

Reid, P., Compton, W., Grossman, J., & Fanjiang, G. (2005). Building a better delivery system: A new engineering/healthcare partnership. Washington, DC: Committee on Engineering and the Health Care System, Institute of Medicine and National Academy of Engineering, National Academy Press.

Ryan, J. (2005). Building a better delivery system: A new engineering/healthcare partnership. System Engineering: Opportunities for Health Care (pp. 141-142). Washington, DC: Committee on Engineering and the Health Care System, Institute of Medicine and National Academy of Engineering, National Academy Press.

Sachdeva, R., Williams, T., & Quigley, J. (2006). Mixing methodologies to enhance the implementation of healthcare operational research. Journal of the Operational Research Society, advance online publication, September 6, 1-9.

Seelen, L., Tijms, H., & Van Hoorn, M. (1985). Tables for multi-server queues (pp. 1-449). New York: Elsevier.

Simon, S., & Armel, W. (2003). The Use of Simulation to Reduce the Length of Stay in an Emergency Department. In S. Chick, et al (Ed.), Proceedings of the 2003 Winter Simulation Conference (pp. 1907-1911). Washington, DC: IEEE.

Swain, J. (2007). Biennial Survey of discrete-event simulation software tools. OR/MS Today, 34(5), October.

Weber, D. O. (2006). Queue Fever: Part 1 and Part 2. Hospitals & Health Networks, Health Forum. Retrieved from http://www.IHI.org

Wullink, G., Van Houdenhoven, M., Hans, E., van Oostrum, J., van der Lans, M., & Kazemier, G. (2007). Closing Emergency Operating Rooms Improves Efficiency. Journal of Medical Systems, 31(6), 543–546. doi:10.1007/s10916-007-9096-6

KEY TERMS AND DEFINITIONS

Operations Research: The discipline of applying mathematical models of complex systems with random variability, aimed at developing justified operational business decisions.

Management Science: A quantitative methodology for assigning (managing) available material assets and human resources to achieve the operational goals of the system, based on operations research.

Non-Linear System: A system that exhibits a mutual interdependency of components, and for which a small change in the input parameter(s) can result in a large change of the system output.

Discrete Event Simulation: One of the most powerful methodologies of using computer models of real systems to analyze their performance by tracking system changes (events) at discrete moments of time.

Queuing Theory: Mathematical methods for analyzing the properties of waiting lines (queues) in simple systems without interdependency. Typically uses analytic formulas that must meet some rather stringent assumptions to be valid.

Simulation Package (also known as a simulation environment): Software with a user interface used for building and processing discrete event simulation models.

Flow Bottleneck / Constraint: A resource (material or human) whose capacity is less than or equal to the demand for its use.
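The contrast drawn in these definitions, closed-form queuing formulas for simple systems versus event-by-event simulation for everything else, can be made concrete with a short sketch. The following Python snippet is not from the chapter; it is a minimal illustration (function names and parameter values are my own assumptions) that computes the mean number in system for an M/M/1 queue from the textbook formula L = ρ/(1 − ρ), and then estimates the same quantity with a bare-bones discrete event simulation.

```python
import random

def mm1_analytic_L(lam, mu):
    """Mean number in an M/M/1 system from the closed form L = rho / (1 - rho)."""
    rho = lam / mu
    assert rho < 1, "analytic formula requires utilization below 1"
    return rho / (1 - rho)

def mm1_simulated_L(lam, mu, horizon=50_000, seed=42):
    """Estimate the same L with a bare-bones discrete event simulation:
    jump from event to event (arrival or departure) and time-average the count."""
    rng = random.Random(seed)
    t, n, area = 0.0, 0, 0.0                 # clock, number in system, integral of n dt
    next_arrival = rng.expovariate(lam)
    next_departure = float("inf")            # server idle: no departure scheduled
    while t < horizon:
        t_next = min(next_arrival, next_departure)
        area += n * (t_next - t)             # accumulate n over the elapsed interval
        t = t_next
        if next_arrival <= next_departure:   # arrival event
            n += 1
            if n == 1:                       # server was idle, start service now
                next_departure = t + rng.expovariate(mu)
            next_arrival = t + rng.expovariate(lam)
        else:                                # departure event
            n -= 1
            next_departure = t + rng.expovariate(mu) if n > 0 else float("inf")
    return area / t

lam, mu = 4.0, 5.0                           # e.g. 4 arrivals/hour, 5 services/hour
print(mm1_analytic_L(lam, mu))               # ~4.0, since rho = 0.8
print(mm1_simulated_L(lam, mu))              # close to 4.0, up to sampling noise
```

For Poisson arrivals and exponential service the two results agree; the point of DES is that the event loop keeps working when the stringent M/M/1 assumptions (stationarity, exponential times, no interdependency) are dropped.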


Chapter 21
Modelling a Small Firm in
Jordan Using System Dynamics
Raed M. Al-Qirem
Al-Zaytoonah University of Jordan, Jordan

Saad G. Yaseen
Al-Zaytoonah University of Jordan, Jordan

ABSTRACT
Jordanian banks, and risk analysts in particular, are seeking to adopt and buy new analytical techniques and information systems that help in identifying, monitoring and analysing credit risk, especially for the small firms that represent the biggest base of firms in the Jordanian market. This chapter argues that what analysts need is a thinking tool that allows the user to simulate, understand and control different policies or strategies; it will then enable better decisions to be made. A simulator based on system dynamics methodology is the thinking tool produced by this chapter. The system dynamics methodology allows the bank to test “what if” scenarios based on a model which captures the behaviour of the real system over time. The objective of this chapter is to introduce new performance measures, using the systems thinking paradigm, that can be used by Jordanian banks to assess the creditworthiness of firms applying for credit.

DOI: 10.4018/978-1-60566-774-4.ch021

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

LITERATURE REVIEW

System Dynamics was developed in the second half of the 1950s by Jay W. Forrester at the Alfred P. Sloan School of Management at the Massachusetts Institute of Technology. Forrester’s main study concerned the activities in Operations Research (or Management Science) that aimed to support managerial decision making through mathematical and scientific methods. According to his studies, he found that operations research was not effective in helping to solve many strategic problems inside organisations. It was too mathematically oriented and focused too much on optimisation and analytical solutions. It neglected non-linear phenomena and relationships between corporate functions.

Forrester (1961) proposed to move towards closed-loop thinking in order to enhance the decision making process, where decisions are seen as a means to affect the environment, and changes in

the environment also provide input to decisions which aim to influence the connection with this environment. This led Forrester to start studying decision making in social systems from the viewpoint of information feedback control systems, and so he made system dynamics more useful and relevant to the study of managerial problems. Forrester developed a method to study and simulate social systems as information feedback systems.

The method was first applied to corporate problems and was called Industrial Dynamics. Forrester (1961) defines Industrial Dynamics as “the study of the information feedback characteristics of industrial activity to show how organizational structure, amplification (in policies), and time delays (in decision and actions) interact to influence the success of the enterprise. It treats the interactions between the flows of information, money, orders, materials, personnel, and capital equipment in a company, an industry, or a national economy”. Lane (1997) summarises Forrester’s method of modelling and understanding management problems as follows: “social systems should be modelled as flow rates and accumulations linked by information feedback loops involving delays and non-linear relationships. Computer simulation is then the means of inferring the time evolutionary dynamics endogenously created by such system structures. The purpose is to learn about their modes of behaviour and to design policies which improve performance”.

Because social systems contain many non-linear relationships, Forrester chose an experimental, or simulation, approach to be utilised in System Dynamics (Vennix 1996). Following Forrester’s studies and publications, the method came to be applied to a large variety of problems and its name changed into the more general System Dynamics.

System dynamics is currently applied by both academic researchers and practitioners from all over the world. Applications of system dynamics have reached most fields, such as health care, commodity production cycles, economic fluctuations, energy and project management, and many more. Finally, there is an international system dynamics society at MIT, holding a yearly international system dynamics conference. In addition, there is the society’s journal (System Dynamics Review) and a huge number of chapters and literature on the system dynamics subject published in conferences and journals around the world.

THE APPROACH OF SYSTEM DYNAMICS

System Dynamics is a systems thinking approach that uses a perspective based on information feedback and delays to understand the dynamic behaviour of complex physical, biological and social systems. It also helps the decision maker untangle the complexity of the connections between various policy variables by providing a new language and set of tools to describe them; it does this by modelling the cause and effect relationships among these variables. Furthermore, the System Dynamics method enables decision makers or modellers, via its tools, to identify the underlying structure of their system or issue and how this structure determines the system’s behaviour (see Figure 1). The left arrow symbolizes the relationship, while the right arrow indicates the deeper understanding that comes from analysing a system’s structure. System Dynamics can also be used to study the changes in one part of a system in order to observe their effect on the behaviour of the system as a whole (Martin 1997; Anderson and Johnson 1997; Brehmer 1992). Sterman (2000) gives the insight that the real value of an SD model should be to eliminate problems by changing the underlying structure of the system rather than anticipating and reacting to the environment. This allows the model to interact with the environment and gives feedback for structure changes. This is what the term (Dynamics) refers to: the


Figure 1. The link between structure and behavior (Adapted from Sterman, 2000)

changes in the system’s variables which, while interacting, stimulate changes over time.

By applying SD, one can enhance the usefulness of the model to address and analyse problems and provide more significant, rational and pertinent policy recommendations.

Lyneis (2000) stresses the importance of System Dynamics models and their power to forecast, for instance, market demand, and compares this with a statistical forecast. He mentioned that an SD model provides more reliable forecasts than the statistical (non-structural) models and tries to understand the underlying structure that created the data stream.

In summary, the process is to “observe and identify problematic behaviour of a system over time and to create a valid diagrammatic representation of the system, capable of reproducing (by computer simulation) the existing system behaviour and facilitating the design of improved system behaviour. For example; changing behaviour from decline to growth or from oscillation to stability” (Wolstenholme, 1990).

Feedback in System Dynamics

Feedback is one of the core concepts of System Dynamics. Yet our mental models often fail to include the critical feedbacks determining the dynamics of our systems (Sterman, 2000).

Much of the art of system dynamics modelling is to discover and represent the range of different feedback processes in any complex system, which enables the modeller to understand the dynamics of these systems, because all complex behaviour arises from the interactions (feedbacks) among the variables of the system. It is not only dynamics that arise from feedback: all learning, too, depends on feedback. As Sterman (2000) states, “we make decisions that alter the real world; we gather information about the real world and use the new information to revise our understanding of the world and the decisions we make to bring our perception of the state of the system closer to our goals”.

All the dynamics in any system arise from the interaction of just two types of feedback loops: positive loops (or self-reinforcing loops) and negative loops (self-correcting loops).

A positive loop creates or reinforces actions which increase the state of the system (whatever is happening in the system), which in turn leads to more action, further increasing the system state; this is why it is called self-reinforcing. For example: higher wages lead to higher prices, which in turn increase the wages, etc.

On the other hand, a negative loop is goal-seeking. It counteracts and opposes change to stabilize behaviour. An example of negative feedback is: the more attractive a city, the greater the immigration from other areas, which will increase unemployment, housing prices and crowding in the schools until it is no longer attractive to move to.

It is important to notice that a positive feedback loop is not associated with good, and a negative feedback loop is not associated with bad. Either type of loop can be good or bad, depending on which way it is operating and on the content.
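The two loop behaviours just described can be illustrated numerically. The sketch below is my own illustration, not code from the chapter; the initial values, growth fraction and adjustment fraction are arbitrary assumptions chosen only to exhibit the characteristic behaviours: compounding growth for the reinforcing loop and goal-seeking decay for the balancing loop.

```python
def reinforcing(initial=100.0, growth=0.05, steps=10):
    """Positive loop: the state feeds its own growth, so it compounds."""
    x, path = initial, [initial]
    for _ in range(steps):
        x += growth * x                # higher wages -> higher prices -> higher wages
        path.append(x)
    return path

def balancing(initial=100.0, goal=60.0, adjust=0.3, steps=10):
    """Negative loop: the gap to a goal drives a correction toward the goal."""
    x, path = initial, [initial]
    for _ in range(steps):
        x += adjust * (goal - x)       # the larger the gap, the stronger the pull
        path.append(x)
    return path

print(round(reinforcing()[-1], 1))     # 162.9: growing away from the initial 100
print(round(balancing()[-1], 1))       # 61.1: settling toward the goal of 60
```

Neither behaviour is "good" or "bad" in itself, as the text notes; the code merely shows that the same one-line feedback rule produces divergence in one case and stabilization in the other.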

The analysis of these feedback loops can facilitate our understanding of the process, its organisational boundaries, delays, information and strategies, which interact to create its behaviour.

Dealing with Complexity

As the world becomes more complex, many people and organisations find themselves bombarded with lots of problems to solve, less time to solve them, and very few chances to learn from their mistakes. Managers will need to deal with this complexity and with these changes. They also need to develop their systems thinking capabilities and to be able to create an effective learning process in complex dynamic systems, to overcome the different barriers to learning which such systems create. Many philosophers, scientists and managers have been calling for fundamentally new ways of thinking that improve the ability to see the world as a complex system: as Sterman stated, “a system in which we understand that you can’t just do one thing and that everything else is connected to everything else”. He argued that it is crucial to develop new ways of systems thinking, stating that “if people had a holistic worldview, they would then act in consonance with the long term best interest of the system as a whole, identify the high leverage points in systems and avoid policy resistance” (Sterman, 2000). This can actually be done using System Dynamics tools such as virtual worlds (formal models, microworlds, management flight simulators, computer simulation), in which decision makers and managers can refresh decision-making skills, test their scenarios and strategies, and conduct experiments (Bianchi and Bivona 1999; Bach 2003).

SYSTEM DYNAMICS TOOLS

The System Dynamics method uses different types of diagrammatic representations. These representations and tools can be implemented in response to the identification of a problem. They are, respectively, causal loop diagrams, stock and flow diagrams, and computer simulation, all of which are considered important steps in building a system dynamics model. We will briefly explain them as follows.

Causal Loop Diagrams (CLD)

“Causal loop diagrams provide a framework for seeing interrelationships rather than things, for seeing patterns of change rather than static snapshots” (Senge, 1990). Causal loop diagrams constitute a powerful way to express causal statements concisely and to identify the feedback structure of complex systems. In a causal loop diagram, a causal relationship between two variables is represented by means of an arrow. The variable at the tail is supposed to have a causal effect on the variable at the point. A distinction can also be made between two types of causal relationships: positive and negative. A positive causal relationship implies that both variables change in the same direction, while a negative causal relationship implies that the variables change in opposite directions.

Causal loop diagrams are excellent tools for capturing our hypotheses about the causes of dynamics, for eliciting and capturing the mental models of individuals and teams, and for communicating the important feedbacks we believe are responsible for the problem under analysis. They can be very useful in the early stage of a project, when the modeller needs to work with the client team to elicit and capture their mental models. They are also effective in presenting the results of modelling work in a non-technical fashion, as we will see when studying the causal loop diagrams for my project.

Figure 2 shows a simple example of a causal loop diagram; it shows the effect of quality improvements on a firm’s reputation, revenue, profit and market share, which in turn increases the firm’s capacity for training and further improvements in quality.
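The polarity of a loop like the one in Figure 2 can also be checked mechanically: multiply the signs of the links around the loop; a positive product means a reinforcing loop, a negative product a balancing one. The sketch below is my own illustration (the variable names paraphrase Figure 2 and the link signs are assumptions), not code from the chapter.

```python
# Signed links: +1 means the variables move in the same direction, -1 opposite.
links = {
    ("quality", "reputation"): +1,
    ("reputation", "market share"): +1,
    ("market share", "revenue"): +1,
    ("revenue", "training capacity"): +1,
    ("training capacity", "quality"): +1,
}

def loop_polarity(loop, links):
    """Multiply the link signs around a closed loop of variable names."""
    sign = 1
    for cause, effect in zip(loop, loop[1:] + loop[:1]):  # wrap to close the loop
        sign *= links[(cause, effect)]
    return sign

loop = ["quality", "reputation", "market share", "revenue", "training capacity"]
print("reinforcing" if loop_polarity(loop, links) > 0 else "balancing")  # reinforcing
```

With every link positive the product is positive, so the quality-improvement loop is reinforcing; flipping any one link sign (for example, if higher market share degraded quality) would make it balancing.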


Figure 2. Causal loop diagram for quality improvement

Causal loop diagrams have some limitations if they are used alone. The most important limitation of causal loop diagrams is that they do not distinguish and capture the stocks and flows of systems, which it sometimes might be helpful to show. The system dynamics method therefore uses a second diagrammatic representation, the stock and flow diagram.

Stocks and Flows

Stocks and flows represent one of the essential features of a system dynamics model, analysing any system by describing it in terms of stocks (also called levels), flows (or rates), converters (auxiliaries) and the feedback loops formed by these variables. The way in which these variables represent a system is critical to the dynamics of the system. Many system dynamics experts say that after understanding the feedback structure of the system in a model, it is important to elaborate further by building a computer simulation designed to represent its dynamic behaviour (Jackson 2003).

A stock and flow diagram is usually constructed from a causal loop diagram, and stocks and flows are familiar to us. The balance in our bank account is a stock. The number of employed people in a firm is a stock. Stocks are altered by inflows and outflows: the number of employees in a firm increases via the hiring rate and decreases via layoffs, retirements and the rate of leavers; our bank account increases through our deposits and decreases through our spending. Although they are familiar to us, many people cannot distinguish clearly between stocks and flows, which leads to underestimation of time delays, policy resistance and other related issues.

Figure 3 is a simple example of a stock and flow diagram showing the typical symbols used. The stock in this example is the number of employees in a firm, shown as a rectangle. The hiring rate is the inflow, represented by a pipe (arrow) adding to the stock, with a kind of valve controlling the inflow. People leaving the firm form another flow, which subtracts from the stock of employees, with a valve controlling the outflow. Clouds represent the sources (the stocks from which flows originating outside the boundary of the model arise) and the sinks (the stocks into which flows leaving the model boundary drain). Sterman (2000) addressed the contribution of stocks to dynamics, explaining how stocks can be critical in generating the dynamics of systems. We will summarize his reasons as follows.

Stocks Characterize the State of the System and Provide the Basis for Actions

Stocks provide the decision makers in any system with information about the state of their systems, giving them the information needed to act. For example, the balance sheet summarizes the financial health of a firm by providing decision makers with reports about the value of cash, inventory, accounts receivable, debts, etc., which can be helpful for future decisions such as issuing new loans, paying dividends and controlling other variables.

Modelling a Small Firm in Jordan Using System Dynamics

Figure 3. Stock and flows diagram
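The accumulation behaviour of the Figure 3 structure can be sketched in a few lines of code: the stock simply integrates its inflow minus its outflow at each time step (Euler integration). The sketch below is illustrative only; the initial value and the rates are made-up assumptions, not figures from the chapter's model:

```python
# Illustrative sketch of the Figure 3 structure: an "employees" stock
# with a hiring inflow and a leaving outflow, advanced by Euler
# integration. All parameter values are made-up assumptions.

def simulate_employees(initial=100.0, hiring_rate=8.0,
                       leaving_fraction=0.05, dt=1.0, months=24):
    employees = initial
    history = [employees]
    for _ in range(months):
        inflow = hiring_rate                    # people hired per month
        outflow = leaving_fraction * employees  # people leaving per month
        employees += (inflow - outflow) * dt    # the stock accumulates net flow
        history.append(employees)
    return history

trajectory = simulate_employees()
print(round(trajectory[-1], 1))
```

With a constant hiring rate and a fractional leaving rate, the stock drifts toward the level at which inflow equals outflow, illustrating that it is the stock, not the flows alone, that carries the state of the system.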

Stocks Provide Systems with Inertia and Memory

Because stocks accumulate past events, those events persist unless changes happen to the stock through its inflows and outflows. Our beliefs and memories are stocks that represent our mental states. They persist over time, generating inertia and continuity in our attitudes and behaviours.

Stocks are the Source of Delays

There is a difference between the time we mail a letter and the time it is received. During this interval, the letter resides in a stock of letters. There is a lag between the letters sent (input) and the letters received (output). This is the delay. All delays involve stocks.

Stocks Create Disequilibrium Dynamics

Inflows and outflows are usually not equal because they are governed by different decisions, which leads to disequilibrium dynamics in the system. Sterman argued that understanding the nature and stability of these dynamics is one of the purposes of a system dynamics model. He states: "whenever two coupled activities are controlled by different decision makers, involve different resources, and are subject to different random shocks, a buffer or stock between them must exist, accumulating the difference. As these stocks vary, information about the size of the buffer will feed back in various ways to influence the inflows and outflows. Often, these feedbacks will operate to bring the stock into balance. Whether and how equilibrium is achieved cannot be assumed but is an emergent property of the whole system as its many different feedback loops interact simultaneously" (Sterman 2000).

SYSTEM DYNAMICS SOFTWARE

There is a lot of user-friendly software available now that allows the conversion of causal loop diagrams and stock and flow diagrams into sophisticated computer simulations of the problems or issues being investigated. Examples of such software are DYNAMO, STELLA, ITHINK, VENSIM and POWERSIM (the last of which is used in this research chapter).

Initial values are identified for the stocks, variable values are identified for the relationships, and the structural relationships between the variables are determined using constants, graphical relationships and mathematical functions where appropriate.

THE PROCESS OF BUILDING A SYSTEM DYNAMICS MODEL

The modelling purpose is to solve a problem and also to gain insight into the real problem in order to design effective policies in the real world, with all its


ambiguity, messiness, time pressure and interpersonal conflict. It is a feedback process, not a linear sequence of steps.

Models are very important tools for managers, enabling them to design their organisations, shape their structures, strategies and decision rules, and take different decisions in their organisations. Modelling is an effective tool in promoting systems thinking in an organisation.

In order to be in a position to employ that tool, managers and modellers must at least understand the basic principles of designing and building a system dynamics model. Some authors (Sterman 2000; Vennix 1996) distinguished several stages in building a system dynamics model. These stages are summarized as follows.

Problem Identification

This is the most important step in the building process. We need a clear purpose in order to build any model, and we should focus on the problem rather than on a system. The modeller must ask: What is the problem? Why is it a problem? What are the main concepts and variables we must consider? When did the problem start, and how far into the future should we consider? What is the historical and future behaviour of the variables under investigation?

Dynamic Hypothesis Formulation

The next step is to formulate a dynamic hypothesis that explains the current theories of the problem in terms of endogenous feedback interactions and structure. The modeller should also construct different maps based on the previous analysis, such as a causal loop diagram, a stock and flow diagram, a model boundary diagram and other tools.

Simulation Model

This step is to formulate the simulation model, test it for consistency with the purpose, estimate the parameters and equations, and study the system structure to improve our understanding of the system.

Testing the Model

Testing the model is about comparing the model behaviour after simulation with the actual behaviour of the system, to make sure that all variables correspond to a meaningful concept in the real world and to test whether the model is robust by testing it under extreme conditions (conditions that never exist in the real world), which enables the modeller to discover the flaws in the model and sets the stage for improved understanding.

Design and Evaluate Policies for Improvement

After developing confidence in the model in terms of its structure and behaviour, the modeller can design and evaluate different policies, create strategies and decision rules, study the effect of the policies and their interactions, and examine the strength of policies under different scenarios, which is known as sensitivity analysis.

Figure 4 recasts the previous modelling stages. These stages are iterative and represent a continual process.

CONSTRUCTING THE SIMULATOR USING A SYSTEM DYNAMICS MODEL

Introduction

The aim of this section is to develop, model and understand the critical influences of the variables which determine the basic characteristics of any small firm. The main motivation is to bring dynamic perspectives and better understanding and analysis of small companies and enable the risk analyst to gain a deeper insight into how this


Figure 4. Modelling process

company would react under certain conditions and different scenarios.

The first part in building the Simulator is constructing the system dynamics model. This model replicates the current method of calculating the financial ratios, based on system dynamics features such as delays, feedback and causal relations, which are included in the model to help the analyst understand the interactions between the company's variables. The model was constructed using Powersim Studio 2005.

Model Boundary

Setting the model boundary is a very important tool in system dynamics. It helps the modeller to represent the model's causal structure and summarizes the scope of the model. Sterman (2000) said, "A broad model boundary that captures important feedbacks is more important than a lot of detail in the specification of individual components". Figure 5 depicts the boundary of the system dynamics model, which captures the scope of this study.

Three types of factors are distinguished in this section. Endogenous factors (arising from within) are the key variables that are embodied within the feedback structure. Exogenous factors (arising from without) are the constant factors in the model that cannot change by themselves over the simulation. These factors are controlled and changed by the analyst himself according to a set of "what if" scenarios when he interacts with the Simulator to test the result of his actions and decisions on the firm's future performance.

Figure 5 also shows a list of factors that have not been included in the model. These are factors that are outside our control, and their effects are reflected in the values given to the exogenous factors. For example, the interest rate would reflect any inflation.

Model Description

In this section, a simplified generic system dynamics model of a small firm has been built. This model is of firms which import or buy finished goods and sell these goods without doing any production processes.

It replicates the current method used in the Jordanian banks to calculate the financial ratios, which are considered very important performance measures for assessing a firm's financial position; but this prototype model is better than the current methods in generating the financial ratios because it has an interactive user interface that allows


Figure 5. The system dynamics model boundary

the analyst not only to get the ratios and analyse them, but also the chance to analyse the main variables that affect and change these ratios over time. These variables are considered in the model as exogenous factors (arising from without), and they do not change by themselves over the simulation.

The next sections describe the financial model contents, such as the causal loop diagram, the stock and flow diagram, and the model's simulator and interface.

Causal Loop Diagram

Figure 6 shows the causal loop diagram of the firm, which consists of four reinforcing loops (R) and six balancing loops (B). These loops represent the major feedback structure of any small production firm. Each causal loop is explained in the following figures.

Figure 7 shows the first three reinforcing loops. They depict the effect of increasing sales orders in the firm. Simulating any increase in the firm's sales orders will increase the sales and eventually increase the cash in the firm. The firm's bank balance will increase, which provides more liquidity to the firm. It also improves its financial solvency, which makes the firm optimistic about its future sales orders, so it motivates its customers and brings in new customers. This can be done by giving them credit facilities in paying the price of their purchases, which creates the first reinforcing loop (R1). Improving the firm's solvency is also a good indication of the firm's eligibility for new loans or for an increase in its line of credit, which causes the cash to increase again (R2).

The third reinforcing loop represents another effect of increasing the sales, which is reducing the total cost per product, as the fixed cost stays the same while the sales increase. This of course will cause a decrease in the product price, which makes the product attractive in the market and attracts new buyers to place more orders.

The financial cost will increase because the firm will pay the bank more interest as it takes new loans (or withdraws from its line of credit), which decreases the cash and creates the first balancing loop (B1) in figure 8. Another balancing loop in the figure arises when the accounts receivable increase as the firm offers more credit facilities to its customers. This increase has two effects: it decreases the cash, as the collected cash decreases (B2), and it also increases the risk of bad debts, which decreases both the collected cash from customers and the cash balance in the firm (B3).

Figure 6. The causal loop diagram of a production firm

Figure 7. R1, R2 and R3 causal loops diagram

Figure 8. B1, B2 and B3 causal loops diagram

Receiving more sales orders will create a backlog, as the firm needs more capacity and more staff to fulfil these new orders, so the firm will start hiring new staff, which increases the sales and the financial solvency again and encourages the customers, who are given credit facilities as the capacity improves with the new staff, to place more and more orders. This is the fourth reinforcing loop (R4). But this reinforcing loop is not continuous, as the firm cannot keep hiring staff forever without any consequences. In fact, increasing the workforce will increase the wages paid by the firm, which decreases the cash and tightens the credit facilities given to the customers, which in turn decreases the sales orders and reduces the hiring of new staff, and so on. This is the balancing loop (B4) in figure 9.

Cash is considered the main variable in the firm, as the firm uses cash to pay its loan repayments, wages and the cost of its purchases, and to run the operations in the firm without any liquidity problems. Increasing the cash will push the firm to expand and buy more materials from suppliers and convert these materials into finished goods; these purchases reduce the cash, as the firm has to pay the cost of its purchases to the supplier. These payments depend on the credit given to the firm by its suppliers, that is, on what percentage of its purchases must be paid in cash. For instance, if the firm has to pay 100% of its purchases in cash, the cash will decrease, and the firm's purchases from the supplier will be less than in the case in which the firm has to pay nothing upfront to receive its material purchases. This is the balancing loop (B7).

The balancing loop (B5) represents the relation between sales and the inventory in stores: the more products in the firm's store, the more completed sales. Delivering the sales to the buyers will decrease the inventory in store, which creates an inventory gap between the quantity of products available in the firm's store and the desired inventory level (B6). Figure 10 shows these three balancing loops.
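The interplay between a reinforcing loop and the balancing loop that limits it, here R4 (staff raise capacity and sales) against B4 (a larger workforce raises the wage bill and drains cash), can be sketched as two coupled difference equations. Every coefficient below is an illustrative assumption, not a value from the chapter's Powersim model:

```python
# Toy sketch of the R4/B4 interplay: staff generate sales margin (R4),
# but also wages that drain cash and throttle hiring (B4).
# All coefficients are illustrative assumptions.

def simulate_loops(months=36, cash=50.0, staff=10.0):
    margin_per_staff = 3.0   # monthly sales margin each employee generates
    wage = 2.0               # monthly wage per employee
    hire_cost = 5.0          # one-off cost of recruiting one employee
    cash_hist, staff_hist = [cash], [staff]
    for _ in range(months):
        cash += margin_per_staff * staff - wage * staff  # R4 gain vs B4 drain
        hires = max(0.0, 0.02 * cash)   # hiring limited by available cash
        cash -= hire_cost * hires
        staff += hires - 0.05 * staff   # 5% of staff leave each month
        cash_hist.append(cash)
        staff_hist.append(staff)
    return cash_hist, staff_hist

cash_hist, staff_hist = simulate_loops()
print(round(staff_hist[-1], 1))
```

Raising `wage` toward `margin_per_staff` strengthens B4 and flattens growth, while lowering it lets R4 dominate, which is exactly the kind of experiment the causal loop diagrams motivate.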


Figure 9. R4 and B4 causal loops diagram

Figure 10. B5, B6 and B7 causal loops diagram


Stocks and Flows Diagram

The production firm model, as mentioned before, is an enlargement and enhancement of the prototype model and consists of several sectors covering the main variables in any small manufacturing firm that produces finished goods for the market.

Figure 11 represents the sales orders structure, which is basically the market demand for the firm's product. The sales orders level was set to be a constant in the model to enable the credit analyst to simulate the tested or assumed level and to observe the impact of any simulated sales orders on the different variables and financial measures; this is the main objective of the varying types of parameters in the model.

The structure also shows the sales orders backlog in the firm, which is increased by receiving new orders and decreased by selling and shipping products to the customers.

The expected orders stock represents the future expectations about the sales orders level, which eventually determines the desired finished goods inventory level and the desired production rate, to make sure that the production and inventory levels are constrained by the expected future demand.

The future demand expectations establish the need to produce more finished goods from the materials inventory to fulfil these orders and ship the sales to the customers. As shown in Figure 12, starting the production is limited by the materials inventory in stock, the staff capacity and the desired production rate, while selling new products is limited by how many finished products there are in stores and by the firm's capacity to fulfil the existing demand.

Figure 13 represents the workforce structure, which increases by hiring new staff and decreases by staff leaving the firm. The desired level of workforce is determined by the desired level of production and the productivity of the staff in producing finished goods and selling them.

As the firm starts its production processes, the supply of materials starts as well, according to the production level and the levels of the desired and actual material inventory, as shown in Figure 14,

Figure 11. Sales orders structure


Figure 12. Production structure

Figure 13. Workforce structure

to run the production without any disruption. Therefore, the analyst should consider all the variables that might affect supplying the material at the right time, for example, the materials order completion rate and the delay in receiving the purchased materials.

The firm pays for its material purchases according to the credit given by the supplier, which determines the payments schedule. As shown in Figure 15, new material purchases from the supplier increase the accounts payable, while the payments to the supplier decrease it.
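The accounts payable stock described above (purchases on credit flow in; payments, made once the supplier's credit period elapses, flow out) can be sketched with a simple queue of invoices. The credit period and the purchase amounts are illustrative assumptions:

```python
# Accounts payable as a stock: new purchases on credit are the inflow,
# payments to the supplier (due after the credit period) are the outflow.
# Amounts and the two-month credit period are illustrative assumptions.
from collections import deque

def simulate_payables(monthly_purchases, credit_months=2):
    payable = 0.0
    pending = deque([0.0] * credit_months)  # invoices not yet due
    balances = []
    for bought in monthly_purchases:
        payable += bought             # inflow: purchase on credit
        pending.append(bought)
        payable -= pending.popleft()  # outflow: invoice now due is paid
        balances.append(payable)
    return balances

print(simulate_payables([100, 100, 100, 100, 100]))
# → [100.0, 200.0, 200.0, 200.0, 200.0]
```

The balance settles at the credit period times the monthly purchases, which is why longer supplier credit frees cash, the effect captured by loop B7.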


Figure 14. Raw material inventory structure

Figure 15. Account payable structure

Figure 16 shows the accounts receivable structure, which represents the total monies owed to the firm by its customers on credit sales made to them. It depends mainly on the credit policy given to the customers to encourage them to increase their purchases from the firm. The effect of the credit policy on the firm is explained in the next sections in many respects.

Collecting the sales revenue from the customers, new loans and the owner's investment are the main income to the firm, which increases the cash, as shown in figure 17. Cash is the most liquid asset in the firm, always ready to be used to pay bills, pay the supplier, repay bank loans and cover many other expected and unexpected outlays.

If the cash is not enough to meet the firm's obligations, the firm will withdraw cash from its line of credit, increasing the bank credit stock.

Figure 18 represents the bank credit structure, which decreases with the firm's loan repayments and increases by using more cash from the bank credit, which is constrained by its line of credit, which Gitman (2000) defined as "An agreement between a commercial bank and a business specifying the amount of unsecured short-term borrowing the


Figure 16. Account receivables structure

Figure 17. Cash structure

bank will make available to the firm over a given period of time".

A financial summary of the firm's operating results during each period is shown in figure 19, which represents the income structure for the firm; this could be provided to the analyst as an income statement report, as shown in the final Simulator.

Another significant part of measuring the firm's performance is analysing the firm's cash flow. Gitman (2000) divided the firm's cash flow into three main flows, which are shown in figure 20. These flows are: the operating flows, which are the cash inflows and outflows related to production and sales; secondly, the investment flows, which are the cash flows related to the purchase or sale of


Figure 18. Bank credit structure

Figure 19. Income structure

the firm's assets; and finally the financing flows, which are the cash flows related to debt and equity and the transactions related to them.

The statement of cash flows would be available as a report to the credit analyst to enable him to assess whether any developments have occurred in the firm that are contrary to the firm's policy and to evaluate progress in the firm towards projected goals. To be able to provide financial reports to the credit analyst in the Simulator, a few


Figure 20. Cash flow structure

Figure 21. Assets & liabilities structure

structures are produced, such as figure 21, which represents the main variables in the balance sheet, namely the assets and liabilities structures.

Figure 22 illustrates the inventory value structure. The inventory in the firm includes the raw materials, the in-process inventory and the finished goods inventory held by the firm. The structure for each type of inventory was shown in previous figures.

Part of the performance measurement which the credit analyst uses to analyse the firm's performance is calculating and interpreting the common financial ratios. The inputs to ratio analysis are mainly the figures in both the balance sheet and the income statement.

In the next figures, four categories of financial ratios are shown: profitability ratios, activity ratios, liquidity ratios and debt ratios.
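The four ratio categories can be computed directly from statement figures. The formulas below are the standard textbook definitions (following Gitman); the field names and the sample firm's values are illustrative assumptions, not the chapter's data:

```python
# Standard financial ratios from statement figures (textbook definitions
# following Gitman). Field names and sample values are illustrative.

def financial_ratios(fs):
    return {
        # profitability
        "net_profit_margin": fs["net_profit"] / fs["sales"],
        # activity
        "inventory_turnover": fs["cost_of_goods_sold"] / fs["inventory"],
        "avg_collection_period_days": fs["accounts_receivable"] / (fs["sales"] / 365),
        # liquidity
        "current_ratio": fs["current_assets"] / fs["current_liabilities"],
        "net_working_capital": fs["current_assets"] - fs["current_liabilities"],
        # debt
        "debt_ratio": fs["total_liabilities"] / fs["total_assets"],
    }

sample = {"sales": 1200.0, "net_profit": 90.0, "cost_of_goods_sold": 800.0,
          "inventory": 200.0, "accounts_receivable": 100.0,
          "current_assets": 500.0, "current_liabilities": 250.0,
          "total_liabilities": 400.0, "total_assets": 1000.0}
ratios = financial_ratios(sample)
print(ratios["current_ratio"], ratios["debt_ratio"])  # → 2.0 0.4
```

Recomputing this dictionary at every simulation time step is what lets a simulator report how each ratio evolves, rather than the single snapshot a paper balance sheet provides.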


Figure 22. Inventory value structure

Figure 23. Profitability ratios

Figure 23 shows the basic profitability ratios, which measure the return and evaluate the firm's earnings with respect to the level of sales, the level of assets, or the owners' investment. The banks are always concerned about the firm's future, so they pay close attention to any changes in the profit level during the year.

The next category, represented in figure 24, is the activity ratios, which "measure the speed with which the various accounts are converted into sales or cash" (Gitman 2000). The main activity ratios in this part are the inventory turnover, the average collection period and the accounts payable turnover.

In figure 25, the basic liquidity ratios are shown, which help the credit analyst to measure the firm's ability to meet its short-term commitments as they come due; this refers mainly to the solvency of


Figure 24. Activity ratios

Figure 25. Liquidity ratios

the firm's overall financial situation. The liquidity ratios used in the model are the current ratio, the quick ratio and the net working capital.

The last category of the financial ratios is the debt ratios, which are shown in figure 26. Two debt ratios were used in the model to assess the debt position of the firm under investigation. These ratios are significant for the bank analyst, because the bank is usually concerned about the firm's indebtedness and its ability to satisfy the claims of all its creditors.

Data

The data in this model come from the following main sources:

The initial values for all the stocks in the model are the same as the values in the firm's last balance sheet, which consists of the assets (cash, inventory, accounts receivable, prepaid expenses and other current assets, fixed assets, intangible assets and depreciation) and the liabilities (bank credit, accounts payable, accrued expenses, long-term debts, retained earnings and so on).

Some of the data are given by the firm itself, such as the selling price, material purchase cost, usual sales orders, selling costs, tax rate and interest rate, or come from other financial statements such as the income statement or profit and loss statement.

The Simulator

Currently, the analyst does not interact with or have access to change the model itself; rather, he checks the financial ratios and tests the effect of his decisions on the financial ratios, whether


Figure 26. Debt ratios

Figure 27. The user interface of the simulator

they are improving or not. In the first model, there are some variables that can be changed by interacting with the model's interface and observing the changes to the firm's performance over time. The changeable variables included in the interface are the price per unit, new loans given to the firm, the interest rate on loans, the supply rate, the sales orders, and the payment policies, whether the one given to the firm's customers or the one given to the firm by its suppliers.

Through this Simulator, the user is able to switch between three main pages: the interface, in which he can change the variables and display the graphs of the main variables, and the financial ratios page, which shows the changes that might happen to each ratio at every time step. Figure 27 shows the model's interface with the changing parameters and the graphs that help the user to get a better insight and take his decisions efficiently, while figure 28 displays the financial ratios report over the simulation period


Figure 28. The financial ratios report

and the changes that happened to each of these ratios every month.

This Simulator switches to the new paradigm as discussed in chapter three: it switches from static to dynamic, from reductionism to holism, and from linear to non-linear thinking. It allows the analyst to observe the performance of the firm over the simulation time and analyse the significant interactions inside a firm.

Figure 29 is the Excel sheet table that provides the system dynamics model with the firm's past balance sheet. The variables in this table are connected directly to the model, and any changes made on this sheet will immediately be transferred to the Simulator if the firm under investigation is changed.

CONCLUSION

This chapter found that using new types of thinking in analysing small firms in Jordan is an effective method of gaining a deeper insight into any firm. A simulator has been built based on a system dynamics model, which the analyst can interact with.

We found that the analyst got a better understanding of the various interactions inside small firms through his ability to test different scenarios by interacting with the simulator's interface. The simulator enabled the analysts to get a deeper insight into the firm, predict its future behaviour under any potential shock, and simulate and analyse any decision or policy that the firm will implement in the future before it is implemented in the real world.

This chapter found that if the bank's credit officer used the Simulator at the time of approving the loan, he would make better credit decisions, based on a deep understanding of the interconnections inside the firm and the organisational structure assessment.


Figure 29. The Excel sheet

REFERENCES

Anderson, V., & Johnson, L. (1997). Systems thinking tools: from concepts to causal loops. Waltham, MA: Pegasus Communications, Inc.

Bach, M. (2003). Surviving in an environment of financial indiscipline: a case study from a transition country. System Dynamics Review, 19(1), 47–74. doi:10.1002/sdr.253

Bianchi, C., & Bivona, E. (1999). Commercial and financial policies in small and micro family firms: the small business growth management flight simulator. Simulation and Gaming. Thousand Oaks, CA: Sage Publications.

Brehmer, B. (1992). Dynamic decision making: human control of complex systems. Acta Psychologica, 81, 211–241. doi:10.1016/0001-6918(92)90019-A

Breierova, L., & Choudhari, M. (1996). An introduction to sensitivity analysis. MIT System Dynamics in Education Project.

Cebenoyan, A., & Strahan, P. (2004). Risk management, capital structure and lending at banks. Journal of Banking & Finance, 28, 19–43. doi:10.1016/S0378-4266(02)00391-6

Forrester, J. (1961). Industrial Dynamics. New York: MIT Press and Wiley Inc.

Forrester, J. (1992). System dynamics, systems thinking, and soft OR. System Dynamics Review, 10(2).

Gitman, L. (2000). Managerial Finance: Brief. Reading, MA: Addison-Wesley.

Jackson, M. (2003). System Thinking: Creative Holism for Managers. Chichester, UK: Wiley.

Jimenez, G., & Saurina, J. (2003). Collateral, type of lender and relationship banking as determinants of credit risk. Journal of Banking and Finance.

Lane, D. (1997). Invited reviews on system dynamics. The Journal of the Operational Research Society, 48, 1254–1259.


Lyneis, J. (2000). System dynamics for market forecasting and structural analysis. System Dynamics Review, 16(1), 3–25. doi:10.1002/(SICI)1099-1727(200021)16:1<3::AID-SDR183>3.0.CO;2-5

Maani, K., & Cavana, R. (2000). Systems thinking and modelling: understanding change and complexity. New Zealand: Pearson Education New Zealand.

Martin, L. (1997). Mistakes and Misunderstandings. System Dynamics in Education Project, System Dynamics Group, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA.

Senge, P. (1990). The Fifth Discipline: The Art & Practice of the Learning Organisation. New York.

Sterman, J. (1992). Teaching Takes Off: Flight Simulators for Management Education, the Beer Game. Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA.

Vennix, J. (1996). Group model building: facilitating team learning using system dynamics. Chichester, UK: Wiley.

Wolstenholme, E. (1990). System Enquiry: A System Dynamics Approach. New York: Wiley.

KEY TERMS AND DEFINITIONS

Causal Loop Diagram: Causal loop diagrams (CLDs) are a kind of systems thinking tool. These diagrams consist of arrows connecting variables (things that change over time) in a way that shows how one variable affects another.

Financial Ratios: Financial ratios are a valuable and easy way to interpret the numbers found in statements. They can help to answer critical questions such as whether the business is carrying excess debt or inventory, whether customers are paying according to terms, whether the operating expenses are too high, and whether the company assets are being used properly to generate income. When computing financial relationships, a good indication of the company's financial strengths and weaknesses becomes clear. Examining these ratios over time provides some insight into how effectively the business is being operated.

Simulator: A computer simulation is an attempt to model a real-life or hypothetical situation on a computer so that it can be studied to see how the system works, by changing variables, making predictions and testing different scenarios.

Simulator's Interface: The interface, or user interface, is the aggregate of means by which the user interacts with the system, a particular machine, device, computer program or other complex tool. The simulator in this research is constructed based on the simulation model. It is interactive and provides managers with a user interface that allows them to experiment with the model.

Stock and Flows Diagram: Stock and flow diagrams provide a bridge to system dynamics modeling and simulation. Basically, stock and flow diagrams contain specific symbols and components representing the structure of a system. Stocks are things that can accumulate (think of a stock as a bathtub). Flows represent rates of change (think of a flow as a bathtub faucet, which adds to the stock, or a bathtub drain, which reduces the stock). These diagrams also contain "clouds," which represent the boundaries of the problem or system in question.

System Dynamics Model: A system dynamics model is a computer simulation model based on system dynamics features that aims to confirm that the hypothesized structure can lead to the observed behaviour and to test the effects of alternative policies on key variables over time. The modelling purpose is to solve a problem and also to gain insight into the real problem in order to design effective policies in the real world, with all its ambiguity, messiness, time pressure and interpersonal conflict. It is a feedback process, not a linear sequence of steps. System dynamics models are very important tools for managers that


enable them to design their organizations, shap- system—a set of elements that interact to produce
ing its structure, strategies and design rules and behaviour—of which it is a part. This means that
enable them to take different decisions in their instead of isolating smaller and smaller parts of
organizations. It is an effective tool in promoting the system being studied, systems thinking works
system thinking in an organization. by expanding its view to take into account larger
System Dynamics: System dynamics is an and larger numbers of interactions as an issue is
approach to understanding the behaviour of over being studied. This results in sometimes strik-
time. It deals with internal feedback loops and ingly different conclusions than those generated
time delays that affect the behaviour of the entire by traditional forms of analysis, especially when
system. It also helps the decision maker untangle what is being studied is dynamically complex or
the complexity of the connections between vari- has a great deal of feedback from other sources,
ous policy variables by providing a new language internal or external. Systems thinking allows
and set of tools to describe. Then it does this by people to make their understanding of social
modeling the cause and effect relationships among systems explicit and improve them in the same
these variables way that people can use engineering principles to
Systems Thinking: Systems thinking, in make explicit and improve their understanding of
contrast, focuses on how the thing being stud- mechanical systems.
ied interacts with the other constituents of the


Chapter 22
The State of Computer
Simulation Applications
in Construction
Mohamed Marzouk
Cairo University, Egypt

ABSTRACT
Construction operations are performed under different working conditions including (but not limited to)
unexpected weather conditions, equipment breakdown, delays in procurement, etc. As such, computer
simulation is considered an appropriate technique for modeling the randomness of construction operations.
Several construction processes and operations have been modeled utilizing computer simulation such
as earthmoving, construction of bridges and tunnels, concrete placement operations, paving processes,
and coordination of crane operations. This chapter presents an overview of computer simulation efforts that have been performed in the area of construction engineering and management. Also, it presents two computer simulation applications in construction: earthmoving and the construction of bridge decks. Comprehensive case studies are worked out to illustrate the practicality of using computer simulation in scheduling construction projects, taking into account the associated uncertainties inherent in construction operations.

INTRODUCTION

Simulation is one of the techniques that has been used to model the uncertainties involved in construction operations. Although simulation is a powerful tool for modeling construction operations, the application of simulation is still limited in the construction domain. This has generally been attributed to the difficulty in learning and applying simulation languages in industry (Sawhney and AbouRizk 1996, Oloufa et al 1998, Touran 1990). The simulation process is an iterative process which involves different steps. Simulation has been defined as the “imitation of a real-world process or system over time” (Banks et al 2000). Modeling construction operations utilizing discrete event simulation requires the modeler to define three main elements (Schriber and Brunner 1999): project, experiments and replications (see

DOI: 10.4018/978-1-60566-774-4.ch022
Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
Figure 1). A “project” is performed to study a certain operation which has specific characteristics; for example, an earthmoving operation that contains a definite scope of work and specific road characteristics. An “experiment” represents one alternative of the project under consideration, formed by changing the resources assigned for the execution of the project and/or its individual activities. A “replication” represents one execution of an experiment within the project.

Figure 1. Elements of discrete event simulation

Modeling utilizing simulation can be applied in either a general or special purpose simulation environment. General purpose simulation (GPS) is based on formulating a simulation model for the system under study, running the simulation, and analyzing the results in order to decide whether the system is acceptable or not. If it is unacceptable, the process is re-iterated and a new alternative system is considered. Different GPS software systems have been developed for a wide range of industries, such as AweSim (Pritsker et al 1997) and GPSS/H (Crain 1997), and for construction, such as Micro-CYCLONE (Halpin and Riggs 1992) and STROBOSCOPE (Martinez 1996). Special
purpose simulation (SPS) is based on the creation of a platform or a template for a specific domain of application (Marzouk and Moselhi 2000, AbouRizk and Hajjar 1998). The steps for simulation in this case are the same as in the GPS case except for the first step (construct simulation model), since the platform already has the characteristics and behavior of the system under study. Also, modification is limited to the input parameter(s) of a pre-defined system and not to the characteristics and behavior of the system itself. This chapter presents the state of computer simulation applications in construction. Then, it describes in detail two construction applications that have been modeled using computer simulation. These applications are earthmoving and the construction of bridge decks.

Background

Considerable efforts have been made to model construction operations utilizing simulation. These include Halpin (1977), Paulson (1978), Ioannou (1989), Oloufa (1993), Tommelein et al (1994), Sawhney and AbouRizk (1995), Shi (1995), Martinez (1996), McCabe (1997), Oloufa et al (1998), and Hajjar and AbouRizk (1999). CYCLONE, CYCLic Operation NEtwork (Halpin 1977), is a modeling system that provides a quantitative way of viewing, planning, analyzing and controlling construction processes and their respective operations. CYCLONE is a network simulation language that models construction activities which have a cyclic or repetitive nature. CYCLONE consists of six basic elements (see Figure 2): 1) NORMAL, which represents an unconstrained work task; 2) COMBI, which represents a constrained work task; 3) QUEUE, which represents a waiting location for resources; 4) FUNCTION, which describes a process function (generation or consolidation); 5) COUNTER, which controls the iterations of the cyclic operation; and 6) ARCS, which represent the flow logic.

Figure 2. CYCLONE Modeling Elements (adapted from Halpin 1977)

Different simulation implementations have been developed utilizing CYCLONE; these include INSIGHT (Paulson 1978), UM_CYCLONE (Ioannou 1989), and Micro-CYCLONE (Halpin and Riggs 1992). Several construction processes and operations have been modeled utilizing CYCLONE: selecting loader-truck fleets (Farid and Koning 1994); construction of cable-stayed bridges (Abraham and Halpin 1998, Huang et al 1994); resolving construction disputes (AbouRizk and Dozzi 1993); concrete placement operations (Vanegas et al 1993); placing and finishing slab units (Hason 1994); and paving processes (Lluch and Halpin 1982, Gowda et al 1998). Oloufa (1993) proposed an object-oriented approach for simulating construction operations. In this approach, the system at hand is modeled by creating objects of classes which represent the system’s resources and entities. These objects interact and communicate amongst one another via message transfer. He developed a simulation language (MODSIM) dedicated to earthmoving operations.

Tommelein et al (1994) developed an object-oriented system (CIPROS) that models construction processes by matching resource properties to those of design components. Two types of resources were distinguished in this model: product components and construction resources. Product components are design element classes which form the process being modeled. They are defined for the contractor by plans and specifications. On the other hand, construction resources are temporary equipment and material that are used during construction. In order to create a simulation model utilizing CIPROS, the user has to go through different steps: 1) define the project design and specification; 2) create an activity-level plan and relate activities; 3) initialize product components; 4) identify construction resources; 5) construct the simulation network (formation of the elemental simulation networks which describe the methods of construction); and finally, 6) run the simulation.

HSM (Sawhney and AbouRizk 1995, Sawhney and AbouRizk 1996) is a hierarchical simulation modeling approach for planning construction projects which combines the concepts of work breakdown structure and process modeling. Modeling a project via HSM requires performing different stages: 1) divide the project into a hierarchical structure (project, operations, and processes); 2) create a resource library for the project (names, quantity, cost, etc.); 3) identify the sequence of operations and links (serial, parallel, cyclic, or hammock links); 4) perform process modeling utilizing the CYCLONE modeling elements along with added ones dedicated to resources and process linkages; and finally 5) extend the model to a common level in order to run the simulation.

RBM (Shi 1995, Shi and AbouRizk 1997) is a resource-based modeling approach for construction simulation which decomposes the operating processes into atomic models. For developing a model utilizing RBM, three steps should be performed: 1) define the resource library and specify the different atomic models along with their input and output ports; 2) define the project specifications, including system specifications (involving r-processes at the “process-task level” and their connectivity, resource assignment and termination conditions); 3) perform model generation by formatting the different r-processes into SLAM II network statements along with the linkage transitions among these r-processes, whether direct (where entities can be routed directly to the following r-process) or indirect (where there is a need for a conversion). Martinez and Ioannou (1999) and Martinez (1996) developed a general purpose simulation programming language (STROBOSCOPE). To model construction operations utilizing STROBOSCOPE, the modeler needs to write a series of programming statements that defines the network modeling elements. STROBOSCOPE has different advantages, including: 1) the ability to access the state of the simulation (e.g. simulation time, number of entities waiting in their queues, etc.) and 2) the ability to distinguish the involved resources and entities. Several construction processes and operations have been modeled utilizing STROBOSCOPE, including: earthmoving (Ioannou 1999, Martinez 1998); location of temporary construction facilities (Tommelein 1999); and the impacts of changes on highway constructions (Cor and Martinez 1999).

McCabe (1997 and 1998) developed an automated modeling approach which integrates computer simulation and belief networks. Computer simulation is used to model construction operations, whereas belief networks are utilized as a diagnostic tool to evaluate project performance indices such as: 1) queue length (QL); 2) queue wait (QW); 3) server quantity (SQ); 4) server utilization (SU); and 5) customer delay (CD). These indices are calculated from the simulation
statistics; subsequently, the performance of the system is evaluated by the belief network in order to reach a corrective action. This corrective action includes modifying the number and/or capacity of the involved servers or entities in the model. Oloufa et al (1998) proposed a set of resource-based simulation libraries as an application of special purpose simulation (SPS). In this approach, simulation is performed by first selecting construction resources from the developed libraries, then linking the resources to define the logic which describes the interaction among the resources used. Different resource libraries are required to be defined in order to serve different applications. For example, the resource library for shield tunneling (the selected implementation for their approach) contains different resources including a TBM (tunnel boring machine), trains, trucks, different rail types, a vertical conveyor, etc. Simphony (Hajjar and AbouRizk 1999) is a computer system for building special purpose construction simulation tools. Different simulation tools were implemented in the Simphony environment (AbouRizk and Hajjar 1998), including AP2-Earth for earthmoving analysis, CRUISER for aggregate production analysis, and CSD for optimization of construction dewatering operations. Chandrakanthi et al. (2002) presented a model to predict waste generation rates, as well as to determine the economic advantages of recycling at construction sites. The model was developed to achieve the following:

• estimate the amount of waste that one specific project will generate using its project plan (stochastic activity schedule),
• quantify the reusable fraction of waste material,
• optimize the methods to sort, store and transport the collected reusable or recyclable materials,
• identify the capacity, locations, and number of recycle bins required for a site, and
• identify costs for these operations and optimize resource utilization in terms of labor and equipment for waste management.

Appleton et al. (2002) presented a special purpose simulation template dedicated to tower crane operations. They modeled on-site management of the tower crane resource based on prioritized work tasks that need to be performed within a specified period of time. Elfving and Tommelein (2003) presented a model that uses various input scenarios to show how sensitive the procurement time is to the effects of multitasking and merge bias. The model determines the time required to procure complex equipment and to locate and size time buffers in the procurement process. Song and AbouRizk (2003) developed a system for building virtual fabrication shop models that can perform estimating, scheduling, and production analysis. The system is capable of defining conceptual models for the product, the process, and the fabrication facility itself. It offers tools, such as product modeling, process modeling and planning, and a special purpose facility modeling tool, which allow users to implement these conceptual models. Lu and Chan (2004) modeled the effects of operational interruptions upon system performance by applying the Simplified Discrete Event Simulation Approach (SDESA). The developed SDESA modeling was illustrated with an application to an earthmoving operation.

Lee and Pena-Mora (2005) explored the use of system dynamics in identifying multiple feedback processes and softer aspects of managing errors and changes. The developed system was applied in a design-build highway project in Massachusetts. They concluded that the system dynamics approach can be an effective tool in understanding complex and dynamic construction processes and in supporting the decision making process of making appropriate policies to improve construction performance. Boskers and AbouRizk (2005) presented a simulation-based model for assessing uncertainty associated with capital infrastructure
projects. The proposed model accounts for expected fluctuations in the costs and durations of various work packages. Also, it accounts for the inflation of costs over time based on when the work packages occur. Zhang and Hammad (2007) presented a simulation model based on agents to coordinate crane operations where two cranes are working together. The model can dynamically control the kinematic action of the two cranes, taking into consideration the functional constraints for safety and efficiency of operations. Hanna and Ruwanpura (2007) proposed a simulation model capable of optimizing resource requirements for a petrochemical project, based on standard discipline requirements and involvements. All the above listed efforts are geared towards the use of computer simulation in construction. The following sections describe the developments made in two construction applications that have been modeled using computer simulation.

MODELING EARTHMOVING OPERATIONS

Problem Background

Earthmoving operations are commonly encountered in heavy civil engineering projects. In general, it is rare to find a construction project free of these operations. For large-scale projects (e.g. dams, highways, airports and mining), earthmoving operations are essential, frequently representing a sizable scope of work. Earthmoving operations are frequently performed under different job conditions, which may give rise to uncertainty. This includes equipment breakdown, inclement weather and unexpected site conditions. Traditional scheduling using network techniques (critical path method (CPM), precedence diagram method (PDM) and line of balance (LOB)) does not directly take such uncertainties into account (Sawhney and AbouRizk 1995). Also, earthmoving operations have a cyclic nature which can ideally be represented by simulation (Touran 1990). In fact, it is essential, in modeling these operations, to consider the dynamic interaction among the individual pieces of equipment in a production fleet.

System Description

A simulation system (SimEarth) has been designed and implemented in the Microsoft environment as a special purpose tool for estimating the time and cost of earthmoving operations (Marzouk and Moselhi 2004-a, Marzouk and Moselhi 2004-b, Marzouk and Moselhi 2003-a, Marzouk and Moselhi 2003-b, Marzouk and Moselhi 2002-a, Marzouk and Moselhi 2002-b, Marzouk 2002). The system consists of several components including the EarthMoving Simulation Program (EMSP), which has been designed utilizing discrete event simulation (DEVS) (Banks 1998) and object-oriented modeling (Quatrani 1998, Deitel and Deitel 1998, Skansholm 1997). Different features of object-orientation have been employed, including classes, objects, dynamic data structures, and polymorphism. Three-phase simulation (Pidd 1995) was employed instead of process interaction in order to control the dynamics of EMSP by tracking the activities of the simulated process. The three-phase simulation approach is considered most appropriate for object-oriented simulation (OOS), especially when there are too many entities involved in the process being modeled, to avoid synchronicity loss (Pidd 1995). The utilization of three-phase simulation and object-orientation in the developed engine is described subsequently.

EMSP tracks activities in three phases: phase 1, which advances simulation time to the next simulation event; phase 2, which carries out all due Bs; and phase 3, which carries out all possible Cs. Figure 3 depicts a number of activities being tracked by EMSP for an earthmoving operation that contains the main activities of load, haul, dump, and return. In Phase 1, EMSP’s first activity is removed and the simulation time is advanced
to the next time. In Phase 2, all due B activities (bound to happen) are carried out. In Phase 3, all possible C activities (conditional) are performed. It should be noted that an activity (e.g. haul) may appear more than once, since it may be carried out by different entities (e.g. hauler ID = 2 and 3), as shown in Figure 3. In the design of EMSP, different types of classes have been defined to capture the properties of key objects in earthmoving operations. This includes objects of entities, resources, activities and stored simulation statistics. The classes are coded in Microsoft Visual C++ 6.0. The classes used in the design of EMSP are of two types: auxiliary and main (see Figure 4).

Figure 3. Tracking of activities in EMSP

Auxiliary classes are connected to the main classes through either association or aggregation relationships, whereas the main classes are connected to each other through inheritance relationships. The main classes of EMSP capture different situations according to the activities involved. Therefore, they represent different combinations of earthmoving activities. Figure 5 depicts the sequence diagram that shows the progression of message sending to the other classes in order to perform a complete simulation run.

The main classes of EMSP represent an earthmoving operation that contains any combination of main and secondary activities, allowing interaction between equipment. The eight main classes are: OPY_Simulate, OPE_Simulate, OSD_Simulate, OCT_Simulate, PS_Simulate, PC_Simulate, PSC_Simulate and SC_Simulate. The OPY_Simulate class represents an earthmoving operation that consists of the four main activities: load, haul, dump and return. It has been designed to act as a base class for the main classes used in EMSP (see Figure 4), benefiting from the inheritance feature of object-orientation. It also has both association and aggregation relationships with the auxiliary classes. The association relationship is used with the M_Queue, Activity, Activity_List, M_Activity, Hauled_Earth and Haul_Equip classes, whereas the aggregation relationship is used with the Queue class (see Figure 4). Creating an object of the OPY_Simulate class and invoking its member functions provides a complete replication of a simulation experiment. During a simulation replication, the following tasks are carried out: 1) importing input data from external files; 2) creating objects of haulers (entities) and loaders (resources); 3) initializing the simulation and setting entities and resources into their queues; 4) defining the probability distributions for the durations of the activities involved; 5) adding and removing objects from the current activity list (CAL); 6) checking the termination condition for the replication (e.g. all earth has been hauled); and 7) storing simulation statistics for further analysis.

Figure 4. EMSP main and auxiliary classes

Initiating a simulation replication of an earthmoving operation that contains the four main activities causes EMSP to allocate memory space for an object of the OPY_Simulate class. Seven member functions (principal functions) of that class will be called consecutively (see Figure 6):
Figure 5. Sequence diagram for message transfer from the principal functions

1) Define_Activities(..);
2) Initiate_Simulation();
3) Activity_Drive();
4) Store_QueueLength_Statistics(..);
5) Store_WaitTime_Statistics(..);
6) Store_Activities_Duration_Statistics(..);
7) Store_SecondHauler_Activities_Statistics(..).

It should be noted that the last principal function is called only when a second hauler model exists.

Case Study

This case study considers the construction of the Sainte-Marguerite-3 (SM-3) dam. The dam is the
Figure 6. Flow of the principal functions

highest rockfill dam in the province of Quebec, Canada, located on the Sainte-Marguerite River northeast of Montreal. SM-3 is a $2 billion project, developed to generate 2.7 terawatt-hours (TWh) of electricity annually. The project consists of four main components (Hydro-Quebec 1999): 1) a rockfill dam, 2) an 882-megawatt (MW) power station, 3) a headrace tunnel to direct water to the power station, and 4) a spillway to discharge excess water from the reservoir. The dam location was chosen to benefit from a 330 m water head, seven times the height of Niagara Falls. The owner targeted the completion of the dam construction in three years. To model the rockfill dam, the following were considered: 1) the quantities and types of soils used to fill the body of the dam (i.e. the scope of work), 2) the locations of the soils’ borrow pits, 3) the target three-year construction duration, 4) the constraints posed by the relatively short construction season, 5) the equipment used to perform the work, and 6) the indirect cost components. The dam consists of several soil types with different size and compaction requirements. For simplicity, three soil types were considered in the modeling of the dam: 1) compacted moraine (clay), 2) granular (sand and gravel) and 3) rock. The total volume of the soil considered in the modeling was 6.3 million m3 (Hydro-Quebec 1999). The actual excavated natural soil from the riverbed was estimated to be 1,038,000 m3 (Peer 2001). In view of the relatively short construction season and the targeted project duration, the project was phased in three stages, each spanning a construction season as shown in Figure 7. Upon entering the input data to SimEarth and requesting the fleet that provides the minimum total duration, SimEarth first triggers its simulation engine (EMSP) to perform pilot runs (see Figure 8). Subsequently, simulation analysis is performed for each of the recommended fleets by specifying the number of simulation runs and activating the “Analyze” function (see Figure 8). Different reports can then be generated in graphical form (see Figure 9). Tables 1 and 2 list the estimated durations obtained from the simulation analysis and their associated direct costs. Table 3 provides the total cost, direct and indirect, of the project.

MODELING BRIDGES DECK CONSTRUCTION

Problem Background

Bridge construction projects are classified as infrastructure construction projects, which encompass also tunnels, airports, dams, highways,
Figure 7. Typical cross section of the dam

Figure 8. Results of pilot simulation

etc. Infrastructure projects are those which are characterized by long duration, large budget, and complexity. New construction methods have been developed to improve constructability, such as “segmental” construction, which eliminates the falsework system. Segmental construction is executed in a cyclic manner, which makes computer simulation well suited as a modeling technique. Such new construction methods can be modeled utilizing computer simulation.
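The cyclic nature noted above is what makes discrete event simulation attractive here: a deck is built by repeating the same short sequence of tasks segment after segment, with uncertain task durations. As a hedged illustration (the task names, distribution choices and all numbers below are hypothetical, not taken from the chapter), sampling stochastic cycle durations over many replications yields the kind of duration estimate the tools in this chapter produce:

```python
import random

# Hypothetical cyclic operation: one deck segment per cycle, each cycle
# made of three tasks with durations drawn from triangular distributions
# given as (low, high, mode) in hours. All figures are illustrative.
TASKS = {
    "position formwork": (2.0, 5.0, 3.0),
    "fix reinforcement": (3.0, 7.0, 4.0),
    "cast concrete":     (2.0, 4.0, 3.0),
}

def one_replication(cycles, rng):
    """Total duration of `cycles` back-to-back cycles in one run."""
    return sum(
        rng.triangular(low, high, mode)
        for _ in range(cycles)
        for (low, high, mode) in TASKS.values()
    )

rng = random.Random(42)   # fixed seed so the experiment is repeatable
durations = [one_replication(cycles=20, rng=rng) for _ in range(200)]
mean_duration = sum(durations) / len(durations)
```

With mean task durations of roughly 3.3, 4.7 and 3.0 hours, twenty cycles average out near 220 hours; the spread of `durations` is what a deterministic schedule cannot show.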
Figure 9. EMSP graphical report

Table 1. Estimated durations (hours); columns per stage are Main Activities, Spread Activity and Compact Activity

                  Stage 1                    Stage 2                    Stage 3
Scope             Main    Spread   Compact   Main    Spread   Compact   Main    Spread   Compact
Moraine           181     181      181*      857     857      857*      1000    1000     1000*
Granular          101     101      101*      746     746      746*      722     722      722*
Rock              452     516      532*      1505    1505     1505*     1253    1430     1476*
Excavated Clay    458*    812      ---       ---     ---      ---       ---     ---      ---
Total Duration    1272 hrs. = 159 Days       3108 hrs. = 195 Days       3198 hrs. = 200 Days
                  = 7.2 Months               = 7.5 Months               = 7.7 Months
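Duration estimates such as those in Table 1 are obtained by running many simulation replications and summarizing their outputs. A minimal sketch of that summary step, using a normal approximation for the interval (a t quantile would be more exact for a small number of runs); the sample values are illustrative, not taken from the case study:

```python
import statistics

def mean_confidence_interval(samples, z=1.96):
    """Approximate 95% confidence interval for the mean of replication
    outputs, using the normal quantile z = 1.96."""
    n = len(samples)
    mean = statistics.mean(samples)
    half_width = z * statistics.stdev(samples) / n ** 0.5
    return mean - half_width, mean + half_width

# Illustrative replication outputs (total duration in hours per run):
runs = [1260, 1281, 1275, 1268, 1290, 1272, 1266, 1284]
low, high = mean_confidence_interval(runs)
```

The reported duration would be the sample mean, with `low` and `high` bounding it at the chosen confidence level; adding runs narrows the interval roughly as the square root of their number.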
Table 2. Estimated Direct Cost (Dollars)

Fleet Name          Stage 1      Stage 2      Stage 3
Moraine             213,167      3,113,601    1,603,620
Granular            118,950      1,672,883    1,004,064
Rock                645,577      15,638,049   8,837,741
Excavated Clay      3,117,007    ---          ---
Total Direct Cost   4,094,701    20,424,532   11,445,424

Table 3. Project Total Cost (Dollars)

Stage           Direct Cost   Indirect Cost (Time Related)   Indirect Cost (Lump Sum)
Stage 1         4,094,701     1,800,000
Stage 2         20,424,532    3,750,000                      53,973,233
Stage 3         11,445,424    3,850,000
Total           35,964,657    63,373,233 (all indirect)
Markup (2.5%)   2,331,572
Total Cost      101,669,462
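The totals in Table 3 follow from a simple roll-up of the stage costs. The sketch below reproduces it with the figures copied from the table; the markup is added as the listed amount:

```python
# Roll-up of the Table 3 figures: direct cost per stage, time-related
# indirect cost per stage, one lump-sum indirect cost, plus markup.
direct = {"Stage 1": 4_094_701, "Stage 2": 20_424_532, "Stage 3": 11_445_424}
time_related_indirect = {"Stage 1": 1_800_000, "Stage 2": 3_750_000,
                         "Stage 3": 3_850_000}
lump_sum_indirect = 53_973_233
markup = 2_331_572            # taken as the listed amount in the table

total_direct = sum(direct.values())
total_indirect = sum(time_related_indirect.values()) + lump_sum_indirect
total_cost = total_direct + total_indirect + markup
```

The roll-up recovers the printed totals: 35,964,657 direct, 63,373,233 indirect, and a grand total of 101,669,462.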

System Description

A framework (Bridge_Sim) has been developed to aid contractors in the planning of bridge deck construction (Marzouk et al. 2008-a, Marzouk et al. 2008-b, Marzouk et al. 2007, Marzouk et al. 2006-a, Marzouk et al. 2006-b, Zein 2006). The developed framework performs two main functions: deck construction planning, and optimizing deck construction using the launching girder method. These two functions are performed by Bridge_Sim’s main component modules: the Bridge Analyzer Module, Simulation Module, Optimization Module, and Reporting Module. Figure 10 depicts a schematic diagram of the proposed framework that shows the interaction between its components.

The Bridge Analyzer Module is considered the coordinator of the planning function provided by Bridge_Sim. This module analyzes the project and breaks it down into construction zones. Dividing the bridge into zones is done taking into account:

1. The construction methods used in construction. For each construction method, a zone representing it is created and defined.
2. The set of assigned resources in each construction method. For a given construction method, resources can be assigned differently in two different construction locations; these two locations are defined as two separate zones using the same construction method.
3. The general sequence of construction. If two parts are constructed using the same construction method by the same set of resources and are required to be executed simultaneously or successively, then these two parts are defined as two separate zones with a specific relationship between them.

The procedure followed by the Bridge Analyzer Module to perform a planning session of deck construction can be summarized as follows (see Figure 11):

Figure 10. Interaction amongst Bridge_Sim components

1. Define General Data. General data involves


Figure 11. Flowchart of bridge analyzer module
number of working hours per day, number
processes
of working days per week, number of bridge
zones according to contractor plan, and
estimated indirect costs. The framework
provides numerous indirect cost items falling
into two main categories; time-dependent
and time-independent indirect costs.
2. Define Zones Data. For each zone, the
contractor selects the applied construction
method, define number of sub-zones, and
assign labor crews and equipment. The sub-
zone may be a pier or a segment depending
on the type of construction method.
3. Define Sub-Zones data. For each sub-zone
within a zone, the contractor estimates the
durations of involved tasks and materials
costs. The framework provides the contrac-
tor with five probability density functions
to estimate task duration.
4. Run Simulation Module. The Bridge
Analyzer Module sends input data to the
Simulation Module to estimate the total dura-
tion and cost for each zone and/or sub-zone.
The contractor have to define two simulation
parameters; number of simulation runs and
confidence level.
5. Export output to Reporting Module. Once
the Simulation Module estimates the total
duration and cost for each zone and/or sub-

522
The State of Computer Simulation Applications in Construction

zone, the Bridge Analyzer Module retrieves the output obtained from the Simulation Module and exports it to the Reporting Module. The Reporting Module processes the output to get the total duration and cost of the whole deck.

The Simulation Module is responsible for estimating the total duration and total cost of bridge deck construction. The proposed Simulation Module utilizes STROBOSCOPE (Martinez 1996), a general purpose simulation language. The Simulation Module is developed in Microsoft Visual Basic 6.0 to control the STROBOSCOPE program. There are fourteen simulation models built into the developed Simulation Module, as listed in Figure 12. Figure 13 lists the STROBOSCOPE elements utilized to build up the simulation models.

According to the method of construction, the Simulation Module selects the simulation model that represents the case, and it modifies the selected model to account for the input data. The input data are: 1) the work size (i.e. number of segments), 2) numbers of the assigned resources, 3) number of replications, 4) number of working hours per day, and 5) estimated durations for construction tasks. After modifying the template simulation input file, the Simulation Module launches STROBOSCOPE to run this input file. The output (i.e. durations) is exported to a text file to be retrieved by the Simulation Module to perform cost calculations.

Developing a simulation model representing a specific construction operation needs special skills to be acquired by the modeler, such as programming logic and engineering knowledge. The procedure of designing and building a simulation model can be summarized as follows:

1. Study the operation under consideration to analyze it and determine its main components, specifying its technical limitations.
2. Break down the considered operation into main processes that have their respective tasks. For each task, the involved resources (materials, labor, and/or equipment) are defined.

Figure 12. Developed simulation models for bridge deck construction


3. Determine the type of each task, either Normal or Combi, depending on the need for resources. A Combi task is required to be preceded by a Queue for each involved resource.
4. Determine the execution sequence and relationships between tasks by using Arcs to produce the network.
5. Create control statements to add more control and to model logical conditions which cannot be modeled using normal arcs and tasks.
6. Code the developed network and control statements in the form of a simulation language.
7. Verify the simulation model and test it by running it on real cases to determine any bugs or errors.

The following section describes the cast-in-place on falsework method as an example of the developed simulation models.

Cast-in-Place on Falsework Method

The cast-in-place on falsework technique involves four main processes for each span: 1) falsework system erection, 2) bottom flange construction, 3) top flange and webs construction, and 4) stressing and dismantling of formwork and falsework. Table 4 lists the processes and tasks of deck construction using the cast-in-place on falsework method. Figure 14 depicts the elements of the network that capture the subject construction method. The falsework system erection process is done by four tasks: i) setting of pads, ii) constructing bents, iii) setting stringers, and iv) rolling out soffit (see Figure 15). Falsework system erection starts with the setting of timber pads, which act as a means of transferring dead and live loads of the structure under construction to the ground prior to post-tensioning (see Figure 16). Setting of pads is represented by a Combi element, named SettingPads, which needs two types of resources:

Figure 13. STROBOSCOPE simulation elements (adapted from Martinez 1996)


Table 4. Processes and tasks of the falsework method

Process: Falsework Erection
  SettingPads         Setting of falsework pads
  BentsErection       Falsework bents erection
  SettingStringers    Setting of falsework stringers
  RollingSoffit       Falsework soffit rolling

Process: Bottom Flange & Webs
  ExternalFormwork    Erection of webs external formwork
  Rebar1              Placing of bottom flange and webs reinforcement
  InstallDucts1       Placing of stressing ducts
  InternalFormwork    Erection of webs internal formwork
  Casting1            Bottom flange and webs casting
  Curing1             Bottom flange and webs curing

Process: Top Flange
  Dismantle1          Dismantling of webs internal formwork
  FlangeFormwork      Erection of top flange formwork
  Rebar2              Placing of top flange reinforcement
  InstallDucts2       Placing of stressing ducts
  Casting2            Top flange casting
  Curing2             Top flange curing

Process: Stressing & Dismantling
  Stressing           Inserting and stressing of stressing cables
  CompleteDismantle   Dismantling of all falsework and formwork
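The Combi/Normal distinction described above can be illustrated with a small, self-contained Python sketch of the falsework-erection chain in Table 4. This stands in for the STROBOSCOPE network only; the task durations and resource names are illustrative assumptions, not values from the chapter. A Combi-style task starts only when its resource queues hold a resource; the subsequent Normal-style tasks simply inherit the resources drawn by their predecessor.

```python
# Sketch of the falsework-erection chain in Table 4 as a plain-Python
# discrete-event run. Durations (days) are illustrative only.
FALSEWORK_TASKS = [("SettingPads", 1.0), ("BentsErection", 2.0),
                   ("SettingStringers", 1.5), ("RollingSoffit", 1.0)]

def simulate_span(crew_queue, dummy_queue):
    # Combi behaviour: SettingPads starts only if both queues hold a
    # resource; otherwise the run terminates for lack of resources.
    if not crew_queue or not dummy_queue:
        return None
    crew, dummy = crew_queue.pop(), dummy_queue.pop()
    clock, trace = 0.0, []
    for name, duration in FALSEWORK_TASKS:
        clock += duration            # Normal tasks inherit crew and dummy
        trace.append((name, clock))
    crew_queue.append(crew)          # release resources at the end of the span
    dummy_queue.append(dummy)
    return trace

trace = simulate_span(["FormCrew"], ["Dummy"])
print(trace[-1])  # ('RollingSoffit', 5.5)
```

Returning `None` when a queue is empty mirrors the "lack of resources" termination discussed later for the single-span model.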

formwork crew, and a dummy resource, which is used to maintain the logic flow and dependency between activities. To allow resource tracing, each resource path is referenced with identity letters. The two resources, formwork crew and dummy, are drawn to SettingPads through links F1 and L1, respectively. After pads setting, vertical bents (steel or timber) are erected to serve as a support for the stringers and to transfer stringer loads to the bottom pads, as shown in Figure 16. Vertical bents erection is represented by a Normal element named BentsErection, which has the same resources required by the previous task. The third task in falsework erection is setting stringers, which are made of either timber or steel. Stringers are set in place one by one and spaced as determined in the falsework design. SettingStringers is a Normal element that receives its resources (formwork crew) from BentsErection. Once the stringers are set, the rolling out soffit task takes place. The soffit consists of two main elements: timber joists and plywood sheets. Timber joists are set perpendicular to the stringers, whereas plywood sheets are placed on the timber joists to serve as the bottom form for the bottom flange of the bridge. RollingSoffit is a Normal element that draws its resources (formwork crew and crane) from SettingStringers.

The process of bottom flange and webs construction starts with the erection of the external formwork of the webs (ExternalFormwork). This task is a Normal task that derives its resource (i.e. Formcrew) immediately from the preceding RollingSoffit task. It is followed by five tasks to finish up the process of bottom flange and webs construction: Rebar1, InstallDucts1, InternalFormwork, Casting1, and Curing1. Each task requires its own resources except Curing1, which is a Normal task that requires only the dummy resource from the preceding task. The third process


Figure 14. Simulation model of cast-in-place on falsework method


Figure 15. Typical falsework structure and components

Figure 16. Falsework components (adapted from Tischer and Kuprenas 2003)

named top flange construction involves the same tasks as the previous process, but preceded by an additional task named Dismantle1. This task is to dismantle the inner formwork of the webs to clear space for the top flange formwork.

Until this moment, the whole section is constructed but cannot sustain its own self-weight unless the last process is accomplished. This process involves only two tasks: Stressing and CompleteDismantle. The first task of the fourth


Figure 17. El-Warrak bridge layout and construction zones

Table 5. Bridge zones vs. number of spans

ID    Zone                  No. of Spans   Total Length (m)
I     Eastern Shore         13             400
II    Eastern Nile Branch   6              600
III   El-Warrak Island      6              230
IV    Western Nile Branch   4              360
V     Western Shore         22             670

process is to thread the stressing cables into the empty ducts and stress them to the design load. The last task is to dismantle all erected formwork and falsework systems after the superstructure is considered self-supported. The simulation model represents the construction of a single span. The Start Queue is initiated at the beginning of the simulation session by one dummy resource. The simulation model runs until the dummy resource reaches the Finish Queue. The simulation run stops when there is no more dummy resource to initiate SettingPads. This termination of simulation is named "lack of resources" termination.

Case Study

This case study considers the deck construction of El-Warrak Bridge, which is a part of the Ring Road of Cairo, Egypt. It links Basos city to El-Warrak city over the River Nile, and it crosses El-Warrak Island. The Ring Road crosses the River Nile at two locations: El-Warrak Island in the north and El-Moneeb in the south. The total length of the bridge is 2,250 meters, whereas the length of its outlet and inlet ramps is 600 meters. The bridge width varies from 42 meters to 45 meters, while the ramps have a width of 9 meters. The total contract value of the bridge is L.E. 170 million, and it was planned to be executed in 55 months. The contract scope of work includes the execution of bridge foundations, piers, deck, and finishes. The bridge consists of five main zones (see Figure 17): i) Eastern Shore (Basos shore), ii) Eastern Nile Branch, iii) El-Warrak Island, iv) Western Nile Branch, and v) Western Shore (El-Warrak shore). The Eastern Shore, El-Warrak Island, and Western Shore zones were constructed using a traditional cast-in-place on falsework method. The Eastern Nile Branch and Western Nile Branch zones were constructed using the cantilever carriage method. The number of spans and the length of each zone are listed in Table 5. The outlet and


inlet ramps, which exist on the western shore zone, have four ramps.

Once the case data are fed to the developed Bridge_Sim system, the system provides its outputs in the form of minimum expected duration, maximum expected duration, mean expected duration, and standard deviation for each sub-zone. Bridge_Sim adopts the Central Limit Theorem to estimate zone and project durations. Minimum, maximum, and mean expected durations for each zone are estimated by summing up the sub-zones' values (see Table 6). Falsework sub-zones are summed up considering the overlap period between casting segments.

Construction of the Eastern and Western Shores forms the longest path, which controls the total duration. The mean expected total duration for deck construction of El-Warrak Bridge is estimated to be 544 working days, which corresponds to 635 calendar days. The difference between the estimated total duration and the actual duration (the 3 years listed in the contract) represents the duration of mobilization, sub-structure construction, piers construction, and finishes. The associated total cost of the deck is 66,875,526 L.E., which consists of 57,791,708 L.E. as direct costs and 9,083,818 L.E. as indirect costs.

Future Trends

Future research on the use of computer simulation in construction encompasses several areas. Although simulation optimization has been applied in the construction domain (Marzouk et al. 2008, Marzouk and Moselhi 2004-b, Marzouk and Moselhi 2003-a), several opportunities exist to be challenged, such as multi-objective optimization, constraint optimization, and evolutionary algorithms. Also, integrating computer simulation in Building Information Modeling (BIM) is a promising trend for future research in construction engineering and management. That can be done by: 1) developing an easy-to-use 3D modeling approach that simplifies exploration, navigation, and analysis of construction operations, 2) formulating an advanced estimating system that allows users to produce faster and more accurate cost estimates, reports, and bids, and 3) providing scheduling tools that automatically generate multiple project schedules based on customized scheduling logic, taking into consideration the uncertainties inherent in construction projects.

CONCLUDING REMARKS

This chapter reviewed the application of computer simulation in construction. Several general purpose simulation (GPS) software systems and special purpose simulation (SPS) platforms have been introduced in construction. Such systems and tools were extensively reviewed and described in the chapter. The chapter also presented two computer simulation applications in construction. The first application models earthmoving operations (SimEarth), whereas the second one models the

Table 6. Expected durations for El-Warrak bridge zones (days)

Zones                 Shortest Duration   Mean Duration   Longest Duration   Standard Deviation   Variance
Eastern Shore         121.68              122.75          123.82             3.34                 11.13
Eastern Nile Branch   379.74              381.33          382.92             4.19                 17.60
El-Warrak Island      89.84               90.61           91.35              2.98                 8.90
Western Nile Branch   373.70              375.41          377.08             4.31                 18.55
Western Shore         402.34              405.93          409.52             7.65                 58.59
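The Central Limit Theorem arithmetic behind Table 6 can be sketched in a few lines: means add along a path, and variances (not standard deviations) add. The snippet below applies this to the two shore zones that form the longest path. It is a minimal illustration only; the published 544-working-day estimate also accounts for falsework overlap periods, which this sketch ignores.

```python
import math

# (mean duration, variance) in days, taken from Table 6.
longest_path = {"Eastern Shore": (122.75, 11.13),
                "Western Shore": (405.93, 58.59)}

# CLT bookkeeping: sum the means, sum the variances.
path_mean = sum(mean for mean, _ in longest_path.values())
path_sd = math.sqrt(sum(var for _, var in longest_path.values()))

# 95% confidence interval under the normal approximation.
low, high = path_mean - 1.96 * path_sd, path_mean + 1.96 * path_sd

print(round(path_mean, 2), round(path_sd, 2))  # 528.68 8.35
```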


construction of bridges' decks (Bridge_Sim). The simulation engine of SimEarth, named EarthMoving Simulation Program (EMSP), was designed utilizing discrete event simulation (DEVS) and object-oriented modeling. Different features of object-orientation have been employed, including classes, objects, dynamic data structures, and polymorphism. Bridge_Sim has been developed to aid contractors in the planning of bridge deck construction. The developed framework performs two main functions: deck construction planning and optimizing deck construction using a launching girder. The designated tasks of Bridge_Sim's components (Bridge Analyzer Module and Simulation Module) were described. Two comprehensive case studies, which were modeled using SimEarth and Bridge_Sim, were presented.

REFERENCES

AbouRizk, S. M., & Dozzi, S. P. (1993). Application of computer simulation in resolving construction disputes. Journal of Construction Engineering and Management, 119(2), 355–373. doi:10.1061/(ASCE)0733-9364(1993)119:2(355)

AbouRizk, S. M., & Hajjar, D. (1998). A framework for applying simulation in construction. Canadian Journal of Civil Engineering, 25(3), 604–617. doi:10.1139/cjce-25-3-604

Abraham, D. M., & Halpin, D. W. (1998). Simulation of the construction of cable-stayed bridges. Canadian Journal of Civil Engineering, 25(3), 490–499. doi:10.1139/cjce-25-3-490

Appleton, B. J. A., Patra, J., Mohamed, Y., & AbouRizk, S. (2002). Special purpose simulation modeling of tower cranes. Proceedings of the 2002 Winter Simulation Conference, San Diego, CA, (pp. 1709-1715).

Banks, J. (1998). Handbook of simulation. New York: John Wiley & Sons, Inc.

Banks, J., Carson, J. S., Nelson, B. L., & Nicol, D. M. (2000). Discrete-event system simulation. Upper Saddle River, NJ: Prentice Hall, Inc.

Boskers, N. D., & AbouRizk, S. M. (2005). Modeling scheduling uncertainty in capital construction projects. Proceedings of the 2005 Winter Simulation Conference, Orlando, FL, (pp. 1500-1507).

Chandrakanthi, M., Ruwanpura, J. Y., Hettiaratchi, P., & Prado, B. (2002). Optimization of the waste management for construction projects using simulation. Proceedings of the 2002 Winter Simulation Conference, San Diego, CA, (pp. 1771-1777).

Cor, H., & Martinez, J. C. (1999). A case study in the quantification of a change in the conditions of a highway construction operation. Proceedings of the 1999 Winter Simulation Conference, Phoenix, AZ, (pp. 1007-1009).

Crain, R. C. (1997). Simulation using GPSS/H. Proceedings of the 1997 Winter Simulation Conference, Atlanta, GA, (pp. 567-573).

Deitel, H. M., & Deitel, P. J. (1998). C++ how to program. Upper Saddle River, NJ: Prentice Hall.

EBC Web site (2001). EBC achievements: civil engineering/earthworks. Retrieved from http://www.ebcinc.qc.ca/

Elfving, J. A., & Tommelein, I. D. (2003). Impact of multitasking and merge bias on procurement of complex equipment. Proceedings of the 2003 Winter Simulation Conference, New Orleans, LA, (pp. 1527-1533).

Farid, F., & Koning, T. L. (1994). Simulation verifies queuing program for selecting loader-truck fleets. Journal of Construction Engineering and Management, 120(2), 386–404. doi:10.1061/(ASCE)0733-9364(1994)120:2(386)


Gowda, R. K., Singh, A., & Connolly, M. (1998). Holistic enhancement of the production analysis of bituminous paving operations. Construction Management and Economics, 16(4), 417–432. doi:10.1080/014461998372204

Hajjar, D., & AbouRizk, S. M. (1999). Simphony: an environment for building special purpose simulation. Proceedings of the 1999 Winter Simulation Conference, Phoenix, AZ, (pp. 998-1006).

Halpin, D. W. (1977). CYCLONE: method for modeling job site processes. Journal of the Construction Division, 103(3), 489–499.

Halpin, D. W., & Riggs, L. S. (1992). Planning and analysis of construction operations. New York: John Wiley & Sons, Inc.

Hanna, M., & Ruwanpura, J. (2007). Simulation tool for manpower forecast loading and resource leveling. Proceedings of the 2007 Winter Simulation Conference, Washington, DC, (pp. 2099-2103).

Hason, S. F. (1994). Feasibility and implementation of automation and robotics in Canadian building construction operation. M.Sc. Thesis, Center for Building Studies, Concordia University, Portland, OR.

Huang, R., Grigoriadis, A. M., & Halpin, D. W. (1994). Simulation of cable-stayed bridges using DISCO. Proceedings of the 1994 Winter Simulation Conference, Orlando, FL, (pp. 1130-1136).

Hydro-Quebec (1999). Sainte-Marguerite-3 hydroelectric project: in harmony with the environment.

Ioannou, P. G. (1989). UM-CYCLONE user's manual. Division of Construction Engineering and Management, Purdue University, West Lafayette, IN.

Ioannou, P. G. (1999). Construction of dam embankment with nonstationary queue. Proceedings of the 1999 Winter Simulation Conference, Phoenix, AZ, (pp. 921-928).

Lee, S., & Pena-Mora, F. (2005). System dynamics approach for error and change management in concurrent design and construction. Proceedings of the 2005 Winter Simulation Conference, Orlando, FL, (pp. 1508-1514).

Lluch, J., & Halpin, D. W. (1982). Construction operations and microcomputers. Journal of the Construction Division, 108(CO1), 129–145.

Lu, M., & Chan, W. (2004). Modeling concurrent operational interruptions in construction activities with simplified discrete event simulation approach (SDESA). Proceedings of the 2004 Winter Simulation Conference, Washington, DC, (pp. 1260-1267).

Martinez, J. C. (1996). STROBOSCOPE: state and resource based simulation of construction process. Ph.D. Thesis, University of Michigan, Ann Arbor, MI.

Martinez, J. C. (1998). EarthMover: simulation tool for earthwork planning. Proceedings of the 1998 Winter Simulation Conference, Washington, DC, (pp. 1263-1271).

Martinez, J. C., & Ioannou, P. J. (1999). General-purpose systems for effective construction simulation. Journal of Construction Engineering and Management, 125(4), 265–276. doi:10.1061/(ASCE)0733-9364(1999)125:4(265)

Marzouk, M. (2002). Optimizing earthmoving operations using computer simulation. Ph.D. Thesis, Concordia University, Montreal, Canada.

Marzouk, M., & Moselhi, O. (2000). Optimizing earthmoving operations using object-oriented simulation. Proceedings of the 2000 Winter Simulation Conference, Orlando, FL, (pp. 1926-1932).


Marzouk, M., & Moselhi, O. (2004-a). Multiobjective optimization of earthmoving operations. Journal of Construction Engineering and Management, 130(1), 105–113. doi:10.1061/(ASCE)0733-9364(2004)130:1(105)

Marzouk, M., & Moselhi, O. (2004-b). Fuzzy clustering model for estimating haulers' travel time. Journal of Construction Engineering and Management, 130(6), 878–886. doi:10.1061/(ASCE)0733-9364(2004)130:6(878)

Marzouk, M., & Moselhi, O. (2003-a). Constraint-based genetic algorithm for earthmoving fleet selection. Canadian Journal of Civil Engineering, 30(4), 673–683. doi:10.1139/l03-006

Marzouk, M., & Moselhi, O. (2003-b). An object-oriented model for earthmoving operations. Journal of Construction Engineering and Management, 129(2), 173–181. doi:10.1061/(ASCE)0733-9364(2003)129:2(173)

Marzouk, M., & Moselhi, O. (2002-a). Bid preparation for earthmoving operations. Canadian Journal of Civil Engineering, 29(3), 517–532. doi:10.1139/l02-023

Marzouk, M., & Moselhi, O. (2002-b). Simulation optimization for earthmoving operations using genetic algorithms. Construction Management and Economics, 20(6), 535–544. doi:10.1080/01446190210156064

Marzouk, M., Said, H., & El-Said, M. (2008b). Special purpose simulation model for balanced cantilever bridges. Journal of Bridge Engineering, 13(2), 122–131. doi:10.1061/(ASCE)1084-0702(2008)13:2(122)

Marzouk, M., Zein, H., & El-Said, M. (2006a). Scheduling cast-in-situ on falsework bridges using computer simulation. Scientific Bulletin, Faculty of Engineering, Ain Shams University, 41(1), 231–245.

Marzouk, M., Zein, H., & El-Said, M. (2006b). BRIDGE_SIM: framework for planning and optimizing bridge deck construction using computer simulation. Proceedings of the 2006 Winter Simulation Conference, Monterey, CA, (pp. 2039-2046).

Marzouk, M., Zein, H., & El-Said, M. (2007). Application of computer simulation to construction of deck pushing bridges. Journal of Civil Engineering and Management, 13(1), 27–36.

Marzouk, M., Zein El-Dein, H., & El-Said, M. (2008a, in press). A framework for multiobjective optimization of launching girder bridges. Journal of Construction Engineering and Management.

McCabe, B. (1997). An automated modeling approach for construction performance improvement using computer simulation and belief networks. Ph.D. Thesis, Alberta University, Canada.

McCabe, B. (1998). Belief networks in construction simulation. Proceedings of the 1998 Winter Simulation Conference, Washington, DC, (pp. 1279-1286).

Oloufa, A. (1993). Modeling operational activities in object-oriented simulation. Journal of Computing in Civil Engineering, 7(1), 94–106. doi:10.1061/(ASCE)0887-3801(1993)7:1(94)

Oloufa, A., Ikeda, M., & Nguyen, T. (1998). Resource-based simulation libraries for construction. Automation in Construction, 7(4), 315–326. doi:10.1016/S0926-5805(98)00048-X

Paulson, G. C. Jr. (1978). Interactive graphics for simulating construction operations. Journal of the Construction Division, 104(1), 69–76.

Peer, G. A. (2001). Ready to serve. Heavy Construction News, March 2001, 16-19.

Pidd, M. (1995). Object orientation, discrete simulation and the three-phase approach. The Journal of the Operational Research Society, 46(3), 362–374.


Pritsker, A. A. B., O'Reilly, J. J., & LaVal, D. K. (1997). Simulation with Visual SLAM and AweSim. New York: John Wiley & Sons, Inc.

Quatrani, T. (1998). Visual modeling with Rational Rose and UML. Reading, MA: Addison-Wesley.

Sawhney, A., & AbouRizk, S. M. (1995). HSM: simulation-based planning method for construction projects. Journal of Construction Engineering and Management, 121(3), 297–303. doi:10.1061/(ASCE)0733-9364(1995)121:3(297)

Sawhney, A., & AbouRizk, S. M. (1996). Computerized tool for hierarchical simulation modeling. Journal of Computing in Civil Engineering, 10(2), 115–124. doi:10.1061/(ASCE)0887-3801(1996)10:2(115)

Schriber, T. J., & Brunner, D. T. (1999). Inside discrete-event simulation software: how it works and how it matters. Proceedings of the 1999 Winter Simulation Conference, Phoenix, AZ, (pp. 72-80).

Shi, J. (1995). Optimization for construction simulation. Ph.D. Thesis, Alberta University, Canada.

Shi, J., & AbouRizk, S. M. (1997). Resource-based modeling for construction simulation. Journal of Construction Engineering and Management, 123(1), 26–33. doi:10.1061/(ASCE)0733-9364(1997)123:1(26)

Skansholm, J. (1997). C++ from the beginning. Harlow, UK: Addison-Wesley.

Song, L., & AbouRizk, S. M. (2003). Building a virtual shop model for steel fabrication. Proceedings of the 2003 Winter Simulation Conference, New Orleans, LA, (pp. 1510-1517).

Tischer, T. E., & Kuprenas, J. A. (2003). Bridge falsework productivity – measurement and influences. Journal of Construction Engineering and Management, 129(3), 243–250. doi:10.1061/(ASCE)0733-9364(2003)129:3(243)

Tommelein, I. D. (1999). Travel-time simulation to locate and staff temporary facilities under changing construction demand. Proceedings of the 1999 Winter Simulation Conference, Phoenix, AZ, (pp. 978-984).

Tommelein, I. D., Carr, R. I., & Odeh, A. M. (1994). Assembly of simulation networks using designs, plans, and methods. Journal of Construction Engineering and Management, 120(4), 796–815. doi:10.1061/(ASCE)0733-9364(1994)120:4(796)

Touran, A. (1990). Integration of simulation with expert systems. Journal of Construction Engineering and Management, 116(3), 480–493. doi:10.1061/(ASCE)0733-9364(1990)116:3(480)

Vanegas, J. A., Bravo, E. B., & Halpin, D. W. (1993). Simulation technologies for planning heavy construction processes. Journal of Construction Engineering and Management, 119(2), 336–354. doi:10.1061/(ASCE)0733-9364(1993)119:2(336)

Zein, H. (2006). A framework for planning and optimizing bridge deck construction using computer simulation. M.Sc. Thesis, Cairo University, Cairo, Egypt.

Zhang, C., & Hammad, A. (2007). Agent-based simulation for collaborative cranes. Proceedings of the 2007 Winter Simulation Conference, Washington, DC, (pp. 2051-2056).

KEY TERMS AND DEFINITIONS

General Purpose Simulation: is based on formulating a simulation model for the system under study, running the simulation, and analyzing the results in order to decide whether the system is acceptable or not.

Special Purpose Simulation: is based on the creation of a platform or a template for a specific domain of application.

Object-Oriented Modeling: is a modeling paradigm mainly used in computer programming that considers the problem not as a set of functions that can be performed, but primarily as a set of related, interacting objects.

B Activity: a bound-to-happen activity.

C Activity: a conditional activity.

Building Information Modeling: a digital representation of the physical and functional characteristics of a facility.

Simulation Optimization: the process of maximizing information retrieval from simulation analysis without carrying out the analysis for all combinations of input variables.
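The B-activity and C-activity terms come from the three-phase simulation approach (Pidd 1995): an A-phase advances the clock to the next bound event, a B-phase executes every bound-to-happen activity due at that time, and a C-phase repeatedly scans conditional activities until none can start. The sketch below is a toy illustration of that executive; the function names and the crew example are invented for illustration, not taken from the chapter.

```python
def three_phase(b_events, c_activities, state):
    """Toy three-phase executive. b_events maps clock times to lists of
    B-activity callbacks; c_activities are predicates that fire (and
    return True) when their start conditions hold."""
    while b_events:
        clock = min(b_events)                # A-phase: advance the clock
        for fire in b_events.pop(clock):     # B-phase: bound activities
            fire(state)
        changed = True
        while changed:                       # C-phase: scan until quiescent
            changed = any(c(state) for c in c_activities)
    return state

# Example: a pour finishes at t = 5 (B-activity), freeing the crew so a
# waiting task can start (C-activity).
def pour_done(state):
    state["crew_free"] = True

def start_waiting_task(state):
    if state["crew_free"] and state["waiting"] > 0:
        state["waiting"] -= 1
        state["started"] += 1
        return True
    return False

state = three_phase({5: [pour_done]}, [start_waiting_task],
                    {"crew_free": False, "waiting": 1, "started": 0})
print(state["started"])  # 1
```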

Dynamics Review, 19(1), 47–74. doi:10.1002/sdr.253 applications. International Journal of Mobile Commu-
nications, 2(3), 310–327.
Badouel, E., & Darondeau, P. (1997, September). Stratified
Petri Nets. In B. S. Chlebus & L. Czaja (Eds.), Proceedings Bani-Yassein, M., Ould-Khaoua, M., Mackenzie, L., & Pa-
of the 11th International Symposium on Fundamentals panastasiou, S. (2006). Performance analysis of adjusted
of Computation Theory (FCT’97) (p. 117-128). Kraków, probabilistic broadcasting in mobile ad hoc networks.
Poland: Springer. International Journal of Wireless Information Networks,
13(2), 127–140. doi:10.1007/s10776-006-0027-0
Badouel, E., & Oliver, J. (1998, January). Reconfigu-
rable Nets, a Class of High Level Petri Nets Supporting Bankes, S. C. (2002). Agent-based modeling: A revolu-
Dynamic Changes within Workflow Systems (IRISA tion? Proceedings of the National Academy of Sciences
Research Report No. PI-1163). IRISA. of the United States of America, 99(10), 7199–7200.
doi:10.1073/pnas.072081299
Baecker, R. M. (1998). Sorting out sorting: A case study
of software visualization for teaching computer science. Banks, J. (1998). Handbook of simulation. New York:
In M. Brown, J. Domingue, B. Price, & J. Stasko (Eds.), John Wily & Sons, Inc.
Software visualization: Programming as a multimedia
Banks, J. (2000, December 10-13). Introduction to Simu-
experience (pp. 369–381). Cambridge, MA: The MIT
lation. In J. A. Joines, R. R. Barton, K. Kang, & P. A.
Press.
Fishwick (Eds.), Proceedings of the 2000 Winter Simula-
Baker, R. S., Boilen, M., Goodrich, M. T., Tamassia, R., tion Conference, Orlando, FL, (pp. 510-517). San Diego,
& Stibel, B. A. (1999). Testers and visualizers for teach- CA: Society for Computer Simulation International.
ing data structures. In Proceedings of the 30th SIGCSE


Banks, J. (2001, December). Panel session: Education for simulation practice – five perspectives. In B. A. Peters, J. S. Smith, D. J. Medeiros, & M. W. Rohrer (Eds.), Proceedings of the 2001 Winter Simulation Conference, Arlington, VA (pp. 1571-1579).

Banks, J., Carson, J. S., II, Nelson, B. L., & Nicol, D. M. (2005). Discrete event simulation (4th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.

Banks, J., Gerstein, D., & Searles, S. P. (1998). Modeling Processes, Validation, and Verification of Complex Simulations: A Survey. Methodology and Validation: Simulation Series, 19(1), 13–18.

Barlas, Y., & Carpenter, S. (1990). Philosophical roots of model validation: Two paradigms. System Dynamics Review, 6(2), 148–166. doi:10.1002/sdr.4260060203

Bäsken, M., & Näher, S. (2001). GeoWin: a generic tool for interactive visualization of geometric algorithms. In S. Diehl (Ed.), Software visualization: International seminar (pp. 88-100). Dagstuhl, Germany: Springer.

Battista, G. D., Eades, P., Tamassia, R., & Tollis, I. (1999). Graph drawing: Algorithms for the visualization of graphs. Upper Saddle River, NJ: Prentice Hall.

Bečvář, M., Kubátová, H., & Novotný, M. (2006). Massive digital design education for a large number of undergraduate students. Proceedings of EWME 2006 (pp. 108–111). Stockholm: Royal Institute of Technology.

Benford, S., Burke, E., Foxley, E., Gutteridge, N., & Zin, A. M. (1993). Ceilidh: A course administration and marking system. In Proceedings of the 1st international conference of computer based learning, Vienna, Austria.

Bernardi, S., Donatelli, S., & Horvàth, A. (2001, September). Implementing Compositionality for Stochastic Petri Nets. Journal of Software Tools for Technology Transfer, 3(4), 417–430.

Bertoni, H. L., Honcharenko, W., Macel, L. R., & Xia, H. H. (1994). UHF propagation prediction for wireless personal communications. Proceedings of the IEEE, 82(9), 1333–1359. doi:10.1109/5.317081

Best, E. (1986, September). COSY: Its Relation to Nets and CSP. In W. Brauer, W. Reisig, & G. Rozenberg (Eds.), Petri Nets: Central Models and Their Properties, Advances in Petri Nets (Part II) (pp. 416-440). Bad Honnef, Germany: Springer.

Better, M., Glover, F., & Laguna, M. (2007). Advances in analytics: Integrating dynamic data mining with simulation optimization. IBM Journal of Research and Development, 51(3/4).

Bian, Y., Poplewell, A., & O'Reilly, J. J. (1994). Novel simulation technique for assessing coding system performance. IEE Electronics Letters, 30(23), 1920–1921. doi:10.1049/el:19941297

Bianchi, C., & Bivona, E. (1999). Commercial and financial policies in small and micro family firms: the small business growth management flight simulator. Simulation and Gaming. Thousand Oaks, CA: Sage Publications.

Bianchi, G. (2000). Performance analysis of the IEEE 802.11 distributed coordination function. IEEE Journal on Selected Areas in Communications, 18(3), 535–547. doi:10.1109/49.840210

Binzegger, T., Douglas, R. J., & Martin, K. A. C. (2004). A quantitative map of the circuit of cat primary visual cortex. The Journal of Neuroscience, 24(39), 8441–8453. doi:10.1523/JNEUROSCI.1400-04.2004

Birta, L. G. (2003). A Perspective of the Modeling and Simulation Body of Knowledge. Modeling & Simulation Magazine, 2(1), 16–19.

Birta, L. G. (2003). The Quest for the Modeling and Simulation Body of Knowledge. Keynote presentation at the Sixth Conference on Computer Simulation and Industry Applications, Instituto Tecnologico de Tijuana, Mexico, February 19-21, 2003.

Blasak, R., Armel, W., Starks, D., & Hayduk, M. (2003). The Use of Simulation to Evaluate Hospital Operations between the ED and Medical Telemetry Unit. In S. Chick et al. (Eds.), Proceedings of the 2003 Winter Simulation Conference (pp. 1887-1893). Washington, DC: IEEE.


Bologna Process (2008). Strasbourg, France: Council of Europe, Higher Education and Research. Retrieved August 15, 2008, from http://www.coe.int/t/dg4/highereducation/EHEA2010/BolognaPedestrians_en.asp

Boroni, C. M., Eneboe, T. J., Goosey, F. W., Ross, J. A., & Ross, R. J. (1996). Dancing with Dynalab. In 27th SIGCSE technical symposium on computer science education (pp. 135-139). New York: ACM Press.

Boskers, N. D., & AbouRizk, S. M. (2005). Modeling scheduling uncertainty in capital construction projects. Proceedings of the 2005 Winter Simulation Conference, Orlando, FL (pp. 1500-1507).

Boumans, M. (2006). The difference between answering a 'why' question and answering a 'how much' question. In J. Lenhard, G. Küppers, & T. Shinn (Eds.), Simulation: Pragmatic construction of reality; Sociology of the Sciences Yearbook (pp. 107-124). Dordrecht, The Netherlands: Springer.

Brade, D. (2000). Enhancing M&S Accreditation by Structuring V&V Results. Proceedings of the Winter Simulation Conference (pp. 840-848).

Brade, D. (2003). A Generalized Process for the Verification and Validation of Models and Simulation Results. Dissertation, Universität der Bundeswehr München, Germany.

Brade, D., & Lehmann, A. (2002). Model Validation and Verification. In Modeling and Simulation Environment for Satellite and Terrestrial Communication Networks: Proceedings of the European COST Telecommunication Symposium. Boston: Kluwer Academic Publishers.

Brader, J. M., Senn, W., & Fusi, S. (2007). Learning real-world stimuli in a neural network with spike-driven synaptic dynamics. Neural Computation, 19(11), 2881–2912. doi:10.1162/neco.2007.19.11.2881

Bradu, B., Gayet, P., & Niculescu, S.-I. (2007). A Dynamic Simulator for Large-Scale Cryogenic Systems. In R. Karba & B. Zupančič (Eds.), Proc. EUROSIM (pp. 1-8).

Braun, W. R., & Dersch, U. (1991). A physical mobile radio channel model. IEEE Transactions on Vehicular Technology, 40(2), 472–482. doi:10.1109/25.289429

Brehmer, B. (1992). Dynamic decision making: human control of complex systems. Acta Psychologica, 81, 211–241. doi:10.1016/0001-6918(92)90019-A

Breierova, L., & Choudhari, M. (1996). An introduction to sensitivity analysis. MIT System Dynamics in Education Project.

Bremermann, H. J. (1962). Optimization through evolution and recombination. In M. C. Yovits, G. T. Jacobi, & G. D. Goldstein (Eds.), Self-organizing systems (pp. 93-106). Washington, DC: Spartan Books.

Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, D., Bower, J. M., et al. (2007). Simulation of networks of spiking neurons: A review of tools and strategies. Journal of Computational Neuroscience, 23, 349–398. doi:10.1007/s10827-007-0038-6

Bridgeman, S., Goodrich, M. T., Kobourov, S. G., & Tamassia, R. (2000). PILOT: An interactive tool for learning and grading. In Proceedings of the 31st SIGCSE technical symposium on computer science education (pp. 139-143). New York: ACM Press. Retrieved from http://citeseer.ist.psu.edu/bridgeman00pilot.html

Broadcom. (2003). IEEE 802.11g: the new mainstream wireless LAN standard. Retrieved May 23, 2007, from http://www.54g.org/pdf/802.11g-WP104-RDS1

Brown, M. H. (1988). Algorithm animation. Cambridge, MA: MIT Press.

Brown, M. H., & Hershberger, J. (1992). Color and sound in algorithm animation. Computer, 25(12), 52–63. doi:10.1109/2.179117

Brown, M. H., & Raisamo, R. (1997). JCAT: Collaborative active textbooks using Java. Computer Networks and ISDN Systems, 29(14), 1577–1586. doi:10.1016/S0169-7552(97)00090-1

Brown, P. J. (1980). Writing Interactive Compilers and Interpreters. Chichester, UK: John Wiley & Sons.

Brown, R. (1988). Calendar queues: A fast O(1) priority queue implementation for the simulation event set problem. Communications of the ACM, 31(10), 1220–1227. doi:10.1145/63039.63045


Bullington, K. (1977). Radio propagation for vehicular communications. IEEE Transactions on Vehicular Technology, 26(4), 295–308. doi:10.1109/T-VT.1977.23698

Burks, A. W., & Neumann, J. v. (1966). Theory of self-reproducing automata. Urbana and London: University of Illinois Press.

Cabac, L., Duvignau, M., Moldt, D., & Rölke, H. (2005, June). Modeling Dynamic Architectures Using Nets-Within-Nets. In G. Ciardo & P. Darondeau (Eds.), Proceedings of the 26th International Conference on Applications and Theory of Petri Nets (ICATPN 2005) (pp. 148-167). Miami, FL: Springer.

Cairo, O., Aldeco, A., & Algorri, M. (2001). Virtual Museum's Assistant. In Proceedings of the 2nd Asia-Pacific Conference on Intelligent Agent Technology. Hackensack, NJ: World Scientific Publishing Co. Pte. Ltd.

Cali, F., Conti, M., & Gregori, E. (2000). IEEE 802.11 protocol: design and performance evaluation of an adaptive backoff mechanism. IEEE Journal on Selected Areas in Communications, 18(9), 1774–1786. doi:10.1109/49.872963

Capra, L., & Cazzola, W. (2007, December). Self-Evolving Petri Nets. Journal of Universal Computer Science, 13(13), 2002–2034.

Capra, L., & Cazzola, W. (2007, September 26-29). A Reflective PN-based Approach to Dynamic Workflow Change. In Proceedings of the 9th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC'07) (pp. 533-540). Timisoara, Romania: IEEE CS.

Capra, L., & Cazzola, W. (2009). An Introduction to Reflective Petri Nets. In E. M. O. Abu-Taieh (Ed.), Handbook of Research on Discrete Event Simulation Environments: Technologies and Applications. Hershey, PA: IGI Global.

Capra, L., & Cazzola, W. (2009). Trying out Reflective Petri Nets on a Dynamic Workflow Case. In E. M. O. Abu-Taieh (Ed.), Handbook of Research on Discrete Event Simulation Environments: Technologies and Applications. Hershey, PA: IGI Global.

Capra, L., De Pierro, M., & Franceschinis, G. (2005, June). A High Level Language for Structural Relations in Well-Formed Nets. In G. Ciardo & P. Darondeau (Eds.), Proceedings of the 26th international conference on application and theory of Petri nets (pp. 168-187). Miami, FL: Springer.

Carson, J. S., II. (2004, December). Introduction to Modeling and Simulation. Paper presented at the 2004 Winter Simulation Conference (pp. 1283-1289).

Carter, M. (2002). Health Care Management, Diagnosis: Mismanagement of Resources. Operations Research/Management Science (OR/MS) Today, 29(2), 26–32.

Carter, M., & Blake, J. (2004). Using simulation in an acute-care hospital: easier said than done. In M. Brandeau, F. Sainfort, & W. Pierskala (Eds.), Operations Research and Health Care: A Handbook of Methods and Applications (pp. 192-215). Boston: Kluwer Academic Publisher.

Casti, J. L. (1995). Complexification: Explaining a paradoxical world through the science of surprise (1st ed.). New York: HarperPerennial.

Cattaneo, G., Italiano, G. F., & Ferraro-Petrillo, U. (2002, August). CATAI: Concurrent algorithms and data types animation over the internet. Journal of Visual Languages and Computing, 13(4), 391–419. doi:10.1006/jvlc.2002.0230

CAVIAR project website (n.d.). Available at http://www.imse.cnm.es/caviar/

Cazzola, W. (1998, July 20-24). Evaluation of Object-Oriented Reflective Models. In Proceedings of the ECOOP Workshop on Reflective Object-Oriented Programming and Systems (EWROOPS'98), Brussels, Belgium.

Cazzola, W., Ghoneim, A., & Saake, G. (2004, July). Software Evolution through Dynamic Adaptation of Its OO Design. In H.-D. Ehrich, J.-J. Meyer, & M. D. Ryan (Eds.), Objects, Agents and Features: Structuring Mechanisms for Contemporary Software (pp. 69-84). Heidelberg, Germany: Springer-Verlag.


Cebenoyan, A., & Strahan, P. (2004). Risk management, capital structure and lending at banks. Journal of Banking & Finance, 28, 19–43. doi:10.1016/S0378-4266(02)00391-6

Cellier, F., & Kofman, E. (2005). Continuous system simulation. Berlin: Springer.

Cercas, F. A. B., Cartaxo, A. V. T., & Sebastião, P. J. A. (1999). Performance of TCH codes with independent and burst errors using efficient techniques. 50th IEEE Vehicular Technology Conference (VTC99-Fall), Amsterdam, Netherlands (pp. 2536-2540).

Cercas, F. A., Tomlinson, M., & Albuquerque, A. A. (1993). TCH: A new family of cyclic codes of length 2^m. International Symposium on Information Theory, IEEE Proceedings (p. 198).

Chandrakanthi, M., Ruwanpura, J. Y., Hettiaratchi, P., & Prado, B. (2002). Optimization of the waste management for construction projects using simulation. Proceedings of the 2002 Winter Simulation Conference, San Diego, CA (pp. 1771-1777).

Chang, X. (1999). Network simulation with OPNET. In P. A. Farrington, H. B. Nembhard, D. T. Sturrock, & G. W. Evans (Eds.), Proceedings of the 1999 Winter Simulation Conference (pp. 307-314).

Chiola, G., Dutheillet, C., Franceschinis, G., & Haddad, S. (1990, June). On Well-Formed Coloured Nets and Their Symbolic Reachability Graph. In Proceedings of the 11th international conference on application and theory of Petri nets (pp. 387-410). Paris, France.

Chiola, G., Dutheillet, C., Franceschinis, G., & Haddad, S. (1993, November). Stochastic Well-Formed Coloured Nets for Symmetric Modeling Applications. IEEE Transactions on Computers, 42(11), 1343–1360. doi:10.1109/12.247838

Chiola, G., Franceschinis, G., Gaeta, R., & Ribaudo, M. (1995, November). GreatSPN 1.7: Graphical Editor and Analyzer for Timed and Stochastic Petri Nets. Performance Evaluation, 24(1-2), 47–68. doi:10.1016/0166-5316(95)00008-L

Cho, Y. K., Hu, X. L., & Zeigler, B. P. (2003). The RT-DEVS/CORBA environment for simulation-based design of distributed real-time systems. Simulation Transactions, 79(4), 197–210. doi:10.1177/0037549703038880

Choi, S., Park, K., & Kim, C. (2005). On the performance characteristics of WLANs: revisited. Paper presented at the 2005 ACM SIGMETRICS international conference on Measurement and modeling of computer systems (pp. 97–108).

Chow, J. (1999). Development of channel models for simulation of wireless systems in OPNET. Transactions of the Society for Computer Simulation International, 16(3), 86–92.

Chrysanthou, Y., Tecchia, F., Loscos, C., & Conroy, R. (2004). Densely populated urban environments. The Engineering and Physical Sciences Research Council. Retrieved from http://www.cs.ucl.ac.uk/research/vr/Projects/Crowds/

Chung, A. C. (2004). Simulation and Modeling Handbook: A Practical Approach. Boca Raton, FL: CRC Press.

Chung, W., Chen, H., & Nunamaker, J. F., Jr. (2005). A Visual Framework for Knowledge Discovery on the Web: An Empirical Study of Business Intelligence Exploration. Journal of Management Information Systems, 21(4), 57–84.

Clarke, R. H. (1968). A statistical theory of mobile radio reception. The Bell System Technical Journal, 47, 957–1000.

Claverol, E., Brown, A., & Chad, J. (2002). Discrete simulation of large aggregates of neurons. Neurocomputing, 47, 277–297. doi:10.1016/S0925-2312(01)00629-4

Clifford, J., Gaehde, S., Marinello, J., Andrews, M., & Stephens, C. (2008). Improving Inpatient and Emergency Department Flow for Veterans. Improvement report, Institute for Healthcare Improvement. Retrieved from http://www.IHI.org/ihi

Cody, F., Kreulen, J. T., Krishna, V., & Spangler, W. S. (2002). The Integration of Business Intelligence and Knowledge Management. IBM Systems Journal, 41(4), 697–713.

Coloured Petri Nets at the University of Aarhus (2009). Retrieved from http://www.daimi.au.dk/CPnets/

Cor, H., & Martinez, J. C. (1999). A case study in the quantification of a change in the conditions of a highway construction operation. Proceedings of the 1999 Winter Simulation Conference, Phoenix, AZ (pp. 1007-1009).

Corbitt, T. (2003, November). Business Intelligence and Data Mining. Management Services Magazine.

Council of Graduate Schools. (2007). Findings from the 2006 CGS International Graduate Admissions Survey, Phase III: Admissions and Enrolment, Oct. 2006, revised March 2007. Council of Graduate Schools Research Report. Washington, DC: Council of Graduate Schools.

Crain, R. C. (1997). Simulation using GPSS/H. Proceedings of the 1997 Winter Simulation Conference, Atlanta, GA (pp. 567-573).

Crews, W. (n.d.). Gwinnett County Public Schools. Retrieved 2008, from Gwinnett County Public Schools: http://www.crews.org/curriculum/ex/compsci/8thgrade/stkmkt/index.htm

CRONOS project website (n.d.). Available at www.cronosproject.net

Crosbie, R. E. (2000, December). Simulation curriculum: a model curriculum in modeling and simulation: do we need it? Can we do it? In J. A. Joines, R. R. Barton, K. Kang, & P. A. Fishwick (Eds.), Proceedings of the 2000 Winter Simulation Conference, Piscataway, NJ (pp. 1666-1668). Washington, DC: IEEE.

Czerwinski, T. (1998). Coping with the Bounds: Speculations on Nonlinearity in Military Affairs. Washington, DC: National Defense University Press.

Molkdar, D. (1991). Review on radio propagation into and within buildings. IEE Proceedings, 138(11), 61-73.

D'Alesandro, J. (2008). Queuing Theory Misplaced in Hospitals. Management News from the Front, Process Improvement. PHLO. Posted Feb 19, 2008, at http://phlo.typepad.com

Dahmann, J., Salisbury, M., Turrel, C., Barry, P., & Blemberg, P. (1999). HLA and beyond: Interoperability challenges. Paper No. 99F-SIW-073 presented at the Fall Simulation Interoperability Workshop, Orlando, FL, USA.

Davis, P. K. (1992). Generalizing concepts and methods of verification, validation, and accreditation for military simulations. Santa Monica, CA: RAND.

Davis, P. K., & Blumenthal, D. (1991). The base of sand problem: A white paper on the state of military combat modeling. Santa Monica, CA: RAND.

de Bruin, A., van Rossum, A., Visser, M., & Koole, G. (2007). Modeling the emergency cardiac in-patient flow: an application of queuing theory. Health Care Management Science, 10, 125–137. doi:10.1007/s10729-007-9009-8

Dearie, J., & Warfield, T. (1976, July 12-14). The development and use of a simulation model of an outpatient clinic. Proceedings of the 1976 Summer Computer Simulation Conference, Simulation Council, Washington, DC (pp. 554-558).

Dehaene, S., & Changeux, J.-P. (2005). Ongoing spontaneous activity controls access to consciousness: A neuronal model for inattentional blindness. Public Library of Science Biology, 3(5), 910–927.

Deitel, H. M., & Deitel, P. J. (1998). C++ how to program. Upper Saddle River, NJ: Prentice Hall.

Delorme, A., & Thorpe, S. J. (2003). SpikeNET: An event-driven simulation package for modeling large networks of spiking neurons. Network: Computation in Neural Systems, 14, 613–627. doi:10.1088/0954-898X/14/4/301

Department of Defense. (1996). Modeling & Simulation Verification, Validation, and Accreditation (US DoD Instruction 5000.61). Washington, DC: Author.

Desel, J., & Esparza, J. (1995). Free Choice Petri Nets (Cambridge Tracts in Theoretical Computer Science Vol. 40). New York: Cambridge University Press.


DEVS-Standardization-Group. (2008). General Info. Retrieved September 1, 2008, from http://cell-devs.sce.carleton.ca/devsgroup/

Dexter, F., Macario, A., Traub, R., Hopwood, M., & Lubarsky, D. (1999). An Operating Room Scheduling Strategy to Maximize the Use of Operating Room Block Time: Computer Simulation of Patient Scheduling and Survey of Patients' Preferences for Surgical Waiting Time. Anesthesia and Analgesia, 89, 7–20. doi:10.1097/00000539-199907000-00003

Diesmann, M., & Gewaltig, M.-O. (2002). NEST: An environment for neural systems simulations. In V. Macho (Ed.), Forschung und wissenschaftliches Rechnen. Heinz-Billing-Preis, GWDG-Bericht.

Digilent. (2009). Retrieved from http://www.digilentinc.com

Dobrev, P., Kalaydjiev, O., & Angelova, G. (2007). From Conceptual Structures to Semantic Interoperability of Content (LNCS Vol. 4604, pp. 192-205). Berlin: Springer-Verlag.

EBC Web site (2001). EBC achievements: civil engineering/earthworks. Retrieved from http://www.ebcinc.qc.ca/

Elfving, J. A., & Tommelein, I. D. (2003). Impact of multitasking and merge bias on procurement of complex equipment. Proceedings of the 2003 Winter Simulation Conference, New Orleans, LA (pp. 1527-1533).

Ellis, C., & Keddara, K. (2000, August). ML-DEWS: Modeling Language to Support Dynamic Evolution within Workflow Systems. Computer Supported Cooperative Work, 9(3-4), 293–333. doi:10.1023/A:1008799125984

Engineering and Physical Sciences Research Council (EPSRC). (2004). Review of Research Status of Operational Research (OR) in the UK. Swindon, UK. Retrieved from www.epsrc.ac.uk

English, J., & Siviter, P. (2000). Experience with an automatically assessed course. In Proceedings of the 5th annual SIGCSE/SIGCUE conference on innovation and technology in computer science education (ITiCSE'00) (pp. 168-171), Helsinki, Finland. New York: ACM Press.

ENOVIA. (2007, September). Dassault Systèmes PLM solutions for the mid-market [white paper]. Retrieved from http://www.3ds.com/fileadmin/brands/enovia/pdf/whitepapers/CIMdata-DS_PLM_for_the_MidMarket-Program_review-Sep2007.pdf

Epstein, J. M., & Axtell, R. (1996). Growing artificial societies: Social science from the bottom up. Washington, DC: Brookings Institution Press.

Erhard, W., Reinsch, A., & Schober, T. (2001). Modeling and Verification of Sequential Control Path Using Petri Nets. In Proc. DESDes'01 (pp. 41–46). Zielona Gora, Poland.

Erl, T. (2005). Service-Oriented Architecture: Concepts, Technology and Design. Upper Saddle River, NJ: Prentice Hall.

Ester, M., Kriegel, H. P., Sander, J., Wimmer, M., & Xu, X. (1998). Incremental clustering for mining in a data warehousing environment. Proceedings of the 24th VLDB Conference, New York.

Everett, J. E. (2002). A decision support simulation model for the management of an elective surgery waiting system. Health Care Management Science, 5(2), 89–95. doi:10.1023/A:1014468613635

Ewing, G., Pawlikowski, K., & McNickle, D. (1999, June). Akaroa 2: exploiting network computing by distributed stochastic simulation. Paper presented at the European Simulation Multiconference (ESM'99), Warsaw, Poland (pp. 175-181).

Fall, K., & Varadhan, K. (2008). The ns manual. The VINT project. Retrieved February 10, 2008, from http://www.isi.edu/nsnam/ns/doc/

Family of Standards for Modeling and Simulation (M&S) High Level Architecture (HLA): (a) IEEE 1516-2000 Framework and Rules; (b) IEEE 1516.1-2000 Federate Interface Specification; (c) IEEE 1516.2-2000 Object Model Template (OMT) Specification. IEEE 1516-2000, IEEE Press.

Fang, X., Sheng, O. R. L., Gao, W., & Iyer, B. R. (2006). A data-mining-based prefetching approach to caching for network storage systems. INFORMS Journal on Computing, 18(2), 267–282. doi:10.1287/ijoc.1050.0142

Fantacci, R., Pecorella, T., & Habib, I. (2004). Proposal and performance evaluation of an efficient multiple-access protocol for LEO satellite packet networks. IEEE Journal on Selected Areas in Communications, 22(3), 538–545. doi:10.1109/JSAC.2004.823437

Farid, F., & Koning, T. L. (1994). Simulation verifies queuing program for selecting loader-truck fleets. Journal of Construction Engineering and Management, 120(2), 386–404. doi:10.1061/(ASCE)0733-9364(1994)120:2(386)

Farwer, B., & Moldt, D. (Eds.). (2005, August). Object Petri Nets, Process, and Object Calculi. Hamburg, Germany: Universität Hamburg, Fachbereich Informatik.

Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P. (1996). From data mining to knowledge discovery in databases. American Association for Artificial Intelligence, AI Magazine, 37–54.

Feinstein, A. H., & Cannon, H. M. (2002). Constructs of simulation evaluation. Simulation & Gaming, 33(4), 425–440. doi:10.1177/1046878102238606

Feinstein, A. H., & Cannon, H. M. (2003). A hermeneutical approach to external validation of simulation models. Simulation & Gaming, 34(2), 186–197. doi:10.1177/1046878103034002002

Ferber, J. (1999). Multi-agent systems: An introduction to distributed artificial intelligence. Harlow, UK: Addison-Wesley.

Feuerstein, M. J., Blackard, K. L., Rappaport, T. S., Seidel, S. Y., & Xia, H. H. (1994). Path loss, delay spread, and outage models as functions of antenna height for microcellular system design. IEEE Transactions on Vehicular Technology, 43(3), 487–489. doi:10.1109/25.312809

Filippi, J. B., & Bisgambiglia, P. (2004). JDEVS: an implementation of a DEVS based formal framework for environmental modelling. Environmental Modelling & Software, 19(3), 261–274. doi:10.1016/j.envsoft.2003.08.016

Fishman, G. S., & Kiviat, P. J. (1968). The statistics of discrete-event simulation. Simulation, 10, 185–195. doi:10.1177/003754976801000406

Fishwick, P. (2002). The Art of Modeling. Modeling & Simulation Magazine, 1(1), 36.

Fleischer, R., & Kucera, L. (2001). Algorithm animation for teaching. In S. Diehl (Ed.), Software visualization: International seminar (pp. 113-128). Dagstuhl, Germany: Springer.

Fone, D., Hollinghurst, S., & Temple, M. (2003). Systematic review of the use and value of computer simulation modeling in population health and health care delivery. Journal of Public Health Medicine, 25(4), 325–335. doi:10.1093/pubmed/fdg075

Fong, J., Li, Q., & Huang, S. (2003). Universal data warehousing based on a meta-data modeling approach. International Journal of Cooperative Information Systems, 12(3), 318–325. doi:10.1142/S0218843003000772

Forouzan, B. A. (2007). Data Communications and Networking (4th ed.). Boston: McGraw-Hill.

Forrester, J. (1961). Industrial Dynamics. New York: MIT Press and Wiley Inc.

Forrester, J. (1992). System dynamics, systems thinking, and soft OR. System Dynamics Review, 10(2).

French, R. C. (1979). The effect of fading and shadowing on channel reuse in mobile radio. IEEE Transactions on Vehicular Technology, 28(3), 171–181. doi:10.1109/T-VT.1979.23788

Frigg, R., & Reiss, J. (2008). The philosophy of simulation: Hot new issues or same old stew?

Gershenson, C. (2002). Complex philosophy. The First Biennial Seminar on the Philosophical, Methodological.

Fujimoto, R. (1998). Time management in the High-Level Architecture. Simulation: Transactions of the Society for Modeling and Simulation International, 71(6), 388–400. doi:10.1177/003754979807100604

Fujimoto, R. (2000). Parallel and distributed simulation systems. Mahwah, NJ: John Wiley and Sons, Inc.


Furber, S. B., Temple, S., & Brown, A. D. (2006). High-performance computing for systems of spiking neurons. In Proceedings of the AISB'06 Workshop on GC5: Architecture of Brain and Mind (Vol. 2, pp. 29-36). Bristol: AISB.

Fusk, H., Lawniczak, A. T., & Volkov, S. (2001). Packet delay in models of data networks. ACM Transactions on Modeling and Computer Simulation, 11(3), 233–250. doi:10.1145/502109.502110

Gallivan, S., Utley, M., Treasure, T., & Valencia, O. (2002). Booked inpatient admissions and hospital capacity: mathematical modeling study. British Medical Journal, 324, 280–282. doi:10.1136/bmj.324.7332.280

Gamez, D. (2007). SpikeStream: A fast and flexible simulator of spiking neural networks. In J. M. de Sá, L. A. Alexandre, W. Duch, & D. P. Mandic (Eds.), Proceedings of ICANN 2007 (Vol. 4668, pp. 370-379). Berlin: Springer Verlag.

Gamez, D. (2008). Progress in machine consciousness. Consciousness and Cognition, 17(3), 887–910. doi:10.1016/j.concog.2007.04.005

Gamez, D. (2008). The development and analysis of conscious machines. Unpublished doctoral dissertation, University of Essex, UK. Available at http://www.davidgamez.eu/mc-thesis/

Gamez, D., Newcombe, R., Holland, O., & Knight, R. (2006). Two simulation tools for biologically inspired virtual robotics. Proceedings of the IEEE 5th Chapter Conference on Advances in Cybernetic Systems (pp. 85-90). Sheffield, UK: IEEE.

Garcia, M. N. M., Roman, I. R., Penalvo, F. J. G., & Bonilla, M. T. (2008). An association rule mining method for estimating the impact of project management policies on software quality, development time and effort. Expert Systems with Applications, 34, 522–529. doi:10.1016/j.eswa.2006.09.022

Garcia, M., Centeno, M., Rivera, C., & DeCario, N. (1995). Reducing Time in an Emergency Room via a Fast-track. In C. Alexopoulos et al. (Eds.), Proceedings of the 1995 Winter Simulation Conference (pp. 1048-1053). Washington, DC: IEEE.

Gartner Group. (1996, September). Retrieved November 12, 2005, from http://www.innerworx.co.za/products.htm

Gass, S. I. (1999). Decision-Aiding Models: Validation, Assessment, and Related Issues for Policy Analysis. Operations Research, 31(4), 601–663.

Gatfield, T., Barker, M., & Graham, P. (1999). Measuring Student Quality Variables and the Implications for Management Practices in Higher Education Institutions: an Australian and International Student Perspective. Journal of Higher Education Policy and Management, 21(2). doi:10.1080/1360080990210210

Gerstner, W., & Kistler, W. (2002). Spiking neuron models. Cambridge, UK: Cambridge University Press.

Girault, C., & Valk, R. (2003). Petri Nets for Systems Engineering. Berlin: Springer-Verlag.

Girija, N., & Srivatsa, S. K. (2006). A Research Study: Using Data Mining in Knowledge Base Business Strategies. Information Technology Journal, 5(3), 590–600. doi:10.3923/itj.2006.590.600

Gitman, L. (2000). Managerial Finance: Brief. Reading, MA: Addison-Wesley.

Gleick, J. (1987). Chaos: Making a new science. New York: Viking.

GloMoSim. (2007). GloMoSim Manual. Retrieved April 20, 2007, from http://pcl.cs.ucla.edu/projects/glomosim/GloMoSimManual.html

Gloor, P. A. (1998). User interface issues for algorithm animation. In M. Brown, J. Domingue, B. Price, & J. Stasko (Eds.), Software visualization: Programming as a multimedia experience (pp. 145–152). Cambridge, MA: The MIT Press.

Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38, 173-198.


Goldratt, E., & Cox, J. (2004). The Goal (3rd Ed., p. 384). Great Barrington, MA: North River Press.

Gomes, L., & Barros, J.-P. (2001). Using Hierarchical Structuring Mechanism with Petri Nets for PLD Based System Design. In Proc. DESDes'01 (pp. 47-52), Zielona Gora, Poland.

Gorgone, J. T., Gray, P., Stohr, E. A., Valacich, J. S., & Wigand, R. T. (2006). MSIS 2006. Model Curriculum and Guidelines for Graduate Degree Programs in Information Systems. Communications of the Association for Information Systems, 17, 1–56.

Gowda, R. K., Singh, A., & Connolly, M. (1998). Holistic enhancement of the production analysis of bituminous paving operations. Construction Management and Economics, 16(4), 417–432. doi:10.1080/014461998372204

Graco, W., Semenova, T., & Dubossarsky, E. (2007). Toward Knowledge-Driven Data Mining. ACM SIGKDD Workshop on Domain Driven Data Mining (DDDM2007), (pp. 49-54).

Green, D. B., & Obaidat, M. S. (2003). Modeling and simulation of IEEE 802.11 WLAN mobile ad hoc networks using topology broadcast reverse-path forwarding (TBRPF). Computer Communications, 26(15), 1741–1746. doi:10.1016/S0140-3664(03)00043-4

Green, L. (2004). Capacity Planning and Management in Hospitals. In M. Brandeau, F. Sainfort, & W. Pierskala (Eds.), Operations Research and Health Care: A Handbook of Methods and Applications (pp. 15-41). Boston: Kluwer Academic Publisher.

Green, L. (2006). Queuing Analysis in Healthcare. In R. Hall (Ed.), Patient Flow: Reducing Delay in Healthcare Delivery (pp. 281-307). New York: Springer.

Green, L., Kolesar, P., & Soares, J. (2001). Improving the SIPP approach for staffing service systems that have cyclic demands. Operations Research, 49, 549–564. doi:10.1287/opre.49.4.549.11228

Green, L., Kolesar, P., & Svoronos, A. (1991). Some effects of non-stationarity on multi-server Markovian queuing systems. Operations Research, 39, 502–511. doi:10.1287/opre.39.3.502

Green, L., Soares, J., Giglio, J., & Green, R. (2006). Using Queuing Theory to Increase the Effectiveness of Emergency Department Provider Staffing. Academic Emergency Medicine, 13, 61–68. doi:10.1111/j.1553-2712.2006.tb00985.x

Griffin, A., Lacetera, J., Sharp, R., & Tolk, A. (Eds.). (2002). C4ISR/Sim Technical Reference Model Study Group Final Report (C4ISR/Sim TRM) (SISO-Reference Document 008-2002-R2). Simulation Interoperability Standards Organization, Orlando, FL.

Grigori, D., Casati, F., Castellanos, M., Sayal, U. M., & Shan, M. C. (2004). Business Process Intelligence. Computers in Industry, 53, 321–343. doi:10.1016/j.compind.2003.10.007

Grillmeyer, O. (1999). An interactive multimedia textbook for introductory computer science. In The proceedings of the thirtieth SIGCSE technical symposium on computer science education (pp. 286–290). New York: ACM Press.

Gunal, M., & Pidd, M. (2006). Understanding Accident and Emergency Department Performance using Simulation. In L. Perrone, et al. (Eds.), Proceedings of the 2006 Winter Simulation Conference (pp. 446-452). Washington, DC: IEEE.

Ha, T. T. (1990). Digital satellite communications (2nd Ed.). New York: McGraw-Hill.

Hagmann, C., Lange, D., & Wright, D. (2008, January). Cosmic-ray Shower Library (CRY). Retrieved October 2008, from Lawrence Livermore National Laboratory, http://nuclear.llnl.gov/

Hajjar, D., & AbouRizk, S. M. (1999). Simphony: an environment for building special purpose simulation. Proceedings of the 1999 Winter Simulation Conference, Phoenix, AZ, (pp. 998-1006).

Hall, R. (1990). Queuing methods for Service and Manufacturing. Upper Saddle River, NJ: Prentice Hall.

Halpin, D. W. (1977). CYCLONE-method for modeling job site processes. Journal of the Construction Division, 103(3), 489–499.


Halpin, D. W., & Riggs, L. S. (1992). Planning and analysis of construction operations. New York: John Wiley & Sons, Inc.

Hanna, M., & Ruwanpura, J. (2007). Simulation tool for manpower forecast loading and resource leveling. Proceedings of the 2007 Winter Simulation Conference, Washington, DC, (pp. 2099-2103).

Hansen, F., & Meno, F. I. (1977). Mobile fading-Rayleigh and lognormal superimposed. IEEE Transactions on Vehicular Technology, 26(4), 332–335. doi:10.1109/T-VT.1977.23703

Hansen, S. R., Narayanan, N. H., & Schrimpsher, D. (2000, May). Helping learners visualize and comprehend algorithms. Interactive Multimedia Electronic Journal of Computer-Enhanced Learning, 2(1).

Haq, F., & Kunz, T. (2005). Simulation vs. emulation: Evaluating mobile ad hoc network routing protocols. Paper presented at the International Workshop on Wireless Ad-hoc Networks (IWWAN 2005).

Haraden, C., Nolan, T., & Litvak, E. (2003). Optimizing Patient Flow: Moving Patients Smoothly Through Acute Care Setting [White papers 2]. Institute for Healthcare Improvement Innovation Series, Cambridge, MA.

Harmon, S. Y. (2002, February-March). Can there be a Science of Simulation? Why should we care? Modeling & Simulation Magazine, 1(1).

Harrison, G., Shafer, A., & Mackay, M. (2005). Modeling Variability in Hospital Bed Occupancy. Health Care Management Science, 8, 325–334. doi:10.1007/s10729-005-4142-8

Hasenauer, H. (2006). Sustainable forest management: growth models for Europe. Berlin: Springer.

Hashemi, H. (1979). Simulation of the urban radio propagation channel. IEEE Transactions on Vehicular Technology, 28(3), 213–225. doi:10.1109/T-VT.1979.23791

Hashemi, H. (1993). Impulse response modelling of indoor radio propagation channels. IEEE Journal on Selected Areas in Communications, 11(7), 967–978. doi:10.1109/49.233210

Hashemi, H. (1993). The indoor radio propagation channel. Proceedings of the IEEE, 81(7), 943–968. doi:10.1109/5.231342

Hason, S. F. (1994). Feasibility and implementation of automation and robotics in Canadian building construction operation. M.Sc. Thesis, Center for Building Studies, Concordia University, Portland, OR.

Hassan, M., & Jain, R. (2003). High Performance TCP/IP Networking: Concepts, Issues, and Solutions. Upper Saddle River, NJ: Prentice-Hall.

Hata, M. (1980). Empirical formula for propagation loss in land mobile radio services. IEEE Transactions on Vehicular Technology, 29(3), 317–325. doi:10.1109/T-VT.1980.23859

Haydon, P. (2000). Neuroglial networks: Neurons and glia talk to each other. Current Biology, 10(19), 712–714. doi:10.1016/S0960-9822(00)00708-9

Haykin, S. (2001). Communication systems (4th Ed.). Chichester, UK: John Wiley & Sons, Inc.

Heidemann, J., Bulusu, N., Elson, J., Intanagonwiwat, C., Lan, K., & Xu, Y. (2001). Effects of detail in wireless network simulation. Paper presented at the SCS Multiconference on Distributed Simulation (pp. 3–11).

Helstrom, C. W. (1984). Probability and stochastic processes for engineers (1st Ed.). New York: MacMillan.

Henry, R. R., Whaley, K. M., & Forstall, B. (1990). The University of Washington illustrating compiler. In Proceedings of the ACM SIGPLAN'90 conference on programming language design and implementation (pp. 223-233).

Heusse, M., Rousseau, F., Berger-Sabbatel, G., & Duda, A. (2003, March 30-April 3). Performance anomaly of 802.11b. Paper presented at the IEEE INFOCOM'03 (pp. 836-843).

Hicheur, A., Barkaoui, K., & Boudiaf, N. (2006, September). Modeling Workflows with Recursive ECATNets. In Proceedings of the Eighth International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC'06) (pp. 389-398). Timisoara, Romania: IEEE CS.


Higgins, C., Symeonidis, P., & Tsintsifas, A. (2002). The marking system for CourseMaster. In Proceedings of the 7th annual conference on innovation and technology in computer science education (pp. 46–50). New York: ACM Press.

Hilen, D. (2000). Taylor Enterprise Dynamics (User Manual). Utrecht, The Netherlands: F&H Simulations B.V.

Hill, R. R., McIntyre, G. A., Tighe, T. R., & Bullock, R. K. (2003). Some experiments with agent-based combat models. Military Operations Research, 8(3), 17–28.

Hiller, L., Gosnell, T., Gronberg, J., & Wright, D. (2007, November). RadSrc Library and Application Manual. Retrieved October 2008, from http://nuclear.llnl.gov/

Hillier, F., & Yu, O. (1981). Queuing Tables and Graphs (pp. 1-231). New York: Elsevier.

Hlupic, V. (2000). Simulation Software: A Survey of Academic and Industrial Users. International Journal of Simulation, 1(1), 1–11.

Hoare, C. A. R. (1985). Communicating Sequential Processes. Upper Saddle River, NJ: Prentice Hall.

Hodges, J. S. (1991). Six or so things you can do with a bad model. Operations Research, 39(3), 355–365. doi:10.1287/opre.39.3.355

Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology, 117, 500–544.

Hoffmann, K., Ehrig, H., & Mossakowski, T. (2005, June). High-Level Nets with Nets and Rules as Tokens. In G. Ciardo & P. Darondeau (Eds.), Proceedings of the 26th International Conference on Applications and Theory of Petri Nets (ICATPN 2005) (pp. 268-288). Miami, FL: Springer.

Holland, J. H. (1995). Hidden order: How adaptation builds complexity. Cambridge, MA: Helix Books.

Huang, R., Grigoriadis, A. M., & Halpin, D. W. (1994). Simulation of cable-stayed bridges using DISCO. Proceedings of the 1994 Winter Simulation Conference, Orlando, FL, (pp. 1130-1136).

Huang, Y. (2002). Infrastructure, query optimization, data warehousing and data mining for scientific simulation. A thesis, University of Notre Dame, Notre Dame, IN.

Hürsch, W., & Videira Lopes, C. (1995, February). Separation of Concerns (Tech. Rep. No. NU-CCS-95-03). Northeastern University, Boston.

Hundhausen, C. D., Douglas, S. A., & Stasko, J. T. (2002, June). A meta-study of algorithm visualization effectiveness. Journal of Visual Languages and Computing, 13(3), 259–290. doi:10.1006/jvlc.2002.0237

Hundhausen, C., & Brown, J. (2005). What you see is what you code: A "radically-dynamic" algorithm visualization development model for novice learners. In Proceedings of the IEEE 2005 symposium on visual languages and human-centric computing.

Huyet, A. L. (2006). Optimization and analysis aid via data-mining for simulated production systems. European Journal of Operational Research, 173, 827–838. doi:10.1016/j.ejor.2005.07.026

Hydro-Quebec (1999). Sainte-Marguerite-3 hydroelectric project: in harmony with the environment.

Hyvönen, J., & Malmi, L. (1993). TRAKLA – a system for teaching algorithms using email and a graphical editor. In Proceedings of hypermedia in Vaasa (pp. 141-147).

IEEE. (2000). HLA Framework and Rules (Version IEEE 1516-2000). Washington, DC: IEEE Press.

Ihde, D. (2006). Models, models everywhere. In G. Küppers, J. Lenhard, & T. Shinn (Eds.), Simulation: Pragmatic construction of reality; sociology of the sciences yearbook (pp. 79-86). Dordrecht, The Netherlands: Springer.

IHI - Institute for Healthcare Improvement. (2008). Boston, MA: Author. Retrieved from http://www.ihi.org/IHI/Programs/ConferencesAndSeminars/ApplyingQueuingTheorytoHealthCareJune2008.htm

Ikegami, F., Yoshida, S., Takeuchi, T., & Umehira, M. (1984). Propagation factors controlling mean field on urban streets. IEEE Transactions on Antennas and Propagation, 32(8), 822–829. doi:10.1109/TAP.1984.1143419

Ilachinski, A. (2000). Irreducible semi-autonomous adaptive combat (ISAAC): An artificial-life approach to land warfare. Military Operations Research, 5(3), 29–47.

Ingolfsson, A., & Gallop, F. (2003). Queuing ToolPak 4.0. Retrieved from http://www.bus.ualberta.ca/aingolfsson/QTP/

Ioannou, P. G. (1989). UM-CYCLONE user's manual. Division of Construction Engineering and Management, Purdue University, West Lafayette, IN.

Ioannou, P. G. (1999). Construction of dam embankment with nonstationary queue. Proceedings of the 1999 Winter Simulation Conference, Phoenix, AZ, (pp. 921-928).

ITU-R, P.341-5 (1999). The concept of transmission loss for radio links [Recommendation].

Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14, 1569–1572. doi:10.1109/TNN.2003.820440

Izhikevich, E. M., & Edelman, G. M. (2008). Large-scale model of mammalian thalamocortical systems. Proceedings of the National Academy of Sciences of the United States of America, 105, 3593–3598. doi:10.1073/pnas.0712231105

Jackson, D., & Usher, M. (1997). Grading student programs using ASSYST. In Proceedings of 28th ACM SIGCSE symposium on computer science education (pp. 335-339).

Jackson, M. (2003). System Thinking: Creative Holism for Managers. Chichester, UK: Wiley.

Jacobson, H., Hall, S., & Swisher, J. (2006). Discrete-Event Simulation of Health Care Systems. In R. Hall (Ed.), Patient Flow: Reducing Delay in Healthcare Delivery (pp. 210-252). New York: Springer.

Jakes, W. C. (1974). Microwave mobile communications (1st Ed.). Washington, DC: IEEE Press.

Jaradat, R. (2009). Development and Performance Analysis of Optimal Multipoint Relaying Algorithm for Noisy Mobile Ad Hoc Networks. M.Sc Thesis, Department of Computer Science, Amman Arab University for Graduate Studies, Jordan.

Jaradat, Y. (2007). Development and Performance Analysis of a Probabilistic Flooding in Noisy Mobile Ad Hoc Networks. M.Sc Thesis, Department of Computer Science, Amman Arab University for Graduate Studies, Jordan.

Jarp Petri Nets Analyzer Home Page (2009). Retrieved from http://jarp.sourceforge.net/

Jaschik, S. (2006). Making Sense of 'Bologna Degrees.' Inside Higher Ed. Retrieved November 15, 2008 from http://www.insidehighered.com/news/2006/11/06/bologna

Jensen, K., & Rozenberg, G. (Eds.). (1991). High-Level Petri Nets: Theory and Applications. Berlin: Springer-Verlag.

Jensen, P. A., & Bard, J. F. (2003). Operations research models and methods. Hoboken, NJ: John Wiley & Sons.

Jermol, M., Lavrac, N., & Urbanci, T. (2003). Managing business intelligence in a virtual enterprise: A case study and knowledge management lessons learned. Journal of Intelligent & Fuzzy Systems, 14, 121–136.

Jeruchim, M., Balaban, P., & Shanmugan, K. S. (2000). Simulation of communication systems – modelling methodology and techniques (2nd ed.). Amsterdam, The Netherlands: Kluwer Academic.

Jimenez, G., & Saurian, J. (2003). Collateral, type of lender and relationship banking as determinants of credit risk. Journal of Banking and Finance.

Johnson, A. (2006). The shape of molecules to come. In G. Küppers, J. Lenhard, & T. Shinn (Eds.), Simulation: Pragmatic construction of reality; sociology of the sciences yearbook (pp. 25-39). Dordrecht, The Netherlands: Springer.

Johnson, G. W. (1994). LabVIEW Graphical Programming. New York: McGraw-Hill.


Johnson, N., Kotz, S., & Balakrishnan, N. (1994). Continuous Univariate Distributions (Vol. 1). New York: John Wiley & Sons.

Jukic, N. (2006). Modeling strategies and alternatives for data warehousing projects. Communications of the ACM, 49(4), 83–88. doi:10.1145/1121949.1121952

Jun, J., Jacobson, H., & Swisher, J. (1999). Application of DES in health-care clinics: A survey. The Journal of the Operational Research Society, 50(2), 109–123.

Kabal, P. (1993). TEXdraw – PostScript drawings from TEX. Retrieved from http://www.tau.ac.il/cc/pages/docs/tex-3.1415/texdraw_toc.html

Kaipainen, T., Liski, J., Pussinen, A., & Karjalainen, T. (2004). Managing carbon sinks by changing rotation length in European forests. Environmental Science & Policy, 7(3), 205–219. doi:10.1016/j.envsci.2004.03.001

Karavirta, V., Korhonen, A., Malmi, L., & Stålnacke, K. (2004, July). MatrixPro – A tool for on-the-fly demonstration of data structures and algorithms. In Proceedings of the third program visualization workshop (pp. 26–33). Warwick, UK: Department of Computer Science, University of Warwick.

Kavi, K. M., Sheldon, F. T., Shirazi, B., & Hurson, A. R. (1995, January). Reliability Analysis of CSP Specifications Using Petri Nets and Markov Processes. In Proceedings of the 28th Annual Hawaii International Conference on System Sciences (HICSS-28) (pp. 516-524). Kihei, Maui, HI: IEEE Computer Society.

Keating, B. A., Carberry, P. S., et al. (2003). An overview of APSIM, a model designed for farming systems simulation. European Journal of Agronomy, 18(3-4), 267–288. doi:10.1016/S1161-0301(02)00108-9

Kelton, W. D., Sadowski, R. P., & Sadowski, D. A. (1998). Simulation with Arena. Boston: McGraw-Hill.

Kerdprasop, N., & Kerdprasop, K. (2007). Moving data mining tools toward a business intelligence system. Enformatika, 19, 117–122.

Kheir, N. A. (1988). Systems Modeling and Computer Simulation. Basel, Switzerland: Marcel Dekker Inc.

Kim, Y. J., Kim, J. H., & Kim, T. G. (2003). Heterogeneous Simulation Framework Using DEVS BUS. Simulation Transactions, 79, 3–18. doi:10.1177/0037549703253543

Kincaid, H. (1998). Philosophy: Then and now. In N. S. Arnold, T. M. Benditt & G. Graham (Eds.), (pp. 321-338). Malden, MA: Blackwell Publishers Ltd.

Kindler, E. (Ed.). (2004). Definition, Implementation and Application of a Standard Interchange Format for Petri Nets. In Proceedings of the Workshop of the satellite event of the 25th International Conf. on Application and Theory of Petri Nets, Bologna, Italy.

Klein, E. E., & Herskovitz, P. J. (2005). Philosophical foundations of computer simulation validation. Simulation & Gaming, 36(3), 303-329.

Kleindorfer, G. B., & Ganeshan, R. (1993). The philosophy of science and validation in simulation. In G. W. Evans, M. Mollaghasemi, E. C. Russell, & W. E. Biles (Eds.), Proceedings of the 1993 Winter Simulation Conference (pp. 50-57). Piscataway, NJ: Institute of Electrical and Electronics Engineers, Inc.

Kleindorfer, G. B., O'Neill, L., & Ganeshan, R. (1998). Validation in simulation: Various positions in the philosophy of science. Management Science, 44(8), 1087–1099. doi:10.1287/mnsc.44.8.1087

Knuuttila, T. (2006). From representation to production: Parsers and parsing in language. In G. Küppers, J. Lenhard, & T. Shinn (Eds.), Simulation: Pragmatic construction of reality; sociology of the sciences yearbook (pp. 41-55). Dordrecht, The Netherlands: Springer.

Ko, Y., & Vaidya, N. H. (2000). Location-Aided Routing (LAR) in Mobile Ad Hoc Networks. Journal of Wireless Networks, 6(4), 307–321.

Kolker, A. (2008). Process Modeling of Emergency Department Patient Flow: Effect of patient Length of Stay on ED diversion. Journal of Medical Systems, 32(5), 389-401. doi:10.1007/s10916-008-9144-x

Kolker, A. (2009). Process Modeling of ICU Patient Flow: Effect of Daily Load Leveling of Elective Surgeries on ICU Diversion. Journal of Medical Systems, 33(1), 27-40. doi:10.1007/s10916-008-9161-9


Kopach-Konrad, R., Lawley, M., Criswell, M., Hasan, I., Chakraborty, S., Pekny, J., & Doebbeling, B. (2007). Applying Systems Engineering Principles in Improving Health Care Delivery. Journal of General Internal Medicine, 22(3), 431–437. doi:10.1007/s11606-007-0292-3

Korhonen, A., & Malmi, L. (2000). Algorithm simulation with automatic assessment. In Proceedings of the 5th annual SIGCSE/SIGCUE conference on innovation and technology in computer science education, ITiCSE'00 (pp. 160–163), Helsinki, Finland. New York: ACM Press.

Korhonen, A., Malmi, L., Myllyselkä, P., & Scheinin, P. (2002). Does it make a difference if students exercise on the web or in the classroom? In Proceedings of the 7th annual SIGCSE/SIGCUE conference on innovation and technology in computer science education, ITiCSE'02 (pp. 121-124), Aarhus, Denmark. New York: ACM Press.

Kotz, D., Newport, C., Gray, R. S., Liu, J., Yuan, Y., & Elliott, C. (2004). Experimental evaluation of wireless simulation assumptions. In Proceedings of the 7th ACM International Symposium on Modeling, Analysis, and Simulation of Wireless and Mobile Systems (pp. 78-82).

Kreutzer, W. (1986). System Simulation - Programming Styles and Languages. Reading, MA: Addison Wesley Publishers.

Krichmar, J. L., Nitz, D. A., Gally, J. A., & Edelman, G. M. (2005). Characterizing functional hippocampal pathways in a brain-based device as it solves a spatial memory task. Proceedings of the National Academy of Sciences of the United States of America, 102(6), 2111–2116. doi:10.1073/pnas.0409792102

Kubátová, H. (2003, June). Petri Net Models in Hardware. ECMS 2003, 158–162. Technical University, Liberec, Czech Republic.

Kubátová, H. (2003, September). Direct Hardware Implementation of Petri Net based Models. Proceedings of the Work in Progress Session of EUROMICRO 2003, 56–57. Linz, Austria: J. Kepler University – FAW.

Kubátová, H. (2004). Direct Implementation of Petri Net Based Model in FPGA. In Proceedings of the International Workshop on Discrete-Event System Design - DESDes'04 (pp. 31-36). Zielona Gora: University of Zielona Gora.

Kubátová, H., & Novotný, M. (2005). Contemporary Methods of Digital Design Education. Electronic Circuits and Systems Conference (pp. 115-118), Bratislava FEI, Slovak University of Technology.

Küppers, G., & Lenhard, J. (2005). Validation of simulation: Patterns in the social and natural sciences. Journal of Artificial Societies and Social Simulation, 8(4), 3.

Küppers, G., & Lenhard, J. (2006). From hierarchical to network-like integration: A revolution of modeling. In G. Küppers, J. Lenhard, & T. Shinn (Eds.), Simulation: Pragmatic construction of reality; sociology of the sciences yearbook (pp. 89-106). Dordrecht, The Netherlands: Springer.

Küppers, G., Lenhard, J., & Shinn, T. (2006). Computer simulation: Practice, epistemology, and social dynamics. In G. Küppers, J. Lenhard, & T. Shinn (Eds.), Simulation: Pragmatic construction of reality; sociology of the sciences yearbook (pp. 3-22). Dordrecht, The Netherlands: Springer.

Kuhlmann, A., Vetter, R. M., Lubbing, C., & Thole, C. A. (2005). Data mining on crash simulation data. Proceedings of Conference MLDM 2005, Leipzig, Germany.

Kummer, O. (1998, October). Simulating Synchronous Channels and Net Instances. In J. Desel, P. Kemper, E. Kindler, & A. Oberweis (Eds.), Proceedings of the Workshop Algorithmen und Werkzeuge für Petrinetze (Vol. 694, pp. 73-78). Dortmund, Germany: Universität Dortmund, Fachbereich Informatik.

Kurkowski, S., Camp, T., & Colagrosso, M. (2005). MANET simulation studies: the incredibles. ACM SIGMOBILE Mobile Computing and Communications Review, 9(4), 50–61. doi:10.1145/1096166.1096174

Kurkowski, S., Camp, T., & Colagrosso, M. (2006). MANET simulation studies: The current state and new simulation tools. Department of Mathematics and Computer Sciences, Colorado School of Mines, CO.


Kvaale, H. (1988). A decision support simulation model for design of fast patrol boats. European Journal of Operational Research, 37(1), 92–99. doi:10.1016/0377-2217(88)90283-4

Laakso, M.-J., Salakoski, T., Grandell, L., Qiu, X., Korhonen, A., & Malmi, L. (2005). Multi-perspective study of novice learners adopting the visual algorithm simulation exercise system TRAKLA2. Informatics in Education, 4(1), 49–68.

Lake, T., Zeigler, B., Sarjoughian, H., & Nutaro, J. (2000). DEVS Simulation and HLA Lookahead (Paper no. 00S-SIW-160). Presented at the Spring Simulation Interoperability Workshop, Orlando, FL, USA.

Lane, D. (1997). Invited reviews on system dynamics. The Journal of the Operational Research Society, 48, 1254–1259.

Lange, K. (2006). Differences between statistics and data mining. DM Review, 16(12), 32–33.

Langton, C. G. (1989). Artificial life. Artificial Life, 1–48.

Latane, B. (1996). Dynamic social impact: Robust predictions from simple theory. In R. Hegselmann, U. Mueller, & K. Troitzsch (Eds.), Modelling and simulation in the social sciences from the philosophy of science point of view. New York: Springer-Verlag.

Lau, K. N., Lee, K. H., Ho, Y., & Lam, P. Y. (2004). Mining the web for business intelligence: Homepage analysis in the internet era. Database Marketing & Customer Strategy Management, 12, 32–54. doi:10.1057/palgrave.dbm.3240241

Law, A. M., & Kelton, W. D. (1991). Simulation Modeling and Analysis. San Francisco: McGraw-Hill.

Lawrence, J., Pasternak, B., & Pasternak, B. A. (2002). Applied Management Science: Modeling, Spreadsheet Analysis, and Communication for Decision Making. Hoboken, NJ: John Wiley & Sons.

Lee, S., & Pena-mora, F. (2005). System dynamics approach for error and change management in concurrent design and construction. Proceedings of the 2005 Winter Simulation Conference, Orlando, FL, (pp. 1508-1514).

Lee, W. C. Y. (1989). Mobile cellular telecommunications systems. New York: McGraw-Hill.

Leevers, D., Gil, P., Lopes, F. M., Pereira, J., Castro, J., Gomes-Mota, J., et al. (1998). An autonomous sensor for 3d reconstruction. In 3rd European Conference on Multimedia Applications, Services and Techniques (ECMAST98), Berlin, Germany.

Lefcowitz, M. (2007, February 26). Why does process improvement fail? Builder-AU by Developers for developers. Retrieved from www.builderau.com.au/strategy/projectmanagement/

Lehmann, A., Saad, S., Best, M., Köster, A., Pohl, S., Qian, J., Walder, C., Wang, Z., & Xu, Z. (2005). Leitfaden für Modelldokumentation, Abschlussbericht (in German). ITIS e.V.

Levy, S. (1992). Artificial life: A report from the frontier where computers meet biology. New York: Vintage Books.

Li, H., Tang, W., & Simpson, D. (2004). Behavior based motion simulation for fire evacuation procedures. In Conference Proceedings of Theory and Practice of Computer Graphics. Washington, DC: IEEE.

Ligetvári, Zs. (2005). New Object's Development in DES LabVIEW (in Hungarian). Unpublished Master's thesis, Budapest University of Technology and Economics, Hungary.

Liski, J., Palosuo, T., Peltoniemi, M., & Sievanen, R. (2005). Carbon and decomposition model Yasso for forest soils. Ecological Modelling, 189(1-2), 168–182. doi:10.1016/j.ecolmodel.2005.03.005

Litvak, E. (2007). A new Rx for crowded hospitals: Math. Operation management expert brings queuing theory to health care. American College of Physicians-Internal Medicine-Doctors for Adults, December 2007, ACP Hospitalist.

Litvak, E., & Long, M. (2000). Cost and Quality under managed care: Irreconcilable Differences? The American Journal of Managed Care, 6(3), 305–312.


Litvak, E., Long, M., Cooper, A., & McManus, M. (2001). Emergency Department Diversion: Causes and Solutions. Academic Emergency Medicine, 8(11), 1108–1110.

Lluch, J., & Halpin, D. W. (1982). Construction operations and microcomputers. Journal of the Construction Division, 108(CO1), 129–145.

Lombardi, S., Wainer, G. A., & Zeigler, B. P. (2006). An experiment on interoperability of DEVS implementations (Paper no. 06S-SIW-131). Presented at the Spring Simulation Interoperability Workshop, Huntsville, AL, USA.

Lönhárd, M. (2000). Simulation System of Discrete Events in Delphi (in Hungarian). Unpublished Master's thesis, Budapest University of Technology and Economics, Hungary.

Lönnberg, J., Korhonen, A., & Malmi, L. (2004, May). MVT — a system for visual testing of software. In Proceedings of the working conference on advanced visual interfaces (AVI'04) (pp. 385–388).

Loscos, C., Marchal, D., & Meyer, A. (2003, June). Intuitive crowd behaviour in dense urban environments using local laws. In Proceedings of the conference of Theory and Practice of Computer Graphics, TP.CG03, University of Birmingham, Birmingham, UK. Washington, DC: IEEE.

Lowery, J. (1996). Introduction to Simulation in Healthcare. In J. Charness & D. Morrice (Eds.), Proceedings of the 1996 Winter Simulation Conference, (pp. 78-84).

Lu, M., & Chan, W. (2004). Modeling concurrent operational interruptions in construction activities with simplified discrete event simulation approach (SDESA). Proceedings of the 2004 Winter Simulation Conference, Washington, DC, (pp. 1260-1267).

Lucio, G. F., Paredes-Farrera, M., Jammeh, E., Fleury, M., & Reed, M. J. (2008). OPNET Modeler and Ns-2: comparing the accuracy of network simulators for packet-level analysis using a network testbed. Retrieved June 15, from http://privatewww.essex.ac.uk/~gflore/

Lutz, E., Cygan, D., Dippold, M., Dolainsky, F., & Papke, W. (1991). The land mobile satellite communication channel – recording, statistics, and channel model. IEEE Transactions on Vehicular Technology, 40, 375–386. doi:10.1109/25.289418

Lyneis, J. (2000). System dynamics for market forecasting and structural analysis. System Dynamics Review, 16(1), 3–25. doi:10.1002/(SICI)1099-1727(200021)16:1<3::AID-SDR183>3.0.CO;2-5

Maani, K., & Cavana, R. (2000). Systems thinking and modelling: understanding change and complexity. New Zealand: Pearson Education New Zealand.

Maas, W., & Bishop, C. M. (Eds.). (1999). Pulsed neural networks. Cambridge, MA: The MIT Press.

Macal, C. M., & North, M. J. (2006). Tutorial on agent-based modeling and simulation part 2: How to model. In L. R. Perrone, F. P. Wieland, J. Liu, B. G. Lawson, D. M. Nicol, & R. M. Fujimoto (Eds.), Proceedings of the 2006 Winter Simulation Conference (pp. 73-83). Piscataway, NJ: Institute of Electrical and Electronics Engineers, Inc.

Madewell, C. D., & Swain, J. J. (2003, April-June). The Huntsville Simulation Snapshot: A Quantitative Analysis of What Employers Want in a Systems Simulation Professional. Modelling and Simulation (Anaheim), 2(2).

Maes, P. (1987, October). Concepts and Experiments in Computational Reflection. In N. K. Meyrowitz (Ed.), Proceedings of the 2nd conference on object-oriented programming systems, languages, and applications (OOPSLA'87) (Vol. 22, pp. 147-156), Orlando, FL.

Mahachek, A. (1992). An Introduction to patient flow simulation for health care managers. Journal of the Society for Health Systems, 3(3), 73–81.

Malmi, L., & Korhonen, A. (2004). Automatic feedback and resubmissions as learning aid. In Proceedings of 4th IEEE international conference on advanced learning technologies, ICALT'04 (pp. 186-190), Joensuu, Finland.

Malmi, L., Karavirta, V., Korhonen, A., & Nikander, J. (2005, September). Experiences on automatically assessed algorithm simulation exercises with different resubmission policies. Journal of Educational Resources in Computing, 5(3). doi:10.1145/1163405.1163412

Malmi, L., Karavirta, V., Korhonen, A., Nikander, J., Seppälä, O., & Silvasti, P. (2004). Visual algorithm simulation exercise system with automatic assessment: TRAKLA2. Informatics in Education, 3(2), 267–288.

Malmi, L., Korhonen, A., & Saikkonen, R. (2002). Experiences in automatic assessment on mass courses and issues for designing virtual courses. In Proceedings of the 7th annual SIGCSE/SIGCUE conference on innovation and technology in computer science education, ITiCSE'02 (pp. 55–59), Aarhus, Denmark. New York: ACM Press.

Mandelbrot, B. B. (1982). The Fractal Geometry of Nature. New York: W.H. Freeman.

Mannila, H. (1997). Methods and problems in data mining. Proceedings of International Conference on Database Theory, Delphi, Greece.

Maria, A. (1997). Introduction to modeling and simulation. In S. Andradottir, K. J. Healy, D. H. Withers, & B. L. Nelson (Eds.), Proceedings of the 1997 Winter Simulation Conference, (pp. 7-13).

Marian, I. (2003). A biologically inspired computational

Martinez, J. C. (1998). EarthMover-simulation tool for earthwork planning. Proceedings of the 1998 Winter Simulation Conference, Washington, DC, (pp. 1263-1271).

Martinez, J. C., & Ioannou, P. J. (1999). General-purpose systems for effective construction simulation. Journal of Construction Engineering and Management, 125(4), 265–276. doi:10.1061/(ASCE)0733-9364(1999)125:4(265)

Marzouk, M. (2002). Optimizing earthmoving operations using computer simulation. Ph.D. Thesis, Concordia University, Montreal, Canada.

Marzouk, M., & Moselhi, O. (2000). Optimizing earthmoving operations using object-oriented simulation. Proceedings of the 2000 Winter Simulation Conference, Orlando, FL, (pp. 1926-1932).

Marzouk, M., & Moselhi, O. (2004). Multiobjective optimization of earthmoving operations. Journal of Construction Engineering and Management, 130(1), 105–113. doi:10.1061/(ASCE)0733-9364(2004)130:1(105)

Marzouk, M., & Moselhi, O. (2002). Bid preparation for earthmoving operations. Canadian Journal of Civil Engineering, 29(3), 517–532. doi:10.1139/l02-023

Marzouk, M., & Moselhi, O. (2002). Simulation op-
model of motor control development. Unpublished MSc timization for earthmoving operations using genetic
Thesis, University College Dublin, Ireland. algorithms. Construction Management and Economics,
20(6), 535–544. doi:10.1080/01446190210156064
Markram, H. (2006). The Blue Brain project. Nature Re-
views. Neuroscience, 7, 153–160. doi:10.1038/nrn1848 Marzouk, M., & Moselhi, O.(2003) Constraint based
genetic algorithm for earthmoving fleet selection. Ca-
Marshall, A., Vasilakis, C., & El-Darzi, E. (2005). Length
nadian Journal of Civil Engineering, 30(4), 673–683.
of stay-based Patient Flow Models: Recent Developments
doi:10.1139/l03-006
and Future Directions. Health Care Management Science,
8, 213–220. doi:10.1007/s10729-005-2012-z Marzouk, M., & Moselhi, O.(2003). An object oriented
model for earthmoving operations. Journal of Construc-
Martin, L. (1997). Mistakes and Misunderstandings.
tion Engineering and Management, 129(2), 173–181.
System Dynamics in Education Project. System Dynam-
doi:10.1061/(ASCE)0733-9364(2003)129:2(173)
ics Group, Sloan School of Management, Massachusetts
Institute of Technology, Cambridge, MA. Marzouk, M., & Moselhi, O.(2004). Fuzzy clustering
model for estimating haulers’ travel time. Journal of
Martinez, J. C. (1996). STROBOSCOPE-state and re-
Construction Engineering and Management, 130(6), 878–
source based simulation of construction process. Ph.D.
886. doi:10.1061/(ASCE)0733-9364(2004)130:6(878)
Thesis, University of Michigan, Ann Arbor, MI.

554
Compilation of References

Marzouk, M., Said, H., & El-Said, M. (2008). Special purpose simulation model for balanced cantilever bridges. Journal of Bridge Engineering, 13(2), 122–131. doi:10.1061/(ASCE)1084-0702(2008)13:2(122)

Marzouk, M., Zein El-Dein, H., & El-Said, M. (2008, in press). A framework for multiobjective optimization of launching girder bridges. Journal of Construction Engineering and Management.

Marzouk, M., Zein, H., & El-Said, M. (2006). BRIGE_SIM: framework for planning and optimizing bridge deck construction using computer simulation. Proceedings of the 2006 Winter Simulation Conference, Monterey, CA, (pp. 2039-2046).

Marzouk, M., Zein, H., & El-Said, M. (2006). Scheduling cast-in-situ on falsework bridges using computer simulation. Scientific Bulletin, Faculty of Engineering, Ain Shams University, 41(1), 231–245.

Marzouk, M., Zein, H., & El-Said, M. (2007). Application of computer simulation to construction of deck pushing bridges. Journal of Civil Engineering and Management, 13(1), 27–36.

Mason, D. V., & Woit, D. M. (1999). Providing mark-up and feedback to students with online marking. In Proceedings of the thirtieth SIGCSE technical symposium on computer science education (pp. 3-6), New Orleans, LA. New York: ACM Press.

Mattia, M., & Del Guidice, P. (2000). Efficient event-driven simulation of large networks of spiking neurons and dynamical synapses. Neural Computation, 12, 2305–2329. doi:10.1162/089976600300014953

Mayhew, L., & Smith, D. (2008). Using queuing theory to analyze the Government’s 4-h completion time target in Accident and Emergency departments. Health Care Management Science, 11, 11–21. doi:10.1007/s10729-007-9033-8

McCabe, B. (1997). An automated modeling approach for construction performance improvement using computer simulation and belief networks. Ph.D. Thesis, Alberta University, Canada.

McCabe, B. (1998). Belief networks in construction simulation. Proceedings of the 1998 Winter Simulation Conference, Washington DC (pp. 1279-1286).

McHaney, R. (1991). Computer simulation: a practical perspective. San Diego, CA: Academic Press.

McManus, M., Long, M., Cooper, A., & Litvak, E. (2004). Queuing Theory Accurately Models the Need for Critical Care Resources. Anesthesiology, 100(5), 1271–1276. doi:10.1097/00000542-200405000-00032

McManus, M., Long, M., Cooper, A., Mandell, J., Berwick, D., Pagano, M., & Litvak, E. (2003). Variability in Surgical Caseload and Access to Intensive Care Services. Anesthesiology, 98(6), 1491–1496. doi:10.1097/00000542-200306000-00029

Mei, L., & Thole, C. A. (2008). Data analysis for parallel car-crash simulation results and model optimization. Simulation Modelling Practice and Theory, 16, 329–337. doi:10.1016/j.simpat.2007.11.018

Mercy Medical Center. (2007). Creating a Culture of Improvement. Presented at the Iowa Healthcare Collaborative Lean Conference, Des Moines, IA, August 22. Retrieved from www.ihconline.org/toolkits/LeanInHealthcare/IHALeanConfCultureImprovement.pdf

Miller, B. P. (1993). What to draw? When to draw? An essay on parallel program visualization. Journal of Parallel and Distributed Computing, 18(2), 265–269. doi:10.1006/jpdc.1993.1063

Miller, J. H., & Page, S. E. (2007). Complex adaptive systems: An introduction to computational models of social life. Princeton, NJ: Princeton University Press.

Miller, M., Ferrin, D., & Szymanski, J. (2003). Simulating Six Sigma Improvement Ideas for a Hospital Emergency Department. In S. Chick et al. (Eds.), Proceedings of the 2003 Winter Simulation Conference (pp. 1926-1929). Washington, DC: IEEE.

Mitchell, M., Crutch, J. P., & Hraber, P. T. (1994). Dynamics, computation, and the “edge of chaos”: A re-examination. In G. Cowan, D. Pines, & D. Melzner (Eds.), Complexity: Metaphors, Models and Reality. Reading, MA: Addison-Wesley.
Mittal, S., & Risco-Martin, J. L. (2007). DEVSML: Automating DEVS Execution over SOA Towards Transparent Simulators. Special Session on DEVS Collaborative Execution and Systems Modeling over SOA. In Proceedings of the DEVS Integrative M&S Symposium, Spring Simulation Multiconference, Norfolk, Virginia, USA, (pp. 287–295). Washington, DC: IEEE Press.

Molnar, I., Moscardini, A. O., & Breyer, R. (2009). Simulation – Art or Science? How to teach it? International Journal of Simulation and Process Modelling, 5(1), 20–30. doi:10.1504/IJSPM.2009.025824

Molnar, I., Moscardini, A. O., & Omey, E. (1996, June). Structural concepts of a new master curriculum in simulation. In A. Javor, A. Lehmann & I. Molnar (Eds.), Proceedings of the Society for Computer Simulation International on Modelling and Simulation ESM96, Budapest, Hungary, (pp. 405-409).

Montgomery, D. C. (Ed.). (2005). Design and Analysis of Experiments. New York: John Wiley & Sons.

Morbitzer, C., Strachan, P., & Simpson, C. (2004). Data mining analysis of building simulation performance data. Building Services Engineering Research and Technology, 25(3), 253–267.

Morrison, A., Mehring, C., Geisel, T., Aertsen, A., & Diesmann, M. (2005). Advancing the boundaries of high-connectivity network simulation with distributed computing. Neural Computation, 17, 1776–1801. doi:10.1162/0899766054026648

Morrison, M., & Morgan, M. S. (1999). Models as mediating instruments. In M. S. Morgan & M. Morrison (Eds.), Models as mediators (pp. 10-37). Cambridge, UK: Cambridge University Press.

Mosca, R., & Giribone, P. (1982). An application of the interactive code for design of o.r. simulation experiment to a slabbing-mill system. In M.H. Hamza (Ed.), IASTED International Symposium on Applied Modelling and Simulation (pp. 195-199). Calgary, Canada: ACTA Press.

Mosca, R., & Giribone, P. (1982). An interactive code for the design of the o.r. simulation experiment of complex industrial plants. In M.H. Hamza (Ed.), IASTED International Symposium on Applied Modelling and Simulation (pp. 200-203). Calgary, Canada: ACTA Press.

Mosca, R., & Giribone, P. (1982). A mathematical method for evaluating the importance of the input variables in simulation experiments. In M.H. Hamza (Ed.), IASTED International Symposium on Modelling, Identification and Control (pp. 54-58). Calgary, Canada: ACTA Press.

Mosca, R., & Giribone, P. (1982). Optimal length in o.r. simulation experiments of large scale production system. In M.H. Hamza (Ed.), IASTED International Symposium on Modelling, Identification and Control (pp. 78-82). Calgary, Canada: ACTA Press.

Mosca, R., & Giribone, P. (1983). O.r. multiple-objective simulation: experimental analysis of the independent variables ranges. In M.H. Hamza (Ed.), IASTED International Symposium on Applied Modelling and Simulation (pp. 68-73). Calgary, Canada: ACTA Press.

Mosca, R., & Giribone, P. (1986). Flexible manufacturing system: a simulated comparison between discrete and continuous material handling. In M.H. Hamza (Ed.), IASTED International Symposium on Modelling, Identification and Control (pp. 100-106). Calgary, Canada: ACTA Press.

Mosca, R., & Giribone, P. (1986). FMS: construction of the simulation model and its statistical validation. In M.H. Hamza (Ed.), IASTED International Symposium on Modelling, Identification and Control (pp. 107-113). Calgary, Canada: ACTA Press.

Mosca, R., & Giribone, P. (1988). Evaluation of stochastic discrete event simulators of F.M.S. In P. Breedveld et al. (Eds.), IMACS Transactions on Scientific Computing: Modelling and Simulation of Systems (Vol. 3, pp. 396-402). Switzerland: Baltzer AG.

Mosca, R., & Giribone, P. (1993). Critical analysis of a bottling line using simulation techniques. In M.H. Hamza (Ed.), IASTED International Symposium on Modelling, Identification and Control (pp. 135-140). Calgary, Canada: ACTA Press.
Moya, L., & Weisel, E. (2008). The Difficulties with Validating Agent Based Simulations of Social Systems. Proceedings Spring Multi-Simulation Conference, Agent-Directed Simulation, Ottawa, Canada.

Muguira, J. A., & Tolk, A. (2006). Applying a Methodology to Identify Structural Variances in Interoperations. Journal for Defense Modeling and Simulation, 3(2), 77–93. doi:10.1177/875647930600300203

Muhdi, R. A. (2006). Evaluation Modeling: Development, Characteristics and Limitations. Proceedings of the National Occupational Research Agenda (NORA), (pp. 87-92).

Mund, M., Profft, I., Wutzler, T., Schulze, E. D., Weber, G., & Weller, E. (2005). Vorbereitung für eine laufende Fortschreibung der Kohlenstoffvorräte in den Wäldern Thüringens. Abschlussbericht zur 2. Phase des BMBF-Projektes “Modelluntersuchungen zur Umsetzung des Kyoto-Protokolls”. (Tech. rep., TLWJF, Gotha).

MVASpike [computer software] (n.d.). Available from http://mvaspike.gforge.inria.fr/.

Myers, B. A., Chandhok, R., & Sareen, A. (1988, October). Automatic data visualization for novice Pascal programmers. In IEEE workshop on visual languages (pp. 192-198).

Myers, R. H., & Montgomery, D. C. (Eds.). (1995). Response Surface Methodology. New York: John Wiley & Sons.

Nabuurs, G. J., Pussinen, A., Karjalainen, T., Erhard, M., & Kramer, K. (2002). Stemwood volume increment changes in European forests due to climate change - a simulation study with the EFISCEN model. Global Change Biology, 8(4), 304–316. doi:10.1046/j.1354-1013.2001.00470.x

Nagel, J. (2003). TreeGrOSS: Tree Growth Open Source Software - a tree growth model component.

Nagel, J., Albert, M., & Schmidt, M. (2002). Das waldbauliche Prognose- und Entscheidungsmodell BWINPro 6.1. Forst und Holz, 57(15/16), 486–493.

Nance, R. E. (2000, December). Simulation education: Past reflections and future directions. In J. A. Joines, R. R. Barton, K. Kang & P. A. Fishwick (Eds.), Proceedings of the 2000 Winter Simulation Conference, Piscataway, NJ (pp. 1595-1601). Washington, DC: IEEE.

Nance, R. E., & Balci, O. (2001, December). Thoughts and musings on simulation education. In B. A. Peters, J. S. Smith, D. J. Medeiros, & M. W. Rohrer (Eds.), Proceedings of the 2001 Winter Simulation Conference, Arlington, VA (pp. 1567-1570).

Naps, T. L., Rößling, G., Almstrum, V., Dann, W., Fleischer, R., & Hundhausen, C. (2003, June). Exploring the role of visualization and engagement in computer science education. SIGCSE Bulletin, 35(2), 131–152. doi:10.1145/782941.782998

National Research Council. (2002). Modeling and Simulation in Manufacturing and Defense Acquisition: Pathways to Success. Washington, DC: Committee on Modeling and Simulation Enhancements for 21st Century Manufacturing and Defense Acquisition, National Academies Press.

National Research Council. (2006). Defense Modeling, Simulation, and Analysis: Meeting the Challenge. Washington, DC: Committee on Modeling and Simulation for Defense Transformation, National Academies Press.

National Science Board. (2008). Science and Engineering Indicators 2008. Arlington, VA: National Science Foundation.

Navarro-Serment, L., Grabowski, R., Paredis, C., & Khosla, P. (2002, December). Millibots. IEEE Robotics & Automation Magazine.

Nayak, R., & Qiu, T. (2005). A data mining application: analysis of problems occurring during a software project development process. International Journal of Software Engineering and Knowledge Engineering, 15(4), 647–663. doi:10.1142/S0218194005002476

Naylor, T. H., & Finger, J. M. (1967). Verification of computer simulation models. Management Science, 14(2), 92–106. doi:10.1287/mnsc.14.2.B92
Nelson, B. L. (2002). Using Simulation To Teach Probability. In C.-H. C. E. Yücesan (Ed.), Proceedings of the 2002 Winter Simulation Conference (p. 1815). San Diego, CA: informs-cs.

NEST [computer software] (n.d.). Available from http://www.nest-initiative.org.

Network Simulator 2 (2008). Retrieved September 15, 2008, from www.isi.edu/nsnam/ns/

Newcombe, R. (2007). SIMNOS virtual robot [computer software]. More information available from www.cronosproject.net.

Ng, P. C., & Liew, S. C. (2007). Throughput analysis of IEEE 802.11 multi-hop ad hoc networks. IEEE/ACM Transactions on Networking, 15(2), 309-322.

NI LabVIEW Developer Team. (2007). LabVIEW 8.5 User Manual. Austin, TX: National Instruments.

Nicopoliditis, P., Papadimitriou, G. I., & Pomportsis, A. S. (2003). Wireless Networks. Hoboken, NJ: John Wiley & Sons.

Nigash, S. (2004). Business Intelligence. Communications of the Association for Information Systems, 13, 177–195.

Nikoukaran, J. (1999). Software selection for simulation in manufacturing: A review. Simulation Practice and Theory, 7(1), 1–14. doi:10.1016/S0928-4869(98)00022-6

Nonaka, I., Toyama, R., & Konno, N. (2000). SECI, Ba and leadership: a unified model of dynamic knowledge creation. Long Range Planning, 33, 5–34. doi:10.1016/S0024-6301(99)00115-6

North Atlantic Treaty Organization. (2002). NATO Code of Best Practice for Command and Control Assessment (Revised ed.). Washington, DC: CCRP Press.

North, M. J., & Macal, C. M. (2007). Managing business complexity: Discovering strategic solutions with agent-based modeling and simulation. New York, NY: Oxford University Press.

NSF-Panel. (2006, May). Revolutionizing Engineering Science through Simulation. Report of the National Science Foundation Blue Ribbon Panel on Simulation-Based Engineering Science. Retrieved 2008, from http://www.nsf.gov/pubs/reports/sbes_final_report.pdf

Nutaro, J. (2003). Parallel Discrete Event Simulation with Application to Continuous Systems. PhD Thesis, University of Arizona, Tucson, Arizona.

Nutaro, J. (2007). Discrete event simulation of continuous systems. In P. Fishwick (Ed.), Handbook of Dynamic System Modeling. Boca Raton, FL: Chapman & Hall/CRC.

Nutaro, J. J. (2005). Adevs. Retrieved Jan 15, 2006, from http://www.ece.arizona.edu/~nutaro/

Nutaro, J., & Sarjoughian, H. (2004). Design of distributed simulation environments: A unified system-theoretic and logical processes approach. Journal of Simulation, 80(11), 577–589. doi:10.1177/0037549704050919

NVIDIA PhysX [computer software] (n.d.). Available from http://www.nvidia.com/object/nvidia_physx.html.

Odersky, M. (2000, March). Functional Nets. In G. Smolka (Ed.), Proceedings of the 9th European Symposium on Programming (ESOP 2000) (pp. 1-25). Berlin, Germany: Springer.

Okuhara, K., Ishii, H., & Uchida, M. (2005). Support of decision making by data mining using neural system. Systems and Computers in Japan, 36(11), 102–110. doi:10.1002/scj.10577

Oloufa, A. (1993). Modeling operational activities in object-oriented simulation. Journal of Computing in Civil Engineering, 7(1), 94–106. doi:10.1061/(ASCE)0887-3801(1993)7:1(94)

Oloufa, A., Ikeda, M., & Nguyen, T. (1998). Resource-based simulation libraries for construction. Automation in Construction, 7(4), 315–326. doi:10.1016/S0926-5805(98)00048-X

OMNEST (2007). Retrieved September 15, 2007, from www.omnest.com

OMNeT++ (2008). Retrieved September 15, 2008, from http://www.omnetpp.org/
OPNET Technologies. (2008). Retrieved September 15, 2008, from www.opnet.com

OR/MS. (2003). Retrieved from www.lionhrtpub.com/orms/surveys/Simulation/Simulation.html

Oren, T. I. (2002, December). Rationale for a Code of Professional Ethics for Simulationists. In E. Yucesan, C. Chen, J. L. Snowdon & J. M. Charnes (Eds.), Proceedings of the 2002 Winter Simulation Conference, San Diego, CA, (pp. 13-18).

Oren, T. I. (2008). Modeling and Simulation Body of Knowledge. SCS International. Retrieved May 31, 2008, from http://www.site.uottawa.ca/~oren/MSBOK/MSBOK-index.htm#coreareas

Ören, T. I., Numrich, S. K., Uhrmacher, A. M., Wilson, L. F., & Gelenbe, E. (2000). Agent-directed simulation: challenges to meet defense and civilian requirements. In Proceedings of the 32nd Conference on Winter Simulation, Orlando, Florida, December 10-13. San Diego, CA: Society for Computer Simulation International.

Page, E. H., Briggs, R., & Tufarolo, J. A. (2004). Toward a Family of Maturity Models for the Simulation Interconnection Problem. In Proceedings of the Spring Simulation Interoperability Workshop. New York: IEEE CS Press.

Painter, M. K., Erraguntla, M., Hogg, G. L., & Beachkofski, B. (2006). Using simulation, data mining, and knowledge discovery techniques for optimized aircraft engine fleet management. Proceedings of the 2006 Winter Simulation Conference, (pp. 1253-1260).

Palosuo, T., Liski, J., Trofymow, J. A., & Titus, B. D. (2005). Litter decomposition affected by climate and litter quality - Testing the Yasso model with litterbag data from the Canadian intersite decomposition experiment. Ecological Modelling, 189(1-2), 183–198. doi:10.1016/j.ecolmodel.2005.03.006

Papajorgji, P., Beck, H. W., & Braga, J. L. (2004). An architecture for developing service-oriented and component-based environmental models. Ecological Modelling, 179(1), 61–76. doi:10.1016/j.ecolmodel.2004.05.013

Papoulis, A. (1984). Probability, random variables, and stochastic processes (2nd Ed.). New York: McGraw-Hill, Inc.

Parsons, J. D. (1992). The mobile radio propagation channel (1st Ed.). Chichester, UK: John Wiley & Sons, Inc.

Paul, R. J., Eldabi, T., & Kuljis, J. (2003, December). Simulation education is no substitute for intelligent thinking. In S. Chick, P. J. Sanchez, D. Ferrin & D. J. Morrice (Eds.), Proceedings of the 2003 Winter Simulation Conference, New Orleans, LA, (pp. 1989-1993).

Paulson, G. C., Jr. (1978). Interactive graphics for simulating construction operations. Journal of the Construction Division, 104(1), 69–76.

Pawlikowski, K. (1990). Steady-state simulation of queuing processes: a survey of problems and solutions. ACM Computing Surveys, 1(2), 123–170. doi:10.1145/78919.78921

Pawlikowski, K., Jeong, H.-D. J., & Lee, J.-S. R. (2002). On credibility of simulation studies of telecommunication networks. IEEE Communications Magazine, 40(1), 132–139. doi:10.1109/35.978060

Pawlikowski, K., Yau, V. W. C., & McNickle, D. (1994). Distributed stochastic discrete-event simulation in parallel time streams. Paper presented at the Winter Simulation Conference, (pp. 723-730).

Peek, J., Todin-Gonguet, G., & Strang, J. (2001). Learning the UNIX Operating System (5th Ed.). Sebastopol, CA: O’Reilly & Associates.

Peer, G. A. (2001). Ready to Serve. Heavy Construction News, March 2001, 16-19.

Peltoniemi, M., Mäkipää, R., Liski, J., & Tamminen, P. (2004). Changes in soil carbon with stand age - an evaluation of a modelling method with empirical data. Global Change Biology, 10(12), 2078–2091. doi:10.1111/j.1365-2486.2004.00881.x
Phillips, D. C. (2000). Constructivism in education: opinions and second opinions on controversial issues. 99th yearbook of the National Society for the Study of Education (Part 1). Chicago: The University of Chicago Press.

Phillips-Wren, G., & Jain, L. C. (Eds.). (2005). Intelligent Decision Support Systems in Agent-Mediated Environments. The Netherlands: IOS Press.

Phillips-Wren, G., Ichalkaranje, N., & Jain, L. C. (Eds.). (2008). Intelligent Decision Making – An AI-Based Approach. Berlin: Springer-Verlag.

Pidd, M. (1995). Object orientation, discrete simulation and three-phase approach. The Journal of the Operational Research Society, 46(3), 362–374.

Pidd, M. (2003). Tools for thinking: Modelling in management science (2nd Ed., pp. 289-312). New York: John Wiley and Sons.

Pierson, W., & Rodger, S. (1998). Web-based animation of data structures using JAWAA. In Proceedings of the 29th SIGCSE technical symposium on computer science education (pp. 267-271), Atlanta, GA. New York: ACM Press.

Pohl, J. G., Wood, A. A., Pohl, K. J., & Chapman, A. J. (1999). IMMACCS: A Military Decision-Support System. DARPA-JFACC 1999 Symposium on Advances in Enterprise Control, San Diego, CA.

Porté, A., & Bartelink, H. H. (2002). Modelling mixed forest growth: a review of models for forest management. Ecological Modelling, 150, 141–188. doi:10.1016/S0304-3800(01)00476-8

Potter, S. S., Elm, W. C., & Gualtieri, J. W. (2006). Making Sense of Sensemaking: Requirements of a Cognitive Analysis to Support C2 Decision Support System Design. Proceedings of the Command and Control Research and Technology Symposium. Washington, DC: CCRP Press.

Power, D. J. (2007). A Brief History of Decision Support Systems. DSSResources.com. Retrieved from http://DSSResources.COM/history/dsshistory.html, version 4.0.

Power, D. J., & Sharda, R. (2007). Model-driven decision support systems: Concepts and research directions. Journal for Decision Support Systems, 43(3), 1044–1061. doi:10.1016/j.dss.2005.05.030

Price, B. A., Baecker, R. M., & Small, I. S. (1993). A principled taxonomy of software visualization. Journal of Visual Languages and Computing, 4(3), 211–266. doi:10.1006/jvlc.1993.1015

Pritsker, A. A. B., O’Reilly, J. J., & LaVal, D. K. (1997). Simulation with Visual SLAM and AweSim. New York: John Wiley & Sons, Inc.

Proakis, J. G. (2001). Digital communications (4th Ed.). New York: McGraw-Hill.

Program for International Student Assessment (PISA). (2006). Highlights from PISA 2006. Retrieved August 15, 2008, from http://nces.ed.gov/surveys/pisa/

Pullen, J. M., Brunton, R., Brutzman, D., Drake, D., Hieb, M. R., Morse, K. L., & Tolk, A. (2005). Using Web Services to Integrate Heterogeneous Simulations in a Grid Environment. Journal on Future Generation Computer Systems, 21, 97–106. doi:10.1016/j.future.2004.09.031

Qayyum, A., Viennot, L., & Laouiti, A. (2002). Multipoint relaying for flooding broadcast messages in mobile wireless network. Proceedings of the 35th Hawaii International Conference on System Sciences, (pp. 3866-3875).

Qiu, Z.-M., & Wong, Y. S. (2007, June). Dynamic Workflow Change in PDM Systems. Computers in Industry, 58(5), 453–463. doi:10.1016/j.compind.2006.09.014

Quatrani, T. (1998). Visual modeling with Rational Rose and UML. Reading, MA: Addison-Wesley.

Rakich, J., Kuzdrall, P., Klafehn, K., & Krigline, A. (1991). Simulation in the hospital setting: Implications for managerial decision making and management development. Journal of Management Development, 10(4), 31–37. doi:10.1108/02621719110005069

Rappaport, T. S. (2002). Wireless communications: principles and practice (2nd Ed.). Upper Saddle River, NJ: Prentice Hall.
Rappaport, T. S., Seidel, S. Y., & Takamizawa, K. (1991). Statistical channel impulse response models for factory and open plan building radio communication system design. IEEE Transactions on Communications, 39(5), 794–806. doi:10.1109/26.87142

Reek, K. A. (1989). The TRY system or how to avoid testing student programs. In Proceedings of SIGCSE ’89 (pp. 112–116). New York: ACM Press. doi:10.1145/65294.71198

Reichert, M., & Dadam, P. (1998). ADEPTflex - Supporting Dynamic Changes in Workflow Management Systems without Losing Control. Journal of Intelligent Information Systems, 10(2), 93–129. doi:10.1023/A:1008604709862

Reid, P., Compton, W., Grossman, J., & Fanjiang, G. (2005). Building a better delivery system: A new engineering/healthcare partnership. Washington, DC: Committee on Engineering and the Health Care System, Institute of Medicine and National Academy of Engineering, National Academy Press.

Reisig, W. (1985). Petri nets: An introduction (EATCS Monographs in Theoretical Computer Science Vol. 4). Berlin: Springer.

Remondino, M., & Correndo, G. (2005). Data mining applied to agent based simulation. Proceedings of the 19th European Conference on Modelling and Simulation.

Resnyansky, L. (2007). Integration of social sciences in terrorism modelling: Issues, problems. Edinburgh, Australia: Australian Government Department of Defence, DSTO Command and Control.

Reutimann, J., Giugliano, M., & Fusi, S. (2003). Event-driven simulation of spiking neurons with stochastic dynamics. Neural Computation, 15, 811–830. doi:10.1162/08997660360581912

Rice, S. O. (1958). Distribution of the duration of fades in radio transmission: Gaussian noise model. The Bell System Technical Journal, 37(3), 581–635.

Robertson, N., & Perera, T. (2001). Feasibility for Automatic Data Collection. In Proceedings of the 2001 Winter Simulation Conference.

Robinson, S. (2008). Conceptual modelling for simulation Part I: definition and requirements. The Journal of the Operational Research Society, 59, 278–290. doi:10.1057/palgrave.jors.2602368

Roeder, T. M. K. (2004). An Information Taxonomy for Discrete-Event Simulations. PhD Dissertation, University of California, Berkeley, CA.

Rogers, R. V. (1997, December). What makes a modelling and simulation professional? The consensus view from one workshop. In S. Andradottir, K. J. Healy, D. A. Whiters & B. L. Nelson (Eds.), Proceedings of the 1997 Winter Simulation Conference, Atlanta, GA (pp. 1375-1382). Washington, DC: IEEE.

Rohl, J. S. (1986). An Introduction to Compiler Writing. New York: Macdonald and Jane’s.

Rohrmeier, M. (1997). Telemanipulation of robots via Internet using VRML2.0 and Java. Institute for Robotics and System Dynamics, Technical University of Munich, Munich, Germany.

Rorabaugh, C. B. (2004). Simulating Wireless Communication Systems: Practical Models in C++. Upper Saddle River, NJ: Prentice-Hall.

Ross, M. D. (n.d.). 3-D Imaging in Virtual Environment: A Scientific, Clinical and Teaching Tool. Retrieved from United States National Library of Medicine - National Institute of Health, http://biocomp.arc.nasa.gov/

Ross, M. S. (1987). Introduction to probability and statistics for engineers and scientists. Chichester, UK: John Wiley & Sons, Inc.

Rößling, G., Schüler, M., & Freisleben, B. (2000). The ANIMAL algorithm animation tool. In Proceedings of the 5th annual SIGCSE/SIGCUE conference on innovation and technology in computer science education, ITiCSE’00 (pp. 37-40), Helsinki, Finland. New York: ACM Press.

Roxburgh, S. H., & Davies, I. D. (2006). COINS: an integrative modelling shell for carbon accounting and general ecological analysis. Environmental Modelling & Software, 21(3), 359–374. doi:10.1016/j.envsoft.2004.11.006
Ruardija, P., J. W.-B. (1995). SESAME, a software environment for simulation and analysis of marine ecosystems. Netherlands Journal of Sea Research, 33(3-4), 261–270. doi:10.1016/0077-7579(95)90049-7

Russell, S. (2007, May). Open Dynamics Engine. Retrieved October 2008, from http://www.ode.org/

Ryan, J. (2005). System Engineering: Opportunities for Health Care. In Building a better delivery system: A new engineering/healthcare partnership (pp. 141-142). Washington, DC: Committee on Engineering and the Health Care System, Institute of Medicine and National Academy of Engineering, National Academy Press.

Sachdeva, R., Williams, T., & Quigley, J. (2006). Mixing methodologies to enhance the implementation of healthcare operational research. Journal of the Operational Research Society, advance online publication, September 6, 1-9.

Saikkonen, R., Malmi, L., & Korhonen, A. (2001). Fully automatic assessment of programming exercises. In Proceedings of the 6th annual SIGCSE/SIGCUE conference on innovation and technology in computer science education, ITiCSE’01 (pp. 133-136), Canterbury, UK. New York: ACM Press.

Salah, K., & Alkhoraidly, A. (2006). An OPNET-based simulation approach for deploying VoIP. International Journal of Network Management, 16, 159–183. doi:10.1002/nem.591

Salimifard, K., & Wright, M. B. (2001, November). Petri Net-Based Modeling of Workflow Systems: An Overview. European Journal of Operational Research, 134(3), 664–676. doi:10.1016/S0377-2217(00)00292-7

Sampaio, A., & Henriques, P. (2008). Visual simulation of Civil Engineering activities: Didactic virtual models. International Conferences in Central Europe on Computer Graphics, Visualization and Computer Vision. Czech Republic: University of West Bohemia.

Sargent, R. G. (1996). Some Subjective Validation Methods Using Graphical Displays of Data. Proceedings of the 1996 Winter Simulation Conference, (pp. 345–351).

Sargent, R. G. (1996). Verifying and Validating Simulation Models. Proceedings of the 1996 Winter Simulation Conference, (pp. 55–64).

Sargent, R. G. (2000). Verification, Validation, and Accreditation of Simulation Models. Proceedings of the Winter Simulation Conference, (pp. 50-59).

Sargent, R. G. (2000, December). Doctoral colloquium keynote address: being a professional. In J. A. Joines, R. R. Barton, K. Kang & P. A. Fishwick (Eds.), Proceedings of the 2000 Winter Simulation Conference, Piscataway, NJ, (pp. 1595-1601). Washington, DC: IEEE.

Sarjoughian, H., & Zeigler, B. (2000). DEVS and HLA: Complementary paradigms for modeling and simulation? Simulation Transactions, 17(4), 187–197.

Sarkar, N. I., & Halim, S. A. (2008, June 23-26). Simulation of computer networks: simulators, methodologies and recommendations. Paper presented at the 5th IEEE International Conference on Information Technology and Applications (ICITA’08), Cairns, Queensland, Australia, (pp. 420-425).

Sarkar, N. I., & Lo, E. (2008, December 7-10). Indoor propagation measurements for performance evaluation of IEEE 802.11g. Paper presented at the IEEE Australasian Telecommunications Networks and Applications Conference (ATNAC’08), Adelaide, Australia, (pp. 163-168).

Sarkar, N. I., & Petrova, K. (2005, June 27-30). WebLan-Designer: a web-based system for interactive teaching and learning LAN design. Paper presented at the 3rd IEEE International Conference on Information Technology Research and Education, Hsinchu, Taiwan (pp. 328-332).

Sarkar, N. I., & Sowerby, K. W. (2006, November 27-30). Wi-Fi performance measurements in the crowded office environment: a case study. Paper presented at the 10th IEEE International Conference on Communication Technology (ICCT), Guilin, China (pp. 37-40).

Sarkar, N. I., & Sowerby, K. W. (2009, April 4-8). The combined effect of signal strength and traffic type on WLAN performance. Paper presented at the IEEE Wireless Communication and Networking Conference

Schriber, T. J., & Brunner, D. T. (1999). Inside discrete-event simulation software: how it works and how it matters. Proceedings of the 1999 Winter Simulation Conference, Phoenix, AZ, (pp. 72-80).

ScienceDaily. (2006). Digital Surgery with Touch Feedback Could Improve Medical Training. ScienceDaily.
(WCNC’09), Budapest, Hungary.
Scott, D., & Yasinsac, A. (2004). Dynamic probabilistic
Sasson, Y., Cavin, D., & Schiper, A. (2003). Probabilistic retransmission in ad hoc networks. Proceedings of the
broadcast for flooding in wireless mobile ad hoc networks. International Conference on Wireless Networks, (pp.
Proceedings of Wireless Communications and Network- 158-164).
ing Conference (WCNC ’03), 2(16-20), 1124-1130.
Searle, J., & Brennan, J. (2006). General Interoperability
Sawhney, A., & AbouRizk, S. M. (1995). HSM- Concepts. In Integration of Modelling and Simulation
Simulation-based planning method for construction (pp. 3-1 – 3-8), (Educational Notes RTO-EN-MSG-043).
projects. Journal of Construction Engineering and Neuilly-sur-Seine, France
Management, 121(3), 297–303. doi:10.1061/(ASCE)0733-
Sebastião, P. J. A. (1998). Simulação eficiente do desem-
9364(1995)121:3(297)
penho dos códigos TCH através de modelos estocásticos
Sawhney, A., & AbouRizk, S. M. (1996). Computer- [Efficient simulation to obtain the performance of TCH
ized tool for hierarchical simulation modeling. Journal codes using stochastic models], (Master Thesis, Instituto
of Computing in Civil Engineering, 10(2), 115–124. Superior Técnico – Technical University of Lisbon, 1998),
doi:10.1061/(ASCE)0887-3801(1996)10:2(115) Lisboa, Portugal.

Scalable Network Technologies. (2007). QualNet Devel- Sebastião, P. J. A., Cercas, F. A. B., & Cartaxo, A. V.
oper. Retrieved 20 April, 2007, from http://www.qualnet. T. (2002). Performance of TCH codes in a land mobile
com/products/developer.php satellite channel. 13th IEEE International. Symposium
on Personal, Indoor and Mobile Radio Communication.,
Schafer, T. M., Maurer, J., & Wiesbeck, W. (2002, Sep-
(PIMRC2002), Lisbon, Portugal, (pp. 1675-1679).
tember 24-28). Measurement and simulation of radio
wave propagation in hospitals. Paper presented at IEEE Seelen, L., Tijms, H., & Van Hoorn, M. (1985). Tables
56th Vehicular Technology Conference (VTC2002-Fall), for multi-server queues (pp. 1-449). New-York: Elsevier,
(pp. 792-796). Simon, S., Armel, W., (2003). The Use of Simulation to
Reduce the Length of Stay in an Emergency Department.
Schelling, T. C. (2006). Micromotives and macrobehavior
In S. Chick, et al (Ed.) Proceedings of the 2003 Winter
(2nd Ed.). New York: WW Norton and Company.
Simulation Conference (pp. 1907-1911). Washington,
Schmeiser, B. (2004, December). Simulation output DC: IEEE
analysis: A tutorial based on one research thread. Paper
Sen, A., & Sinha, A. P. (2005). A comparison of data
presented at 2004 Winter Simulation Conference, (pp.
warehousing methodologies. Communications of the
162-170).
ACM, 48(3), 79–84. doi:10.1145/1047671.1047673
Schmid, A. (2005). What is the truth of simulation?
Senge, P. (1990). The Fifth Discipline, The art & practice
Journal of Artificial Societies and Social Simulation,
of learning organisation. New York.
8(4), 5.
Shanahan, M. P. (2008). A spiking neuron model of corti-
Schmidt, D. C. (2006). Real-time CORBA with TAO.
cal broadcast and competition. Consciousness and Cogni-
Retrieved September 5th, 2008, from http://www.cse.
tion, 17(1), 288–303. doi:10.1016/j.concog.2006.12.005
wustl.edu/ schmidt/TAO.html.


Shanmugam, K. S. (1979). Digital and analog communication systems. Chichester, UK: John Wiley & Sons.

Shariat, M., & Hightower, R. Jr. (2007). Conceptualizing Business Intelligence Architecture. Marketing Management Journal, 17(2), 40–46.

Shi, J. (1995). Optimization for construction simulation. Ph.D. Thesis, Alberta University, Canada.

Shi, J., & AbouRizk, S. M. (1997). Resource-based modeling for construction simulation. Journal of Construction Engineering and Management, 123(1), 26–33. doi:10.1061/(ASCE)0733-9364(1997)123:1(26)

Shi, Y., Peng, Y., Kou, G., & Chen, Z. (2005). Classifying Credit Card Accounts For Business Intelligence And Decision Making: A Multiple-Criteria Quadratic Programming Approach. International Journal of Information Technology & Decision Making, 4(4), 581–599. doi:10.1142/S0219622005001775

Shinn, T. (2006). When is simulation a research technology? practices, markets, and. In G. K. J. Lenhard, & T. Shinn (Eds.), Simulation: Pragmatic construction of reality; sociology of the sciences yearbook (pp. 187-203). Dordrecht, The Netherlands: Springer.

Shnayder, V., Hempstead, M., Chen, B., Allen, G., & Welsh, M. (2004). Simulating the power consumption of large-scale sensor network applications. Paper presented at the 2nd international conference on Embedded networked sensor systems, (pp. 188-200).

Silver, R., Boahen, K., Grillner, S., Kopell, N., & Olsen, K. L. (2007). Neurotech for neuroscience: Unifying concepts, organizing principles, and emerging tools. The Journal of Neuroscience, 27(44), 11807–11819. doi:10.1523/JNEUROSCI.3575-07.2007

Simbad Project Home. (2007, Dec). Retrieved 1 2008, from Simbad Project Home: http://simbad.sourceforge.net/

Simon, H. A. (1962). The architecture of complexity. Proceedings of the American Philosophical Society, 106(6), 467–482.

Simon, H. A. (1996). The sciences of the artificial (3rd Ed.). Cambridge, MA: The MIT Press.

Simon, M. K., & Alouini, M. (2000). Digital communication over fading channels. Chichester, UK: John Wiley & Sons.

Simulation Interoperability Standards Organization (SISO). (2006). The Base Object Model Standard; SISO-STD-003-2006: Base Object Model (BOM) Template Standard; SISO-STD-003.1-2006: Guide for Base Object Model (BOM) Use and Implementation. Orlando, FL: SISO Documents.

DOD Simulations: Improved Assessment Procedures Would Increase the Credibility of Results. (1987). Washington, DC: U. S. General Accounting Office, PEMD-88-3.

Sincich, T. (Ed.). (1994). A Course in Modern Business Statistics. New York: Macmillan College Publishing Company.

Sinclair, J. B. (2004). Simulation of Computer Systems and Computer Networks: A Process-Oriented Approach. George R. Brown School of Engineering, Rice University, Houston, Texas, USA.

Siringoringo, W., & Sarkar, N. I. (2009). Teaching and learning Wi-Fi networking fundamentals using limited resources. In J. Gutierrez (Ed.), Selected Readings on Telecommunications and Networking (pp. 22-40). Hershey, PA: IGI Global.

Skansholm, J. (1997). C++ From the beginning. Harlow, UK: Addison-Wesley.

Slater, M., Steed, A., & Chrysanthou, Y. (2002). Computer Graphics and Virtual Environments: from Realism to Real-Time. Reading, MA: Addison Wesley.

Smith, W. (2005). Applying data mining to scheduling courses at a university. Communications of AIS, 16, 463–474.

Song, L., & AbouRizk, S. M. (2003). Building a virtual shop model for steel fabrication. Proceedings of the 2003 Winter Simulation Conference, New Orleans, LA, (pp. 1510-1517).


Spieckermann, A., Lehmann, A., & Rabe, M. (2004). Verifikation und Validierung: Überlegungen zu einer integrierten Vorgehensweise [Verification and validation: considerations for an integrated approach]. In K. Mertins & M. Rabe (Eds.), Experiences from the Future (pp. 263-274). Stuttgart, Germany: Fraunhofer IRB.

SpikeSNNS [computer software] (n.d.). Available from http://cortex.cs.may.ie/tools/spikeNNS/index.html

SpikeStream [computer software] (n.d.). Available from http://spikestream.sf.net

Srivastava, B., & Koehler, J. (2003). Web Service Composition - Current Solutions and Open Problems. Proceedings ICAPS 2003 Workshop on Planning for Web Services.

Srivastava, J., & Cooley, R. (2003). Web Business Intelligence: Mining the Web for Actionable Knowledge. INFORMS Journal on Computing, 15(2), 191–207. doi:10.1287/ijoc.15.2.191.14447

Stallings, W. (2005). Wireless Communications and Networks (2nd Ed.). Upper Saddle River, NJ: Prentice-Hall.

Stanislaw, H. (1986). Tests of computer simulation validation: What do they measure? Simulation & Games, 17(2), 173–191. doi:10.1177/0037550086172003

Stasko, J. T. (1991). Using direct manipulation to build algorithm animations by demonstration. In Proceedings of conference on human factors and computing systems (pp. 307-314), New Orleans, LA.

Stasko, J. T. (1997). Using student-built algorithm animations as learning aids. In The proceedings of the 28th SIGCSE technical symposium on computer science education (pp. 25-29), San Jose, CA. New York: ACM Press.

Stasko, J. T. (1998). Building software visualizations through direct manipulation and demonstration. In M. Brown, J. Domingue, B. Price, & J. Stasko (Eds.), Software visualization: Programming as a multimedia experience (pp. 103-118). Cambridge, MA: MIT Press.

Steele, R. D. (2002). The New Craft of Intelligence: Personal, Public and Political. VA: OSS International Press.

Sterman, J. (1992). Teaching Takes Off, Flight Simulator for Management Education, the Beer Game. Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA.

Stolba, N., & Tjoa, A. M. (2006). The relevance of data warehousing and data mining in the field of evidence-based medicine to support healthcare decision making. Enformatika, 11, 12–17.

SUN. (2006). JDK-ORB. Retrieved September 1st, 2008, from http://java.sun.com/j2se/1.5.0/docs/guide/idl/

Suzuki, H. (1977). A statistical model for urban radio propagation. IEEE Transactions on Communications, 25(7), 673–680. doi:10.1109/TCOM.1977.1093888

Swain, J. (2007). Biennial survey of discrete-event simulation software tools. OR/MS Today, 34(5), October.

Szczerbicka, H., et al. (2000, December). Conceptions of curriculum for simulation education (Panel). In J. A. Joines, R. R. Barton, K. Kang & P. A. Fishwick (Eds.), Proceedings of the 2000 Winter Simulation Conference, Piscataway, NJ (pp. 1635-1644). Washington, DC: IEEE.

Takai, M., Martin, J., & Bagrodia, R. (2001, October). Effects of wireless physical layer modeling in mobile ad hoc networks. Paper presented at MobiHOC, Long Beach, CA, (pp. 87-94).

Tanenbaum, A. S. (2003). Computer networks (4th ed.). Upper Saddle River, NJ: Prentice Hall.

Tang, W., Wan, T., & Patel, S. (2003, June 3-5). Real-time crowd movement on large scale terrains. In Theory and Practice of Computer Graphics. Washington, DC: IEEE Computer Society.

Tani, J., Nishimoto, R., & Paine, R. W. (2008). Achieving “organic compositionality” through self-organization: Reviews on brain-inspired robotics experiments. Neural Networks, 21(4), 584–603. doi:10.1016/j.neunet.2008.03.008

Tecchia, F., Loscos, C., Conroy, R., & Chrysanthou, Y. (2003). Agent behaviour simulator (abs): A platform for urban behaviour development. In Conference Proceedings of Theory and Practice of Computer Graphics. Washington, DC: IEEE.

Tecchia, F., & Chrysanthou, Y. (2001). Agent behavior simulator. Web Document, University College London, Department of Computer Science.

OPNET Technologies. (2009). Retrieved January 20, 2009, from www.opnet.com

Tetcos. (2007). Products. Retrieved 20 April, 2007, from http://www.tetcos.com/software.html

Tickoo, O., & Sikdar, B. (2003). On the impact of IEEE 802.11 MAC on traffic characteristics. IEEE Journal on Selected Areas in Communications, 21(2), 189–203. doi:10.1109/JSAC.2002.807346

Tischer, T. E., & Kuprenas, J. A. (2003). Bridge falsework productivity – measurement and influences. Journal of Construction Engineering and Management, 129(3), 243–250. doi:10.1061/(ASCE)0733-9364(2003)129:3(243)

Tolk, A. (1999). Requirements for Simulation Systems when being used as Decision Support Systems. Proceedings Fall Simulation Interoperability Workshop, I, 29–35. IEEE CS Press.

Tolk, A., & Diallo, S. Y. (2008). Model-Based Data Engineering for Web Services. In R. Nayak et al. (Eds.), Evolution of the Web in Artificial Intelligence Environments (pp. 137-161). Berlin: Springer.

Tolk, A., & Jain, L. C. (Eds.). (2008). Complex Systems in the Knowledge-based Environment. Berlin: Springer Verlag.

Tolk, A., Diallo, S. Y., & Turnitsa, C. D. (2008). Implied Ontological Representation within the Levels of Conceptual Interoperability Model. International Journal of Intelligent Decision Technologies (IDT), 2(1), 3–19.

Tolk, A., Diallo, S. Y., Turnitsa, C. D., & Winters, L. S. (2006). Composable M&S Web Services for Net-centric Applications. Journal for Defense Modeling and Simulation, 3(1), 27–44. doi:10.1177/875647930600300104

Tommelein, I. D. (1999). Travel-time simulation to locate and staff temporary facilities under changing construction demand. Proceedings of the 1999 Winter Simulation Conference, Phoenix, AZ, (pp. 978-984).

Tommelein, I. D., Carr, R. I., & Odeh, A. M. (1994). Assembly of simulation networks using designs, plans, and methods. Journal of Construction Engineering and Management, 120(4), 796–815. doi:10.1061/(ASCE)0733-9364(1994)120:4(796)

Tonfoni, G., & Jain, L. C. (Eds.). (2003). Innovations in Decision Support Systems. Australia: Advanced Knowledge International.

Tonnelier, A., Belmabrouk, H., & Martinez, D. (2007). Event-driven simulations of nonlinear integrate-and-fire neurons. Neural Computation, 19, 3226–3238. doi:10.1162/neco.2007.19.12.3226

CPN Tools. (2009). Retrieved from http://wiki.daimi.au.dk/cpntools/cpntools.wiki

Tosic, V., Pagurek, B., Esfandiari, B., & Patel, K. (2001). On the Management of Compositions of Web Services. In Proceedings Object-Oriented Web Services (OOPSLA).

Touran, A. (1990). Integration of simulation with expert systems. Journal of Construction Engineering and Management, 116(3), 480–493. doi:10.1061/(ASCE)0733-9364(1990)116:3(480)

Truong, T. H., Rothschild, B. J., & Azadivar, F. (2005). Decision support system for fisheries management. In Proceedings of the 37th Conference on Winter Simulation, (pp. 2107-2111).

Tseng, Y., Ni, S., Chen, Y., & Sheu, J. (2002). The broadcast storm problem in a mobile ad hoc network. Journal of Wireless Networks, 8, 153–167. doi:10.1023/A:1013763825347

Turban, E., Aronson, J. E., Liang, T. P., & Sharda, R. (2007). Decision Support and Business Intelligence Systems (8th Ed.). Upper Saddle River, NJ: Pearson Prentice Hall.

Turner, M. J., Richardson, R., Le Blanc, A., Kuchar, P., Ayesh, A., & Al Hudhud, G. (2006, September 14). Roboviz a collaborative user and robot. In CompuSteer Workshop environment network testbed. London, UK: Oxford University.

Uzam, M., Avci, M., & Kürsat, M. (2001). Digital Hardware Implementation of Petri Nets Based Specification: Direct Translation from Safe Automation Petri Nets to Circuit Elements. In Proc. DESDes’01, (pp. 25–33), Zielona Gora, Poland.

Vaduva, A., & Vetterli, T. (2001). Metadata management for data warehousing: an overview. International Journal of Cooperative Information Systems, 10(3), 273. doi:10.1142/S0218843001000357

Valk, R. (1978, July). Self-Modifying Nets, a Natural Extension of Petri Nets. In G. Ausiello & C. Böhm (Eds.), Proceedings of the Fifth Colloquium on Automata, Languages and Programming (ICALP’78), (pp. 464-476). Udine, Italy: Springer.

Valk, R. (1998, June). Petri Nets as Token Objects: An Introduction to Elementary Object Nets. In J. Desel & M. Silva (Eds.), Proceedings of the 19th International Conference on Applications and Theory of Petri Nets (ICATPN 1998) (pp. 1-25). Lisbon, Portugal: Springer.

van Dam, A. (1999). Education: the unfinished revolution. ACM Computing Surveys (CSUR), 31(4). doi:10.1145/345966.346038

van der Aalst, W. M. P. (1996). Structural Characterizations of Sound Workflow Nets (Computing Science Reports No. 96/23). Eindhoven, the Netherlands: Eindhoven University of Technology.

van der Aalst, W. M. P., & Basten, T. (2002, January). Inheritance of Workflows: An Approach to Tackling Problems Related to Change. Theoretical Computer Science, 270(1-2), 125–203. doi:10.1016/S0304-3975(00)00321-2

van der Aalst, W. M. P., & Jablonski, S. (2000, September). Dealing with Workflow Change: Identification of Issues and Solutions. International Journal of Computer Systems, Science, and Engineering, 15(5), 267–276.

Vanegas, J. A., Bravo, E. B., & Halpin, D. W. (1993). Simulation technologies for planning heavy construction processes. Journal of Construction Engineering and Management, 119(2), 336–354. doi:10.1061/(ASCE)0733-9364(1993)119:2(336)

vanguardsw. (n.d.). Retrieved 2008, from www.vanguardsw.com

Varga, A. (1999). Using the OMNeT++ discrete event simulation system in education. IEEE Transactions on Education, 42(4), 1–11. doi:10.1109/13.804564

Varga, A. (2001, June 6-9). The OMNeT++ discrete event simulation system. Paper presented at the European Simulation Multiconference (ESM’01), Prague, Czech Republic.

Vennix, J. (1996). Group model building: facilitating team learning using system dynamics. Chichester, UK: Wiley.

Verbeke, J. M., Hagmann, C., & Wright, D. (2008, February 1). Retrieved October 1, 2008, from Computational Nuclear Physics: http://nuclear.llnl.gov/simulation/fission.pdf

Vinoski, S. (1997). CORBA - Integrating diverse applications within distributed heterogeneous environments. IEEE Communications Magazine, 35(2), 46–55. doi:10.1109/35.565655

VMware Player [computer software] (n.d.). Available from http://www.vmware.com/products/player/

Walfish, J., & Bertoni, H. L. (1988). A theoretical model of UHF propagation in urban environments. IEEE Transactions on Antennas and Propagation, 36, 1788–1796. doi:10.1109/8.14401

Wan, T. R., & Tang, W. (2003). An intelligent vehicle model for 3d visual traffic simulation. In IEEE International Conference on Visual Information Engineering, VIE.

Wan, T., & Tang, W. (2004). Agent-based real time traffic control simulation for urban environment. IEEE Transactions on Intelligent Transportation Systems.

Wang, Z., & Lehmann, A. (2008). Verification and Validation of Simulation Models and Applications. Hershey, PA: IGI Global.


Watson, H. J., Wixom, B. H., Hoffer, J. A., Lehman, R. A., & Reynolds, A. M. (2006). Real-Time Business Intelligence: Best Practices at Continental Airlines. Journal of Information Systems Management, 7–18.

Weaver, W. (1948). Science and complexity. American Scientist, 36.

Weber, D. O. (2006). Queue Fever: Part 1 and Part 2. Hospitals & Health Networks, Health Forum. Retrieved from http://www.IHI.org

Weiss, S. M., Buckley, S. J., Kapoor, S., & Damgaard, S. (2003). Knowledge-Based Data Mining. SIGKDD 03, 456–461. Washington, DC.

Wheeler, D. A. (n.d.). SLOCCount [computer software]. Available from http://www.dwheeler.com/sloc/

Wilton, D. (2001). The Application Of Simulation Technology To Military Command And Control Decision Support. In Proceedings Simulation Technology and Training Conference (SimTecT), Canberra, Australia.

Winsberg, E. (1999). Sanctioning models: The epistemology of simulation. Science in Context, 12(2), 275–292. doi:10.1017/S0269889700003422

Winsberg, E. (2001). Simulations, models, and theories: Complex physical systems and their representations. Philosophy of Science, 68(3), 442–454. doi:10.1086/392927

Winsberg, E. (2003). Simulated experiments: Methodology for a virtual world. Philosophy of Science, 70, 105–125. doi:10.1086/367872

Winsberg, E. (2006). Handshaking your way to the top: Simulation at the nanoscale. In G. K. J. Lenhard, & T. Shinn (Eds.), Simulation: Pragmatic construction of reality; sociology of the sciences yearbook (pp. 139-151). Dordrecht, The Netherlands: Springer.

Winsberg, E. (2006). Models of success versus the success of models: Reliability without truth. Synthese, 152, 1–19. doi:10.1007/s11229-004-5404-6

Wixom, B. H. (2004). Business Intelligence Software for the Classroom: Microstrategy Resources on the Teradata University Network. Communications of the Association for Information Systems, 14, 234–246.

Wolfram, S. (1994). Cellular Automata and Complexity: Collected Papers. Reading, MA: Addison-Wesley Publishing Company.

Wolstenholme, E. (1990). System Enquiry: a system dynamics approach. New York: Wiley.

Wu, D., Olson, D. L., & Dong, Z. Y. (2006). Data mining and simulation: a grey relationship demonstration. International Journal of Systems Science, 37(13), 981–986. doi:10.1080/00207720600891521

Wullink, G., Van Houdenhoven, M., Hans, E., van Oostrum, J., van der Lans, M., & Kazemier, G. (2007). Closing Emergency Operating Rooms Improves Efficiency.

Wutzler, T. (2008). Effect of the Aggregation of Multi-Cohort Mixed Stands on Modeling Forest Ecosystem Carbon Stocks. Silva Fennica, 42(4), 535–553.

Wutzler, T., & Mund, M. (2007). Modelling mean above and below ground litter production based on yield tables. Silva Fennica, 41(3), 559–574.

Wutzler, T., & Reichstein, M. (2007). Soils apart from equilibrium – consequences for soil carbon balance modelling. Biogeosciences, 4, 125–136.

Wutzler, T., & Sarjoughian, H. S. (2007). Interoperability among parallel DEVS simulators and models implemented in multiple programming languages. Simulation Transactions, 83(6), 473–490. doi:10.1177/0037549707084490

Xilinx. (2009). Retrieved from http://www.xilinx.com/

Xu, D., Yin, J., Deng, Y., & Ding, J. (2003, January). A Formal Architectural Model for Logical Agent Mobility. IEEE Transactions on Software Engineering, 29(1), 31–45. doi:10.1109/TSE.2003.1166587

Yacoub, M. D. (2000). Fading distributions and co-channel interference in wireless systems. IEEE Antennas & Propagation Magazine, 42(1), 150–159. doi:10.1109/74.826357


Yacoub, M., Baptista, J. E. V., & Guedes, L. G. R. (1999). On higher order statistics of the Nakagami-m distribution. IEEE Transactions on Vehicular Technology, 48(3), 790–794. doi:10.1109/25.764995

Yegani, P., & McGillem, C. D. (1991). A statistical model for the factory radio channel. IEEE Transactions on Communications, 39(10), 1445–1454. doi:10.1109/26.103039

Yilmaz, L. (2007). Using meta-level ontology relations to measure conceptual alignment and interoperability of simulation models. In Proceedings of the Winter Simulation Conference, (pp. 1090-1099).

Yilmaz, L., Ören, T., & Aghaee, N. (2006). Intelligent agents, simulation, and gaming. Journal for Simulation and Gaming, 37(3), 339–349. doi:10.1177/1046878106289089

Youzhi, X., & Jie, S. (2004). The agent-based model on real-time data warehousing. Journal of Systems Science & Information, 2(2), 381–388.

Zack, J., Rainer, R. K., & Marshall, T. E. (2007). Business Intelligence: An Analysis of the Literature. Information Systems Management, 25, 121–131.

Zeigler, B. P. (1986). Toward a Simulation Methodology for Variable Structure Modeling. In Elzas, Oren, & Zeigler (Eds.), Modeling and Simulation Methodology in the Artificial Intelligence Era.

Zeigler, B. P. (2003). DEVS Today: Recent advances in discrete event-based information. Proceedings of the 11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems, (pp. 148-162).

Zeigler, B. P., & Hammonds, P. E. (2007). Modeling & Simulation-Based Data Engineering: Introducing Pragmatics into Ontologies for Net-Centric Information Exchange. New York: Academic Press.

Zeigler, B. P., & Sarjoughian, H. S. (2002). Implications of M&S Foundations for the V&V of Large Scale Complex Simulation Models (invited paper). In Verification & Validation Foundations Workshop, Laurel, Maryland, (pp. 1–51). Society for Computer Simulation. Retrieved from https://www.dmso.mil/public/transition/vva/foundations

Zeigler, B. P., Praehofer, H., & Kim, T. G. (2000). Theory of modeling and simulation (2nd Ed.). New York: Academic Press.

Zeigler, B. P., Sarjoughian, H. S., & Praehofer, H. (2000). Theory of quantized systems: DEVS simulation of perceiving agents. Cybernetics and Systems, 31(6), 611–647. doi:10.1080/01969720050143175

Zein, H. (2006). A framework for planning and optimizing bridge deck construction using computer simulation. M.Sc. Thesis, Cairo University, Cairo, Egypt.

Zhang, C., & Hammad, A. (2007). Agent-based simulation for collaborative cranes. Proceedings of the 2007 Winter Simulation Conference, Washington, DC, (pp. 2051-2056).

Zhu, C., Wang, O. W. W., Aweya, J., Oullette, M., & Montuno, D. Y. (2002). A comparison of active queue management algorithms using the OPNET Modeler. IEEE Communications Magazine, 40(6), 158–167. doi:10.1109/MCOM.2002.1007422


About the Contributors

Evon M. O. Abu-Taieh holds a Ph.D. in simulation; she earned both her Master of Science and Bachelor’s degrees in the USA and has a total of 19 years of experience. She is the author of many renowned research papers in the airline industry, IT, PM, KM, GIS, AI, simulation, security and ciphering. She is editor/author of the books Utilizing Information Technology Systems Across Disciplines: Advancements in the Application of Computer Science (IGI, USA), Handbook of Research on Discrete Event Simulation Environments: Technologies and Applications (IGI, USA), and Simulation and Modeling: Current Technologies and Applications (IGI, USA); guest editor of the Journal of Information Technology Research (JITR); and an editorial board member of the International Journal of E-Services and Mobile Applications (IJESMA), the International Journal of Information Technology Project Management (IJITPM), and the International Journal of Information Systems and Social Change (IJISSC). In her capacity as head of the IT department in the Ministry of Transport for 10 years, she developed systems such as the Ministry of Transport databank, an auditing system for airline reservation systems, and the Maritime Databank, among others. She has also worked at the Arab Academy in her capacity as Assistant Professor, Dean’s Assistant, and London School of Economics (LSE) Director, and has been appointed many times as track chair and reviewer for international conferences including IRMA, CISTA, and WMSCI. She enjoys both the academic arena and the hands-on job. (abutaieh@gmail.com)

Asim Abdel Rahman El Sheikh received a BSc (honors) from the University of Khartoum (Sudan), and an MSc and a PhD from the University of London (UK). El Sheikh has worked for the University of Khartoum, Philadelphia University (Jordan) and the Arab Academy for Banking & Financial Sciences (Jordan), where he is currently the dean of the faculty of information systems & technology. El Sheikh’s areas of research interest are computer simulation and software engineering. Email address: a.elsheikh@aabfs.org.

***

Sattar J Aboud is a Professor at Middle East University in Jordan. He received his education in the United Kingdom. Dr. Aboud has served his profession in many universities, and he was awarded the Quality Assurance Certificate of Philadelphia University, Faculty of Information Technology. His research interests include symmetric and asymmetric cryptography, verification and validation, and performance evaluation.

Copyright © 2010, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.

Jeihan Abu-Tayeh attended the Rosary School in Amman and then acquired her bachelor’s degree in Pharmaceutical Science and Management from Al-Ahlyya Amman University. In 2002, she earned her M.B.A. with emphasis on “International Marketing & Negotiations Technique”, with an outstanding G.P.A. of 3.87 out of 4 (with honors), from Saint Martin’s College, State of Washington, U.S.A. Currently, Jeihan Abu-Tayeh is Head of the International Agencies & Commissions Division at the Jordanian Ministry of Planning and International Cooperation. In this capacity, she maintains sound cooperation relations with the World Bank Group, as well as the UN agencies, in order to extend to Jordan financial and technical support for developmental projects through setting appropriate programs and plans, and building and improving relations with those organizations. This is achieved through loan and aid programs, by means of designing project proposals and conducting problem and needs assessments for the concerned governmental and non-governmental Jordanian entities, followed by active participation in extensive evaluation processes conducted by either the UN Country Team or the World Bank Group Country Team.

Hussein Al-Bahadili is an associate professor at the Arab Academy for Banking & Financial Sci-
ences (AABFS). He earned his M.Sc and PhD from the University of London (Queen Mary College)
in 1988 and 1991, respectively. He received his B.Sc in Engineering from the University of Baghdad in
1986. In addition to his academic activities at the University of Baghdad, he worked for the Iraqi Atomic
Energy Commission (IAEC) for more than 15 years. He was head of the Department of Software at the
Centre of Engineering Design, and then he became Director of the Centre of Information Technology.
He was the INIS Liaison Officer for Iraq at the International Atomic Energy Agency (IAEA) from 1997
to 2000. Dr. Al-Bahadili is a member of the Wireless Networks and Communications Group (WNCG)
at the School of Engineering, University of Brunel, United Kingdom. He is also a visiting researcher at
the Centre of Osmosis Research and Applications (CORA), University of Surrey, United Kingdom. He
has published many papers in different fields of science and engineering in numerous leading scholarly
and practitioner journals, and presented at leading world-level scholarly conferences. His research
interests include parallel and distributed computing, wireless communication, data communication
systems, computer networks, cryptography, network security, data compression, image processing,
data acquisition, computer automation, electronic system design, computer architecture, and artificial
intelligence and expert systems.

Mohammad A. Al-Fayoumi is a Professor at Middle East University in Jordan, where he is now the dean of the Faculty of Information Technology. He received his education in Romania. Dr. Al-Fayoumi has served his profession in many universities and has been awarded many prizes and certificates from universities across the Arab world. His research interests include the areas of information security and simulation and modeling.

Ghada A.K. Al-Hudhud holds a PhD from De Montfort University, United Kingdom. She serves as the head of the Department of Software Engineering at Al-Ahliya Amman University, Jordan. Her research interests include modelling systems within virtual environments and image compression. She has been working as a key partner in an EPSRC project funded by the Computational Steering Network.

571
About the Contributors

Mouhib Alnoukari is currently preparing a PhD in Management Information Systems (MIS) at the Arab Academy for Banking and Financial Sciences, Damascus, Syria. He holds an MBA from Damascus University, an MS in Mathematics from Damascus University, and an MS in Computer Engineering from Montpellier University, France. He is currently working as the ICT director at the Arab International University. He has published papers in conferences and journals such as ITSim 2008 (Malaysia), ICTTA'08 (Syria), EQAF 2008 (Budapest), the Damascus University Journal, and others.

Mohamed Alnuaimi is a Professor at Middle East University in Jordan, where he is now the Vice President of the University. He received his education from Krakoff University, Poland. Dr. Alnuaimi has served his profession at many universities and has been awarded many prizes, medals, and certificates from universities across the Arab world. His research interests include applied statistics and simulation and modeling.

Raed Musbah Al-Qirem has a PhD in Management Information Systems from the University of Sunderland, United Kingdom. His research focused on systems thinking methodologies, namely System Dynamics and the Viable System Model (managerial cybernetics). Drawing on his experience in banking and finance, he constructed a decision support system that uses systems thinking to evaluate the creditworthiness of firms applying for bank credit. He is now an Assistant Professor in the MIS department at Al-Zaytoonah University of Jordan.

Zaidoun Alzoabi is currently preparing a PhD in Management Information Systems at the Arab Academy for Banking and Financial Sciences, Damascus, Syria. He holds a Master in Computer Applications (MCA) from J.M.I University, New Delhi, India. He is currently working at the Arab International University as the Quality Assurance director, and also serves as an information and communication consultant on the Modernization of the Ministry of Finance project (an EU-funded project). He has published papers in conferences and journals such as ITSim 2008 (Malaysia), ICTTA'08 (Syria), EQAF 2008 (Budapest), and others.

Lorenzo Capra was born in Monza, Italy, and studied at the University of Milan, where he obtained his Laurea degree in Computer Science. After collaborating with the Automation Research Center at the National Electric Power Provider (ENEL), he moved to the University of Turin, where he received a Ph.D. in Computer Science. He is currently an assistant professor at the Department of Informatics and Communication (DICO) of the University of Milano, Italy. His research interests include the analysis and simulation of High-Level Petri Nets and formal methods in software engineering.

Adolfo Cartaxo received the degree of "Licenciatura" and the Ph.D., both in Electrical and Computer Engineering, from Instituto Superior Técnico (IST) in 1985 and 1992, respectively. He is currently Associate Professor at the Electrical and Computer Engineering Department of IST. He joined the Optical Communications Group (OCG) of IST as a researcher in 1992, and he is now the leader of the OCG, conducting research on optical fibre telecommunication systems and networks. He is a senior member of the IEEE Lasers and Electro-Optics Society. He has authored or co-authored more than 65 journal publications (15 as first author) as well as more than 90 international conference papers, and he is co-author of two international patents. His current research areas of interest include fibre optic communication systems and networks, and simulation of telecommunication systems.


Lucia Cassettari earned her degree in management engineering in 2004 at the University of Genoa. She is currently a researcher at DIPTEM, University of Genoa, working on simulator-based applications for industrial plants, with particular attention to the application of DOE and optimization techniques to industrial plant problems using simulation.

Walter Cazzola (Ph.D.) is currently an assistant professor at the Department of Informatics and Communication (DICo) of the University of Milano, Italy, and the chair of the ADAPT research group (http://adapt-lab.dico.unimi.it). His research interests include reflection, aspect-oriented programming, and programming methodologies and languages. He has written several technical papers on reflection and aspect-oriented programming and has served as a reviewer for others. Details can be found on his home page: http://homes.dico.unimi.it/~cazzola.

Francisco Cercas received his Licenciatura, M.Sc., and Ph.D. degrees from Instituto Superior Técnico (IST), Technical University of Lisbon, Portugal, in 1983, 1989 and 1996, respectively. He worked for Portuguese industry as a research engineer and developed the work of his M.Sc. and Ph.D. theses as an invited researcher at the Satellite Centre of the University of Plymouth, UK. This resulted in new contributions to the characterization of DDS (Direct Digital Frequency Synthesizer) signals and in a new class of codes named TCH after Tomlinson, Cercas and Hughes. He lectured for 15 years at IST and became Associate Professor in 1999 at ISCTE, Lisbon, where he is the Head of the Department of Sciences and Technologies of Information. He has over 100 refereed international publications, including conference papers, magazine articles, book chapters, and a patent. His main research interests focus on mobile and personal communications, satellite communications, channel coding, and ultra wide band communications.

David Gamez completed his BA in natural sciences and philosophy at Trinity College, Cambridge,
and took a PhD in Continental philosophy at the University of Essex. After converting to IT, he worked
on the EU Safeguard project, which developed an agent system to protect electricity and telecommunica-
tions management networks against attacks and accidents. When the Safeguard project ended he took
up a PhD position on Owen Holland’s CRONOS project. During this PhD he developed a theoretical
framework for machine consciousness, developed the SpikeStream neural simulator and made predic-
tions about the representational and phenomenal states of a spiking neural network.

Brian L. Heath is a DAGSI Fellow and currently a Ph.D. candidate in Engineering, with a focus in Industrial and Human Systems, at Wright State University (Dayton, OH, USA). In 2008 he received an M.S. in Industrial and Human Factors Engineering from Wright State University, and in 2006 he received a
B.S. in Industrial Engineering from Kettering University (Flint, MI, USA). He is a member of INFORMS
and the Institute of Industrial Engineering (IIE). His research interests include agent-based modeling,
simulation, validation philosophy, scientific model building, work measurement, and statistics.

Raymond R. Hill is a Professor of Operations Research with the Air Force Institute of Technology. He has a Ph.D. in Industrial and Systems Engineering from The Ohio State University. His research interests are in the areas of applied statistics and experimental design, mathematical modeling, combinatorial optimization, and simulation, including agent-based modeling. He is a member of INFORMS and the Institute of Industrial Engineering (IIE), and an associate editor for the Journal of Simulation, Military Operations Research, and the Journal of Defense Modeling and Simulation.

Alexander Kolker is currently Outcomes Operations Project Manager at Children's Hospital of Wisconsin, Milwaukee, Wisconsin. He has been extensively involved in various applications of healthcare management science and operations research using discrete event simulation, from hospital capacity expansion planning to patient flow improvement and optimized staff utilization. He actively publishes
in peer reviewed journals and speaks at conferences in the area of simulation and management science
applications for health care. Previously he has been with Froedtert Hospital, and with General Electric
Co, Healthcare Division, as a simulation specialist and reliability engineer. Alex holds a PhD in applied
mathematics from the Moscow Technical University, and is an ASQ certified Reliability engineer.

Ari Korhonen has been Adjunct Professor of Computer Science (specialising in Software Visualiza-
tion) since 2006. He is currently Lecturing Researcher of the Faculty of Information and Natural Sci-
ences in the Helsinki University of Technology. He holds M.Sc. (Tech), Lic.Sc. (Tech), and D.Sc. (Tech)
in Computer Science, all from the Helsinki University of Technology, Finland. His previous positions
include research positions and acting professor at the same university. He established the Software
Visualization Group at the Helsinki University of Technology in 2000 and has been its leader since.
He has been the manager of several research projects, including AAFAS (2005-2008), funded by the
Academy of Finland. A former secretary of The Finnish Society for Computer Science (1999-2001), he
is currently the editor of its journal Tietojenkäsittelytiede (2002-). At present he belongs to the board in
IEEE Education Society Chapter for the Joint Norway/Denmark/Finland/Iceland/Sweden Sections. He
has regularly refereed for major journals and conferences, including the ACM Journal on Educational Resources
in Computing, Educational Technology & Society journal, The Baltic Sea Conference on Computing
Education Research, the ACM Annual Conference on Innovation and Technology in Computer Science
Education, and the ACM Technical Symposium on Computer Science Education. In addition, he has
served on the Program Committees for the 7th and 8th Baltic Sea Conferences on Computing Education
Research, and the 3rd, 4th, and 5th Program Visualization Workshops. His research interests include data structures and algorithms in software visualization, especially various applications of computer-aided learning environments in computer science education. His current work is concerned with software tools and principles in the area of automatic assessment systems.

Hana Kubatova received her Ph.D. (CSc.) degree in Computer Science and Engineering at the Czech
Technical University in Prague (CTU) in 1987. She currently works as an associate professor and as a
deputy head of the Department of Computer Science and Engineering at the CTU. She leads the VLSI research group, with 20 members, whose areas of interest include: Petri Nets in modelling,
simulation and hardware design, design and evaluation of heuristic techniques for selected problems
in VLSI systems, reconfigurable computing, HW/SW co-design methodologies, embedded processor
cores for FPGA, design for testability, BIST on circuit and system level, design and modeling of fault-
tolerant and dependable systems.

György Lipovszki was born in Miskolc, Hungary, and studied at the Budapest University of Technology and Economics, where he graduated in 1975 in electronics sciences. He is now Associate Professor at the Department of Mechatronics, Optics and Engineering Informatics, and his research field is the development of simulation frame systems in different programming environments. He is a member of the Editorial Board of the International Journal for the Scholarship of Teaching and Learning.

Mohamed Marzouk, Ph.D., PMP, is Associate Professor in the Structural Engineering Department, Faculty of Engineering, Cairo University. He has 12 years of experience in civil engineering. His expertise spans structural engineering, project management, contract administration, and construction engineering and management. His experience covers different phases of projects, including design, construction, monitoring, research, consulting, and project management. Dr. Marzouk is a certified Project Management Professional (PMP®). He has authored and co-authored over 40 scientific publications. His research interests include simulation and optimization of construction processes, object-oriented simulation, fuzzy logic and its applications in construction, risk analysis, and decision analysis. Dr. Marzouk is currently involved in several academic and scientific committees.

Richard Membarth received the postgraduate diploma in computer and information sciences from
the Auckland University of Technology, Auckland, New Zealand in 2007, and the Diploma degree in
computer science from the University of Erlangen-Nuremberg, Erlangen, Germany in 2008. Richard is
currently working toward his Ph.D. degree in computer science in the Department of Computer Science
at the University of Erlangen-Nuremberg, Erlangen, Germany. His research interests include parallel
computer architectures and programming models for medical imaging as well as invasive computing.

Istvan Molnar was born in Budapest and educated at the Budapest University of Economic Sciences (currently Corvinus University), where he received his MSc. and PhD. He completed his postdoctoral studies in Darmstadt, Germany, and in 1996 he received his CSc. degree from the Hungarian Academy of Sciences. Currently, he is an Associate Professor at the Bloomsburg University of Pennsylvania. His main fields of interest are microsimulation, simulation optimization, simulation software technology, and simulation education. Dr. Molnar is a member of the Editorial Board of the International Journal of Mobile Learning and Organization, published by Inderscience Publishers.

Roberto Mosca is Full Professor of "Industrial Plants Management" and "Economy and Business Organization" at the DIPTEM (Department of Industrial Production, Thermoenergetics and Mathematical Modelling), University of Genoa. He has worked in the simulation sector since 1969, using discrete and stochastic industrial simulators for off-line and on-line applications. He has several times been national coordinator of research projects of relevant national interest. His research work focuses on original applications of DOE and RSM to simulation experiments. He is the author of about 200 scientific papers published in international conferences and journals. Currently he is Director of DIPTEM at the University of Genoa.

Roberto Revetria earned his degree in mechanical engineering at the University of Genoa. He com-
pleted his PhD in Mechanical Engineering in 2001. He is currently involved, as Associate Professor, in
the DIPTEM of Genoa University, working on advanced modeling projects applied to ERP integration
and maintenance planning applied to industrial case studies. He is active in developing projects involv-
ing simulation with special attention to HLA (High Level Architecture).


Hessam S. Sarjoughian is Assistant Professor of Computer Science and Engineering at Arizona State University in Tempe, Arizona. Sarjoughian is Co-Director of the Arizona Center for Integrative Modeling & Simulation (ACIMS). His research focuses on modeling and simulation methodologies, model composability, distributed co-design modeling, visual simulation modeling, and agent-based simulation. He led the development of the Online Masters of Engineering in Modeling & Simulation in the Fulton School of Engineering at ASU in 2004. He was among the pioneers who established the Modeling & Simulation Professional Certification Commission in 2001. His research has been supported by NSF, Boeing, DISA, Intel, Lockheed Martin, Northrop Grumman, and US Air Force.

Nurul Sarkar is a Senior Lecturer in the School of Computing and Mathematical Sciences at AUT
University, Auckland, New Zealand. He has more than 13 years of teaching experience in universities
at both undergraduate and postgraduate levels and has taught a range of subjects, including computer
networking, data communications, computer hardware, and eCommerce. His first edited book, "Tools for Teaching Computer Networking and Hardware Concepts", was published by IGI Global Publishing in 2006. Nurul has published more than 80 research papers in international refereed journals,
conferences, and book chapters, including the IEEE Transactions on Education, the International Jour-
nal of Electrical Engineering Education, the International Journal of Information and Communication
Technology Education, the International Journal of Business Data Communications and Networking,
Measurement Science & Technology, and SIGCSE Bulletin. Nurul was the recipient of Academic Staff
Doctoral Study Award, and co-recipient of the 2006 IRMA International Conference Best Paper Award
for a fundamental paper on the modelling and simulation of wireless networks. Nurul’s research interests
are in multi-disciplinary areas, including wireless network architecture, performance modelling and
evaluation of wireless networks, radio propagation measurements, network security, simulation and
modelling, intelligent agents, and tools to enhance methods for teaching and learning computer net-
working and hardware concepts. Nurul is a member of various professional organisations and societies,
including IEEE Communications Society, Information Resources Management Association (IRMA),
and ACM New Zealand Bulletin. He served as Associate technical editor for the IEEE Communications
Magazine; Associate editor for Advances in Business Data Communications and Networking book series;
editor for Encyclopaedia of Information Technology Curriculum Integration book series; Chairman of
the IEEE New Zealand Communications Society Chapter, and Executive peer reviewer of the Journal
of Educational Technology & Society.

Pedro Sebastião received the BSc degree in Electronic and Telecommunication and Computing from ISEL, Polytechnic Institute of Lisbon, Portugal, in 1992. He graduated in Electrical and Computing Engineering and received the MSc degree in Electrical and Computer Science from IST, Technical University of Lisbon, Portugal, in 1995 and 1998, respectively. From 1992 to 1998, he was with the Department of Studies in the Portuguese defence industries. In 1995, he joined IT, the Portuguese Telecommunications Institute. From 1998 to 2000, he was with the Communications Business Unit at Siemens. From 1999 to 2005, he was also a lecturer in the Department of Information and Communication Technologies at ISG, High Management Institute. Since 2005, he has been a lecturer in the Department of Sciences and Information Technologies at Lisbon University Institute (ISCTE). He has authored more than 40 international publications, including conference papers, magazine articles and book chapters. His current research interests are stochastic models, efficient simulation algorithms, satellite, mobile and personal communication systems, and planning tools.


Andreas Tolk is Associate Professor for Engineering Management and Systems Engineering at Old Dominion University in Norfolk, VA, USA. He received his Ph.D. and M.S. in Computer Science from the University of the Federal Armed Forces in Munich, Germany. More than 25 of his conference papers have been recognized as outstanding contributions. He is affiliated with the Virginia Modeling, Analysis and Simulation Center in Suffolk, VA, USA. His research targets the integration of Engineering Management, Modeling and Simulation, and Systems Engineering methods and principles, in particular for Complex Systems and System of Systems applications.

Thomas Wutzler is a junior researcher at the Max Planck Institute for Biogeochemistry in Jena,
Germany. His research focuses on understanding and modelling the carbon cycle of terrestrial ecosystems
with emphasis on soil carbon processes, uncertainties, and problems of scales. He graduated as a master
of computer science at the technical university in Dresden, Germany. Then he continued with research
in earth system sciences and earned a PhD in natural science. He strives to provide communication and
interfaces between the research communities of simulation computer science and earth system sciences.
His research has been supported by the German Environmental Foundation.

Saad Ghaleb Yaseen has a PhD in Management Information Systems. He is an Associate Professor and head of the MIS Department in the Faculty of Economics and Administrative Sciences at Al-Zaytoonah University of Jordan. He has conducted over 40 specialized studies in fields such as IT, IS, e-management, and knowledge management. He is a renowned expert in the management of IT projects and a professional academician in the Middle East.


Index

A

abstract model interface 78, 79
accelerated simulation method (ASM) 143, 144, 145, 146, 147, 151, 153, 154, 155, 156, 159, 162, 176
activity ratios 501, 502
adaptive neuro-fuzzy inference system (ANFIS) 359, 370, 373, 378
additive white Gaussian noise (AWGN) 143, 144, 147, 150, 155, 156, 162, 173
adjacency preserving task 225, 226
Advanced Continuous Simulation Language (ACSL) 429
agent-based modeling 28, 29, 56, 57
aggregation relationships 515
agricultural production systems simulator (APSIM) 16, 20, 21, 25, 26
algorithm animation 234, 236, 237, 238, 240, 241, 244, 246, 248, 249, 250, 251
algorithm simulation 234, 235, 236, 237, 239, 243, 245, 246, 247, 249, 250, 251
algorithm simulation exercises 234, 236, 237, 243, 245, 250, 251
algorithm visualization 234, 237, 238, 245, 249, 251
Analytica 24, 26
analytic model 93
ANOVA 110, 111, 113, 115, 116, 124, 137
AnyLogic 23, 24
AP2-Earth 513
application domain electives 4
ARCS 511
area of influence 198
Arena 450
astrophysics 15, 16, 21
automatic assessment 236, 245, 249, 250, 251
AVL trees 236
AweSim 510
axon 338, 340, 357, 358

B

B Activity 534
band-pass filters (BPF) 147, 162
base-level reification 198, 199, 200, 201, 202, 203, 206, 207, 208, 211, 228, 229
behavioural realism 253, 258, 282
binary search tree (BST) 236
bit error ratio (BER) 143, 144, 145, 146, 154, 155, 156, 161, 162, 173
blanked test 106
body of knowledge 13
Bologna Process, the 2, 12, 14
Bremermann's Limit 41, 42
Bridge Analyzer Module 521, 522, 523, 530
Buffer object 289, 290, 291, 292, 297, 298, 299, 300, 308, 316
Building Information Modeling (BIM) 529, 534
business intelligence 360, 369, 371, 376, 378
butterfly effect 41
bypassable 220, 222, 223, 225, 226, 227, 228, 229, 230

C

C++ 75, 76, 78, 80, 81, 83
cable-stayed bridges 511, 530, 531

Copyright © 2010, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.

C Activity 534
Call of Duty 17, 24, 25, 26
cantilever carriage method 528
capstone 4, 7
career tracks 4, 7
cash flow 499
cast-in-place on falsework method 524, 526, 528
causal loop diagram (CLD) 487, 488, 489, 490, 492, 493, 495, 507
causally connected 192, 198, 206, 219, 221, 223, 228, 229, 230
causal structure 491
cell assembly theory 29
cellular automata 33, 57
center points addition 111
chaos 28, 32, 33, 34, 35, 56
chaos theory 34, 37
ChemLab 16, 18, 19, 25
ChemViz 16, 18, 19, 25
Church-Turing hypothesis 30
CIPROS 511
classical mechanics 30
closed-loop thinking 484
clouds 507
code division multiple access (CDMA) 144, 163
coding performance 159
coloured Petri nets (CPN) 180, 184, 185, 186, 187, 188
COMBI 511
commodity production cycle 485
common reference model 336
complex adaptive systems (CAS) 35, 36, 37, 57
complex programmable logic device (CPLD) 179
composability 318, 321, 322, 330, 336
computer science 4, 7, 8, 20
computer simulation 509, 511, 512, 514, 519, 529, 530, 531, 532, 533
computer simulation language 428
conceptual data types (CDT) 239, 242
conceptual model 58, 61, 63, 64, 65, 71, 72, 73, 74
conceptual modeling 324, 336
conceptual model validity 58
confidence interval 123, 125, 135, 139, 142
constraints and logic rules 444
continuous simulation 337, 342, 343, 345, 346, 355, 357, 358
continuous-valued simulation 422, 423, 441
CORBA 81, 84, 86, 87
core courses 7
cosmic-ray shower (CRY) 16, 19, 25, 26
Cosmology 15, 16, 17, 21, 25
COUNTER 511
coupled model 79, 80, 81, 84, 88
CPDF 153, 163, 170, 174, 175, 176
credit risk 484, 506
critical path method (CPM) 514
CRUISER 513
Crystal Ball 24
CSD 513
curriculum design 1, 3
customer delay (CD) 512
cybernetics 28, 33, 57
cyclic evaluation 72
cyclic operation 511
CYCLic Operation Network 511

D

Daniel method 111
Darwinian evolution 30
data administration 336
data alignment 336
data management 336, 360
data mining 318, 337, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 370, 371, 372, 373, 375, 376, 377, 378
data transformation 336
data validity 58
data warehouse 362, 366, 367, 368, 369, 370, 371, 373, 378
debt ratios 501, 503
DecisionScript 24
decision support simulation systems 318, 325, 326, 329, 332, 335
decision support system (DSS) 335, 359, 360, 366, 373, 378
decision variables 444


dendrite 338, 357, 358
Design Expert 123, 124
development process 58, 59, 60, 61, 68, 71, 72
DEVSJAVA 78, 79, 80, 81, 83, 84, 86
diagrammatic representations 487
directed network graphs (DNG) 222
direct manipulation 235, 237, 238, 239, 245, 250, 251
discrete channel model (DCM) 145, 146, 147, 150, 151, 156, 160, 163, 167, 168, 169
discrete event simulation (DES) 284, 285, 319, 337, 342, 343, 344, 345, 358, 364, 420, 422, 423, 443, 444, 445, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 469, 471, 474, 475, 476, 477, 478, 479, 481, 483, 514, 530
discrete event system specification (DESS) 75, 76, 77, 78, 79, 80, 81, 82, 83, 85, 86, 87, 88
disequilibrium dynamics 489
distillations 28
DTSS 75, 76, 78, 83
duality 180, 181
dynamic data structure 514, 530
dynamic hypothesises 490
dynamic testing 65, 74
dynamo 429

E

Earthmoving operations 514
EarthMoving Simulation Program (EMSP) 514, 515, 516, 518, 520, 530
ecological systems 15, 16
Ecosim Pro 16, 20, 21, 25
EcosimPro Language (EL) 429
efficient simulation 160
electrical engineering 4
El-Warrak Bridge 528, 529
emergence 29, 34, 37, 57
emergency care simulator (ECS) 17, 22, 25
empiricism 40, 63
Entity object 287, 290, 291, 292, 293, 294, 300, 301, 302, 303, 305, 308, 310, 314, 315, 316
EONreality 17, 23, 25
European Aviation Safety Agency (EASA) 17
event-driven simulation 423, 424, 425, 426, 427, 442
event validity 63
evolutionary interface 198, 202, 206, 207
evolutionary strategies 196, 198, 211, 215
experimental error 92, 96, 97, 98, 99, 102, 104, 110, 113, 117, 142
experimental-frame processor (ef-p) model 80, 82
experimental status 59

F

face validity 63, 72
facility modeling 513
false-work system 524
Falsificationism 40, 44
Federal Aviation Administration (FAA) 17
federation object model (FOM) 77
feedback loops 489
field programmable gate array (FPGA) 178, 179, 187, 188, 189
finite state machine (FSM) 179
flexible manufacturing systems 230
flight training device (FTD) 17
flooding algorithms 418, 420, 430, 433, 435, 436, 439, 442
Flow Bottleneck / Constraint 483
flow logic 511
flows 485, 487, 488, 489, 490, 492, 499, 500
formalism 191, 202, 209, 213, 214, 215, 217, 233
forward error correction (FEC) 143, 144, 145, 149, 160, 163
free-choice WF-net 222, 224, 229, 230
full factorial design 111
full flight simulator (FFS) 17
fundamental data types (FDT) 239, 240, 242


G

Gaussian noise 143, 144, 153, 159, 161, 162
generalized stochastic Petri nets (GSPN) 193, 195, 196, 200, 206, 209
general purpose simulation (GPS) 510, 511, 529, 533
genus locii 6
goodness-of-fit statistical test 446
GPSS/H 510, 530
graphical user interface (GUI) 380, 381, 382, 385, 386, 400, 417

H

hard-decision decoding 160
HAVELSAN 16, 17, 18, 25
healthcare 443, 444, 445, 446, 447, 460, 476, 478, 479, 482
healthcare systems 443, 446, 447
hierarchical simulation modeling (HSM) 511, 512, 533
high level architecture (HLA) 76, 77, 81, 82, 85, 86, 87
historical information 63, 68
homomorphic 34, 35
human patient simulator (HPS) 17, 22, 25
hybrid simulation 337, 346, 358

I

IEEE 802.11 395, 396, 398, 399, 401, 409, 410, 411, 414, 415, 416, 417
importance sampling 144
incompleteness theorem 40
Industrial Dynamics 485, 506
INET framework 383, 384, 397
information feedback 485
information gathering 66, 67, 70
information modeling 65, 66, 68, 70
information systems 4, 7, 12
information theory 33, 36
information validity 61, 64
inheritance relationships 515
input parameter(s) 511
integratability 321, 335
integration 4, 5, 7, 10, 55, 67
intended applicability 59, 61, 74
intercession 192, 198
interdisciplinary 1, 8, 11
interoperability 318, 321, 322, 327, 328, 330, 331, 332, 335, 336
interpolating curve 99
intra-robot communication 283
introspection 192, 198, 202, 203, 204, 208, 224, 228
ITIM approach 96

K

kinematic action 514
knowledge management 332, 360, 376, 378

L

Lack of Fit test 116, 123, 127, 133, 136
Lag-SIPP 450
"lake of resources" termination 528
LAR-1-probabilistic (LAR-1P) 418, 420, 434, 436, 437, 439
Law of Requisite Variety 35, 37
LiftMagic 17, 23, 25
line of balance (LOB) 514
liquidity ratios 501, 502, 503
livelock 204
location-aided routing scheme 1 (LAR-1) 418, 420, 434, 436, 437, 439
long term evolution (LTE) 144, 163
LOS 464, 465, 466, 467, 468, 469, 474, 475, 476
lower bound 129
Lyapunov exponent 33

M

Machine object 287, 289, 293, 294, 296, 300, 301, 316
machine program 65, 74
management flight simulators 487
Management Science 479, 480, 481, 482
MANET 418, 427, 430, 435, 436, 441, 442
MANSim 418, 420, 430, 431, 433, 434, 435, 436, 439, 442
mathematical modelling 6, 10, 11
Matlab 16, 20, 25
Matrix 234, 236, 237, 238, 239, 240, 241, 243, 245, 246

581
Index

mean square pure error (MSPE) 92, 98, 99, Noisy (error-prone) environment 435
102, 103, 104, 105, 106, 111, 123, nonlinearity 32, 36, 57
127, 132, 142 non-linear phenomena 484
message transfer 511, 517 non-linear relationships 485
meta-program 196, 198, 203, 205, 206, non-linear system 483
207, 208, 215, 224, 225, 226, 227, NORMAL 511
229, 230
Microworld 487 O
mission rehearsal exercise (MRE) 24, 25 objective functions 444
M/M/s model 447, 450 ObjectList 288, 289, 291, 297, 298, 299,
mobile ad hoc network (MANET) 418 301, 308, 310, 311, 314, 315, 316
model accreditation 59 object-orientation 514, 515, 530
model-based data engineering 322, 328, 336 object-oriented modeling 514, 530, 534
model boundary 488, 490, 491, 492 object-oriented system (CIPROS) 511
model curriculum 2, 3, 9, 11, 12 OMNeT++ 379, 380, 381, 382, 383, 384,
model development 5, 20, 58, 59, 60, 61, 385, 386, 388, 389, 390, 391, 395,
66, 68, 70, 71 396, 397
model development process 58, 60, 61, 71 OMPR 418, 420, 434, 436, 437, 439, 440
Modelica (open-standard object-oriented lan- on-the-fly 207, 220, 223, 229
guage for modeling of complex physical open dynamics engine (ODE) 16, 19, 25
systems) 429 operational validity 44, 58, 64
modelling and simulation (M&S) 1, 2, 3, 4, operations research 444, 479, 480, 482
5, 6, 7, 8, 9, 10, 11, 12 OPNET 380, 381, 398, 399, 400, 401,
model verification 28, 58, 61, 65, 68, 72 402, 403, 405, 408, 409, 410, 412,
modular educational approach 3 413, 414, 415, 416, 417
Monte Carlo method 24, 28 optimal multipoint relaying (OMPR) 420
Monte Carlo simulation 92–142, 160 Optsim 24
multiplier effect 36 Organisation for Economic Co-operation and
multistage approach 58 Development (OECD) 5
multistage validation 63 orthogonal frequency-division multiplexing
MyBodyPart 17, 23, 25 (OFDM) 145, 163
N P
NAMD 16, 18, 19, 25 parametric partition 195
National Aviation Authorities (NAA) 17 patient waiting time 443, 464
National Science Foundation (NSF) paving processes 509, 511
12, 23, 26 Petri nets 178, 179, 180, 181, 182, 184,
network interface card (NIC) 383, 386, 417 187, 188, 191, 192, 193, 195, 196,
Network Simulator 439, 442 203, 206, 207, 209, 211, 212, 213,
neuron 337, 338, 339, 340, 341, 342, 343, 214, 215, 216, 218, 219, 220, 222,
344, 345, 346, 347, 348, 349, 350, 3 223, 224, 227, 229, 230, 231, 232,
52, 355, 356, 357, 358 233
Newton’s philosophy 30 Physics Education Technology (PhET) Project
NIST Network Emulation Tool (NIST NET) 16, 19, 25
429 Physics Simulator 16, 19, 25
Noiseless (error-free) environment 435 PILOT 235, 248


pilot training 15, 16, 25
pipelining software production 282
place-invariants 230
Poisson distribution 446, 449, 452
Poisson processes 445, 446, 447, 450
polymorphism 514, 530
practicum 7
precedence diagram method 514
predicate transition 214
predictive validation 63
probabilistic flooding 418, 420, 440
problem entity 61, 63, 64, 65, 74
process function 511
process model 450
process modeling 512, 513
process-oriented simulation 418, 423, 424, 425, 426, 427, 442
process-task level 512
procurement 509, 513, 530
product modeling 513
profitability ratios 502
programming logic 523
program modularity 65
Project CLEA 17, 21, 25
project management tools 4
ProModel 450
proper termination 222
pure flooding 418, 420
pure quadratic curvature test 101, 117, 124, 127, 133

Q
quality assurance (QA) 378, 443, 445, 446, 447, 449, 450, 452, 453, 454, 455, 456, 457, 459, 460, 463
QUEUE 511
queue length (QL) 512
queue wait (QW) 512
queuing analytic theory 443, 478
queuing formulas 446, 447, 448, 450
queuing models 445, 447, 449, 451, 466
queuing theory 443, 444, 445, 477, 479, 480, 481, 482, 483

R
radio propagation channel 158, 159, 160, 165
random node distribution 431, 432
rationalism 63
reconfigurable nets 213
recycling at construction sites 513
recycling effect 36
reductionism 30, 31, 34
reflective chemical abstract machine (RCHAM) 213
reflective framework 198, 200, 202, 206, 207
reflective Petri nets 191, 192, 193, 196, 207, 209, 215, 218, 219, 220, 223, 227, 231, 233
reflective tower 192, 215
regression model 120, 121, 122, 124, 136
reification 192, 198, 199, 200, 201, 202, 203, 205, 206, 207, 208, 209, 211, 212, 219, 223, 224, 226, 227, 228, 229
reinforcing loop 492, 494
replication 510, 515, 516
resource-based modeling (RBM) 512
resource library 512, 513
response surface methodology 93, 107, 137, 141, 142
restart transitions 195
retraining programs 11
risk analysts 484
rockfill dam 518
r-processes 512
runtime infrastructure (RTI) 77

S
“Saint-Marguerite” river 518
scalable vector graphics (SVG) 241
scaled prediction variance 122
SD Model 486
SEASAM 16, 20, 21, 25
Segmental construction 519
self-modifying nets 213
server quantity (SQ) 512
server utilization (SU) 512
service oriented architecture (SOA) 83, 86
shared abstract model (SAM) 75, 76, 78, 79, 80, 81, 82, 83, 85, 88
shift-down action 192, 198, 200, 205, 206, 208, 219


shift-up action 192, 198, 206, 219
SimApp 429
Simbad 16, 19, 25, 27
SimEarth 514, 518, 529, 530
SimMan 17, 22, 25
Simphony 513, 531
Simplified Discrete Event Simulation Approach (SDESA) 513
Simula8 450
simulation 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 521, 522, 523, 524, 526, 528, 529, 530, 531, 532, 533, 534
simulation experiments 66, 67
simulationist 2, 11, 42, 45, 49, 51, 52
simulation language 65, 425, 428, 429, 442
Simulation Language for Alternative Modeling (SLAM) 429
simulation model 5, 29, 43, 57, 59, 60, 61, 62, 63, 65, 66, 67, 68, 70, 71, 72
simulation object model (SOM) 77
simulation package 483
simulation tailoring 71
simulation validity 38, 39, 42, 44
simulator interface 507
simulators 15, 16, 17, 18, 19, 20, 21, 22, 24, 25, 26, 484, 491, 499, 500, 503, 504, 505, 507
single frequency distributions 99
Sink object 289, 290, 291, 295, 301, 316
slab units 511
soft-decision decoding 144, 153, 160
software visualization 247, 248, 250, 251
soundness 218, 221, 222, 223, 230, 231
source object 289, 290, 292, 293, 295, 296, 297, 298, 299, 305, 310, 316
special purpose simulation (SPS) 510, 513, 529, 533
SpikeStream 337, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358
spike time dependent plasticity (STDP) 339, 345, 349, 358
static testing 65, 74
stationary independent period-by-period (SIPP) 449, 450, 456, 480
steady-state simulation 422, 427, 442
steady-state time period 449
STELLA 16, 20, 25
stereoscopic view 283
stochastic activity schedule 513
stochastic simulation 422, 442
stochastic well-formed net (SWN) 193, 195, 196, 200, 206, 209, 211, 212
stock and flows diagram 488, 490, 492
stocks 488, 489, 490, 492, 496, 498, 503, 507
stocks and flows 488, 496
stratified nets 213
STROBOSCOPE 510, 512, 523, 524, 531
sum squares error (SSE) 96, 97, 110, 111, 116
superposition 207
superset 226
symbolic marking (SM) 195, 196
symbolic reachability graph (SRG) 195
synapse 338, 339, 342, 344, 345, 347, 348, 349, 350, 352, 353, 357, 358
synchronicity loss 514
system dynamics 484, 485, 486, 487, 488, 489, 490, 491, 492, 505, 506, 507, 508, 513
system dynamics methodology 484
system dynamics models 486
system-engineering principles for healthcare 444
systems thinking 484, 508

T
tailoring 71, 72
target functions 92, 93, 96, 98, 99, 105, 106, 107, 108
TaskList 285, 287, 308, 312, 313, 314, 316
TeamBots 16, 19, 25, 26
terminating simulation 422, 431
Three Body Problem 30
three-phase simulation 514
Tomlinson, Cercas, Hughes (TCH) codes 143, 144, 145, 153, 154, 155, 156, 157, 158, 159, 163
trace-driven simulation 422, 427
transition firing 182, 184, 185, 187


U
uncertainty 513, 514, 530
universal mobile telecommunications system (UMTS) 144, 163
univocal matrix 93
upper bound 129
utilitarian target 107, 108

V
validation 10, 28, 29, 39, 43, 44, 45, 46, 48, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 70, 71, 72, 73
valorisation 106
variability 443, 444, 445, 446, 448, 452, 454, 457, 458, 459, 462, 463, 464, 469, 470, 472, 476, 478, 482
variability field 98, 99, 108
variables, parameters and constants 444
verification 28, 29, 41, 53, 54, 57, 58, 59, 60, 61, 62, 65, 66, 68, 70, 71, 72, 73
verification and validation triangle 58, 59, 68, 71, 72
Virtlab 16, 18, 25
virtual reality 15, 16, 17, 19, 25, 26, 252, 253, 254, 257, 281, 282
virtual world 487
visual algorithm simulation 234, 236, 237, 243, 249, 250, 251
visual debugging 234, 235

W
waiting pools 181
warm-up period 95
waste generation rates 513
“What If” scenarios 484
wireless communication system (WCS) 143, 145, 146, 147, 148, 149, 150, 151, 156, 160, 161, 163, 167, 173
work breakdown structure 511
workflow management system (WMS) 218, 219, 220, 222, 230, 231
workflow verification 230
World of Warcraft 17, 24, 25, 26
WorldViz 25

Z
Zeitgeist 6
zero flight time (ZFT) 17
