Information Sciences 124 (2000) 173–192

www.elsevier.com/locate/ins

Applied system simulation: a review study


Balqies Sadoun *

College of Engineering, Applied Science University, Amman 11196, Jordan

* Correspondence address: 798 Howard Street, Teaneck, NJ 07764, USA. Fax: +1-201-837-8112. E-mail address: obaidat@monmouth.edu or b.sadoun@go.com.jo (B. Sadoun).

Received 3 June 1998; accepted 15 March 1999
Communicated by Ahmed Elmagarmid

Abstract
This paper is a tutorial study of applied system simulation. We present the applications of system simulation and the main simulation types, and elaborate on discrete-event simulation (DES). We then explain the level of detail needed in a simulation model, examine the significance of simulation experiments, and describe in detail the standard methodology used in the development of a simulation model. The steps include problem formulation and planning, system abstraction, resource estimation, system analysis, verification and validation, and implementation. Finally, we present a detailed case study using the SIMSCRIPT II.5 simulation language. © 2000 Elsevier Science Inc. All rights reserved.

Keywords: System simulation; Discrete-event simulation; Trace-driven simulation; Applications of simulation

1. Introduction

The term simulate means to imitate or mimic, while a model is a representation of some existing object or system. A simulation is the imitation of the operation of a system or process over time. Since mimicking and modeling can be traced back to the beginning of civilization, the history of simulation and modeling is very old. With the advances in computer and telecommunication technology, the art and science of modeling and simulation have experienced a remarkable transformation [1–11].
The behavior of a system as it evolves over time can be studied using a simulation model. Such a model takes the form of a set of assumptions concerning the operation of the system under study. These assumptions are expressed as mathematical, logical and symbolic relationships between the entities (objects) of the system. Once the model has been developed, it is verified and validated; it can then be used to run simulation experiments that investigate a wide variety of questions about the behavior of the system.
Simulation of systems can be used for the following reasons:
1. It can be used to experiment with a new design or scheme before implementing it.
2. It enables the study of the internal interactions of a complex system, or of a subsystem within a complex system.
3. It provides the analyst with a tool to conduct experiments that cannot be carried out on the real system, or whose execution on the real system could be catastrophic.
4. Organizational and environmental changes can be simulated and the effect of these changes on the model's behavior can be observed.
5. It can be used as a tool to validate analytic results.
6. Simulation provides a flexible means to experiment with the system or its design. Such experiments can reveal and predict valuable information for the designer, user, manager and purchaser.
7. Simulation is a cost-effective tool for capacity planning and tuning of systems or subsystems.
Among the advantages of simulation are:
1. Flexibility: it permits controlled experiments.
2. Speed: it permits time compression, i.e., the operation of a system over an extended period can be studied in a short run.
3. It permits sensitivity analysis.
4. There is no need to disturb the real system.
5. It is a good training tool.
However, simulation has some disadvantages:
1. It may become expensive in terms of computer time and manpower.
2. Hidden critical assumptions may affect the credibility of the simulation outputs.
3. It may require extensive development time.
4. Initializing the model's parameters may be difficult.
There are different types of simulation, which can be categorized on the basis of the nature of the system under study, the goals of the simulation, and the availability of facilities and tools. This paper covers key ideas underlying any type of simulation. The components of a simulation model, along with case studies, are also presented.

2. Simulation types

There are several types of simulation; the main ones are Monte Carlo, trace-driven, instruction-execution-driven and discrete-event simulation. A brief description of each is given below [1–8]:
(a) Monte Carlo simulation: This is a static simulation technique that has no time axis. The name refers to the casino at Monte Carlo and was coined during World War II, when this simulation approach was applied to problems related to the atomic bomb. Monte Carlo simulation employs random numbers and random variates in situations where the passage of time is not essential. It is used to model probabilistic phenomena whose characteristics do not change with time. Monte Carlo simulation can also be used to evaluate nonprobabilistic expressions using probabilistic techniques, and it has been applied to estimate the critical values or the power of a hypothesis test. One important application of Monte Carlo simulation is the determination of the critical values of the Kolmogorov–Smirnov test for normality, as sketched below.
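As a concrete illustration of this last application, the following is a minimal sketch in Python (not from the paper) that estimates such a critical value by repeated sampling. The sample size, number of replications and significance level are illustrative choices, and the test is made against a fully specified N(0, 1) distribution.

import numpy as np
from scipy import stats

def ks_critical_value(n=20, reps=10_000, alpha=0.05, seed=1):
    """Estimate the critical value of the one-sample KS statistic under H0."""
    rng = np.random.default_rng(seed)
    d_values = np.empty(reps)
    for i in range(reps):
        sample = rng.standard_normal(n)          # data drawn under H0: N(0, 1)
        d_values[i] = stats.kstest(sample, "norm").statistic
    # The (1 - alpha) quantile of the simulated statistics is the critical value.
    return np.quantile(d_values, 1.0 - alpha)

if __name__ == "__main__":
    print(f"Estimated 5% KS critical value for n=20: {ks_critical_value():.4f}")

The estimated value can then be compared with the statistic computed from an observed sample to decide whether normality is rejected.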
(b) Trace-driven simulation: A trace is a time-ordered record of events collected by running an application program, or part of it, on the real system. Weather prediction, prediction of environmental phenomena, cache memory analysis, deadlock prevention algorithms, paging algorithms and virtual memory algorithms are all examples of cases where trace-driven simulation is used. Using traces as input to the simulator gives better credibility to the simulation results. Different types of traces are used depending on the system under study: temperature, pressure and altitude traces can drive simulators for environmental engineering and planning, while instruction, address and input/output (I/O) traces can drive pipelined-computer models, cache memory models and I/O subsystem models, respectively.
Among the advantages of trace-driven simulation are [4]:
1. Results of trace-driven simulation are more credible than those of other types of simulation.
2. Validation of trace-driven simulation results is easy, since during the process of trace collection one can also measure performance characteristics of the system.
3. The output of a trace-driven simulation model has less variance; therefore fewer repetitions are required to reach the desired confidence level.
4. A trace-driven model is very similar to the system being modeled. Therefore, when implementing the model, one gets a feel for the complexity of implementing a proposed algorithm or protocol.
5. Trace-driven simulation provides a high level of detail in the workload, which makes it easy to study the effect of small changes in the model.

6. Since a trace preserves the correlation effects in the workload, no simplifications are required such as those needed to obtain an analytic model of the workload.
The major drawbacks of trace-driven simulation are [1–8]:
1. It is complex, since it requires a more detailed simulation of the system. In some cases the model complexity overshadows the algorithm being simulated.
2. Traces become obsolete faster than other kinds of workload models.
3. Each trace provides only one point of validation; to validate the results one should use different traces.
4. Representative traces tend to be very long and consume a lot of simulation time.
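To make the idea concrete, the following is a small illustrative sketch in Python (not from the paper) of a trace-driven model: a direct-mapped cache driven by a memory-address trace. The trace, block size and number of sets are invented placeholders; in practice the trace would be collected from a real program and read from a file.

BLOCK_SIZE = 16      # bytes per cache block (illustrative)
NUM_SETS = 8         # number of cache sets (direct-mapped: one block per set)

def simulate_cache(trace):
    """Replay an address trace through a direct-mapped cache; return the hit ratio."""
    tags = [None] * NUM_SETS           # cache state: stored tag per set
    hits = 0
    for addr in trace:
        block = addr // BLOCK_SIZE
        index = block % NUM_SETS
        tag = block // NUM_SETS
        if tags[index] == tag:
            hits += 1                  # block already resident: hit
        else:
            tags[index] = tag          # miss: fetch block, replace resident tag
    return hits / len(trace)

if __name__ == "__main__":
    trace = [0, 4, 8, 16, 0, 4, 128, 132, 0, 16, 256, 0]   # toy address trace
    print(f"Hit ratio: {simulate_cache(trace):.2f}")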
(c) Instruction-execution-driven simulation: This type of simulation executes assembly-language instructions and is used in the design of computer systems. It is slower than the trace-driven technique.
(d) Discrete-event simulation: Discrete-event simulation represents systems that change state at discrete points in time, as opposed to continuous simulation, in which the state changes continuously over time. Discrete-event simulation has been used heavily to model computer systems, since it models the system as it evolves over time using a representation in which the state variables (which define the state of the system) change only at a countable number of points in time; these are the points at which events occur. It is important to remember that the terms continuous and discrete refer to changes in the state of the system being simulated and not to the management of time: time can be modeled as either continuous or discrete in both types of simulation. Some models, as well as some languages, combine both concepts; in such cases we speak of combined (hybrid) simulation [1–8]. Discrete-event simulation describes a system in terms of logical relationships that cause changes of state, represented by changes in state variables, at discrete points in time. State variables identify the state of a system through items such as the number of objects waiting for service and the number being served.
In the remainder of this paper we concentrate on discrete-event simulation modeling.

3. Discrete-event simulation methodology

There are two basic approaches to discrete-event simulation: the event-scheduling approach and the process approach. In the event-scheduling approach, an event is an instantaneous occurrence that may change the state of the system. The event list of a model is a list or data structure of event records; each record contains the occurrence time of a specific event. The timing routine is a procedure that determines and removes the most imminent event record from the event list and then advances the simulation clock to the time when the corresponding event is to occur. The event routine is a procedure that updates the state of the system when a particular type of event occurs [4,5].
In the event-scheduling type of discrete-event simulation, the simulation process begins by placing one or more initial event records in the event list. The timing routine is then called to find the most imminent event in the list. This imminent event record is taken from the event list and the simulation clock is updated to the time of this event. Control is then passed to the event routine corresponding to the event, and the state of the system is updated accordingly. This procedure continues until a prespecified stopping rule has been satisfied. One stopping criterion is to run the simulation until a specified amount of simulation time has elapsed.
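The mechanism can be sketched in a few lines of Python (an illustration, not from the paper): event records are kept in a priority queue ordered by occurrence time, and the timing routine repeatedly removes the most imminent record and advances the clock. The event types and times used here are placeholders.

import heapq

event_list = []        # heap of (event_time, event_type) records
clock = 0.0

def schedule(time, event_type):
    heapq.heappush(event_list, (time, event_type))

def timing_routine():
    """Remove the most imminent event record and advance the simulation clock."""
    global clock
    time, event_type = heapq.heappop(event_list)
    clock = time
    return event_type

# Place initial event records, then run until a specified time has elapsed.
schedule(1.0, "arrival")
schedule(4.0, "arrival")
STOP_TIME = 10.0
while event_list and event_list[0][0] <= STOP_TIME:
    handled = timing_routine()
    # Event routine: here it only reports; a real model would update the state
    # and schedule future events.
    print(f"t={clock:.1f}: handling {handled}")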
The other discrete-event simulation approach is the process approach. A process is a time-ordered sequence of interrelated events, and a discrete-event simulation model of the system under study may contain several different types of processes. In the process approach, simulation starts by placing one or more initial process notices in the event list. Each notice corresponds to a realization of a process entity and contains an activation time, which is the time when the notice is to be removed from the event list. The timing routine is called to determine the process notice with the smallest activation time; that process notice is removed from the event list, the simulation clock is updated, and control is passed to the process routine corresponding to this type of process notice. When a process must wait (for example, for a service to complete), its process notice is placed back into the event list with an activation time equal to the current value of the simulation clock plus the length of the simulated delay. Control then returns to the timing routine, which determines the process notice in the event list that now has the most imminent activation time; that notice is removed, the simulation clock is updated to its activation time, and control is passed to the appropriate process procedure.
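In modern tooling the process approach is often expressed directly as code in which each process is a coroutine that suspends itself on time delays and resource requests. The following brief sketch uses the Python library SimPy, which is not the tooling used in this paper; the arrival and service rates are illustrative.

import random
import simpy

def customer(env, name, server):
    print(f"{env.now:6.2f}: {name} arrives")
    with server.request() as req:          # join the queue for the server
        yield req                          # reactivated when the server is free
        service_time = random.expovariate(1.0)     # illustrative service rate
        yield env.timeout(service_time)    # process suspends for the service time
    print(f"{env.now:6.2f}: {name} departs")

def source(env, server):
    for i in range(5):
        env.process(customer(env, f"customer-{i}", server))
        yield env.timeout(random.expovariate(0.5))   # illustrative interarrival rate

random.seed(42)
env = simpy.Environment()
server = simpy.Resource(env, capacity=1)
env.process(source(env, server))
env.run(until=20)                          # time-based stopping rule

Each yield statement corresponds to placing the process notice back into the event list with a new activation time.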
Fig. 1 shows the overall simulation methodology. In order to simulate a system we need to observe the operation of the system, formulate hypotheses that account for its behavior, predict the future behavior of the system based on those assumptions and hypotheses, and compare the predicted behavior with the real behavior. As shown in Fig. 1, the methodology consists of four major steps: (1) the pre-modeling or planning step, (2) the modeling step, (3) the validation and verification step and (4) the experimentation and application step. Section 6 elaborates on these steps in detail.
Fig. 1. Overall simulation methodology.

We now list the main components of a discrete-event simulation model [4]:
1. Simulation clock: gives the current value of simulated time.
2. System state: a collection of state variables that define the system at any instant of time.
3. Event list: a list that contains the sequence of events and the information related to their occurrence.
4. Timing routine: a procedure that advances the simulation clock to the time when the next event is to occur.
5. Event routines: procedures that update the system state when an event occurs; each event type has an event routine.
6. Library routines: a set of procedures used to generate random observations from distributions.
7. Initialization routine: a procedure that initializes the state of the simulation model at time zero.
8. Statistical counters: variables that store statistical data about system performance.
9. Report (output) generator: a procedure that calculates estimates of the desired performance metrics and produces a report at the end of the simulation.
10. Main program: a procedure that invokes the timing routine to determine the next event and then transfers control to the corresponding event routine to update the state. It also checks for termination and activates the report generator at the end of the simulation run.
Fig. 2 summarizes the logical relationships among these components. As shown in Fig. 2, simulation starts with the execution of the main program, which begins by invoking the initialization routine. The initialization routine resets (initializes) the clock, system state, statistical counters and event list. Next, the main program invokes the timing routine to find the most imminent event. If event i is the next to occur, the simulation clock is advanced to the time at which event i will occur and control is returned to the main program. When the main program invokes event routine i, three types of activities usually occur: updating the system state, scheduling the occurrence times of future events, and gathering information about system performance. A check is then made to determine whether the simulation should be terminated. If so, the report generator is invoked by the main program to compute the performance metrics and print the report; otherwise control is passed back to the main program and the previous cycle of events is repeated until the stopping criterion is satisfied. The sketch below assembles these components into a small working example.
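The following compact sketch in Python (not from the paper) assembles the components listed above (simulation clock, system state, event list, timing routine, event routines, a library routine for random variates, an initialization routine, statistical counters, a report generator and a main loop) into an event-scheduling model of a single-server queue. The arrival rate, service rate and run length are illustrative.

import heapq
import random

class MM1Simulation:
    """Event-scheduling model of a single-server (M/M/1) queue."""

    def __init__(self, arrival_rate, service_rate, stop_time, seed=1):
        self.arrival_rate = arrival_rate
        self.service_rate = service_rate
        self.stop_time = stop_time
        random.seed(seed)                  # library routine: random variates
        self.initialize()

    def initialize(self):
        """Initialization routine: reset clock, state, counters and event list."""
        self.clock = 0.0                   # simulation clock
        self.queue_length = 0              # system state: jobs waiting
        self.server_busy = False           # system state: server status
        self.event_list = []               # event list (heap of (time, type))
        self.delays = []                   # statistical counters
        self.arrival_times = []
        self.schedule(random.expovariate(self.arrival_rate), "arrival")

    def schedule(self, delay, event_type):
        heapq.heappush(self.event_list, (self.clock + delay, event_type))

    def timing_routine(self):
        """Remove the most imminent event record and advance the clock."""
        time, event_type = heapq.heappop(self.event_list)
        self.clock = time
        return event_type

    def arrival(self):
        """Event routine for an arrival."""
        self.schedule(random.expovariate(self.arrival_rate), "arrival")
        if self.server_busy:
            self.queue_length += 1
            self.arrival_times.append(self.clock)
        else:
            self.server_busy = True
            self.delays.append(0.0)        # served immediately: zero delay
            self.schedule(random.expovariate(self.service_rate), "departure")

    def departure(self):
        """Event routine for a service completion."""
        if self.queue_length > 0:
            self.queue_length -= 1
            self.delays.append(self.clock - self.arrival_times.pop(0))
            self.schedule(random.expovariate(self.service_rate), "departure")
        else:
            self.server_busy = False

    def report(self):
        """Report generator: estimate the desired performance metric."""
        mean_delay = sum(self.delays) / len(self.delays)
        print(f"Jobs that began service: {len(self.delays)}, mean delay in queue: {mean_delay:.3f}")

    def run(self):
        """Main program: repeatedly invoke the timing and event routines."""
        while self.event_list and self.event_list[0][0] < self.stop_time:
            event_type = self.timing_routine()
            if event_type == "arrival":
                self.arrival()
            else:
                self.departure()
        self.report()

MM1Simulation(arrival_rate=0.8, service_rate=1.0, stop_time=10_000).run()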

4. Simulation level of detail

There are several levels at which a system can be modeled, depending on its characteristics and attributes. The goals of the simulation also determine part of the level of detail. For example, the goals of manufacturing simulation models are to identify problem areas and to quantify system performance measures such as throughput, utilization of resources, queueing at work locations, delays caused by material-handling devices, staffing requirements, effectiveness of scheduling, effectiveness of control systems, and bottlenecks and choke points [2–4].

Fig. 2. Overall flowchart for the next-event time-advance approach.

Another example is the simulation of computer systems. Computer systems can be modeled at different levels, namely the circuit level, gate level, register level and system level. At the circuit level, continuous-time simulation is used to analyze state-switching behavior. In gate-level simulation, circuit components such as transistors, resistors and capacitors are aggregated into single gate elements. At the register-transfer level, gates are aggregated into elements such as multiplexers, decoders, comparators, registers, adders and multipliers. System-level simulation begins at a level somewhat above the register-transfer level. System-level, register-level and gate-level simulations are all different types of discrete-event simulation. Register-transfer and gate-level models are usually developed to analyze the system from the functional viewpoint, while system-level models are developed to analyze the system from the performance viewpoint. System-level simulation usually deals with the elements of the system that affect performance [4,7].
The major components of any simulation model are: (1) a scheme for representing the arrival of new customers (items), (2) a way to represent what happens inside the system being modeled and (3) a method to terminate the simulation. The arrivals of new items (customers or objects) to the system being modeled can be obtained either from an explicit sequence of arrivals or by sampling from a stochastic process each time an arrival occurs, as sketched below. Refs. [1–3] give a good presentation of the stochastic distributions that are often used.
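The two options can be contrasted in a short Python sketch (not from the paper); the recorded arrival times and the arrival rate are invented placeholders.

import random

recorded_arrivals = [0.4, 1.1, 1.7, 3.0, 4.2]      # explicit (trace) sequence

def sampled_arrivals(rate, horizon, seed=3):
    """Generate arrival times up to `horizon` by sampling interarrival gaps."""
    random.seed(seed)
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate)              # exponential interarrival gap
        if t > horizon:
            return times
        times.append(t)

print("trace-driven arrivals :", recorded_arrivals)
print("sampled arrivals      :", [round(t, 2) for t in sampled_arrivals(1.0, 5.0)])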

5. Design of simulation experiments

It is important to conduct simulation experiments, since they reveal many of the characteristics of the system being modeled. An experimental design requires observation of the system under specific combinations of the system variables. Simulation runs are made under different conditions, and inferences are drawn about the relationship between the controllable variables and the measured performance metrics. Before running an experiment, the strategic and tactical plans should be reviewed thoroughly; such plans may require some initial experimentation to determine the validity of the tactical plan. The extent of experimentation depends on the sensitivity of the performance metrics to the variables, the extent of the interdependencies between variables, and the cost of estimating the metrics. It is important to articulate the experiments to be performed as early as the goal-setting phase of the simulation. There are different techniques for controlling the experiments, replicating runs, and resetting random-number sequences so that alternatives are compared under identical conditions; a sketch is given below.
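The following brief Python sketch (not from the paper) illustrates one tactical element: replicating runs with reproducible seeds, reusing the same seed stream for two alternatives so that they are compared under identical conditions, and summarizing each alternative with a confidence interval. The two "policies" and their performance model are hypothetical stand-ins.

import random
import statistics

def one_run(policy, seed):
    """Placeholder model: returns a noisy performance metric for a policy."""
    random.seed(seed)                      # reset the random-number sequence
    base = 10.0 if policy == "A" else 9.5
    return base + random.gauss(0.0, 1.0)

def replicate(policy, reps=30):
    results = [one_run(policy, seed) for seed in range(reps)]
    mean = statistics.mean(results)
    half_width = 1.96 * statistics.stdev(results) / (len(results) ** 0.5)
    return mean, half_width

for policy in ("A", "B"):
    mean, hw = replicate(policy)           # same seeds 0..29 for both policies
    print(f"policy {policy}: {mean:.2f} +/- {hw:.2f} (95% CI)")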

6. Simulation model development

The first step in model development is describing the system operation from a performance point of view. The description is then abstracted, according to the objectives of the simulation, into a model description. Such a description specifies the facilities to be represented by the model and their relationships, the workload to be represented and its attributes, and the tasks needed to accomplish this work.
The major tasks required to model a system are explained next. The flowchart in Fig. 1 shows these tasks and their interrelationships.
(a) Problem formulation and planning: This phase consists of problem definition, resource estimation, and system and data analysis. Problem definition includes the objectives and goals of the simulation, assessment of the available time frame, understanding the operation of the system from a performance viewpoint, analysis of assumptions, and a preliminary plan for how to accomplish the analysis. It is important to mention that simulation models can provide not only mean performance metrics, but also extremes and variances. This is vital, since system design is frequently concerned with variations and extremes. Among the performance measures (metrics) that can be used are throughput, response time, resource utilization, efficiency, bandwidth, reliability, availability and cost. One major task of problem definition and formulation is that of establishing system boundaries. System boundaries deal with establishing assumptions and the extent of the system to be modeled.
(b) System abstraction: A model is an abstraction of the system. Models are frequently described in terms of the technique used to obtain the performance metrics. A model description of a system usually takes the form of a diagram that shows the hardware and software resources and their interconnection, accompanied by the operations involved and explanatory notes regarding the assumptions. For large systems, multiple levels of diagrams may be used to show the configuration, with flowcharts and pseudocode used to describe the processing operations. The style of abstraction depends on the modeler's background and analysis orientation. Although there are no formal techniques for system abstraction, we can employ either the synthesis or the decomposition technique. In the synthesis technique, we may need several steps, each creating a higher level of description; when the desired level of detail is reached, we must review the assumptions and assess their impact on the results. The decomposition approach is the reverse of synthesis: the system under study is initially viewed as one entity, the work is decomposed into activities, and the system is split into the set of resources used by these activities. This procedure is repeated until the desired level is reached. The major advantages of the iterative decomposition approach are the ease of detecting cause-effect relationships in high-level models and the ease of verification.
(c) Resource estimation: The resources required to collect data and analyze the system, such as money, personnel, time and special equipment, should be considered. If some of the resources are not available, then the problem definition may need to be modified. It is better to modify the objectives of the simulation at an early phase than to fall short later due to the unavailability of crucial resources.
(d) System analysis: This phase is concerned with familiarizing the analyst with all relevant aspects of the problem to be investigated and analyzed. It is important that the analyst understands the system very well. A literature search for previous approaches to similar problems may prove valuable, and insight into aspects of a problem can be obtained from experienced and qualified people. Many projects have failed due to an inadequate understanding of the problem; the analyst should bear in mind that a problem well defined is a problem half solved.
Parameters and variables must be identified with respect to initial conditions and the availability of sources. Data on the behavior of the uncontrollable parameters and variables can be obtained from theoretical and historical methods. The level of detail involved in an analysis should be decided by the project team; as the project proceeds, additional information will allow iterative refinement of the approach.
Simulation models exhibit common characteristics, which include:
1. A method to represent the arrival of new items (objects).
2. A method to represent the state of objects within the modeled system.
3. A method to terminate (finish) the simulation.
The characteristics of the model should be representative of the characteristics of the real system to be modeled. The arrival of new items to the system is usually independent of what happens within the system.
The prime concern in modeling is the representation of what happens to the arrivals within the modeled system. It is important to focus on one customer (item) at a time and to describe its interaction with other customers. Termination of the simulation can be done using one of three methods. The first method is to terminate the simulation after a predefined simulation time has elapsed, regardless of what else might be happening in the model. The second technique relies on using traces: the simulation ends when all traces have been completely consumed. The third method is to allow everything in the model to come to rest [1–11]. These three stopping rules are sketched below.
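The three termination methods can be expressed as stopping conditions that a simulation loop checks after each event, as in the following tiny Python sketch (not from the paper); the variable names are hypothetical.

def should_stop(clock, end_time, trace, model_busy):
    if clock >= end_time:                       # 1. predefined simulation time elapsed
        return True
    if trace is not None and len(trace) == 0:   # 2. the input trace is exhausted
        return True
    if not model_busy:                          # 3. everything has come to rest
        return True
    return False

print(should_stop(clock=50.0, end_time=100.0, trace=[0x1F, 0x2B], model_busy=True))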
In order to build a model, the following steps are recommended:
1. Draw block diagrams and flowcharts of the simulation model.
2. Review the model diagrams with other project members.
3. Start data collection.
4. Modify the top-down design; verify and validate the required degree of granularity.
5. Complete data collection.
6. Repeat steps 1–5 until the desired granularity is reached.
7. Finalize the system diagrams.
It is important to determine the scope of the model, that is, to decide which operations, equipment, processes, etc., within the system should be included in the model, and at what degree of detail. The rule of thumb is to ask whether the inclusion of a component in the model significantly affects the results or the stability of the analysis. In considering components for inclusion, we should estimate:
(a) the accuracy required for the analysis;
(b) the accuracy required for the overall model;
(c) the effects of removing the component from the model;
(d) the operational effects of including the component;
(e) the effort and time required to include the component.
The degree of detail associated with a component depends on its effect on the results and on the stability of the analysis. For systems that are so complex that no single representative model can be used, the subsystem (or reduction) approach must be used. In such cases, the system is divided into a collection of less complex subsystems; each subsystem is then modeled, and an overall system model is constructed by combining the submodels appropriately.
In order to identify subsystems we can rely on the following techniques:
1. Flow approach: used to analyze systems characterized by the flow of information items through them.
2. Functional technique: used when a logical sequence of functions being performed must be identified and no directly observable flowing entities exist in the system under study.
3. State-change technique: used for systems characterized by a large number of independent relationships that need to be monitored regularly to detect state changes.
If the system can be subdivided into submodels, then these submodels should be linked into an overall system model. If the system cannot be subdivided into subsystems in a way that makes it easier to comprehend, then the analyst should model the entire system as a single entity.

7. Validation and verification

Verification and validation (V&V) issues are important to simulation, since they affect the accuracy and credibility of the models. Verification is the process of determining that the simulation model accurately represents the developer's conceptual description of the system; it is basically the process of troubleshooting the simulation program. A verified simulation program can in fact represent an invalid model: the program may do exactly what the modeler intended, yet fail to represent the operation of the real system.
Validation refers to the process of determining whether the model is an accurate representation of the real world from the point of view of the intended use of the model. Validation techniques include expert consensus, comparison with historical results, comparison with test data, and peer review. Validation should not be confused with verification. Validation of a model is not an easy task; however, it must be performed before the model is used. We can differentiate between two types of validation: for systems that can be measured and for systems that cannot be measured. In the first category, the system being modeled exists and measurements can therefore be made; the goal of the simulation is typically to predict the effect of a proposed change, and validation is based on a comparison between the model results and the measurements. In the second category, the proposed (modified) system exists only as a design; validation then consists of comparing the results of the simulation model with those of the existing system on which the design is based. If the two sets of results agree, the simulation of the modified system can be assumed to produce valid estimates of the effects of the suggested change [4–8]. A small numerical example of the first type of validation is sketched below.
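As a small numerical example, the following Python sketch (not from the paper) compares the simulated mean waiting time of an M/M/1 queue, obtained via Lindley's recursion, with the analytic result Wq = rho/(mu - lambda). The rates and run length are illustrative.

import random

def simulated_mean_wait(lam, mu, num_jobs=200_000, seed=7):
    random.seed(seed)
    wait, total = 0.0, 0.0
    for _ in range(num_jobs):
        service = random.expovariate(mu)
        interarrival = random.expovariate(lam)
        # Lindley's recursion: waiting time of the next customer.
        wait = max(0.0, wait + service - interarrival)
        total += wait
    return total / num_jobs

lam, mu = 0.8, 1.0
analytic = (lam / mu) / (mu - lam)
simulated = simulated_mean_wait(lam, mu)
print(f"analytic Wq = {analytic:.3f}, simulated Wq = {simulated:.3f}")

Close agreement between the two estimates supports the validity of the simulation model for this measurable case.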

8. Analysis of simulation results

Once the simulation model has been verified and validated, it can be applied to analyze the problem under study. The experiments to be conducted should be defined as early as the goal-setting step; a model without a simulation experiment and detailed analysis is not very useful. Simulation experiments reveal many of the characteristics and much of the behavior of the system under study. Furthermore, experiments save money and effort in predicting the behavior of the system when certain design parameters or factors are changed [9–11]. The following are some of the steps involved in the analysis of simulation results:
(a) Experiment design: For a proper experimental design, the system must be observed under specific combinations of conditions or environments. Simulation runs should be conducted under different operating conditions, and the sensitivity of the performance metrics to the variables should be studied through experiments. The simulationist should have a strategic and tactical plan before starting experimentation with the simulation model.
(b) Data presentation and analysis: The major goal of every simulation analysis is to aid in making decisions regarding design, purchase or capacity planning. It is essential that the analyst conveys the results to the decision makers in a simple and clear manner. This requires the use of plots, pictures and words to explain and discuss the results.
Among the graphic charts that can be used to present data are line charts, pie charts, bar charts and histograms, and there are guidelines to be followed when preparing each of them. In addition to these general charts, a number of charts have been developed specifically for computer performance analysis, among them Kiviat graphs, Gantt charts and Schumacher charts. Graphic charts are usually preferred over textual explanation for presenting data, for the following reasons:
1. Graphic charts save the reader time by presenting data in a concise way.
2. Human beings prefer to look at pictures rather than read text; a picture is worth a thousand words.
3. Charts are a good way to emphasize conclusions and summarize results.
The type of variable dictates which plot or chart should be used; a small plotting sketch is given below.
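A small sketch in Python (not from the paper) of one such chart: a bar chart of device utilizations produced with matplotlib. The utilization values are invented placeholders, not results from this paper.

import matplotlib.pyplot as plt

devices = ["CPU", "Disk", "Tape"]
utilization = [0.62, 0.48, 0.07]           # placeholder output of a simulation run

plt.bar(devices, utilization)
plt.ylabel("Utilization")
plt.ylim(0.0, 1.0)
plt.title("Device utilization (simulated)")
plt.savefig("utilization.png")             # write the chart to a file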
(c) Implementation: This refers to the process of integrating the simulation results into practical decisions and documenting the model. The task of documenting the simulation model is the responsibility of the simulation team, and it should cover all project activities. This is important for future use by others, especially if they are in different locations or environments.

9. A case study

In this section we present a detailed simulation of a queueing system. In a typical queueing system, customers arrive from time to time and join a waiting line (queue); after they are served, they leave the system. The term customer refers to any type of entity that can be viewed as requesting service from a system. Among the types of queueing systems are production systems, repair and maintenance facilities, transport and material-handling systems, and computer and communication systems. Queueing models, whether solved analytically or through simulation, provide the analyst with a powerful and effective tool for designing and evaluating the performance of queueing systems. Typical performance metrics (measures or indices) are server utilization, queue length, waiting time, throughput and response time. These performance measures fall into three categories: (1) higher-is-better (HB) metrics, such as throughput; (2) lower-is-better (LB) metrics, such as waiting time, response time and queue length; and (3) nominal-is-best (NB) metrics, such as utilization, which should preferably lie between 0.5 and 0.75. Queueing theory and simulation are usually used to predict the performance measures of a queueing system as a function of the input parameters. The input parameters include the arrival rate of customers, the distributions of interarrival and service times, the service demand of customers, the service rate, the capacity of the queue (buffer), the number of servers and the service discipline. For a simple queueing system, these performance measures are easily computed analytically, at a great saving in time and expense compared to simulation. For complex real systems, however, simulation is usually used [4].
In this section, we present the simulation of a time-shared computer system that has a single processor (CPU), a disk drive, a tape drive and 15 terminals. The user of each terminal thinks for an amount of time that is an exponentially distributed random variable with a mean of 25 s, after which he sends a job to the computer. The sent job joins a FIFO queue at the CPU. The service times of jobs at the CPU are exponential random variables with a mean of 0.8 s. Upon leaving the CPU, the job is either finished, with probability 0.2 (independent of the system state), in which case it returns to its terminal for another think time; or it requires data from the disk drive, with probability 0.72; or it needs data available on the tape, with probability 0.08. If a job leaving the CPU is sent to the disk drive, it may have to join a FIFO queue until the disk is free. The service time at the disk drive is an exponential random variable with a mean of 1.39 s. At the end of disk service, the job queues up again at the CPU. A job leaving the CPU for the tape drive has an experience similar to that of a disk job, except that the service time at the tape is an exponential random variable with a mean of 12.5 s. Think times and service times are independent, and all terminals are assumed to be in the think state at time zero.
We used SIMSCRIPT II.5 to simulate this system. In the PREAMBLE, JOB is defined to be a process with an attribute JB.TERMINAL, which is basically a pointer to the TERMINAL process notice corresponding to the job. Seconds are defined as the basic unit of time for the simulation. In the MAIN program, routine READ.DATA is used to read and print the input parameters and a heading for the results. Routine INITIALIZE initializes certain variables and the event list. Simulation starts when the START statement is executed; this statement calls the timing routine and begins the execution of a simulation run. After NUM.JOBS.DESIRED jobs have been processed in a simulation run, routine REPORT is called. The system is simulated for 1000 jobs. Fig. 3 shows a simplified diagram of the system. The performance metrics of the system are the utilization of all three devices, the average number of jobs in the queue, the mean delay in the queue, and the maximum and mean response times of a job. The model for the system is developed using SIMSCRIPT II.5 [5]. The variables used in the simulation program are shown in Table 1. The process approach of simulation is used: a job is modeled, from the instant it leaves the terminal, by a process called JOB.
Simulation starts by placing one or more initial process notices into the event list. A process notice corresponds to a realization of a process entity and contains an activation time, which is the time when the notice is to be removed from the event list. The timing routine is then called to find the process notice with the smallest activation time. This process notice is removed from the event list, the simulation clock is updated, and control is passed to the process routine that corresponds to this type of process notice. In this example, a process called TERMINAL is used to model the terminal operator that thinks before sending a job to the CPU for execution. After sending a job to the CPU, process TERMINAL suspends itself and is not reactivated until the corresponding job returns to the terminal. Each terminal has a TERMINAL process notice during the simulation. The CPU is modeled by a resource named CPU. The variables used in the model are global, except SERVICE.TIME and START.TIME, which are local variables in process routine JOB. The listings of the PREAMBLE, the MAIN program and the routines (READ.DATA, PROCESS.TERMINAL, PROCESS.JOB and REPORT) are given in Figs. 4–9, respectively, and the simulation results are listed in Fig. 10.

Table 1
Variables used in the model of the time-shared computer system

Variable                Definition
JB.terminal             Attribute of process JOB used to represent the terminal corresponding to a particular job
Avg.number.in.queue     The time-average number of jobs in the queue
Max.response.time       Maximum response time of a job
Max.terminals           Maximum number of terminals
In.num.terminals        Increment in the number of terminals
Mean.service.time       Mean service time
Num.terminals           Number of terminals for a particular simulation
Response.time           Response time of a particular job
Service.time            Service time of a particular job
Start.time              Time that a job leaves its terminal
Util.cpu                Utilization of the CPU
Mean.think.time         Mean think time
Mean.response.time      Mean response time

Fig. 3. A time-shared computer model.
Fig. 4. Listing of the PREAMBLE of the time-shared system model.
Fig. 5. Listing of the MAIN program.
Fig. 6. Listing of the READ.DATA routine.
Fig. 7. Listing of the PROCESS.TERMINAL routine.
Fig. 8. Listing of the PROCESS.JOB routine.
Fig. 9. Listing of the REPORT routine.
Fig. 10. Simulation results of the time-shared computer system model.
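For readers without access to SIMSCRIPT II.5, the following rough sketch expresses the same model in Python using the SimPy library. It follows the parameters given above (15 terminals, mean think time 25 s, CPU 0.8 s, disk 1.39 s, tape 12.5 s, branching probabilities 0.2, 0.72 and 0.08), but it is not the author's program: the stopping rule and the reported statistics are simplified, and the random-number streams differ.

import random
import simpy

NUM_TERMINALS = 15
MEAN_THINK, MEAN_CPU, MEAN_DISK, MEAN_TAPE = 25.0, 0.8, 1.39, 12.5
P_DONE, P_DISK = 0.2, 0.72            # remaining probability (0.08) goes to the tape
NUM_JOBS_DESIRED = 1000

response_times = []                    # statistical counters

def terminal(env, cpu, disk, tape):
    """Terminal operator: think, submit a job, wait for it to return."""
    while True:
        yield env.timeout(random.expovariate(1.0 / MEAN_THINK))   # think time
        start = env.now
        yield env.process(job(env, cpu, disk, tape))              # wait for the job
        response_times.append(env.now - start)
        if len(response_times) >= NUM_JOBS_DESIRED and not done.triggered:
            done.succeed()                                         # stop the run

def job(env, cpu, disk, tape):
    """A job circulates between the CPU and the I/O devices until it finishes."""
    while True:
        with cpu.request() as req:                     # FIFO queue at the CPU
            yield req
            yield env.timeout(random.expovariate(1.0 / MEAN_CPU))
        r = random.random()
        if r < P_DONE:
            return                                     # finished; back to the terminal
        device, mean = (disk, MEAN_DISK) if r < P_DONE + P_DISK else (tape, MEAN_TAPE)
        with device.request() as req:                  # FIFO queue at the I/O device
            yield req
            yield env.timeout(random.expovariate(1.0 / mean))

random.seed(1)
env = simpy.Environment()
done = env.event()
cpu, disk, tape = (simpy.Resource(env, capacity=1) for _ in range(3))
for _ in range(NUM_TERMINALS):
    env.process(terminal(env, cpu, disk, tape))
env.run(until=done)                                    # stop after 1000 completed jobs

print(f"jobs completed      : {len(response_times)}")
print(f"mean response time  : {sum(response_times) / len(response_times):.2f} s")
print(f"max response time   : {max(response_times):.2f} s")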

References

[1] A.M. Law, W.D. Kelton, Simulation Modeling and Analysis, second ed., McGraw-Hill, New York, 1991.
[2] J. Banks, J.S. Carson, B.L. Nelson, Discrete-Event System Simulation, Prentice-Hall, Upper Saddle River, NJ, 1996.
[3] U.W. Pooch, J.A. Wall, Discrete Event Simulation, CRC Press, Boca Raton, 1993.
[4] M.S. Obaidat, Simulation of queueing models in computer systems, in: S. Ozekici (Ed.), Queueing Theory and Applications, Hemisphere, 1990, pp. 111–151.
[5] E.C. Russell, Building Simulation Models with SIMSCRIPT II.5, CACI, 1989.
[6] R. Jain, The Art of Computer Systems Performance Analysis, Wiley, New York, 1991.
[7] M.H. MacDougall, Simulating Computer Systems, MIT Press, Cambridge, 1987.
[8] A.M. Law, C.S. Larmey, An Introduction to Simulation Using SIMSCRIPT II.5, CACI, 1984.
[9] B. Sadoun, A simulation methodology for defining solar access in site planning, SIMULATION 66 (1) (1996) 357–371.
[10] B. Sadoun, A new simulation methodology to estimate losses on urban sites due to wind infiltration and ventilation, Information Sciences 107 (1998) 233–246.
[11] M.S. Obaidat, B. Sadoun, An evaluation simulation study of neural network paradigms for computer users identification, Information Sciences 102 (1–4) (1997) 239–258.
