Validation of Simulation Based Models: A Theoretical Outlook

Morvin Savio Martis


Manipal Institute of Technology, India
oceanmartis@yahoo.com

Abstract: Validation is the least understood part of developing a model. Nevertheless, no model can be accepted unless it has passed the tests of validation, since the procedure of validation is vital for ascertaining the credibility of the model. Validation procedures are usually framework-based and dynamic, but a modeller (researcher) can follow a methodical procedure to authenticate the model. The paper starts with a discussion of the views and burning issues raised by various researchers on model validation and of the foundational terminology involved. It then highlights the methodology and the process of validation adopted, and explores the reasons why models fail. The paper finally focuses on the widely approved validation schemes (both quantitative and qualitative) and techniques in practice, since no one test can determine the credibility and validity of a simulation model; rather, as the model passes more tests (both quantitative and qualitative), confidence in the model increases correspondingly.

Keywords: Validation, simulation, dynamic models, validation schemes, validation process, modelling.

1. Introduction

Validation has been one of the unresolved problems of systems modelling (Mohapatra 1987). This is true for simulation models in general and system dynamics models in particular. System dynamics modelling makes use of computer simulation (packages like Matlab and Stella) to generate the consequences for studying the dynamic behaviour of a system. In contrast, validations of Optimisation Models, Decision Theory or Game Theory are often not questioned, since the solution procedures are elegant and correct. One reason conceptual and simulation models have received more criticism could be the ease with which the models and their overall results can be understood. Another is that the simulation model of any system can only be an approximation of the actual system, no matter how much time is spent on model building. Hence, if the model produced is not a 'close' enough approximation to the actual system, conclusions derived from such a model are likely to be divergent and erroneous, leading to possibly costly decision mistakes being made (Ijeoma et al. 2001).

According to Law and McComas (2001), validation can be done for all simulation models regardless of whether their corresponding systems exist presently or would be built in future. Kleijnen (1999) and Sterman (1984) give insight into the validation of simulation models using statistical techniques, and reasoned that the technique applied would depend on the availability of data on the real system. Contradicting the above authors, some authors have stated that "there is no such thing as an absolutely valid model; credibility of a model can be claimed only for the intended use of the model or simulation and for the prescribed conditions under which the model or simulation has been tested" (DMSO 1996). Sterman (2000) also argues that "validation and verification are impossible; the emphasis should be more on model testing i.e. the process to build confidence that a model is appropriate for the purpose. Some models may be better than others; some models, while not completely valid, possess a greater degree of authenticity than others. Furthermore, all models are, in a sense, wrong because there could always be a counter test to which the model did not conform completely".

Nevertheless, the power of a model or modelling technique is a function of validity, credibility, and generality (Solberg 1992). Hence model validation is not an option but a necessity in a dynamic modelling scenario. Usually the simplest model that expresses a valid relation will be the most powerful; however, there is no single test that would allow modellers to assert that their models have been validated. Rather, the level of confidence in the model increases gradually as the model passes more tests (Forrester and Senge 1980). The relationships of the cost (a similar relationship holds for the amount of time) of performing model validation and of the value of a model to its user, both as functions of model confidence, are shown in Figure 1: as the level of confidence in the model increases, the value of the model increases, and correspondingly the cost of model validation also increases.

Reference this paper as:
Martis, M S (2006) "Validation of Simulation Based Models: A Theoretical Outlook" The Electronic Journal of Business Research Methods Volume 4 Issue 1, pp 39-46, available online at www.ejbrm.com

Figure 1: Value, cost vs. model confidence (Source: Sargent 2003)

Validation cannot be carried out by the modeller (or researcher) alone; communication with the client (or user) plays a large role in building a valid model and establishing its credibility (Carson 1989). Another relevant issue of concern is how much the model output can deviate from the system output while the model still remains valid (Kleindorfer et al. 1998). Since the model created is an approximation of the actual system, some errors and approximations are unavoidable. Model validation thus resides in a decision between the modeller and the client; when both groups are satisfied, the model is considered valid (Goldberg et al. 1990).

A wide range of tests to build confidence in a model has been developed by authors such as Forrester and Senge (1980), Barlas (1989, 1996), Khazanchi (1996) and Saysel and Barlas (2004); a summary is presented under Validation Schemes below.

2. Validation defined

The definitions of validation as stated by different authors are listed below:
- Substantiation that a computerised model within its domain of applicability possesses a satisfactory range of accuracy consistent with the intended application of the model (Sargent 2003).
- Validation is the process of determining that the model on which the simulation is based is an acceptably accurate representation of reality (Giannanasi et al. 2001).
- Validation is the process of establishing confidence in the usefulness of a model (Coyle 1977).
- The process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model (DoD 2002).

3. Viewpoints on validation

The viewpoints on validation are based on modified views of traditional validation techniques. These characteristics of validation are listed below:
- A model should be judged for its usefulness rather than its absolute validity.
- A model cannot have absolute validity, but it should be valid for the purpose for which it is constructed.
- There can be no one test with which model validity can be judged.
- As a model passes the various tests, confidence in the model is enhanced.
- "Failing a test helps to reject a wrong hypothesis, but passing is no guarantee that the model is valid" (Sushil 1993).
- "Quantitative as well as qualitative validity criteria should be given more credence" (Forrester 1961).
- Most of the information from the real system is used to check the consistency of model behaviour.
- Rejecting a model because it fails to reproduce an exact replica of past data is not acceptable.
- Rejecting a model because it fails to predict a specific future event is not acceptable, because social systems operate over wide noise frequencies.

4. Methodology for validation

Validation deals with the assessment of the comparison between 'sufficiently accurate' computational results from the simulation and the actual/hypothetical data from the system. Validation does not specifically address how the simulation model can be changed to improve the agreement between the computational results and the actual data. The fundamental strategy of validation involves identification and quantification of the error and uncertainty in the conceptual/simulation models, quantification of the numerical error in the computational solution, estimation of the simulation uncertainty, and, finally, comparison between the computational results and the actual data. Thus, accuracy is measured in relation to the actual/hypothetical data, our best measure of reality. The strategy does not assume that the actual/hypothetical data are more accurate than the computational results; it only asserts that they are the most faithful reflections of reality for the purposes of validation (AIAA 1998).
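The comparison step can be made concrete with a small sketch. The series below and the choice of RMSE and MAPE as discrepancy measures are illustrative assumptions; the paper prescribes no specific metric.

```python
# A minimal sketch of the comparison between simulated and observed series,
# under invented data; RMSE and MAPE are common choices, not the paper's.
import numpy as np

observed = np.array([102.0, 108.0, 115.0, 125.0, 138.0])   # real-system data
simulated = np.array([100.0, 107.0, 117.0, 124.0, 141.0])  # model output

error = simulated - observed
rmse = float(np.sqrt(np.mean(error ** 2)))              # absolute disagreement
mape = float(np.mean(np.abs(error / observed))) * 100   # relative disagreement, %

print(f"RMSE = {rmse:.2f}, MAPE = {mape:.2f}%")
# Whether this level of disagreement is 'sufficiently accurate' is exactly the
# modeller/client judgement the section describes, not a property of the code.
```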


5. Model validation process

Figure 2 shows the model validation process in a simple form. The 'problem entity' is the system (real or proposed) to be modelled; e.g. the dynamics of integrated Knowledge Management and Human Resource Management can be considered a problem entity (Martis 2004). The 'conceptual model' is the mathematical/verbal representation (influence diagram) of the problem entity developed for a particular study, and the 'computerised model' is the conceptual model implemented on a computer (simulation model). The inferences about the problem entity are obtained by conducting simulations on the computerised model in the experimentation phase.

Figure 2: Model validation process. [Diagram: the problem entity, conceptual model and computerised (simulation) model form a cycle; conceptual model validation links the problem entity to the conceptual model (influence diagram), computerised model verification links the conceptual model to the computerised model, operational validation (credibility) links the computerised model back to the problem entity through experimentation, and data validity underpins all three links.]

There are three steps in deciding whether a simulation is an accurate representation of the actual system considered, namely verification, validation and credibility (Garzia and Garzia 1990). 'Conceptual model validation' is the process of determining that the theories and assumptions underlying the conceptual model are correct and that the model representation of the problem entity is "reasonable" for the intended purpose of the model. 'Computerised model verification' is the process of determining that the model implementation accurately represents the developers' conceptual description of the model and the solution to the model (AIAA 1998). 'Operational validation' is defined as determining that the model's output behaviour has sufficient accuracy for the model's intended purpose over the domain of the model's intended applicability (Sargent 2003); operational validity determines the model's credibility. 'Data validity' is defined as ensuring that the data necessary for model building, model evaluation and conducting the model experiments to solve the problem are adequate and correct (Love and Back 2000).

6. Reasons for failure of models

Some of the reasons why models fail validation tests are enumerated below (a sketch of the second follows this list):
- Model structure: in both the conceptual model and the simulation model, mathematical simplifications might be inadequate for capturing complex dynamics.
- Numerical solution: the solution of the simulation model might differ dramatically from the ideal solution.
- Input values: the proper numerical values of the inputs that describe the scenario for prediction might be known only approximately.
- Observation errors: inaccurate observations of the real system.
- System noise: failure to recognise random changes existent in the system.
- Project management errors: these revolve around project management and related communication issues (Carson 2002).
- Inappropriate simulation software: either too inflexible or too difficult to use (Law 2003).
- Misinterpretation of simulation results.
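The numerical-solution failure mode is easy to demonstrate. A minimal sketch, assuming an invented first-order decay model and step sizes (none of which come from the paper): explicit Euler integration with a coarse step drifts dramatically from the exact solution.

```python
# A minimal sketch of numerical-solution error: explicit Euler integration of
# dx/dt = -k*x compared with the exact solution x0*exp(-k*t).
import math

def euler(k: float, x0: float, t_end: float, dt: float) -> float:
    x = x0
    for _ in range(round(t_end / dt)):
        x += -k * x * dt              # one explicit Euler step
    return x

k, x0, t_end = 2.0, 100.0, 3.0
exact = x0 * math.exp(-k * t_end)
for dt in (0.6, 0.01):
    approx = euler(k, x0, t_end, dt)
    print(f"dt={dt:<5} euler={approx:+9.4f} exact={exact:.4f}")
# With dt=0.6 the numerical solution even changes sign (a negative 'stock'),
# a qualitative error; with dt=0.01 it stays close to the ideal solution.
```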
7. Validation schemes

7.1 Validation scheme as proposed by Forrester and Senge (1980):

This validation criterion is used to validate quantitative as well as qualitative models. The validation scheme is mainly divided into four phases; as the model passes more tests under every phase, confidence in the model increases correspondingly. The validation scheme as proposed by Forrester and Senge (1980) is enumerated below.

7.1.1 Importance of the model objective:

The validity of a model cannot be greater than the objective set for it. Therefore, the model objective must be a justified representation of the values prevalent in the real system. The method of setting model objectives by the conceptualisation of problems in the existent system seems unstructured, unless the problem elicitation is done under the guidance of experts from the various subsystems existent within the system. A model could be proven valid by a series of methods, but the validation may be totally useless if the objectives are wrongly set.

7.1.2 Validating model structure:


These tests help in establishing confidence in the
model structure.


Tests of suitability:

Structure-Verification Test: This test is meant to answer the question "Is the model structure not in contradiction to the knowledge about the structure of the real system, and have the most relevant structures of the real system been modelled?"

Dimensional-Consistency Test: "Do the dimensions of the variables in every equation balance on each side of the equation?" This test verifies whether all equations are dimensionally consistent (an automated check is sketched at the end of this subsection).

Extreme-Conditions Test: "Does every equation in the model make sense even if subjected to extreme but possible values of variables?" Policy equations are scrutinised for their applicability in extreme conditions.

Boundary-Adequacy Test: This test verifies whether the model structure is appropriate for the model purpose (Barlas 1989). "Is the model aggregation appropriate, and does the model include all relevant structure containing the variables and feedback effects necessary to address the problem and suit the purposes of the study?"

Tests of consistency:

Face validity test: "Does the model structure look like the real system? Is it a recognisable representation of the real system? Does a reasonable fit exist between the feedback structure of the model and the essential characteristics of the real system?"

Parameter-Verification Test: Parameters and their numerical values should have real-system equivalents. "Do the parameters correspond conceptually and numerically to real life? Are the parameters recognisable in terms of real systems, or are some parameters contrived to balance the equations? Are the values selected for the parameters consistent with the test information available about the real system?"

Test of utility and effectiveness:

Appropriateness for audience: "Is the size of the model, its simplicity or complexity, and its level of aggregation or richness of detail appropriate for the audience for the study?" The more appropriate a model is for its audience, the stronger the audience's perception of model validity will be.
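The dimensional-consistency test above lends itself to automation. A minimal sketch, assuming the third-party Python library `pint` and an invented stock-and-flow equation (the paper names no tool):

```python
# A minimal sketch of an automated dimensional-consistency check with `pint`;
# the equation and its units are illustrative assumptions.
import pint

ureg = pint.UnitRegistry()

volume = 500 * ureg.liter                      # stock
inflow = 20 * ureg("liter / minute")           # flow
outflow = 15 * ureg("liter / minute")          # flow
dt = 2 * ureg.minute                           # time step

new_volume = volume + (inflow - outflow) * dt  # dimensions balance: litres
print(new_volume)                              # 510 liter

# A dimensionally inconsistent equation fails loudly instead of silently
# producing a number: adding a flow to a stock raises DimensionalityError.
try:
    volume + inflow
except pint.DimensionalityError as err:
    print("inconsistent equation:", err)
```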
7.1.3 Validating model behaviour:

These tests help in establishing confidence in the model behaviour.

Tests of suitability:

Parameter sensitivity test: "Is the behaviour of the model sensitive to reasonable variations in parameter values, i.e. do the modes of the behaviour change with parameter variations?" (A sketch of this test appears at the end of this subsection.)

Structural sensitivity test: "Is the behaviour of the model sensitive to reasonable structural reformulation, i.e. do the modes of the behaviour change with structural variations?"

Tests of consistency:

Behaviour-Reproduction Test: Here the generated model behaviour is judged against the historical behaviour. "How well does the model-generated behaviour match the observed behaviour of the real system in terms of symptom generation, frequency generation, relative phasing, multiple modes, and behaviour characteristics?"

Behaviour-Prediction Test: This test calls for pattern prediction. "Does the model generate qualitatively correct patterns of future behaviour in terms of periods, shape or other characteristics?"

Behaviour-Anomaly Test: Behaviour conflicting with the real system helps in finding obvious flaws in the model. "Does the behaviour shown by the model conflict with the real system behaviour, and does implausible behaviour arise if the assumptions are altered?"

Family member test: Whenever possible, an attempt should be made to build a general model of the class of systems to which a particular member belongs. The general theory is depicted in the structure; parameter values are chosen to depict a particular situation. By choosing a different set of parameter values, the model can be applied to other situations as well.

Surprising behaviour test: "Does the model under some test circumstances produce dramatically unexpected or surprising behaviour not observed in the real system? Can such surprising behaviour be attributed to the model structure, or to some cause in the real system?"

Extreme-Policy Test: "Does the model behave in an expected fashion under extreme policies, even ones that have never been observed in the real system?" If the model behaves in an expected fashion under extreme policies, this boosts confidence in the model (Saysel and Barlas 2004).

Boundary adequacy (behaviour) test: "Does the model include the structures necessary to address the issues for which it is designed?" If an extra model structure does not change the behaviour, then this extra structure is not necessary. Alternatively, if a model structure does not reproduce desired model behaviour, it calls for the inclusion of additional model structure (Barlas 1996).

Behaviour-Sensitivity Test: "Can plausible shifts in parameters cause the model to fail behaviour tests previously passed?" Here the sensitivity of the model to changes in parameter values is judged (Saysel and Barlas 2004).

Statistical tests: "Does the model behave statistically like data from the real system?" (Law and Kelton 2000).

Tests of utility and effectiveness:

Counter-intuitive behaviour: "In response to some policies, does the model exhibit behaviour that at first contradicts intuition and later, with the aid of the model, is seen as a clear implication of the structure of the system?" (Richardson and Pugh 1981). "Is the model capable of generating new insights, or at least the feeling of new insights, about the nature of the problem addressed and the system within which it arises?"
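The parameter sensitivity test that opens this subsection can be sketched in a few lines. The goal-seeking stock model and the parameter range below are invented for illustration; the point is to rerun the model across plausible parameter values and watch for a change in the mode of behaviour, not merely in the numbers.

```python
# A minimal sketch of the parameter sensitivity test, under an assumed
# goal-seeking stock model and an assumed plausible parameter range.
def simulate(adjustment_time: float, steps: int = 200, dt: float = 0.25):
    """Goal-seeking stock: d(stock)/dt = (goal - stock) / adjustment_time."""
    goal, stock = 100.0, 10.0
    path = []
    for _ in range(steps):
        stock += (goal - stock) / adjustment_time * dt
        path.append(stock)
    return path

for at in (1.0, 4.0, 16.0):                # plausible parameter variations
    path = simulate(at)
    mode = "overshoot" if max(path) > 100.0 else "smooth approach"
    print(f"adjustment_time={at:5.1f}  final={path[-1]:7.2f}  mode={mode}")
# Here the behaviour mode is unchanged across the range (always a smooth
# approach to the goal), which is the robustness the test looks for.
```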


7.1.4 Validating policy implications:

Tests of suitability:

Policy sensitivity and robustness test: The sensitivity of a policy with respect to changes in parameter values is judged during this test. "Do the model-based policy recommendations change with reasonable changes in parameter values or reasonable alterations in equation formulations?"

Tests of consistency:

Changed Behaviour Prediction Test: "Does the model correctly predict how the behaviour of the system will change if a governing policy is changed?" (A sketch of this check appears at the end of this subsection.)

Boundary adequacy (policy) test: "Would modifying the model boundary (i.e. conceptualisation of additional structure) alter the policy recommendations arrived at by using the model?"

System Improvement Test: "Do the policies found beneficial after working with a model, when implemented, also improve real system behaviour?"

Test of utility and effectiveness:

Implementable policy test: "Can those responsible for policy in the real system be convinced of the value of model-based policy recommendations? How is the real system likely to respond to the process of implementation?" The policy recommendations should be formulated and argued so as to fit the mental models of those to whom they are addressed.
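The changed behaviour prediction test can be illustrated with a small sketch. The toy workforce model, its parameters and the hiring policy below are hypothetical assumptions, not taken from the paper; the sketch only shows the shape of the check, comparing a baseline run against a run under a changed governing policy.

```python
# A minimal sketch of the changed-behaviour-prediction idea under invented
# assumptions: does the model predict the direction of change in system
# behaviour when the governing (hiring) policy changes?
def workforce(hiring_fraction: float, months: int = 48) -> float:
    """Toy model: the staff gap is closed monthly by a policy-set fraction."""
    target, staff, attrition = 200.0, 120.0, 0.02
    for _ in range(months):
        staff += hiring_fraction * (target - staff) - attrition * staff
    return staff

baseline = workforce(hiring_fraction=0.05)
aggressive = workforce(hiring_fraction=0.15)   # changed governing policy
print(f"baseline={baseline:.1f}, aggressive={aggressive:.1f}")
# The model predicts the aggressive policy moves staffing closer to target;
# the test then compares this predicted change with the real system's response.
assert aggressive > baseline
```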
have "empirical testability" (Hunt 1990).
Tests of consistency: 6. Is it predictive? A conceptual model that is
Changed Behaviour Prediction Test: “Whether the predictive would, at the least, demonstrate
model correctly predicts how behaviour of the that given certain antecedent conditions, the
system will change if a governing policy is corresponding phenomenon was somehow
changed?” expected to occur.
Boundary adequacy (policy) test: “Whether 7. Is it inter-subjectively certifiable? This
modifying the model boundary (i.e. criterion states “Investigators with differing
conceptualisation of additional structure) would philosophical stance must be able to verify
alter policy recommendations arrived by using the the imputed truth content of these concepts
model?” or conceptual structures through
observation, logical evaluation, or
System Improvement Test: “Whether the policies
experimentation (Hunt 1990).
found beneficial after working with a model, when
implemented, also improve real system 8. Is it inter-methodologically certifiable? This
behaviour?” criterion provides that investigators using
different research methodologies must be
able to test the veracity of the concept or
Test of utility and effectiveness: conceptual model and predict the
Implementable policy test: “Can those responsible occurrence of the same phenomenon.
for policy in the real system be convinced of the
value of model-based policy recommendations?
How is the real system likely to respond to the
8. Other validation techniques
process of implementation?” the policy Combinations of these techniques are generally
recommendations should be such formulated and used for validating a simulation model. These
argued so as to fit in the mental models of those tests can be used in addition to the validation
to whom they are addressed. schemes in the preceding section to increase the
credibility of the model.
1. Comparison to other models: Different outputs
of the simulation model being validated are
compared to those of other ‘valid’ models.


8. Other validation techniques

Combinations of these techniques are generally used for validating a simulation model. These tests can be used in addition to the validation schemes in the preceding section to increase the credibility of the model.
1. Comparison to other models: Different outputs of the simulation model being validated are compared to those of other 'valid' models.
2. Degenerate test: This has to do with appropriately selecting values of the input and internal parameters to test the degeneracy of the model's behaviour; for instance, testing whether the average number in the queue of a single server continues to increase with respect to time when the arrival rate is larger than the service rate (Ijeoma et al. 2001). (A sketch of this check appears after this list.)
3. Events validity: The events of occurrences of the simulation model are compared to those of the real system to see if they are similar, e.g. verifying the exit rate of employees.
4. Face validity: This has to do with asking knowledgeable people whether the system model behaviour is reasonable (Forrester 1961).
5. Historical data validation: The experimental data is compared with the historical data to check whether the model behaves in the same way the system does (Balci and Sargent 1982).
6. Predictive validation: The model is used to predict the system's behaviour, and then a comparison is made between the real system behaviour and the model's forecast to determine whether they are the same (Sargent 2003).
7. Schellenberger's criteria: These include technical validation, which has to do with identifying all divergences between the model assumptions and perceived reality as well as the validity of the data used; operational validity, which addresses the question of how important these divergences are; and dynamic validation, which ensures that the model will continue being valid during its lifetime (Ijeoma et al. 2001).
8. Scoring model approach: Scores (or weights) are determined subjectively when conducting various aspects of the validation process and then combined to determine category scores and an overall score for the simulation model. A simulation model is considered valid if its overall and category scores are greater than some passing score(s) (Gass 1993).
9. Clarity: Clarity refers to the extent to which the model clearly communicates the implied causality/linkages.
10. Black-box validation: This test is concerned with determining whether the entire model is an adequately accurate representation of the real world (Ijeoma et al. 2001).
11. Extreme condition test: The model structure and output should be reasonable for any extreme and unlikely combination of values in the system; for example, if in-process inventories are zero, production output should be zero (Sargent 2003).
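The single-server check named in item 2 is easy to express in code. A minimal sketch (the event-stepping M/M/1 implementation and the rates are illustrative assumptions, not the paper's): with the arrival rate above the service rate, the number in the queue should grow without bound, and a model that stabilises instead has failed the degenerate test.

```python
# A minimal sketch of the degenerate test for an M/M/1 queue: with arrivals
# faster than service (utilisation > 1), the queue must keep growing.
import random

def mm1_number_in_system(arrival_rate: float, service_rate: float,
                         horizon: float, seed: int = 42) -> int:
    rng = random.Random(seed)
    t, n = 0.0, 0                        # simulation clock, number in system
    while t < horizon:
        total = arrival_rate + (service_rate if n > 0 else 0.0)
        t += rng.expovariate(total)      # time to the next event
        if rng.random() < arrival_rate / total:
            n += 1                       # arrival
        else:
            n -= 1                       # departure
    return n

# Degenerate setting: arrival rate exceeds service rate.
for horizon in (1000.0, 2000.0, 4000.0):
    n = mm1_number_in_system(arrival_rate=1.2, service_rate=1.0, horizon=horizon)
    print(f"horizon={horizon:6.0f}  number in system={n}")
# The count should keep increasing with the horizon, as queueing theory requires.
```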
9. Conclusions

As rightly stated by DMSO (1996), validation is both an art and a science, requiring creativity and insight. Validation is a convoluted, multifarious and exasperating procedure, but it is unavoidable, as it is the evidence for the steadfastness and legitimacy of the model. Moreover, no single procedure can suit all models. Statistically based validation techniques have been widely accepted among the management community, but the problem associated with this method is determining the suitable type of statistical procedure, which in turn depends on the right type of data being available for analysis. Moreover, the amount of deviation from the real system that is within acceptable limits is uncertain.

The paper has given an insight into the widely approved validation schemes and techniques in practice. The validation schemes are applicable to quantitative (mathematical/computerised) as well as qualitative (conceptual) models, but the reliability of the model can only be ascertained as the model passes more and more tests. Also, the decision to accept a model as valid cannot be left to the modeller alone; the inclusion of the client/practitioners in the validation procedure should be ensured. Researchers and practitioners may find this paper quite useful, as the procedures for validation discussed are quite generic and hence may be applied to other dynamic models as well.

References
AIAA. (1998) “Guide for the Verification and Validation of Computational Fluid Dynamics Simulations”, [online], American
Institute of Aeronautics and Astronautics, AIAA-G-077-1998, www.aiaa.org/store, Reston, VA.
Balci, O. and Sargent, R.G. (1982) "Validation of multivariate response simulation models by using Hotelling's two-sample T² test", Simulation, Vol 39, No. 6, pp185-192.
Barlas, Y. (1989) “Tests of model behavior that can detect structural flaws: demonstrations with simulation experiments”,
Computer Based Management of Complex Systems, P. M. Milling and E. O. K. Zahn. (eds.), Germany, Springer-
Verlag, pp246-254.
Barlas, Y. (1996) “Formal aspects of model validity and validation in system dynamics”, System Dynamics Review, Vol
12, No. 3, pp183-210.


Carson, J.S. (1989) “Verification and Validation: A Consultant’s Perspective”, E. A. MacNair, K. J. Musselman, and P.
Heidelberger (eds), Proc. 1989 Winter Simulation Conf, pp552-557.
Carson, J.S.II (2002) “Model Verification and Validation”, Proceedings of the 2002 Winter Simulation Conference, E.
Yucesan, C. H. Chen, J. L. Snowdon, and J. M. Charnes, (eds.), pp52-58.
Coyle, R.G. (1977) Management System Dynamics, Chichester, John Wiley and Sons.
DMSO (Defense Modeling and Simulation Office). (1996) “The Principles of Verification, Validation, and Accreditation”,
[online], Verification, Validation, and Accreditation Recommended Practices Guide, www.dmso.mil/public, U.S.
Department of Defense, Office of the Director of Defense Research and Engineering, November.
DoD. (2002) DoD Instruction 5000.61: “Modeling and Simulation (MandS) Verification, Validation, and Accreditation
(VVandA)”, [online], Defense Modeling and Simulation Office, Office of the Director of Defense Research and Engr.,
www.dmso.mil/docslib.
Forrester, J.W. and Senge, P. (1980) “Tests for building confidence in System Dynamics Models”, TIMS Studies in the
Management Sciences, Vol 14, pp209-228.
Forrester, J.W. (1961) Industrial Dynamics, Cambridge: MIT Press.
Garzia, R.F. and Garzia, M.R. (1990) Network Modeling, Simulation, and Analysis, Marcel Dekker, NY.
Gass, S.I. (1993) “Model Accreditation: A Rationale and Process for Determining a Numerical Rating”, European Journal
of Operational Research, Vol 66, No. 2, pp250-258.
Giannanasi, F., Lovett, P. and Godwin, A.N. (2001) “Enhancing confidence in discrete event simulations”, Computers in
Industry, Vol 44, pp141-157.
Goldberg, J. Dietrich, R. Chen, J. Valenzuela, T. and Criss, E. (1990) "A Simulation Model for Evaluating a Set of
Emergency Vehicle Base Locations: Development, Validation, and Usage”, Socio-Economic Planning Science, Vol
24, pp25-141.
Hunt, S. (1990) Marketing theory: The philosophy of marketing science, Homewood, IL: Richard D. Irwin, Inc.
Ijeoma, S.I. Andersson, J. and Wall, A. (2001) “Correctness criteria for models’ validation–A philosophical perspective”,
[online], Department of Computer Science and Computer Engineering (IDT), www.mrtc.mdh.se/publications/0731.pdf,
Malardalen University, Sweden.
Kaplan, A. (1964) The Conduct of Inquiry, Scranton, PA: Chandler Publishing Co., pp52-54.
Khazanchi, D. (1996) “A Framework for the Validation of IS Concepts”, Proceedings of the Second Annual Association
for Information Systems Americas Conference, Phoenix, Arizona, August.
Kleijnen, J.P.C. (1999) "Validation of Models: Statistical Techniques and Data Availability", Proceedings of the 1999
Winter Simulation Conference, P. A. Farrington, H. B. Nembhard, D. T. Sturrock and G. W. Evans, (eds.), pp647-654.
Kleindorfer, G.B. O’Neill, L. and Ganeshan, R. (1998) “Validation in Simulation: Various Positions in the Philosophy of
Science”, Management Science, Vol 44, No. 8, August, pp1087-1099.
Law, A.M. (2003) “Model Verification and Validation”, Proceedings of the 2003 Winter Simulation Conference, S. Chick,
P. J. Sanchez, D. Ferrin, and D. J. Morrice, (eds.), pp66-70.
Law, A.M. and Kelton. W.D. (2000) Simulation Modeling and Analysis, Third Edition, McGraw-Hill, New York.
Law, A.M. and McComas, M.G. (2001) “How to Build Valid and Credible Simulation Models”, Proceedings of the 2001
Winter Simulation Conference, B. A. Peters, J. S. Smith, D. J. Medeiros, and M. W. Rohrer, (eds.), pp22-29.
Love, G. and Back, G. (2000) "Model Verification and Validation for Rapidly Developed Simulation Models: Balancing
Cost and Theory", Proceedings of the 18th International Conference of the System Dynamics Society, Bergen,
Norway, August 6-10.
Martis, M. (2004) “System Dynamics of Human Resource and Knowledge Management in Engineering Education”,
Journal of Knowledge Management Practice, Vol. 5, October, http://www.tlainc.com/jkmpv5.htm, ISSN 1705-9232,
TLA: INC, 2004.
Mohapatra, P.K.J. (1987) “Validation of System Dynamics Models”, Orientation Course, Lecture Notes, Second National
Conference on System Dynamics, Varanasi, January, 15-17.
Richardson, G.P and Pugh III, A.L. (1981) Introduction to System Dynamics Modeling with DYNAMO, MIT Press,
Cambridge, Massachusetts.
Sargent, R.G. (2003) “Verification and Validation of Simulation models”, Proceedings of the 2003 Winter Simulation
Conference, S. Chick, P. J. Sanchez, D. Ferrin, and D. J. Morrice, (eds.), pp37-48.
Saysel, A.K. and Barlas, Y. (2004) “Model Simplification and Validation: Illustration with Indirect Structure Validity Tests”,
Working Papers in System Dynamics, March, ISSN 1503-4860.
Schlesinger, S. (1979) “Terminology for Model Credibility”, Simulation, Vol 32, No. 3, pp103-104.
Solberg, J. (1992) "The power of simple models in manufacturing", Manufacturing Systems: Foundations of World-Class
Practice, J. Hein and W. Compton (eds.), Washington, USA: National Academy of Engineering Press, pp215-223.
Sterman, J.D. (1984) “Appropriate Summary Statistics for Evaluating the Historic Fit of System Dynamics Models”,
Dynamica, Vol 10, pp51-66.
Sterman, J.D. (2000) Business Dynamics: Systems Thinking and Modeling for a Complex World, Irwin McGraw-Hill,
p845.
Sushil (1993) System Dynamics: A Practical Approach for Managerial Problems, Wiley Eastern Publication, New Delhi,
ISBN: 81-224-0498-7, p137.
