Parametric Estimating Handbook
Fourth Edition, April 2008
Table of Contents

List of Figures
Preface
Introduction
Part One: The Basics of Parametric Analysis
Chapter 1. Parametric Analysis Overview
Part Two
Chapter 2. Data Collection and Analysis
Chapter 3. Cost Estimating Relationships
Chapter 4
Chapter 5
Chapter 6
Chapter 7. Government Compliance
Chapter 8
Chapter 9
9.1 France
9.1.1 General Applications
9.1.2 Examples
9.1.3 Regulatory Considerations
9.2 Germany
9.2.1 General Applications
9.2.2 Examples
9.2.3 Regulatory Considerations
9.3 United Kingdom
9.3.1 General Applications
9.3.2 Examples
9.3.3 Regulatory Considerations
Appendix A
Appendix B
Appendix C
Appendix D
Appendix E
Appendix F
Appendix G
Appendix H
Appendix I
Appendix J
Appendix K
Glossary
References
Handbook User Comments
List of Figures

Figure 4.1 Example of Model with CERs Matched to Product
Figure 4.2 Typical Model Development Process, Company Developed Models
Figure 4.3 Example of a Computerized Data Collection Form
Figure 4.4 The Space Sensor Cost Model Engineering Cost Drivers
Figure 4.5 Example of Model Validation by Comparing the Model's Estimate to Another Estimate
Figure 4.6 Example of Model Documentation Which Facilitates Maintenance
PREFACE

ISPA provides its members with a range of publications, including:

Parametric World;
Membership Directory;
Journal of Parametrics;
Conference Proceedings.
ISPA has always provided excellent conferences and educational programs for its members. Truly international in scope, the ISPA conferences are held annually and, every leap year, at an international location.
In the estimating process, parametricians understand the technical and other cost
drivers, thus making the parametrician a valuable member of a proposal team.
The goal of ISPA is to continue to support parametricians throughout their careers by stimulating tool development and by encouraging professional contributions.
ISPA will continue to be a powerful force within the estimating community in the
foreseeable future.
Acknowledgements
The International Society of Parametric Analysts (ISPA), as well as the general
Industry/Government Estimating Community would like to thank the following
people and organizations for their support during the effort to compile this edition
of the Parametric Estimating Handbook. We apologize if anyone was overlooked.
The ISPA Board of Directors for their overall support and funding.
The Content Writing and Review Team consisted of the following people:
o David Eck, DCAA;
o John C. Smith, DCAA;
o Jason Dechoretz, MCR;
o Rich Hartley, USAF;
Joan Ugljesa (MCR) and Diane Welschmeyer (DCAA) for general editing
and publishing support.
Galorath Incorporated (www.galorath.com)
User Comments
As in previous editions, this Fourth Edition of the Parametric Estimating
Handbook is intended to be a living document. We encourage all handbook users
to provide suggestions for improvement using the comment form at the end of the
handbook. You may also go to the ISPA web site (www.ispa-cost.org) to make
comments.
Introduction
About This Handbook
The detailed guidance, case studies, and best practices contained in this handbook
are designed to provide an understanding of the "how-to" of parametric estimating. It is designed to help those involved in the acquisition process to
become more familiar with parametric estimating as well as the techniques and
tools used in the process. It is also designed to assist practitioners and managers
involved in the processes to better understand and apply the tools and techniques
to real world cost estimating problems.
People new to the parametric estimating practice will find this document to be an
invaluable aid in the execution of their assignments. This handbook provides
information about parametric estimating techniques, guidance on the acceptable
use of tools, and methods for process and parametric estimate development and
evaluation. The chapters mirror the process an organization may use in
developing a parametric estimating capability.
This handbook presents and summarizes the best practices and lessons learned
that an organization needs to know to successfully establish and utilize
parametric estimating tools and techniques. This handbook also helps companies
address the feasibility of using parametric techniques before they are
implemented. Among the critical feasibility issues assessed is whether there exists a high-quality link between the technical and cost proposals. The case studies in this handbook were developed based on the best practices and lessons learned.
Handbook Outline
The general content of this edition of the handbook is as follows:
Chapter 1 Parametric Analysis Overview
This chapter describes the parametric estimating process as well as the procedures
and techniques for implementing that process as seen from a management
perspective. In addition to the process itself, the chapter describes the various types of parametric applications, the organization required to implement the techniques,
use of cost IPTs, the necessary competencies, and roles and responsibilities in the
parametric estimating organization.
Chapter 2 Data Collection and Analysis
Chapter 2 discusses the methods and techniques of data collection and analysis for
use in the development of CERs and more complex parametric models. The
chapter also discusses the techniques of data capture and normalization. Detailed
technical math can be found in Appendix B. There is an emphasis on real world
examples.
Chapter 3 Cost Estimating Relationships
Chapter 3 discusses the development and application of cost estimating
relationships (CERs). As with Chapter 2, detailed technical math can be found in
Appendix B.
Part One
The Basics of Parametric Analysis
CHAPTER 1
Parametric Analysis Overview
The purpose of this chapter is to describe, for those who are not intimately
involved in the parametric costing process, the basic knowledge required to
manage and support that process. Many support personnel do not need an in-depth knowledge. These people include project and other managers, proposal
reviewers, pricers, accountants, and other knowledgeable professionals who come
into contact with parametric analyses from time to time during their careers.
1.1 The Parametric Analysis Process

The profession has grown steadily in both number and quality of
practitioners. That has happened through a gradual process of convincing project
stakeholders of the integrity of the modeling process, and of the increasing
professionalism of model users.
The objective of this chapter is to provide guidance for managing the parametric
analysis process. It describes best practices associated with what we will formally
call the parametric analysis process. This process is not monolithic in the sense of
massive, totally uniform in character, and slow to change. It is an evolving
process, benefiting from continuous improvements and new ideas. Experience
has shown that certain aspects of the process are best done in particular ways if
the best results are to be obtained for all concerned. Those best ways are the focus
of this chapter.
The process has three major components: database development, model
development, and model use. In most situations, the same people do all parts of
the process. This is especially the case when a company, or a Government
agency, decides that they should build a parametric model (or models), for their
own, often specialized, internal use. In other cases, organizations may decide to
license or otherwise acquire a commercially available general or special purpose
model that they believe will adequately serve their needs.
Parametric analysis is a major management innovation. In common with several
other management innovations, such as network scheduling, earned value
analysis, and many of the methods of operations research, modern parametric
analysis had its genesis in the U.S. and British military-industrial complex.
ISPA's present membership is still associated with and heavily influenced by that world, but its sphere of interest now includes other U.S. Government agencies, and additional companies and Governments in Europe, Australia, and Asia.
Use of parametrics also has spread to the commercial world, especially to the
construction industry and to companies that build or buy software, and that is now
a growing share of the business for companies that produce commercial models
still used primarily by ISPA members. Nevertheless, the best practices discussed
here are heavily influenced by the needs of Government, U.S. and other. That
should be kept firmly in mind. Some of the practices might not apply, or might be
less rigorous, if it were not for the need to maintain great openness and integrity
in the handling of public money.
The Government interest in parametric best practices strongly affects the
construction and use of in-house parametric models. It also affects the use of
commercially available models and to a lesser extent their construction. In-house
developed cost models used in Government procurements generally must be open
to and approved by the Government, at least in some respects. Commercially
built models, on the other hand, are generally proprietary, at least in part, and their
customers use them because they offer some economic advantages. Nevertheless,
their use in estimating Government-paid costs is carefully scrutinized by the
Government, and users must generally show that the models have been calibrated
to a particular project environment and way of doing business.
This chapter will first address database development, then the model building part
of the process, and then using the model. In each instance, the appropriate steps
will be described in a simplified flowchart format. Then, each step of the
flowchart will be discussed. The level of detail of the discussion often will be
limited because subsequent chapters and appendices in this handbook provide
much more detail.
1.1.1 In-House Model Development
It should be noted that the flow of work for commercial model development and
the development of in-house models differ. In-house models that are developed
for a specific purpose are strongly tied to the available historical data. Thus, the
development and normalization of auditable historical data is the key starting
point. The historical data will dictate and restrict the level of detail and the
approach the parametrician can take.
In-house models discussed here tend to be for a specific purpose, such as a new
business competition, and not to support a family of acquisitions. The in-house model could, however, be used for other acquisitions with tailoring to a work breakdown structure (WBS), phasing, quantities, hardware specifics, and so forth. The development of unique in-house cost models, as experienced by the parametrician, occurs through these steps:
Database development;
Model requirements;
Model development;
Model documentation;
Model updating.

1.1.1.1 Database Development
A sound database is key to the success of the parametrician. A cost model is a
forecast of future costs based on historical fact. Thus, future cost estimates must
be consistent with historical data collection and cannot provide a lower level of
detail than provided by the historical detail without some allocation or distribution
scheme devised by the parametrician.
Parametric techniques require the collection of historical cost data (including
labor hours) and the associated non-cost information and factors that describe and
strongly influence those costs. Data should be collected and maintained in a
manner that provides a complete audit trail with expenditure dates so that costs
can be adjusted for inflation. Non-recurring and recurring costs should be
separately identified. While there are many formats for collecting data, one
commonly used by industry is the WBS, which provides for the uniform
definition and collection of cost and certain technical information. If data are not collected in such a common format, the data collection practices should contain procedures for mapping the cost data to the cost elements of the parametric estimating technique(s) which will be used.
The collection point for cost data is generally the company's financial accounting system, which in most instances contains the general ledger and other accounting data. All cost data used in parametric techniques must be consistent with, and traceable to, the collection point. The data should also be consistent with the company's accounting procedures and generally accepted cost accounting practices.
1.1.1.2 Model Requirements
The expectation of a parametric model is that it will estimate costs virtually
instantaneously and accurately if the correct information is entered with respect to
its parameters. It can do this repeatedly without deviation. Generally, there is an
even higher expectation, namely that a parametric model will do these things
quicker and better than alternative methods, such as bottoms-up estimating or
detailed analogy estimating. This is especially true if the model is intended to
support numerous cost trade studies and analyses. If that is not true, the expense
of building a parametric model may not be justified.
While crude analogy estimates can sometimes be produced in minutes, they are
not famous for their accuracy. More detailed analogy estimates can be quite
accurate, but they are usually time consuming to build. Bottoms-up estimates are
notoriously inaccurate very early in project planning because of poor
understanding of project scope, but typically improve as time goes on, and a bill
of materials (BOM) is built. They are usually very time consuming and
expensive. A well conceived and constructed parametric model offers rapid,
inexpensive estimating at any stage of project life, and is generally the more
accurate method in the early days of a project.
The scope of the model is strongly dictated by the database and the specification
for the model. The specification is generally a function of a request for
information (RFI), request for proposal (RFP), or other official Government
request; the model may even be built at the request of management in anticipation of a
new business opportunity. In any event, the level of detail required by the model
will be a function of the information desired tempered by the nature of the data
available in the database, the time-frame required for developing the model, and
so forth.
The in-house model is typically designed to estimate a system such as a
communication satellite system, land-based missiles or armored tanks, a particular
type of hardware or software such as a battery or fire control system, or perhaps a
particular function, such as systems engineering, and may be limited to
development costs only, or production costs only. Many in-house models give
most likely point estimates, but there is a significant trend within industry to
provide range estimates based on risk and cost uncertainty. The new parametric
model may be best served by a combination of several commercial models tied
together by in-house developed CERs and algorithms.
1.1.1.3 Model Architecture

The estimating relationships in a model generally take one of two forms:

Algebraic equations;
Lookup tables.
Other forms are possible in certain types of very specialized models, but we will
not attempt to list them here. The most general expression for the algebraic
equation form of a CER is y = f(xᵢ). Here, y represents a desired estimate, usually
in currency units (e.g., USD), labor hours expended, or periods of time consumed
(e.g., months). Mathematicians would refer to y as a dependent variable. The f
represents some functional relationship that could be almost anything, including
linear equations and a variety of non-linear equations, such as polynomials, power
laws, exponentials, and so forth. The key point about the selection of f is that the
resulting equation must be a good fit to the supporting data. Assuming that is
true, we also assume that it will produce estimates that reasonably represent what
will happen in future real world projects.¹
The xᵢ represents the possibility of more than one independent variable. Most
commonly, those independent variables are the parameters that the model builder
has chosen as parameters driving the dependent variable. Model users initiate
estimating by entering known or at least suspected values for these parameters.
Lookup tables are basically mathematical functions that are expressed in tabulated
form. Use of lookup tables is sometimes more convenient for commercial model
builders than use of algebraic functions. This is particularly true for discrete
drivers such as material of manufacture, number of axes of rotation, number of
external interfaces, and so forth.
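As an illustration, here is a minimal sketch of a lookup-table CER in Python. The table of material factors and the base cost are entirely hypothetical, standing in for the kind of tabulated discrete driver described above:

```python
# A minimal sketch of a lookup-table CER for a discrete cost driver.
# The materials, factors, and base cost are hypothetical illustrations,
# not values from any real model.

MATERIAL_FACTOR = {        # discrete driver: material of manufacture
    "aluminum": 1.00,
    "titanium": 1.85,
    "composite": 2.40,
}

def fabrication_cost(base_cost: float, material: str) -> float:
    """Scale a base fabrication cost by a tabulated material factor."""
    try:
        return base_cost * MATERIAL_FACTOR[material]
    except KeyError:
        raise ValueError(f"No lookup entry for material: {material!r}")

print(fabrication_cost(100_000.0, "titanium"))  # 185000.0
```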
A key aspect of model architecture is the choice of parameters. A prime
requirement for use of a particular parameter is that it must be either a direct
cause of the level or amount of the resource being estimated, or must strongly
correlate with it. An example of a direct cause is the number of optical elements
in a telescope. The more of them that are required, the higher the cost. Their
optical quality is another direct cause, as is their combined surface area.
A prime requirement for the selected cost driving parameters considered as a set
is that the amount of correlation between any two of them should be small.
Correlation is a statistical term. Its most commonly used measure is the coefficient of determination (R²), which reflects the strength of the linear relationship between the two variables. Most spreadsheets will do this calculation.
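For example, a quick pairwise check of two candidate drivers can be done with a short script; the driver values below are hypothetical:

```python
import numpy as np

# Hypothetical observations of two candidate cost drivers across projects.
weight = np.array([120.0, 340.0, 510.0, 760.0, 980.0])   # pounds
power  = np.array([0.8,   2.1,   3.4,   4.9,   6.5])     # kilowatts

r = np.corrcoef(weight, power)[0, 1]   # Pearson correlation coefficient
r_squared = r ** 2                     # coefficient of determination

print(f"r = {r:.3f}, R^2 = {r_squared:.3f}")
# A high R^2 here warns that the two drivers largely duplicate each other,
# so using both in one CER adds little independent information.
```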
A commonly used parameter in estimating many types of hardware is its weight.
In many cases, weight does correlate strongly with cost, but it is seldom a direct
cause. In fact, attempts to reduce weight can increase cost significantly. So why
use weight as opposed to using more direct cost driving parameters? Weight is
used because data for it is almost always available. Commonly on projects where
weight is important, reasonably accurate weights are available very early, even in
the proposal phase. But when using weight, it is virtually always necessary to use other, more direct cost drivers along with it.

¹ We can now define what could be called the Fundamental Assumption of Parametric Estimating. A fair wording for it is the following: if carefully selected and adjusted historical project outcomes are fitted with sufficient accuracy by a set of mathematical relationships, then that same set of mathematical relationships will estimate with similar accuracy the outcomes of sufficiently similar future projects. Note the profuse use of qualifiers such as "carefully," "adjusted," "sufficient," and "similar." It is because of the need for such qualifiers that parametricians must exercise due diligence in selection and treatment of data, and careful testing of modeling concepts. The need for such qualifiers also prompts customers presented with parametric estimates to demand justifications of model construction and use.
[Figure 1.2: Scatter plot of sample one-driver data with an OLS best-fit straight line]
The OLS provided best fit equation is y = 3.1772x + 1.8123. Obviously, the fit is
not exact, because none of the points lie exactly on the line. Additional analyses,
discussed in Chapter 2 of this handbook, are generally appropriate to analyze just
how good the fit is. If it is good enough to satisfy the analyst and customers, a
one-independent-variable CER can be declared. But what if the result is as shown
in Figure 1.3? Here, the data scatter is much worse. What to do?
[Figure 1.3: Scatter plot of the same form showing much greater data scatter]
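For illustration, here is how a one-variable OLS fit like the one above might be produced in Python. The data points are hypothetical stand-ins, not the values behind the figures:

```python
import numpy as np

# Hypothetical (x, y) observations; the handbook's actual data behind
# y = 3.1772x + 1.8123 are not reproduced here.
x = np.array([2.0, 5.0, 9.0, 14.0, 20.0, 27.0, 33.0])
y = np.array([8.1, 18.9, 29.5, 47.2, 66.0, 85.3, 107.8])

slope, intercept = np.polyfit(x, y, deg=1)   # OLS fit of a straight line
predicted = slope * x + intercept
residuals = y - predicted                    # fit errors at each point

print(f"y = {slope:.4f}x + {intercept:.4f}")
print("residuals:", np.round(residuals, 2))
```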
The first thing to do is to understand the several reasons why data scatter may be
observed. The most prominent among them are:

Poor choice of cost driving parameter;
Presence of one or more other cost driving parameters;
Presence of non-normalized parameter values;
Data collection errors;
Inconsistent cost classification.

Poor Choice of Cost Driving Parameter
Engineers and others who work with a new product in a project environment
generally have a good intuition for what parameters drive cost of the product.
However, intuition can sometimes be deceived. The plot in Figure 1.3 is an
extreme example. It should be noted that a parameter that is poorly associated
with one project cost may be an excellent driver for a different project cost. For
example, a driver that poorly predicts development labor hours could possibly be
a very good predictor of production touch labor hours.
Presence of One or More Other Cost Driving Parameters
Most costs depend to some extent on more than one parameter. For that reason
alone, scatter plots almost always contain scatter. One good way to detect this is
to make scatter plots of other suspected drivers. Unfortunately, scatter plots with
two or more independent variables are very difficult to make and to interpret, but
usually dependence on more than one parameter can be detected with two
dimensional plots.
Presence of Non-Normalized Parameter Values
Consider cost values for essentially the same kind and amount of material taken
from two different projects, one done in 1999 and one done in 2003. In the 1999
project, the material cost $5/pound, but in the 2003 project it cost $5.50/pound.
There could be more than one reason for this, but the most common reason is cost
inflation. Inflation is one of a set of cost-affecting parameters commonly referred
to as normalization factors. Certain cost data that comes from different years will
typically be affected by inflation, and unless this effect is accounted for and
corrected, the model can have significant errors.
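A minimal sketch of this kind of inflation adjustment follows, using a hypothetical price index table together with the $5.00/$5.50 per pound example above:

```python
# A minimal sketch of inflation normalization. The index values are
# hypothetical; a real analysis would use published Government or
# industry inflation tables for the relevant commodity.

PRICE_INDEX = {1999: 1.000, 2000: 1.022, 2001: 1.047,
               2002: 1.071, 2003: 1.100}

def to_base_year(cost: float, data_year: int, base_year: int) -> float:
    """Convert a then-year cost to constant base-year currency."""
    return cost * PRICE_INDEX[base_year] / PRICE_INDEX[data_year]

# $5.50/pound observed in 2003, restated in 1999 currency:
print(round(to_base_year(5.50, 2003, 1999), 2))  # 5.0
```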
It should be noted that different kinds of cost are not necessarily affected in the
same way by particular normalization factors. For example, cost of a certain
material such as aluminum may be strongly affected by inflation, while labor
hours are not much affected. However, labor hours can be affected by industry
productivity trends, learning phenomena, and other considerations such as skill
mix.
Proper selection of normalization factors and the mathematics of normalization
corrections are an area where analyst judgment and experience are important.
Data Collection Errors
Data collection errors can occur when the data are first recorded, and also when
the parametric analyst acquires it from wherever it has been stored. If the original
recording was in error, finding and correcting the error is often difficult.
Inconsistent Cost Classification
One of the most difficult tasks undertaken by parametric analysts is to sort out
inconsistent data. The task is doubly difficult if the data comes from more than
one organization whose accounting practices differ. Mergers and acquisitions within industry mean that data from disparate accounting systems have been combined.
would be foolish to assume that the contributing accounting systems were
identical and that the data from the two was homogeneous. Experience, good
judgment, and knowing the right questions to ask can be important to doing this
well.
For example, an area of potential confusion is the distinction between recurring
and non-recurring cost. In some companies, engineering development work is
treated as a burden on production effort (recurring), while in others it is treated as
a standalone non-recurring cost. See Appendix I for additional discussion on this
topic.
What are the best practices with regard to possible data inconsistency?
Recognize that if data comes from more than one organization, some cost
inconsistency is likely. Ask questions about their ways of treating costs.
Try to bore down to the actual labor tasks that are done; work with labor
hours to the extent possible, not labor costs (e.g., dollars).
Recognize that even labor hours can have some inconsistency if the skill
mix is changing.
It would be simpler for parametricians if all costs were linearly related to their
various drivers. Unfortunately, this is often a non-linear world and the
assumption of linearity will not always work. For accuracy's sake, the non-linearity often must be accounted for.
Figure 1.4 shows a rather mildly non-linear x-y relationship. This plot was created
using the popular MS Excel spreadsheet, which has the capability to do some
simple best fit curve fitting to plotted data. While that capability should not be
the basis of a formal CER design, it is an excellent tool for detecting non-linearity
and making a preliminary assessment of the best non-linear function to use to fit
the data.
[Figure 1.4: Mildly non-linear data with a straight-line fit (y = 28.123x − 16.757, R² = 0.9593) and a second-order polynomial fit]
In Figure 1.4, the dashed line is a straight line, and it is not as good a fit as the
curved line, which happens to be a second order polynomial, also known as a
quadratic equation. The best fit equations of both curves appear at the top of the
plot. Also appearing at the top of the plot is the statistic R², the coefficient of determination. This statistic, which ranges from zero to one depending on how well
the curve fits the data, is useful for rough comparisons of goodness of fit of
different types of curves. Note that the better fit of the quadratic is confirmed by
its higher R2 value.
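A sketch of this kind of comparison in Python, using hypothetical data, fits a first- and a second-order polynomial and compares their R² values:

```python
import numpy as np

# Hypothetical mildly non-linear data, in the spirit of Figure 1.4.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([4.0, 14.0, 31.0, 55.0, 86.0, 124.0])

def r_squared(y_actual, y_fit):
    ss_res = np.sum((y_actual - y_fit) ** 2)
    ss_tot = np.sum((y_actual - np.mean(y_actual)) ** 2)
    return 1.0 - ss_res / ss_tot

for degree in (1, 2):                 # straight line, then quadratic
    coeffs = np.polyfit(x, y, degree)
    fit = np.polyval(coeffs, x)
    print(f"degree {degree}: R^2 = {r_squared(y, fit):.4f}")
# The quadratic shows the higher R^2, confirming the better fit.
```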
1.1.1.4 Model Development
During this phase, the team refines the scope of the model's requirements and
defines the methods and assumptions which establish the basis for its business
rules and estimating relationships. User requirements and input/output interfaces
are also identified.
The development of a complex model incorporates many anticipated uses and
goals such as estimating/user requirements, life-cycle costs, systems engineering costs, and forward pricing rates, and it must integrate these into the parametric estimating approach. The modeling process, in particular, focuses on these tasks:
Identifying the job functions and other elements of cost that will be
estimated;
1.1.1.5 Model Documentation
Model documentation requires that configuration control of the assumptions,
conditions, and changes to the model are recorded as they occur. Management
will want to know the version of the model, including all changes and
assumptions being used for costing.
1.1.1.7 Model Updates
Model updates are evolutionary. They are often updated on the fly as changes
occur. All changes should be documented.
1.1.2 Model Use

The steps involved in using a parametric model include:

Special settings;
Model calibration.
It should be noted that the flow of work need not always conform exactly to the
sequence shown in the flowchart (Figure 1.1). Sometimes it may be possible,
depending on resources available, to do in parallel, or partly in parallel, or even
out of order, some of the tasks that are shown to be in series.
1.1.2.1 Special Settings
Many parametric models have special global settings that influence the overall
response of the model. Common examples of such settings are year of inflation,
inflation table to be used, method and slope of learning, average labor rates,
average overhead rates, project start dates, and so forth. If a model has global
settings, they must all be checked and properly set before estimating activities
begin.
1.1.2.2 Model Calibration
Most simple CER models do not have a calibration capability nor do they need
calibration if they are based on company history. The more sophisticated
complex models have a calibration capability. The main reason they do is that the
data they are based on may come from various sources and represent industry-typical conditions. These conditions may be somewhat different from the
conditions in a particular organization (more on this in Section 1.3). Users
therefore need to adjust their model to accommodate the differences. These
adjustments typically are not huge. Huge differences are unlikely in competitive
markets, but they can occur in non-competitive situations, or in situations where
an organization has a decided advantage, as for example in the availability of high
quality but inexpensive labor (e.g., in university graduate schools). The topic of
calibration is discussed in additional detail throughout this handbook.
1.1.2.3
Sanity checks and cost realism. If a project has decided to use another
estimating method as the primary method for a proposal, it may
nevertheless want to use a parametric backup estimate as a sanity check.
1.2 Cost Estimating Relationships

The process of building a CER involves:

Data collection;
Data normalization;
CER development;
CER validation.
CERs are mathematical in nature, and the math can be somewhat advanced, but
our discussion of the mathematics in this section will be at a rather elementary
and cursory level. See Appendix B for a deeper exploration of the mathematics.
1.2.1 Data Collection

1.2.1.1 Comparability of Activities
Ways of comparing project activities are almost limitless. Our interest in
comparison will be limited to examination of project activities that can create a
material difference in cost. Even so, it is impossible in the scope of this handbook
to capture all that can be imagined. It is the job of the CER builder to be sure that
he or she has captured all significant project activities that matter.
Some project activities have been shown by experience to cause material differences in cost; they also occur frequently. These include:
Timing;
Labor versus material;
Recurring versus non-recurring;
Overhead versus direct;
Production quantity and rate;
Team skills;
Team tools;
Volatility;
Accounting changes;
Special constraints.
Timing
Timing of historical versus future costs is of importance for at least one main
reason: variations in the value of currencies, also called inflation or deflation.
Adjustments must be made due to these changes in the value of currencies.
Another consideration is the number of years in the period of performance and
total cumulative data from inception to completion.
Labor versus Material
In common parlance the difference between labor and material is clear. But it is
not always clear in the world of accounting, which is the source of the data used
to build CERs. Major integrating contractors commonly refer to anything they
buy as material, regardless of its labor content at the source. Lower level
contractors may do the same thing, but their labor content may be considerably
different from an integrating contractor's.
Recurring versus Non-recurring
Costs related to initial development of a product are frequently referred to as non-recurring costs on grounds that they will only occur once. Costs related to
production of a product are referred to as recurring costs on grounds that they will
recur every time the product is built. Hardware projects commonly have both
kinds of costs, while software projects commonly have only non-recurring costs.
As long as the definitions of what is included in each category remain consistent,
there are no problems. But different organizations have been known to adopt
different accounting practices in this regard. For example, organizations typically
treat engineering design effort as non-recurring costs, while a few bundle the
engineering effort into an overhead account and apply it as a burden to production
labor costs. The difference in production burdened labor rates is substantial. See
Appendix I for additional discussion of recurring and non-recurring costs.
Overhead versus Direct
Most organizations allocate overhead (indirect) costs to projects as a percentage burden on direct costs. While this to some extent masks the true costs of activities, it is not
particularly troublesome to CER builders as long as the allocation percentages
remain about the same. Unfortunately, different organizations have different
overhead structures, so mixing data from different organizations can be
troublesome. Moreover, even within one organization, overhead rates, and thus burdened labor rates, can sometimes vary as often as daily.
Production Quantity and Rate
If cost data rolls up the cost of many items produced, as opposed to separately
enumerating the cost of each item, the data is not useful unless the quantity
produced is known. Of lesser effect on data usefulness is the rate of production,
the effects of which are more subtle and variable.
While quantity is the main driver of total production cost, the well-known learning
effect can also have a considerable impact.
Team Skills
In some organizations team skills are relatively constant and therefore of not
much concern to CER builders. However, the modern trend is for competitive
project organizations to engage in some form of continuous improvement, thereby
becoming more cost effective in their work. This takes various forms, such as
CMMI, and various ISO classifications, but team self-improvement is the
common purpose.
Team Tools
Quality teams cannot be their most effective if they work with poor tools.
Tools can include everything from buildings to production machines to
computers and software. As tools improve, cost effectiveness increases.
Volatility
The most cost effective project environment is one in which project requirements,
labor force, and infrastructure are stable. Volatility in any of these factors can
increase costs.
Accounting Changes
Companies change their accounting systems for many reasons. There are
mandated changes from the Government, internal decisions to change cost
accumulation procedures, adjustments to account for new ways of doing business,
and mergers and acquisitions. In any event, expect to have to reconcile history to
account for these planned changes.
Special Constraints
Various kinds of special constraints can seriously affect cost.
1.2.1.2 Sources of Data

Common sources of historical cost data include:
Manufacturing records;
Departmental records;
Purchase orders;
Industry surveys;
Government reports;
Cost proposals.
The latter should be used only as a last resort. Estimates based on proposals are
often viewed with suspicion.
1.2.1.3 Context Information

Useful context information about historical projects includes:
Production quantity;
Production rate;
Project schedule;
Context information such as the above is useful to the CER builder in determining
whether or not the future project to be estimated is sufficiently similar to the
historical projects from which data was taken.
1.2.2 Data Normalization
Data normalization is a process whereby a CER builder attempts to correct for
dissimilarities in historical data by putting the data into a uniform format. In
principle, if all of the historical data comes from the same organization, if that
organization is not changing or learning, if the organization is repetitively doing
the same kind of work with no improvements in efficiency or technology, and if
the national currency does not fluctuate in value, historical data would not need to
be normalized.
In practice, normalization of historical data is virtually always necessary. How much is
necessary depends on what is changing and how fast it is changing. It also
depends on how much accuracy is needed in the CER being built.
The ability of a CER builder to do normalization is almost always subject to the
limitation caused by unrecorded data. Not everything that affects cost in a
historical project is recorded in the official books of account. If a CER builder is
fortunate, a historical project will have recorded certain vital contextual
information in some kind of anecdotal project history.
While some normalization adjustments are of the types that require an initial
injection of expert opinion to get the ball rolling, others are more mechanical.
The two that are most nearly mechanical are adjustments for inflation and
production quantity.
To normalize for inflation, the CER builder will locate a prior years' inflation table appropriate to a given product or product mix. Using this table, the
historical costs will be adjusted to a common desired base year. The CER will
then make its estimates based on the new base year currency values. If the CER
user wishes the cost output to be in a different base year currency, he or she must
use a table of inflation that includes that base year. If the new base year is in the
future, as is likely, the inflation values given by tables will be estimates made by
Government or industry economists.
If every unit produced had the same labor hours and material cost, or deviated
only slightly from the average of those values, adjusting for differences in
production quantity would simply be a matter of dividing the historical total cost
by its production quantity to get an average value. Unfortunately, in most cases
this simple linear adjustment will usually be far off the mark if the production
quantity is more than three or four. In production where the quantity is more than
just a few, a highly non-linear phenomenon known historically as learning
commonly takes place.²

² Little or no learning may take place in highly automated factories. However, there may be improvements due to better methods and equipment.
Sometimes this phenomenon is called cost improvement or other names, but the
basic idea is that the work force gradually improves its ability to build the
product. This involves both human understanding and motor skills. Learning
can also be due to investments in better tools and processes. Learning may
affect material purchase costs through better buying practices, reduction of
manufacturing scrap, and measures to reduce various material attrition losses
(e.g., due to rough handling, theft, damage in transit, and so forth).
The learning process has been found to be non-linear in that it affects every item
built somewhat differently. Without delving too deeply into the mathematics
involved, we can say that the learning process appears to be best represented by a
non-linear equation of the form:

y = axᵇ

The above equation will plot as a straight line on log-log graph paper.
In one theory of learning, called the unit theory, y is the labor hours or cost
associated with unit number x in the production sequence, a is the labor hours or
cost associated with the first unit produced, and b is called the natural learning
slope. The value of b is almost always negative, reflecting the fact that unit costs
decrease as production quantity increases.
In the other theory, called the cumulative average theory, y represents the
cumulative average cost of units 1 through x, a again represents the first unit cost,
and again b is the natural learning slope.
Learning slope is commonly given as a percentage. The percentage expression of
learning is related to b through an equation that can be found in Appendix B.
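A short sketch of the unit theory in Python follows, assuming a hypothetical 1,000-hour first unit and an 85 percent learning slope; the slope-to-exponent conversion shown is the standard b = ln(slope)/ln(2) relationship:

```python
import math

# A minimal sketch of unit-theory learning. The first-unit hours and the
# 85 percent slope are hypothetical inputs, not data from any program.
first_unit_hours = 1000.0
slope_percent = 85.0

b = math.log(slope_percent / 100.0) / math.log(2.0)  # natural learning slope

def unit_hours(x: int) -> float:
    """Labor hours for unit number x under the unit theory: y = a * x**b."""
    return first_unit_hours * x ** b

for x in (1, 2, 4, 8):
    print(x, round(unit_hours(x), 1))
# Each doubling of quantity multiplies unit hours by 0.85:
# 1000.0, 850.0, 722.5, 614.1
```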
1.2.3 CER Development
The basic idea in CER development is to 1) identify one or more parameters of a
product or project that best explain its cost, 2) find some historical data that are
representative of the desired cost, and appropriately normalize it, and finally, 3)
identify one or more mathematical functions that fit the data and that can be
used to estimate future costs based on similar plans about future projects.
The world of useful mathematical functions is extensive. However, most cost
data sets arising in practice have fairly simple shapes. This allows good fits using
relatively simple functions. The functions used are mostly the polynomials of
orders 1 and 2, the power law, the exponential function, the logarithmic function,
and some variations on these.
The most elementary function commonly used in fitting to data is the polynomial
of order 1, also known as the straight line. If a scatter plot of data appears to be
compatible with a straight line, then the function to be fitted would be the
equation of a straight line, namely:

y = ax + b
[Figure 1.5: Scatter plot of data suitable for a straight-line fit]
In the 19th century the famous mathematician Carl Friedrich Gauss (with
contributions from others) developed a process called least squares, also called
ordinary least squares (OLS) for obtaining an optimal fit of data to a polynomial
of order 1 or higher (e.g., a straight line, a quadratic, a cubic, and so forth.). The
fit is optimal in the sense that the sum of the squares of the fit errors (known as
residuals) is minimized. See Figure 1.6 below. Hence the name least
squares.
Certain so-called transcendental curves favored by parametric analysts, such as
the power law, the exponential, and the logarithmic, cannot be fitted directly by
the OLS process. The technical reason for this is that they are not linear in their
coefficients. However, the most useful ones can be converted to a linear form by
a certain mathematical transformation, and in that form the OLS process can be
used.
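For example, the power law y = axᵇ becomes linear in its coefficients after taking logarithms, since ln(y) = ln(a) + b·ln(x), so OLS can be applied in log space. A sketch with hypothetical data:

```python
import numpy as np

# Fitting the power law y = a * x**b by OLS after a log transformation.
# The data are hypothetical, generated near a = 5, b = 0.7.
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = np.array([5.1, 8.0, 13.2, 21.0, 34.5])

# Straight-line OLS fit in log space: ln(y) = ln(a) + b * ln(x)
b, ln_a = np.polyfit(np.log(x), np.log(y), deg=1)
a = np.exp(ln_a)

print(f"y = {a:.3f} * x ** {b:.3f}")
```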
One problem with OLS, bothersome to analysts, is the nature of the error term of
the resultant fitted equation. Using a simple linear equation to represent the class
of polynomials, the equation including error term can be written:
y = a + bx + ε    (additive error)

where ε is the error. Note that the error term is additive, which is not always appropriate. When certain non-linear functions are converted to linear form so that OLS can be performed, the error term is multiplicative, also not always appropriate, but usually of more interest in the cost estimating context:

y = (a + bx)ε    (multiplicative error)
To the problem of error inconsistency can be added the inability of OLS to fit
certain interesting and sometimes useful non-linear functions that cannot be
transformed to a linear equivalent, such as y = axᵇ + c.
Various fixes have been proposed for these problems, but probably the most
popular is called the General Error Regression Model (GERM). With GERM,
one can choose to use either an additive or a multiplicative error approach, and
one can fit virtually any function one chooses to the data, however non-linear it may be. Today, GERM is widely used, and it can be implemented on
most computer spreadsheets.
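A minimal sketch of a GERM-style fit follows, assuming SciPy's general-purpose minimizer and hypothetical data; it fits y = axᵇ + c, a form OLS cannot handle, by minimizing squared multiplicative (percentage) errors:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data roughly following y = a * x**b + c.
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
y = np.array([13.0, 17.5, 24.0, 34.0, 49.0, 72.0])

def sse_percent(params):
    a, b, c = params
    estimate = a * x ** b + c
    # Multiplicative (percentage) error, squared and summed.
    return np.sum(((y - estimate) / estimate) ** 2)

result = minimize(sse_percent, x0=[1.0, 0.5, 1.0], method="Nelder-Mead")
a, b, c = result.x
print(f"y = {a:.3f} * x ** {b:.3f} + {c:.3f}")
```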
1.2.4 CER Validation
Once we have gone through all of the steps of finding and normalizing data and
fitting one or more functions to it, we naturally want to know if the result is any
good. The ultimate test of the goodness of any CER is whether or not it can
predict project costs with reasonable accuracy. Unfortunately, we can never
know that for sure until we estimate the costs, do the project, and compare the
results to what the CER predicted. But by then, if we had a weak or inaccurate
CER, it would be pretty late to find that out, and damage could have been done.
So, a lot of effort is typically expended on CER validation before a CER is used
for any risky purpose. Validation activities are typically 1) practical, 2)
mathematical, and 3) judgmental. The most practical thing that can be done is to
use the CER to estimate one or more projects that have already been completed
and see if the answer is accurate to within expectations.
Several mathematical tests are available for CERs. We will briefly discuss three
of them:
Standard error of estimate (SEE). SEE is the root mean square (RMS) value of
all percentage errors made in estimating points of the data. It is similar in nature
to the more well-known standard deviation (σ) statistic. SEE measures how well
the model represents its own underlying data, given the scatter.
Average percentage bias. This is the algebraic sum of all percentage errors
made in estimating points of the data averaged over the number of points. Bias
measures how well percentage over and underestimates are balanced.
Coefficient of determination. This statistic, written as R², is undoubtedly the most
commonly used measure of goodness of fit, although many say it is not the best.
It measures the amount of correlation between estimates and corresponding
database values, that is, the degree of linearity between two quantities.
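A sketch of the first two statistics computed in Python, using hypothetical actual-versus-estimated comparisons:

```python
import numpy as np

# Hypothetical actual costs and the CER's estimates of them.
actual    = np.array([100.0, 250.0, 430.0, 610.0, 880.0])
estimated = np.array([ 92.0, 270.0, 415.0, 640.0, 905.0])

pct_error = (estimated - actual) / actual

see  = np.sqrt(np.mean(pct_error ** 2))  # RMS of the percentage errors
bias = np.mean(pct_error)                # average percentage bias

print(f"SEE  = {see:.1%}")
print(f"Bias = {bias:+.1%}")
```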
CERs have to be sanity checked. These checks can take various forms from
management reviews to in-depth audits. A growing practice is to form an
integrated product team (IPT) to review all of the steps of CER creation with a
view to assessing their validity. The activities of these IPTs can resemble
murder boards, in that they attempt to punch holes in all validity arguments. A
CER that survives such a process is likely to be of high quality.
1.3 Complex Models
What is a complex parametric tool or model, and how does it differ from a cost
estimating relationship (CER)?
In this section, we will try to make those differences clear to help readers better
understand how complex tools are built and how they fit into the estimating
process.
1.3.1

An in-house built cost model will be simpler, requiring less input, than a mature detailed
model requiring much more input.
A typical CER will ask for one, two, three, or perhaps four pieces of parametric
information. A complex model, by contrast, may ask the user for 20 to 40 pieces
of information. Its construction may involve a variety of statistical analyses and
inferences, generally not limited to regression, and will inevitably use expert
judgment about the way the world works in order to arrive at some of its results.
A complex model asks for more information than a single CER, and generally it
also provides more information in its output. A typical CER will give one
answer, usually a single point estimate of total cost, labor hours, material cost,
time, weight, and so forth. A complex model might provide all of that, and a
great deal more. For example, a complex model might provide a range of costs
that includes risk and uncertainty, information about project team skill mix and
size, spend profile, activity scheduling, facilities required, and so on.
Because a complex model is typically designed to report many kinds of
information, it tends to be algorithmically robust compared to a CER. While a
CER may comprise as little as one algebraic equation, a complex model could
have dozens of interactive equations, as well as look up tables, if-then logic
ladders, and even iterative procedures such as non-linear equation solving or
Monte Carlo simulation.
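As an illustration of the last device, here is a minimal Monte Carlo sketch of the kind of simulation a complex model might run to produce a range estimate instead of a point estimate. The three WBS element distributions are hypothetical:

```python
import numpy as np

# Hypothetical triangular distributions (low, most likely, high), in $K,
# for three WBS elements of a notional project.
rng = np.random.default_rng(seed=1)
elements = [(80, 100, 150), (40, 55, 90), (20, 25, 45)]

trials = 10_000
total = sum(rng.triangular(lo, ml, hi, trials) for lo, ml, hi in elements)

p10, p50, p90 = np.percentile(total, [10, 50, 90])
print(f"P10 = {p10:.0f}K  P50 = {p50:.0f}K  P90 = {p90:.0f}K")
```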
Mathematics is a powerful tool, but unaided, it is far from capable of putting
together the analytical approach of the typical complex model. Considerable
expert intervention is required as well. While a reasonably competent
mathematician or statistician may be able to build a valid CER given a regression
tool and a set of fairly clean data, that person would probably be unable to build a
valid complex model. Why? Because of inexperience with the objects for which
the estimates are needed, and perhaps also because of inexperience with the way
those objects are used, come into existence, or even what they look like. This is
why a good parametrician has a technical background and is versed in finance, with strong math and statistical skills.
Another difference between CERs and complex models is that CERs often have
much wider scope. There have been CERs (also known as rules of thumb in
this case) that proclaim that it takes x dollars to get a pound into space. A
complex model is unlikely to make such a simplistic claim, which makes its versatility limited to well-defined programs.
1.3.2

Projects are often subject to cost constraints, which go by names such as target cost, design-to-cost (DTC), life cycle cost (LCC), total
ownership cost (TOC), or cost as independent variable (CAIV). Where do such
cost constraints come from? How are they set? Often, they come from complex
cost models. Many of these models have the capability to not only provide a most
likely cost, but also provide costs at various probabilities of success. Careful use
of such models can help establish not only total cost targets, but also lower level
targets that are challenging but doable with reasonable effort.
Unless a project involves totally familiar effort, the stakeholders in a
consequential project will want to examine different ways of executing it. Such
examinations are called trade studies. They can range from macro level studies
conducted by the customer or project management, such as which development
team to use, which factory to use, what kinds of tests to conduct, and so on, to
micro level studies conducted by engineers, such as which wing shape is best,
which fuel control valve is best, and so forth. Seldom can even a complex model
embrace all of the considerations involved in such choices, but they can often be
very helpful with comparing the resources-required aspects of a trade
study.
Stakeholders will be interested in seeing if a project is meeting expectations, and
if not, what liabilities they are assuming by continuing. Earned value
management (EVM) is a much-touted tool for this purpose, but unaided, it has
limited predictive power. A complex estimating tool can enhance the
effectiveness of EVM by providing reasonably accurate predictions of cost
outcomes given the current situation. A complex tool can vastly aid decisions
about whether to continue, and if continuing, how much unplanned time and
money will be needed.
Complex tools can also assist pricing, production quantity, and marketing
decisions. Pricing below estimated cost is not feasible in the long run, but can be
an effective short run strategy. But to safely develop such a strategy, you must be
pretty sure what the cost is. Production quantity is a key decision variable, being
influenced by funds availability, production costs, and market demand. See
Chapter 8 for an additional discussion.
1.3.3

1.3.3.1 Outputs

Typical outputs of a complex model include:

Development cost;
Production cost;
Commonly, all of these results are produced in hardware models, but only the first
in software models. In software development, the production activity called
coding is generally regarded as part of development and thus defined by
convention as non-recurring.
Until recent years, the universally accepted primary driver for software
development cost was the number of lines of delivered working code. That
primacy still exists in many companies, but others have come to prefer other
metrics. Of those, the most widely accepted appears to be counts of function
points. The function point method counts not the lines of code, but the number of
distinct functions that the software must perform. There are several ISO
standards for counting application functionality, defining the rules for counting
and weights to be used in computing a summary function point count. Other
primary metrics have been considered, such as number of use cases, but these
as yet have not had wide acceptance.
There is increasing interest in non-weight primary drivers for non-electronic
hardware. One reason is that in general, all that engineers want to do with weight
is decrease it, and doing that can cost more than leaving it alone. So, it is not a
particularly useful metric in trade studies that consider cost, as most do. Many
future models will have as primary drivers those requirements or product features that are relatively difficult or expensive to meet. For example, for a space-based telescope,
the primary drivers might be diameter, number of imaging elements, number of
non-imaging elements, and optical quality.
Lower level drivers for both hardware and software models are generally of one
of three types:
Memory constraints;
Machine volatility;
Cost drivers are not all equal. Some are more sensitive and cause larger cost
variations than others. In an ideal complex model, all cost drivers would have
similar effect, but that ideal is unattainable. The reality is that some drivers will
be big and others will be small. Users should be aware of this and should
know which is which. Model builders tend to make this information readily
available.
1.3.3.2 Scope of Estimates
Generally, complex models focus on estimating development and production cost.
Some go beyond this and deal with what is commonly known as operations and
support, or the costs of operating and maintaining fielded systems.
System integration, as opposed to systems integration, is another important scope
issue. All complex models deal with one or more systems, in some sense of that
word, and many deal with systems within a system. Recently, because of growth
in what technology is capable of doing, we hear more and more about systems of
systems, or mega-systems. Commercial model builders continually try to upgrade
to enable users to better deal with these higher level integration issues. Users
must be aware of what level of integration complexity their complex model is
capable of dealing with, and limit their expectations accordingly. The practitioner
should be sensitive to this and include factors for these costs in the in-house
developed model.
1.3.3.3

The estimating devices within complex models are generally of two types:

CERs;
Algorithms.
CERs
Virtually all CERs are empirical fits to data. The most commonly used tool for
fitting is linear least squares regression, but other tools are often used, for
example:
Gauss-Newton.
A variety of curves is used. The choice of curve is most often on the basis of
some criterion of best fit. The following list includes the most commonly used
curves, in no particular order:
Linear;
Segmented linear;
Power law;
Log normal;
Exponential;
Rational;
Power series.
Some simple CERs have only a single explanatory variable (cost driver). But in
complex models there are often two or more. Most complex models, especially
custom in-house developed models, use a breakdown of hardware or software into
a WBS. Many of the core CERs are developed on lower level data, which may be
more readily available.
Algorithms
1.3.3.4
User error detection. On occasion, a model user may create inputs that
are inconsistent, or that exceed the design range of the model. If not
warned, the unaware user could get a result that is grossly in error. A
common practice is to provide user warning notices.
I/O Management
Model builders must facilitate rapid input as much as possible. Complex models
also generally provide many outputs, most of which are unwanted at any point in
time. Model builders must make it possible for the user to quickly isolate the
needed outputs.
In recent years, users have tended to link complex cost models with other
complex models, to make even more rapid the transfer of cost driving
information. This sometimes results in the cost model being semi-automated.
Input Management
Complex models may require a variety of inputs. Here are perhaps the most
commonly seen ones:
Product parameter inputs. These are inputs that describe features of the
product. For hardware, the most common is weight. Others typically
relate to design features, requirements, quality, and complexity of the
product. For software, the most common is lines of code and language.
Other measures of software size include function points and use cases.
Other frequent software product descriptors have to do with reliability,
complexity, and team environment and characteristics.
Output Management
Complex models typically provide a variety of reports and charts for users. More
detailed charts typically present detailed breakouts split in a number of ways. Pie
charts and various other graphical formats are commonly seen.
Most complex models provide means for the user to design his or her own output
reports. These custom reporting utilities generally allow the user to select just
about any combination of dozens of available outputs, and inputs as well, and
form them into a custom report. Typically, custom report formats can be stored
and used repeatedly. Most complex models provide for output export to other
applications.
Integration of Cost Models with Other Models
More and more, product development activities are being streamlined to decrease
the time necessary to provide product availability to end users. To that end,
collocated development teams have been formed in some organizations, with each
team member positioned at a computer that is linked to all of the other computers
in use by the team.
A cost analyst is generally a member of the team, and it will be his or her function
to keep track of the costs that result from the designs, versus the goal costs, and
also to be a kind of cost referee in tradeoffs where cost is a dimension of the trade
space. To have the power to deal with the many complex cost issues that are
likely to arise, the cost analyst in these teams is generally equipped with one or
more complex, general purpose models, and perhaps several simpler models or
CERs designed for specific types of estimating.
In this development environment, manual transfer of cost-driving technical data to the cost model, or models, slows the activity considerably and can be a cause of error. Because of this, most development teams today strongly favor semi- or fully automated (where achievable) transfer of technical information to the cost model. Much of the focus of model building today is on facilitating rapid data transfer.
1.3.4
1.3.4.1
1.3.4.2
- You fully understand the intended uses of the model, and you have one or more applications that correspond to some of those uses;
- You use the model at least once to estimate one of your previous projects where the results are known, and you are satisfied with the comparison.
If you do these things and are happy with the outcomes, the black box syndrome
is not a concern.
1.3.4.3
1.3.4.4
1.3.4.5
CHAPTER 2
DATA COLLECTION AND ANALYSIS

2.1
consistent with the company's accounting procedures and generally accepted cost accounting practices.
Technical non-cost data describe the physical, performance, and engineering
characteristics of a system, sub-system, or individual item. For example, weight
is a common non-cost variable used in CERs and parametric estimating models.
Other examples of cost driver variables are horsepower, watts, thrust, and lines of
code. A fundamental requirement for the inclusion of a technical non-cost
variable in a CER is that it must be a significant predictor of cost. Technical non-cost data come from a variety of sources including the MIS (e.g., materials
requirements planning (MRP) or enterprise resource planning (ERP) systems),
engineering drawings, engineering specifications, certification documents,
interviews with technical personnel, and through direct experience (e.g., weighing
an item). Schedule, quantity, equivalent units, and similar information come from
industrial engineering, operations departments, program files, or other program
intelligence.
Other generally available programmatic information that should be collected
relates to the tools and skills of the project team, the working environment, ease
of communications, and compression of schedule. Project-to-project variability in
these areas can have a significant effect on cost. For instance, working in a
secure facility under "need to know" conditions, or achieving high levels in various team certification processes, can have a major impact on costs.
Once collected, cost data must be adjusted to account for the effect of certain non-cost factors, such as production rate, improvement curve, and inflation; this is data normalization. Relevant program data, including development and production schedules, quantities produced, production rates, equivalent units, breaks in production, significant design changes, and anomalies such as strikes, explosions, and natural disasters, are also necessary to fully explain any
significant fluctuations in the data. Such historical information can generally be
obtained through interviews with knowledgeable program personnel or through
examination of program records. Fluctuations may exhibit themselves in a profile
of monthly cost accounting data; for example, labor hours may show an unusual
"spike" or "depression" in the level of charges. Section 2.3 describes the data
analysis and normalization processes.
2.2 Data Sources
The specification of an estimating methodology is an important step in the
estimating process. The basic estimating methodologies (analogy, grassroots,
standards, quotes, and parametric) are all data-driven. Credible and timely data
inputs are required to use any of these methodologies. If data required for a
specific approach are not available, then that estimating methodology cannot be
used. Because of this, the estimator must identify the best sources for the method
to be used.
Figure 2.1 shows basic sources of data and whether they are considered a primary
or secondary source of information. When preparing a cost estimate, estimators
should consider all credible data sources; whenever feasible, however, primary
sources of data have the highest priority of use.
Sources of Data

Source                       Source Type
Basic Accounting Records     Primary
Cost Reports                 Either
Historical Databases         Either
Functional Specialist        Either
Technical Databases          Either
Contracts                    Secondary
Cost Proposals               Secondary

Figure 2.1 Sources of Data
Primary data are obtained from the original source, and considered the best in
quality and the most reliable. Secondary data are derived (possibly "sanitized")
from primary data, and are not obtained directly from the source. Because of this,
they may be of lower overall quality and usefulness. The collection of the data
necessary to produce an estimate, and its evaluation for reasonableness, is critical
and often time-consuming.
Collected data includes cost, program, technical, and schedule information
because these programmatic elements drive those costs. For example, assume the
cost of an existing program is available and the engineers of a new program have
been asked to relate the cost of the old to the new. If the engineers are not
provided with the technical and schedule information that defines the old
program, they cannot accurately compare them or answer questions a cost
estimator may have about the new program's costs. The cost analysts and estimators are not solely concerned with cost data; they need technical and schedule information to adjust, interpret, and support the cost data being used
for estimating purposes. The same is true of programmatic data when it affects
costs. As an example, assume that an earlier program performed by a team at
CMMI (Capability Maturity Model Integration) level 2 is to be compared to a
new program where the team will be at CMMI level 4. The expectation is that the
CMMI level 4 team will perform much more efficiently than the level 2 team.
A cost estimator has to know the standard sources of historical cost data. This
knowledge comes both from experience and from those people capable of
answering key questions. A cost analyst or estimator should constantly search out
new sources of data. A new source might keep cost and technical data on some
item of importance to the current estimate. Internal contractor information may
also include analyses such as private corporate inflation studies, or "market
basket" analyses (a market basket examines the price changes in a specified group
of products). Such information provides data specific to a company's product
line, but which could also be relevant to a general segment of the economy. Such
specific analyses would normally be prepared as part of an exercise to benchmark
government-provided indices, such as the consumer price index, and to compare
corporate performance to broader standards.
Some sources of data may be external. This includes databases containing pooled
and normalized information from a variety of sources (e.g., other companies,
public record information). Although such information can be useful, it may have
weaknesses.
Sources of data are almost unlimited, and all relevant information should be
considered during data analysis. Figure 2.2 summarizes the key points about data
collection, evaluation, and normalization.
Data Collection, Evaluation, and Normalization

- Very critical step
- Can be time-consuming
- Need actual historical cost, schedule, and technical information
- Know standard sources
- Search out new sources
- Capture historical data
- Provide sufficient resources

Figure 2.2 Data Collection, Evaluation, and Normalization
2.3 Data Analysis and Normalization

- Various production quantities and rates during the period from which the data were collected.
Non-recurring and recurring costs are also segregated as part of the normalization
process.
Figure 2.3 shows the typical data normalization process flow. This does not
describe all situations, but does depict the primary activities followed in data
normalization.
[Figure 2.3 Data Normalization Process Flow: separate recurring and non-recurring costs; normalize for mission application (grouping products by complexity, calibrating like products); account for state-of-development variables (mission uniqueness, product uniqueness); and normalize for the operating environment/platform (manned space vehicle, unmanned space vehicle, aerospace, shipboard, commercial).]
Some data adjustments are routine in nature and relate to items such as inflation.
These are discussed below. Other adjustments are more complex in nature (e.g.,
relating to anomalies), and Section 2.4 considers those.
2.3.1 Inflation
Inflation is defined as a rise in the general level of prices, without a rise in output
or productivity. There are no fixed ways to establish universal inflation indices
(past, present, or future) that fit all possible situations. Inflation indices generally
include internal and external information and factors (such as Section 2.2
discusses). Examples of external information are the Consumer Price Index
(CPI), Producer Price Index (PPI), and other forecasts of inflation from various
econometric models.
While generalized inflation indices may be used, it may also be possible to tailor
and negotiate indices used on an individual basis to specific labor rate agreements
(e.g., forward pricing rates) and the actual materials used on a project. Inflation
indices should be based on the cost of materials and labor on a unit basis (e.g.,
pieces, pounds, hours), and should not include other considerations such as
changes in manpower loading, or the amount of materials used per unit of
production.
The key to inflation adjustments is consistency. If cost is adjusted to a fixed reference date for calibration purposes, the same type of inflation index must be used when escalating the cost forward or backward from the reference date, or to the date of the estimate.
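The sketch below illustrates a consistent adjustment, escalating a then-year cost to constant reference-year dollars. The index values and years are hypothetical placeholders for whatever negotiated or published index the estimator actually uses.

```python
# Hypothetical inflation index, base year 2006 = 1.000.
INDEX = {2003: 0.918, 2004: 0.941, 2005: 0.968, 2006: 1.000, 2007: 1.028}

def to_constant_dollars(cost, cost_year, reference_year=2006):
    """Escalate a then-year cost to constant reference-year dollars."""
    return cost * INDEX[reference_year] / INDEX[cost_year]

# A 2004 actual of $250K expressed in constant FY2006 dollars.
print(f"${to_constant_dollars(250_000, 2004):,.0f}")
```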
2.3.2
Y = AX^b

Where:
Y = Hours (or cost) per unit
A = First unit hours (or cost)
X = Unit number
b = Slope coefficient of the curve related to learning
There are two interpretations concerning how to apply this equation. In the unit
interpretation, Y is the hours or cost of unit X only. In the cumulative average
interpretation, Y is the average hours or cost of all units from 1 to X, inclusive.
In parametric models, the learning curve is often used to analyze the direct cost of
successively manufactured units. Direct cost equals the cost of both touch labor
and direct materials in fixed year dollars. This is sometimes called an
improvement curve. The slope is calculated using hours or constant year dollars.
Chapter 3, Cost Estimating Relationships, presents a more detailed explanation of
improvement curve theory.
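A small sketch of the two interpretations, with a hypothetical first-unit value and slope coefficient (Chapter 3 relates b to the learning curve slope; b = -0.152 corresponds to a 90 percent curve):

```python
def curve_value(A, x, b):
    """Evaluate Y = A * x**b for the improvement curve."""
    return A * x ** b

A, b = 1000, -0.152   # hypothetical first-unit hours and slope coefficient

# Unit theory: Y is the hours of unit x only; unit 2 takes about 900 hours.
print(f"unit theory, unit 2: {curve_value(A, 2, b):.0f} hours")

# Cumulative average theory: the same Y is read as the average hours of
# units 1 through x, so units 1-2 average about 900 hours each.
print(f"cum average theory, units 1-2: {curve_value(A, 2, b):.0f} hours/unit")
```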
2.3.4 Production Rate
Many innovations have been made in cost improvement curve theory. One is the
addition of a variable to the equation to capture the organization's production rate.
The production rate is defined as the number of items produced over a given time
period. This equation modifies the basic cost improvement formula to capture
changes in the production rate (Qr) and organizational cost improvement (Xb):
Y = AX^b Q^r

Where:
Y = Hours (or cost) per unit
A = First unit hours (or cost)
X = Unit number
b = Slope coefficient related to organizational cost improvement
Q = Production rate
r = Slope coefficient related to the production rate
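A brief sketch evaluating the rate-adjusted equation with hypothetical coefficients, showing how a higher production rate lowers the predicted hours when r is negative:

```python
def rate_adjusted_hours(A, x, q, b, r):
    """Y = A * x**b * q**r: improvement curve with a production rate term."""
    return A * x ** b * q ** r

A, b, r = 1000, -0.152, -0.10   # hypothetical coefficients

# Doubling the rate with r = -0.10 trims roughly 7% from predicted hours.
print(f"{rate_adjusted_hours(A, 50, 10, b, r):.0f} hours at rate 10 per month")
print(f"{rate_adjusted_hours(A, 50, 20, b, r):.0f} hours at rate 20 per month")
```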
2.4
2.4.1
2.4.2
Lot     Total Hours    Units    Average Hours/Unit
Lot 1   256,000        300      853
Lot 2   332,000        450      738
Lot 3   361,760        380      952
Lot 4   207,000        300      690
Clearly, Lot 3's history should be investigated since the average hours per unit
appear high. It is not acceptable, though, to merely "throw out" Lot 3 and work
with the other three lots. A careful analysis should be performed on the data to
determine why it exhibits this behavior.
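One quick screen is to compute the average hours per unit for each lot and flag any lot well above the overall average. The sketch below uses the lot data from the table; the 15 percent threshold is an arbitrary illustration, not a standard.

```python
lots = {  # lot: (total_hours, units), from the table above
    "Lot 1": (256_000, 300),
    "Lot 2": (332_000, 450),
    "Lot 3": (361_760, 380),
    "Lot 4": (207_000, 300),
}

averages = {lot: hours / units for lot, (hours, units) in lots.items()}
overall = sum(averages.values()) / len(averages)

for lot, avg in averages.items():
    flag = "  <-- investigate" if avg > 1.15 * overall else ""
    print(f"{lot}: {avg:,.0f} hours/unit{flag}")
```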
2.4.3
Parameter                Historical System                 Planned System
Date of Fabrication      Jul 03 - Jun 05                   Jul 06 - Dec 08
Production Quantity      500                               750
Size/Volume              1 cu ft, roughly cubical          8 x 10 x 16.2
                         (12.1 x 11.5 x 12.5)
Other Program Features   5% electrical                     5% electrical; no spare parts
These data need several adjustments. In this example, the inflation factors, the
difference in production quantity, the rate of production effect, and the added
elements in the original program (spare parts) all require adjustment. The analyst
must be careful when normalizing the data. General inflation factors are usually
not appropriate for most situations; ideally, the analyst will have a good index of
costs specific to the industry and will use labor cost adjustments specific to the
company.
The quantity and rate adjustments must consider the effects of quantity changes
on the company's vendors and the ratio of overhead and setup to the total
production cost. Likewise, for rate factors each labor element will have to be
examined to determine how strongly the rate affects labor costs. On the other
hand, the physical parameters do not require significant adjustments.
The first order normalization of the historic data would consist of:

- Possible production rate effects on touch labor (if any) and unit overhead costs.
Because both cases are single lot batches, and are within a factor of two in
quantity, only a small learning curve adjustment would be required. Given the
schedule shown, a significant production rate adjustment is needed.
2.5
- Were the source data used as is, or did they require adjustment?
2.6 Other Considerations
Several other issues should be considered when performing data collection and
analysis.
2.6.1 Resources
Data collection and analysis activities require that companies establish sufficient
resources to perform them, as well as formal processes describing data collection
and analysis. Chapter 7, Government Compliance, provides information on
estimating system requirements, and discusses data collection and analysis
procedures.
2.6.2

2.6.3

2.6.4

2.6.5 Comparability Problems
Comparability problems include, but are not limited to, changes in a company's
department numbers, accounting systems, and disclosure statements. They also
include changing personnel from indirect to direct charge for a given function.
When developing a database, the analyst must normalize it to ensure the data are
comparable. For example, when building a cost database, the analyst must
remove the effects of inflation so that all costs are displayed in constant dollars.
The analyst must also normalize data for consistency in content. Normalization
for content ensures that a particular cost category has the same definition in terms
of content for all observations in the database. Normalizing cost data is a
challenging problem, but it must be resolved if a good database is to be
constructed.
2.6.6 Database Requirements
Resolving database problems to meet user needs is not easy. For example, cost
analysis methodologies may vary considerably from one analysis or estimate to
another, and the data and information requirements for CERs may not be constant
over time. An analyst's data needs now do not determine all future needs, and they must be periodically reviewed.
The routine maintenance and associated expense of updating the database must
also be considered. An outdated database is of little use in forecasting future
acquisition costs. The more an organization develops and relies on parametric
estimating methods, the more it needs to invest in data collection and analysis
activities. The contractor must balance this investment against the efficiency
gains it plans to achieve through use of parametric estimating techniques. If the
contractor moves towards an ERP system, the incremental cost to add a
parametric estimating capability may not be significant.
Good data underpins the quality of any estimating system or method. As the
acquisition community moves toward estimating methods that increase their
reliance on contractors' historical costs, the quality of the data cannot be taken for
granted. Industry and their Government customers should find methods to
establish credible databases that are relevant to the history of the contractor.
From this, the contractor will be in a better position to reliably predict future
costs, and the Government to evaluate proposals based on parametric techniques.
CHAPTER 3
COST ESTIMATING RELATIONSHIPS
This chapter discusses the development and application of basic cost estimating
relationships (CERs). This topic could be treated in an entire graduate-level
textbook. Doing so is beyond the scope of this handbook. Although the
discussion in this chapter is more in-depth than what was discussed in Chapter 1,
the higher-order mathematics of CER development are relegated to Appendix B.
The topic of CERs can range from the very simplistic to the very complex; this chapter attempts to strike a balance. Readers need to decide for themselves the level of detail they will need to perform their parametric estimating assignments.
Many organizations implement CERs to reduce the costs and cycle times associated with proposal preparation, evaluation, and negotiation. The proper development and application of CERs depends on understanding the associated mathematical and statistical techniques. This chapter explains the basic and more commonly used techniques, and provides general guidance for developing and employing valid CERs.
The chapter also provides rule-of-thumb guidelines for determining the merit of
statistical regression models, instructions for comparing models, and examples of
simple and complex CERs.
Corporations, other types of economic enterprises, and Government cost
estimating organizations make extensive use of CERs and parametric estimating
models. This chapter focuses primarily on their use by Industry, as opposed to
Government organizations. However, the bulk of the principles, guidelines,
methods and procedures presented apply to Government cost estimating as well as
to cost estimating by Industry.
3.1 CER Development
A CER is a mathematical expression that describes how the values of, or changes in, a dependent variable are partially determined, or "driven," by the values of, or changes in, one or more independent variables. The CER defines the relationship between the dependent and independent variables, and describes how it behaves. Since a parametric estimating method relies on the value of one or more input variables, or parameters, to estimate the value of another variable, a CER is actually a type of parametric estimating technique.
Figure 3.1 demonstrates this equivalence and points out that the estimating
relationship may range from simple to complex (e.g., from a ratio to a set of interrelated, multi-variable mathematical equations commonly referred to as a
parametric model).
[Figure 3.1 Parametric estimating methods and CERs range from simple relationships through complex relationships to complex models.]
3.1.1 Cost CERs
A cost CER is one in which cost is the dependent variable. In a cost-to-cost CER the independent variables are also costs; examples are CERs which use manufacturing cost to estimate quality assurance cost, or to estimate the cost of expendable material such as rivets, primer, or sealant. The cost of one element is used to estimate, or predict, that of another.
In a non cost-to-cost relationship, the CER uses a characteristic of an item to predict its cost. Examples are CERs that estimate an item's manufacturing costs based on its weight (independent variable), or the design engineering costs from the number of engineering drawings (independent variable) involved.

It is important to note that the term "cost driver" is meant in a fairly broad sense, to include cases like those above where the independent variable does not actually cause the dependent variable to be what it is. But the two variables may be sufficiently correlated with (or track) each other such that, if one is known or estimated, then the other can be known or estimated fairly well. Thus, in the cost-to-cost CER example above, the size, quantity, and complexity of the item being produced may be the real cost drivers of both the manufacturing costs and the quality assurance costs. The design engineering CER example illustrates true cause-and-effect behavior, where the design-engineering costs are caused to be what they are by the number of drawings required.
The manufacturing cost CER example is a little murkier. The item's weight and cost may correlate well, but the weight is not exactly the cause of the cost being what it is. It is usually the basic requirements that the item must satisfy which drive both cost and weight (or size). In fact, if the requirements dictate that the item's weight be limited to the extent that unusually expensive production methods must be used, then weight per se and cost may have an inverse (i.e., negatively correlated) relationship.
Regardless of the underlying cause and effect relationships, in the context of this
chapter, CER cost drivers are assumed to be either true drivers of cost or
surrogates for the true cost driving requirements and constraints on the item being
estimated. In many cases weight may be viewed as a good representative for
most of the requirements that drive cost. In other cases it may represent cost-driving requirements poorly, particularly in cases where smallness or lightness are at a premium. The same might be true for other variables that represent size
or magnitude of the cost element being estimated, such as software source lines of
code or processing throughput.
CERs are often used to predict labor hours, as opposed to costs. In fact, some CERs deal with normalized dependent variables, as opposed to cost or hours. For example, a CER might predict a factor, or percentage, that, when multiplied by a base cost, yields the cost for another work element. This approach is typically used to estimate system engineering, program management and integration, and test costs. Another example of a normalized dependent variable is the production cost/weight ratio for a type, or class, of hardware components. The ensuing discussion in this chapter applies to all of these types of CERs, whether they predict costs, labor hours, or cost estimating factors.
A cost CER is a valuable estimating tool and can be used at any time in the
estimating process. For example, CERs may be used in the program concept or
validation phase to estimate costs when there is insufficient system definition for
more detailed approaches, such as the classical grass roots or bottoms-up
methods. CERs can also be used in a later phase of a program as primary
estimates or as crosschecks of non-parametric estimates. CERs may also form the
primary basis of estimate (BOE) for proposals submitted to the Government or
higher-tier contractors. They are also used extensively by Government agencies
to develop independent cost estimates for major elements of future programs.
Before developing complex parametric models, analysts typically create simple
CERs which demonstrate the utility and validity of the basic parametric modeling
approach to company and Government representatives.
3.1.2
approaches and relationships at the same time, since it is more efficient to collect
data for and evaluate them as a group.
[Figure 3.2 Typical CER Development Process: identify the opportunity to gather data and develop CERs; collect data (unit cost/quantity, constant year $, escalation, complexity); select candidate variables (e.g., weight, thrust, range, impulse, number of drawings, materials, MIPs, SLOC); test relationships (e.g., cost versus number of drawings); select CERs; validate and periodically revalidate them; gain customer agreement on their use in proposals; and incorporate approved CERs into the estimating method database, which feeds the cost models.]
3.1.3 Development Database
The value of a CER depends on the soundness of the database from which it is developed. Determination of the "goodness" of a particular CER and its applicability to the system being estimated requires a thorough analysis and knowledge of both the system and the historical data collected from similar systems. Regardless of the CER's intended application or degree of complexity, its development requires a rigorous effort to assemble and refine the data that constitute its empirical basis. Assembling a credible database is important and, often, the most time-consuming activity in Figure 3.2. The number of valid CERs is restricted more by the lack of appropriate data than by any other factor.
When developing a CER, the analyst often hypothesizes potentially useful logical
estimating relationships between dependent and independent variables, and then
organizes the database to test them. Another approach is where the data are
collected and even organized before any relationships are hypothesized. In fact, it
may be patterns in the data that suggest the most useful types of estimating
relationships.
Sometimes, when assembling a database, the analyst discovers that the raw data
are in the wrong format, display irregularities and inconsistencies, or
will not provide a good test of the hypothesis. Adjustments to the raw data,
therefore, almost always need to be made to ensure a reasonably consistent,
comparable, and useful set of data. Making such adjustments is often referred to
as normalizing the raw data.
No degree of sophistication in the use of advanced mathematical statistics can
compensate for a seriously deficient database. Chapter 2, Data Collection and
Analysis, provides further information on collecting, organizing and normalizing
CER data.
3.1.4
3.2
CER Title: Panstock Material
Pool Description: Allocated panstock dollars charged.
Base Description: Manufacturing assembly touch direct labor hours charged.
Application: Panstock is piece-part materials consumed in the manufacturing assembly organization. The panstock CER is applied to 100% of estimated direct labor hours for manufacturing assembly effort.

CER Title: F/A-18 Software Design Support
Pool Description: Allocated effort required performing software tool development and support for computer and software engineering.

CER Title: Design Hours
Pool Description: Design engineering, including analysis and drafting, direct labor hours charged.
Base Description: Number of design drawings associated with the pool direct labor hours.

CER Title: Systems Engineering
Pool Description: Systems engineering (including requirements analysis and specification development) direct labor hours charged.
Base Description: Design engineering direct labor hours charged.

CER Title: Tooling Material
Base Description: Tooling nonrecurring direct labor hours charged.

CER Title: Test/Equipment Material (dollars for avionics)
Pool Description: Material dollars (<$10k)
Base Description: Total avionics engineering procurement support group direct labor hours charged.
3.2.1
understanding by the cost analyst of the CER's logic and the product being estimated. A CER can take numerous forms, ranging from an informal rule-of-thumb or simple analogy to a mathematical function derived from statistical analysis.
3.2.1.1 Data Collection/Analysis
When developing a CER, the analyst first concentrates on assembling and refining
the data that constitute its empirical basis. A considerable amount of time is
devoted to collecting and normalizing the data to ensure its consistency and
comparability. More effort is usually devoted to assembling a quality database
than any other task in the development process. Chapter 2 also discusses data
collection and analysis. Data normalization addresses:
- Inflation. This includes the conversion of the cost for each data point to a common year of economics, or "year dollars," using established yearly company inflation rates.

3.2.1.2 Validation Requirements
A CER, as any other parametric estimating tool, must produce, to a given level of
confidence, results within an acceptable range of accuracy. It must also
demonstrate estimating reliability over a range of data points or test cases. The
validation process ensures that a CER meets these requirements. Since a CER
developer and customer must, at some point, agree on the validation criteria for a
new CER, the Parametric Estimating Reinvention Laboratory determined that the
use of an integrated product team (IPT) is a best practice for reviewing and
implementing it. The contractor, buying activity, DCMA, and DCAA should be
part of the IPT.
Figure 3.4 illustrates the validation process flow, which incorporates the CER
testing methodology discussed earlier in the chapter. The process, described in
Figure 3.5, is a formal procedure which a company should use when developing
and implementing a CER. It describes the activities and criteria for validating
simple CERs, complex CERs, and parametric models. Figure 3.6 contains the
guidelines for statistical validation (and implements the CER quality review
matrix in Figure 3.4). Figure 3.7 is an example of the membership of a CER IPT, designated the Joint Estimating Relationship Oversight Panel (JEROP).
[Figure 3.4 CER Validation Process Flow: an estimating relationship (ER) is nominated for examination; pool/base definitions are rationalized and the structure checked for rationality; data are acquired, plotted, and their parameters determined; data concerns are fixed where possible; a report card analysis is performed; and the team decides by consensus whether to use the statistical model (rated GOOD or MARGINAL), an FPA or MOA, or an agreed alternate method. A flagged ER goes to the JEROP, which determines the next course of action. (ER = Estimating Relationship)]
Discussion of Activities (Figure 3.5):

- Assess materiality.
- Examine rationale and data, or use additional historical data.
- Investigate alternative forms.
- The team is encouraged to review data beyond that used to develop the current CER, i.e., additional completed jobs for Steps 1 & 2 CERs, or longer time periods for Steps 3 & 4 CERs.
- Multivariate (more than one independent variable) solutions may be considered.
- Examine the ER across programs for rationalization of differences.
- The team may explore linear, logarithmic, exponential, polynomial, power, moving average, or any other model structures implied by the data patterns and/or the rationale of the effort.
- Check for evidence of outliers, influential points, time trends, bi-modal data, etc.
- The team analyzes data sources.
- Develop results based on weighted factor methodology and linear regression with intercept, unless otherwise agreed.
- Construct a report card with F-stat, R-squared, CV, and narrative for the statistical method; with MAD and narrative for the factor method.
- Plot results. Analyze residuals, checking for patterns in the residuals to ensure that the regression assumptions were not violated. Examine raw versus fitted data for outliers, using a rule of thumb of 2 to 3 standard deviations as a means of flagging data points for further investigation.
- The team analyzes the report card for the ER based upon the guidance shown in Figure 3.6.
- The team decides by consensus whether one or more of the methods presented are acceptable. Unless a compelling argument is presented by one of the organizations, the statistical model is to be preferred. Lack of consensus among the three organizations, or consensus that no available model is satisfactory, results in process flow to Step 13.
- Qualitative decision by the team determining whether the statistical model is Good or Marginal, using the report card criteria as a guide.
- The team determines the materiality of the ER based on dollar impact, breadth of application, etc.
- Alternative methods include, but are not limited to, other statistical models, simple or weighted averaging and other factors, discrete estimating, accounting changes, and investigation of other options for the base.
- Flagging: the ER is flagged and the JEROP determines the next course of action.
Figure 3.6 guidelines for statistical validation (CER report card):

Statistically Derived ER:          Good         Marginal
  F-test (significance)            ≤ 0.10       ≤ 0.15
  t-test (significance)            ≤ 0.10       ≤ 0.15
  CV                               ≤ 0.25       0.25 to 0.30
  R-squared                        ≥ 0.70       0.35 to 0.70

Weighted Factor:
  MAD as % of ER mean              ≤ 0.25       0.25 to 0.30

Narrative: This section of the report card should be used to record other pertinent information, particularly non-quantitative information, about the effort to be modeled or about the proposed estimating tool. For example, data constraints, materiality, exogenous influences, etc., may impact the acceptability of the proposed tool.

Terminology:
  F-test: Measures the significance of the entire regression equation.
  t-test: Measures the significance of the individual components of the model; where there is only one independent variable (one base variable), the significances of the t-test and of the F-test are identical.
  R-squared: Measures the percentage of variation in the pool explained by the CER or model; varies between 0% and 100%.
  CV: Coefficient of variation; expresses the estimating error as a percentage of the mean value of the pool.
  MAD: Mean absolute deviation; a measure of dispersion comparing how well the individual point relationships match the mean relationship of the composite data.
JEROP Membership

Developer (Company Personnel):
- Group Manager, Estimating/Systems Engineering
- Sr. Specialist, Accounting

DCAA:
- Supervisory Auditor

DCMA:
- Industrial Engineer
- Contract Price Analysts
The customer is not a full-time member of the IPT, but regularly provides
feedback.
Figure 3.7 Joint Estimating Relationship Oversight Panel Membership
It is important to note that the IPT uses the Figure 3.6 report card as the starting
point for evaluating a candidate CER. The IPT does not use the statistical tests as
its only criteria for accepting or rejecting the CER. Equally important to its assessment is non-quantitative information, such as the importance of the effort or
product to be estimated and the quality of possible alternative estimating methods.
While statistical analysis is useful, it is not the sole basis for validating a CER,
with importance also given to whether the data relationship is logical, the data
used in deriving it are credible, and adequate policies and procedures for its use
are in place.
3.2.1.3 Documentation
A company should document a CER to provide a clear understanding of how to
apply and maintain it. The documentation, based on a standard company format,
is built during the development process and includes, at a minimum, all the
information necessary for a third party to recreate, validate, and implement the
CER. The documentation should include:
- Complete actual cost information for all accounting data used. This provides an audit trail that is necessary to identify the data used.

3.2.2
- Joint training
- Strong moderating
- Management support
3.3
3.3.1 Graphical Method
To apply the graphical method, pairs of independent variables X, and their
matching dependent variables Y, in the CER database are first plotted in the form
of an X-Y scatter diagram. Next, the analyst draws a curve (or straight line)
representing the assumed CER X-Y relationship such that it passes through the
approximate middle of the plotted data points. No attempt should be made to
make the smooth curve actually pass directly through any of the data points that
have been plotted. Instead, the curve should pass between the data points leaving
approximately an equal number on either side of the line. The objective is to
best fit the curve to the data points plotted; every data point plotted should be
considered equally important. The curve which is drawn then represents the CER.
Before developing a forecasting rule or mathematical equation, the analyst should
plot the data in a scatter diagram. Although considered outdated for purposes of
best-fitting, scatter plotting the data is still important, since it quickly gives a
general idea of the relationship between the CER equation and the pattern of the
data points (if any). Also, the analyst can easily focus on those data points that
may require further investigation because they seem inconsistent with the bulk of
the data point set. The task is easily performed with any spreadsheet or statistical
software package.
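For instance, a few lines of Python (assuming the matplotlib package is available) produce the scatter diagram; the data pairs are hypothetical:

```python
import matplotlib.pyplot as plt

# Hypothetical CER data: (independent X, dependent Y) pairs.
x = [10, 15, 22, 30, 41, 55]
y = [505, 525, 548, 579, 617, 660]

plt.scatter(x, y)                 # inspect the pattern before any fitting
plt.xlabel("Cost driver (e.g., number of drawings)")
plt.ylabel("Hours")
plt.title("CER data scatter diagram")
plt.show()
```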
3.3.2

The population regression line takes the form Y = A + BX, where B is the slope of the line and A is the point at which the line intersects the vertical (Y) axis (X = 0).
A and B are called the parameters of the population regression line. The line, when A and B are determined, represents the desired CER, since the purpose of regression analysis, and the regression line, is to provide estimates of values of the dependent variable from values of the independent variable.
Since it is usually not practical, or even possible, to obtain data for an entire
population, and calculate A and B directly, the analyst must instead work with a
sample collected from the population. When based on this sample, the regression
line becomes Y = a + bX, where a and b are estimates of the true population
parameters A and B. Since a and b are usually based on a data sample of a
limited size, there always involves a certain amount of error in estimating the true
values of A and B. A different sample would give different estimates of A and B.
OLS is a method for calculating the straight line (regression line) through the data
set which minimizes the error involved in estimating A and B by the a and b
associated with that line. OLS also provides a measure of the remaining error, or
dispersion of the dependent variable values above and below the regression line,
and how it affects estimates made with the regression line. Thus, the regression
line which minimizes the error in estimating A and B, defined by the parameters a
and b, becomes the CER.
In particular, OLS finds the best fit of the regression line to the sample data by
minimizing the sum of the squared deviations of (differences between) the
observed and calculated values of Y.
The observed value, Yi, represents the value that is actually recorded in the
database for a given X value (Xi), while the calculated value, Yc, is the value the
sample regression equation gives for the same value of X.
For example, suppose we estimated engineering hours based on the number of
required drawings using the linear equation (obtained by regression analysis):
EngrHours = 467 + 3.65 (NumEngrDrawings). In this case EngrHours is the
dependent variable, and NumEngrDrawings is the independent variable.
Suppose the company's database contained 525 hours for a program containing
15 engineering drawings. The 525 hours represents the observed value for Y
when X is equal to 15. The equation, however, would have predicted Yc = 467 +
3.65(x) = 467 + 3.65(15) = 521.75 hours. The difference between the observed
and calculated values, 3.25 hours, represents the error e of the OLS regression
line for this data set and the data point X = 15, Y = 525.
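The arithmetic of this example can be checked directly:

```python
# The CER from the example above, with its observed data point.
a, b = 467, 3.65
x_obs, y_obs = 15, 525            # 15 drawings, 525 recorded hours

y_calc = a + b * x_obs            # calculated value Yc
error = y_obs - y_calc            # residual e
print(f"Yc = {y_calc:.2f} hours, e = {error:.2f} hours")   # 521.75 and 3.25
```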
To further define how OLS works, assume the regression line is being fit to the four points in Figure 3.9, and that the error terms, e, for these points are (Y1 - YC1), (Y2 - YC2), (Y3 - YC3), and (Y4 - YC4). The line that best fits the data is the one which minimizes the sum of the squared errors, SSE:

SSE = Σ ei²   (summed over i = 1 to 4)
The equation which minimizes the sum SSE is then the candidate regression line
CER. Calculus is used to solve this classical minimization problem, yielding
simple linear equations for determining the values of a and b that minimize SSE.
A good general purpose statistical software application will automatically
calculate the a and b regression coefficients (CER model parameters), and provide
goodness of fit statistics identified and described in Section 3.3.3 below. It will
also make a scatter plot, graphing the regression equation against the CER data
points.
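A minimal sketch of the calculation such a package performs, using the textbook formulas for a and b and hypothetical (drawings, hours) data:

```python
# Simple OLS fit for Y = a + bX.
data = [(10, 505), (15, 525), (22, 548), (30, 579), (41, 617)]
n = len(data)
x_bar = sum(x for x, _ in data) / n
y_bar = sum(y for _, y in data) / n

b = sum((x - x_bar) * (y - y_bar) for x, y in data) / \
    sum((x - x_bar) ** 2 for x, _ in data)
a = y_bar - b * x_bar

# Sum of squared errors for the fitted line (the quantity OLS minimizes).
sse = sum((y - (a + b * x)) ** 2 for x, y in data)
print(f"a = {a:.1f}, b = {b:.2f}, SSE = {sse:.1f}")
```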
Often, independent variables in regression analysis are also referred to as
explanatory variables. They explain some of the variation in the Y variable via
the regression equation, thereby reducing the uncertainty in estimating Y (as
compared to using a simple average of all the Y data points to estimate Y).
Similarly, a good regression-derived CER is said to have a high degree of
explanatory power if it reduces the sum of the squared errors to a large degree
and explains how the dependent variable varies as the independent variable is
changed.
3.3.3
3.3.3.1 Assumptions
The mathematics of OLS regression is based on several assumptions
about the underlying probability distributions of the dependent variable and
probabilistic independence of the observations in the CER data set. These are not
stated here but they can be found in many of the references listed in Appendix D.
Theoretically, if any of the assumptions are invalid, then the regression and CER
are flawed. Applied mathematicians, however, tend to consider the
assumptions as guidelines rather than absolute rules. In most parametric cost
analysis applications, the size of the CER sample is often too small to even make
a conclusion about most of these assumptions. When an OLS assumption is
apparently violated, the question is: How significant is the violation? If minor,
the CER is still generally accepted as satisfactory for estimating. The size of the
sum of the squared errors, SSE, and other related statistical measures described
below, should provide sufficient indication of the validity of the CER even when
the sample data points do not completely adhere, or appear to adhere, to the
assumptions above.
3.3.3.2
3.3.3.3
3.3.3.4
3.3.4 Multiple Regression
In the simple regression analysis described above, a single independent variable
(X) is used to estimate the dependent variable (Y), and the relationship is assumed
to be linear. Multiple, or multivariate, regression considers the effect of using
more than one independent variable, under the assumption that this better explains
changes in the dependent variable Y. For example, the number of miles driven
may largely explain automobile gasoline consumption. However, we may
postulate a better explanation if we also consider such factors as the weight of the
automobile.
In this case, the value of Y would be estimated by a regression equation with two
explanatory variables:
Yc = a + b1X1 + b2X2
Where:
Yc is the calculated or estimated value for the dependent variable
a is the Y intercept (the value of Y when all X-variables equal 0)
X1 is the first independent (explanatory) variable
b1 is the slope of the line related to the change in X1
X2 is the second independent variable
b2 is the slope of the line related to the change in X2.
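A short sketch of fitting such a two-variable equation by least squares, in the spirit of the gasoline example above; the mileage, weight, and fuel figures are hypothetical, and numpy's lstsq solves for the coefficients:

```python
import numpy as np

# Hypothetical data: miles driven, vehicle weight (1,000 lb), fuel used (gal).
miles  = np.array([100.0, 250.0, 400.0, 550.0, 700.0])
weight = np.array([2.8, 3.4, 3.1, 4.0, 3.6])
fuel   = np.array([4.1, 10.8, 15.9, 24.5, 27.2])

# Design matrix [1, X1, X2] for Yc = a + b1*X1 + b2*X2.
X = np.column_stack([np.ones_like(miles), miles, weight])
(a, b1, b2), *_ = np.linalg.lstsq(X, fuel, rcond=None)
print(f"Yc = {a:.2f} + {b1:.4f}*miles + {b2:.2f}*weight")
```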
Finding the right combinations of explanatory variables is not easy, although the
general process flow in Figure 3.2 helps. The first step involves the postulation of
which variables most significantly and independently contribute toward
explaining the observed cost behavior. Applied statisticians then use a technique
called step-wise regression to focus on the most important cost driving variables.
Step-wise regression is the process of "introducing the X variables one at a time
(stepwise forward regression) or by including all the possible X variables in one
multiple regression and rejecting them one at a time (stepwise backward
regression). The decision to add or drop a variable is usually made on the basis of
the contribution of that variable to the SSE (error sum of squares), as judged by
the F-test."1 Stepwise regression allows the analyst to add variables, or remove
them, to determine the best equation for predicting cost.
Gujarati, Domodar, Basic Econometric, New York, McGraw-Hill Book Company, 1978, p. 191.
The basic form of the learning curve equation is Y = AXb. When a natural
logarithmic transformation is applied to both sides of this equation, it is
transformed to the linear form:
Ln(Y) = Ln(A) + b Ln(X)
Where (for both equations):
Y = Hours/unit (or constant dollars per unit)
A = First unit hours (or constant dollars)
X = Unit number
b = Slope of curve related to learning
Since Ln(Y) = Ln(A) + b Ln(X) has the same form as Y = a + b(X), it can be
graphed as a straight line on log-log paper, and an OLS regression analysis can be
performed for it. In particular, the OLS regression equations can be used to
derive the coefficients a and b from production cost data on individual units or
lots. Typically, several unit or lot costs are needed, say five or more.
In both cost improvement curve theories, the cost is assumed to decrease by a
fixed proportion each time quantity doubles. The fixed proportion is called the
learning curve slope or simply learning curve, usually expressed as a
percentage. For example, in the case of the unit theory, a 90 percent learning
curve means that the second unit cost 90 percent of the first unit, and the fourth
unit cost is 90 percent of the second unit cost, or 81 percent of the first unit cost.
For the cum average theory, a 90 percent learning curve means that average unit
cost of the first two units is 90 percent of the first unit cost, and the average unit
cost of the first four units is 81 percent of the first unit cost.
Solving the equation Ln(Y) = Ln(A) + b Ln(X) for b, and assuming a first unit value of A = 1 and X = unit #2, the learning curve slope is related to the learning curve coefficient by:

b = Ln(Slope) / Ln(2)    (Note: Ln(1) = 0)
Note that when, for example, the slope is 90 percent, 0.90 is used in the equation
for Slope. The divisor Ln(2) reflects the assumption that a fixed amount of cost
improvement occurs each time the production quantity doubles.
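A small sketch of the slope-to-coefficient conversion and the doubling behavior it implies, using a 90 percent curve:

```python
import math

def slope_to_b(slope):
    """Convert a learning curve slope (e.g., 0.90) to the exponent b."""
    return math.log(slope) / math.log(2)

b = slope_to_b(0.90)
print(f"b = {b:.4f}")                     # about -0.152

A = 100.0                                 # hypothetical first-unit cost
for x in (1, 2, 4, 8):
    # Each doubling of quantity costs 90% of the previous point:
    print(f"unit {x}: {A * x ** b:.1f}")  # 100.0, 90.0, 81.0, 72.9
```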
3.4
- How useful will it be for estimating the cost of specific items or services?
- What is the confidence level of an estimate made with the CER (i.e., how likely is the estimated cost to fall within a specified range of cost outcomes)?
When the CER is based on a regression (or other statistical) analysis of the data
set, the questions are best answered by reviewing the statistics of the regression
line, which are a normal part of the OLS results provided by a statistics software
package.
Figure 3.10 contains a list of statistics and other aspects of a candidate CER that
should be evaluated whenever possible. Appendix B further defines and explains
the statistics.
No single statistic either disqualifies or validates a CER. Many analysts tend to rely on two primary statistics when evaluating a CER: the standard error (SE) and the adjusted coefficient of determination (R2). Both simply measure the degree of relatedness between the CER's variables, but neither by itself certifies the CER as "good." However, when they are poor they do indicate the CER will not be an accurate predictor. All the CER statistics which are available should be studied.
Evaluation of a candidate CER begins with the data and logic of the relationship
between its variables. The analyst should again ensure the accuracy of the
database and verify the logic behind the CER. The data sources and accuracy
should be characterized in words, as well as the logic behind the CER functional
form and independent variable selections.
The analyst can then check the regression statistics, beginning with an evaluation of its variables; the t-stat for each explanatory variable indicates how important it is in the CER. One form of this statistic indicates the likelihood that the estimated variable coefficient (slope) could have resulted even though there is no underlying relationship between the variable and the dependent variable. Thus, it indicates the likelihood of a false reading about the possibility of a relationship between X and Y.
The significance of the entire regression equation is assessed using the F-stat.
Once again, it indicates the likelihood of a false reading about whether the entire
regression equation exists. The F-Stat is influenced by the amount of curvature or
flatness in the relationship between the dependent variable and the independent
variables. Relationships that become relatively flat, compared to the amount of
dispersion (as measured by the standard error), will have lower F-stat values than
those which are steeper. Thus, the F-Stat may not be a good statistic to assess the
worth of CERs, at least not on an absolute basis, when their relationships are
inherently curved.
The size of the regression estimating errors is characterized by the standard error
of the estimate (SE) and the coefficient of variation (CV). The SE measures the
average error on an absolute basis (e.g., in units of dollars or hours). The CV is a
normalized variable, typically expressing the root mean square estimating error as
a percentage of the mean Y value across the CER data set points. However,
neither of these statistics actually quantifies the amount of estimating error for
specific values of each independent variable (this can be done with the standard
error of the mean described below).
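A brief sketch of computing SE and CV from a fitted CER's residuals; the observed and calculated values are hypothetical, and the degrees-of-freedom correction assumes one explanatory variable:

```python
import math

# Observed Y values and the values calculated by a hypothetical CER.
observed   = [505, 525, 548, 579, 617]
calculated = [501.9, 521.8, 549.6, 581.4, 619.3]
n, k = len(observed), 1            # n data points, k explanatory variables

sse = sum((yo - yc) ** 2 for yo, yc in zip(observed, calculated))
se  = math.sqrt(sse / (n - k - 1)) # standard error of the estimate
cv  = se / (sum(observed) / n)     # SE normalized to the mean Y value

print(f"SE = {se:.2f} (units of Y), CV = {cv:.1%}")
```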
[Figure 3.10 Statistics and other aspects of a candidate CER to evaluate include the F-Stat, Coefficient of Variation (CV), Coefficient of Determination (R2), Adjusted R2, Degrees of Freedom (DOF), Outliers, and the Data Range.]
When applying these criteria, the analyst must always ask: If I reject this CER as the basis for
estimating, is the alternative method any better?
3.5
3.5.1 Strengths

- They are quick and easy to use. Given a CER equation and the required input parameters, developing an estimate is a quick and easy process.

3.5.2 Weaknesses

- Problems with the database may mean that a particular CER should not be used. While the analyst developing a CER should also validate both the CER and the database, it is the responsibility of the parametrician to determine whether it is appropriate to use a CER in given circumstances.
3.6
3.6.1 Construction
Many construction contractors use a rule of thumb that relates floor space to
building cost. Once a general structural design is determined, the contractor or
buyer can use this relationship to estimate total building price or cost, excluding
the cost of land. For example, when building a brick two-story house with a
basement, a builder may use $60/square foot to estimate the price of the house.
Assume the plans call for a 2,200 square foot home. The estimated build price,
excluding the price of the lot, would be $60/sq. ft. x 2,200 sq. ft. = $132,000.
3.6.2 Electronics
Manufacturers of certain electronic items have discovered that the cost of a completed item varies directly with the number of total electronic parts in it. Thus, the sum of the number of integrated circuits in a specific circuit design may serve as an independent variable (cost driver) in a CER to predict the cost of the completed item. Assume a CER analysis indicates that $57.00 is required for set-up, plus an additional cost of $1.10 per integrated circuit. If evaluation of the engineering drawing revealed that an item was designed to contain 30 integrated circuits, substituting the 30 parts into the CER gives:

Estimated item cost = $57.00 + ($1.10 x 30) = $57.00 + $33.00 = $90.00
3.6.3 Weapons Procurement
CERs are often used to estimate the cost of the various parts of an aircraft, such as
that of a wing of a supersonic fighter. Based on historical data, an analyst may
develop a CER relating wing surface area to cost, finding that there is an
estimated $40,000 of wing cost (for instance, nonrecurring engineering) not
related to surface area, and another $1,000/square foot that is related to the
surface area of one wing. For a wing with 200 square feet of surface area:
Estimated price = $40,000 + ($1,000/sq. ft. x 200 sq. ft.) = $240,000
3.7
3.7.1
3.7.2
3.7.4

- There is no element of the cost or price being estimated that is not related to the independent variable (i.e., there is no "fixed cost" that is not associated with the independent variable).
- The relationship between the independent variable and the cost being estimated is linear.
If you believe that there are substantial costs that cannot be explained by the
relationship or that the relationship is not linear, you should either try to develop
an equation that better tracks the true relationship or limit your use of the
estimating factor to the range of the data used in developing the factor.
3.7.5
CERs, like most other tools of cost analysis, MUST be used with judgment. Judgment is required to evaluate the historical relationships in the light of new technology, new design, and other similar factors. Therefore, knowledge of the factors involved in CER development is essential to proper application of the CER. Blind use of any tool can lead to disaster.
3.7.6
- Does the available information verify the existence and accuracy of the proposed relationship?

Technical personnel can be helpful in analyzing the technical validity of the relationship. Audit personnel can be helpful in verifying the accuracy of any contractor data and analysis.
3.8 Evaluating CERs

3.8.1 Government Evaluation Criteria
Chapter 7, Government Compliance, discusses estimating system requirements and evaluation criteria in detail. Government evaluators evaluate and monitor CERs to ensure they are reliable and credible cost predictors. This section provides a general overview of CER evaluation procedures.
3.8.2
[Figure: Tooling cost driver survey (individual Y/N responses omitted). Candidate metrics proposed as predictors of tool material cost:

- changes: count of part design changes (start of tool fabrication to final tool buyoff)
- design hours
- number of tools
- production run: number of parts the tool is designed to build (i.e., 500 parts to be built using the tool)
- rework: total number of (fabrication) rework orders for a particular tool during initial build
- schedule: measure of compression of flowtime to produce the tool
- subsystems: aircraft subsystem category (i.e., the tool builds part "A," which is in subsystem "X")
- complexity: measure of tool complexity
- speeds: measure of the speed of moving parts on a tool
- type: type of tool
- weight: weight of tool
- material type: type of material the tool is made of (steel, aluminum, graphite, fiberglass, etc.)

Survey questions included Q1, "Do you think that this would be a good predictor of tool material costs?" (Y/N), and Q3, "Would you expect any correlation between this item and tool material cost?" (Y/N). Other questions were answered on the scale E (extreme, years), H (high, months), M (medium, weeks), L (low, days), A (almost none), and candidate drivers were scored as good, medium, or bad predictors.]
Using this survey, the IPT identified those cost drivers which had the most effect
on a given cost element, and were therefore candidates for further analysis. The
IPT used these key questions, which are important to any CER evaluator:
- Does the CER seem logical (e.g., will the cost driver have a significant impact on the cost of the item being estimated)?
- How accessible are the data needed to develop the CER (both cost and non-cost)?
- Will there be a sufficient number of data points to implement and test the CER(s)?

3.8.3 Credible Data
All data collected to support parametric estimating tools must be accurate and
their sources documented. An evaluator should verify the integrity of the data,
and the adjustments made during their normalization.
3.8.4

3.8.5 CER Validation
CER validation is the process, or act, of demonstrating the technique's ability
function as a credible estimating tool. Validation includes ensuring contractors
have effective policies and procedures, data used are credible, CERs are logical,
and CER relationships are strong. Evaluators should test CERs to determine if
they can predict costs within a reasonable degree of accuracy. The evaluators
must use good judgment when establishing an acceptable range for accuracy.
Generally, CERs should estimate costs as accurately as other estimating methods
(e.g., bottoms-up estimates). This means when evaluating the accuracy of CERs
to predict costs, assessing the accuracy of the prior estimating method is a key
activity.
CHAPTER 4
The information in this chapter also applies to special purpose models developed
by Government agencies, if those models otherwise have the same features as
company developed ones.
4.1 Background
Companies develop their own parametric models for a variety of reasons.
4.1.1 General Definitions
Parametric models can generally be classified as commercial or company
developed, and this chapter will refer to the latter as proprietary models.
Complex parametric models may consist of many interrelated CERs, as well as
other equations, ground rules, assumptions, and variables that describe and define
the situation being studied.
Models generate estimates based upon certain input parameters, or cost drivers.
Parameters drive the cost of the end product or service being estimated. Some
examples are weight, size, efficiency, quantity, and time. Some models can
develop estimates with only a limited set of descriptive program inputs; others,
however, require the user to provide many detailed input values before the model
can compute a total cost estimate. A model can utilize a mix of estimating
methods, and it may allow as inputs estimates from other pricing models (or
information systems) or quotes from external sources, such as subcontracts.
Commercial parametric estimating models, available in the public domain, use
generic algorithms and estimating methods which are based on a database that
contains a broad spectrum of industry-wide data. Because this data encompasses
many different products, a company working with a commercial parametric
model must calibrate it before using it as a basis of estimate (BOE) for proposals
submitted to the Government or higher-tier contractors. Calibration tailors the commercial model so it reflects the products, estimating environment, and business culture of the specific company using it.
4.1.2.1
4.1.2.2
4.1.2.3
Companies use this model to make sanity check estimates for major engineering
proposals or high-level engineering trade studies.
WBS Element               Non-Recurring CER (FY2006K$)    Recurring CER T1 (FY2006K$)
Receiver                  2449.5 + 431.9 * Wt             1875.9 + Wt ^ 2.42
Transmitter (SSA)         2385.6 - 75.9 * Wt
Transmitter (TWTA)        1036.2 + 81.9 * Wt
Transponder                                               -453 + Wt ^ 2.25
Antenna (Reflector)
Antenna (Horn)            -199.8 + 94.2 * Wt
Space-borne Electronics   -1350.9 + 198 * Wt
Waveguides                10.9 + 14.6 * Wt
Power Dividers            192.9 + 47.4 * Wt
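As a sketch of how such weight-based CERs are exercised (Python; the function and dictionary names are hypothetical, and the coefficients are simply those shown in the table above), an estimate for one WBS element at a given weight might be computed as follows:

    # Sketch: evaluating the example weight-based CERs (FY2006K$).
    # Coefficients are from the table above; names here are hypothetical.
    CERS = {
        # element: (non-recurring CER, recurring T1 CER); None = not shown above
        "Receiver": (lambda wt: 2449.5 + 431.9 * wt,
                     lambda wt: 1875.9 + wt ** 2.42),
        "Antenna (Horn)": (lambda wt: -199.8 + 94.2 * wt, None),
        "Waveguides": (lambda wt: 10.9 + 14.6 * wt, None),
    }

    def estimate(element, weight):
        """Return (non-recurring, recurring T1) estimates for one element."""
        nonrec, rec = CERS[element]
        return (nonrec(weight) if nonrec else None,
                rec(weight) if rec else None)

    print(estimate("Receiver", 12.0))
    # -> (7632.3, ~2284.8): 2449.5 + 431.9*12, and 1875.9 + 12**2.42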
4.1.2.4
4.1.2.5
4.2
4.2.1
4.2.2
4.2.3
When developing a model, the team identifies the main characteristics, called the primary cost drivers, that are responsible for, and have the greatest impact on, the cost of the product or service being estimated. As many primary cost drivers as possible should be identified and included. Chapter 3, Cost Estimating Relationships, addresses the topic in more detail.
4.2.5
Identifying the job functions and other elements of cost that will be
estimated;
Figure 4.4 shows some of the parametric equations used by the Space Sensor Cost
Model. The model is a statistically derived aggregate of CERs based on historic
data collected from national sensor programs, including DMSP, DSP, Landsat,
AOA, and thirty space experiments. The CERs predict contractor cost without fee
and are based on engineering cost drivers, including:
D = Detector chip area in square microns
AE = Number of active elements in the focal plane array
W = Wavelength in Microns
C = Cooling capacity in watts
I = Input power per cooling capacity
AS = Optical area sum in square centimeters
AA = Optical area average in square centimeters
ALW = Area x length x width in square centimeters
OD = Optical element dimension in centimeters
Focal Plane Array (Monolithic):
  Engineering:     1936 (avg value)
  Prototype T1:    5 + 5E-07 * D
  Prod Setup:      159 (avg value)
  Flight Unit T1:  11 + 3.75E-04 * AE

Optical Telescope Assy:
  Engineering:     854 - 1996 * W + 5.61 * AS - 9.7 * AA
  Prototype T1:    253 + 1.13 * AS - 2.22 * AA
  Prod Setup:      184 + 0.16 * ALW + 7.67 * OD
  Flight Unit T1:  -63 + 3 * AS - 5.42 * AA

Cryogenic Cooler:
  Engineering:     1028 + 510 * C
  Prototype T1:    -142 + 402 * C + 3.3 * I
  Prod Setup:      8361 (avg value)
  Flight Unit T1:  485 (avg value)

Figure 4.4 The Space Sensor Cost Model Engineering Cost Drivers
This model meets the developer's criterion of being able to fine-tune the estimate, since separate CERs are available for the engineering, prototype (or qualification unit) T1, production setup, and flight unit (production) T1 costs.
model can also be used for engineering trade studies and as the primary method of
generating a cost proposal. The CERs were heuristically derived, then calibrated
to the normalized historic data.
Another model, the Follow-On Production Model, incorporates a number of
estimating techniques. It estimates follow-on production costs, allows the input
of discrete estimates for certain cost elements, and uses CERs to estimate others.
For example, unique non-recurring data and travel costs are discretely estimated
and input to the model; however, material can either be entered as a discrete
estimate, or the analyst can use the model to estimate the costs through the model's CERs.
Models should be validated and periodically updated to ensure they are based on current, accurate, and complete data, and that they remain good cost predictors. A contractor should work with Government representatives to determine how frequently a proprietary model is to be updated, and this decision should be incorporated into the company's estimating policies and procedures. Chapter 7, Government Compliance, provides further information on this subject.
The purpose of validation is the demonstration of a model's ability to reliably predict costs. This can be done in a number of ways. For example, if a company has sufficient historical data, data points can be withheld from the model-building process and then used as test points to assess the model's estimating accuracy. Unfortunately, the data sets available are often extremely small, and withholding a few points from the model's development may affect the precision of its parameters. This trade-off between accuracy and testability is an issue model developers always consider.
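As a minimal sketch of this withheld-data-point test, assume a simple one-variable linear CER and a small set of historical observations (all values invented for illustration):

    import numpy as np

    # Invented history: driver value (e.g., weight) and actual cost.
    driver = np.array([10.0, 14.0, 18.0, 22.0, 26.0, 30.0, 34.0])
    cost = np.array([205.0, 260.0, 330.0, 370.0, 445.0, 490.0, 560.0])

    # Withhold the last two points as test points.
    x_fit, y_fit = driver[:-2], cost[:-2]
    x_test, y_test = driver[-2:], cost[-2:]

    # Fit a linear CER (cost = a + b * driver) on the remaining points.
    b, a = np.polyfit(x_fit, y_fit, 1)

    # Assess estimating accuracy on the withheld points.
    predicted = a + b * x_test
    pct_error = 100.0 * (predicted - y_test) / y_test
    print(f"CER: cost = {a:.1f} + {b:.2f} * driver")
    print("Percent error on withheld points:", np.round(pct_error, 1))

Repeating the run with different withheld points shows how sensitive the fitted parameters are to the reduced data set, which is exactly the accuracy-versus-testability trade-off noted above.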
When sufficient historical data are not available for testing, accuracy assessments
can be performed using other techniques. For example, a comparison can be made between the model's estimate and an estimate developed using another method, as Figure 4.5 illustrates.

Figure 4.5 Example of Model Validation by Comparing the Model's Estimate to Another Estimate
Companies should also explain the model's design, development, and use. For example, the contractor, as part of its support for the Follow-On Production Model and Estimating Tool, developed a detailed manual containing information about the mechanics of the model, its estimating methodologies, and the timing of updates. The company also amended its Estimating System Manual to include a section on the model, and to refer the reader to the model's own manual.
4.2.8
4.2.9
GRP_AT10 = 0.4 + 0.4 * T_A_WTIN + 23.6 * LRU_MOD

where GRP_AT10 is the estimated cost in BY97K$, and T_A_WTIN and LRU_MOD are the CER's driver variables (full variable definitions not reproduced).

CORRELATION MATRIX:
            GRP_AT10   T_A_WTIN   LRU_MOD
GRP_AT10      1.00       0.48       0.78
T_A_WTIN      0.48       1.00      -0.13
LRU_MOD       0.78      -0.13       1.00

R2: 0.95    ADJ R2: 0.93    SEE: 39.7    CV: 13.6%

BETA COEFF: T_A_WTIN 0.59, LRU_MOD 0.86    MEAN: T_A_WTIN 256.53, LRU_MOD 7.89    RANGE: N/A

ANOVA TABLE:
SOURCE        Sum of Squares   DF   F Ratio/SIG (F)
Regression        182481.5      2        57/0.0
Residual            9474.6      6
TOTAL             191956.1      8

DATA POINTS: REC-1, REC-2, REC-3, REC-4, REC-6, REC-7, REC-8, REC-9, REC-10
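The summary statistics reported above (R2, adjusted R2, SEE, CV) can be reproduced for any CER data set with a few lines of linear algebra. The sketch below uses invented data; it is not the tool that produced the figures shown.

    import numpy as np

    # Invented observations for a two-driver CER: y = b0 + b1*x1 + b2*x2.
    x1 = np.array([3.0, 5.0, 6.0, 8.0, 9.0, 11.0, 12.0, 14.0, 16.0])
    x2 = np.array([7.0, 6.0, 9.0, 8.0, 10.0, 9.0, 12.0, 11.0, 13.0])
    y = np.array([210.0, 230.0, 290.0, 300.0, 350.0, 340.0, 420.0, 410.0, 480.0])

    X = np.column_stack([np.ones_like(x1), x1, x2])  # design matrix w/ intercept
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)     # least-squares coefficients

    resid = y - X @ coef
    n, k = len(y), X.shape[1] - 1                    # observations, regressors
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())

    r2 = 1.0 - ss_res / ss_tot
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
    see = (ss_res / (n - k - 1)) ** 0.5              # standard error of estimate
    cv = 100.0 * see / y.mean()                      # coefficient of variation, %
    print(f"R2={r2:.2f}  adj R2={adj_r2:.2f}  SEE={see:.1f}  CV={cv:.1f}%")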
4.3 Evaluation Criteria
An evaluators review of a proprietary model generally focuses on determining
that:
x
Policies and procedures exist which enforce the appropriate use of, and
consistency in, the model;
Frequency of use;
Expected savings;
Customer support.
The effort needed to evaluate information system controls will vary with the complexity of the model. The purpose of the controls is to maintain the model's integrity.
4.4
4.5 Lessons Learned
The Parametric Estimating Reinvention Laboratory identified some concepts that
all implementation teams should consider:
Include the customer, all interested company personnel, and DCMA and DCAA representatives;
Establish a process flow and target development dates to ensure all team members provide their inputs to the model's design;

4.6 Best Practices
Based on Parametric Estimating Reinvention Laboratory experience, no single
implementation approach is superior to another, but all successful applications of
the general model-building process do depend on good communications. Because
Industry and Government recognize a common need to reduce the time and
expense of generating, evaluating, and negotiating cost proposals, they agree to
participate on a particular model implementation team.
Industry model team members provide the Government insight into the methods
and constraints of their estimating processes, and the Government team members
explain what criteria the model must meet for it to be an acceptable estimating
tool. As the work progresses, all team members share opinions, concerns, and
solutions in an effort to make the proposal preparation process faster and less
costly, while maintaining a reasonable level of reliability.
The best practices for model development are:
Estimate and track the cost of developing and implementing the new
methods. Maintain metrics on cycle times and proposal costs to determine
the return on the invested costs.
Engage the major customers, DCMA, and DCAA early in the process and
solicit their input on a real-time basis.
CHAPTER 5
This chapter provides best practice recommendations based on model practitioners' experiences with implementing hardware models into an organization's cost estimating and analysis practices. Many models are available, and some are used for very specific purposes, such as estimating the costs of electronic modules or the operations and support costs of hardware and software systems. Organizations and cost estimating model users are encouraged to evaluate as many alternatives as possible prior to selecting and implementing the cost estimating model that best meets their requirements.
5.1 Background
In the early 1950s, the Rand Corporation pioneered parametric cost estimating
concepts and used them to develop costs in support of high-level planning studies
for the United States Air Force (USAF). Rand used parametric cost estimating
relationships (CERs) based on speed, range, altitude, and other design parameters
for first and second-generation intercontinental ballistic missiles, jet fighters, jet
bombers, and cargo aircraft.
Since then the Government and Industry cost analysis community has moved
from simple to complex CERs, and then to sophisticated computer parametric
models that can estimate the life-cycle cost (LCC) of complex weapon, space, and
software-intensive systems. A parametric cost model can be viewed as the
collection of databases, CERs (simple one-variable equations and complex
algorithms requiring multiple design/performance/programmatic parameters), cost
factors, algorithms, and the associated logic, which together are used to estimate
the costs of a system and its components. A model may be manual or automated
and interactive. A parametric cost model uses known values (e.g., system
descriptions or parameters) to estimate unknown ones (e.g., program, component,
activity costs).
Over the past 40 years, Government and Industry have used parametric models to
support conceptual estimating, design-to-cost analyses, LCC estimates,
independent cost estimates, risk analyses, budget planning and analyses, should
cost assessments, and proposal evaluations. Chapter 8, Other Parametric
Applications, contains information on other uses of parametric models.
In 1975, the then RCA Company offered a commercial hardware estimating
model, which was initially developed in the 1960s to support internal independent
cost estimates. This tool, and others that followed from competing companies,
grew in popularity and sophistication and were used to support the full spectrum
of cost estimating and analysis activities. However, these hardware cost models
were generally not used as a BOE for proposals submitted to the Government
when cost or pricing data were required.
As part of the recent Parametric Estimating Reinvention Laboratory effort, several
companies using integrated product teams (IPTs) implemented commercial
parametric estimating hardware models, which can rapidly compute development
and design costs, manufacturing costs of prototypes, and production
unit/manufacturing support costs. The models can also compute the operation and
support costs of fielded systems.
5.2
Hardware parametric models bring speed, accuracy, and flexibility to the cost
estimating process. Cost measurement of alternative design concepts early in the
design and acquisition process is crucial to a new program because there is little
opportunity to change program costs significantly once a detailed design and
specs have been released to production. The analyst, with engineering support, reviews the system's concept of operations, system requirements, documentation, and conceptual designs. From this review, a work breakdown structure (WBS)
and cost element structure (CES) are developed for all the systems that are being
designed, developed, and produced. In addition, ground rules and assumptions
(GR&As) defining the acquisition drivers and the programmatic constraints that
affect design and performance are identified. This WBS/CES is then incorporated
into the model, and it defines what is being estimated, including the descriptive
parameters.
Parametric estimating models have been developed to operate with limited
concept description so that program management personnel can estimate the cost
of many unique configurations before system design specifications and detailed
bills of material are finalized. Parametric models can also be used as the basis of
a cost estimate in preparation of firm business proposals, or in the independent
assessment of cost estimates prepared using a traditional estimating approach.
Hardware models extrapolate from past systems to estimate and predict the costs
of future ones, and their inputs cover a wide range of system features and
characteristics. Weight and size are often used as a model's principal descriptive
variables (descriptors) since all systems (and their components) exhibit these
properties. Application and type are the common predictive variables (predictors)
for electronic components, while mechanical and structural elements can best be
described in terms of their construction: method, type of material, functionality,
machinability, and manufacturing process.
Some uses of parametric hardware cost models include (see Chapter 8 for more
discussion on cost models):
x
Estimates of modifications;
Vendor negotiations;
LCC estimates;
Estimates of spare parts costs and other operations and support (O&S)
costs.
Parametric models can be used in all phases of hardware acquisition; for example,
development, production and deployment, and all functional aspects such as
purchased and furnished hardware (GFE), hardware modifications, subcontractor
liaison, hardware-software integration, multiple lot production, and hardware
integration and test.
Figure 5.1 depicts typical hardware modeling inputs and outputs. The main
advantage of a parametric model over grass roots or build-up methods is that it
requires much less data to make the estimate. For example, when a parametric
model calculates a manufacturing cost, it does so using a few items of
programmatic, technical, and schedule information rather than an itemized parts
list and/or a labor resources build-up.
Fundamental input parameters for parametric hardware models include:
Hardware Modeling (Figure 5.1): a hardware model uses common parameters to estimate and evaluate new requirements.

Input parameters: magnitude (quantity); operating environment; amount of new design and design repeat; engineering complexity; manufacturing complexity; schedule; H/W - S/W integration; weight/volume.

Output parameters: cost (development, production, engineering, manufacturing); schedule risks; unit/system integration costs.
Be calibrated;
5.3
The typical process flow is:

Define Objectives: ground rules and assumptions; application(s); use of IPTs; development plan.
Model Validation: training the IPT members; develop procedures; demonstrate accuracy; document.
Model Calibration: map cost and technical data; calibrate to history or other relevant data; document calibration trials and results.
Forward Estimating: identify estimating opportunities; gather technical descriptions; use relevant program data; develop the estimate; analyze and reconcile the estimate; write the basis of estimate support.
Periodic Re-Calibration and Validation.
5.3.1 Define Objectives
Users of complex hardware models must first establish assumptions concerning data collection, data requirements for model calibration/validation, and the best way to normalize data for differences in development, production quantities, and scope of work. This includes establishing ground rules for determining the
compatibility of the data, the model itself, the calibration results, and the
proposed use of the model. The estimator should coordinate these rules and
assumptions with the proposal manager, technical leads, customer, DCMA, and
DCAA.
5.3.2
5.3.3 Model Calibration
The calibration of a complex hardware model is the process of tuning it to reflect the given contractor's historical cost experience and business culture. Actual technical, programmatic, and cost data from previous projects embody the organization's historical way of doing business. The parameters may have to be adjusted for the way an organization's business will be conducted in the future. Calibration captures this by adjusting the complex model's complexity and/or adjustment factors. The calibration process involves:
Collecting cost, technical, and programmatic data from historical or ongoing relevant programs;
Calibration flow (PRICE H example): starting from previous lots' schedules/quantities, the historical PCO percentage, and the labor/material mix by LRU, perform a product calibration; iterate the labor/material index until the input labor/material mix equals the output mix and the input equals the output direct-labor production rates; then perform an organization calibration and go to the post-processor.
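The iteration in the flow above can be approximated in a few lines: adjust a single calibration factor until the model reproduces a historical lot's actual cost within a tolerance. The model function below is a hypothetical stand-in, not the PRICE H interface, and all values are invented.

    def model_cost(complexity, weight, quantity):
        """Hypothetical stand-in for a commercial model's cost output."""
        return complexity * (40.0 * weight ** 0.8) * quantity ** 0.95

    def calibrate(actual_cost, weight, quantity, lo=0.1, hi=10.0, tol=0.001):
        """Bisect on the complexity factor until model output matches actuals."""
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if model_cost(mid, weight, quantity) < actual_cost:
                lo = mid   # model under-predicts: raise the factor
            else:
                hi = mid   # model over-predicts: lower the factor
        return (lo + hi) / 2.0

    # Tune the factor so the model reproduces a historical lot's actual cost.
    factor = calibrate(actual_cost=52000.0, weight=85.0, quantity=24)
    print(f"calibrated complexity factor: {factor:.3f}")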
5.3.4 Model Validation
Validation is the process of demonstrating the credibility of a parametric model as
a good predictor of costs, and must be done before the model can be used as the
BOE for proposals. Parametric models also require periodic re-calibration and
validation of company-indexed complexity factors.
A parametric model should demonstrate the following features during its
validation.
x
Many analysts use one of the following methods to assess the model's predictive accuracy.
Predict the cost of end items not previously calibrated (using appropriate calibration values), and compare the model's estimates to the end items' actual costs or estimates-at-completion (when at least 80 percent of program actual costs are known).

5.3.5 Forward Estimating
Figure 5.4 displays the forward estimating process. All the collected historical complexity factors, technical descriptors, programmatic data, and interview results are used to develop the proposal estimate. The BOE should document major input parameter values and their rationales. Some companies may ask the functional areas (e.g., engineering, quality) to develop independent estimates as sanity checks to gain confidence in the complex model's results. In addition, establishing a reconciliation process is strongly recommended to provide a mechanism for comparing the model's estimates with actual cost experience (and can also be used for periodic revalidation).
Figure 5.4 (forward estimating): a database of historical complexity factors (manufacturing complexity, design global factor, calibration adjustment) feeds the commercial parametric cost model, whose outputs include design hours; fabrication, assembly, and test hours; tooling hours; development cost; production cost; tooling cost; material cost; and hours by classification of cost.
A model's results may not be in the usual proposal format (e.g., spread of hours by functional category and element of cost); the format differs with the model and how the company chooses to use it. In this case, a post processor can restructure the results to the desired level of detail (e.g., percentage spread of hours within a functional category). To produce a dollar estimate for the project, the company then applies current labor and indirect rates to the post processor output to produce the typical functional category and element-of-cost proposal.
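A post processor of this kind can be sketched as follows; the functional categories, percentage spreads, and labor rates are invented for illustration.

    # Hypothetical historical spread of hours across functional categories.
    HOUR_SPREAD = {"engineering": 0.35, "manufacturing": 0.45,
                   "quality": 0.08, "tooling": 0.12}
    LABOR_RATES = {"engineering": 95.0, "manufacturing": 62.0,  # $/hr, invented
                   "quality": 70.0, "tooling": 58.0}

    def post_process(total_model_hours):
        """Spread model hours by category and apply current labor rates."""
        lines = {}
        for category, share in HOUR_SPREAD.items():
            hours = total_model_hours * share
            lines[category] = {"hours": hours,
                               "dollars": hours * LABOR_RATES[category]}
        return lines

    for cat, line in post_process(12500.0).items():
        print(f"{cat:13s} {line['hours']:8.1f} h  ${line['dollars']:12,.2f}")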
Chapter 7, Government Compliance, contains additional information on formats
for proposal submissions. There is a trend in industry to use hardware datasheets,
initially developed by the National Reconnaissance Office (NRO) Cost Group to
capture technical, detailed parametric model inputs. Appendix K includes data
input forms that are often requested by the NRO and other Government
contracting organizations. These data input forms reflect required inputs for
parametric models.
5.4
5.5
Start small to gain experience and acceptance by all internal and external
customers (e.g., use simple CERs to parametrically estimate one to three
subsystems in a proposal) and to demonstrate the reasonableness of
proposal parametric estimates by comparing them with estimates made
using other techniques.
5.6 Best Practices
The best practices from the Parametric Estimating Reinvention Laboratory sites
where complex hardware models were used as the BOE in a proposal are as
follows.
x
Develop a Parametric IPT (include customers) and train all key members
in the use of the selected model.
5.7 Conclusions
Complex parametric cost models provide opportunities for Industry and
Government to save time and money on proposals and negotiations requiring cost
or pricing data. In addition, experience from Parametric Estimating Reinvention
Laboratory sites indicates that the use of these models, when properly
calibrated/validated and appropriately applied, complies with Government
procurement regulations.
CHAPTER 6
6.1 Background
Software is a combination of computer instructions and data definitions that are
required for computer hardware to perform computational or control functions.
DoD spending for software intensive systems is significant and continues to
increase. Software costs as a percentage of total program and computer system
costs are also increasing. DoD purchases software for weapon systems and
management information systems (MISs). Weapon system software is associated
with the operations of aircraft; ships; tanks; tactical and strategic missiles; smart
munitions; space launch and space-based systems; command and control (C2); command, control, and communications (C3); and command, control, communications, and intelligence (C3I) systems. MIS
software also performs activities that support weapon systems (e.g., payroll and
personnel, spares calculations).
Accurately projecting and tracking software costs is difficult, and cost overruns
often occur. It is very important, therefore, to understand software estimating
processes and methods. Software estimating problems often occur because of the:
x
Figure 6.1 illustrates the critical elements of the software estimating process, and
shows that adequate parametric software estimating practices include policies and
procedures for data collection and normalization, as well as calibration and
validation (including guidance on significant model cost drivers, input and output
parameters, and steps for validating the model's accuracy).
Figure 6.1 (software estimating process): data collection feeds a calibrated and validated commercial model, which maps input parameters (i.e., cost drivers) to outputs.

Inputs include: software sizing (the key driver); application (e.g., IS, command and control); software processes (e.g., modern process, CMM level); new design and reuse; productivity factors; complexity; utilization; schedules.

Outputs include: costs by program phase; labor estimates by program phase; staffing profiles; schedule risks.
According to Boehm [Boehm et al., 2000], the impacts of certain risk drivers can be significantly higher than the JPL study found:
o Requirements volatility can increase cost by as much as 62 percent;
6.1.2
CSCI phases (Figure 6.2): System Requirements Analysis; System Design; Software Requirements Analysis; Software Design; Software Implementation and Unit Testing; Unit Integration and Testing; CSCI Qualification Testing; CSCI and HWCI Integration and Testing; System Quality Test; Preparing for Use; Preparing for Transition.
System hierarchy (figure): a SYSTEM decomposes into computer software configuration items (CSCI 1, CSCI 2) and hardware configuration items (HWCI 1); each CSCI decomposes into software units (SU 11, SU 12), which may decompose further (SU 111, SU 112).
The software life cycle (i.e., CSCI) phases shown in Figure 6.2 do not have to occur
sequentially, as the illustration may imply. Many modern development practices
can result in a different order of activities or even in a combination (or
overlapping) of activities as explained in the following discussions of alternative
software development methodologies. The type of methodology used generally
has a significant impact on development, maintenance, and total life cycle costs.
6.1.2.1 Waterfall Development
Although this method is still widely used, most software experts recommend that
it be used with caution.
6.1.2.2 Evolutionary Development
This methodology involves the initial development of an operational product, and
then the continual creation of more refined versions (i.e., iterations) of it.
Successive iterations generally follow the CSCI activities highlighted in Figure 6.2.
During the first iteration, core capabilities are developed and fielded. The
software is developed with a modular design so additional capabilities and
refinements can be added by the iterations. The advantage of this method is that a
working product is available for users early in the development process, which
helps them assess the product and provide inputs for the enhanced iterations. One
drawback, though, is that the final version can require more time and effort than
would be expended under the waterfall method.
6.1.2.3 Incremental Development
The incremental development methodology builds a software product through a
series of increments of increasing functional capability, and is characterized by a
6.1.2.4 Prototyping
Prototyping involves the development of an experimental product that
demonstrates software requirements for the end users, who get a better
understanding of these requirements by working with the prototype. Computer-aided software engineering (CASE) tools facilitate the use of this methodology.
While prototyping improves requirements definition, the prototype must not be
taken as a final product, because this could increase long-term support costs.
6.1.2.5 Spiral Development
This approach views software development as a spiral, with radial distance as a
measure of cost or effort, and angular displacement as a measure of progress.
One cycle of the spiral usually represents a development phase, such as
requirements analysis or design. During each cycle, objectives are formulated,
alternative analysis performed, risk analysis conducted, and one or more products
delivered. The advantages of the spiral model are that it emphasizes evaluation of
alternatives using risk analysis, and provides flexibility to the software
development process by combining basic waterfall building blocks with
evolutionary or incremental prototyping approaches.
6.1.2.6 Object-Oriented Development
This methodology differs from traditional development in that procedures and
data are combined into unified objects. A system is viewed as a collection of
classes and objects, and their associated relationships. This is not a separate
development method per se, and can be used with other methods (e.g., waterfall,
evolutionary, incremental). It can also facilitate software reusability and
supportability. Appendix D lists several societies that can provide additional
information.
6.1.3 Software Support
Software must be maintained, or supported, after it is developed. Software
maintenance includes such activities as adding more capabilities, deleting
obsolete capabilities, modifying software to address a change in the environment
or to better interface with the host computer, and performing activities necessary
to keep software operational. Software support can also be called "software
redevelopment" since its tasks repeat all, or some, of the software development
phases.
Figure 6.4 explains support categories and gives the relative percentage of effort
generally expended on each one. Note that corrective support activities, which
many people regard as the sole software maintenance activity, generally account
for only 17 percent of the total support effort.
Figure 6.4 (software support categories and relative effort): CORRECTIVE, 17%; ADAPTIVE (accommodate environmental changes), 18%; PERFECTIVE (make enhancements), 60%; OTHER, 5%.
Software support is expensive, and can exceed the total cost of development.
Unfortunately, the techniques often used to estimate support costs are ad-hoc;
software support costs are often funded through level-of-effort (LOE) type
contracts, and are not based on specific support requirements.
6.1.4
6.1.4.1
decided to replace the CMM in 2001 with a suite of CMM Integration (CMMI)
models.
According to CMU, CMMI best practices improve upon the CMM by enabling an
organization to:
x
Expand the scope of and visibility into the product life cycle and
engineering activities to ensure that the product or service meets customer
expectations;
There are actually four CMMI models, with two versions of each: continuous and
staged. The staged version of the CMMI for systems and software engineering
(CMMI-SE/SW) is discussed here since it tracks most closely with the CMM.
The CMMI-SE/SW has five levels of software process maturity. Figure 6.5
shows the characteristics associated with each level. These characteristics are
typically demonstrated by organizations at that level. The levels are sometimes
used as key parameters (i.e., inputs) by complex parametric models, and the
characteristics may be used to indicate process improvements that need to be
implemented before an organization can advance to the next level of maturity.
Figure 6.5 lists the five staged CMMI-SE/SW maturity levels and the characteristics associated with each: Level 1, Initial; Level 2, Managed; Level 3, Defined; Level 4, Quantitatively Managed; Level 5, Optimizing (level descriptions not reproduced).
6.1.4.2 Process Areas
For each staged CMMI-SE/SW maturity level (except Level 1), an organization
must achieve a number of specific goals and practices for certain process areas.
Figure 6.6 lists the required process areas by maturity level. An organization is
expected to successfully perform all process areas at each level (and all lower
levels) to attain that maturity level; however, tailoring is allowed in special
circumstances.
Figure 6.6 required process areas by maturity level: Level 1 Initial, none required; Level 2 Managed: Requirements Management, Project Planning, Project Monitoring and Control, Supplier Agreement Management, Measurement and Analysis, Process and Product Quality Assurance, Configuration Management; Level 3 Defined, Level 4 Quantitatively Managed, and Level 5 Optimizing (process areas not reproduced).
Defining the project life cycle phases used to scope the planning effort;
Establishing the schedule and cost for work tasks based on estimation
rationale.
6.1.4.3
6.1.5
6.1.6
Model Category: Analogy
  Description: Compare project with past similar projects
  Limitations: Truly similar projects must exist

Model Category: Expert Judgment
  Advantages: Little or no historical data is needed; good for new or unique projects
  Limitations: Experts tend to be biased; knowledge level is sometimes questionable

Model Category: Bottoms-Up
  Description: Individuals assess each component, and then component estimates are summed to calculate the total estimate
  Advantages: Accurate estimates are possible because of the detailed basis of estimate (BOE); promotes individual responsibility

Model Category: Parametric Models
  Description: Perform overall estimate using design parameters and mathematical algorithms
  Limitations: Models can be inaccurate if not properly calibrated and validated; it is possible that historical data used for calibration may not be relevant to new programs
6.2
6.2.1
External inputs (EI). All unique data or control inputs that cross the
system boundary and cause processing to occur (e.g., input screens and
tables).
External outputs (EO). All unique data or control outputs that cross the
system boundary after processing has occurred (e.g., output screens and
reports).
External inquiries (EQ). All unique transactions that cross the system
boundary to make active demands on the system (e.g., prompts and
interrupts).
Internal files (ILF). All logical data groupings that are stored within a
system according to some pre-defined conceptual schema (e.g., databases
and directories).
External interfaces (EIF). All unique files or programs that cross the
system boundary and are shared with at least one other system or
application (e.g., shared databases and shared mathematical routines).
Complexity weights per component type (standard IFPUG values; cells lost in reproduction are filled from the IFPUG standard):

Component   Simple   Average   Complex
EI             3        4         6
EO             4        5         7
EQ             3        4         6 (or 7)
ILF            7       10        15
EIF            5        7        10

14 Factors:
1. Data Communications
2. Distributed Data Processing
3. Performance Objectives
4. Heavily-Used Configuration
5. Transaction Rate
6. On-Line Data Entry
7. End-User Efficiency
8. On-Line Update
9. Complex Processing
10. Reusability
11. Installation Ease
12. Operational Ease
13. Multiple Sites
14. Facilitate Change
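A minimal unadjusted function point computation using the weights above, followed by the standard IFPUG value adjustment, FP = UFP x (0.65 + 0.01 x total degrees of influence); the counts and the degrees-of-influence total below are invented:

    # IFPUG-style weights (simple, average, complex), as in the table above.
    WEIGHTS = {"EI": (3, 4, 6), "EO": (4, 5, 7), "EQ": (3, 4, 6),
               "ILF": (7, 10, 15), "EIF": (5, 7, 10)}

    # Invented counts for one application: (simple, average, complex).
    counts = {"EI": (6, 4, 2), "EO": (5, 3, 1), "EQ": (3, 2, 0),
              "ILF": (2, 3, 1), "EIF": (1, 1, 0)}

    ufp = sum(n * w for comp in WEIGHTS
              for n, w in zip(counts[comp], WEIGHTS[comp]))

    # IFPUG adjustment: each of the 14 factors is rated 0-5; their sum is the
    # total degrees of influence (TDI). TDI here is invented.
    tdi = 38
    fp = ufp * (0.65 + 0.01 * tdi)
    print(f"UFP = {ufp}, adjusted FP = {fp:.0f}")   # UFP = 176, FP = 181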
The excellent results obtained from Albrecht and Gaffney's research are a noted strength of function-point models. In addition, the International Function Point Users Group (IFPUG), which meets twice a year and periodically publishes a guide to counting and using function points (Garmus, 2001), performs ongoing research. Proponents of function point size estimation state that function point counts can be made early in a program, during requirements analysis or preliminary design. Another strength, according to Capers Jones (Jones, 1995), is that they provide a more realistic measure of productivity because SLOC-per-person-per-month measures tend to penalize HOLs (e.g., ADA, C++).
However, function points do have disadvantages: they are often harder to visualize (i.e., function points are concepts), whereas SLOC can be seen (e.g., on a code listing). Function points are also less widely used than SLOC for most applications, and have only been studied extensively for business or data processing applications, though attempts to adapt the function point concept to real-time and scientific environments have been made.
6.2.2
Language     Jones Language Level   Jones SLOC/FP   Galorath SLOC/FP   Reifer SLOC/FP
Assembler           -                    320              320               400
COBOL               -                    107               61               100
FORTRAN             -                    107               58               105
ADA (1983)         4.5                    71               71                72
PROLOG             5.0                    64               61                64
Pascal             3.5                    91               71                70
PL/1               4.0                    80               71                65
While function point to SLOC conversion ratios are useful, and sometimes necessary, they should be used with caution.
Figure 6.9 illustrates that, while researchers may agree on the ratios for some
languages such as ADA, they differ on the ratios for others, such as Pascal and
PL/1. Furthermore, there was considerable variance for these ratios within the
databases. Therefore, for some languages it appears that backfiring should not be
used, and for cost estimation it is probably best to use a model for which the
algorithms are based on the users size measure (i.e., calibrated parametric sizing
models).
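Backfiring itself is a single multiplication; the caution lies in the choice of ratio. A sketch that carries the spread of published ratios through as a range (values from Figure 6.9):

    # SLOC-per-function-point ratios from Figure 6.9 (Jones, Galorath, Reifer).
    RATIOS = {"ADA (1983)": (71, 71, 72),
              "Pascal": (91, 71, 70),
              "PL/1": (80, 71, 65)}

    def backfire(function_points, language):
        """Convert a function point count to a low/high SLOC range."""
        lo, hi = min(RATIOS[language]), max(RATIOS[language])
        return function_points * lo, function_points * hi

    low, high = backfire(181, "Pascal")
    print(f"Pascal: {low:,.0f} to {high:,.0f} SLOC")  # wide range shows the risk

For Pascal, the same 181 function points backfire to anywhere from roughly 12,700 to 16,500 SLOC, which is why a calibrated sizing model is usually preferable.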
6.2.3 Object Points
Other sizing methods were developed to address modern programming
applications. Currently, object points are used in development environments
using integrated CASE tools (although they may have other applications). CASE
tools automate the processes associated with software development and support
activities and, when used correctly, can have a significant impact on productivity
levels as well as quality factors associated with software costs, such as rework.
The four object types used include:
x
Two object-based measures are obtained from these object types. The first, object
counts, is merely a sum of the number of instances of each object type and is
analogous to basic function points. Object points are a sum of object instances
for each type, times an effort weight for each type.
The average effort weight for each type is as follows:
x
Therefore, object points are an estimation of the effort needed for an integrated CASE tool development environment. Application points, a variant of object points, are currently used in the COCOMO II Application Composition model.
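Since the effort-weight list above did not survive reproduction, the sketch below uses purely hypothetical object types and weights to show the mechanics: an object count simply sums instances, while object points weight each instance by its type.

    # Hypothetical effort weights per object type (the handbook's actual
    # values are not reproduced here); instance counts are also invented.
    EFFORT_WEIGHT = {"screen": 2.0, "report": 5.0,
                     "3GL component": 10.0, "module": 1.0}
    instances = {"screen": 12, "report": 6, "3GL component": 2, "module": 20}

    object_count = sum(instances.values())          # analogous to basic FPs
    object_points = sum(n * EFFORT_WEIGHT[t] for t, n in instances.items())
    print(f"object count = {object_count}, object points = {object_points:.0f}")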
6.2.4
for another measure, such as SLOC. If productivity rates are known (UCP/PM),
UCPs can be used to directly estimate effort.
6.2.5
6.3
6.3.1
Figure 6.10 Weighted-Factors Approach: each factor is assigned an importance rating, each model (A, B, C) receives a rating per factor, and the products are summed into weighted totals. Surviving products (Model A, B, C):

(factor name lost)         100   90   70
(factor name lost)          90   54   63
Ease of Use                 64   72   48
Ease of Calibration         12   30   30
Database Validity           35   35   20
Currentness                 15   25   25
Accessibility               24   36   16
Range of Applicability      10   14   20
Ease of Modification       (values lost)
Weighted Totals            (values lost)
6.3.2
6.3.3
Users must become familiar with each of the candidates to choose the most
effective model. This often involves attending a training course and using the
model for several months. Once the user becomes sufficiently familiar with the
models, the selection process can begin. It is highly desirable that users perform
their own studies, and not rely solely on outside information. Nevertheless,
validation studies performed by outside agencies can certainly help the user in the
model selection process. An excellent example is a study by the Institute for Defense Analyses (Bailey, 1986), which compared and evaluated features of most of the cost models then in use by Industry and Government. While outside studies can provide valuable information, they should be used as supplementary material since they may not reflect the unique features of the user's environment.
The Weighted-Factors Approach (Figure 6.10) can help with the qualitative assessment of candidate models. The user assigns a weight to each factor (Step 1), assigns a rating between "1" and "10" to each model (based on how well it addresses each factor), multiplies the model and importance ratings, and sums the results. The highest total can indicate the best model alternative (e.g., Model B in Figure 6.10). However, models that are close in score to the highest (e.g., Model A in Figure 6.10) should also be examined. Since there is some subjectivity in this process, small differences may be negligible. Again, while the Weighted-Factors Approach is somewhat subjective, it can help a user consider what is important in model selection and in quantifying the rating process.
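A sketch of the computation follows; factor_1 and factor_2 stand in for two rows whose names are not legible in Figure 6.10, and the weights and ratings are reverse-engineered to be consistent with the products shown there, so treat them as illustrative.

    # Importance weights (Step 1) and 1-10 model ratings; values chosen to be
    # consistent with the products visible in Figure 6.10 (names hypothetical).
    weights = {"factor_1": 10, "factor_2": 9, "ease of use": 8,
               "ease of calibration": 6, "database validity": 5,
               "currentness": 5, "accessibility": 4, "range of applicability": 2}
    ratings = {
        "factor_1":               {"A": 10, "B": 9, "C": 7},
        "factor_2":               {"A": 10, "B": 6, "C": 7},
        "ease of use":            {"A": 8, "B": 9, "C": 6},
        "ease of calibration":    {"A": 2, "B": 5, "C": 5},
        "database validity":      {"A": 7, "B": 7, "C": 4},
        "currentness":            {"A": 3, "B": 5, "C": 5},
        "accessibility":          {"A": 6, "B": 9, "C": 4},
        "range of applicability": {"A": 5, "B": 7, "C": 10},
    }

    totals = {m: sum(weights[f] * ratings[f][m] for f in weights)
              for m in ("A", "B", "C")}
    print(totals)  # {'A': 350, 'B': 356, 'C': 292}: B best, A a close second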
For quantitative assessments, or in determining whether the models meet accuracy
requirements, users should calibrate the models, then run them against projects
for which the user has historical data that was not used during the calibration
process. This approach is often arduous, but essential if a user truly wants to
identify the model that is most suitable (i.e., most accurate) for the application.
6.3.4
6.4
estimates (Park, 1989). Software models, however, are not magic boxes; they are only as good as the input data used. Even models have limitations. For example, parametric models can be inaccurate if they have not been adequately calibrated and validated. Furthermore, models are not always useful in analyzing non-cost factors that may impact certain decision-making. Therefore, a manager must be able to recognize the capabilities and limitations of models and use them intelligently. An example is provided at the end of this section.
6.4.1 Input Data
One problem with parametric models is that their effort and schedule estimates
may be very sensitive to changes in input parameters. For example, in most cost
models, changes in program size result in at least an equivalent percentage change
in cost or effort. Other input changes can have dramatic effects; for instance,
changing the two COCOMO II personnel capability ratings, (analyst capability
(ACAP) and programmer capability (PCAP), from very high to very low will
result in a 350 percent increase in effort required. All models have one or more
inputs for which small changes result in large changes in effort and, perhaps,
schedule.
The input data problem is compounded by the fact that some inputs are difficult to
obtain, especially early in a program (e.g., software size). Other inputs are
subjective and often difficult to determine; personnel parameter data are
especially difficult to collect. Even objective inputs like security requirements
may be difficult to confirm early in a program, and later changes may result in
substantially different cost and schedule estimates. Some sensitive inputs such as
productivity factors should be calibrated from past data. If data are not available,
or if consistent values of these parameters cannot be calibrated, the model's usefulness may be questionable.
A manager or analyst must spend considerable time and effort to obtain quality
input information. Ideally, a team of personnel knowledgeable in both software
estimating and technical issues should perform a software cost estimate. A
software cost analyst must work with engineering or technical personnel to
determine some of the hard inputs, such as size and complexity. The analyst should also try to determine soft inputs (e.g., analyst's capability, ACAP) by working with appropriate personnel in the organization and, if necessary, performing a Delphi survey or similar expert judgment technique. Finally, an analyst or team should calibrate the models to the particular environment, a time-consuming but worthwhile exercise. As previously discussed, model calibration should improve model accuracy.
6.4.2 Model Validation
If a model will be used to develop estimates for proposals that will be submitted
to the Government or a higher tier contractor, its accuracy should be addressed
through the validation process. Validation is defined as the process, or act, of demonstrating a calibrated model's ability to function as a credible forward-estimating tool. A parametric model, such as one for software estimating, should be implemented as part of a contractor's estimating system. For a model to be considered an acceptable (or valid) estimating technique, an organization should be able to demonstrate that:
x
COCOMO II
Because it is an open-book model (as is REVIC), COCOMO II is an example of a software estimating tool that can be used for performing model-based estimates. USC COCOMO II is a tool developed by the Center for Software Engineering (CSE) at the University of Southern California (USC), headed by Dr. Barry Boehm.
Unlike most other cost estimation models, COCOMO II (www.sunset.usc.edu) is
an open model, so all of the details are published. There are different versions of
the model, one for early software design phases (the Early Design Model) and one
for later software development phases (the Post-Architecture Model). The
amount of information available during the different phases of software
development varies, and COCOMO II incorporates this by requiring fewer cost
drivers during the early design phase of development versus the post-architecture
phases. This tool allows for estimation by modules and distinguishes between
new development and reused/adapted software.
6.4.4
If MM = k(KSLOC)^b x EAF, then with k = 3.22, KSLOC = 36.300, b = 1.2, and EAF = 1.0:

MM = 3.22 x (36.300)^1.2 x 1.0 = approximately 240 Man Months
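As a check of the arithmetic (Python):

    # COCOMO-style effort: MM = k * KSLOC**b * EAF, with the example's inputs.
    k, b = 3.22, 1.2
    ksloc, eaf = 36.300, 1.0
    mm = k * ksloc ** b * eaf
    print(f"MM = {mm:.0f}")  # ~240 man-months, matching the example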
6.5
6.5.1
6.5.2
6.5.3
Model users should ensure they are familiar with the latest editions of these
models and obtain retraining as necessary. Some commercial software estimation
models as they appear today (2007) are more fully described in Appendix A.
6.6 Lessons Learned
The results of the Parametric Estimating Reinvention Laboratory demonstrated that software parametric models should be implemented as part of an organization's estimating system. Parametric estimating systems should consist of credible databases; adequate policies and procedures containing guidance on data collection and analysis activities, calibration, and validation; and policies and procedures to ensure consistent estimating system operation. Chapter 7, Government Compliance, provides detailed guidance on the Government's expectations related to software estimating using parametric techniques.
The effective implementation of software parametric techniques involves
establishing adequate resources to populate software metric databases on a regular
basis. Figure 6.11 contains a listing of key metrics that contractors should collect
(Grucza, 1997).
Figure 6.11 categories of key metrics: Size (by language); Effort; Productivity (LOC/Hour; Pages/Hour); Requirements Stability; Schedule; Environment (Throughput; Computer Memory Utilization; Input/Output Channel Utilization); Quality; Training; Parametric Model Data (Data Sheets); Risk Management (Risk Items); Earned Value; Intergroup Coordination; Integrated Software Management. (Individual measures for the remaining categories are not reproduced.)
6.7 Best Practices
During the Parametric Estimating Reinvention Laboratory, an IPT used a complex parametric model for software estimating. At the beginning of its implementation, the contractor IPT members found the most challenging task was obtaining the necessary internal resources (i.e., commitments) to perform data collection and analysis activities. Later the company initiated software process improvement activities consistent with CMMI criteria for Levels 2 and 3. As previously discussed, these criteria include establishing databases and metrics for software estimation. The IPT recognized the implementation of a complex parametric software estimating model could be greatly facilitated when done in conjunction with the software process improvement activities related to the CMMI. Of course, if a contractor has already achieved and continues to maintain
6.8 Conclusions
Software model users and managers are continually challenged to make intelligent use of current models and to keep abreast of the impacts of future changes. New languages, new development methods, and new or refined models are a few of the many areas of which a model user must have current knowledge. Technologies such as graphical user interfaces and artificial intelligence can also affect software estimation. However, current parametric software cost models have many features and capabilities and, when calibrated, can provide detailed and accurate software estimates.
CHAPTER 7
Government Compliance
7.1 Regulatory Compliance
The proper use of calibrated and validated parametric estimating CERs and
parametric models, in tandem with the establishment and consistent adherence to
effective estimating policies and procedures, will promote compliance with the
applicable procurement statutes and regulations. This section discusses the
various regulatory requirements.
7.1.1
Cost or pricing data do not include judgmental data, but do include the factual data on which judgment is based. Like all traditional estimating techniques, parametric estimates contain judgmental elements that are not subject to certification, yet need to be disclosed pursuant to FAR Part 15, since they are subject to negotiation.
Specific to parametric techniques, properly calibrated and validated CERs and
parametric models, as supported by corresponding company policies and
procedures, are expected to be fully compliant with TINA requirements through
the cyclical processes of calibration and validation themselves. Accordingly, the
matters of currency and completeness of that data should not become issues,
provided the frequency of calibration and validation of said data is technically
sufficient, and addressed by Government/contractor agreements and approved
estimating policies and procedures, as well as their successful implementation.
Additional information relating to the Government expectations when developing
CERs is included in the Defense Contract Audit Manual Section 9-1000. In terms
of strict interpretation of the law, the key is full disclosure of all factual pricing
data, and not whether the said data was necessarily relied upon, particularly for
updates and other out-of-cycle data. However, while compliance may be no
longer an issue in such instances, the excluded data may be of such significance
in determining a fair and reasonable price that its very exclusion may become an
issue during the ensuing negotiation.
How to handle out-of-cycle data of significance and how to determine what
significance means is perhaps best addressed by agreement so as to avoid such
scenarios. In fact, the findings of the Parametric Estimating Reinvention
Laboratory support that, in general, the best way for contractors to comply with
the TINA requirements is to establish adequate parametric estimating procedures
that have been coordinated with their Government representatives. These
representatives include the contractor's primary customers, DCMA, and DCAA.
The next section discusses some key elements of parametric estimating
procedures.
7.1.2
Additionally, to ensure data are current, accurate, and complete as of the date of final price agreement, contractors must establish practices for identifying and analyzing any significant data changes so as to determine if out-of-cycle updates are needed. A contractor's estimating policies and procedures should identify the circumstances when an out-of-cycle update is needed.
Examples of some events that may trigger out-of-cycle updates include:
x
Restructuring/merging.
Explanation for the need and benefit of estimating the costs at a lower level;
A general reconciliation of the lower level detail to the level at which the
costs are accumulated and reported.
The only other CAS issue, specifically a CAS 401 issue, that is of concern and
merits attention is the additional requirement that the estimating techniques be
consistent with the disclosed practices in the CAS Disclosure Statement, not to be
confused with disclosure under the Estimating System. In most instances, it is
unlikely that the Disclosure Statement would be of such minute detail that
inconsistencies would occur. Nonetheless, due diligence is called for.
Likewise, when using complex parametric models, or when tasking estimators who are unfamiliar with the CAS Disclosure Statement with developing in-house models, a mapping between the elements of cost described in that Disclosure Statement and those in the parametric model is recommended to ensure consistency between the two.
7.1.4
7.1.4.1
In situations where cost or pricing data are required, FPRAs are certified each time a specific pricing action is negotiated. When FPRAs are used for CERs, it is important to have monitoring procedures in place, based upon specific cost reports to be made available at regular intervals.
The key to formulating the frequency of reports is to make certain that if the
CERs are no longer valid, sufficient advance identification can be made to
mitigate further windfall profits or losses by exercising the rescission provisions
in a timely fashion. As such, it is essential to have effective processes for
identifying any unusual events that may have a significant effect on the CERs,
such as changes in production processes or company reorganizations. As an
example, during the Parametric Estimating Reinvention Laboratory, one of the
teams established a process for monitoring CER accuracy. The IPT defined a
range of acceptability (or tolerance level) for each CER, and established processes
to monitor CER accuracy on a monthly basis to identify any anomalies, such as
CERs falling outside the defined range. The IPT analyzed these anomalies and
identified follow-up activity to update or improve the CERs.
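A sketch of that kind of monthly check, with invented CER names, tolerance bands, and values: flag any CER whose predicted-to-actual ratio falls outside its defined range of acceptability.

    # Invented tolerance bands (acceptable predicted/actual ratio) and one
    # month's predicted vs. actual values for three hypothetical CERs.
    TOLERANCE = {"CER-fab": (0.90, 1.10), "CER-assy": (0.85, 1.15),
                 "CER-test": (0.90, 1.10)}
    month = {"CER-fab": (1050.0, 1000.0), "CER-assy": (800.0, 950.0),
             "CER-test": (410.0, 400.0)}

    for cer, (predicted, actual) in month.items():
        lo, hi = TOLERANCE[cer]
        ratio = predicted / actual
        status = "OK" if lo <= ratio <= hi else "ANOMALY - analyze and update CER"
        print(f"{cer}: ratio {ratio:.2f} -> {status}")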
7.1.4.2
7.1.5 Subcontracts
The treatment of subcontract costs and compliance with FAR regarding
subcontract costs has been one of the most significant challenges to implementing
parametric techniques in proposals. However, this is an issue for all estimating
approaches. Therefore, it is imperative that the treatment of subcontract costs in
the proposal and negotiation process is addressed early and an agreement is
reached between the contractor and the Government.
FAR 15.404-3 defines cost or pricing data requirements specific to subcontracts. It states that a prime contractor is required to obtain cost or pricing data if a subcontractor's cost estimate exceeds $650,000*, unless an exception applies. A prime contractor is also required to perform cost or price analysis on applicable subcontractor estimates to establish the reasonableness of the proposed prices. Prime contractors are required to include the results of the analyses with proposal submissions. For subcontracts that exceed the lower of $10,000,000 or 10 percent of a prime contractor's proposed price, the prime contractor is required to submit the prospective subcontractor's cost or pricing data to the contracting officer. If the subcontractor does not meet this threshold, but the price exceeds $650,000, the prime contractor is still required to obtain and analyze cost or pricing data but is not required to submit it to the Government.
Subcontractors should be responsible for developing their own estimates since
they have the experience in pricing the specific good or service they will be
providing. Subcontractors are in the best position to include the cost impacts of
new events such as reorganizations, changes in production or software
engineering processes, and changes in prices of key commodities.
* This is the current threshold at the time of this update. The contracting officer should check the current threshold.
For these reasons, it is a best practice for a prime contractor to obtain the
necessary cost or pricing data directly from its subcontractors. Prime contractors
can work with their subcontractors to streamline costs and cycle time associated
with preparation and evaluation of cost or pricing data. This means that
subcontractors can use parametric estimating techniques to develop their quotes,
provided their models are adequately calibrated and validated.
The Government may decide that adequate evaluation of a prime contractor's proposal requires field pricing support (an assist audit) at the location of one or more prospective subcontractors at any tier. This may be based on the adequacy of the prime contractor's completed cost analysis of subcontractor proposals. The prime contractor's auditor will also evaluate the subcontractor's cost or pricing submission. The prime contractor will advise the contracting officer if they determine there is a need for a Government assist audit. If the prime cannot perform an analysis of the subcontractor's cost or pricing submission in time for proposal delivery, the prime will provide a summary schedule with their proposal. That schedule will indicate when the analysis will be performed and delivered to the Government.
The following items generally indicate a need for a Government assist audit.
7.1.6 Best Practices
Properly calibrated and validated parametric techniques can comply with all
Government procurement regulations. Establishing effective estimating system
policies and procedures specific to the proposed parametric techniques ensures
consistent compliance with the applicable statutes and regulations, provided they
are successfully implemented and enforced through periodic internal reviews.
Using teamwork, IPTs, and addressing the best practices discussed in this chapter,
contractors can comply with all Government procurement regulations while
making parametric estimating techniques an accurate and reliable tool for
streamlined estimating processes. For example, during implementation of a
parametric-based estimating system, Government team members can provide
feedback to the contractor concerning their expectations related to the estimating
system disclosure requirements. In addition, the Government team members can
work with the contractor to address any other regulatory concerns on a real-time
basis so improvements can be initiated before actual proposals are submitted.
Due to the sensitivity of subcontract costs, the treatment of these costs should be addressed early between the contractor and the Government, preferably as part of an IPT. In addition, an MOU should be developed relating to the treatment and disclosure of subcontract costs. See Appendix F.
7.2
Credible data;
Good judgment;
While the following Figure 7.1 captures the process flow for auditing both parametric estimating systems and proposals, the Government often employs an IPT approach that includes not only DCAA but also DCMA and the major buying activities, thereby leveraging all available technical expertise, such as engineers and pricing people with knowledge of product lines and software modeling.
In general, when evaluating parametric estimating systems, the focus is on:
x
Figure 7.1 (audit process flow): Government procurement regulations drive the evaluation of the parametric estimating system, covering policies and procedures, verifiable data, and consistent application of policies and procedures, along with CERs and models, data analysis and validation, calibration and validation, and database adjustments and updates, which leads to the evaluation of proposals based on parametric techniques.
7.2.1
Provide for internal review of, and accountability for, the acceptability of
the estimating system, including comparison of projected to actual results,
and analysis of differences;
7.2.2.2 Credible Data
Contractors are encouraged to use historical data, whenever feasible, as the basis
of estimate. Technical representatives at both the buying command and DCMA
may have specific knowledge as to the appropriateness of that data and be in a
position to provide valuable technical support to DCAA accordingly. For
example, actual costs may reflect gross inefficiencies due to initial engineering or
manufacturing difficulties encountered, but since resolved.
Parametric techniques generally require the use of cost, technical, and other
programmatic data. Figure 7.2 is an example of the types of data customarily
collected.
Cost data (e.g., labor hours, material costs, overhead rates);
Technical data (e.g., weight, software size, engineering complexity);
Programmatic data (e.g., schedule, quantities).
Figure 7.2 Types of Data Customarily Collected
7.2.2.4 Complex Models
Auditors and technical reviewers will concentrate on calibration and validation
techniques employed, as well as the corresponding policies and procedures,
keeping in mind key inputs and outputs of the model being adapted.
Typical inputs for hardware and software models include:
Weight;
Software size;
Quantity;
Development language;
Engineering complexity;
Manufacturing complexity;
Software tools;
Schedule;
Personnel capabilities.
Figure 7.3 Sample Inputs for Complex Parametric Models
For the contractor, data collection and analysis is generally the most
time-consuming part of the calibration process, making it essential that the
calibration methodology be defined before data collection begins. A significant
portion of that data should preferably be obtained from the organization's
information systems, while other data, such as technical data, are usually obtained
from a variety of sources ranging from manufacturing databases to engineering
drawings. In addition, contractors may interview technical personnel to obtain
information not readily available, such as information germane to a specific
product or process. The data are then normalized.
Complex models generally have their own classification system for cost accounts.
As a result, companies must establish a mapping procedure to properly relate
their cost accounts to those used by the model, both to ensure accurate
predictions and to preclude potential noncompliance with the Estimating System
and/or Cost Accounting Standards Disclosure Statements. Additionally, contractors
should document any adjustments made to the data, including assumptions and
associated rationale, during mapping.
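To make the mapping step concrete, the short sketch below (Python, with entirely hypothetical account codes and a simplified category scheme) shows one way a contractor might roll company cost accounts up to a model's categories while logging every mapping decision for the documentation trail described above.

    # Illustrative only: hypothetical account codes and model categories.
    ACCOUNT_MAP = {
        "1100-DirLabor-Fab":  "Manufacturing",
        "1200-DirLabor-Engr": "Engineering",
        "2100-Matl-Raw":      "Material",
        "2200-Matl-Purch":    "Material",
    }

    def map_accounts(actuals, notes):
        """Roll company account actuals up to model cost categories.

        actuals -- {company_account: cost}
        notes   -- list accumulating documentation of each mapping/adjustment
        """
        mapped = {}
        for account, cost in actuals.items():
            category = ACCOUNT_MAP.get(account)
            if category is None:
                notes.append(f"unmapped account {account}: excluded; document rationale")
                continue
            mapped[category] = mapped.get(category, 0.0) + cost
            notes.append(f"{account} -> {category}: {cost:,.0f}")
        return mapped

    notes = []
    mapped = map_accounts({"1100-DirLabor-Fab": 250_000.0,
                           "2100-Matl-Raw": 480_000.0}, notes)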
Once data are collected, normalized, and mapped, they are entered into the model.
The model then computes a calibration or correction factor that is applied to that
data. The result is a complex model adjusted to represent the organization's
experience, or "footprint." As such, contractors must document the calibration
process fully, including the key inputs, input parameters, calibration assumptions,
results of interview questionnaires (i.e., names of people interviewed, dates, and
information obtained), cost history, and the calibration estimate. Any
changes in these calibration factors over time must also be documented.
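A minimal sketch of the idea, assuming the simplest possible form — a single ratio-of-sums correction factor. Commercial models compute their own, generally more elaborate, factors internally; this only illustrates what such a factor does.

    def calibration_factor(actuals, model_estimates):
        """Derive one overall correction factor from paired history."""
        if len(actuals) != len(model_estimates):
            raise ValueError("need one model estimate per historical actual")
        return sum(actuals) / sum(model_estimates)

    # Hypothetical history: recorded costs vs. the model's uncalibrated outputs.
    factor = calibration_factor([1150.0, 980.0, 1310.0],
                                [1000.0, 900.0, 1200.0])

    # The factor adjusts new raw model outputs toward the company's footprint.
    calibrated_estimate = factor * 2000.0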
Auditors and technical reviewers, when evaluating the calibration process, will
need to note whether:
The data points used in calibration are the most relevant to the product being
estimated;
Historical data used for calibration can be traced back to their sources;
Key input values and complexity factors generated by the model are
reasonable;
Validation
7.2.3 Parametric-Based Proposals
The review of a proposal whose costs are based on parametric techniques should be
relatively simple and straightforward, provided (i) it is based upon established
parametric estimating policies and procedures, (ii) those policies and procedures
have been deemed adequate and compliant with procurement law and regulation,
and (iii) the parametric technique is properly calibrated and validated. As such,
emphasis is placed upon determining that estimates are consistent with those
policies and procedures, and that any deviations are adequately justified in writing.
To facilitate Government review, proposals should contain sufficient
documentation that Government reviewers can use to evaluate the reasonableness
of estimates. Dependent upon the model or parametric technique being used, a
basis-of-estimate (BOE) for a parametric cost estimate should include the
following types of information:
All cost drivers considered in preparing the estimate (cost and non-cost
parameters).
The types of materials (raw, composite, etc.) and purchased parts with
procurement lead times required to complete the tasks being estimated.
The types of direct labor and/or skill mix required to perform the tasks being
estimated (e.g., manufacturing, manufacturing engineer, software engineer,
subcontract manager).
In the case of CERs, the BOE should explain the logical relationship of all
cost-to-cost and/or cost-to-non-cost estimating relationships used in the estimate. It
should include (i) a description of the source of historical data used in
determining the dependent and independent variable relationships and its
relevance to the item or effort being estimated, (ii) a description of the statistical
analysis performed, including the mathematical formulas and independent
variables, and an explanation of the statistical validity, and (iii) any adjustments
made to historical data to reflect significant improvements not captured in history,
such as changes in technology and processes.
When a commercial model is used in preparing an estimate, the BOE should
describe the estimating model used, and identify key input parameter values and
their associated rationale, as well as model outputs. The BOE should describe
how the model was calibrated, that is, describe the process for developing factors
that adjust the model's computations to more closely reflect the contractor's
specific environment. Auditors and other Government reviewers will assess the
database's validity to ensure currency, accuracy, and completeness by checking
that the most current and relevant data point(s) were used for calibration.
Accordingly, identification of the historical database in the BOE is essential.
Another key assessment is determining how the processes and technologies to be
used on the programs being estimated compare with those of the programs
contained in the calibration database. Use of technical support at
DCMA and the buying command by DCAA may be appropriate. In addition, the
BOE should describe how the model has been validated.
Additionally, contractors may submit proposals for forward pricing rate
agreements (FPRAs) or formula pricing agreements (FPAs) for parametric cost
estimating relationships to reduce proposal documentation efforts and enhance
Government understanding and acceptance of the estimating system. The basis of
estimate should include the information described above for CERs and should
clearly describe the circumstances in which the rates should be used; the data used
to estimate the rates must be clearly related to those circumstances and traceable
to accounting and/or operations records.
Also, with the advent of reorganizations and process improvement initiatives such
as single-process initiatives, software capability maturity model improvements,
and technology improvements, adequate procedures should be in place for
quantifying the associated savings and assuring their incorporation into the
estimates. Often such changes are reflected in decreasing complexity values or
downward adjustments to the estimate itself. Regardless of cause, Government
reviewers will evaluate any significant adjustment, including the pivotal
assumptions and rationale, to determine if it is logical, defensible, and reasonable.
7.2.4 Best Practices
The following best practices relate to the Government review process.
Sufficient historical data relevant to the current environment often do not
exist, necessitating the use of other data points for calibration. Accordingly,
auditors should use judgment when evaluating data used for calibration and
validation, while contractors need to establish formal data collection
practices to ensure effective use of parametric techniques and to better
control the time-consuming data-gathering process.
Statistical measures are not the only criteria for determining the validity of
CERs; a variety of tests should be performed, with no single test disqualifying
a CER. Other factors to consider include the logic of the relationships, the
soundness of the data, and the adequacy of the policies and procedures, as
well as the assessed risk associated with the CER versus the effectiveness of
estimating techniques previously used to predict those costs.
7.3
7.3.1
[Figure: CER development process flow — selection of variables (hypothesizing a relationship); normalization (inflation, quantity, and content); testing the relationship; selection of CERs (acceptance of results); validation, with a yes/no decision loop; approval; revalidation (updating); and the CER database.]
7.3.2.1
7.3.2.2
7.3.2.3
predictor should minimize the variability and, hence, lower the risk associated
with the estimate.
The inspection CER discussed above is an example of a linear relationship.
However, an exponential (i = a * l^2) or logarithmic (i = a + 2 log l) relationship
could have been postulated. Linear relationships are often favored because they
are easier to evaluate and comprehend. Nonetheless, it is important not to
arbitrarily rule out non-linear relationships. Improvement or learning curves are
an example of a generally accepted exponential relationship. Additionally, for the
above inspection scenario, an analyst should postulate and test whether the CER
should estimate initial and final inspection costs jointly or separately to achieve
the greatest accuracy.
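As a sketch of how candidate functional forms might be compared, the snippet below fits a linear and a power form to the same hypothetical inspection-hours data. A real analysis would also examine residuals, statistical significance, and the logic of the relationship, not just the fit statistic.

    import numpy as np

    labor = np.array([100., 150., 200., 250., 300.])  # direct labor hours
    insp  = np.array([ 22.,  31.,  41.,  50.,  61.])  # inspection hours

    # Linear form: insp = a + b * labor
    b_lin, a_lin = np.polyfit(labor, insp, 1)

    # Power form: insp = a * labor**b, fitted in log-log space
    b_pow, log_a = np.polyfit(np.log(labor), np.log(insp), 1)
    a_pow = np.exp(log_a)

    def r_squared(y, yhat):
        return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

    print("linear R^2:", r_squared(insp, a_lin + b_lin * labor))
    print("power  R^2:", r_squared(insp, a_pow * labor ** b_pow))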
7.3.2.4
7.3.2.5
7.3.2.6 Step 6: Validation
Validation is the process of demonstrating that the CER or cost model accurately
predicts past history or current experience. This entails demonstrating that the
data are credible, and that the relationship(s) is logical and correlates strongly. In
determining whether a CER or model is a good predictor of future costs, its accuracy
needs to be assessed. As was previously discussed, the best technique is to use
independent test data, that is, data not included in the development of the CER or
model. In limited data situations, however, flexibility is needed to develop
alternate approaches.
Regardless of approach, good judgment is required to determine an acceptable
level of accuracy, because there is no recognized standard level for CERs or more
complex models. In general, CERs and cost models should be at least as accurate
as the prior estimating technique relied upon.
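The holdout comparison can be sketched as follows, with hypothetical numbers throughout; the final test applies the "at least as accurate as the prior technique" criterion stated above.

    def mean_abs_pct_error(actuals, estimates):
        return sum(abs(a - e) / a for a, e in zip(actuals, estimates)) / len(actuals)

    holdout_actuals = [540.0, 720.0, 610.0]   # history withheld from CER fitting
    cer_estimates   = [565.0, 690.0, 640.0]   # CER applied to the holdout inputs
    prior_estimates = [480.0, 810.0, 700.0]   # prior technique, same items

    cer_err   = mean_abs_pct_error(holdout_actuals, cer_estimates)
    prior_err = mean_abs_pct_error(holdout_actuals, prior_estimates)
    acceptable = cer_err <= prior_err   # at least as accurate as the prior method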
7.3.2.7
7.3.2.8
For this reason, FAR 15.406-2(c) permits the use of cut-off dates. This
regulation allows the contractor and the Government to establish defined cyclical
dates for freezing updates. Some data are routinely generated in monthly cost
reports, while others are produced less frequently. Accordingly, in establishing
cut-off dates, the difficulty and costliness associated with securing the requisite
data must be considered in tandem with the costs of actually updating the
database.
Annual updates, for instance, may be appropriate for elements that involve a
costly collection process, provided the impact of not updating more frequently
remains insignificant. It follows that an annual update of some or all of the data
implies that those portions of it may be as much as eleven months old at the time
of consummating negotiations, which may be deemed an acceptable risk under an
approved estimating system.
As such, a contractor's procedures will need to specify when updates normally
occur, and indicate the circumstances and process for exceptions. Also,
contractors need to have procedures established to identify conditions that warrant
out-of-period updates. For example, a contractor may need to update its
databases outside the normal schedule to incorporate significant changes related
to such issues as process improvements, technology changes, reorganizations, and
accounting changes.
7.3.3
7.3.3.1 Existing Products
It is necessary to first understand the theory behind a given parametric
formulation before determining the limits involved in projecting outside its data
range, and the associated risks. For the commonly used learning curve, the
reduction in hours projected as more units are produced results from a
combination of operator learning, more efficient use of facilities, and production
line improvements possible with increased production rates. Other factors, such
as improved training and supervision, can favorably affect learning, but regardless
of causation, there is always a limit to the amount of improvement that can be
achieved.
In some cases, there can actually be a loss of learning such as when production is
disrupted, or a production line reaches full capacity (based on current
manufacturing processes). In the case of the latter, older or slower equipment may
be used and/or a second shift established to expand capacity.
Either remedy involves absorption of additional costs per unit of production,
which is expressed as a loss of learning. For example, if additional operators
must be hired, then the new operators begin at unit one on the learning curve. To
some degree, learning may take place at a faster rate for the newer group than for
the initial group, due to lessons learned, process improvements, and the resulting
training and mentoring. However, this cannot be assumed.
Nonetheless, learning curves may be appropriately used, provided the projected
efficiencies proportionately impact both the dependent and independent variables,
as corroborated through validation testing.
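For illustration, the sketch below evaluates a unit-theory learning curve with an assumed 90 percent slope and shows a newly hired operator restarting at unit one, as described above. The first-unit hours are hypothetical.

    import math

    def unit_hours(t1, slope, n):
        """Hours for unit n on a curve with the given slope (e.g., 0.90)."""
        b = math.log(slope, 2)      # exponent is log2 of the slope
        return t1 * n ** b

    t1 = 1000.0                         # hours for unit 1 (hypothetical)
    print(unit_hours(t1, 0.90, 100))    # experienced line at unit 100
    print(unit_hours(t1, 0.90, 1))      # a new operator restarts at unit one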
7.3.3.2
7.3.3.3
Breaks in Production
As was previously discussed, a break in production results in the loss of learning
for both operators and supervision. Further, experienced operators may no longer
be available when production resumes, thereby necessitating the use of other
personnel. Even when experienced personnel remain through a gap in production,
some loss of learning is expected. Additionally, if the break is sufficiently long,
the line may be dismantled and require reconstruction, necessitating the
development of new method sheets and routings, as well as the replacement of
tooling. In fact, after a long break, resumption may resemble the start-up of a new
program. Perhaps this area involves more judgment than most for the experienced
estimator and evaluator, because it is encountered so infrequently during a typical
working lifetime.
7.3.5
[Figure: Loss of learning — hours per unit (approximately 200 to 1,000) versus cumulative units (1 to 100), comparing continuous assignment, new operator, and returning operator learning curves.]
7.3.6 Best Practices
7.3.6.1 Accuracy Assessments
Limited accuracy may be acceptable for minor cost elements. However, major
cost elements should be as accurate as possible, and will be subject to greater
scrutiny. Whenever the results in any part of the evaluation process are
questionable, alternative methods and hypotheses need to be considered. If the
alternatives produce results similar to those under review, the accuracy of the
modeling process is confirmed. If the results are different, then further
examination is warranted.
7.3.6.2
Use of Specialists
During the implementation of parametric estimating systems, the use of experts in
statistics and/or the given model being addressed may be expedient and definitive
in resolving challenges and conflicts. It also serves to make estimators,
evaluators, and auditors more familiar and comfortable with parametric tools. For
Government evaluators and negotiators, the use of specialists should be in
consonance with local procedures and policies, if they exist. Otherwise, the need
should be based upon judgment, and the identification of specialists or
prospective specialists made by referral or some other rational means.
7.3.6.5
CHAPTER 8
OTHER PARAMETRIC APPLICATIONS
8.1
8.2
Has the parametric model been tested recently to ensure it is still providing
accurate estimates?
The materiality and risk associated with the estimate if the model provides
an inaccurate result.
8.3
8.4 General Applications
Parametric techniques are used for a variety of general applications, as shown in
Figure 8.1. There are other possible applications. The number is limited only by
the imagination of the user.
Forward Pricing Rate Models
Risk Analysis
Conceptual Estimating
Design-to-Cost (DTC)
Proposal Evaluation
Trade Studies
Sensitivity Analysis
Affordability
Cost Spreading
Cost Realism
Sizing parameters
MTBF, MTTRs
Make-buy analysis
Figure 8.1 General Parametric Applications
Most of the applications listed in Figure 8.1 are widely used throughout
industry and the Government, and guidance on their implementation is available.
The following sections provide some general descriptions and examples relating
to most of the general applications listed in Figure 8.1.
8.4.1 Forward Pricing Rate Models
8.4.1.1 Example
Using an IPT approach, one contractor developed a forward pricing rate model
that uses forecasted sales and historical cost estimating relationships (CERs) to
develop indirect rates. The process used to develop the forward pricing
indirect expense rates involves:
Sales forecast. The sales forecast is the major cost driver for this model.
Therefore, it must be developed first. An accurate sales forecast is critical
because it is the baseline from which all other costs are generated. A sales
forecast should be developed using the most current plan from a
contractor's various budgeting processes (e.g., business plan, long range
plan, current forecast, and discrete inputs from the operating units).
Total cost input (TCI): TCI is a commonly used method for allocating
G&A expenses to final cost objectives and is further described in the Cost
Accounting Standard 410. TCI is calculated as a percentage of sales. A
TCI/Sales ratio would be developed based on historical experience, with
the result adjusted for any anomalies that may exist.
Direct labor/materials base. Direct labor and direct material bases are
developed as a percentage of TCI. These would also be based on
historical trends, adjusted to reflect any significant changes that may have
occurred, or are expected to occur, in the near future.
Fixed pool costs remain constant from the prior year and are adjusted for
any known changes (e.g., purchase of a new asset that would result in a
significant increase in depreciation) and escalation.
Variable pool costs are calculated by applying the variable pool cost
portion of the rate (based on historical trends) to the current forecasted
base. As an example of an adjustment for a known change to variable costs,
a projected pool may need to be adjusted for large-dollar nonrecurring items such
as environmental clean-up costs. Forecasted expenses would also need to be
adjusted to reflect implementation of new processes. For example, if a contractor
implemented a new quality system, its costs should be reflected in the forecasted
expenses. The fixed, variable, and semi-variable costs would be added to arrive at
total pool costs.
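The flow described above can be sketched end to end as follows. Every ratio shown is hypothetical; in practice each would come from the contractor's historical trends, adjusted for known changes and anomalies.

    # End-to-end sketch of the rate build-up; all ratios are hypothetical.
    sales_forecast = 120_000_000.0     # from the business plan / current forecast
    tci_to_sales   = 0.83              # historical TCI/sales ratio
    labor_to_tci   = 0.28              # direct labor base as a share of TCI
    fixed_pool     = 9_500_000.0       # prior year, adjusted for known changes
    variable_ratio = 0.65              # variable pool per direct labor dollar

    tci           = tci_to_sales * sales_forecast
    labor_base    = labor_to_tci * tci
    variable_pool = variable_ratio * labor_base
    overhead_rate = (fixed_pool + variable_pool) / labor_base

    print(f"projected overhead rate: {overhead_rate:.1%}")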
8.4.1.2 Evaluation Criteria
These elements should be considered when developing forward pricing rates
using historical CERs:
8.4.1.3
8.4.2
8.4.2.1
When obtaining data for performing independent analysis, the goal is to seek as
many relevant data points as possible. In many cases, patterns or trends will be
apparent in the multiple data points and help in their normalization.
Often, regardless of the source, the results can be a large spread of data points that
require normalization analysis. See the discussion about normalization in Chapter
2.
Are the data in a usable condition?
The basic purchase order data raise a number of questions related to the data (the
same issues apply to in-house data). Figure 8.2 lists some typical data details that
need to be assessed as part of the data collection process.
[Figure 8.2: Typical data details to assess as part of the data collection process, grouped into identifying the data, evaluating the data, and normalizing the data.]
[Figure 8.3 Projecting from Historical Data: a trend-line forecast from initial buys through multi-year follow-on buys (MY I, MY II, MY III) over the program period.]
The data set used in this example exhibited extremely stable properties and
produced an excellent modeling result. Do not expect all results to achieve this
close a fit.
8.4.2.2
Generally, without adjustment, the answer is no. However, one of the greatest
strengths of parametric analysis is the inherent flexibility of today's complex
models. Once a model has been calibrated to a known technology or process
baseline, it can be objectively modified to account for most variations. As an
example, a common scenario for one contractor would be the transition from
example, a common scenario for one contractor would be the transition from
aluminum to advanced composite structures. Assume that an F-16 aluminum
landing gear door subcontractor is preparing a quote to produce carbon fiber doors
for an advanced fighter application. The prime contractor has calibrated its
complex model for F-16 landing gear doors.
How can the contractor use its model to develop a "should cost" value for the
new product?
Production quantities;
With this information and basic physical parameters (e.g., approximate weight,
dimensions, material mix) the prime contractor can adjust its existing model. The
most significant change would relate to the material composition of the doors. In
general, the switch to composite materials will ripple through weight, parts count,
learning curve slope, and production process. Most complex models are able to
adjust for the effect of such changes. This basic modeling technique allows the
prime contractor to develop a "should cost" target for an advanced technology
subsystem, while maintaining an audit trail back to the calibrated historical
baseline. Similar techniques can be used with most complex parametric estimating
models.
8.4.2.3
8.4.3
8.4.3.2 Implementation
The CAIV process is highly analytical and technical. It requires skilled
personnel, sophisticated analytic tools, and specific institutional structures to
handle the technical studies required, to track progress, and to make reports
capable of initiating action. In addition, the CAIV concept must be built into the
contract structure so that all parties to the program are properly motivated towards
a best value objective. Key tasks in the CAIV process are:
Technical analysis;
Reports.
8.4.4 Risk Analysis
Risk analysis is another important aspect of the acquisition strategy of most major
programs. The consideration of different, yet possible, program events and
outcomes should lead to a more realistic estimate, in spite of uncertainties
associated with cost models, variability of cost driver metrics, unplanned or
unexpected events, and other factors beyond control of the IPT. By their nature,
all cost estimates have some uncertainty. The number of uncertainties and the
associated cost impact are usually higher early in a program's development. As the
program matures, uncertainties generally decrease as a result of greater design
definition and production experience.
Risk analysis provides an orderly and disciplined procedure for evaluating these
uncertainties, so a more realistic cost estimate can be made. Risk analysis can be
performed using a number of techniques, including parametrics. Capturing
program uncertainty in the variance measures of parametric estimates allows the
analyst to mathematically model and quantify the risk. This method is one
technique for providing financial insight into the technical complications behind
the cost growth witnessed in many programs. Risk analysis provides additional
information and insights to a program's decision makers.
Risk analysis is a process that uses qualitative and quantitative techniques for
analyzing, quantifying, and reducing uncertainty associated with cost or
performance goals. IPTs are generally responsible for evaluating areas of
uncertainty in the evolution of design and process development. The preferred
common denominator for measuring these uncertainties is dollars. Therefore,
most risk analyses are conducted as part of the many cost analyses performed on a
typical program, including:
Although program risk should decrease with time, the risk analysis process is
iterative. On most programs, the risk analysis process is not viewed as a
one-time, "check the box" activity. It is an ongoing management activity that
continues throughout the life of a program.
The cost risk analysis objectives fully support those of the program because the
analysis:
Identifies technical, schedule, and cost estimating risk drivers for use in
risk management exercises.
Enables the current BOEs to reflect the cost and effectiveness of the
planned risk handling strategies.
Depicts how funding levels impact total program and phase specific
confidence levels, assuming constant program execution plans.
The cost-risk analysis process begins with program definition and ends with
management review(s). For a risk analysis to be effective, program definition
must be at least one WBS level deeper than that at which the cost risk analysis is
performed. For example, to support a cost-risk analysis conducted at WBS level
three, program definition is required at WBS level four. This allows the analyst to
capture all reasonable risks, and provides the visibility needed to eliminate
overlaps and gaps in the analysis. Most commercially available parametric
models possess a risk analysis capability.
The major benefit of using parametric tools in the risk analysis process is that
the tools are repeatable in this highly iterative procedure. Many what-if
exercises must be performed during a program's life cycle. The use of
parametric tools is the only practical way to perform these exercises. See
Appendix H for the Space Systems Cost Analysis Group (SSCAG) discussion of
risk.
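As an illustration of capturing program uncertainty in the variance of a parametric estimate, the sketch below runs a simple Monte Carlo simulation over two uncertain cost drivers and reads confidence levels off the resulting cost distribution. The CER, the triangular distributions, and every value are hypothetical assumptions for the sketch, not taken from any particular model.

    # Illustrative Monte Carlo cost-risk sketch; CER and distributions assumed.
    import random

    def cost_model(weight, complexity):
        # hypothetical CER: cost in $K driven by weight and complexity
        return 40.0 * weight ** 0.8 * complexity

    random.seed(1)  # repeatable run, echoing the repeatability point above
    samples = sorted(
        cost_model(random.triangular(900.0, 1300.0, 1050.0),  # weight (lb)
                   random.triangular(0.9, 1.4, 1.1))          # complexity
        for _ in range(10_000)
    )
    p50 = samples[len(samples) // 2]          # 50% confidence level
    p80 = samples[int(0.80 * len(samples))]   # 80% confidence level
    print(f"50%: {p50:,.0f} $K   80%: {p80:,.0f} $K")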
8.4.5
more than making a profit. For instance, a company may be willing to take a loss
on a project if there is significant profitable follow-on potential. Or, as another
example, they may want to strategically place a product in the marketplace. Also,
they may wish to get into a new business arena. There are many reasons to bid on
a project. No-bidding a project may not be an option. Careful consideration
given at the bid/no-bid stage can have an enormous impact on an organization's
bottom line. Parametric tools are very useful at this stage of the business process,
and can help the decision maker.
8.4.5.1 Example
Let's assume that a company is considering proposing in response to an RFP. The
product is well-defined, but it is new, state-of-the-art technology. Clearly, a
cost estimate is a major consideration. How should the cost estimate be performed?
Since the product is defined, parametric tools come to mind. Performing a
bottoms-up estimate at this stage of the program does not seem reasonable; no bill
of materials exists. Some type of top-down estimate appears to be the way to go.
Since at least a preliminary engineering evaluation must be done, a parametric
approach should be adopted.
8.4.5.2 Evaluation Criteria
Once the estimate is complete, management can decide if the project is worth
pursuing. If the cost estimate is within a specified competitive range, a bid
decision could be made. It is important to note that other criteria besides cost are
important considerations. Obviously, program technical competence is also
important. On the other hand, cost can be a "show stopper." If the cost estimate
is too far outside the competitive range, or too much money needs to be invested,
a no-bid decision would be made.
8.4.5.3
8.4.6 Conceptual Estimating
Parametric costing models are powerful tools in the hands of management. If a
software or hardware product has been conceptualized, parametric models can
provide a fast and easy cost estimate. An estimate in the conceptual stage of a
program can be invaluable in management's planning process. Engineering
concepts such as weight, manufacturing and engineering complexities, source
lines of code and so forth are normally available at the time concepts are being
developed. Given this fact, use of parametric costing tools is the only reasonable
way to perform a cost estimate this early in a program life cycle.
8.4.6.1 Example
For example, if a Government program manager has a new technical need (let's
assume a specific software application), and that application can be
conceptualized, a cost estimate can be obtained. The conceptualization will take
the form of a general description: application, platform, programming
language(s), security level, programmer experience, schedule, an estimate of the
SLOC and so forth. Such information will provide a cost estimate using any
number of parametric models. A cost estimate is important early in a program, if
only to benchmark a proposed technical approach.
8.4.6.2
8.4.7
8.4.7.1 Example
Assume, for a moment, that a bottoms-up estimate has been (or is being)
generated for an organization that is proposing on a "must win" program.
Winning the program could be important for a variety of reasons. In any event,
should the organization bet winning the program on just one estimating approach?
If an independent estimate (or two) is performed by a team of people who have no
vested interest in, or little knowledge of, the primary estimate, the benefits of such
an estimate could be enormous. If the ICE supports the primary estimate, then
added confidence is automatically placed on the original estimate.
If parametric tools are utilized for the ICE, many unique estimating criteria
(primarily technical criteria) are used for the cost evaluation. If, on the other
hand, the two estimates indicate significant differences, an evaluation can still
be performed to reconcile them and correct the primary proposal. For instance, if
two estimates show more than a 10% difference (management will determine the
level of significance), a careful review, analysis, and explanation may be in order.
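The significance screen itself is trivial to mechanize, as in the sketch below; the 10% threshold is only the example value, since management sets the actual level of significance.

    # Sketch of the significance screen; management sets the real threshold.
    def needs_review(primary, independent, threshold=0.10):
        return abs(primary - independent) / primary > threshold

    print(needs_review(48_000_000, 54_200_000))  # True -> reconcile the estimates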
In a real-world example, when the two estimates differed by a significant
amount, an analysis revealed that the estimating team had duplicated estimates in
various places within a complex WBS. Without the ICE, the discrepancy never
would have been discovered.
8.4.7.2
8.4.8
8.4.8.1 Example
Design to cost techniques are used extensively in commercial manufacturing. It
stands to reason, then, that many DoD applications would exist. A customer will
not purchase a $10,000 vacuum cleaner, regardless of how good it is. The
product must be manufactured for customer affordability. Based on analyses, cost
targets (sometimes called standards) are allocated to manufacturing operations.
The management expectation is that those standards will be met or exceeded by
the floor operators. The targets are rolled up to the final product. If the final
target is met, the product is affordable and the manufacturing processes are under
control. If the targets are exceeded, the processes may be out of control, and an
analysis must be performed.
8.4.8.2
8.4.9
significant number of parameters that can impact the training costs, they
developed a parametric model that takes into account the various factors
(both cost and operational) and allows them to estimate the total program
costs for various training requirement scenarios.
8.4.9.1 Example
The C-17 Aircrew Training System (ATS) Life Cycle Cost (LCC) model was
constructed around the approved Wright Patterson AFB, Aeronautical System
Division WBS for Aircrew Training Systems as modified for the C-17 ATS
program. Structuring the LCC model around this established WBS provided the
framework so that the model is generic enough to fundamentally analyze any
integrated training system. In addition, the use of this approved WBS allows the
model predictions to be easily integrated and compatible with a prime contractor's
accounting system used to perform cost accumulations, cost reporting, budgeting,
and cost tracking.
Rather than make the C-17 ATS LCC model a pure accounting type of LCC
model, a decision was made to integrate the functional parameter algorithms of
each WBS element with the cost element relationship algorithms of each WBS
element. This caused the model to be more of an engineering type of model in
which the predicted outputs of the model are sensitive to input data, and changes
in the input data to the model (i.e. program and pragmatic data changes). As a
result, the model can be used to make early predictions for program development
and acquisition decisions and can then be re-used during the operations and
support phase to make continuing economic decisions based on actual annual
operational decisions.
The C-17 ATS LCC model was developed using the Automated Cost Estimating
Integrated Tool (ACEIT) modeling environment. See Appendix A for more
information about ACEIT. This environment was selected primarily because of
the flexibility and functional capability that it could supply to an LCC model. As
a result, the model is a tool that can be continuously employed to manage the
life cycle cost of an integrated training system and is not just a single point LCC
estimate.
During the development of the C-17 ATS LCC model, each WBS operational,
functional, and cost element relationship rationale, element algorithm and
estimating methodology, along with other relevant data and information denoting
how an element is modeled, was recorded and documented. At the conclusion of
the modeling effort, this documentation provided the necessary traceability
information and auditing data to verify and validate the model's predictive
capability. This documentation was developed online as an integral part of the
definition and development of the operational, functional and hardware cost
element relationships for each WBS element and associated WBS element
component parts.
8.4.9.2
8.4.10
8.4.11
8.4.12
For example, if a parametric model has already been developed and utilized for a
program or product, the model can be used in a should cost analysis. If a program
has overrun, for instance, the model can be used to evaluate differences between
the proposed program and the one that was executed to determine why. In this
type of application, the benefits of the parametric modeling technique should be
easy, quick and efficient.
8.4.13
8.4.14
8.4.15 Trade Studies
Trade studies always accompany a CAIV or DTC program, but sometimes trade
studies are important in their own right. Trade studies are almost always
used in a commercial business environment, and are more and more frequently
being used in DoD. They are most frequently used to evaluate cost and
performance trade-offs between competing technical designs. The basic idea
behind trade studies is to get the highest performance for the lowest cost. That
does not mean "cheapest"; it means the best performance value, or the best "bang
for the buck." Parametric costing models are used very effectively in trade studies.
Trade studies require multiple sensitivity studies, with rapid turnaround times.
For example, an Army program wanted to evaluate the performance and cost
curve for gasoline and diesel engines in a tank design. Technical parameters were
input into two parametric cost models: one for gasoline engines and the other
for diesel engines. Based on a performance-versus-cost analysis, the diesel
engine was selected. The power of the parametric models was demonstrated
when the models' inputs were easily and repeatedly tweaked. No other
estimating approach would have been nearly as effective.
8.4.16 Sensitivity Analysis
Parametric tools are extremely powerful when sensitivity analyses are needed. As
with trade studies, the ability to easily tweak inputs for even small changes
provides management with a strong analytical benefit. When a parametric model's
input(s) are changed, the resulting cost estimate is available almost
instantaneously. The sensitivity analysis can be quickly repeated as many times
as needed until the desired result is obtained. No other estimating approach can
perform this task as effectively.
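A sensitivity sweep amounts to rerunning the model while varying one input and holding the others constant; in this sketch the CER and all values are hypothetical.

    # Hypothetical CER reused for a one-at-a-time sensitivity sweep.
    def estimate(weight, complexity=1.1):
        return 40.0 * weight ** 0.8 * complexity  # cost in $K

    baseline = estimate(1000.0)
    for pct in (-10, -5, 0, 5, 10):
        w = 1000.0 * (1 + pct / 100)
        delta = estimate(w) / baseline - 1.0
        print(f"weight {pct:+d}% -> cost {delta:+.1%}")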
8.4.17
8.4.17.1 Example
Lockheed Martin Astronautics (LMA) signed up as a Reinvention Laboratory
member to test the PRICE H Model as a primary Basis of Estimate (BOE) based
on solid, auditable and verifiable data. In coordination with the local Defense
Contract Management Agency (DCMA) and Defense Contract Audit Agency
(DCAA), LMA developed an implementation plan to calibrate and validate the
use of the PRICE H Model. The plan included joint training, writing a procedure
Lessons Learned
The following lessons were identified during the LMA project and should be
considered whenever parametric estimating techniques are used to develop a
BOE:
You will never have all the data desired, but you need to determine what data
are critical.
8.4.18
all key performance parameters (KPPs) be costed. This activity has even increased
with the constant changes occurring in the funding of Government programs.
Today, most contractor and Government organizations have affordability groups.
Frequently, the quantity that is anticipated when a solicitation is issued can
change significantly during the negotiation and funding process. Parametric
techniques can be effectively used to address the cost variances associated with
changing quantities without requiring the solicitation of new bids, and provide a
level of comfort relating to the affordability of the new quantities that reduces the
risk for both the Government and the contractor.
In this section we will provide an example of where this was effectively used
during a source selection on a $200 million Government contract for the
procurement of the Joint Tactical Information Distribution System (JTIDS). The
JTIDS system is used on several different weapon systems by all four armed
services and NATO. This made the determination of future requirements very
difficult.
8.4.18.1
E(C) = Σ (from i = 1 to M) Σ (from j = 1 to Ni) qij · cij · P(qij)

where:
qij = jth possible value of q for line item i;
cij = bid unit price (cost) when buying qij units of line item i;
P(qij) = probability of buying qij units of line item i;
M = number of different line items on the contract;
Ni = number of possible quantities for line item i;
E(C) = expected cost.
For each line item, it is important to note that the probabilities must sum to
one.
The use of this formula allowed the bidders to conduct internal what-if analyses
to arrive at their desired bottom line.
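Evaluating the formula for a single hypothetical line item looks like this; extend it by summing the same expression over additional line items.

    # One hypothetical line item; sum the same expression over all M items.
    quantities = [100, 200, 300]                      # possible values q_ij
    unit_price = {100: 52.0, 200: 47.0, 300: 43.0}    # c_ij, bid unit price
    prob       = {100: 0.25, 200: 0.50, 300: 0.25}    # P(q_ij)

    assert abs(sum(prob.values()) - 1.0) < 1e-9       # probabilities sum to one

    expected_cost = sum(q * unit_price[q] * prob[q] for q in quantities)
    print(f"E(C) for this item = {expected_cost:,.0f}")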
8.4.18.2
8.4.19 Cost Spreading
Parametric estimating tools can be used to develop time-phased profiles for the
total costs of a project. A top-level parametric model was developed, based on a
database of 56 NRO, Air Force, and Navy programs, and was used to estimate the
time to first launch. This study included space satellite acquisition, integration,
system engineering, and program management costs for incrementally funded
contracts.
The process used in the study identified above involved three steps: (1)
estimating the time from contract award to launch; (2) developing a time-phased
expenditure profile; and (3) converting cost to budget. The first step required
an independent estimate of the schedule.
The database referred to above was used as the basis for the following top level
schedule model that estimates the time to first launch:
Duration (months) = 17.0 + 0.87 × W^0.406 / (DL × PL)^0.136
8.5 Summary
There are many uses of parametric tools other than to develop estimates for
proposals. Such uses include, but are not limited to:
CAIV applications;
CAIV applications;
EACs;
Design-to-cost;
Risk analysis;
Budget planning;
Several examples were presented in this chapter. The use of parametric tools is
limited only by the end-user's imagination.
CHAPTER 9
INTERNATIONAL USE OF PARAMETRICS
Within this chapter, the term "International" applies to any geographic area outside
of the United States of America. The chapter contrasts USA and International
application of parametric estimating in three topical areas: general applications,
examples, and regulatory considerations. General applications deal with
differences in the way parametric estimating is applied between the International
country and the USA. Examples such as basis of estimate (BOE) are presented to
illustrate differences. Regulatory considerations are presented from the
perspective of an organization responding to an International country request for
information (RFI)/request for proposal (RFP).
The International countries covered within this chapter include France, Germany,
and the United Kingdom; a section is devoted to each. Other countries have not yet
responded with an input; however, during the 2007 review period there is an
opportunity for additional contributions.
This chapter:
9.1 France
9.1.1 General Applications
In France, parametric cost estimating is applied in much the same way as it is
within the USA. Most differences are nominal, involving naming conventions.
9.1.2 Examples
Pending.
9.1.3 Regulatory Considerations
In 2002, the President of the Republic and the French Government restated their
intention to provide France with defense forces in keeping with the nation's
security interests and international ambitions. This intention is reflected in the
2003-2008 military programming law, which, in a demanding economic context,
provides €15 billion per year for defense investments. In 2004, the armament
program management reform highlighted two objectives for Ministry of Defense
activities: (a) to reinforce Government project ownership, and (b) to help
develop the defense industrial and technological base at the national and
European level. The Ministry of Defense is therefore responsible for building a
high-performance defense system consistent with Government priorities. To this
end, it is implementing a procurement policy aimed at providing the French
armed forces with the equipment they need to accomplish their tasks.
This policy is based on a principle of competitive autonomy, built around two
complementary goals: optimizing the economic efficiency of investments made
by the Ministry of Defense to meet armed forces requirements and guaranteeing
access to the industrial and technological capabilities on which the long-term
fulfillment of these requirements depends. Economic efficiency must be among
the Ministry's top priorities. Priority shall be given to market mechanisms and
competitive bidding, which provide great leverage in achieving competitiveness
and innovation.
9.1.3.1 Marketplace Categories
The Ministry of Defense seeks to maintain and develop an industrial and
technological base from three categories of marketplace segments.
The first category groups together equipment that can be acquired through
cooperation with partner nations or allies. Equipment in this category can
be procured on the European market, in particular, or manufactured
through European cooperation agreements.
The third category includes equipment for which the Ministry of Defense
turns to the global marketplace. This includes common equipment that
can be procured from a large number of providers (mobile support
equipment, camouflage systems, etc.) and a few specialized systems of
small quantities acquired through existing equipment.
9.1.3.2
Preparing for the future, which entails carrying out research, controlling
technologies, and preparing the industrial resources required for
manufacturing future weapon systems;
9.1.3.3 Competitive Bidding
Procurement methods implemented by the Ministry of Defense are based on the
use of competitive bidding whenever possible and making prime contractors
responsible by obtaining commitments to results. The use of new procurement
and financing methods is also encouraged.
A market-based approach with competitive bidding significantly contributes to
technical and economic emulation and helps to improve the service provided. It
provides a suitable framework for achieving a trade-off between the need to meet
the public buyer's requirements at the best price and the expectations of vendors,
who are justifiably concerned with the profitability and long-term future of their
company. It also has a revealing impact on the competitiveness of the defense
industrial and technological base. Competition is therefore desirable and is
sought within an area consistent with the required degree of autonomy,
particularly within the frontiers of Europe, which is the reference area.
As the consolidation of French and European industry has reduced the number of
potential vendors in some fields, the situation often arises where only one
company is in a position to act as prime contractor for an armament program. In
such cases, the DGA (French Armament Procurement Agency) requires the prime
contractor to open contracts for subsystems and equipment to competition.
9.1.3.4
9.1.3.5 Grouped Orders
Grouped orders give prime contractors a clearer view of future work load and
allow them to organize their procurements, investments, and production more
efficiently on the basis of order books committing the Government over a period
of several years. This approach is to the benefit of both parties and, in return, the
Ministry of Defense expects substantial price reductions and better control of
obsolescence from prime contractors.
9.1.3.6
9.1.3.7 In-Service Support
The in-service support of defense equipment is crucial to operational readiness.
Through-life support (TLS) is a major economic consideration. Improving
equipment readiness and reducing support costs are among the Ministry's top
priorities. Equipment support is considered right from the armament program
preparation stage. Through-life support is procured via suitable procedures and is
mainly a matter for the armed forces. In accordance with the principle of
competitive autonomy, competitive bidding is carried out as broadly as possible.
Use of service level agreements, where contractors for support services can be
made responsible through commitments to availability are encouraged. In
general, "service-type" support operations can be ordered through innovative
procurement procedures.
9.1.3.8 References
9.2 Germany
9.2.1 General Applications
In Germany, parametric cost estimating is applied in much the same way as it is
within the USA. Most differences deal with naming conventions. For example,
funding of defense projects in Germany requires a document called Approval for
Realization. Among the contents of this document are: implementation
alternatives analyzed and their assessment; justification and description of the
solution selected; time and cost plans, updated and detailed; optimization of the
overall balance between performance, time and cost; detailed information on
expenses during the risk reduction phase; and other items usually associated
with Exhibit 300 submissions for IT project funding approval by the US Office of
Management and Budget (OMB).
9.2.2 Examples
Pending.
9.2.3 Regulatory Considerations
The Bundeswehr (Federal Defense Force) is tasked by Basic Law (Grundgesetz)
with the duty of providing national defense. To be able to accomplish this
mission and the associated tasks, the armed forces must be provided with the
necessary capabilities by making available the equipment required.
Article 87b of the Basic Law assigns the task of satisfying the armed forces'
requirements for materiel and services to the Federal Defense Administration.
The contracts required for providing the necessary equipment to armed forces are
awarded to industry, trade, and commerce by the designated civilian authorities of
the Federal Defense Administration in compliance with the awarding regulations
and directives of the Federal Government.
9.2.3.1
Once the preconditions for service use or early partial use, as the case may be,
have been achieved, the Approval for Service Use step will be taken. If available
products are procured in an unmodified condition, appropriate measures need to
be taken during the analysis phase so that the Approval for Service Use can be
incorporated into the Final Functional Requirement/Approval for Realization.
The introduction phase is concluded by the phase document Final Report
following the completion of all implementation activities.
All measures taken during the in-service phase are aimed at maintaining the
operational capability and safe operation of the equipment at operational
conditions and within the scope which is legally permissible until the time of
disposal. If, in the course of the in-service phase, product improvement measures
are necessary, a new development cycle shall be initiated in accordance with these
provisions.
9.2.3.2
9.2.3.3 Contract Awards
When awarding contracts, the Bundeswehr must comply with contract awarding
regulations. National or international awarding procedures are applied depending
on the type and extent of required performance.
9.2.3.4 Contract Terms
The drafting of contracts is based on the principle of freedom of contract. There
are no special legal provisions governing the contents of contracts with public
customers. In accordance with the principle of self-commitment of the
administration, however, the procuring agencies are obliged to follow uniform
administrative guidelines when contracting. To become legally effective, general
contract terms must be clearly identified as contractual provisions. A contractor's
general terms and conditions are not accepted.
9.2.3.5 References
9.3 United Kingdom
9.3.1 General Applications
In the United Kingdom, parametric cost estimating is applied the same way as it is
within the USA. See Chapter 7 for a discussion on government compliance.
9.3.2 Examples
Pending.
9.3.3 Regulatory Considerations
The Acquisition Management System (AMS) written and maintained by the
Defence Procurement Agency (DPA) contains guidance on how programme
offices, and occasionally Industry, should prepare a Business Case (BC) for
Investment Approvals Board (IAB) submissions. The Director General
(Scrutiny & Audit) issued a requirement for the use of properly calibrated and
validated cost models (including parametric estimating techniques) on internal
UK Ministry of Defence (MoD) BC submissions to the IAB.
This requirement is for all cost models to undergo a validation and verification
(VnV) process. This will lead to the proper and consistent use of calibrated and
validated cost estimating techniques in all UK MoD IAB submissions. Since
much of the IAB Business Case programme costs are founded on Industry
submissions, it follows that the VnV process must apply equally to costs estimated
both internally and externally.
This section highlights the elements of the UK Government and MoD
procurement requirements. The section also discusses the key elements that
should be included in an organisation's parametric estimating system policies and
procedures.
9.3.3.1
Cost data (e.g., labor hours, material costs, scrap rates, overhead rates,
general and administrative expenses (G&A));
Cost or pricing data does not include judgmental data, but does include the data
on which the judgment is based. Like all traditional estimating techniques,
parametric estimates contain judgmental elements. Any judgmental elements are
not subject to certification; however, they should be disclosed in a contractor's
proposal since they are subject to negotiation.
9.3.3.2 Competitive Procurement
UK MoD does not apply Defcon 643 to competitive bids; Defcon 648 may be applied
in the case of follow-on contracts that require single-source price fixing and
therefore access to contractor data to agree follow-on prices. In competition,
only data
collected and clarified at the proposal stage will be available to Government cost
estimators. Therefore it is important that all data and other assumptions
necessary for consistent and comprehensive cost analysis are clearly identified
and requested from the bidder at the tender stage. These data must include the
tools and calibration data used by a contractor to generate the tender prices.
Examples of relevant cost and risk questionnaires are held by PFG.
9.3.3.3 Estimating Systems
Cost estimating systems are critical to the development of sound price proposals
and cost forecasts. Sound price proposals provide for reasonable prices for both
the contractor and the Government. Good practice criteria state that an adequate
system should use appropriate source data, apply sound estimating techniques and
good judgment, maintain a consistent approach, and adhere to defined estimating
policies and procedures. The key issue here is to record and agree the data and
assumptions used to generate the cost estimates; this record is normally referred to
as a Master Data and Assumption List (MDAL) and is similar to the US CARD
system.
A key estimating system policy and procedure for parametric estimating relates to
the frequency of calibration (or database updates). Calibration is defined as the
process of adjusting the general parameters of a commercial parametric model so
it reasonably captures and predicts the cost behavior of a specific firm. Data
collection activities that are necessary to support parametric estimating techniques
(including calibration processes) tend to be expensive. As a result, parametric
databases cannot always be updated on a routine basis. Therefore, the use of
cut-off dates is encouraged. When used, cut-off dates should be defined for all
significant data inputs to the model, and included in a company's estimating
system policies and procedures. In addition, contractors should disclose any
cut-off dates in their proposal submissions. The parties should revisit the relevancy of
the established dates before settling on final price agreement and seek updates in
accordance with the contractors disclosed procedures. When cut-off dates are
not used, companies should have proper procedures to demonstrate that the most
current and relevant data were used in developing a parametric based estimate.
To ensure data are current, accurate, and complete as of the date of final price
agreement, contractors must establish practices for identifying and analyzing any
significant data changes to determine if out-of-cycle updates are needed. A
contractor's estimating policies and procedures should identify the circumstances
when an out-of-cycle update is needed. Examples of some events that may trigger
out-of-cycle updates include:
9.3.3.4 Subcontracts
In UK single-source work, the application of Defcons 643 and 648 flows down to
subcontracts placed by a prime contractor. Subcontractors should be responsible
for developing their own estimates since they have the experience in pricing the
specific good or service they will be providing. Subcontractors are in the best
position to include the cost impacts of new events such as reorganizations,
changes in production processes, or changes in prices of key commodities.
For these reasons, it is a best practice for a prime contractor to obtain the
necessary cost or pricing data directly from its subcontractors. Prime contractors
can work with their subcontractors to streamline costs and cycle time associated
with preparation and evaluation of cost or pricing data. This means,
subcontractors can use parametric estimating techniques to develop their quotes,
provided their models are adequately calibrated and validated. In addition, prime
contractors can use parametric techniques to support cost or price analysis
activities of subcontract costs. The use of parametric techniques to perform
subcontract cost or price analysis is discussed in further detail in Chapter 8, Other
Parametric Applications.
9.3.3.5
9.3.3.6
References
JSP507
GAR