Ebook - LCC Using ABC
Jan Emblemsvåg
Preface
Acknowledgments
1 Introduction
What Does It Cost?
The Role of Life-Cycle Costing
Why Activity-Based Life-Cycle Costing?
Notes
4 Activity-Based Costing
Motivating Example
Activity-Based Costing
Glossary
Acronyms
Index
form the basis of systematic work toward gaining sustainable profitability for the
long term.
Some would say that LCC is to help engineers “think like MBAs but act like
engineers.” That is true and important, but I think of LCC in a broader sense. I
believe the main purpose of LCC should be to help organizations apply knowledge
about past performance and their gut feelings to future issues of costs and risks.
This should be done not in the traditional sense of budgeting, but in meaningful
predictions about future costs of products, processes, and organization, and their
associated business risks.
In order to turn LCC from being an engineering tool hidden in the cubicles in
an engineering department to a more useful and widely accepted engineering and
management tool, some changes must be made. The purpose of this book is to
present and illustrate one such approach that can bridge the gap between past and
future costs, engineering and management decisions, and direct and overhead
resource usage. To do that, I have taken two well-known concepts, Activity-Based
Costing and LCC, and merged the best parts while adding the usage of Monte
Carlo simulations, uncertainty, and some additional insight.
It should be noted that Activity-Based LCC is similar to the Activity-Based Cost
and Environmental Management approach, but as the saying goes, the devil is in
the details. The Activity-Based Cost and Environmental Management approach
does not explicitly detail how to do cost forecasting, financial analysis, and so
forth, issues that are pertinent to LCC. Also, it leaves the reader with little explicit
support on assessing and managing risks. This book therefore concerns how to turn
the Activity-Based Cost and Environmental Management approach into an LCC
approach, which for simplicity is referred to as Activity-Based LCC.
The result is an approach that in my opinion is flexible, highly effective, and
efficient for most cost management considerations (including LCC) and that can
handle risk and uncertainty in a credible fashion. This is evident both from its theoretical foundations and from the three case studies provided in the book. For those who are particularly interested in the theoretical foundations, references are provided at the end of each chapter.
The book is organized into nine chapters. In Chapter 1, you will find the basic
premises for the book and the key characteristics of Activity-Based LCC. In
Chapter 2, the basics of LCC are discussed. It starts out by discussing what a life
cycle is, because that is not obvious and numerous definitions exist in the litera-
ture. Then cost as a concept is defined and contrasted with expense and cash flow. This
distinction is important to understand because LCC models can be cost, expense,
and cash flow models, and it is important to understand which is which, and what
to use when.
Chapter 8 contains the WagonHo!, Inc. case study, the most complex case
study in the book. Its complexity derives from the fact that:
● It incorporates multiple products and cost objects.
● It includes credible overhead cost considerations.
● It includes the entire life cycle of the products.
● It includes both a cash flow analysis and a costing analysis.
● It shows how Monte Carlo simulations can be used effectively to handle uncertainty and risk and to enhance tracing.
This case study is, unlike the other two case studies, fictional. That may be viewed as a limitation. However, given the complexity of similar real-world case studies, I am happy to use a simpler version because it is more than sufficient to illustrate the potential of Activity-Based LCC. For example, the case
study clearly illustrates how Activity-Based LCC can handle multiple cost objects
at the same time, how Activity-Based LCC handles overhead costs in a credible
fashion, and that Activity-Based LCC is in fact a costing analysis and not just a
cash flow analysis. Another advantage of this case study is that it is also used in
Chapter 4 as an extensive example of an ABC implementation. By contrasting that
implementation to the model in Chapter 8, readers can easily see both the differ-
ences and similarities between ABC and Activity-Based LCC.
In Chapter 9, some key concepts and findings are revisited. Whereas Chapter 1 focuses on the problems Activity-Based LCC must overcome and how they relate to traditional cost management approaches (including, to a large extent, traditional LCC), Chapter 9 focuses on the building blocks of Activity-Based LCC. The earlier chapter helps readers understand what LCC ideally should be about and how Activity-Based LCC overcomes the problems of traditional approaches; Chapter 9 explains how Activity-Based LCC can be applied. Both Chapters 1 and 9 tell the
same story, although they tell it differently. Finally, some future issues are dis-
cussed.
The book also includes two appendices. Appendix A contains a Monte Carlo
example. This example is handy to read for those who do not quite understand the
power of Monte Carlo methods. It clearly illustrates how Monte Carlo methods
can be used for three purposes: (1) uncertainty and risk assessment/management,
(2) the tracing of critical success factors, and (3) information management.
Appendix B provides an overview of a ship component classification system,
which is applicable to the case study in Chapter 7.
As outlined in the last chapter of the book, Activity-Based LCC opens up a
completely different way of conducting cost management. Instead of depending
on hindsight and chasing the deceptive accuracy of past figures to look forward
ACKNOWLEDGMENTS
It is difficult to mention one person before the other. However, I undoubtedly owe
much to Professor Bert Bras at the Systems Realization Laboratory (SRL) at the
Georgia Institute of Technology. Apart from being my supervisor, and an excellent
one, he and I worked together during my master studies in 1993 and 1994 to
develop a new Life-Cycle Costing method called Activity-Based Costing with
Uncertainty. Then, from 1996 through 1999, we worked on a new Life-Cycle Assessment method as a part of my Ph.D. studies, which upon completion turned out to
be useful for integrated cost and environmental management. In 2000, a popular-
ized and edited version of the Ph.D. dissertation was published as a book titled
Activity-Based Cost and Environmental Management: A Different Approach to the
ISO 14000 Compliance.
I also am highly indebted to Professor Farrokh Mistree, also at the SRL, for his mentoring of both me and his former student Bert Bras. In recent years, I have found his insight concerning identifying “hidden assumptions” particularly interesting because it turns out that they are literally everywhere. Now that I work as a consultant, I find that identifying such hidden assumptions can be the difference between a mediocre project and a great one.
Another great thing I learned from Farrokh and Bert was to articulate my
thoughts. Without such training, I would probably never even have written a sin-
gle paper. I have truly learned a lot from the two of you, Bert and Farrokh, more
than I can ever express. When I came to the SRL, I hated writing, but I came to
love it, and it is mostly due to your efforts. Because I can never repay you, I hope
that by completing this book without your help, you will think “yes, Jan has grad-
uated,” and you should know that it is largely due to your efforts through the years.
Meeting and working with the two of you has been a defining moment in my life.
Thank you!
After writing some hundreds of pages, it is virtually impossible to have a fresh
view of the book. I would therefore very much like to acknowledge the tremen-
dous effort made by Senior Partner John Erik Stenberg at Considium Consulting
Group AS and my colleague Lars Endre Kjølstad in reviewing, providing con-
structive comments, and simply making the book better. The proofreaders and
Sheck Cho at John Wiley & Sons also have done a great job in making this book
presentable.
Regarding the case studies, I would like to thank Randi Carlsen from Sagex
Petroleum AS, who wrote Chapter 6; Jan Henry Farstad of Farstad Shipping ASA,
Annik Magerholm Fet of the Norwegian University of Science and Technology
(NTNU), and Greg Wiles of the Center for Manufacturing Information
Technology. Without their help, none of my case studies would have materialized.
I would also like to thank Det Norske Veritas (DNV) Consulting for allowing
me to pursue writing this book and spend some time on its completion. Moreover,
the insight I have gained through DNV projects has been valuable.
Finally, I would like to thank my family for their support. In particular, my wife
has been very patient with me.
1
INTRODUCTION
Which of you, intending to build a tower, sitteth not down first, and
counteth the cost?
Jesus
Luke 14:28
This book concerns the age-old question, “What does it cost?” But not just in mon-
etary terms. Not understanding the uncertainties and risks that divide an organiza-
tion from its desired results is also a “cost” because it can, and often will, result in
a loss. Therefore, if we were to build a tower, we should also consider the risks and
uncertainties of building it when counting the costs.
Despite the fact that cost management has been around as a field of study for
more than 150 years, the answers we have found so far to this simple question have
obvious shortcomings. That is evident from the fact that virtually all cost man-
agement systems only concern the costs incurred within the four walls of the
organization. Even worse, we try to control costs after they are incurred instead of
eliminating them before they are committed. The result is a massively wasteful
economy.1
In this book, a new approach is presented that deals with estimating future costs
and directing attention toward its root causes so that companies and organizations
can get useful decision support for solutions both inside and outside the organiza-
tion. The approach, called Activity-Based Life-Cycle Costing (LCC), is presented
through theory, argumentation, and illustrative case studies.
to costs being committed before they are incurred. Managing costs effectively and efficiently thus implies that costs must be eliminated in the commitment stage and not reduced in the incurring stage. Many organizations realize this, but few practice it. The costing methods employed by most companies simply do not take such notions into account as they embark on cost cutting. This happens for many reasons, but it might simply be a matter of bad habits or because we dislike learning new things unless the consequences of not learning are worse than those of learning, as world-renowned psychologist Edgar H. Schein claims.2
The points argued so far are illustrated in Figure 1.1. The numbers are heuristics from manufacturing. In the literature, we typically find that the ratio is somewhere between 70/30 and 90/10; the most often quoted numbers are around the 80/20 ratio. Figure 1.1 shows that although only about 20 percent of the costs are actually incurred in the activities prior to production, these activities commit about 80 percent of the costs. Production, in contrast, incurs about 80 percent of the costs, but production improvement efforts can affect only about 20 percent of the cost commitment. This has been a well-known fact for many years. In fact, LCC came about in the early 1960s due to a similar understanding concerning weapons systems procurement in the U.S. Department of Defense.
The first to use such ideas extensively in cost management on a continuous basis and on an extensive scale, however, were the Japanese. After World War II,
Japan was in ruins, and to rise, the Japanese had to be more clever than the rest.
American industry, in contrast, saw no need to become smarter because they were
already doing so well—for the time being. It is therefore not strange that a
Japanese cost management concept, target costing, has most clearly emphasized
the need for the elimination of costs through design. Such emphasis leads to proac-
tive cost management, as opposed to reducing costs after they are incurred, which
is reactive cost management.
Figure 1.1 Costs committed versus costs incurred across the life-cycle stages.
Unfortunately, even today, more than 30 years after Japanese industry became world class, most companies still manage costs reactively. Even if they try to eliminate costs via design, unless cost management follows suit, the result will be two paradigms fighting each other. Thus, the most established paradigm will usually prevail unless the challenger can present a convincing case.
The traditional, reactive paradigm is a challenge for the new, proactive paradigm because the new paradigm inevitably incurs more costs up front, which traditionally is
thought to be bad for business since traditional accounting regimes treat Research
and Development (R&D) costs as period costs and not investments. Then design
departments will get insufficient funding to eliminate costs before they are
incurred. The intended results do not materialize, which in turn will be used as an
argument against the new paradigm. Therefore, what is needed is a change of mind-
set and a change of cost management approaches. We simply cannot achieve the
results promised by the proponents of cost elimination via design unless we com-
mit wholeheartedly to it. As Michael Porter argued concerning strategic position-
ing, middle positions are never successful.3 The traditional paradigm has some
other unwanted side effects that the new paradigm can overcome. The best-known
side effect is quality. Just as the Japanese designed products that were less costly
than their American counterparts, they also produced products that many consid-
ered better in terms of quality. The overall value of Japanese products was in other
words greater, at least if we use the definition of value that the European Committee
for Standardization uses.4
Value5 is defined as proportionate to the satisfaction of needs divided by the use
of resources. In other words, value is proportionate to quality divided by costs.
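In symbols, the same definition reads:

$$\text{Value} \propto \frac{\text{satisfaction of needs}}{\text{use of resources}} \approx \frac{\text{quality}}{\text{costs}}$$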
Value-driven organizations must therefore be both quality driven and cost con-
scious, something traditional management systems simply cannot deliver, as
explained in Chapter 4. On top of that, despite the fact that traditional cost man-
agement systems are partially designed to satisfy external needs for reporting, they
have completely missed the concept of shareholder value and its measure of eco-
nomic profit, or Economic Value Added (EVA).6
Costs, like quality and other important aspects of the product such as image and branding, cannot be fixed after the product is manufactured, as is traditionally
done. They can be successfully handled only during an effective and efficient
design process supported by relevant cost management systems and value-driven
strategies. The new paradigm of cost elimination through design has far-reaching
implications that must be taken seriously if the intended results are to materialize.
Quick fixes and shortcuts, which have often been the rule of the day in many com-
panies,7 will not sustain a change toward the new paradigm. A change must occur
in both the culture of business and the performance measures, because those two
factors are the most important ones in change management efforts, as shown by
the two large surveys carried out by A.T. Kearney and Atticus.8
One such change in performance measures is to expand the horizon of the cost
management efforts from the four walls of the company to the relevant parts of the
life cycle where value is created and to employ foresight instead of hindsight. In
this context, LCC can play a far greater role than traditionally thought, and that is
one of the main messages of this book.
In cost management, however, the world is still deterministic and determinable by simple aver-
ages. We hear, for example, about how executives rush into implementing inte-
grated cost systems but that “some (real-time cost information) will cause
confusion and error, delivering information that is far less accurate than what man-
agers currently receive.”20 The reason for such confusion and error is lack of under-
standing in the statistical nature of costs because “there will always be fluctuations
in spending, volume, productivity and yield.”
It is time to internalize the fact that costs simply cannot be determined with cer-
tainty and act accordingly. Basically, while it is important to avoid cost management
practices that act on random variation, information about cost fluctuations should
be incorporated in forecasts to provide uncertainty and risk measures for budgeting
and so on.
entail having a costing system for internal control and management and another
one for external reporting and compliance. This is, in fact, what many leading com-
panies do.
As of today, ABC is the costing method that has captured process orientation
the best, in my opinion. Activity-Based LCC enjoys the same benefits as ABC does
because it piggybacks on ABC in this context.
A more subtle part of process orientation is to also think about continuous
improvement when implementing or designing a costing system or model. That is,
it is important to steadily improve the costing model. When this is done, it is cru-
cial to notice that the cost estimates inevitably will change as well. Such changes
should not be interpreted as a sign of error, but rather a sign of improvement or
inevitability. Costs are, after all, statistical in nature. Hence, an estimate of $100 for something might be just as correct as an estimate of $90, or vice versa. In fact, both are prob-
ably correct at some time, but not at the same time. The problem is that we will
never know when this “some time” occurs. Cost should therefore be treated
according to its nature, statistical and uncertain, as discussed earlier.
ate life cycle chosen. Often it is more work limiting an analysis than performing a
complete analysis for the entire organization. The reason is that much work must
be spent trying to understand the consequences of limiting the analysis. Also, if
people know that many assumptions have been made to limit the analysis, they
may try to undermine the results, particularly if they do not like them.
It is not without reason that Monte Carlo methods have been referred to as “the perfect tool of
numerical theories and applications.”23
Some may argue that they do not like the Monte Carlo methods because they
involve the problem of random errors. Random errors occur as a consequence of
Monte Carlo methods being statistical in nature, and all statistical measures are
associated with some random errors. However, since costs are statistical in nature,
surely using a statistical method to handle the associated risks and uncertainties is
most appropriate. Also, the random errors are not a problem as long as you have a
computer that enables you to run enough trials to reduce the errors to acceptable
levels. As the clock speed of chips still seems to double every 18 months, it is safe
to assume that the problem of random errors is one of the past. In fact, most PCs
today have more than enough Random Access Memory (RAM) and high enough
clock speed to handle even large LCC Monte Carlo simulations. Add to that the
possibility of running Monte Carlo simulations over a local area network (LAN)
or a similar system, and the use of such simulations is virtually endless for any
practical cost-modeling purpose.
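To make the mechanics concrete, here is a minimal sketch of a Monte Carlo LCC simulation in Python. The cost elements, triangular distributions, and parameter values are illustrative assumptions invented for this sketch, not figures from the book's case studies:

```python
import random
import statistics

def simulate_life_cycle_cost(trials=10_000, seed=1):
    """Minimal Monte Carlo LCC sketch; every distribution below is an
    illustrative assumption, not a figure from the book's case studies."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        # Uncertain cost elements modeled as triangular(low, high, mode):
        acquisition = rng.triangular(90_000, 130_000, 100_000)
        annual_operating = rng.triangular(8_000, 15_000, 10_000)
        years_in_service = rng.triangular(8, 15, 12)
        disposal = rng.triangular(2_000, 10_000, 4_000)
        totals.append(acquisition + annual_operating * years_in_service + disposal)
    return totals

costs = sorted(simulate_life_cycle_cost())
print(f"Expected life-cycle cost: {statistics.mean(costs):,.0f}")
print(f"Standard deviation:       {statistics.stdev(costs):,.0f}")
# Risk measure: the level the life-cycle cost stays below in 90% of trials.
print(f"90th percentile:          {costs[int(0.9 * len(costs))]:,.0f}")
# The random error on these estimates shrinks roughly as 1/sqrt(trials),
# so adding trials (cheap on any modern PC) tightens the results.
```

Each trial draws one plausible future and totals its costs; the spread of the resulting totals is precisely the uncertainty and risk information that single-point estimates hide.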
If you’ve got ten decisions to make and you spend all your time making just
four, then you’ve made six wrong decisions.24
out that in the context of change management, changes in the performance meas-
urement system and in culture are the two most important factors of success.26
Figure 1.2 The four ways of forecasting: an industry forecast is broken down (bringing down), while grass-roots approaches build up (adding up) from surveys, middlemen estimates, frontline representatives, and executive opinion. Source: Adapted from F. Allvine, Marketing: Principles and Practices. Boston, MA: Irwin/McGraw-Hill, 1996.
As far as the laws of mathematics refer to reality, they are not certain; and
as far as they are certain, they do not refer to reality.
Albert Einstein
NOTES
1. P. Hawken, A.B. Lovins, and L.H. Lovins, Natural Capitalism: The Next Industrial Revolution. London: Earthscan Publications, 1999, p. 396.
2. D.L. Coutu, “Edgar H. Schein: The Anxiety of Learning.” Harvard Business Review 80 (3), March 2002, pp. 100–106.
3. M.E. Porter, Competitive Advantage: Creating and Sustaining Superior Performance. New York: The Free Press, 1985, p. 557.
4. Technical Committee CEN/TC 279, “EN 12973:2000 Standard: Value Management.” Brussels: European Committee for Standardization, 2000, p. 61.
5. In the CEN 12973 Standard.
6. See J. Pettit, EVA & Strategy. New York: Stern Stewart & Co., 2000, p. 17.
7. See, for example, R.G. Eccles, “The Performance Measurement Manifesto,” Harvard Business Review, January–February 1991, pp. 131–137, and R. Ernst and D.N. Ross, “The Delta Force Approach to Balancing Long-Run Performance,” Business Horizons, May–June 1993, pp. 4–10.
8. “Change Management: An Inside Job.” The Economist 356 (8179), 2000, p. 65.
9. See Graham Ward’s speech on corporate governance at INSEAD at www.icaew.co.uk.
10. M.E. Jones and G. Sutherland, “Implementing Turnbull: A Boardroom Briefing.” London: The Center for Business Performance, The Institute of Chartered Accountants in England and Wales (ICAEW), 1999, p. 34.
11. Ibid.
12. B. Bras and J. Emblemsvåg, “Designing for the Life-Cycle: Activity-Based Costing and Uncertainty.” Design for X: Concurrent Engineering Imperatives, ed. G.Q. Huang. London: Chapman & Hall, 1996, pp. 398–423.
13. H.T. Johnson and R.S. Kaplan, Relevance Lost: The Rise and Fall of Management Accounting. Boston, MA: Harvard Business School Press, 1987, p. 269 and Chapters 2 and 4.
14. N. Shepherd, “The Bridge to Continuous Improvement.” CMA, March 1995, pp. 29–32.
15. R.A. Lutz, “Lutz’s Laws: A Primer for the Business Side of Engineering.” The George W. Woodruff Annual Distinguished Lecture, 1998. Atlanta, GA: Georgia Institute of Technology.
16. N. Bowie and H. Owen, “An Investigation into the Relationship Between Quality Improvement and Financial Performance.” United Kingdom: Certified Accountants Educational Trust, 1996, and T. Häversjö, “The Financial Effects of ISO 9000 Registration for Danish Companies.” Managerial Auditing Journal 15 (No. 1 & 2), 2000, pp. 47–52.
17. H.T. Johnson, “It’s Time to Stop Overselling Activity-Based Concepts.” Management Accounting, September 1992.
18. H.P. Barringer, “Why You Need Practical Reliability Details to Define Life Cycle Costs for Your Products and Competitors Products!” New Orleans, LA: The 16th International Titanium Annual Conference & Exhibition, October 9–10, 2000.
19. S.M. Hronec and S.K. Hunt, “Quality and Cost Management,” Handbook of Cost Management, ed. J.B. Edwards. Boston, MA: Warren, Gorham & Lamont, 1997, pp. A1.1–A1.42.
20. R. Cooper and R.S. Kaplan, “The Promise—and Peril—of Integrated Cost Systems.” Harvard Business Review, July–August 1998, pp. 109–119.
21. M. O’Guin, “Focus the Factory with Activity-Based Costing.” Management Accounting, February 1990, pp. 36–41.
22. J.G. Miller and T.E. Vollmann, “The Hidden Factory.” Harvard Business Review, September–October 1985, pp. 142–150.
23. A. Kaufmann, “Advances in Fuzzy Sets: An Overview.” Advances in Fuzzy Sets, Possibility Theory, and Applications, ed. P.P. Wang. New York: Plenum Press, 1983.
24. D.N. James, “The Trouble I’ve Seen.” Harvard Business Review 80 (3), March 2002, pp. 42–49.
25. M.D. Shields, “An Empirical Analysis of Firms’ Implementation Experiences with Activity-Based Costing.” Journal of Management Accounting Research 7, Fall 1995, pp. 148–166.
26. For details, see “Change Management: An Inside Job.” The Economist 356 (8179), 2000, p. 65.
2
BASICS OF LIFE-CYCLE COSTING
A cynic is one who knows the price of everything and the value of
nothing.
Oscar Wilde
Before we start thinking about Life-Cycle Costing (LCC), a very basic concept
must be clarified, namely the life cycle, and that is the topic in this chapter. After
that, the purpose of LCC is discussed followed by a discussion on what cost is,
because many people confuse cost with expense and even cash flow. An overview
of the various LCC approaches is also provided.
1. Purchase
2. Operating
3. Support
4. Maintenance
5. Disposal
Because the purchase price the customer pays is equal to the cost of the pro-
ducer plus an add-on (profit), the life-cycle costs of the consumer perspective will
most often be the most complete. This is important to realize and to turn into a
competitive advantage, as Toyota did, for example. Over the years, Toyota has sys-
tematically worked toward minimizing total life-cycle costs, and this is something
its customers have benefited from, as Toyota’s cars are virtually problem-free after
purchase. Toyota can actually charge a higher price than its competitors and
increase its profits because its customers know that they also save money and has-
sles, a win-win situation for both customer and manufacturer. Unfortunately, tra-
ditional cost accounting methods give no decision support for such considerations,
but that will be discussed later. In fact, most accounting regimes require that spend-
ing for intangibles like Research and Development (R&D) be treated as period
costs. As a result, current profit is reduced and financial measures like earnings per share drop, despite the fact that such spending is essentially an investment in the future.2 Traditional accounting practices therefore distort
the picture and promote shortsightedness. In fact, it has been argued that “account-
ing practices often drive major business decisions despite—and not because of—
the economics.”3
That the customer perspective also incorporates the most costs is probably more often the case in relation to infrastructure than in relation to any other type of product or service. For example, it has been estimated that “the operating costs of a
school can consume the equivalent of its capital costs every 4 to 5 years and remain
in service for a century.”4
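Taken at face value, this estimate implies that the accumulated operating costs dwarf the initial capital cost over such a life span:

$$\frac{100 \text{ years}}{4\text{–}5 \text{ years per capital-cost equivalent}} \approx 20\text{–}25 \text{ times the capital cost}$$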
The three perspectives only consider “private” costs, that is, costs that directly
impact a company’s bottom line. The societal perspective, however, includes those
activities (and associated costs) borne by society, such as:
● Disposal
● Externalities (see Glossary)
Concerning disposal costs, the trend now internationally is that they are becom-
ing the cost of the manufacturer or the user. For example, in both Germany and
Norway, various take-back legislation exists. Thus, as societies become more afflu-
ent and knowledgeable, it is likely that it will become increasingly difficult for
companies to escape from their social responsibilities.
Finally, we have the most comprehensive perspective, namely that of the prod-
uct itself, as shown in Figure 2.1. The product life cycle is essentially all the activ-
ities that the product, or parts of the product, undergo regardless of which deci-
sion-makers are involved. It may therefore involve all the preceding perspectives
except one: the marketing perspective. The reason is that the product life cycle is
on the individual level of each product unit, whereas the marketing perspective is
on the type level of a product. By the same token, if we compare the product life
cycle depicted in Figure 2.1 to the life cycle illustrated in Figure 2.3, we see that
the product life cycle always consists of processes and activities, whereas the mar-
ket life cycle does not. Of course, underlying activities in addition to factors in
the business environment are needed to sustain or improve a product in a specific
stage or to move it to another stage. The stages themselves, however, are neither
processes nor activities. They are related to the market situation as characterized
by being new or old (introduction or decline) or the amount of sales (growth, matu-
rity, and decline), as shown later in Table 2.2.
In this book, this life cycle is therefore denoted as the market life cycle and not
product life cycle, which is the common designation, particularly in management literature. In my opinion, the term product life cycle logically fits the life
cycle shown in Figure 2.1 better than the one shown in Figure 2.3. Furthermore,
this book focuses on the product life cycle and not so much on the market life
cycle. However, because the underlying mechanisms of the market life cycle are
processes and activities, the approach presented in this book can easily be adapted to market life-cycle issues. In any case, two genuinely unique life cycles exist: one
on the individual product level (the product life cycle) and one on the product-type
level (the market life cycle).
Figure 2.1 Generic representation of a product life cycle: mining, material processing, product manufacture, and distribution lead to use and service, followed by demanufacture, all surrounded by the environment (air, sea, land). Source: J. Emblemsvåg and B. Bras, Activity-Based Cost and Environmental Management: A Different Approach to the ISO 14000 Compliance. Boston: Kluwer, 2000, p. 317.
their own products due to laws and regulations, as mentioned earlier. Others may
do this voluntarily, as Interface, Inc., did because it is good business. The third
option is that a third party takes back the products. If none of the above options
occur, the product will be disposed of.
After a product has been retrieved from the marketplace, it can be either directly
reused or demanufactured. Direct reuse is the best option in terms of both costs
and environmental impact, but selling directly reused products may be difficult.
Ironically, the farther to the left we come in the lower half of Figure 2.1, the eas-
ier it seems to be to sell the product, but the more downgraded it is. So typically a
product would have to be demanufactured in order to be economically interesting.
As the term indicates, demanufacturing is the opposite of manufacturing, that
is, it involves taking the product apart. During demanufacturing, several different paths can be taken:
● Disassembly The product is taken apart without destroying any parts or
components. Some products may undergo only this process, which occurs if
the reusable parts are sold (the product loop is closed) whereas the rest is
recycled.
● Remanufacturing This is an industrial process that restores worn products
to like-new condition. A retired product is first completely disassembled, and
its usable parts are then cleaned, refurbished, and put into inventory. Finally,
a new product is reassembled from both old and new parts, creating a unit
equal in performance to the original or a currently available alternative. In con-
trast, a repaired or rebuilt product usually retains its identity, and only those
parts that have failed or are badly worn are replaced.6 Remanufacturing is
therefore a systematic way of closing the product loop.
● Material demanufacture Products and components that cannot be reused
or remanufactured are broken down further into either simpler products
(polymers are broken down into smaller polymers) or incinerated for energy
recovery purposes. The simpler products can be either consumed (as fuel, for
example) or used in material regeneration (the material loop is closed).
● Recycling In this process, material is reprocessed into “new” raw material.
This is, in other words, the same as closing the materials loop. Recycling is
perhaps the most common strategy for closing the loop, but it is the least effective one in the sense that it is the most wasteful strategy (except for disposal).
● Disposal The last resort is disposal, which ideally should not happen at all.
In fact, Interface, Inc. aims toward totally eliminating disposal from its value
chain.7 But in the grand scale, disposal is the most common of all end-of-life
strategies. In fact, less than 2 percent of the total waste stream is actually
recycled, primarily paper, glass, plastic, aluminum, and steel. Over the course
of a decade, 500 trillion pounds of American resources have been trans-
formed into nonproductive solids and gases.8
Once an end-of-life strategy has been chosen, the design job begins. Of course,
the strategy is chosen in relation to what is feasible given the current product mix,
but over time the products and their value chains should be designed concurrently
to fit each other. In any case, depending on the scale of the temporal and organi-
zational concerns, seven major umbrella terms are pertinent to look at, both from
a strategic and a design-related perspective (see Figure 2.2). It is, after all, a quite
different task to become sustainable than to run environmental engineering pro-
grams, and this must be reflected in everything in the company (vision, strategy,
product mix, suppliers, and so on).
Under each umbrella are a huge variety of methods, approaches, and tools. All
these approaches, however, fall into three groups according to their relation to
the product life cycle: those that are applied within a single product life cycle and
focus on specific life-cycle stages, those that focus on a complete product life cycle
and cover all life-cycle stages, and those that go beyond single product life cycles.
For this book, it suffices to acknowledge that these methods exist and that LCC
should be used in conjunction with useful design methods. LCC and other per-
formance measurement techniques can, after all, only direct attention and indicate
paths of improvements; they cannot actually improve anything.
Figure 2.2 The seven umbrella terms in relation to the product life cycle (sales, manufacturing, use, reuse, recycling, and final disposal), ranging from 1: environmental engineering, 2: pollution prevention, and 3: environmentally conscious design and manufacturing at the manufacturer level up to 7: sustainable development at the level of society and the industry life cycle.
The market life cycle has been described as extending from “market development to market decline.”9 The market life cycle is therefore also concep-
tually strikingly similar to industry life cycles, business life cycles, and even human
life cycles as they all go through the same four generic stages (see Figure 2.3). The
shape of the curve will of course depend on many different situational factors.
Although Figure 2.3 is symmetric, this situation is not likely to occur in reality. The figure should only be interpreted schematically. As noted earlier, seven-
stage models can also be found in the literature, but the crux is the same.
In Table 2.1, the different typical characteristics and responses related to the
four stages are shown. It is important to be aware of the fact that a product in the
decline stage can be repositioned in the market, but if it has gone too far, only two
options are available. The best is usually to discontinue the product as soon as
other products can more profitably utilize the resources earlier allocated to that
particular product, but often companies keep their products too long. However,
products can be made profitable for some time by using a retrenching program.
Such programs would typically attempt to appeal to customers’ sentimental attachments to the products and increase the price in the run-up to an announced discontinuation of the product. Volkswagen, for example, increased the price of its
convertible as the company announced plans to drop the product.10
Clearly, “What is a life cycle?” cannot be answered straightforwardly using a simple definition. All life-cycle terminology is subject to considerable confusion.
The U.S. Environmental Protection Agency (EPA), for example, states that “some
people view life-cycle costing as referring only to private costs, while others view
it as including both private and societal costs.” Hence, it is important to define the
purpose of the analysis in order to define an appropriate life cycle and conse-
quently a suitable cost base.
Luckily, Activity-Based LCC is a process-oriented method, and defining the
correct life cycle is therefore quite easy. The difficulty lies in defining the activi-
ties so that reliable data can be found, but this and more will be discussed in
Chapters 4 and 5.
In the “Purpose of LCC” section, the purpose of LCC is discussed, but first the
difference between a life cycle and a value chain must be explained.
longer chain of activities, from cradle to grave, and consequently the time horizon
is also longer. The life cycle is therefore a more generic, less limiting concept than
the value chain, and that is why I prefer the concept life cycle to value chain.
However, in many cases an Activity-Based LCC implementation will essentially
be a value-chain costing implementation.
Purpose of LCC
As most concepts, LCC has evolved over time, and today LCC serves three main
purposes:
1. LCC can be an effective engineering tool for providing decision support in
the design and procurement of major open systems, infrastructure, and so on.
This was the original intent for which it was developed.
2. LCC overcomes many of the shortcomings of traditional cost accounting and
can therefore give useful cost insights in cost accounting and management.
3. LCC has reemerged as a design and engineering tool for environmental pur-
poses.
The aircraft carrier USS Enterprise was first commissioned in 1961 but is still in service. During those 40-plus years, it has naturally undergone many updates, repairs, major maintenance jobs, and so on. The costs of building the USS Enterprise were therefore minor compared to the costs incurred during its life span, and this illustrates
why it is so important to perform LCC analysis before making decisions. Many, if
not most, open systems incur more costs over their life span than their purchase cost. For example, the cost of sustaining equipment is frequently 2 to 20 times the
acquisition costs.15 Assessing, eliminating by design, or managing the downstream
costs as well as their associated risks and uncertainties are therefore vital for both
the manufacturer and the purchaser because:
● The downstream costs after purchasing will most likely be very significant
and should therefore play a major role in purchasing decisions.
● The knowledge about downstream costs and their associated risks and uncer-
tainty can be used during negotiations both when it comes to costs/pricing
and risk management.
● The simulation of downstream costs can be very useful in designing the prod-
ucts so that the costs are eliminated even before they are incurred.
Essentially, LCC in an engineering context helps engineers think like MBAs
but act like engineers. But why was it necessary to come up with a new concept to
assess the life-cycle costs?
explosion in product variety at affordable prices due to the new manufacturing tech-
nologies that enabled mass production at an unprecedented scale. This required
more support in the companies, and overhead grew steadily. These days, surveys indicate that overhead costs constitute roughly 35 percent of the total costs of an average American company, whereas 100 years ago the share was somewhere around 10 percent.
This should indicate that the costing systems invented more than 100 years ago are
long outdated, and indeed they are; they have lost their relevance in managerial
accounting, as H. Thomas Johnson and Robert S. Kaplan argued in their famous
book, Relevance Lost: The Rise and Fall of Management Accounting.17
Traditional cost accounting is capable of handling only a relatively small part of
the costs, despite the fact that a company’s bottom line incorporates total costs. In
other words, traditional cost accounting can focus only on partial costs, not total costs.
Companies simply cannot afford to segregate cost accounting from design,
engineering, production, and other core business processes as in the good old days.
Nowadays, even the customer relationships must be scrutinized by cost account-
ing because keeping the customers satisfied has both an upside and a downside
(costs). The four walls of the company are basically no longer the borders. This
puts new challenges on management that traditional cost accounting techniques
have little chance of handling well.
These facts were unpleasantly revealed to many western companies in the
1980s due to the tough competition from primarily Japanese companies. The
Japanese had long realized that it is better to eliminate costs via design than to cut
costs during production, which was pretty much the western approach. But
because many studies concluded that up to 85 percent of production costs are com-
mitted before even a single unit of the product is manufactured, new approaches
were needed. One of the approaches people turned to was LCC. In fact, it was said
that “LCC is an accounting methodology that became very popular in the 1980s,
when the increasing competitive pressure on most markets forced firms to improve
their capability to control and reduce manufacturing costs.”18 Hence, the LCC prin-
ciples of the 1960s became pertinent during the 1980s in a much larger context than procurement, and in this book I try to take it a step further.
performance measurement standards (particularly the ISO 14040 series) and will
not be remedied by more accurate data and scientific understanding.19
Despite the dedicated and thorough attempts at managing environmental issues that ISO and others can show, LCC has remained one of the premier envi-
ronmental management tools. LCC has the potential of being a more effective deci-
sion-making tool than conventional LCA for reasons such as:
● Cost is often a more reliable indicator of resource consumption than any phys-
ical analysis because the economic system systematically captures many more
difficult-to-trace subtle interactions than an ad hoc approach can.
● Costs provide a more direct measure compared to scientific measures (such
as gigajoules of energy) that do not reflect marginal economic effects relat-
ing to resource scarcity.
● Priority setting in environmental improvement projects can be very difficult
in view of the profusion of available options. The environmental improve-
ments combined with cost reduction can greatly facilitate this process.
● It is easy to promote the results of environmental studies if related cost sav-
ings can be demonstrated.
● The data needed for LCC calculations are generally easily accessible. In fact,
they are much easier to access than the detailed process data needed for car-
rying out Life-Cycle Assessment (LCA) studies, as several in the literature
have pointed out.
● The LCC information regarding a product can be a real eye-opener to man-
agement by revealing hitherto unidentified cost drivers.20
Thus, not only can LCC aid in making cost management more relevant and proac-
tive, which was the original intent, but LCC can be useful in aiding companies
toward “doing (economically) well by doing (environmentally) good.” Thus, LCC
has grown substantially in scope over the 40 years the concept has been around.
Before proceeding, it should be noted that these points do not apply to all LCA
methods, except for the point about data being easily accessible. A method called
Activity-Based Cost and Environmental Management can be used for conducting
integrated cost and environmental management according to Activity-Based
Costing (ABC) principles.21 The result is that environmental issues can be man-
aged just as closely as economic issues, given that data are available. In fact,
Activity-Based LCC can be viewed as a subset of the Activity-Based Cost and
Environmental Management method. This book therefore focuses on LCC, cost management (as a managerial tool), and related issues.
We have now discussed what a life cycle is as well as the purpose of studying
it and assigning costs to it. Next, we need to understand the concept of cost, because a great deal of confusion exists in the literature and among practitioners, particularly
in the field of environmental management.
WHAT IS A COST?
Two terms are often used incorrectly or interchangeably, both in the literature and
among practitioners, including the undersigned, namely cost and expense. This is
probably partly due to language simplifications but also to ignorance. Let us there-
fore look at what a cost is and what an expense is.
Cost is a measure of resource consumption related to the demand for jobs to be
done, whereas expense is a measure of spending that relates to the capacity pro-
vided to do a job.22 For example, a stamping machine costs $100 daily to operate
and it can stamp 10,000 coins per day. One day, only 5,000 coins are stamped. The
expense is $100 because that is the capacity provided, which shows up in the
books, but the cost of stamping the coins is only $50. Hence, this day there was a
surplus capacity worth $50.
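The arithmetic behind the example is simply:

$$\text{cost} = \frac{5{,}000}{10{,}000}\times\$100 = \$50, \qquad \text{expense} = \$100, \qquad \text{surplus capacity} = \$100 - \$50 = \$50$$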
It is the resource consumption perspective that counts, because management must match capacity to demand and not the other way around. In our example, management must consider whether the excess capacity should be kept or removed, not stamp more coins than demanded, because doing so only drives costs and increases the risk of producing obsolete coins. However, in most companies, the latter is the case because “producing to capacity rather than to demand often appears to reduce costs.”23 This in turn leads to overinvestment and surplus capacity for companies with significant free cash flow, which further erodes profits. In such cases, debt can be a good thing: where cost systems foster erroneous decisions, debt introduces spending discipline.
While the logical consequence of this is that an LCC model should represent
the cost/resource consumption perspective, from the literature we see that most
LCC models are actually cash flow models or expense models at best.
Cash flow models are necessities in situations where revenues and their related
costs occur in different time periods. Cash flow models are important to ensure
sufficient liquidity, but they cannot replace cost models. Also, it is important to
take the time value of money into account using a discounting factor, because, as
we have all heard, “it is better to earn one dollar today than one dollar tomorrow.”
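For reference, the standard textbook discounting relation (generic, not specific to any particular LCC model) reduces a cash flow $C_t$ occurring $t$ periods ahead to present value at a discount rate $r$:

$$PV = \sum_{t=0}^{T}\frac{C_t}{(1+r)^t}$$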
The time value of money is, however, not confined to cash flow models, but this
will be discussed in greater detail in Chapter 3 because risk and uncertainty is
closely related to time and value. In any case, such models are always used in
investment analyses, which in some ways are like LCC analyses. But what is the
difference?
In the wide sense, an investment is a sacrifice of something now for the prospect
of something later. Furthermore, “we see two different factors involved, time and
risk. The sacrifice takes place in the present and is certain. The reward comes later,
if at all, and the magnitude may be uncertain.”24 Depending on the type of sacrifice,
we are talking about either financial investments or real investments. Financial investments
typically involve assets that can be quickly liquidated (cash, stocks, and bonds).
Financial investment analyses therefore include cash flow analyses and financial per-
formance measures. Real investments, on the other hand, concern physical (real)
assets that depreciate over time, such as buildings and equipment, and their economic
evaluation is variously referred to as economic analysis, engineering economy, and
economic decision analysis. Hence, real investment analyses can in principle be the
same as LCC, whereas financial investment analyses are not. Furthermore, LCC may
or may not include consideration of the time value of money. That depends on the
scope of the LCC, which is discussed in Chapter 5.
Expense models typically support environmental initiatives and policies by including the expenses and benefits that environmental effects give rise to on the General Ledger, while expenses outside the four walls of the organization are included when possible or desirable. Such practices go under many
names in the literature, such as environmental accounting, environmental cost
accounting, life-cycle accounting, total cost accounting, green accounting, full-
cost accounting, and full-cost environmental accounting. Depending on whom you
ask, these practices can be anything from essentially financial reporting and analysis of environmental aspects as they show up on the General Ledger to national income accounting. The managerial relevance therefore varies considerably.
This proliferation of terminology and the associated confusion has also led to
many different definitions of what a life-cycle cost is. Some of the better defini-
tions are:
● The sum total of the direct, indirect, recurring, nonrecurring, and other
related costs incurred, or estimated to be incurred, in the design, develop-
ment, production, operation, maintenance, and support of a major system
over its anticipated useful life span.25
● The amortized annual cost of a product, including capital costs, installation
costs, operating costs, maintenance costs, and disposal costs discounted over
the lifetime of a product.26
● The total cost throughout its (an asset’s) life, including planning, design,
acquisition and support costs, and any other costs directly attributable to
owning or using the asset.27
To keep things simple, in this book the life-cycle cost for a product is defined
as “the total costs that are incurred, or may be incurred, in all stages of the prod-
uct life cycle.” This simple definition relies on the definition of the product life
cycle and will therefore vary from product to product. It must be so, because LCC
is a decision-support tool that must match the purpose and not an external, finan-
cial reporting system that should obey rigid principles, such as the U.S. GAAP.
LCC is therefore defined more widely in this book than what is common. That
is by intention, because LCC should be just as much an approach to capture future
costs as it is a technique to compute the life-cycle cost for a specific product. After
all, ABC goes well beyond assessing the costs of the activities. Thus, just as ABC
employs activities as a core element in its cost assignment, I suggest using a
specific life-cycle definition as a premise for a specific LCC analysis. By doing it
this way, we do not have to use all the different acronyms and terms that plague
the environmental accounting field in particular.
It is not, as some seem to believe, that having many accurate terms makes life
easier, because as Hegel said, “Theories conceal as much as they reveal.” It is bet-
ter to keep things simple and rather concentrate on understanding the essentials
and how they relate to reality. As Nanquan explains to Zhaozhou in Case 19 of
Gateless Barrier:
Zhaozhou asked Nanquan, “What is the way?”
Nanquan said, “Ordinary mind is the way.”
Zhaozhou asked, “Shall I try for that?”
Nanquan said, “If you try, you’ll miss it.”
Zhaozhou asked, “How do I know it’s the way if I don’t try?”
Nanquan said, “The way has nothing to do with knowing or not knowing.
Knowing is illusion; not knowing is ignorance. If you penetrate the way
of no-trying, it will be open—empty and vast. What need is there to affirm
this or deny it?”
Zhaozhou was suddenly enlightened upon hearing this.
To better understand the costs in a life-cycle perspective, the next two sections pro-
vide discussions and some examples regarding categories of costs (and expenses) and
business activities that create cost, that is, a demand for jobs to be done.
The first type, usual costs (and associated revenues), includes items such as:

● Capital costs:
○ Buildings
○ Equipment
● Expenses:
○ Labor
○ Supplies
○ Raw materials
○ Utilities
○ Disposal
● Revenues:
○ Primary product
○ Marketable byproducts

The second type, hidden costs, includes items such as:

● Manifesting
● Labeling
● Preparedness and protective equipment
● Closure/postclosure care
● Medical surveillance
● Insurance/special taxes
The third type of costs is liability costs, or costs that arise due to noncompli-
ance and potential future liabilities. These costs are also referred to as contingent
costs by some. Depending on the type of costs, these costs traditionally will either
be handled as overhead costs and be put in the big bag of poorly allocated costs,
or be handled as extraordinary costs. An example of these costs would be the
cleanup costs of the so-called Superfund sites in the United States, which were
estimated to be between $300 and $700 billion as of 1991.28
Since liability costs include future liabilities, it is difficult to estimate them;
environmental liability costs are particularly difficult to estimate and hence prob-
ably understated. Nonetheless, here are some examples:
● Legal staff and/or consultants
● Penalties and fines
● Future liabilities from customer injury
● Future liabilities from hazardous waste sites:
䡩 Soil and waste removal and treatment
䡩 Groundwater removal and treatment
䡩 Surface sealing
䡩 Personal injury (health care and insurance ramifications)
䡩 Economic loss
䡩 Real property damage
䡩 Natural resource damage
䡩 Other
The last category, less tangible costs (or image and relationship costs, as some denote them), is very difficult to estimate. These costs are, however, far from unimportant, as is evident from the following examples:
● Customer acceptance
● Customer loyalty
● Worker morale/union relations
● Corporate image
● Brand name
● Stakeholder relations
One of the classic examples of such less tangible costs and the effect on a com-
pany is the introduction of New Coke by the Coca-Cola Company on April 23, 1985.
Preliminary taste tests concluded that New Coke was “better” than the old Coke
because during blind testing they found that people preferred the New Coke taste.
But this was a classic market research error, because what Coca-Cola completely
failed to take into account was the fact that people ascribed a certain taste to the brand
Coca-Cola. Thus, when New Coke was released on the market, it became an instant
failure. The sweeter and less carbonated New Coke was to many more like Pepsi-
Cola than Coca-Cola. Suddenly, millions of customers thought that Pepsi, not Coke,
was the “Real Thing.” After New Coke spent only 77 days on the market, Coca-Cola
had to quickly reintroduce the old Coca-Cola under the new name Classic Coke.
Most will agree this was a huge embarrassment for Coca-Cola. But according to chairman Roberto Goizueta, the introduction of New Coke was the best thing that ever could have happened to Coca-Cola because it taught management a valuable lesson, which is part of the reason the company has one of the highest returns on investment of any U.S. company today.29
be great, if handled properly, the lessons taken from it can be a turning point for the
better. Indeed, catastrophe is a Greek word that originally means turning point.
An example from the environmental domain is the Exxon Valdez accident in
Alaska in 1989. Prior to the accident, Exxon said it saved $22 million on building
the Exxon Valdez with a single hull (as opposed to a double hull). But then came
the accident in Prince William Sound. Exxon spent roughly $2 billion just on
cleaning up, yet it captured only 12 percent of the 11 million gallons spilled. Lawsuits followed from both the state of Alaska and a group of 14,000 fishermen and
citizens in the region. The state demanded compensatory and punitive damages
possibly in excess of $1 billion, while the people in the region sought punitive
damages of $15 billion. In September 1994, Exxon was ordered to pay $5.3 bil-
lion, with $4 billion for environmental costs and $1 billion as retribution for insen-
sitivity. But Exxon appealed both damage awards, won, and avoided paying both
the damages and any interest on them. In the end, the fishermen and fisheries were
awarded $286 million as compensatory damages for losses incurred as a result of
not being able to fish in the area of the spill. In any case, Exxon ended up paying
in excess of $2 billion as almost a direct result of saving $22 million. But these are
only tangible costs. What about the costs related to the corporate image or loss of
confidence?
These various types of costs are associated with various degrees of quantifica-
tion difficulty, as illustrated in Figure 2.4. On the right, we have the so-called exter-
nalities or costs that are external to the economic system. Basically, we just know
that they exist but cannot really provide any meaningful cost quantification. Many
in the literature refer to external costs or externalities as societal costs. I do not sub-
scribe to that view because it does not acknowledge the fact that some costs are
simply incomprehensible in an economic sense; they are external to the economic
system, hence the name. For example, what is the cost of clearing 1,000 km2 of
rain forest? Clearly, we can identify the labor costs, the costs of machinery, administrative costs, the costs to the local society, and so on, but what is the cost of the
potential loss of an indigenous species? Maybe we lost a plant that could provide
cancer medicine. What cost is that?
From my point of view, four economic cost categories exist (usual, hidden, lia-
bility, and less tangible) as well as an external category (externalities). These five
different categories exist in all organizational concerns and even individual con-
cerns. For example, a manufacturer has usual, hidden, liability, less tangible, and
external costs. The external costs of a manufacturer are in many cases societal
costs, but some will also be externalities (this is the point that I feel many in the
literature miss).
The same logic applies to society as well as an individual. The cost categoriza-
tion is therefore two-dimensional: (1) the obvious cost category dimension, as
shown in Figure 2.4, and (2) the less obvious organizational concern dimension.
These two dimensions are inseparable. Thus, we should not talk about a cost cat-
egory without also specifying the organizational concern. For example, if we talk
about a hidden cost, we must say “hidden” with respect to what? The department?
The company? The society? Nature?
Table 2.2 Examples of Production and Environmental Costs Firms May Incur

Labor              Materials           Equipment                     Other
Production work    Raw materials       Production equipment          Depreciation
Material handling  Solvents            Cleansing/degreasing          Waste disposal
Inspection         Process water       Material-handling machinery   Insurance
Record-keeping     Cleaning water      Waste treatment               Utilities
Manifesting        Office supplies     Wastewater treatment          Regulatory fees
Labeling           Training materials  Air pollution control         Taxes
Stocking           Safety materials    Painting equipment            Maintenance
Training           Parts               Protective equipment          Lab fees
Permitting                             Storage equipment

Source: Adapted from S. Perkins and T. Goldberg, Improving Your Competitive Position: Strategic and Financial Assessment of Pollution Prevention Investments. Boston, MA: The Northeast Waste Management Officials' Association, 1998, p. 81.
Concerning the discussion we had earlier about expenses versus costs, we see that
even the heading of Table 2.2 speaks solely of costs, although in fact a mixture of
costs and expenses exists. This may be confusing, but it is common and almost
unavoidable. The point, however, is not to get stuck in terminology but to realize that
a fundamental difference exists between costs and expenses, namely the measure of
the demand for jobs to be done versus the capacity provided to do the jobs, respec-
tively. This is important for cost management purposes, but it is also important in
relation to challenging the ruling theory of the business. The reason is that faulty cost
management sustains this theory by indirectly creating myths about what is prof-
itable and what is not. Challenging these theories is vital for companies to sustain
long-term profitability: “What underlies the malaise of so many large and success-
ful organizations worldwide is that their theory of business no longer works.”30
Analogy Models
An LCC estimate made by analogy identifies a similar product or component and adjusts its costs for the differences between it and the target product. This way of conducting LCC is common in shipbuilding, for example, where mass is the factor to which costs are related.
In the energy sector, we find approaches such as exergy costing, which is a part
of exergoeconomics or thermoeconomics. The essence of exergoeconomics is that
it “combines a detailed exergy (exergy is the availability to do work) analysis with
appropriate cost balances to study and optimize the performance of energy sys-
tems. The analyses and balances are usually formulated for single components of
an energy system. Exergy costing, one of the basic principles of exergoeconom-
ics, means that exergy, rather than energy or mass, should serve as a basis for cost-
ing energy carriers.”32 In the literature, exergy costing has also been used in the
complete life cycle of an energy system.
This way of handling costs may sound utterly crude, and it is. It says nothing
about direct labor or overhead costs. It simply looks at what the costs have been his-
torically and scales them according to the most important cost driver, which in ship-
building is mass and in some energy systems is exergy. Such methods can serve well
when extensive historical material is available, the products are produced unit by
unit (such as a shipyard producing ferries), one dominant cost driver is used, and the
products do not differ much (such as in size, technology, use patterns, and opera-
tional characteristics). It is crucial that the products are produced unit by unit
because that effectively reduces cost allocation issues to virtually nothing. By the
same token, the existence of a dominant cost driver is paramount because otherwise
the analogy has no basis. To ensure the relevance of historical data, it is vital that the
products do not change much. Thus, such methods have limited usage.
This book does not discuss such ways of performing LCC because they are simple and well established. You don't need to read this book to do analogy LCC,
although Chapter 3 on uncertainty and risk management may be useful.
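Indeed, the whole method fits in a few lines. The following minimal sketch scales a reference ship's historical cost by mass, the dominant cost driver mentioned above; every figure and name (ref_cost, ref_mass, target_mass) is a hypothetical illustration, not data from any shipyard.

```python
# Analogy-based LCC sketch: scale a reference product's historical cost
# linearly by the dominant cost driver (mass, as in shipbuilding).
# All figures are hypothetical.

def analogy_cost(ref_cost: float, ref_driver: float, target_driver: float) -> float:
    """Scale the reference cost by the ratio of the dominant cost driver."""
    return ref_cost * (target_driver / ref_driver)

ref_cost = 60e6        # reference ferry cost: $60 million (hypothetical)
ref_mass = 4000.0      # reference ferry mass: 4,000 tonnes (hypothetical)
target_mass = 5200.0   # new ferry to be estimated (hypothetical)

estimate = analogy_cost(ref_cost, ref_mass, target_mass)
print(f"Analogy estimate for {target_mass:.0f} t: ${estimate / 1e6:.1f} million")
```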
Parametric Models
Parametric models are in many ways more advanced versions of analogy models. A parametric LCC model is based on predicting a product's or a component's cost, either in
total or for various activities by using several models describing the relationships
between costs and some product- or process-related parameters. The predicting
variables typically include:
● Manufacturing complexity
● Design familiarity
● Mass
● Performance
● Schedule compression
Compared to the analogy models, three main differences exist. First, an analogy
model depends on one single, dominant cost driver, whereas a parametric model
can use several parameters. Second, an analogy model is based on linear33 rela-
tionships between costs and cost drivers, while parametric models rely on one or
more nonlinear regression models. Third, whereas analogy models use an analogy
(such as mass) as a driver, parametric models are essentially regression, or response
surface, models that can be linear, quadratic, multidimensional, and so on.
Like the analogy models, parametric models do not handle overhead costs in a
credible fashion, nor do they go beyond simply presenting an assessment number
without any further insight, except what is a direct consequence of their parameters.
Parametric modeling, like analogy modeling, has clear limitations, but under some
circumstances they are sound approaches. However, when some propose parametric
approaches to optimize the economic performance of a manufacturing system, they
seem to stretch the validity of parametric modeling beyond its limits. It is clear that
parametric models are easy to use in optimization algorithms. However, one must
remember that open systems cannot really be optimized because they interact with
the environment and because implementing a solution can take so much time that
the solution is no longer the optimum by the time it is implemented. It is not with-
out reason that Nobel laureate Herbert Simon invented the terms bounded rational-
ity and satisficing. Basically, what he said was that in most real-life circumstances,
we must search for a solution that is good enough and not necessarily “the best.”
Since parametric models can offer more insight and higher accuracy than anal-
ogy models, they are often found in the engineering literature. Parametric models
can also perform well as models within a cost accounting system, preferably one
that can handle overhead costs well, such as ABC (see “Cost Accounting
Models”). For example, if an ABC model discovers that a particular product incurs
too many disposal costs related to cutting raw materials, a parametric model can
be used to investigate how to reduce the need for disposing of waste. In such a
model, it would be useful to look at the direct labor costs, the machine times, and
the associated overhead costs, for example.
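To illustrate the regression idea, the sketch below fits a power-law cost model, cost = a × mass^b, to a handful of invented observations by least squares on log-transformed data. The data, the power-law form, and all names are assumptions made for this example only.

```python
# Parametric LCC sketch: fit cost = a * mass**b by least squares on
# log-transformed data, then predict the cost of a new design.
# All observations are hypothetical.
import math

masses = [1200.0, 2500.0, 4000.0, 5500.0]   # tonnes (hypothetical)
costs = [22e6, 41e6, 60e6, 78e6]            # dollars (hypothetical)

# Ordinary least squares on (ln mass, ln cost): ln cost = ln a + b ln mass.
xs = [math.log(m) for m in masses]
ys = [math.log(c) for c in costs]
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
sxx = sum((x - x_bar) ** 2 for x in xs)
b = sxy / sxx
a = math.exp(y_bar - b * x_bar)

new_mass = 4800.0
print(f"cost ≈ {a:.3g} * mass^{b:.2f}")
print(f"Predicted cost at {new_mass:.0f} t: ${a * new_mass**b / 1e6:.1f} million")
```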
Activity-Based Costing
Activity-Based Costing (ABC)37 is a significant improvement over the volume-based systems for reasons explained in Chapter 4. However, ABC alone cannot handle the design needs of the twenty-first century, for at least four reasons.
Just-in-Time Costing
In contrast to volume-based costing systems that focus on products, Just-in-Time
(JIT) Costing is oriented toward process and time, as measured by the cycle time.
Compared to all other costing systems I am aware of, JIT Costing has special ways of finding the cycle time cost and of treating direct labor.39
The focus on cycle time in JIT Costing, however, has led to a slightly different definition, but nevertheless a significant cycle time cost formula:

Cycle Time Cost = (Overhead Expenses + Direct Labor Costs) / Cycle Time in Hours   (2.2)
The big difference between these two definitions is that JIT Costing uses time as a basis for cost absorption. The cycle time includes:
● Process time (machine, direct labor, total quality control in line inspection)
(value added)
● Queue (nonvalue added)
● Setup (nonvalue added)
● Wait (nonvalue added)
● Move (nonvalue added)
● Inspection (nonvalue added)
Cycle times exclude line stops for quality, direct-delivered parts, and training. Because line stops are not included in the cycle time, a unique practice can be employed in a JIT environment: JIT encourages manufacturing to stop production immediately whenever a quality problem is discovered!
Although defining cycle time is straightforward, tracing cost contributors is
more complicated. A common method is the Average Unit cycle time Method
(AUM), which is based on First-In, First-Out (FIFO) product flow (see Equation
2.3). Another requirement is a line audit at least twice a day, where an actual count
of the number of products at each process step is performed.
AUM = Daily Production Hours / Number of Units Produced   (2.3)

Total Product Cycle Time = Average Number of Units in Process × AUM   (2.4)
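A minimal computation tying Equations 2.2 through 2.4 together might look as follows. The production figures are hypothetical, and reading the denominator of Equation 2.2 as the period's production hours (so that products absorb cost in proportion to their cycle time) is one plausible interpretation, not the only one.

```python
# JIT Costing sketch: Equations 2.2-2.4 with hypothetical figures.

daily_production_hours = 16.0   # hypothetical: two 8-hour shifts
units_produced = 320            # hypothetical daily output
avg_units_in_process = 45       # from the twice-daily line audit (hypothetical)
overhead_expenses = 24_000.0    # dollars per day (hypothetical)
direct_labor_costs = 8_000.0    # dollars per day (hypothetical)

aum = daily_production_hours / units_produced        # Eq. 2.3 (hours per unit)
total_cycle_time = avg_units_in_process * aum        # Eq. 2.4 (hours)
rate = (overhead_expenses + direct_labor_costs) / daily_production_hours  # Eq. 2.2

print(f"AUM: {aum * 60:.1f} minutes per unit")
print(f"Total product cycle time: {total_cycle_time:.2f} hours")
print(f"Cycle time cost rate: ${rate:,.0f} per hour")
```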
Because of this heavy focus on time, to avoid serious cost distortions, it is clear
that the product diversity cannot be too large. Thus, some claim that JIT is best
suited for the manufacture of closely related standardized products in which homo-
geneity (rather than heterogeneity) is a dominant characteristic of the output.
Yet some disagreement exists as to what extent cycle time is the cost driver. This
disagreement seems to be mainly due to how people envision the JIT manufactur-
ing environment, that is, the degree of automation and hence time focus. For exam-
ple, some explain that JIT environments normally, though not necessarily, operate with automated equipment, and that product costs are classified as either materials costs (including all materials-related costs) or conversion costs (the rest of
production-related costs). The allocation procedure is mostly based on cycle time,
except materials costs, which are allocated in a traditional way. Clearly, this way
of doing JIT Costing is less radical than what is proposed here.
A third view holds that JIT Costing distinguishes itself mainly in three aspects:
1. Costs associated with raw materials and in-process inventory are put in a
Raw and In-Process inventory (RIP) account.
2. The usage of conversion costs (see above).
3. Overhead costs are not included in the product costs until the products are
finished. Thus, no Work In Process (WIP) accounts are needed.
Regardless of what view we take, JIT Costing is almost like a subset of ABC in
the sense that volume-based drivers are not necessarily used at all. In JIT Costing,
however, the main cost driver is predetermined—time—while ABC enables com-
plete flexibility in cost driver definitions. Other similarities exist as well. For exam-
ple, ABC was originally designed to improve the accuracy of product costing while
JIT was conceived as a method for eliminating all forms of wasteful activity in an
organization, yet both focus on value adding with ABC as a side benefit.
Essentially, while ABC is designed to manage (eliminate or reduce) the com-
plexity of modern organizations, JIT seeks to simply eliminate this complexity.
Clearly, time is a very important factor when it comes to utilizing the production
resources as efficiently as possible, because the less time spent per product unit,
the higher the throughput and the more likely it is to be profitable. However, time
is not the only possible cost driver that could be employed, and in some cases a
heavy focus on time can even be counterproductive, as shown in the literature.
Therefore, ABC seems to provide a more general approach than JIT Costing does.
It is evident that ABC will work well in a JIT environment; however, since the
JIT environment seeks to eliminate the complexities ABC is good at handling, one
might as well settle for the simpler JIT Costing system. Thus, the ultimate ques-
tion is whether the complexity is needed or not, which is a strategic issue (diver-
sification versus cost effectiveness, for example).
How JIT Costing would work for LCC purposes is not discussed in the litera-
ture I have reviewed, but since time estimates will be very difficult to obtain real-
istically, particularly in the use phase of the product, it is not likely that JIT Costing
would work in LCC. Also, JIT Costing seems to be too specialized for manufac-
turing environments.
Target Costing
The conceptual idea behind Target Costing (TC) is to balance the needs of the cus-
tomers with the profit need of the company. The Consortium for Advanced
Manufacturing—International utilizes the following definition:
Target Costing is a system of profit planning and cost management that is
price driven, customer focused, design centered and cross functional. Target
Costing initiates cost management at the earliest stages of product develop-
ment and applies it throughout the product life-cycle by actively involving
the entire value chain.
The earliest (late 1960s and early 1970s) implementers of TC were Japanese
automotive manufacturers. TC was a logical outgrowth of their analysis of the
causes of cost. As they sought ways to reduce or eliminate costs, the influence of design on manufacturing costs became apparent. The Japanese recognized that
profitability depends on marketplace success, which is the result of developing
cogent business strategies, satisfying customers, and confronting competitors. The
costing process becomes interdependent with the business management process
and provides a common focus for the activities of the enterprise. Hence, TC is
process oriented, but, more important, TC is employed in strategic management.
Thus, TC can, for example, be combined with ABC and JIT Costing if desired.
However, TC provides two concepts that ABC does not offer explicitly. One is
the way products are being priced and the other is to turn away from cost cutting to
cost elimination via design and planning. These two points are important paradigm
shifts:
1. From cost-plus pricing40 to market-based pricing In TC, customers and
competitors drive market prices. The company sets the target profit, and the
resulting cost (target price - target profit) is the target cost the company
must meet (see the sketch after this list). In the literature, this approach is
also referred to as the market-based approach or the deductive method, and
it is a fundamentally different way of looking at pricing. It is rooted in the
fact that in the marketplace, customers have many high-quality products to
choose from; the price of these products may therefore be the only significant
competitive edge for a company. It should be noted that the other popular
method in TC, called the additive method, is essentially a cost-plus approach,
but it is less attractive in my opinion because it is less customer focused.
2. From cost-cutting during production to cost control during design As
we have seen, it is widely noted in the literature that during the design phase
(concept, design/engineering, and testing), most of the basis for the costs is
formed (refer to Figure 1.1). TC takes this into account and is proactive
rather than reactive in its management focus.
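In code, the deductive calculation of the first shift is a one-liner; the sketch below contrasts it with a cost-plus price for the same hypothetical product (all figures are invented).

```python
# Target Costing sketch (hypothetical figures): market-based pricing works
# backward from the market price to an allowable (target) cost.

market_price = 249.0    # set by customers and competitors (hypothetical)
target_margin = 0.20    # required profit as a share of price (hypothetical)

target_cost = market_price * (1 - target_margin)   # target price - target profit
print(f"Target cost: ${target_cost:.2f}")

# Contrast: cost-plus pricing works forward from cost to price.
estimated_cost = 215.0  # current estimated product cost (hypothetical)
cost_plus_price = estimated_cost * (1 + target_margin)
print(f"Cost-plus price: ${cost_plus_price:.2f}")
print(f"Cost gap to close in design: ${estimated_cost - target_cost:.2f}")
```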
TC also requires a shift (1) from an internal to an external focus, (2) from an
internal vantage point to listening to the customer, and (3) from independent
efforts to interdependent efforts, but these shifts are not unique to TC. Indeed,
these paradigm shifts were probably preceded by the work of Deming and others
like Juran and Crosby. These thoughts and more were popularized in the “If Japan
Can, Why Can’t We” TV show on NBC in 1980 and were articulated for the gen-
eral public in Deming’s 1986 book, Out of the Crisis. The focus on quality is an
idea adopted by all modern cost management approaches, since quality affects
customer satisfaction substantially, as noted in a wide array of management and
design literature.
What is less known is the enormous impact the focus on quality has on the
operating costs of the company. As Taiichi Ohno, the former vice president of
Toyota, said, “Whatever an executive thinks the losses of poor quality are, they
are actually six times greater.”41 In the distribution industry, for example, studies
indicate that up to 25 percent of operating expenses stem from internal quality
problems. But former vice chairman of Chrysler Corporation Robert Lutz warns
us that “too much quality can ruin you.” Indeed, in the literature we can read about
many companies that got into serious financial difficulties soon after receiving quality awards. For example, many Baldrige Award winners have encountered severe financial difficulties. In fact, 14 out of the 43 supposedly best-run
companies in the United States that Peters and Waterman reported on in their book
In Search of Excellence “were in financial trouble within two years of the book’s
completion.”42 Hence, a cost management system that can also capture internal
quality is essential.
The methodology of the value chain concept can be summarized in three steps:
1. Identify the value chain and then assign costs, revenues, and assets to the
value activities.
2. Diagnose the cost drivers regulating each value activity.
3. Develop sustainable competitive advantage either through controlling cost
drivers better than competitors or by reconfiguring the value chain.
In the value-chain concept, the cost drivers are diagnosed by grouping them into
two different groups: (1) structural cost drivers and (2) executional cost drivers.
The structural cost drivers are derived from a company's choices of economic structure:
● Scale What is the size of the investment to be made in manufacturing,
R&D, and marketing resources (horizontal integration)?
● Scope What is the degree of vertical integration?
● Experience How many times in the past has the firm already done what it
is doing again?
● Technology What process technologies are used in each step of the firm’s
value chain?
● Complexity How wide a line of products or services is being offered to
customers?
The executional cost drivers determine a firm’s cost position, which hinges on
its ability to execute successfully. In contrast to the structural cost drivers, more is
always better:
● Workforce involvement Is the workforce committed to continuous
improvement?
● Total Quality Management (TQM) Is the workforce committed to total
product quality?
● Capacity utilization What are the scale choices on maximum plant con-
struction?
● Plant layout efficiency How efficient against current norms is the plant’s
layout?
● Product configuration Is the design or formulation of the product effective?
● Linkages with suppliers or customers Is the linkage with suppliers or
customers exploited according to the firm’s value chain?
Attempts have been made to create a comprehensive list of cost drivers. In the
strategic management literature in particular, good lists of cost drivers exist. The
grouping of the cost drivers is particularly interesting as a way of helping man-
agement focus on the correct aspects of the company from a strategic perspective.
The last two lists are good preliminary checks for management before a more
elaborate analysis is undertaken. However, as indicated by the very name Strategic
Cost Management, this methodology is intended for strategic purposes only.
Hence, SCM is not very useful in design except when it comes to identifying goals
and objectives. Most important, it introduces the notion of value-chain analysis,
which is essential in three ways:
1. Supplier linkages, which are vital to consider every time major changes are
made to make sure that suppliers can handle the changes.
2. Customer linkages, which basically mean that an LCC for the customer
should be performed to make sure long-term relationships with customers
are a win-win situation.
3. Missed opportunities, which must be avoided primarily by being aware of
what is going on in the value chain. For example, if technology changes in
one of the links in the value chain, both downstream customers and upstream
suppliers must evaluate the effects on their business.
Furthermore, SCM does not focus much on risk and uncertainty and is suited
for strategic issues only. Also, the last two steps in the approach seem too generic
to provide any help for many practitioners.
Before Activity-Based LCC can be introduced along with pertinent LCC dis-
cussions, we need to discuss how to deal with risk and uncertainty, which is done in Chapter 3. Risk and uncertainty are, after all, inherent in LCC. Also, the brief
discussion on ABC in this chapter must be expanded upon significantly, which is
done in Chapter 4. Without understanding ABC, at least conceptually, there is lit-
tle hope of understanding Activity-Based LCC.
NOTES
1. Other models also exist for the marketing perspective, such as the seven-stage model
presented by H.G. Adamany and F.A.M. Gonsalves in “Life Cycle Management,”
Handbook of Cost Management, ed. B.J. Brinker, Boston, MA: Warren, Gorham &
Lamont, 1997, pp. D5-1–D5-26. However, they all essentially say the same thing; only the degree of refinement varies.
2. More details are provided in M. Schwartz’s, The Value of R&D: Creating Value
Growth through Research & Development, London: Stern Stewart Europe, Ltd.,
1999, p. 11.
3. J. Pettit, EVA & Strategy, New York: Stern Stewart & Co., 2000, p. 17.
4. Government Asset Management Committee, “Life Cycle Costing Guideline,”
Sydney, New South Wales: Government Asset Management Committee, 2001, p. 15.
5. According to its CEO and Chairman of the Board, Ray C. Anderson. See R. C.
Anderson’s, Mid-Course Correction, Atlanta, GA: The Peregrinzilla Press, 1998,
p. 204.
40. Cost-plus pricing is based on calculating the price as the cost plus a profit. Various
companies use different cost as a basis, but often the cost of goods sold is the basis.
41. G. Taguchi and D. Clausing, "Robust Quality," Harvard Business Review, January–February 1990, pp. 65–75.
42. J. Kandampully and R. Duddy, "Competitive Advantage Through Anticipation, Innovation and Relationships," Management Decision 37 (1), 1999, pp. 51–56.
43. J.K. Shank and V. Govindarajan, "Strategic Cost Management and the Value Chain," Handbook of Cost Management, ed. B.J. Brinker, Boston, MA: Warren, Gorham & Lamont, 1997, pp. D1-1–D1-37.
3
UNCERTAINTY ANALYSIS AND RISK MANAGEMENT
“Time and chance happen to them all.” It is the death of certainty and simple, clear-
cut, cause-and-effect relations. It is a nightmare for engineers, for managers, and
for decision-makers in general. Yet many systems will not function without uncer-
tainty. In fact, if you reduce the uncertainty to zero, the risk may increase, but if
the uncertainty is balanced, the risks may be acceptable. Indeed, risk and uncer-
tainty are formidable opponents for any decision-maker, because when we make
decisions, we expose ourselves to the risks that lurk, as it were, in uncertainty. But luckily, we are not left to our own devices.
In this chapter, a discussion of risk and uncertainty is provided along with a
simple yet powerful way of analyzing risk and uncertainty to support effective
uncertainty analyses and risk management.
approaches can be outright dangerous as they may ignore risks (positive and neg-
ative). This is further emphasized as “The effective business focuses on opportu-
nities rather than problems.”4 Thus, risk management is ultimately about being
proactive, and that must be a part of all other management activities. It is therefore
important that risk goes beyond the directly negative associations. The loss of
opportunities is just as bad for business as direct losses. The difference in my opin-
ion lies in that direct losses have mostly a short-term effect and therefore get more
attention, whereas losses of opportunities have mostly a long-term effect.
Also, it may be useful to be aware of cultural differences. In a survey, for exam-
ple, 49 countries were studied for more than 20 years by questioning over 100,000
respondents. Hence, the study is without any doubt statistically significant; that is,
the chance of error is low. The results are shown in Table 3.1.
Uncertainty avoidance refers to the degree to which cultures feel uncomfortable with uncertainty. Whether the questions are risk oriented and/or
uncertainty oriented I do not know, but the respondents are probably mainly think-
ing in terms of risk; that is what most people are concerned about, because it
involves loss and other negative associations. In any case, what is interesting is that
very few of the really risk-averse cultures are also doing very well. The only excep-
tions seem to be France, Japan, and South Korea, whereas many of the most suc-
cessful economies are risk takers, such as Canada, the United States, Great Britain,
Hong Kong, and Singapore. We also understand from Table 3.1 that being economically successful consists of more than taking risks: although Jamaica is a risk taker, it does not perform well.
(Table 3.1 source: Adapted from P.C. Brewer, "Managing Transnational Cost Management Systems," Journal of Cost Management for the Manufacturing Industry, November/December 1997, pp. 35–40.)
Because risk and success seem to follow each other in many cases, it is impor-
tant that organizations have a consistent attitude toward risk. According to some
research, the risk-taking strategy is an essential part of the total strategy, and fur-
thermore, risk acceptance characteristics are essential to the success of many
strategies. This is particularly crucial in organizations where innovation and cre-
ativity are important, because innovation and creativity are uncertain and often
associated with great financial risk as the capital needs are often substantial. One
way to accomplish this is to consider failure as “enlightened trial and error,” which
essentially means that we must learn from our mistakes. That can be a source of
success as well. According to former chairman of the Coca-Cola Company,
Roberto Goizueta, the disastrous New Coke “was the best that could ever happen
to Coca-Cola because this gave a valuable lesson to management. That is why the
company has one of the highest returns on investment of any U.S. company
today.”5 This goes to show that although risk is a negative thing, it can be a source
of great success as long as we learn from our failures.
Sources of Risk
Of course, too many sources of risk exist to mention them all, but Tables 3.2 and 3.3 provide some examples that may be helpful to consult when doing risk analysis. It is important, however, not to limit oneself to such tables or checklists. Often, the
greatest risks are due to the inherent belief system in an organization, the theory
of the business. When this system no longer reflects reality, the organization runs
a great danger of becoming outdated. This is what caused many large multina-
tionals to succumb in the last 20 years; their theory of business no longer worked.6
Take the American car makers, for example. For decades, they provided affordable
cars to a vast number of customers, but they failed to recognize the quality drive
initiated by Japanese car makers; their theory of business no longer worked.
Of course, not all the aforementioned sources of risk are equally important. A
1999 Deloitte and Touche survey identified the following five risks as significant
(on a scale of 1 to 9):
● Failure to manage major projects (7.05)
● Failure of strategy (6.67)
● Failure to innovate (6.32)
● Poor reputation/brand management (6.30)
● Lack of employee motivation and poor performance (6.00)
The survey basically indicates that the most significant risks were frequently
operational or strategic in nature. For example, the problems that ABB, the
Swedish-Swiss engineering giant, faces today are due to "its exposure to operating risks that have been poorly understood."7 The major projects that have been identified often had an element of technology.

Table 3.2 Examples of Sources of Risk

Commercial and Strategic: 1. Competition; 2. Market demand levels; 3. Growth rates; 4. Technological change; 5. Stakeholder perceptions; 6. Market share; 7. Private sector involvement; 8. New products and services

Economic: 1. Discount rate; 2. Economic growth; 3. Energy prices; 4. Exchange rate variation; 5. Inflation; 6. Demand trends; 7. Population growth; 8. Commodity prices

Environmental: 1. Amenity values; 2. Approval processes; 3. Community consultation; 4. Site availability/zoning; 5. Endangered species; 6. Conservation/heritage; 7. Degradation or contamination; 8. Visual intrusion; 9. Site acquisition

Political: 1. Parliamentary support
Social: 1. Community expectations
Contractual: 1. Client problems
(continues)

Table 3.2 Examples of Sources of Risk (Continued)

Compliance: 1. Breach of listing rules; 2. Breach of financial regulations; 3. Breach of Compliance Act requirements; 4. Litigation risk; 5. Breach of competition laws; 6. VAT problems; 7. Breach of other regulations and laws; 8. Tax penalties; 9. Health and safety risks; 10. Environmental problems

Source: Adapted from Implementing Turnbull: A Boardroom Briefing with the kind permission of the Institute of Chartered Accountants of England & Wales.
However, be aware that risks are often industry specific and closely related to the circumstances of the company. In other words, use Tables 3.2 and 3.3 only as a starting point from which to get some ideas. Then feed those ideas into a process of identifying and analyzing risks, because that must be done before we can even start thinking about managing risk.
Identifying risks is often a creative exercise in which a process framework
is commonly used. There are probably as many ways of doing this as there are
companies, but according to the 1999 Deloitte and Touche survey, the following
techniques are the most useful (on a scale from 1 to 9) for identifying business
risks:
● Roundtable debates on key risks (6.92)
● Interactive workshops (6.62)
● Strategic risk reviews (6.58)
● Specific studies/surveys (6.42)
● Structured interviews (6.04)
● Management reports (5.60)
● Checklists/questionnaires (4.43)
It is important to recognize that the previous list is valid only for business risks
and not for technical risks, for example. Of course, these techniques must be
applied to the core areas of the company to have the desired effects. That includes
areas such as strategy, product development, market understanding, service and
product development, the quality of the company’s management, changes both
within and without, and so on.
probability that is relevant for this book and concerns most theories of probability,
and fuzzy sets,10 or possibility theory.
In the former sense, probability refers to a “degree of belief or an approvabil-
ity of an opinion,”11 but it should be noted that “the nature of its (probability) sub-
ject-matter is still in dispute” and that “its logic runs into ‘circularity’.”12 However,
by investigating the roots of the word probability, we find that the previous interpretation is probably useful because the Greek word εἰκός (eikos), which means "plausible" or "probable," was used by Socrates in ancient Greece to describe "likeness
to truth.”13 When we try to forecast the future, we are essentially trying to express
our opinion and belief about the future as it likens truth. Thus, this interpretation
of probability seems to be what we are looking for.
Evidently, the concept of probability in the former sense refers to a degree of
belief, but with respect to what? It is here that the probability camp and the possi-
bility camp stumble into each other, in my opinion. The opponents of possibility
theory and fuzzy logic uses Cox’s proof, saying that “if one wishes to represent
‘plausibility’ of a proposition by a real number and require consistency in the
resulting calculus, then the ‘axioms’ of probability follow logically.”14 The propo-
nents of possibility theory claim that the most limiting aspect of probability the-
ory is “that it is based on two-valued logic.”15 I believe the points these disputes
miss are that:
● Internal logic consistency does not prove anything with respect to reality, as
discussed earlier.
● Probability theory is not a complete representation of probability.
Probability theory is, as its opponents claim, digital, but that is more due to the
fact that the theory developed from gambling, and dice cannot have any interme-
diate values. But from the original concept of probability, as discussed earlier, we
see that probability is a measure of the degree of belief. Thus, the point is that probability theory simply does not entirely capture the term probability.
I believe much of the disagreement between the probability camp and the pos-
sibility camp is that both are unaware of these philosophical roots of probability
theory. They have to a large extent been comparing the various forms of measur-
ing probability to each other and claiming that one is superior to the other. What
they have missed is that both are trying to measure the degree of belief, but that
probability theory often has an absolute approach and always a digital approach, while possibility theory has a relative approach based on measures of degree. This will be clarified in the "Increase Uncertainty to Lower Risks" section. For now, it
suffices to understand that probability theory requires more information than
possibility theory.
Of course, probability theory is a sound approach in many cases and it provides
the most insight, but as explained later in “Probability Theory Versus Possibility
Theory,” cases do occur when this approach may prove deceptive. In such cases,
the possibility camp view is more useful because for them the point is to compare
the outcomes against each other; that is, the solution space does not matter. This
eliminates several problems, but we lose precision. My point is simply that we
must understand that the two theories complement each other and that we must
understand which theory fits under which circumstances.
However, a genuine difference exists between the probability view and the pos-
sibility view, and that is the interpretation. In probability theory, an outcome will
either occur or not, whereas in fuzzy logic and possibility theory degrees of occur-
rence exist. Possibility theory is therefore closer to the concept of probability than
probability theory, but this is simply due to historical reasons.
In all situations where Monte Carlo simulations take place, the calculus part of
the theories matters the most in terms of operationalizing the difference, which is
explained in the “Probability Theory Versus Possibility Theory” section. From that
discussion, it is evident that because Activity-Based Life-Cycle Costing (LCC) relies on such numerical approximation methods, this difference between probability theory and possibility theory remains a matter of theory only.
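As a minimal illustration of the numerical route, the sketch below propagates input uncertainty through a toy cost model by Monte Carlo sampling from triangular distributions; the cost structure and the distribution parameters are invented for the example.

```python
# Monte Carlo sketch: propagate input uncertainty through a toy cost model.
# The triangular distributions (low, high, mode) are hypothetical assumptions.
import random
import statistics

random.seed(1)
N = 10_000
totals = []
for _ in range(N):
    labor = random.triangular(90_000, 150_000, 110_000)    # low, high, mode
    energy = random.triangular(20_000, 45_000, 30_000)
    disposal = random.triangular(5_000, 25_000, 9_000)
    totals.append(labor + energy + disposal)

print(f"Mean life-cycle cost: ${statistics.mean(totals):,.0f}")
print(f"Std. deviation:       ${statistics.stdev(totals):,.0f}")
print(f"90th percentile:      ${sorted(totals)[int(0.9 * N)]:,.0f}")
```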
Monte Carlo methods are, however, not the most common approach in economic analyses, including LCC (as far as I have seen). Rather, the most common way is to assign probabilities to the outcomes and to use decision trees or similar techniques, as illustrated in the example below.
Decision (reuse or recycle?), with events, outcomes, probabilities, and Expected Monetary Values (EMVs):

Reuse:
  Number of bottles = 10,000: outcome -$15,000, probability 40%, contribution -$6,000
  Number of bottles = 30,000: outcome $40,000, probability 60%, contribution $24,000
  EMV (Reuse) = $18,000

Recycle:
  Number of bottles = 10,000: outcome $30,000, probability 40%, contribution $12,000
  Number of bottles = 30,000: outcome $30,000, probability 60%, contribution $18,000
  EMV (Recycle) = $30,000
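The tree above reduces to a few lines of arithmetic. The following sketch reproduces both EMVs from the example's outcomes and probabilities.

```python
# EMV sketch for the reuse-versus-recycle decision above.
# Each action maps to (outcome in dollars, probability) pairs from the example.

actions = {
    "Reuse":   [(-15_000, 0.40), (40_000, 0.60)],
    "Recycle": [(30_000, 0.40), (30_000, 0.60)],
}

for action, branches in actions.items():
    emv = sum(outcome * p for outcome, p in branches)
    print(f"EMV({action}) = ${emv:,.0f}")
# EMV(Reuse) = $18,000; EMV(Recycle) = $30,000 -> Recycle wins by the EMV rule.
```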
Dependency on Uncertainty
Uncertainty exists in all situations that are unknown, unpredictable, open ended,
or complex, but matters that are unknown or unpredictable are too difficult for
analysis. I believe uncertainty can be best described as a subset of unpredictabil-
ity, which in turn is a subset of the unknown. The reason is that an uncertain mat-
ter is not unknown or unpredictable. We simply lack information and knowledge
about it; we lack certainty. In other words, unpredictability can be reduced to
uncertainty given enough knowledge. Uncertainty is therefore best described in
relation to complexity, which arises in open systems.
that it is the hypothesis that is wrong. Nature is not predictable; nature is unpre-
dictable, but only within certain limits as defined by the laws of nature.
But how does that secure robustness? Because nature is unpredictable, all
organisms contain latent features or abilities in order to face the unpredictable.
This is not necessarily by purpose and design, but at least as a consequence over
time due to the continuous changes in nature. This information resides in the
organism’s genes and cells, and it contains much more information than needed
for its daily life. Essentially, every organism has an overkill of genetic information
for its own purpose, but this overkill makes nature robust. If a major cataclysm
takes place, someone will always benefit from it. For example, in the very early
history of our planet, oxygen was a toxic by-product of the bacteria that ruled the
world. Eventually, the amount of oxygen in the atmosphere rose above a certain
level and the number of bacteria was vastly reduced (something like 99 percent of
bacteria died), yet this gave room for life that thrived on oxygen, which ultimately
led to the dawn of Homo sapiens—us.
Another example is the stock market. If the stock market was predictable, there
would be no point trading stocks because everybody would know what was going
to happen. Hence, there would be no market. Likewise, if the stock market was
completely unpredictable, nobody would gain anything because no way would
exist for establishing an advantage of some sort. Again, there would be no market.
A third example is democracy. If democracy was predictable, what would be
the point of voting? There would be no democracy. Yet if democracy was com-
pletely unpredictable, what would be the point of election campaigns and politics?
In other words, several examples support the claim that uncertainty is necessary
for the survival of open systems.
What determines whether something is unknowable, unpredictable, or uncer-
tain is our current level of knowledge. Something that appears to be unpredictable
can therefore be reduced to uncertainty given an increased amount of knowledge.
Thus, for a system that is as complex as nature, an uncertainty analysis is applica-
ble but quite unreliable because we currently understand too little about nature. As
Göthe said:
Nature goes her own way, and all that to us seems an exception is really
according to order.
In far less complex systems than nature, such as corporations, it may be useful to deliberately create uncertainty in order to respond more effectively to the overall environmental unpredictability. Without this uncertainty, the system would gravitate toward one solution and hence lose its flexibility to respond to environmental unpredictability. This supports the practice of large corporations maintaining competing brands. Although these brands compete against each other and thereby
create uncertainty within the corporation, they tend to make the corporation as a
whole a more adaptable entity and therefore more competitive in the long run.
[Figure 3.2 Basic types of uncertainty. Uncertainty divides into fuzziness (the lack of definite or sharp distinctions: vagueness, cloudiness, haziness, unclearness, indistinctness, sharplessness) and ambiguity (one-to-many relationships); ambiguity in turn divides into discord (disagreement in choosing among several alternatives: dissonance, incongruity, discrepancy, conflict) and nonspecificity (two or more alternatives are left unspecified: variety, generality, diversity, equivocation, imprecision). Source: Adapted from G.J. Klir and B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications. New York: Prentice-Hall, 1995, p. 268.]
The first fuzziness is essentially the error we make when making estimates from
observations. The second fuzziness revolves around our inability either to include
(exactly) everything or to simplify, while the third basically deals with the differ-
ences in human perceptions.
Ambiguity, however, results from unclear definitions of the various alternatives
involved. For example, “See you tomorrow at 7:00” is an ambiguous statement
because you will wonder whether it is A.M. or P.M. The alternatives involved can be
in conflict with each other or they can be left unspecified. The former is ambiguity
resulting from discord, whereas the latter is ambiguity resulting from nonspecificity.
The ambiguity resulting from discord is essentially what the probability theory
revolves around because “probability theory can model only situations where there
are conflicting beliefs about mutually exclusive alternatives.”19 In fact, neither fuzzi-
ness nor nonspecificity can be conceptualized by probability theory. In other words,
uncertainty is too wide-ranging a concept for probability theory.
In fact, one study discussed the various methods used in risk analysis and clas-
sified them as either classical (probability based) or conceptual (fuzzy sets based):
. . . probability models suffer from two major limitations. Some models require
detailed quantitative information, which is not normally available at the time
of planning, and the applicability of such models to real project risk analysis
is limited, because agencies participating in the project have a problem with
making precise decisions. The problems are ill-defined and vague, and they
thus require subjective evaluations, which classical models cannot handle.20
Zadeh therefore launched the concept of fuzzy logic, the first new method of
dealing with uncertainty since the development of probability. Another product of
Zadeh’s mind is possibility theory, which will be discussed in “Theory of Fuzzy
Numbers and Fuzzy Intervals.” With respect to the discussion in “Probability
Theory Versus Possibility Theory," we see here that it is important to distinguish between the concept of probability, which has not been discussed until now, and the measures of probability, which have been scrutinized previously.
Both fuzzy logic and possibility theory are used to examine and reduce uncer-
tainty in areas as diverse as washing machines and environmental problems. Many
are therefore starting to believe that everything is a matter of degree. This point is lost in probability theory because in it everything is crisp and clear; an event will either occur or not. This is not strange since probability theory was developed from games of dice and gambling, and in such settings logic is crisp. That is, a die will produce a 1, 2, 3, 4, 5, or 6, and nothing in between. Over time, probability distri-
butions have reduced this problem of discreteness to some extent, but they rely on
hard data, something we often do not have the luxury of possessing.
[Figure: events A and B, with intersection A ∩ B, in sample space S.]
ability to satisfy wants,”22 and saying that utility is “the ultimate goal of all eco-
nomic activity.”23
In fact, Daniel Bernoulli (in 1738) and later economists devised an entire the-
ory called utility theory. Utility theory, or expected utility theory, is important
because it attempts to reconcile real behavior with EMV in decision-making, and
it is in fact the theory that most economists rely on when dealing with risks and
decision-making.24 It is both a prescriptive and a descriptive approach to decision-making: it tells us how individuals and corporations should make decisions, and it predicts how they actually do make decisions.
The basic assumption in utility theory that Bernoulli developed is that people
make decisions to maximize Expected Utility (EU). According to the theory, each
decision-maker has a certain utility function that represents each outcome of an
event in terms of utility value. The utility function is therefore derived from the deci-
sion-maker’s attitudes toward risk and outcomes (refer to Figure 3.4) and is usually
determined by the popular Certainty Equivalent Method, whereby information from
an individual is elicited by asking questions about lotteries. Depending on the shape of the utility function, it is determined whether an individual is risk averse, risk seeking, or indifferent (risk neutral), as shown in Figure 3.4 and in the sketch after this list:
● Risk-averse person, concave utility function
● Risk-neutral person, linear utility function
● Risk-seeking person, convex utility function
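To make the three shapes tangible, the following sketch computes the certainty equivalent of a simple 50/50 lottery under a concave, a linear, and a convex utility function. The square-root and square forms are common textbook stand-ins assumed here for illustration; utility theory itself does not prescribe them.

```python
# Utility-shape sketch: expected utility of a 50/50 lottery between
# $0 and $100,000 under three illustrative utility functions.
import math

lottery = [(0.0, 0.5), (100_000.0, 0.5)]

utilities = {
    "risk averse (concave, sqrt)":   lambda w: math.sqrt(w),
    "risk neutral (linear)":         lambda w: w,
    "risk seeking (convex, square)": lambda w: w * w,
}
inverses = {
    "risk averse (concave, sqrt)":   lambda u: u * u,
    "risk neutral (linear)":         lambda u: u,
    "risk seeking (convex, square)": lambda u: math.sqrt(u),
}

emv = sum(w * p for w, p in lottery)
print(f"EMV of lottery: ${emv:,.0f}")
for name, u in utilities.items():
    eu = sum(u(w) * p for w, p in lottery)
    ce = inverses[name](eu)  # certainty equivalent: sure amount with u(ce) == EU
    print(f"{name}: certainty equivalent = ${ce:,.0f}")
```

As expected, the risk-averse certainty equivalent falls below the EMV, and the risk-seeking one lies above it.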
As illustrated in Figure 3.4 and argued earlier, it is believed that the utility func-
tion is dependent on wealth. In fact, many economists argue that the logical thing
to do for an intelligent investor is to give up a smaller risk premium when faced
with the same risky option as those less fortunate.25 But people do not necessarily
act logically, as explained later.
The fact that decision-making can be described by such an apparently beautiful mathematical theory is very appealing, and some have therefore introduced utility theory into cost models. Thus, it may appear that utility theory can be very
[Figure 3.4: Utility as a function of wealth for risk-averse (concave), risk-neutral (linear), and risk-seeking (convex) decision-makers.]
What-If Technique
As the name indicates, the what-if technique is a technique where we ask ourselves
or a computer model many “What if this happens?” types of questions. Instead of
calculating the probabilities of the various events as in the decision tree method, we simply vary the input variables in the model and look at the response in the output variable(s). In other words, we are investigating the responsiveness, or sensitivity, of the output variable(s) to changes in the input variables. A what-if analysis is therefore essentially a sensitivity analysis technique and a more general approach to measuring the impact of uncertainty.

[Figure: Profits of the reuse and recycle options (roughly -$20,000 to $60,000) as a function of the number of units (5,000 to 50,000).]
With respect to our example, we would not ask what is the probability of the
number of units being 10,000, 30,000, or anything else. We would rather ask
“What if the number of units received is 10,000?” and then use a model to assess
the profits. We would proceed like that until the decision-maker found the risk (of
making an erroneous decision) acceptable.
Although this technique is very simple, it is probably a better approach than the decision tree method because it enables us to study a larger number of possible outcomes more easily and more cost effectively. Chapter 6 illustrates the what-if technique in a case.
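A one-at-a-time what-if sweep is easy to script. The sketch below varies the number of units and recomputes the profit of each option; the linear reuse model and the flat recycle profit are hypothetical interpolations chosen only to be consistent with the four outcomes of the earlier decision tree example.

```python
# What-if sketch: sweep one input (number of units) and watch the output
# (profit) for each option. Both profit models are hypothetical.

def profit_reuse(units: int) -> float:
    # Linear interpolation consistent with -$15,000 at 10,000 units
    # and $40,000 at 30,000 units (hypothetical).
    return 2.75 * units - 42_500

def profit_recycle(units: int) -> float:
    # Flat profit regardless of volume, per the example's outcomes.
    return 30_000.0

for units in range(5_000, 50_001, 5_000):
    print(f"units={units:>6}: reuse ${profit_reuse(units):>10,.0f}, "
          f"recycle ${profit_recycle(units):>10,.0f}")
```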
Unfortunately, the technique does not overcome the basic problems of the deci-
sion tree method, although it is easier to try out many more scenarios. Rather,
another problem has been added: We can only vary one variable at a time because
otherwise we cannot measure its impact on the output variable. Thus, an implicit
assumption exists, namely that for every new value of an input variable, everything
else remains constant. But how many times can you recall when everything actually did remain the same? None? This approach therefore rests on a major flaw that makes its apparent reliability deceptive. It simply ignores the fact that in reality some input variables may have a higher value than anticipated, some a lower value, and for others nobody knows.
In the “How Belief Sheds Light on Risk and Uncertainty” section, the solution
to these problems is presented, but first we turn to a very simple and generic
approach for determining significant risks.
Table 3.4 Normalized Probabilities and Impacts due to Sea-Level Rise in the Next 50 Years

If Sea Level Rises   Likelihood       Risk Probability (p)   Impact         Risk Impact (i)
0.1 meters           Almost certain   0.95                   Low            0.05
0.5 meters           Highly likely    0.70                   High           0.50
1.0 meters           Likely           0.40                   Extreme        0.80
2.0 meters           Unlikely         0.10                   Catastrophic   0.95
Consider the example in Table 3.4. A sea-level rise is something we are actu-
ally facing, up to 3.0 mm per year, according to studies by the Intergovernmental
Panel on Climate Change (IPCC), but what is the risk? Economic estimates of the consequences have probably already been made, in which the impacts are measured in terms of money, but let us take another, simpler approach.
Our job is to look at the consequences, but we do not want to use any economic
approach because we think money cannot really capture a problem of this magni-
tude. We obtain probability estimates from a risk analysis, so all we have to do is find the impacts and calculate the risk factors (RFs).
First, we must identify the consequences. It is clear that a sea-level rise will
affect everybody living by the sea, but it is fair to assume that cities will face the
biggest challenges because they must build defenses for their harbors and/or raise
the city in some way. We therefore ask a group of city development experts what
they think the impact will be. Their answers are shown in the impact column in
Table 3.4.
Then we convert the verbal impact statements to numerical measures. We ask
the group to consider this: “On a scale of 0 to 1 (1 being the worst), how bad is cat-
astrophic? How bad is extreme?” We are essentially asking the experts to trans-
form the verbal statements to a baseline of their own liking. In other words, the
impact measure has no units.
The answers are found in the rightmost column of Table 3.4. We see from the
answers that the experts consider the rise in sea level from 0.1 meters to 0.5 meters
to be relatively worse than the rise from 1.0 meters to 2.0 meters. Perhaps they
think that once the water passes 0.5 meters, it is so bad that the impact curve starts
to flatten toward the 1.0 asymptote already at the 0.5 meters rise or even before.
Nevertheless, we now have both probability estimates and impact measures. Then
we just need to combine them to calculate the RF.
The RF can be calculated in numerous ways. It is important to be consistent and
document which method is used. The New South Wales Government Asset
Management Committee provides two techniques for calculating RF. Out of those
two techniques, the one I prefer is given by:
RF = p + i - (p × i)   (3.1)
The sea-level consequences can therefore be described as in Table 3.5. By
ordering the RF in Table 3.5 in decreasing order, we can generate a so-called risk
profile.
The purpose of the risk profile is to aid decision-makers in prioritizing. Priority
should be given to the consequences with the highest RF values. In our simple
example, we see from Table 3.5 that it is the extreme options that got the highest
RF values. For city planners in coastal areas of the world, this would imply (if the
data were real) that they should concentrate on small sea-level rises and very large
rises. Those in between should have a lower priority.
We can also do it even more simplistically, but not necessarily less reliably. If
we use a 1 to 5 scale for both probabilities and impacts (1 being very low and 5
being very high), we get a reworked Table 3.4 as shown in Table 3.6.
The consequences will therefore be measured on a 1 to 25 scale because the
consequence is simply the product of the probability and the impact, that is:
RF = p × i   (3.2)
For this particular example, the consequences are shown in Table 3.7. Since
Equation 3.2 is different from Equation 3.1, the consequences are, of course, cal-
culated differently and hence the results are different, as is evident from compar-
ing Table 3.5 to Table 3.7.
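The two risk factor formulas are easy to compare directly. The sketch below applies Equation 3.1 to the normalized data of Table 3.4 and Equation 3.2 to illustrative 1-to-5 scores; the scores are my own assumption for illustration, not the actual values of Table 3.6.

```python
# Risk factor sketch: Eq. 3.1 on Table 3.4's normalized data, and Eq. 3.2
# on illustrative 1-to-5 scores (hypothetical, not Table 3.6's values).

rises = ["0.1 m", "0.5 m", "1.0 m", "2.0 m"]
p_norm = [0.95, 0.70, 0.40, 0.10]   # probabilities from Table 3.4
i_norm = [0.05, 0.50, 0.80, 0.95]   # impacts from Table 3.4

print("Eq. 3.1: RF = p + i - p*i")
for rise, p, i in zip(rises, p_norm, i_norm):
    print(f"  {rise}: RF = {p + i - p * i:.3f}")

p_score = [5, 4, 3, 1]              # hypothetical 1-5 probability scores
i_score = [1, 4, 5, 5]              # hypothetical 1-5 impact scores

print("Eq. 3.2: RF = p * i (1-25 scale)")
for rise, p, i in zip(rises, p_score, i_score):
    print(f"  {rise}: RF = {p * i}")
```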
The immediate question that arises is naturally: Which method is most reliable?
Equation 3.1 or Equation 3.2? Should we focus on a 0.5-meter sea-level rise (the worst according to Table 3.7) or on the extreme versions, as evident from Table
3.5? Since we have no common baseline from which we can argue which equation
is the most logical, we cannot really say that one is better than the other. In many
ways, it seems more intuitive to focus on the extreme version (see Table 3.5 and
Equation 3.1), but on the other hand, it is difficult to understand why we should
accept Equation 3.1 over Equation 3.2. I tend to prefer Equation 3.2 and use a scale
of 1 to 5 because it is simple, easy to understand, and conceptually similar to the
EMV. Because that approach does not rely on normalized probability assessments
and impact statements, it is important to be consistent in the choice of scales.
This technique also works well in combination with monetary measures, but
when monetary measures are present, other techniques perform better most of the
time.
not be used in general. It should be noted, however, that in some areas, such as
hydrogeology/hydrology, Bayesian statistics are commonly used.
The third approach is to employ the theory of fuzzy numbers and fuzzy inter-
vals, which derives from fuzzy logic. This is a very flexible way of handling uncer-
tainty, and it has two great advantages:
1. Fuzzy numbers and fuzzy intervals can be used with or without hard data.
Bayesian statistics, on the other hand, need quite a large sample of hard
data. However, it is always preferable to have as much relevant hard data as
possible.
2. The use of fuzzy numbers and fuzzy intervals has very few, if any, restrictions. This makes them a perfect tool for numerical theories and applications.27
The good thing about fuzzy numbers and intervals is that they are an effective
way to model gut instinct. This is crucial because gut instinct is important in man-
agerial thinking (especially at the more senior levels), and experiments show that
processing knowledge about uncertainty categorically—that is, by means of ver-
bal expressions—imposes less mental workload on the decision-maker than
numerical processing. Furthermore, the NRC Governing Board on the Assessment
of Risk points out the “important responsibility not to use numbers, which convey
the impression of precision when the understanding of relationships is indeed less
secure. Thus, while quantitative risk assessment facilitates comparison, such com-
parison may be illusory or misleading if the use of precise numbers is unjustified.”
An equally compelling reason, however, is that probabilistic approaches are
based on counting, whereas possibilistic logic is based simply on comparing.
Comparing is easier for people because, as experiments show, “one needs to pres-
ent comparison scenarios that are located on the probability scale to evoke people’s
own feeling of risk.” This is particularly true for low-probability risks, and many
business risks are low-probability risks because the number of potential outcomes
of a decision is so vast. Moreover, as the literature points out, given the increasingly
complex and uncertain environment in which contemporary organizations operate,
there is a need to be able to “embrace complexity and learn how to handle uncer-
tainty.” Thus, a method in which one can blend rationality with irrationality should
be highly useful. In the words of a commercial lender:
. . . basic credit decisions are often based solely on considerations of the
heart (often biased), head (analytical) or gut instinct (experience). Using
these guidelines, I have generally found that if my heart overrules my head,
the loan has almost uniformly been a poor one. If my head overrules my gut
instinct, the resulting loan may sometimes be a poor one. In looking back at
poor loans, I should have followed my gut instinct more often.28
The basic problem of probability theory is that it is too precise. Yet, in his
acclaimed book Against the Gods: The Remarkable Story of Risk, Peter Bernstein
does not mention fuzzy logic at all. Evidently, fuzzy numbers and intervals have
many advantages over more conventional approaches; how this works is explained
in the following section. Also, as discussed below, there are virtually no disad-
vantages if fuzzy numbers and fuzzy intervals are approximated numerically.
Note that the next section is theoretical and can be skipped without losing the
thread of the book. However, it is helpful to understand the logic behind the use of
uncertainty distributions, which is discussed at the end of the section.
Figure 3.6 Fuzzy numbers versus fuzzy intervals. Source: Adapted from H. Bandemer
and W. Näther, Fuzzy Data Analysis. Dordrecht, The Netherlands: Kluwer Academic
Press, 1992, p. 341.
The membership function µÃ(x) gives the grade of membership (or degree of
truth) of x in Ã. In other words, the real crux of the fuzzy set lies in the definition
of µÃ(x), which describes to what extent (degree) a number is part of the mem-
bership space. In ordinary sets, it would be either zero or one, but in fuzzy sets, or
fuzzy logic, it can be anything in between as well. In fact, µÃ(x) could be a proba-
bility distribution or a set of probabilities, or maybe we should say a possibility
distribution and a set of possibilities. We can go both ways; we can probabilize the
fuzzy sets as well as fuzzify the events (in probability theory). The possibilities are
limited only by one’s imagination and a mix is unavoidable. In fact, the role pos-
sibility theory plays for fuzzy sets is analogous to the role mathematical
expectancy plays in probability theory. Thus, possibility theory and fuzzy logic
match each other in much the same way that probability theory and ordinary crisp
sets match each other.
Nahmias in the United States and Dubois and Prade in France have taken the
notion of fuzzy sets and developed it further into fuzzy intervals and fuzzy
numbers. Basically, fuzzy numbers provide the connection between the fuzzy set
theory and the confidence theory. Fuzzy numbers and intervals define a multi-
dimensional area of confidence, whereas confidence intervals in confidence the-
ory are one-dimensional, as in Figure 3.6. A confidence interval would be like a
slice in the x direction, thus missing the l direction totally. In other words, fuzzy
numbers and intervals are less strictly defined, are associated with fewer assump-
tions, and are therefore more general approaches.
However, a difference also exists between fuzzy numbers and fuzzy intervals,
because depending on the shape of the membership function, fuzzy numbers and
fuzzy intervals are defined differently (see Figure 3.6). A fuzzy number has to be
bounded and convex, while an interval is only bounded.
For example, a triangular fuzzy number on R (see Figure 3.7) is characterized
by its membership function µN(x): R → [0, 1] with:

\mu_N(x) =
\begin{cases}
\dfrac{x - l}{m - l} & \text{for } x \in [l, m] \\
\dfrac{x - n}{m - n} & \text{for } x \in [m, n] \\
0 & \text{otherwise}
\end{cases}
\qquad (3.4)
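As a concrete illustration, the following minimal Python sketch (mine, not the book's; the bounds are hypothetical illustration values) evaluates the membership function of Equation 3.4:

```python
# A minimal sketch of the triangular membership function in Equation 3.4.
# l, m, and n are the lower bound, mode, and upper bound of the fuzzy number.
def triangular_membership(x: float, l: float, m: float, n: float) -> float:
    """Degree of membership of x in the triangular fuzzy number (l, m, n)."""
    if l <= x <= m:
        return (x - l) / (m - l)   # rising left leg
    if m < x <= n:
        return (x - n) / (m - n)   # falling right leg
    return 0.0                     # outside the support

print(triangular_membership(3.5, l=2, m=5, n=10))  # 0.5, halfway up the left leg
```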
Just like ordinary numbers and intervals, fuzzy numbers and fuzzy intervals can
be added and multiplied. For a triangular number like the one in Figure 3.7, the
computations involved in these two definitions are rather laborious. However, on
the basis of the extension principle, the addition and multiplication of two fuzzy
numbers, Ñ1 and Ñ2, for example, are defined as follows:
Addition:
\mu_{\tilde{N}_1 \oplus \tilde{N}_2}(z)
= \sup_{\substack{(x, y) \,\in\, \mathbb{R}^2 \\ z = x + y}}
  \min\bigl(\mu_{\tilde{N}_1}(x),\, \mu_{\tilde{N}_2}(y)\bigr)
= \sup_{x \,\in\, \mathbb{R}}
  \min\bigl(\mu_{\tilde{N}_1}(x),\, \mu_{\tilde{N}_2}(z - x)\bigr)
\qquad (3.5)
Multiplication:
\mu_{\tilde{N}_1 \otimes \tilde{N}_2}(z)
= \sup_{\substack{(x, y) \,\in\, \mathbb{R}^2 \\ z = x \cdot y}}
  \min\bigl(\mu_{\tilde{N}_1}(x),\, \mu_{\tilde{N}_2}(y)\bigr)
= \sup_{x \,\in\, \mathbb{R}}
  \min\bigl(\mu_{\tilde{N}_1}(x),\, \mu_{\tilde{N}_2}(z / x)\bigr)
\qquad (3.6)
[Figure 3.7: a triangular fuzzy number, with membership µÑ(x) from 0 to 1 on the vertical axis and x on the horizontal axis, anchored at the points l, m, and n.]
To reduce the amount of work associated with the aforementioned addition and
multiplication procedures, some approximation methods have been designed. For
example, one could approximate the addition but solve the rest of the computa-
tions exactly.
A step further from an exact solution and/or from approximating the exact solu-
tion is to approximate the entire solution by employing numerical approximation
techniques. This is done by modeling the uncertainty as fuzzy numbers, as in
Figure 3.8, and consequently solving the model numerically by employing a
Monte Carlo simulation technique. This is the method I have worked with the
most; I recommend it for reasons explained in the next section.
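A minimal sketch of that method, assuming (hypothetically) that two uncertain inputs are modeled as triangular distributions and that the model is a simple sum, could look as follows in Python:

```python
# A minimal sketch, not the author's implementation: uncertain inputs are
# modeled as triangular uncertainty distributions and the model z = x + y
# is solved numerically with a Monte Carlo simulation.
import random

def simulate(trials: int = 10_000) -> list[float]:
    outcomes = []
    for _ in range(trials):
        x = random.triangular(2, 10, 5)   # (low, high, mode), hypothetical
        y = random.triangular(1, 4, 2)
        outcomes.append(x + y)            # the "model" being evaluated
    return outcomes

z = simulate()
print(min(z), sum(z) / len(z), max(z))    # spread of the output distribution
```

The output list approximates the uncertainty distribution of the result, from which percentiles and sensitivities can be read off.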
Since no difference exists between the various approaches of modeling uncertainty
when employing numerical approximation techniques, it is more useful simply to talk
about uncertainty distributions, given that we need not capture the difference in inter-
pretation between the various approaches. In the rest of the book, I therefore simply
talk about uncertainty distributions.
For example, some claim that “the computational burden precludes the use of
standard33 Monte Carlo analysis.” This statement must be interpreted in relation to
climate models that consist of hierarchical computer codes, because then
the Monte Carlo analyses must be performed at several levels and provide input to
each other, which is not the case in LCC or any management discipline. Clearly, a
Simple Random Sampling (SRS) Monte Carlo method will be time consuming in
such a situation. However, by using Latin Hypercube Sampling (LHS), the num-
ber of trials can be reduced drastically. The only problem with LHS is the diffi-
culty in computing the mean output response. It is therefore suggested to break up
the models into a hierarchy and run a simulation first to identify the most impor-
tant variables. Then run Monte Carlo simulations including only the most impor-
tant variables. However, this can be a dangerous approach because it neglects the
“insignificant” variables. Delta Airlines’ success over the past 20 years, for exam-
ple, can be attributed to the fact that they have done all the little things—that is,
the insignificant variables—right.34 Doing the big things right is probably more a
prerequisite for being in business than excelling in business. Excellence lies in the
details. It should be mentioned that a large number of sampling techniques are
used with Monte Carlo methods, each with advantages in certain situations, and
some variations of LHS have been created that perform better.
In any case, the choice should be clear since LCC and cost management prob-
lems with many variables (large m) are not nested. In fact, I have not been able to
find any numerical method that can even remotely compete with Monte Carlo
methods for management purposes. Another issue is that Monte Carlo methods
never go wrong. Thus, the only issue is what is fastest:
● To run a Monte Carlo simulation once and for all and be done, or
● To reduce the size of the problem, solve it, and then wonder whether you
missed something
The latter approach is a reductionist approach. Johnson and Kaplan argue, in their
book Relevance Lost: The Rise and Fall of Management Accounting, that the reduc-
tionist approach was one of the main reasons for the loss of relevance in manage-
ment accounting until the introduction of Activity-Based Costing (ABC).
Researchers simply made the problems so simple that they could employ their the-
ories; in the process they missed the point that reality is complex and cannot be
sliced up into one problem here and another problem there. Everything is connected.
In any case, I believe that because modern software enables distributed com-
puting over a web of computers, these simulation methods will take over more and
more of uncertainty analysis, sensitivity analysis, and optimization. Monte Carlo methods
are already being used in an increasing number of areas, such as economics, biol-
ogy, chemistry, and engineering. We have only seen the beginning.
\bar{\xi} = \frac{\xi_1 + \xi_2 + \dots + \xi_N}{N} \qquad (3.7)

Then, according to the law of large numbers (Bernoulli's or Chebyshev's
Theorem):

\bar{\xi} \approx M_{\xi} = x \qquad (3.8)
\sigma = \sqrt{\frac{p(1 - p)}{N}} \qquad (3.10)
However, since we assume we do not know p, the error term can only be esti-
mated statistically. In the general case, for every ε > 0 and every δ > 0, there exists
a number N of trials such that, with a probability greater than 1 − ε, the frequency
of occurrences of an event (L/N) will differ from the probability p of the occurrence
of this event by less than δ:

\left| \frac{L}{N} - p \right| < \delta \qquad (3.11)
The degree of certainty of the error is 1 − ε. By investigating the error term, we
see that the accuracy is highly dependent on the number of trials (N) performed in
the simulation. By simplifying Chebyshev's inequality, we can estimate δ as

\delta \sim \frac{1}{\sqrt{N}} \qquad (3.12)
We see that to improve an estimate tenfold, we need to run a hundred times
more trials! This equation holds for all cases. However, if we assume that the dis-
tribution of the event is approximately Gaussian (follows a normal distribution),
we get the following:
\delta \le \frac{3\sigma}{\sqrt{N}} \qquad (3.13)
Thus, we see that in most cases (Gaussian behavior is most common, and all other
behavior tends to approach the Gaussian behavior according to the Central Limit
Theorem), the error also depends on the variance of each independent test (trial).
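As a worked example with hypothetical numbers: if each trial has standard deviation σ = 1 and we want an error no larger than δ = 0.01, Equation 3.13 implies

N \ge \left(\frac{3\sigma}{\delta}\right)^{2} = \left(\frac{3 \cdot 1}{0.01}\right)^{2} = 90{,}000

trials, which illustrates how quickly the required number of trials grows with the desired accuracy.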
To sum up this discussion of error, Monte Carlo methods have three general key
features:
1. It is necessary to perform a large number of trials.
2. The errors are smoothed out; thus, the method is stable against noise. This
was particularly important before the digital computer arrived, because early
computers had random defects. However, the problem with round-off errors
is still present.
3. Monte Carlo methods use a comparatively small amount of memory to
store intermediate results, which makes them well suited to multidimensional
problems.
LHS Technique
The LHS strategy was developed, in part, to overcome some of the difficulties with
SRS. Roughly speaking, LHS involves dividing up the range of variables in sec-
tors of equal probability, sampling each sector using SRS, and finally combining
it all to form an LHS. The point is to ensure that the entire range of variables is
sampled properly to avoid leaving large ranges blank, as shown in Figure 3.4.
In more mathematical terms, the steps in LHS to generate a sample size N from
n variables X = [X1, X2, . . . , Xn] with the joint probability density function (pdf)
f(X) are:38
● The range of each variable is partitioned into N nonoverlapping intervals on
the basis of equal probability size 1/N. This step is illustrated in Figure 3.9.
● One value from each interval is selected and paired. The pairing may be ran-
dom (if the variables are independent) or may reproduce a correlation in the
input variables.
● The N values obtained for X1 are paired with the N values of X2. The N pairs
are (randomly) combined with the N values of X3 to form N triplets and so on,
until a set of N n-tuples is formed. This set of n-tuples is called a Latin
Hypercube sample.
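A minimal Python sketch of these steps, assuming independent variables on the unit interval (my illustration, not a production implementation):

```python
# A minimal Latin Hypercube Sampling sketch for independent uniform(0, 1)
# variables: partition each range into N equal-probability intervals, draw
# one value per interval, then shuffle each column to pair values randomly.
import random

def latin_hypercube(n_samples: int, n_vars: int) -> list[list[float]]:
    columns = []
    for _ in range(n_vars):
        # one uniform draw inside each of the N equal-probability strata
        col = [(i + random.random()) / n_samples for i in range(n_samples)]
        random.shuffle(col)  # random pairing across variables
        columns.append(col)
    # transpose: each row is one n-tuple of the Latin Hypercube sample
    return [list(row) for row in zip(*columns)]

for point in latin_hypercube(n_samples=6, n_vars=2):
    print(point)
```

For non-uniform variables, each stratified draw would be pushed through the inverse cumulative distribution function of the variable in question.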
[Figure 3.9 (labels): the distributions of two variables X1 and X2, each range partitioned into six equal-probability intervals, forming a grid of cells such as (1,1), (1,6), (6,1), and (6,6).]
Figure 3.9 Example of the first key step in LHS. Source: Adapted from A.M. Liebtrau
and M.J. Scott, “Strategies for Modeling the Uncertain Impacts of Climate Change,”
Journal of Policy Modeling 13(2), 1991, pp. 185—204.
As noted earlier, the LHS technique increases the accuracy of a Monte Carlo
simulation and reduces the number of trials necessary to achieve a specified accu-
racy drastically. A rule of thumb is that “the number of trials should be at least six
times the number of variables (emphasis added) to achieve satisfactory esti-
mates.”39 Some researchers have even devised a smart Monte Carlo scheme that
supposedly exceeds even the LHS for a low number of trials (10 to 200).
The biggest disadvantage of LHS is related to the mean output response since
computing its variance is difficult. But this level of sophistication and accuracy is
miles beyond what we need for cost management purposes and the like.
Before leaving the risk and uncertainty theme, I would like to provide a brief
overview of how risks are often managed. Unless we have this basic knowledge,
all our analyses of risk and uncertainty are wasted effort.
The responses are situational, and we must therefore select among several alter-
natives. To guide the selection process, a set of accepted criteria should be decided
on first. In general, the selection will always revolve around costs versus benefits,
unless no options exist, in which case we must accept the risks.
Once the response strategies are chosen, we must decide what to do in case
something does not go as planned. For moderate risks, management measures
should be prepared. These measures are simple action statements that specify the
activities necessary to handle an event. Major risks, however, are more demanding
due to the potential for large losses. They require, in addition to the management
measures, clear definitions of who is responsible, what the time frame is, what the
reporting requirements are, what resources are needed, and so on. This is often
referred to as a risk action schedule.
During this process, it is important to document what is being done and create
a concise report at the end that others can use later. This is important because the
substantial lead time of many projects, or the long life cycle of many prod-
ucts, makes it unlikely that the persons who worked on it during the design stage,
for example, will still be present or even remember what they did.
Implementation of the plans requires that risks are monitored and that new ones
are scanned for routinely and continuously so that deviations and problems can be
swiftly identified and dealt with. The frequency and the responsibility of the mon-
itoring depend on a variety of factors that must be decided on from case to case.
The true mastery of risk probably lies more in preparing for unidentified risks
than in managing identified risks.
NOTES
1. G.T. Friedlob and L.L.F. Schleifer, “Fuzzy Logic: Application for Audit Risk and
Uncertainty.” Managerial Auditing Journal 14 (3), 1999, pp. 127—135.
2. P.L. Bernstein, Against the Gods: The Remarkable Story of Risk. New York: John
Wiley & Sons, 1996, p. 383.
3. This is a topic of great philosophical discussion, but for this book it suffices to sub-
scribe to the most accepted interpretation. See Honderich's The Oxford Companion
to Philosophy, New York: Oxford University Press, 1995, p. 1009.
4. P.F. Drucker, Managing for Results: Economic Tasks and Risk-Taking Decisions.
New York: HarperInformation, 1986, p. 256.
5. F. Allvine, Marketing: Principles and Practices. Boston, MA: Irwin/McGraw-Hill,
1996.
6. P.F. Drucker, “The Theory of the Business.” Harvard Business Review, September-
October 1994.
7. “Barnevik’s Bounty,” The Economist, 362 (8262), 2002, p. 62.
8. A. Kaufmann, “Advances in Fuzzy Sets: An Overview.” Advances in Fuzzy Sets,
Possibility Theory, and Applications, ed. P.P. Wang. New York: Plenum Press, 1983.
9. See note 2.
10. Note that the term fuzzy set is the original term introduced by Zadeh, but that in the
1970s Richard C.T. Lee coined a new term, fuzzy logic, which is the same as fuzzy
sets. Personally, I prefer the term fuzzy logic and that is what is used in this book.
11. See note 2.
12. T. Honderich, ed. 1995. The Oxford Companion to Philosophy. New York: Oxford
University Press, p. 1009.
13. See, for example, Sambursky’s “On the Possible and Probable in Ancient Greece,”
Osiris 12, 1956, pp. 35—48.
14. See, for example, Cheeseman's “Probabilistic Versus Fuzzy Reasoning” in
Uncertainty in Artificial Intelligence, eds. L.N. Kanal and J.F. Lemmer, New York:
North-Holland, 1986, pp. 85—102.
15. L.A. Zadeh, “Is Probability Theory Sufficient for Dealing with Uncertainty in AI:
A Negative View,” Uncertainty in Artificial Intelligence, eds. L.N. Kanal and J.F.
Lemmer. New York: North-Holland, 1986, pp. 103—116.
16. W.E. Gilford et al., “Message Characteristics and Perceptions of Uncertainty by
Organizational Decision Makers.” Academy of Management Journal 22 (3), 1979,
pp. 458—481.
17. J. Gleick, Chaos. New York: Penguin, 1987.
18. E. Hisdal, “Infinite-valued Logic Based on Two-valued Logic and Probability, Part
1.1,” International Journal of Man-Machine Studies 25, 1986, pp. 89—111; Hisdal,
MOTIVATING EXAMPLE
This motivating example illustrates how ABC can be employed in environmental
management. Consider a hypothetical manufacturer, Chair, Inc., which produces two
types of furniture: unfinished (UFIN) and finished (FIN) chairs. The difference
between the chairs lies in the finish, the UFIN chair being the simplest. The produc-
tion volume is 1 million chairs per year, divided equally between the two product
lines. Management views the finished chairs as the more profitable of the two lines
because they have a higher traditional margin. Recently, management has also given seri-
ous thought to phasing out the UFIN chair line, but is that a good decision?
To find out, we first take a brief look at the value chain. It turns out that the man-
ufacturing process for the UFIN results in very little waste that has environmental
consequences: sawdust and residual glue. The manufacturing process for the FIN,
however, involves paints, stains, solvents, and other toxic adhesives in addition to
sawdust and residual glue. But the way the traditional costing system treats the
$5.4 million in environmental overhead does not reflect the fact that UFIN hardly
incurs any environmental costs. In fact, the environmental overhead cost is allo-
cated to the products by using direct labor as an allocation base despite the fact that
direct labor has nothing to do with the environmental overhead. This clearly indi-
cates a faulty cost management system, but to truly answer the question we must
analyze the costs for the products using both a volume-based approach and ABC.
Then it will become evident why management wants to phase out UFIN and
whether it is smart or not.
Chair, Inc.’s 1993 total overhead costs are $30 million. They are distributed as
shown in Table 4.1. Forty-five percent of the corporate/plant administration costs
are attributable to environmental costs, which are $5.4 million.
To keep the tables tidy, I use parentheses to denote which units I use. For exam-
ple, (MUSD) indicates that we are discussing numbers whose units are in millions
of U.S. dollars.
But we also need some information about the products. This is presented in
Table 4.2. We see that the FIN chairs are more costly in terms of raw materials but
also a little more labor intensive because of the finishing processes.
If we use a traditional volume-based costing system, the costs would be allo-
cated as shown in Tables 4.3 and 4.4. Since there seems to be a substantial differ-
ence in the way the environmental overhead and the other overhead costs are incurred
in the process, the overhead costs are divided into two groups, namely environ-
mental overhead (EOH) and other overhead (OOH). These two groups are, however, allocated
using the same allocation base, as shown in Tables 4.1–4.4.
We see that the volume-based cost system clearly supports the management
decision, but is everything all right? If we look carefully, we will see that the two
processes are treated in the same way and that all the overhead costs are just
lumped together. The volume-based costing system simply does not discover the
difference in activities in the two processes; it does not consider the fact that the
UFIN product line triggers few environmental costs, while the FIN product line is
responsible for triggering most of the environmental costs. This is reflected in the
cost accounting by the fact that the cost assignment mechanism is the same for
both EOH and OOH.
Convinced by this argument, the management of Chair, Inc. asks for an ABC
analysis to see what is really the case. We start by identifying and studying the fin-
ishing activities. We learn that these activities are performed equally unit by unit,
regardless of the amount of direct labor or anything else earlier on in the value
chain. Hence, we choose annual production as an allocation key for the environ-
mental overhead costs. Also, the UFIN chairs (together with the FIN chairs) only
incur sawdust and residual glue disposal costs, which amount to $30,000. The rest
of the environmental overhead cost, $5.37 million, is solely attributable to the FIN
chairs. This gives us the cost allocations shown in Tables 4.5 and 4.6.
With the same selling price as earlier, this gives a $4.69 profit for the UFIN
chairs and a $0.19 loss for the FIN chairs. The situation has changed dramatically,
and the management decision is, according to ABC, wrong.
Management also has another problem: the Clean Air Act (CAA), which
imposes a $1.0 million investment to comply with the law. Management, realizing
the mistakes of traditional costing, decides to shut the FIN product line down. That
decision is apparently correct using ABC principles, assuming that the loss in vol-
ume can be compensated for by an increase in UFIN chair volume and/or a cut in
the overhead costs. But that large investment will erode profits substantially. Why
not try to modify the technology, establish Environmental Management Systems,
and try to recycle and reuse?
The required environmental audit costs $1.67 million. This cost is carried by
the FIN chair line since it directly triggers the need for an audit. Furthermore, it
can be argued that the audit is part of an investment. Thus, the $1.67 million is
depreciated linearly over five years, which gives an annual cost of $334,000.
Instead of eliminating the FIN line, management must employ a solvent recov-
ery system that treats the used solvents and removes the paint by-products. This
system will reduce hazardous waste disposal volumes so that only residual wastes
are hauled away. The solvent recovery system will also reduce the expenditures for
raw materials by $2 per FIN chair. In other words, we have saved:
Table 4.7 ABC Cost Allocation (MUSD) after the First Five Years

Product   RM      DL     DC      EOH     OOH     TOH      TC       Sales   Profit
UFIN       5.00    4.00   9.00   0.015   10.64   10.655   19.655   22.00   2.345
FIN        6.50    5.25  11.75   3.385   13.96   17.345   29.095   32.00   2.905
Total     11.50    9.25  20.75   3.400   24.60   28.000   48.750   54.00   5.250

Table 4.8 ABC Cost Allocation ($/unit) after the First Five Years

Product   RM      DL     DC      EOH     OOH     TOH      TC       Sales   Profit
UFIN      10.00    8.00  18.00   0.03    21.28   21.31    39.31    44.00   4.69
FIN       13.00   10.50  23.50   6.77    27.92   34.69    58.19    64.00   5.81
At John Deere, for example, the existing volume-based costing system proved to be
inadequate for the new strategy because the overhead costs that were treated
inadequately represented about 27 percent of the total costs. Essentially, the vol-
ume-based costing system was telling management that low-volume, low-value-
added parts were more profitable than high-volume, high-value-added parts. After
implementing an ABC system, management realized that production should shift
away from low-volume, low-value-added parts to the profitable high-volume,
high-value-added parts. In fact, prior to the installation of ABC, one can claim to
a certain extent that John Deere had systematically encouraged the production of
unprofitable products and reduced or eliminated the production of the profitable
products. Because John Deere was in a sheltered competitive situation, however,
it did well financially.
In retrospect, some of the findings from the ABC analysis seem obvious. But
because common sense is not that common, sound cost management is needed. It
is better to stop the guesswork, forget the rules of thumb, and act on facts instead.
ACTIVITY-BASED COSTING
Activity-based costing (ABC) is a big topic, but in this book we discuss these
issues:
● The ABC concept and volume-based (traditional) concepts
● ABC compared to volume-based costing
● Cost-reduction opportunities using ABC
● The expansion of ABC into new areas
● Designing traditional ABC systems
● ABC and Total Quality Management (TQM)
ABC Concept
ABC is a costing system that is based on the formulations of resources, activities,
and cost objects (see Glossary), as shown in Figure 4.1. Resources are everything
the organization uses to operate, and the measure is cost. Activities are what are
actually being done in the organization. Groups of activities with certain com-
monalities are usually referred to as processes, activity centers, departments, and
so on, depending on the type of commonality. Cost objects are the objects, typically
products and customers, for which we want separate cost, revenue, and profit state-
ments. These elements interact as follows: The cost objects consume activities,
which in turn consume resources. Thus, ABC is a two-stage costing system.
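To make the two-stage mechanics concrete, here is a minimal Python sketch with entirely hypothetical resources, activities, drivers, and numbers:

```python
# A minimal two-stage ABC sketch (hypothetical data): resources are first
# traced to activities with resource drivers, then activity costs are
# traced to cost objects with activity drivers.
resources = {"salaries": 100_000, "machines": 60_000}

# Stage 1: share of each resource consumed by each activity (resource drivers)
resource_drivers = {
    "setup":    {"salaries": 0.30, "machines": 0.50},
    "assembly": {"salaries": 0.70, "machines": 0.50},
}
activity_cost = {
    act: sum(share * resources[res] for res, share in drivers.items())
    for act, drivers in resource_drivers.items()
}

# Stage 2: activity-driver units consumed by each cost object
driver_units = {"setup": {"P1": 1, "P2": 3}, "assembly": {"P1": 40, "P2": 60}}
product_cost = {"P1": 0.0, "P2": 0.0}
for act, cost in activity_cost.items():
    intensity = cost / sum(driver_units[act].values())  # cost per driver unit
    for product, units in driver_units[act].items():
        product_cost[product] += units * intensity

print(activity_cost)   # {'setup': 60000.0, 'assembly': 100000.0}
print(product_cost)    # {'P1': 55000.0, 'P2': 105000.0}
```

The point of the sketch is only the structure: costs flow from resources to activities via resource drivers, and from activities to cost objects via activity drivers.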
The volume-based (traditional or conventional, as often denoted in the litera-
ture) costing systems, however, are one-stage costing systems without any process
perspective, and hence the costs are allocated directly to the cost objects, usually
using highly volume-related allocation bases such as direct labor hours and
[Figure 4.1 (labels): General Ledger, resources, and activities, with volume-based allocation by direct labor hours and bill of material.]
machine hours. This difference is important to notice because it implies that ABC
is process oriented, whereas volume-based systems are not. Put differently, ABC
is based on what really happens, while volume-based costing systems are based
on the organizational structure and volume. Many implications of this are further
discussed in the “Activity-Based versus Volume-Based” section, but first we
should look a bit closer at the direction of the arrows.
We see that in ABC the arrows go upward, whereas the arrows in volume-based
costing systems go downward. This is to signify that ABC is resource oriented and
aims to direct capacity toward demand, which is estimated by upward aggregation,
hence the upward arrows. Volume-based costing systems, however, simply allocate
the capacity quite arbitrarily, in the sense that the allocation is not based on any
cause-and-effect relationships. Thus, the arrows go downward. The cost assignment
in ABC is, however, top-down (see Figure 4.2). To appreciate this difference, recall
from Chapter 2 that cost is a measure of resource consumption that relates to the
demand for jobs to be done, whereas expense is a measure of spending that inves-
tigates the capacity provided to do a job. The resource consumption perspective
counts because management must match capacity to demand and not the other way
around;3 that is what ABC is based on. Volume-based costing systems, in contrast,
are capacity oriented and in fact ignore the demand altogether.
In addition to the cost assignment view, ABC also offers a process view or a
second dimension (see Figure 4.2), and a so-called two-dimensional ABC concept
emerges.

[Figure 4.2 (labels): resources, cost drivers, activities, performance measures, and cost objects; the cost assignment view runs vertically and the process view horizontally.]

Figure 4.2 Two-dimensional ABC system. Source: Adapted from P.B.B. Turney,
Common Cents: The ABC Performance Breakthrough. Hillsboro, OR: Cost Technology,
1991, p. 322.

The second dimension, the process view, is used for noneconomic per-
formance measurement.
This type of ABC concept is also referred to as a second-generation ABC archi-
tecture since it is an improvement over the older version shown in Figure 4.1. The
core of the second-generation architecture is, however, the same as before, but the
process view is enhanced further.
The two views serve different purposes. The cost assignment view, which was
the original perspective of ABC, deals with issues like:
● Pricing
● Product mix
● Sourcing
● Product design
● Setting priorities for improvement efforts
The process view, on the other hand, concerns matters such as:
● Why work takes place and how much effort must be expended to carry out the
work, which is measured by cost drivers. Cost drivers include factors related
to the performance of prior activities in the value chain as well as factors
internal to the activity.
● How well the work was performed, which is measured by the performance
measures. This includes such issues as quality, time, and efficiency.4
Notice that the process view and its performance measures provide an obvious
link to Balanced Scorecards and similar performance measurement systems such
as the performance prism.
Volume-based costing systems like standard costing, on the other hand, only
produce product costs using a bill of materials (BOM) and then allocate the over-
head costs using direct labor hours,5 machine hours, or something similar as an
allocation base. The process view is completely ignored.
From this discussion, it is clear that ABC has upward cost management,
although the actual cost assignment is downward, as the ABC cost assignment
has its roots in the actual ongoing processes; volume-based costing systems have
downward cost management and a cost assignment according to a simple over-
head allocation. Furthermore, ABC is process oriented (due to the formulation
and dependency of activities), while volume-based costing systems are structure
oriented (since costs are classified according to the current structure of the
organization).
The difference between volume-based costing methods and ABC is therefore
like day and night, yet the source of these differences lies in a few basic assump-
tions:
● Volume-based costing system:
䡩 Products consume resources.
䡩 Costs are allocated using unit-level allocation bases.
● ABC:
䡩 Products consume activities; they do not directly use up resources.
䡩 Costs are traced using multilevel drivers.
Clearly, two major differences exist: resource consumption versus activity con-
sumption, and unit-level allocation bases versus multilevel drivers, which are dis-
cussed in the next section.
[Figure 4.3 (labels): allocation bases, cost per unit, profitability.]

In the setting in which they arose, cost management was
far less demanding. Thus, volume-based costing systems worked just fine when
they were developed; however, the context in which they were developed around
1900 no longer exists. Thus, it is time to rethink, and ABC is the result.
Figure 4.4 shows a schematic overview of how the cost assignment in ABC
works. For more details, see the “ABC Example and Case Study” section. ABC is
often combined with other activity-related approaches under the wider umbrella
of Activity-Based Management (ABM).
If we compare ABC, as shown in Figure 4.4, to the volume-based costing sys-
tems shown in Figure 4.3, we see that many aspects are opposite. For example, the
various functions within a company are not found anywhere in Figure 4.4. That is
because ABC is cross-functional and process oriented. Furthermore, the definition
of activities, the process-orientation, is the central hub between ABC as a cost
assignment tool and as a cost planning/control tool (see Figure 4.2). This is com-
pletely missing in Figure 4.3.

[Figure 4.4 (labels): overhead costs, activities, activity-driver consumption intensity, cost per unit, profitability.]

Equally important is the fact that while ABC pro-
vides an important link to nonfinancial performance measurement during cost
planning/control, traditional approaches totally ignore nonfinancial aspects. The
result is that ABC provides a link to, for example, quality management and a
Balanced Scorecard that is completely missing in volume-based costing systems.
This is discussed more in the “Activity-Based Costing and Total Quality
Management” section.
However, all the differences between Figure 4.3 and Figure 4.4 are attributable
to the two conceptual differences previously mentioned: (1) resource consumption
versus activity consumption and (2) unit-level allocation bases versus multilevel
activity and resource drivers. These are discussed in detail next.
It should be noted that the terms resource driver and activity driver are used in
the second-generation ABC architecture (see Figure 4.2). In the first-generation
ABC architecture, the terms used were first-stage cost driver and second-stage
cost driver, respectively, implying that ABC systems consist of two
stages. But as ABC developed, it was recognized that the term cost driver could
be used more accurately to describe what actually drives the costs of an activity.
In this book, the term driver is used to encapsulate all three types of drivers in the
second-generation ABC architecture.
Just as ABC alone is not a guaranteed recipe for success, other practices also
require decision support concerning the cost perspective. For example, as men-
tioned in Chapter 2, many Baldrige Award winners have encountered severe finan-
cial difficulties; thus, focusing on quality and continuous improvement is not
enough by itself either. What is needed is a balanced approach, which can be
achieved by realizing that ABC is process oriented in a wide sense and that it
requires a paradigm shift to process thinking to be really successful. With process
thinking, I do not mean the superficial act of implementing nonfinancial perform-
ance measures, which is important in its own right, but rather a much more pro-
found process-orientation toward everything that happens within the four walls of
the organization. This should include measures such as:
● Separating13 external financial reporting systems from internal cost manage-
ment systems and making the latter process oriented. A process-oriented cost
management system will have all its costs assigned to processes. Today this
is not easy because workers are organized according to departments and
machines are often not treated as single objects, but rather broken down into
individual components based on, for example, depreciation time, point of
purchase, and so on.
● Extensive use of cross-functional teams and multidisciplinary teams.
● Implementing process-oriented information and quality management systems.
● Thinking about continuous improvement in everything that is done, and not
just in relation to quality management.
1. Unit-level drivers, which are triggered every time a unit of a product is pro-
duced, such as drilling a hole and painting a surface.
[Figure 4.5 (labels): product cost per feet (from 0.50 to 2.00) under volume-based costing versus ABC, for products A, B, C, and D with 1.0, 2.5, 5.0, and 10.0 miles per batch, respectively.]
Figure 4.5 The effect of batch size diversity for volume-based costing versus ABC.
Figure 4.6 Examples of complexity that drives costs. Source: Adapted from R.G. Eiler
and C. Ball, “Implementing Activity-Based Costing,” Handbook of Cost Management,
ed., B.J. Brinker. Boston, MA: Warren, Gorham & Lamont, 1997, pp. B2.1—B2.33.
[Figure (labels): actual performance versus performance measured by the model. Sources of distortion: (1) inherent uncertainty in design, (2) inevitable distortion, (3) assessments affect what is assessed. Deficiencies left unidentified by the model are due to lack of data or to a method incapable of handling the problem; reliability problems arise from situational factors affecting the model or from the method not being applied correctly.]
When the situation is complex (many products, multiple sources of bias, and so on), it is impossible to find this bias
before it is too late, that is, after a decision is made. In other words, it is impossible
to predict either the direction or the magnitude of this bias in volume-based costing
systems because the underlying source of cost formation is ignored. Thus, volume-
based costing systems are inappropriate as cost management tools in most situations.
ABC, on the other hand, has superb characteristics and is an excellent tool in
cost reduction studies. Also, because ABC is process oriented, we can quite reli-
ably identify the direction of the cost distortion. Identifying the magnitude is more
difficult, but not impossible.
Source: Adapted from M.D. Shields and S.M. Young, “Managing Product Life Cycle Costs: An
Organizational Model,” Journal of Cost Management 5(3) Fall 1991, pp. 39—51.
Source: Adapted from M. Partridge and L. Perren, “An Integrated Framework for Activity-Based
Decision Making,” Management Decision 36(9), 1998, pp. 580—588.
[Figure (labels): the expansion of ABC into new areas, including benchmarking, education, process redesign, cost reduction, activity mapping, sector/customer profitability, budgeting, transfer pricing, and product profitability, and its application in industries such as public utilities, distribution, energy, financial services, and manufacturing.]
Also, the concept of Economic Profit (EP), which dates from the 1920s or possibly
earlier and which Stern Stewart & Co. popularized under the term Economic Value
Added (EVA), can quite easily be incorporated into ABC.
The introduction of EP into an ABC framework has the advantage of further
broadening the scope from operating costs and profits to operating costs and prof-
its, including the cost of capital. The reason is that some products, customers, and
processes may incur a disproportional cost of capital (in both positive and nega-
tive directions) and hence alter the picture provided by standard ABC to some
degree. Also, EP correlates well with changes in stock prices, according to Stern
Stewart & Co., and this provides an added benefit for publicly traded and listed
companies.
How to include EP in ABC is beyond the scope of this chapter. It will, however,
be explained in Chapter 5 because this extension is particularly important in the
LCC context as LCC often is applied on large capital goods investments. Here it
suffices to know that the extension is quite straightforward. It rests on two critical
points:
1. The identification and usage of capital drivers whose purpose is to trace the
cost of capital. The term capital driver is analogous to resource drivers,
activity drivers, and cost drivers found in standard ABC.
2. The computation of the cost of capital. The most common way of calculat-
ing the cost of capital is to use the so-called Weighted Average Cost of Capital
(WACC) method20 and multiply that by the net worth of assets. The estimated
cost of capital is a pure calculation; that is, it does not appear in any books.
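As a hedged illustration with hypothetical numbers: with a WACC of 10 percent and net assets of $2.0 million tied to a cost object whose operating profit is $250,000,

\text{cost of capital} = 0.10 \times \$2{,}000{,}000 = \$200{,}000, \qquad \text{EP} = \$250{,}000 - \$200{,}000 = \$50{,}000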
Source: Adapted from R.S. Player, “The Top Ten Things That Can Go Wrong with an ABM Project (And How to Avoid Them),” As Easy As ABC, Summer
1993, pp. 1—2.
[Figure 4.9 (labels): cost efficiency, speed, dependability, and quality.]
Figure 4.9 Model of cost reduction. Source: Adapted from K. Ferdows and A.
DeMeyer, “Lasting Improvements in Manufacturing Performance: In Search of a New
Theory,” Journal of Operations Management, Vol. 9, pp. 168—184. Copyright 1991,
reprinted with permission from Elsevier Science.
ABC Example
Suppose we were invited by company XYZ to estimate the profitability of its four
products, P1, P2, P3, and P4. We are told that the products are produced on the
same equipment and use similar processes, but that they differ in physical size and
in production volumes, as indicated in Table 4.13. From this table, we also see the
various cost information available.
Furthermore, three types of information are available for the costs related to
direct inputs: (1) material costs, (2) direct labor hours, and (3) machine hours.
When it comes to the costs related to setting up equipment and planning produc-
tion, XYZ can also provide a threefold set of information: (1) the number of setups,
                         Setup-Related Costs                  Costs Related
              Number       Number       Times       to Number of     Total Overhead
Product       of Setups    of Orders    Handled     Parts            Costs ($)
P1            1            1            1           1
P2            3            3            3           1
P3            1            1            1           1
P4            3            3            3           1
Amounts
consumed      8            8            8           4
Overhead ($)  960          1,000        200         2,000
Aggregated
overhead ($)  —            —            2,160       2,000            9,924
(2) the number of orders, and (3) the times handled. The final cost category, the
costs related to the number of parts, has no further information available. This cost
category includes costs of handling the finished products, sending them to storage,
and so on.
Look at the way a volume-based costing system would report these costs. In
Table 4.14, the results are shown. As usual, direct labor has been chosen as an allo-
cation base, but in this example it would not make any difference if another one
was used. That is because a 1:10:3:30 ratio exists for the material costs, direct labor
hours, and machine hours for all the products. What makes the difference here is
that the volume-based approach considers only direct-input allocation bases, even
for the other overhead costs.
The overhead rates are found by dividing the total overhead cost ($9,924) by
the total number of direct labor hours (220 hours). The allocated costs are then
found by multiplying the direct labor hours by the overhead rate for each product,
as shown in Table 4.14.
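Written out, the computation is

\text{overhead rate} = \frac{\$9{,}924}{220\ \text{direct labor hours}} \approx \$45.11\ \text{per direct labor hour},

and each product's allocated overhead is its direct labor hours multiplied by this rate.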
We see that the volume-based costing system assesses the unit costs of P1 and
P2 to be equal and roughly three times lower than the unit costs for P3 and P4,
which are also estimated to cost the same. If we think carefully about those find-
ings, we realize that it does not sound very convincing because how can P1 and
P2, for example, cost the same when their needs for setups are 1:3, respectively?
Something was obviously not right, and to provide an alternative cost assessment
the management of XYZ asked us to implement ABC because they had heard
about it and thought that it sounded interesting.
To employ ABC, we first need to break up the work processes into smaller
units: activities. It turns out that many activities must be considered, but their prod-
uct cost assignment is governed by three distinct activity drivers, as indicated in
Table 4.13. Based on this insight, we divide the overhead costs into three cost
pools, each associated with an activity driver, as shown in Table 4.15. To trace the
costs of these cost pools, we use direct labor hours, the number of setups, and the
number of parts as activity drivers, as shown in Table 4.15. Then we take the over-
head costs associated with each activity driver and divide them by the total number
of activity driver units. The result is the consumption intensity, which is the unit
price of a driver unit. We see, for example, that a setup costs $270.
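Spelling this out with the setup pool: the aggregated setup-related overhead is $2,160 and eight setups are consumed in total, so

\text{consumption intensity} = \frac{\$2{,}160}{8\ \text{setups}} = \$270\ \text{per setup}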
After calculating the consumption intensities, we multiply them by the various
activity driver values for each product and for each cost pool, as shown in Tables 4.16
through 4.18, producing the traced costs for each product for each cost pool.
We sum up all the traced costs in the last column of Table 4.16 through Table
4.18 for each product, yielding the total costs traced in the second-to-last column
in Table 4.19. Finally, we just divide the total costs traced by the number of units
to compute the unit costs.
In Table 4.20, the difference between the volume-based approach and ABC is
presented. Clearly, the difference is substantial, up to 300 percent. Imagine the
consequences of XYZ pushing P1 at a $40-per-unit sales price as a part of a mar-
keting strategy, for example. The company risks digging itself into the red
while believing it is doing the right thing, all because the existing cost accounting
system tells the company that it is making more than $17 per product unit sold.
Clearly, making decisions based on a volume-based costing system is unwise,
because volume-based costing systems grossly mistreat overhead costs.
You may think that such results are rare and that they occurred in this example by
design, but the fact is that companies often lose money on roughly 80 percent of
their products. If we include the costs of capital, the numbers are even worse. It
is not an exaggeration to claim that companies survive despite their volume-based
cost accounting systems. Think about all the times you have heard about compa-
nies growing their market shares while their profits remain the same or even
decline. The good news is that this represents a major opportunity for those com-
panies that understand their costs, because they can focus on capturing profitable
market shares at a minimum expense while their competitors can have the rest.
ABC at WagonHo!
Implementing an ABC system can be done in many ways due to a variety of fac-
tors, such as budget, cost views (see Table 4.12), organizational complexity (see
Figure 4.6), and actual decision support needs. This case illustrates one of the most
common approaches, which is to use ABC as a single analysis for strategic usage.
WagonHo! is in such bad shape that for them it is most important to get the big
picture (strategy) right first before beginning to use ABC on a more continuous
basis.
This approach is also usually smart no matter what, because it is very difficult
—if not almost impossible—to go directly from traditional cost accounting prac-
tices to an integrated ABM system. To use ABC analyses for strategic purposes on
an ad hoc basis is therefore good training to prepare the organization for the full-
fledged version of ABM sometime later if they want that. Furthermore, it is a far
less costly approach since it takes time for a company to get used to managing and
acting according to ABC thinking.
Company Overview
WagonHo! is a toy manufacturer located in the computers at the Center for
Manufacturing Information Technology (CMIT) in Atlanta, Georgia. In other
words, WagonHo! is not a real company. CMIT uses it as a simulation company
in which local companies can test out the latest information technology for man-
ufacturing. However, CMIT does have a model factory built in the laboratory
where it actually produces products.
The company experienced a $1.3 million loss last year, which is a highly unsat-
isfactory result, and the management is, of course, in dire need of decision support
to turn this situation around. It operates in a somewhat price-sensitive niche mar-
ket, so increasing prices are not the first thing to consider, but the demand is fairly
good. Other problems are also pending, such as expected higher energy costs due
to an energy shortage.
WagonHo! has 56 employees organized mainly in six production teams and
indirect people. More specifically, the CEO is Samuel P. Stone and the plant man-
ager is Mary Ann Chesnutt. The six production teams consist of a supervisor and
six employees. Besides these production teams, the remaining 14 employees are
indirect. The supervisors of the six teams are also considered indirect. In other
words, 20 indirect people are employed in total.
The strategy is to target the high-price/quality market for children from afflu-
ent families. All the products are made of similar materials, that is, mainly plas-
tics, steel screws, and wood. WagonHo! produces only three products.
The high-end product is the CW1000 wagon, referred to as CW1000 for sim-
plicity. This is a wagon with four wheels and front steering (see Figure 4.10). The
sales price of this product is $120; corrected for 12 percent sales rebates and 2 per-
cent provisions, we get $103.20. Current production is 5,000 units per year. The
simplest product is the CW4000 wheelbarrow. This is single-wheeled, without
steering. It sells for $100, yielding a net sales price of $86. Current production is
3,000 units per year. The CW7000 garden cart is the middle product. This is a two-
wheeled cart, also without steering, that sells for $105, giving a net sales price of
$90.30. Current production is 2,000 units per year.

[Figure 4.10: CW1000 Wagon]
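The net sales prices follow directly from deducting the 12 percent rebates and 2 percent provisions from the list prices:

\$120 \times (1 - 0.14) = \$103.20, \qquad \$100 \times 0.86 = \$86.00, \qquad \$105 \times 0.86 = \$90.30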
The shop floor is configured as shown in Figure 4.11 with six lathes (L), six
milling machines (M), six subassembly (SA) stations, one kitting area, six final
assembly and inspection (FA) stations, and one central conveyer. With this produc-
tion line, the estimated aggregated production cycle times are 462, 247, and 259
minutes for the CW1000, CW4000, and CW7000, respectively. The cycle times are
obviously long and clearly need improvement, but how? Also, an overall loss for
the company is not a sustainable situation either, so that must also be improved.
WagonHo! is currently using Contribution Margin Costing, a volume-based
costing system. But past experience has convinced management to try using ABC
to give them better decision support.

[Figure 4.11 (labels): the original shop floor layout, with lathes (L), mills (M), subassembly (SA) stations, final assembly and inspection (FA) stations, bar code scanners, the central conveyer, finished goods storage, and a path to the offices.]

Table 4.21 presents the hourly labor costs of
the workers, and by multiplying the hourly labor cost by the estimated times, we
get portions of the BOM, as shown in Table 4.22, which presents the BOM for the
CW4000. The unit time in Table 4.22 is an estimated production time (h/unit).
The BOM should be read from bottom to top. Thus, for the CW4000, the total
direct cost is $46.86 per unit while the unit production time is 4.07 hours per unit.
We can also see the various subassembly numbers. For example, the product num-
ber for the bed is 4100 and its part number is CW1373.
Table 4.23 presents a summary of the BOM for the products, along with the
aggregated production times, which currently serve as overhead cost allocation bases
in the company’s volume-based costing system. The aggregated production times are
found by multiplying the unit time for each product by the annual production of each
product. These times will most likely also prove handy for the ABC implementation.
ABC Implementation
Since this chapter is about ABC in general, it is best to present here a common way
of implementing an ABC model rather than presenting the way that provides the
most decision support. It should be noted that this implementation is tailored to
MS Excel and not to ABC software. Personally, I prefer MS Excel for strategic
ABC analyses because such analyses are ad hoc and may require manual adjust-
ments to give proper decision support; hence, the argument of the organization
needing to maintain the model does not apply. Also, MS Excel provides a flexible
implementation tool so that, for example, EP can easily be incorporated.
Furthermore, in some cases executives have probing questions concerning the
implementation and its results, and then an open ABC implementation is much
more explanatory and trustworthy than the black boxes of software.
An MS Excel–based, strategic ABC analysis approach would typically consist
of nine steps, some of which can be performed concurrently:
1. Create ownership of the ABC analysis in the organization and define the
scope of analysis.
2. Review current cost accounting structures and clean out overhead costs from
the direct costs if needed.
3. Get cost data from the General Ledger and aggregate them into cost cate-
gories.
4. Define the activity library.
5. Design the questionnaire and hold group interviews.
6. Calculate a Bill of Activities (BOA).
7. Define activity drivers and gather data. Also gather other relevant data such
as sales data.
8. Aggregate the cost of activities that have the same activity drivers into sep-
arate cost pools.
9. Calculate product/customer costs and profitability.
Normally, I prefer that allocations govern less than 10 percent of the total overhead
costs. In this case, it is 2.7 percent, which is very satisfactory.
Concerning the conveyer system, the costs are distributed equally among the
A12x activities, that is, between activities A121 through A125. This is due to the
fact that all A12x activities use the conveyer system equally because they are all
interrelated via the production flow on the conveyer.
Direct denotes an even stronger relationship between the resource and the activity. What
this means is that, for example, the kitting equipment has a one-to-one relation-
ship with the activity A121. This is the ideal situation of an ABC model because
it reduces the distortion to zero, provided that the activities are defined in enough
detail to provide any insight.
Note that in Table 4.28, BOM costs are excluded. That is because BOM costs are
direct costs and therefore are simply included only at the end of the ABC analysis.
Given these resource driver definitions and the data as shown in Table 4.29, we can
calculate the cost of the activities as found at the bottom of the table. One of the two
$1,446,370 figures is a control summation and should therefore be equal to the other
(to the left). Such a check is necessary when using MS Excel or any other open model.
Table 4.30 provides a summary of the BOA, listing all the activities. These
results can be plotted in a graph, as shown in Figure 4.12. This figure can be help-
ful in identifying abnormal process costs or in identifying what activities are most
important to either eliminate or reduce. We see, for example, that most of the over-
head costs are in fact related to the core production activities (A12x), which may
indicate that many problems must be solved or that the production layout requires
a lot of follow-up.
The interesting question is: Can production costs be reduced significantly by
streamlining the production? Because the cycle times are so long, it may seem that
this is indeed the case.
[Figure 4.12: activity costs in $/year (scale 0 to 350,000), plotted for activities A122, A112, A121, A124, A125, A44, A42, A222, A123, A21, A221, A43, A41, A111, A113, and A31.]
The activity drivers that probably most accurately reflect the true cause-and-
effect relationships between products and activities are mill labor hours, lathe labor
hours, and assembly labor hours. This is due to the fact that the production is man-
ual with no usage of automation or batch production.
Other activity drivers also are used multiple times. To ease the analysis, we can
aggregate all the costs that share the same activity drivers into cost pools, which
is done next.
Aggregating costs into pools in this way reduces complexity. Unfortunately, it may also reduce the accuracy of the model because it is
tempting to create larger cost pools than warranted to save work and effort. Again,
it is a matter of cost versus usefulness.
Results
The results are normally not presented in the manner shown in Table 4.33. Often,
the results are presented as shown in Table 4.34, where the numbers are normal-
ized by the sales volume, providing a Return on Sales (ROS) perspective. That is,
we take, for example, the Margin 1 (sales ⫺ direct cost) of the CW1000 ($168,550)
and divide it by the sales ($600,000), which produces the number 28.1 percent. A
Margin 1 of 28.1 percent for the CW1000 essentially indicates that for every dol-
lar sold, 28.1 cents is generated as a surplus toward covering rebates and overhead
costs. Unfortunately, we see that the profitability of the CW1000 is ⫺126.6 per-
cent, or in other words, for every dollar sold, WagonHo! loses $1.266. In fact, all
the products are highly unprofitable.
To improve the situation for WagonHo! we must investigate the results in Table
4.34 more thoroughly. We see, for example, that Margin 1 is quite low for the
CW1000, which may indicate that the CW1000 either is priced too low or has
cycle times that are too long. In this case, probably both are the case, particularly
the latter.
We also see that all products generate too many overhead costs (over 100 per-
cent for all products). This may indicate that simply too many overhead resources
are included compared to the production volume and/or that the production vol-
ume is too low. Since laying people off is not a popular measure, the management
at WagonHo! would like to pursue increasing the production volumes significantly,
but as mentioned earlier, the cycle times are very long. In other words, the man-
ufacturing system must be reconfigured to either lower direct costs and/or reduce
cycle times.
From Table 4.31, we see that the lathes in Activity A123 generate little over-
head cost, yet WagonHo! has as many lathes as mills, for example. This seems to
indicate that too many lathes exist (too much capacity). An audit revealed that five
of the lathes could be sold without any further consequences.
Conversely, for Activity A122 (milling), excessive usage of overhead resources
is taking place. This may indicate that the milling operations have many problems,
so we decide to do something about the mills. After conducting an audit of activ-
ity A122, it became evident that if we had a saw that could cut some parts rapidly
and if the workers got to run extra mills, they could significantly increase output.
It also became clear that introducing cell manufacturing would reduce the prob-
ability of reworking because all the workers would understand the entire manu-
facturing operation and not just their part of it. In addition, the subassembly step
should include quality control, rather than the final assembly step as before. That
is, Activities A124 and A125 are merged. Furthermore, cell manufacturing will
reduce activity A121 significantly. The freed resources can be used in the increased
milling operations. After implementing these suggestions, the cycle time is
reduced by over 50 percent for all products, as shown in Table 4.35, and the shop
floor layout is changed, as shown in Figure 4.13.

Table 4.34 Results normalized by sales (ROS view); percentages are of each product's sales:

Product ID  Name       Dir. Cost  Sales      Margin 1          Rebates  Margin 2          ABC OH Costs        Total Costs         Profitability
Total                  694,900    1,110,000  415,100 (37.4%)   133,200  281,900 (25.4%)   1,446,370 (130.3%)  2,274,470 (204.9%)  -1,164,470 (-104.9%)
CW1000      Wagon      431,450    600,000    168,550 (28.1%)   72,000   96,550 (16.1%)    856,189 (142.7%)    1,359,639 (226.6%)  -759,639 (-126.6%)
CW4000      Wheelbr.   149,010    300,000    150,990 (50.3%)   36,000   114,990 (38.3%)   337,866 (112.6%)    522,876 (174.3%)    -222,876 (-74.3%)
CW7000      Garden C.  114,440    210,000    95,560 (45.5%)    25,200   70,360 (33.5%)    252,315 (120.2%)    391,955 (186.6%)    -181,955 (-86.6%)
However, we also suggest increasing the price for the CW1000 significantly to
cut demand for it. The reason is that the CW1000 actually costs a lot more to pro-
duce than was reflected before in its traditional cost accounting system, and this
capacity can be more profitably employed for the two other products. For exam-
ple, the CW1000 is the only product that consumes the A123 activity (lathes). On
top of that, the CW1000 is basically a more complex product. WagonHo! there-
fore increases the price of the CW1000 from $120 to $225, and sales consequently
fall by 50 percent.
After making all these changes, we reestimated the cycle times, and they fell
substantially. The new improved cycle times (see Table 4.35) will hopefully
make it possible to produce the products more resource-efficiently than before. A
secondary effect is that the production volumes can be increased, which will pro-
vide an economies-of-scale effect for all the products.
What the management of WagonHo! is particularly proud of is that all these
changes were made without firing a single person. Only a couple of early retire-
ments were needed.
[Figure 4.13: The new shop floor layout. Six manufacturing cells (Cells 1 through 6),
each with a mill (M) and a subassembly station (SA), feed final assembly (FA)
stations via a conveyer; the layout also includes a saw (S), extra mills, bar code
scanners, and finished goods storage.]
Discussion
As mentioned at the beginning of this section, this case is a functional one. Most
real cases are much more complex, often having hundreds of products and cus-
tomers. However, that is why this case is so suited for illustrative purposes; we can
concentrate on the essential parts. Also, the processes in WagonHo! were apparently
poorly configured, as some of them had quite massive amounts of surplus capac-
ity. Most companies would manage to solve some of the problems of such gross
production resource misallocation as shown in the case here. The point, however,
is that they would have little, if any, aid from their cost management system. Here,
in contrast, we see that with ABC the production managers in this case will get sig-
nificant decision-support. ABC is basically an excellent attention-directing tool, and
that is what I hope you will learn from this case.
In this case, we focused a lot on production issues. That is due to the available
data from CMIT, which after all is preoccupied with manufacturing information
technology. However, even in this case, we can provide marketing with useful
information regarding pricing issues. In fact, in real-life situations, marketing peo-
ple can benefit quite rapidly from an ABC analysis due to the immediate relation-
ship between total costs and pricing. However, ABC analyses have the greatest
impact on manufacturing and other costly processes, but the road toward harvest-
ing results is longer and requires more diligent work.
Finally, it is important to keep in mind the cost view of the ABC analysis, as
exemplified in Table 4.12. In this case, a strategic approach was chosen. This is the
approach I would recommend for companies that have just started on the journey
toward modern cost management practices for several reasons.
First, the company should evaluate its strategies, because these strategies
have an enormous impact on long-term profitability. Second, such ad hoc analy-
ses provide more than enough decision support for the first five to six years. Third, a
more subtle reason is that one cannot jumpstart the organizational learning
process; it is important to take it step by step. Finally, due to the initial uncertain-
ties of the organization’s execution capabilities and the costs of implementation, a
simple, flexible, cost-effective approach should be chosen to reduce the costs
committed and to increase the organizational learning. ABC is, after all, a very logical
approach, and companies that want to explore it should not be hindered by black
boxes and unnecessary committed costs.
Later, when a company feels comfortable with ABC, it can begin to use ABC for
financial cost views (refer to Table 4.12). This will require the use of ABC software
and managers who understand ABC well enough not to let the software dictate
their decisions. After all, who would trust major decisions to software engineers who have no
stake in the company whatsoever and hardly understand ABC themselves?
The final stage in the learning process toward world-class cost management is
to use ABC for operational purposes (see Table 4.12). In fact, we no longer talk
about ABC, but ABM and nonfinancial performance measurements, including
Balanced Scorecard, Activity-Based Budgeting (ABB), and so on. Such solutions
must be embedded in an Enterprise Resource Planning (ERP) system or a similar
system, but they require excellent managers who understand what they are doing.
Otherwise, the company runs the risk of “delivering distorted information every
single day”27 to managers who do not understand what they see.
From this discussion, it follows that although ABC has great potential for use
in all companies, it is important to implement it step by step to ensure sufficient
organizational learning. Using ABC for LCC purposes, as discussed in this book,
does not raise the risks of integrated ABM systems because LCC is ad hoc in
nature, at least for the time being. Activity-Based LCC therefore is a sound next
step up from traditional LCC and the hindsight of cost management in general. Its
theory is discussed in Chapter 5.
Finally, I would like to thank Research Engineer Greg Wiles at CMIT in Atlanta
for his cooperation and the data he provided, which made this case study possible.
My impression is that shortcutting this learning process is more often the rule than
the exception. It is no wonder that some companies become disappointed, and
claim that ABC is just another passing fad.
In my opinion, ABC is no passing fad. ABC represents the return to basics in
the sense that processes, measures, and causalities determine the cost allocation
and not some arbitrary allocation bases that are chosen more out of habit than any-
thing else. Furthermore, ABC is a great cost accounting tool, but its true value lies
in its learning potential and focus on causality. In this sense, the Balanced
Scorecard and similar approaches in which causality is one of the main points can
be seen as an offspring of ABC.
Cost management has invariably been the art of hindsight, of cutting costs after
they are incurred. ABC as an organizational learning tool, however, opens the door
to proactive cost thinking and the identification of cost causalities. The next step
should therefore be to turn the entire cost management process around and focus
more on the cost causalities and less on the actual cost estimates, because it is only
by understanding the cause-and-effect relationships among cost objects, processes,
and resources that we can truly manage costs. Once the cost causalities are understood,
cost management can become forecasting oriented and proactive, and that is what
Activity-Based LCC is all about.
NOTES
1. The example is adapted from P.L. Brooks, L.J. Davidson, and J.H. Palamides,
“Environmental Compliance: You Better Know Your ABC’s,” Occupational
Hazards, February 1993, pp. 41—46.
2. The full case description is found in R. Cooper, “Activity-Based Costing: Theory
and Practice,” Handbook of Cost Management, ed. B.J. Brinker. Boston, MA:
Warren, Gorham & Lamont, 1997, pp. B1.1—B1.33.
3. For more details on this matter, see R. Cooper, “Explicating the Logic of ABC,”
Management Accounting (UK), November 1990, pp. 58—60.
4. P.B.B. Turney, “What an Activity-Based Cost Model Looks Like.” Journal of Cost
Management, Winter 1992, pp. 54—60.
5. According to a survey in 1987, 94 percent of companies used labor hours to allo-
cate overhead costs.
6. For more details, see J.W. Hardy and E.D. Hubbard, “ABC: Revisiting the Basics,”
CMA Magazine, November 1992, pp. 24—28.
7. A thorough discussion on this is provided in H.T. Johnson and R.S. Kaplan,
Relevance Lost: The Rise and Fall of Management Accounting, Boston, MA:
Harvard Business School Press, 1987, p. 269.
8. According to E.M. Goldratt in “Cost Accounting: Number One Enemy of
Productivity,” Proceedings of the APICS Conference, 1983, pp. 433—435.
27. R. Cooper and R.S. Kaplan, “The Promise—and Peril—of Integrated Cost
Systems.” Harvard Business Review, July/August 1998, pp. 109—119.
28. M. Driver, “Activity-Based Costing: A Tool for Adaptive and Generative
Organizational Learning?” The Learning Organization 8 (3), 2001, pp. 94—105.
29. M. Lebas, “Which ABC? Accounting Based on Causality Rather than Activity-
Based Costing.” European Management Journal 17 (5), 1999, pp. 501—511.
5
ACTIVITY-BASED LIFE-CYCLE COSTING
[Figure: The modeling steps of Activity-Based LCC. The steps legible in this
extract include Step 1 (define the scope of the model and the corresponding cost
objects), Step 2 (obtain and clean the Bill of Materials for all cost objects), Step 3
(identify and quantify the resources), Step 8 (estimate the Bill of Activities), Step 9
(estimate the cost of cost objects and their performance measures), and Step 10
(perform Monte Carlo simulations and relevant analyses), iterating between steps
if needed.]
It is crucial to define the objectives of the model because this determines what
type of model one should build. Two options concern whether one should build a
back-casting model or a simulation/forecasting model. LCC models are almost by
definition forecasting models, but often one must build a back-casting model
before one can build a good forecasting model. Alternatively, one can include the
last year in the forecasting model to provide a 100 percent baseline and then use
this year’s costs to check whether the forecasting is good enough.
In a back-casting model, we would start with the resources. We would then, via
the model, determine the costs of activities and the costs of the cost objects. This
is a top-down approach at least as far as the cost assignment goes. Simulation mod-
els, however, are bottom-up models where we start with the process data, product
demand, and so on and calculate the resources needed to run the operation.
Alternatively, we can follow the target costing idea and end up with a target cost,
which is what the operation can cost. However, in order to perform a simulation
model, one must in general start with a back-casting model to determine the “as-
is” situation. Then from the as-is situation, changes can be made and the future
simulated, so to speak.
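The distinction can be sketched in a few lines of code. The following Python fragment is a hypothetical illustration (all names and numbers are invented): the back-casting part assigns known resource costs top-down through resource and activity drivers, while the simulation part works bottom-up from demand and process data to the resources needed.

```python
# Back-casting (top-down): known resource costs -> activities -> cost objects.
resource_cost = 120_000.0                                   # known from the accounts
activity_share = {"A1 Machine": 0.6, "A2 Assemble": 0.4}    # resource drivers
activity_cost = {a: resource_cost * s for a, s in activity_share.items()}

driver_units = {"A1 Machine": {"P1": 300, "P2": 100},       # activity drivers
                "A2 Assemble": {"P1": 150, "P2": 250}}
product_cost = {"P1": 0.0, "P2": 0.0}
for act, units in driver_units.items():
    rate = activity_cost[act] / sum(units.values())         # cost per driver unit
    for product, n in units.items():
        product_cost[product] += rate * n

# Simulation (bottom-up): product demand and cycle times -> resources needed.
demand = {"P1": 1_000, "P2": 2_000}                         # units next year
hours_per_unit = {"P1": 0.5, "P2": 0.3}
hours_needed = sum(demand[p] * hours_per_unit[p] for p in demand)   # 1,100 hours
print(product_cost, hours_needed)
```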
A more relevant distinction between models for this book is the difference
between cost accounting models and ad hoc LCC analyses. LCC cost accounting
models take a broad view of the situation and are applied to managing future costs
by predicting them and managing their drivers, as shown in Chapter 8. Basically,
they provide relevant decision support to numerous managerial issues as con-
strained and partly defined by a system boundary. In other words, the system
boundary is an important driver for cost accounting models and ultimately limits
which cost objects are relevant.
Ad hoc LCC analyses, as discussed in Chapter 2, are tailored toward specific
decisions or cost objects. They provide decision support as defined by the cost
object(s) that constrains the system boundary choice. The ad hoc analyses are
therefore focused on a specific cost object(s), whereas the cost accounting models
focus on the system boundary and understand what is inside it. Ad hoc analyses
are consequently more narrow in scope than cost accounting models, usually tai-
lored toward specific decisions, and the system boundaries are different. Tra-
ditional LCC is ad hoc and often focuses on one cost object at a time, whereas in
Activity-Based LCC many cost objects are handled simultaneously when
employed as either ad hoc analyses or cost accounting models.
Concerning the system boundary, we must distinguish between the system
boundary for cost accounting models and for ad hoc models, as mentioned. For
cost accounting models, it is important that the boundary is defined to minimize
the number of transactions across the boundary. The length of the life cycle must
also be defined by convention, such as by 1 year, 10 years, a strategic planning
time horizon, or the life cycle of a product line. Ideally, the physical system bound-
ary should be a Strategic Business Unit (SBU), a company, or an entire organiza-
tion/corporation. At the very minimum, the business unit under study must have
clearly identifiable cost accounts or cost information in a format that can be used.
For example, making a cost model of a department in a company can be greatly
inhibited if that department does not possess clearly identifiable cost accounts and
if substantial interaction does not exist between that department and other depart-
ments. In such a case, it is usually less work to include the entire company in the
cost model than to try to create proper system boundaries. Of course, once the sys-
tem boundaries are defined, we need not study every part of the system with equal
diligence. In other words, we can focus on one particular department if we want.
Thus, once the scope of the modeling is determined, the cost objects will be
defined by default in an LCC cost accounting model.
Conversely, in an ad hoc LCC model where the cost objects are defined first,
the system boundaries are defined with respect to both time and physical exten-
sion. They are primarily designed to follow the length of the life cycle of the cost
object(s) and include all conceivable costs and revenues.
Traditionally, the ad hoc approach has been associated with engineering, while
the cost accounting approach has been associated with managers. In Activity-
Based LCC, the difference no longer exists, except when it comes to defining the
system boundaries.
When it comes to the perspective of the model—that is, whose point of view
the model represents—several perspectives are possible. The most common perspective
is the SBU perspective, which is the perspective of the SBU where the cost
objects belong. Such a perspective will typically revolve around the operating costs
and possible liability costs in the future. Another perspective that has gained much
popularity in recent years is the shareholder perspective. The shareholder per-
spective is similar to the SBU perspective, but it also includes the cost of capital
or something similar. A perspective that is popular in LCC models used in envi-
ronmental management is the stakeholder perspective. The stakeholder perspec-
tive includes all the costs as seen by many stakeholders. In such models, it is vital
to state for what stakeholder each cost assessment is made because a cost for one
stakeholder is revenue for another and so forth.
The point is to include the relevant costs given the appropriate perspective. For a
company, the perspective will usually be either an SBU or a shareholder perspective.
If the LCC model is to be used in scenario planning or any other activity where
costs (and revenues for that matter) may or may not appear, it is important to in-
clude all possible costs, but more important to decide how to handle such costs. If
we assign probability estimates to the costs, we implicitly assume that the
Expected Monetary Value (EMV) approach is suitable, but cases occur where this
would be highly misleading.
For example, if a cost either does or does not occur (with no intermediate val-
ues), then EMV will produce the average between occurring or not. The only prob-
lem is that this average will never occur; hence, it is misleading. For large costs,
this can become inappropriate, and in such cases it is best to use indicator vari-
ables (variables that take the value of either 0 or 1) or step functions (functions that
produce only discrete outcomes such as 0, 1, 2, 3, and so on). Then assign proba-
bility values to the values in the step function. This will essentially be similar to
using a decision tree but without computing a grand, weighted average.
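A small numeric illustration may help. In the hedged Python sketch below, a hypothetical $1,000,000 liability occurs with probability 0.3; the EMV is $300,000, a value that can never actually occur, whereas an indicator variable confines the simulated outcomes to the two values that can.

```python
# Why EMV can mislead for all-or-nothing costs, and how an indicator fixes it.
import random

p, liability = 0.3, 1_000_000.0

emv = p * liability            # $300,000 -- an average that will never occur

# Indicator-variable approach: each Monte Carlo trial draws 0 or 1, so the
# simulated outcomes are the only two values that can actually happen.
random.seed(1)
trials = [liability * (1 if random.random() < p else 0) for _ in range(10_000)]
print(emv, sorted(set(trials)))   # 300000.0 vs outcomes [0.0, 1000000.0]
```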
Resources can be refined into smaller and smaller resource elements. For exam-
ple, a building is a resource that can be split up into resource elements such as
rent, cleaning, electricity consumption, depreciation (note that depreciation and
other calculated costs can be calculated in several ways), and insurance, which in
turn can be refined into smaller resource elements, depending on what is useful
with respect to meeting the objectives of the model. Thus, we can produce resource
hierarchies, but these are virtually impossible to comprehend in full due to the vast
number of resource elements most companies possess. Consequently, complete
resource hierarchies are rarely used.
While performing Step 3, the resources should be identified and quantified if
possible. The identification only requires the name of the resource and type, such
as depreciation, house rent, insurance, and so forth. Proper identification is impor-
tant so that the resources can be identified during the analysis. The reason for
emphasizing “if possible” is that resources directly related to the production vol-
ume and other volume-related variables will not be completely known until Step
9 is completed in simulation models.
In Figure 5.2, three different types of resources, as they relate to the production
of products P1, P2, and P3, are illustrated:
[Figure 5.2: Three types of resources (volume-related, mixed, and pure overhead)
as they relate to the production of products P1, P2, and P3, seen from the
product/service perspective.]
Note that the dividing lines between the three resource categories are blurred.
This is to signify that no 100 percent clear-cut distinction exists between the three
types of resources, not even for materials. The types of resources overlap.
However, Figure 5.2 serves as a mental map during implementation because the
volume-related resources will be known only after Step 9 is completed in sim-
ulation models. The mixed resources will mostly be known before Step 9 is com-
pleted, while the only resources that will be completely known at Step 3 are the
pure overhead resources.
If this sounds difficult and even unclear, there is no reason for alarm; it
is not without reason that the volume-based costing systems lost their relevance,
as they were financially and structurally oriented. They were basically too simplistic
and too aggregated to capture the reality of businesses. This goes to show that
understanding resource consumption is not as easy as taught in school. In fact,
many textbooks do not offer definitions and proper explanations of resources,
costs, capacity, and expenses, let alone the differences. One reason might be that
the traditional cost accounting systems are incapable of distinguishing between
these terms, but for activity-based approaches it is important.
For example, when simulating the costs for a machine that is currently under-
utilized, but for which utilization will increase, it is important to realize that
although the resource consumption of the machine will increase, the expenses will
remain the same (except volume-related expenses). The reason the expenses remain
the same is that the capacity is the same. In this way, the simulation model will cor-
rectly point out the fact that the economics of the machine will improve because
the expenses stay the same, while the cost of surplus capacity decreases. In other
words, the hidden factory or the waste has decreased. This example concerns just
a machine, but the same line of argument holds for an entire corporation.
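The mechanics can be sketched as follows; all numbers are hypothetical. Expenses stay fixed because capacity is unchanged, so as utilization rises, the cost of surplus capacity (the hidden factory) falls.

```python
# As utilization rises, fixed expenses stay the same, so the cost of surplus
# capacity shrinks and the economics of the machine improve.
annual_expense = 200_000.0          # depreciation, insurance, etc. (fixed)
practical_capacity_hours = 4_000.0
rate = annual_expense / practical_capacity_hours   # $ per hour of capacity

for used_hours in (1_000.0, 2_000.0, 3_500.0):
    cost_consumed = rate * used_hours              # traced to cost objects
    cost_of_surplus = annual_expense - cost_consumed
    print(f"{used_hours:>7.0f} h used: surplus capacity cost ${cost_of_surplus:,.0f}")
```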
For advanced models, it is consequently important to distinguish between the
resource dimension and the capacity dimension, as just explained and further
expanded on in Chapter 4. This is, however, beyond the scope of this book because
this distinction is mostly fruitful for implementations used on a continuous basis
and provided that management in the SBU is capable of utilizing such informa-
tion. Utilizing such information may sound simple but it is actually quite chal-
lenging because one must, in addition to understanding costing, also understand
enough statistics to estimate whether changes in, for example, supply and demand
are statistically significant or not. Organizations where statistical quality control
is used should have an advantage here, because statistical cost control, as we may
call what was just described, is conceptually the same.
In activity-based frameworks, resource consumption is managed by the activ-
ity consumption, which can occur in many different ways. In theory, it is said that
activities are consumed on four levels: (1) unit level, (2) batch level, (3) product
level, and (4) facility level, as discussed in the previous chapter. This is a simpli-
fication because product-line levels, brand levels, department levels, and so on are
not included. In Activity-Based LCC, models of more than four levels are
employed if needed. Hence, although Figure 5.2 is crude with respect to resource
classification, it is more representative of reality than the classification into four
activity levels. But again this discussion is related to distortion problems and one
must choose a balance between accuracy and distortion, and cost and usefulness,
which can be determined only from case to case.
Some models include explicit relationships between design parameters and activity
drivers, as discussed in the "Relationships between Activity Drivers and Design
Changes" section. Other models are more purely attention-directing ones. Regardless
of circumstances, activities should not be defined in more detail than the available
information allows.
When identifying the activities, each activity should be labeled in a special
manner (see Figure 5.3). This method makes it easy to see where the activities
belong in the activity hierarchy and it saves a lot of space. Also, naming activities
in a meaningful way is important. The names should be expressed as verbs plus an
object. For example, an activity should be called Package Product and not Product
Packaging or simply Packaging. This may seem like semantics, but it is in fact
important for one reason: “a verb as attention director signals that continuous
improvement is the new paradigm for the company.”2 Verbs also better describe
what is actually being done. Using nouns to describe an activity easily hides the
real content of the activity.
The activity hierarchy looks like any other hierarchy, as shown in Figure 5.4. It
is important to note that the activity hierarchy shown in the figure is not easy to
use for large models because the hierarchy does not fit on a single page or a com-
puter screen. Activity hierarchies are therefore commonly presented as tables, as
shown in the case studies in this book. Such tables, however, should be interpreted
just like a hierarchy.
When the activity hierarchy is set up, the lowest level of activities, A11 through
Anm, is represented in an activity network, as shown in Figure 5.5. The purpose is
mainly to show which products consume which activities, the order of consump-
tion, and important decisions involved. This is important to know when designing
the model because otherwise the wrong products may be associated with the wrong
activities or wrong decisions, which would cause fundamental errors in the model.
[Figure 5.3: The activity labeling scheme (Ai, Aij, Aijk, and so on). Figure 5.4:
An activity hierarchy with top-level activities A1, A2, ..., An.]
Also, the activity network can help identify what a decision (the diamond-
shaped node) is really about. For example, if Product A is associated with a Yes in
Decision Node A, we see immediately that Product A will incur Activity A1k and
then Activities A21, A22, A2k, An2, and An3.
The activity network in Figure 5.5 is not detailed, yet it suffices for most cases.
In cases where detailed process mapping is required or desirable, it is probably bet-
ter to use IDEF0 charts for the most critical parts of the process. The preferred
approach, particularly for design purposes, is to use action charts.3
[Figure 5.5: An activity network. Products A and B enter the network; decision
nodes (Yes/No) route them through the lowest-level activities A11, A1k, A21, A22,
A2k, and so on, up to Anm.]
At the other end of the scale are action charts where no explicit relations exist
between design parameters and activity drivers. This gives greater flexibility, and
action charts are therefore superb at directing attention toward any design changes
in general, not just product changes. Action charts are also useful tools for per-
formance measurements such as quality and time since they are time based and
quality measures such as the scrapping percentage and the like can be easily facil-
itated.5
The action charts used in Activity-Based LCC are modified from action charts
found in the literature.6 Action charts are discussed in more detail later in the “Step
4 Issues: Building Submodels” section.
[Figure 5.6: An SRS Monte Carlo simulation example. Chosen uncertainty
distributions (here triangular) are assigned to the assumption cells (e.g., Direct
Labor trials of $4, $12, $20, and so on); the spreadsheet model computes Product
Cost = Direct Labor + Material for each trial (results of $10, $20, $35, and so on),
and the resulting distribution accumulates in the forecast cell. Source: B. Bras and
J. Emblemsvåg, "Designing for the Life-Cycle: Activity-Based Costing and
Uncertainty," Design for X, ed. G.Q. Huang. London: Chapman & Hall, 1996,
pp. 398—423.]
If we drew an infinite number of samples from Direct Labor and afterward drew a
histogram, the histogram would look exactly like the triangular distribution. After
all the trials have been
run, a histogram of the forecast cell is created; this is a graphical view of the fore-
cast cell and how it is numerically approximated to vary as a response to the uncer-
tainty in the assumption cells. Each estimate for the Product Cost is stored in the
computer and an ordinary statistical analysis is performed on these stored values
as if they were obtained by a real-life experiment. In other words, we are running
a virtual experiment in a virtual world, where the virtual world is defined in the
model as assumption cells, forecast cells, and model relationships.
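The following Python sketch mimics the virtual experiment of Figure 5.6, assuming (hypothetically) triangular distributions for both assumption cells; any spreadsheet simulation tool does essentially the same thing.

```python
# Monte Carlo trial loop: Product Cost = Direct Labor + Material, with
# triangular assumption cells (parameters are hypothetical).
import random
import statistics

random.seed(42)
N = 10_000
forecast = []
for _ in range(N):
    direct_labor = random.triangular(4, 20, 12)    # low, high, mode ($)
    material = random.triangular(6, 15, 9)
    forecast.append(direct_labor + material)       # one trial of Product Cost

print(f"mean ${statistics.mean(forecast):.2f}, "
      f"stdev ${statistics.stdev(forecast):.2f}")  # then plot a histogram
```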
We have so far discussed how to design the Activity-Based LCC model. The
remaining three steps are calculation steps. Note that in a simulation model, Step
9 precedes Step 8 for the reasons explained at the beginning of this chapter.
Most of these are, however, usually small compared to the two main sources:
equity and debt. The cost of capital should reflect this. A sound measure for the
cost of capital in for-profit organizations, therefore, is the Weighted Average Cost
of Capital (WACC). Table 5.1 shows the calculation of WACC. It has two structures,
namely, the cost of equity and the cost of debt. WACC is simply a weighted
average of these two main costs. The cost of debt is the interest rates paid for the
various loans.
The cost of equity, however, is a more interesting topic to discuss. We should
be aware of a couple of facts concerning determining the cost of equity:
● Using the long-term government bond rate is only one of two common
approaches. It is used in Table 5.1. Another approach is to use treasury bills,
which carry less risk than government bonds; bonds, however, have a distinct
advantage in that they better reflect expected future interest rates than treas-
ury bills.7 In the United States, many companies use the 10-year U.S.
Treasury bond.8
● To infer investors’ expectations for the market risk premium, one must look
at periods longer than a year, and “conventional wisdom suggests one should
select the longest period possible.”9
The cost of capital is subsequently found by multiplying WACC by the net
assets employed, which is defined as the net working capital plus the net fixed cap-
ital. Thus, if a company has $100 in net assets and a WACC of 10.1 percent, the
cost of capital, or capital charge, is $10.10.
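As a hedged illustration, the sketch below computes WACC and the capital charge in the spirit of Table 5.1 (which is not reproduced here). All inputs are hypothetical, so the resulting rate differs from the 10.1 percent in the text's example, and the after-tax treatment of debt is one common convention.

```python
# WACC as a weighted average of the cost of equity and the (after-tax) cost
# of debt, followed by the capital charge. All numbers are hypothetical.
equity = 600.0          # market value of equity
debt = 400.0            # interest-bearing debt
cost_of_equity = 0.12   # e.g., bond rate plus a market risk premium
cost_of_debt = 0.07     # average interest rate on the loans
tax_rate = 0.28         # interest is tax deductible, so debt cost is after tax

total = equity + debt
wacc = (equity / total) * cost_of_equity \
     + (debt / total) * cost_of_debt * (1 - tax_rate)

net_assets = 100.0      # net working capital + net fixed capital
capital_charge = wacc * net_assets
print(f"WACC = {wacc:.1%}, capital charge = ${capital_charge:.2f}")
```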
The literature discusses many other discounting factors, such as the
borrowing rate offered by banks. What is rarely emphasized, however, is that for a com-
pany, all capital is capital. You can borrow money to finance a project, but ulti-
mately it will be difficult to say that certain capital is borrowed while the other
capital is equity or other loans. The reason is that everything is interrelated in an
organization. It would be like saying that eating breakfast provides nutrition for
the arm and leg muscles while lunch is for the brain, and so on.
Some may argue that in for-profit organizations we can also use an external
measure of the cost of capital such as the rate of borrowing capital in the market-
place. I strongly disagree, because such external measures implicitly assume that
placing capital externally is a viable opportunity, or alternative, for the decision
maker. Thus, such practices effectively drain the company of capital and thereby
reduce its capability to create value over time. External measures are only finan-
cially viable in the short term; they ruin the company in the long run. In my opin-
ion, for-profit organizations should use internal, consistent measures of the cost of
capital, such as WACC, because such organizations must generate profit by their
own business processes and capital.
The public sector could, for example, use taxes (corporate and individual) as capital with a cost equal to the annual
inflation in the economy. The debt works the same way in both the public and pri-
vate sectors. Then a project would be acceptable if it could provide a return higher
than the weighted average cost of capital of taxes and debts.
A final but crucial issue is the fact that the public sector, at least in Norway, does
not follow the same accounting principles as the private sector. The underlying
assumption of all use of discounting factors (one dollar today is better than one
tomorrow) may not be relevant. The reason is that most organizations in the pub-
lic sector spend their funds according to their budgets. This means that time is
close to irrelevant in the public sector; the only thing that matters is the size of their
budgets this year because that is what they can spend. In a situation like this, using
discount factors becomes somewhat artificial because one dollar today is one dol-
lar tomorrow or whatever the politicians decide.
Before discussing how to handle inflation, it should be noted that in non-
governmental organizations (NGO), which are nonprofit organizations, discount
rates should be chosen from case to case. The reason is that NGOs play a role that
they define themselves, and the chosen role should be reflected in their choice of
discount factors.
Handling Inflation
Before we can talk about handling inflation, we must first learn a little bit about
it. Inflation may be defined as “persistent increases in the general level of
prices.”10 Furthermore, “it can be seen as a devaluing of the worth of money.” To
exemplify the concept, if the annual inflation is 5 percent through the year 2000,
then a $100 bill at the beginning of January 2000 will be worth about $95 one year
later in terms of purchasing power, despite the fact that the bill is still a $100 bill.
In other words, the usefulness of the $100 bill as a means to exchange goods and
services has decreased by 5 percent. Over a long time, this will lead to the $100
bill having virtually no value. For this reason, and the consequences it leads to,
“inflation ranks with unemployment as one of the two major macroeconomic
diseases.”11 Note that inflation usually differs from industry sector to industry
sector and between goods. If we talk about a certain item and the price increase
associated with it, that is often called price escalation, or simply escalation,
rather than inflation although the same mechanism is behind it.
Inflation has numerous causes. The most popular arguments are that inflation
is caused by:
● Excess demand in the economy—demand-pull inflation
● High costs—cost-push inflation
● Excessive increases in the money supply—monetarism
Regardless of the causes behind inflation, it reduces the value of money. For
organizations, this is important to take into account; otherwise, an apparently prof-
itable project may become unprofitable. Organizations must look out for at least
three situations:
1. The revenues are fixed in, for example, dollar terms. This represents a poten-
tial loss for the organization because the costs may rise while the revenues
associated with the costs do not. The profit is therefore steadily declining
and may easily become negative if inflation increases enough, since many
companies have modest operating margins.
2. Their financial assets (bonds, or equities in other companies) produce less
than inflation. Then a net decrease in value takes place. This is mostly a
problem for retirees, financial institutions, or anybody else who has sub-
stantial financial assets.
3. The costs are fixed in, for example, dollar terms. This can produce an upside
for the organization if its suppliers (of raw materials, labor, and capital) have
fixed contracts and inflation is unexpectedly high.
Because inflation is a well-known phenomenon for the marketplace, most mar-
ket participants calculate a certain amount of inflation into fixed price contracts.
Thus, inflation is only a problem as long as it is not too high (although expected)
or when it is unexpected. When it is unexpected, it leads to a reshuffling of wealth
and this is generally considered economically unfair. This represents a major risk
for fixed contracts because inflation produces economic uncertainty.
When inflation is neither too high nor unexpected, it can be dealt with in differ-
ent ways. The common approach is to introduce a real rate for the quantity in question. For
example, as shown in Figure 5.7, if you borrow money from the bank at 8 percent
per annum and the inflation is 3 percent, the real interest rate is 5 percent. If the
inflation turned out to become 10 percent that year, however, the real interest would
be ⫺2 percent. The bank would actually give you purchasing power although you
paid the bank 8 percent, provided you had a fixed contract with the bank.
For LCC purposes, the first and third situations above are those most commonly
considered. The simplest case occurs when we assume that revenues and costs are
affected the same way by inflation. In other words, the ratio of real costs and rev-
enues remains constant. In such cases, we do not need to worry about inflation in
the model. This is a sound assumption when no long fixed price/cost contracts are
to be considered and when no significant time lag exists between costs and revenues.
If either prices or costs are fixed, or if a significant time lag exists between the costs
incurred and the revenues received, inflation must be included in the LCC models.
If a time lag exists but no fixed prices or costs, the easiest way of dealing with
inflation is to adjust the discounting factor using the principle shown in Figure 5.7.
That means one must add the inflation rate to the discounting factor so that the net
effect over time reflects the underlying discounting factor.

[Figure 5.7: The nominal interest rate (8 percent) less inflation (3 percent) leaves
the real interest rate (5 percent).]

This is important to remember for all
types of equivalence calculations, such as when we calculate the value of future
cash flows to an equivalent present value.
If prices/costs are fixed for a period, the easiest way to handle the situation is
to reduce the value of the prices/costs by a factor of 1 − inflation rate, that is, by
simply multiplying the prices/costs in question by that factor. It is then implicitly
assumed that year 0 is the base year for the calculations.
Consequently, one of two approaches must be used when dealing with inflation:
1. Calculate future costs and revenues using nominal dollars, that is, money not
adjusted for inflation, and then use a real discounting factor (a discounting
factor adjusted for inflation). This approach is probably easiest to apply
when a time lag exists but no fixed prices or costs. However, this approach
has a drawback, namely that inflation is assumed to be constant, and that is
simply not the case, particularly if the LCC spans many years.
2. Calculate future costs and revenues in real dollars, or money adjusted for
inflation, and then use a nominal discounting factor, which is a discounting
factor that is not adjusted for inflation. This approach is the safest to use in
most situations because inflation will change over time and this approach
allows us to handle inflation with complete flexibility.12
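The following sketch checks the two approaches numerically under made-up assumptions (a 5 percent underlying rate and constant 3 percent inflation). When inflation is constant, the two approaches produce the same present value, which is the point of the comparison; the inflation adjustment is done multiplicatively here, whereas Figure 5.7 uses the additive approximation.

```python
# Two ways of handling inflation in a present-value calculation: a cost of
# $100 (in today's prices) incurred at the end of each of three years.
base_rate = 0.05        # discounting factor not adjusted for inflation
inflation = 0.03
cost_today = 100.0
years = range(1, 4)

# Approach 1: nominal dollars (inflation embedded in the amounts), discounted
# with a factor adjusted to include inflation.
adj_rate = (1 + base_rate) * (1 + inflation) - 1
pv1 = sum(cost_today * (1 + inflation) ** t / (1 + adj_rate) ** t for t in years)

# Approach 2: real dollars (amounts kept at today's purchasing power),
# discounted with the unadjusted factor. Inflation may vary per year here,
# which is why the text calls this the more flexible approach.
pv2 = sum(cost_today / (1 + base_rate) ** t for t in years)

print(round(pv1, 2), round(pv2, 2))   # both 272.32 -- the approaches agree
```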
When it comes to the actual calculations, I would recommend using the same
practical approach to the calculations as accountants do, which is to use spread-
sheets and do each step of the LCC in a step-by-step manner. The safe accountant’s
approach is much easier to validate than formulas. The accountant’s approach is
also recommended even when inflation is not included in the calculations.
168 ACTIVITY-BASED LIFE-CYCLE COSTING
Finally, tax effects can also be included. This is most easily and safely done
by multiplying the appropriate numbers by a factor of 1 − tax rate and using the
real dollars calculation approach described previously. If WACC is used as a discounting fac-
tor, caution must be exercised because WACC is normally already tax adjusted.
It is also important to be aware that organizations have many ways to
legally avoid tax that are difficult to model well. Thus, when tax considerations
are included in a model, it is more to give an idea than to calculate the exact
amount.
invested at book value.”14 This statement is too bold in my opinion, but it does have
some merit because market value is often defined as:
Market value = Book value + Intellectual capital
Thus, the more knowledge-intensive the company is, the more likely it is that
the market value/book value (M/B) ratio is large. For example, in November 1996,
the M/B ratio of Microsoft was 91.93, whereas for IBM at the same time it was
4.25. In any case, the perceived value of the intellectual capital by far dominates
the book value in the market. This might also be due to the fact that both Microsoft
and IBM are in industries that move more rapidly than other industries, and hence
the need for innovation, which by nature requires a steady flow of new knowledge,
is larger. Companies in such industries will therefore be priced more according to
expectations than according to past performance. Another reason is that for knowl-
edge-intensive companies the balance sheet reflects their true capital base only to
a minor degree. The majority of the capital is in fact in their employees’ heads and
in the IT systems.
Apparently, the link between EP and market value is somewhat unclear, at least
for some companies. But that does not deny the fact that if EP is positive, the com-
pany is generating a net positive economic value, whereas if EP is negative, eco-
nomic value is destroyed. EP is therefore probably a good indicator for many
companies, while for some companies it has more limited value. But one thing is
sure: EP cannot be significantly negative over time without having negative con-
sequences for the market value.
The destruction of economic value is in principle due to two distinct but not
mutually exclusive reasons: (1) the cost objects do not generate sufficient profit to
maintain the capital base (the net assets15), and/or (2) the capital base is too costly.
In the former case, a company must become more cost effective and/or generate
higher net revenues, while in the latter case, the company needs to sell some assets,
preferably ones that produce little, if any, operating profit.
To include EP into an Activity-Based LCC model, we rely on two critical
points:
1. The identification and use of capital drivers whose purpose is to trace the
cost of capital. The term capital driver is analogous to resource drivers and
activity drivers found in activity-based frameworks and works the same way.
2. The computation of the cost of capital, as discussed earlier.
This section has discussed what EP is, how to calculate it, and how to interpret
it, but how does EP compare to NPV? NPV calculations are safe approaches to use
in LCC and in general, because they always produce consistent and “correct”
answers. Interestingly, however, the present value of future EP is identical to the
NPV: "the total NPV of the project . . . helps one understand whether the
entire project is value creating or not. In contrast, EVA is NPV by period and helps
one understand the pattern of (economic) value creation throughout the project
life.”16 EP is therefore also more suitable for identifying risks. However, the cash
flow analysis that NPV is based on can provide additional insight to better under-
stand the project’s funding requirements and cash generation patterns.
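The identity can be verified with a small example. The Python sketch below uses invented numbers (a $1,000 asset depreciated straight-line over three years, $500 annual cash flow, 10 percent WACC); under these assumptions the present value of the yearly EP equals the project NPV.

```python
# EP-NPV identity: discounting each period's Economic Profit gives the same
# answer as a conventional NPV calculation. All numbers are hypothetical.
wacc = 0.10
investment = 1_000.0
life = 3
cash_flow = 500.0
depreciation = investment / life

npv = -investment + sum(cash_flow / (1 + wacc) ** t for t in range(1, life + 1))

pv_ep = 0.0
capital = investment                      # net assets at the start of each year
for t in range(1, life + 1):
    nopat = cash_flow - depreciation      # operating profit after depreciation
    ep = nopat - wacc * capital           # Economic Profit for the period
    pv_ep += ep / (1 + wacc) ** t
    capital -= depreciation               # capital base shrinks with the asset

print(round(npv, 2), round(pv_ep, 2))     # both ~243.43
```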
[Figure 5.8: A moving (declining) baseline. Source: Adapted from R.A. Howell and
W.A. Schwartz, "Asset Deployment and Investment Justification," Handbook of Cost
Management, ed. B.J. Brinker. Boston, MA: Warren, Gorham & Lamont, 1997,
pp. D4.1—D4.32.]
Equation 5.1, which can also be thought of as a performance measure, can be fur-
ther specified as follows:
Effectiveness = Availability × Reliability × Maintainability × Capability   (5.2)

These four performance measures have values in the interval (0, 1) and are typ-
ically defined as:

Availability = Uptime / (Uptime + Downtime)   (5.3)

R(t) = e^(−λt)   (5.4)
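As a hedged illustration of Equations 5.2 through 5.4, the sketch below computes effectiveness from hypothetical inputs; maintainability and capability are simply assumed as values, since their defining equations are not reproduced in this extract.

```python
# Effectiveness per Eq. 5.2, from availability (Eq. 5.3) and the exponential
# reliability function (Eq. 5.4). All inputs are hypothetical.
import math

uptime, downtime = 950.0, 50.0
availability = uptime / (uptime + downtime)           # Eq. 5.3

failure_rate = 1e-4                                   # lambda, failures per hour
t = 1_000.0                                           # mission time in hours
reliability = math.exp(-failure_rate * t)             # Eq. 5.4

maintainability, capability = 0.95, 0.90              # assumed values in (0, 1)
effectiveness = availability * reliability * maintainability * capability
print(f"Effectiveness = {effectiveness:.3f}")
```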
1. Check if the model meets the objectives specified in the beginning. This is
obvious but nevertheless important.
2. Go through the model to check whether any computational errors have been made,
which is done by using control sums: if the control sums are equal, no com-
putational errors exist. Computational errors are the first type of error (see
the sketch after this list).
3. The second type of error is logical errors. These are much more difficult to
identify, and here sensitivity charts can be used to see whether any illogical tracings
take place. Also, we sort the results in sequences to identify any weird
results, such as excessively costly products. If we find such results, we back-
track in the model and often we find either a minor computational error or a
logical error. More important, however, is the way the implementation is
done. That is why it is important when implementing activity-based systems
in general that the implementation procedure is systematic and that the peo-
ple implementing the model know what they are doing.
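As a minimal illustration of the control-sum check in point 2, the sketch below (with invented numbers) verifies that total resource cost, total activity cost, and total cost-object cost agree, which must hold in any consistent activity-based model.

```python
# Control sums: the three assignment stages of an ABC/LCC model must carry
# the same total cost. Numbers are hypothetical.
resources = {"rent": 400.0, "salaries": 900.0, "depreciation": 200.0}
activities = {"A1 Machine parts": 800.0, "A2 Assemble product": 700.0}
cost_objects = {"P1": 650.0, "P2": 850.0}

totals = [sum(resources.values()), sum(activities.values()),
          sum(cost_objects.values())]
assert max(totals) - min(totals) < 1e-6, f"Control sums differ: {totals}"
print("Control sums OK:", totals[0])
```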
If the model is deemed unsatisfactory, then depending on what failed the appro-
priate steps must be iterated until the model is satisfactory. Once the model is sat-
isfactory, it can be used as specified initially.
export any useful nonfinancial information. Because of this, we could not perform
the ABC analyses we were supposed to do.
Third, if information systems have merged within the overall information sys-
tem, this may present some unexpected problems. For example, one company I
worked with had just merged two accounting systems for two divisions into one
single accounting system. This was no problem in general for the analysis, but it
did add extra work to ensure that the information we had was consistent. The only
real challenge was that it became difficult to trace parts of the material costs of
some products that were manufactured in both divisions.
Fourth, be aware that information systems are set up in certain ways, and
exporting data from them in ways they were not originally set up for may intro-
duce challenges. Most of these challenges can be solved by having a skillful data-
base administrator. However, sometimes the data we want has been destroyed or
base administrator. However, sometimes the data we want has been destroyed or
is simply too difficult to get. In such cases, we must find other information we can
use. One company I worked with installed an Enterprise Resource Planning (ERP)
system a couple of years ago. Unfortunately, the system had been installed in such
a way that all batches that ran over 24 hours were assigned new batch numbers.
Thus, it became impossible to find out how many batches of a certain product, par-
ticularly high-volume products, were produced in a year.
These are some of the potential pitfalls and challenges we may face when per-
forming economic analyses in general. Most often they are solved by either choos-
ing a different year on which to perform the base calculations or by making some
additional assumptions. However, if we cannot make any viable or sound assump-
tions, we have a problem. If the problem is large enough, the whole analysis may
have to be planned differently than originally thought. For this reason, it is impor-
tant to identify data availability problems as early as possible.
However, a particular approach may be useful if we are only interested in the
relative difference between certain items and have no better alternative. For
example, let’s say we have three products and want to know their relative con-
sumption of a machine in order to assign the correct process costs to these prod-
ucts, but we lack data. In such cases, or whenever we want to establish relative
rankings, we can use an approach called Analytic Hierarchy Process (AHP) that
Thomas L. Saaty developed in the late 1960s. AHP is a tool to aid decision
processes, but its matrix system of pair-wise comparisons can be used to subjec-
tively establish the relative importance, or weight, between objects, criteria, or
whatever we want. Even though AHP is based on subjective statements from
groups of people, experts, and so on, it has an indispensable feature that other sub-
jective methods lack, namely an internal, logical consistency check. AHP, in other
words, produces logically consistent answers. Whether this logic is consistent
with reality is, of course, an entirely different issue that only the practitioners of
the method must judge.
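As a hedged sketch of how AHP can fill such data gaps, the fragment below derives relative weights for three hypothetical products from a pairwise comparison matrix and applies Saaty's consistency check; the judgments in the matrix are invented (e.g., "P1 consumes three times as much of the machine as P2").

```python
# AHP: relative weights from pairwise comparisons, with a consistency check.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                    # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # normalized relative weights

# Saaty's consistency index and ratio: CI = (lambda_max - n) / (n - 1).
n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
cr = ci / 0.58                                 # random index RI = 0.58 for n = 3
print(weights, "consistent" if cr < 0.10 else "revisit the judgments")
```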
From the discussion in this section, it is evident that finding which data to use,
how to get the data, how to find replacement data, and so forth is not straightfor-
ward. It is possibly the most difficult part of the entire analysis because it is not
only a challenge in its own right, but it also directly affects activity definitions,
driver definitions, and even, in the worst cases, cost object definitions. The over-
riding principle of data availability is consequently the cost of obtaining the data
versus the benefit of the data. As a rule of thumb, it is therefore wise to avoid too
many activities and drivers in the LCC model because the number of activities and
drivers is the primary origin of data needs.
● A corporation wants models for the various plants, which are separate busi-
ness units, and models on division levels and on corporate levels. Hierarch-
ical models19 can be employed here.
● Several plants are tightly cooperating but are separate judicial entities and
make decisions on their own. In this case, relational models20 can be
employed.
● Software limitations can make it necessary to split up models into several mod-
els. This is the situation where either submodels or relational models suffice.
In the LCC context, only submodels are applicable, so we only discuss these here.
Submodels are simply models within a governing model where the governing
model communicates one way with the submodels. The sole purpose of submod-
els is therefore to give particular attention to a defined area within the governing
model. This can be done either by using action charts or by using mathematical
functions to describe the processes. In my experience, using action charts is by far
the most versatile approach.
In both cases, however, the governing model provides the submodel with the
resource elements and other system boundaries, and then the submodel provides
insight into the part of the governing model that needs particular attention. A sub-
model is therefore like a magnifying glass, as illustrated in Figure 5.9, where we
show how an activity (Aij) can be studied in detail by investigating the actions that
it consists of.
Figure 5.10 shows how to design a submodel. The first step concerns how to
use the governing model in setting the system boundaries for the submodel(s). The
system boundaries should include at least cost objects, activities, and their associ-
ated actions. It should be noted that action chart models can be built per cost object
because the governing model ensures proper cost assignment between the various
cost objects.
[Figure 5.9: How submodels magnify aspects of the governing LCC model.
Activities such as Aij (Assemble Hammer) and Aij+1 (Inspect Hammer) are
expanded into the actions they consist of, such as 1) Take Shaft, with the direction
of information flow indicated.]
[Figure 5.10: How to design a submodel. The steps legible in this extract include
Step 2 (perform a process study and fill in action charts and/or establish
mathematical functions) and Step 5 (perform Monte Carlo simulations and relevant
analyses), iterating if needed.]
Step 2 focuses on the detailed process study needed to gain further insight. The
process study should include the cost object structure (which is usually more
detailed than the BOM), action sequences, time estimates, and process parameters for
use in conjunction with mathematical functions. In addition, quality measures and
links to product characteristics can be useful in relation to product design. IDEF0
charts can also be used in the process study if desired.
Steps 3, 4, and 5 are similar to Steps 7 through 10 in the governing model. The
resulting model is like a small Activity-Based LCC model with time as the only driver.
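A time-driven submodel of this kind can be sketched as follows; the activity, actions, times, and labor rate are all hypothetical, with the labor rate standing in for the resource elements provided by the governing model.

```python
# An action-chart-style submodel with time as the only driver: an activity is
# broken into actions with time estimates, and costs follow the hours.
labor_rate = 40.0    # $ per hour, provided by the governing model

action_chart = {     # activity Aij "Assemble Hammer" split into actions
    "Take shaft": 0.02,          # hours per unit
    "Fit head": 0.05,
    "Inspect": 0.01,
}

units = 5_000
activity_hours = sum(action_chart.values()) * units
activity_cost = activity_hours * labor_rate
print(f"{activity_hours:.0f} h, ${activity_cost:,.0f}")   # 400 h, $16,000
```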
[Figure 5.11: Preferences for driver identification. Costs are assigned from
resources (1, 2, 3, ..., j-1, j) to activities, and from activities to cost objects; at
each stage, direct assignment is the first preference, assignment through
attribution drivers the second preference, and allocation the last resort. Source:
Adapted from M.R. Ostrenga, "Activities: The Focal Point of Total Cost
Management," Management Accounting, February 1990, pp. 42—49.]
Profound knowledge signifies that gathering the right data from the right
sources at the right time, and then using the right resources to make the right inter-
pretations, leads to the right actions to improve quality.23 Deming understood at
least by the 1950s the importance of continuous improvement, and central in his
thinking was what later became known as the Deming Cycle (see Figure 5.12),
which is essential in continuous improvement thinking as well.
The four sectors—Simulation, Proactive, Assessment (or Accounting), and
Reactive—relate the Deming Cycle to Activity-Based LCC directly. Simulation
and assessment provide attention direction for the other two sectors.
The Deming Cycle consists of four steps:
1. The company should plan what to accomplish over a period of time and
what it is going to do to get there. Activity-Based LCC can be helpful in this
process in terms of simulation capabilities so that the critical success factors
can be known up front and the impact of uncertainty can be assessed.
2. The company should then do what furthers the goals and strategies devel-
oped previously. The identification of the critical success factors is crucial
in this stage because it allows proactive management of what is critical.
3. The company should then check the results of its actions to ensure a satis-
factory fit with the goals. This is the easiest way to employ Activity-Based
LCC because it is a simple after-calculation, or backcasting.
4. The company should then act to eliminate possible differences between the
actual performance and the goals stated up front. This reactive way of act-
ing is not discussed in the case studies.
It is important to realize that continuous improvement applies not only to prod-
ucts, services, and systems but also to measurement systems such as an Activity-
Based LCC model.

[Figure 5.12: The Deming Cycle (Plan, Do, Check, Act) with the corresponding
Activity-Based LCC sectors: Simulation (Plan), Proactive (Do), Assessment
(Check), and Reactive (Act).]

Spending too much time on making the perfect model is usu-
ally counterproductive because the owners of the model typically need results, not
lengthy explanations of why the model is not yet finished. Also, most models
require changes after their initial design, either to answer questions about func-
tions other than what the models were originally designed for or because their
logic must be refined.
The second implementation of Activity-Based LCC will most likely be better
than the first implementation and so on. It is therefore important not to pursue per-
fection in the beginning because it is too time consuming. Rather, seek perfection
over time in understanding, application, and improvement. As Chan (Zen) Master
Jiantang Ji said:
NOTES
1. This approach is the result of my Ph.D. work and has been published in numerous
papers and in a book, Activity-Based Cost and Environmental Management: A
Different Approach to the ISO 14000 Compliance, by J. Emblemsvåg and B. Bras,
Boston: Kluwer Academic Publishers, 2000.
2. J.A. Miller, “Designing and Implementing a New Cost Management System.”
Journal of Cost Management, Winter 1992, pp. 41—53.
3. For further details, see J. Emblemsvåg, “Activity-Based Life-Cycle Assessments in
Design and Management." Ph.D. Dissertation. Atlanta, GA: George W. Woodruff
School of Mechanical Engineering, Georgia Institute of Technology, 1999, p. 600.
4. A case where mathematical functions were employed is in the design of a vehicle
for handicapped persons discussed in J. Emblemsvåg’s master’s thesis, “Activity-
Based Costing in Designing for the Life-Cycle," Atlanta, GA: George W. Woodruff
School of Mechanical Engineering, Georgia Institute of Technology, 1995, p. 300.
5. In my Ph.D. dissertation (see note 3), I have discussed this in detail and also pro-
vided a good case from a furniture manufacturer in Norway.
6. Specifically, the action charts in W. Beitz, M. Suhr, and A. Rothe's "Recyclingorien-
tierte Waschmaschine" ("Recycling-Oriented Washing Machine"), Berlin: Institut für
Maschinenkonstruktion—Konstruktionstechnik, Technische Universität, 1992. This
report is in German.
7. For more details, see E. Dimson, P. Marsh, and M. Staunton, “Risk and Return in
the 20th and 21st Centuries,” Business Strategy Review 11 (2), 2000, pp. 1—18.
8. According to S. Godfrey and R. Espinosa, “A Practical Approach to Calculating
Costs of Equity for Investments in Emerging Markets,” Journal of Applied
Corporate Finance, vol. 9, Fall 1996, pp. 80—89.
9. Dimson, Marsh, and Staunton, “Risk and Return.”
10. G. Bannock, R.E. Baxter, and E. Davis, Dictionary of Economics. London: Penguin
Books, 1999, p. 439.
11. P. Wonnacott and R. Wonnacott, Economics. New York: John Wiley & Sons, 1990,
p. 804.
12. S.K. Fuller and S.R. Petersen, Life Cycle Costing Manual for the Federal Energy
Management Program. Washington, D.C.: U.S. Government Printing Office, 1996,
p. 210.
13. A.P. Sloan, My Years at General Motors. New York: Doubleday, 1990, p. 472.
14. D.B. Kilroy, “Creating the Future: How Creativity and Innovation Drive Share-
holder Wealth.” Management Decision 37 (4), 1999, pp. 363—371.
15. The gross asset value must be corrected for depreciation because depreciation is
included in the operating costs and we must avoid double-counting. Be aware of the
fact that depreciation is a calculated cost and three main avenues exist for calculat-
ing it:
● Linear depreciation, which implies that the asset loses an equal amount of value
per period. This is the most common and crudest approach.
● Accelerated depreciation in which the value loss of the asset accelerates accord-
ing to its age. This approach has several variants.
● Activity-based depreciation, where the value of the asset is reduced by the ratio
between the actual use of the asset and the estimated useful life of the same asset.
16. For more details, see T. Gandhok, A. Dwivedi, and J. Lal, “EVAluating Mergers and
Acquisitions: How to Avoid Overpaying.” Mumbai, India: Stern Stewart & Co.,
2001, p. 8.
17. R.A. Howell and W.A. Schwartz, “Asset Deployment and Investment Justification.”
Handbook of Cost Management, ed. B.J. Brinker. Boston, MA: Warren, Gorham &
Lamont, 1997, pp. D4.1–D4.32.
18. H.P. Barringer and D.P. Weber, “Life Cycle Cost Tutorial.” Fifth International
Conference on Process Plant Reliability, 1996. Houston, TX: Gulf Publishing
Company and Hydrocarbon Processing.
19. See note 3.
20. See note 3.
21. Note that cost objects consume activities, which in turn consume resources as meas-
ured by costs. Thus, strictly speaking, no direct cost assignment exists between an
activity and a cost object, but rather an activity consumption assignment indirectly
produces a cost assignment.
22. R. Cooper, “Implementing an Activity-Based Cost System.” Journal of Cost
Management, Spring 1990, pp. 33—42.
23. S.M. Hronec and S.K. Hunt, “Quality and Cost Management.” Handbook of Cost
Management, ed. J.B. Edwards. Boston, MA: Warren, Gorham & Lamont, 1997,
pp. A1.1–A1.42.
6
CASE STUDY: LIFE-CYCLE
COSTING AND TIRE DISPOSAL1
Randi Carlsen
Sagex Petroleum AS
Continue to contaminate your bed and you will one night suffocate in your
own waste.
Chief Seattle (Sealth)
All around the world car tires are worn out in massive quantities every year—
Norway is no exception. Some places have systems that collect and dispose of, recy-
cle, and/or reuse the tires, whereas other places lack such systems. Depending on
whether such a system exists or not, tires can be either a useful resource or a major
waste problem.
When tires are treated as waste, problems arise from the fact that tires are bulky
and difficult, if not impossible, to compress and keep compressed. In turn, this cre-
ates a superb breeding ground for rodents and insects while simultaneously caus-
ing stability problems for the disposal sites. Also, the possibility of fire and the
associated environmental hazards make tires an unwanted material for disposal sites.
Managing tires at the end of their life is therefore important. This case study
investigates what can be done if a system is lacking and a large pile of tires is wait-
ing in the forest, so to speak. The case presented here is a reworking and extension
of a project that Randi Carlsen and Jon-Arve Røyset did during the spring of 2000
at the University of Oslo in Norway.
Two disposal sites for unwanted tires are located in the county of Ullensaker,
north of Oslo. The largest site is the one at Østli, which contains roughly 100,000
tires, while the site at Borgen is somewhat smaller. Since 1992, Ullensaker offi-
cials, the Statens Forurensingstilsyn (SFT),2 landowners, the Sessvollmoen mili-
tary base, and other neighbors have wanted to remove the tires (and other scrap)
from Østli, but it took about 10 years before the tires were removed.
The question was what should be done about the disposal site, and if the tires were
to be removed, what would be the best way of removing them from Østli? The cho-
sen approach must incur the least possible overall damage to the environment and
society.
The first option, Alternative 1, is to do nothing. The most serious threat of doing nothing is a tire fire, whose smoke could force the nearby Oslo Gardermoen
International Airport to close until the fire is extinguished. Closing the airport will
naturally have substantial economic effects. Finally, the rubber in the tires is a
resource that can be used, but in Alternative 1 it is wasted. Evidently, doing noth-
ing can have substantial negative consequences. The only upside of doing nothing
is that the expenses are few and known, at least in the short run.
The second option, Alternative 2, is to recycle the tires to reclaim rubber. This
alternative involves removing and transporting the tires away from their current
location, sending them to a recycler, recycling the rubber, and eliminating all the
risks. Although this option is potentially the most costly for Ullensaker, it is pos-
sibly the best option from a macroeconomic perspective.
The third option, Alternative 3, is to put the tires in a proper depot. This is essen-
tially the same as Alternative 2 except that the tires are transported to a proper
waste depot where the environmental risks are controlled and the impact of a fire
is mitigated. The potential risk of shutting down Oslo International Airport is prac-
tically eliminated; however, the rubber is still wasted and the legal issues around
using a depot to dispose of tires remain. Alternative 3 might, in fact, turn out to be
nonviable due to legal issues.
The three options are summarized in Table 6.1, which lists strengths and weak-
nesses. Opportunities can become future strengths, while threats are potential
problems that may follow each option.
Alternative 1: Do Nothing
If we do nothing, 100,000 tires will still be lying around 20 years from now. This
will have three consequences: (1) accidents, (2) fires, and (3) environmental prob-
lems. These three categories, which represent possible future liability costs, are
discussed in more detail below.
Cost Savings
Because we do nothing, we save on costs. These savings mostly concern two issues: (1) the
removal and recycling of the tires, and (2) cleaning the soil where the tires
were burnt.
To remove the tires, we have looked at the less costly scenario. It includes pay-
ing a local sports club $40,000 for loading the tires onto trucks and then using sol-
diers to transport the tires to a recycler. We believe the tires will have to be
recycled at some point even if the decision to act is delayed, because current
trends point toward a future where recycling will be more important than it is
today. Furthermore, SFT in Norway has been very restrictive about allowing the
disposal of tires in waste sites.
The soldiers are available because Major Øyamo of the Norwegian Army has
offered to do the freight free of charge as part of the training of recruits in
the logistics regiment. The value of this offer is around $60,000. If Ullensaker does
not act now, it may lose this opportunity.
The price the recycler at Moreppen charges depends on whether the tires are
cleaned or not. Clean tires cost $1,180 per metric ton, whereas tires that also need
cleaning cost $1,400 per metric ton. Roughly 125 tires make up a metric ton, and if
we assume that 10 percent of the tires need cleaning, the total cost of recycling the
tires amounts to $965,000.
After the fire in 1995, some soil was seriously polluted, and this needs to be
cleaned up. Deconterra, a local company that cleans soil, charges $600 per metric
ton to clean up the soil. According to Major Øyamo, the burnt area is roughly 25
⫻ 25 ⫻ 1.5 meters, or 1,700 metric tons. The total cleanup cost will therefore be
around $1,020,000.
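The arithmetic behind these two estimates can be checked in a few lines of Python, using the figures just quoted:

    tires = 100_000
    tons = tires / 125                 # roughly 125 tires per metric ton -> 800 t
    recycling_cost = 0.9 * tons * 1_180 + 0.1 * tons * 1_400
    print(round(recycling_cost))       # 961,600, i.e., roughly $965,000

    soil_tons = 1_700                  # burnt area of about 25 x 25 x 1.5 meters
    cleanup_cost = soil_tons * 600     # Deconterra charges $600 per metric ton
    print(cleanup_cost)                # 1,020,000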
LCC Calculation
If we compile the information in the last sections, we get the costs and
savings shown in Table 6.2. We see that the margin is negative in every year, which
means that this alternative is not a good idea from an economic perspective. Even
if we ignore the fact that the Oslo Gardermoen International Airport can be seri-
ously affected by fire, the sum of the annual margins will still be negative. But due
to the principle that it is better to earn a dollar today than one tomorrow, we should
also compute an NPV—or should we?
Traditionally, we would compute an NPV despite the fact that the idea behind
it is to discount the future, an approach that is not compatible with sustainable
development and environmental management ideas. This is particularly true when
the decision makers are politicians and public officials, because they are by default
servants of the people and also future generations. In any case, to compute the
NPV we must first find a discounting factor. As discussed in Chapter 5, several
premises exist for doing so.
When the decision maker is a public organization, it may operate according to
accounting principles that, for example, ignore depreciation; everything is pure
spending. In this light, the time-value of money has no meaning because it is only
the present that counts. The discounting factor should therefore be 0 percent, but
that is not what is commonly done. Traditionally, the discounting factor is set equal
to a market interest rate, as if investing in the market were an alternative option for
the decision maker, which in the public sector it is not. In their project, Carlsen and
Røyset chose this to be 6 percent.
With a discounting factor of 6 percent, we get an NPV of –$23.2 million. In
other words, the economics of doing nothing is highly negative almost regardless
of what discounting factor is chosen. In fact, already in the first year, the annual
margin is negative (see Table 6.2). This is largely due to the risk of shutting down
Oslo Gardermoen International Airport. However, if that risk is ignored, the NPV
will still be negative (–$1.3 million) during the period. Thus, to do nothing about
the tire disposal site is outright uneconomic. Add to that the potential loss of
human lives, degrading the natural environment, and a loss in real-estate value.
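The NPV computation itself is straightforward, as the following Python sketch shows. The stream of annual margins is a placeholder, since Table 6.2 is not reproduced here; with Carlsen and Røyset's data the result is about –$23.2 million:

    def npv(rate, annual_margins):
        # Discount each year's margin back to the present at the given rate.
        return sum(m / (1 + rate) ** t for t, m in enumerate(annual_margins, start=1))

    margins = [-2_000_000] * 20          # hypothetical: -$2 million a year for 20 years
    print(round(npv(0.06, margins)))     # about -22,900,000 with these placeholder margins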
The next step in the analysis is to include uncertainty and perform sensitivity
analyses. Traditionally, the values of the discounting factors and some other cru-
cial parameters are changed systematically, and the changes in the result, NPV in
this case, are recorded manually. Here we have chosen a more efficient solution by
running a Monte Carlo simulation (it took only 103 seconds). The NPV with the
associated uncertainty is shown in Figure 6.1. The choice is clear: Something must
be done. In the worst case, doing nothing can inflict a cost of potentially up to $39
million, but more likely $25.1 million, or Ullensaker can hope for $11.2 million in
costs. In any case, the NPV is very negative. What is interesting to note is that the
deterministic NPV of –$23.2 million is higher than the mean of the Monte Carlo
simulation (–$25.1 million). This indicates that the uncertainty most likely has a
downside. In other words, the uncertainty is generally unfavorable, which makes
it all the more risky not to act.
[Figure 6.1: NPV uncertainty distribution for Alternative 1 (do nothing); the NPV ranges from about –39.0 million to –11.2 million (axis labeled NOK), with a mean of about –25.1 million.]
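The simulation logic is simple to sketch. The Python fragment below draws uncertain inputs, recomputes the NPV for each trial, and collects the distribution; the uncertainty distributions here are purely illustrative, not those of the actual model:

    import random

    def simulate_npv(trials=10_000, years=20):
        results = []
        for _ in range(trials):
            # Draw the uncertain inputs for this trial.
            rate = max(0.04, random.gauss(0.06, 0.01))     # interest rate, floored at 4 percent
            margin = random.triangular(-3e6, -1e6, -2e6)   # uncertain annual margin
            results.append(sum(margin / (1 + rate) ** t for t in range(1, years + 1)))
        return results

    results = simulate_npv()
    print(f"mean NPV: {sum(results) / len(results):,.0f}")
    print(f"range: {min(results):,.0f} to {max(results):,.0f}")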
Alternative 2: Recycling
If the county of Ullensaker chooses to recycle the tires, all the cost and savings
elements are in fact the same as before. They just turn up differently on the bal-
ance, as shown in Table 6.3.
The reason that Table 6.3 is almost like a negative image of Table 6.2 is that the
baseline in Alternative 1 is recycling due to the likelihood of recycling becoming
the most feasible alternative some time in the future. The biggest difference is
therefore that recycling eliminates many potential liabilities and that makes recy-
cling a very economically sound alternative with an estimated NPV of $23.2 mil-
lion. If we think about the uncertainty involved, we can use Figure 6.1 and simply
remove the minus signs.
For Alternative 3, Ullensaker must apply to SFT for a special permit to dispose of
the tires at the Dal Skog waste depot. Such permits are hard to obtain; an earlier
application to dispose of just 1,700 tires was denied. Nonetheless, if such a per-
mit is granted to Ullensaker, Dal Skog will accept all the tires at a cost of $850,000.
This cost, however, is probably too low to reflect the actual costs incurred at
Dal Skog because the cost of disposing of tires is two to three times higher than for dis-
posing of other municipal waste.4 In other words, Dal Skog is subsidizing
Ullensaker because it simply does not charge Ullensaker for what the service costs.
Most likely, Dal Skog itself does not account for the total costs.
Because Dal Skog has a high standard, the environmental costs of disposing of the
tires there will be lower than at their current location. Such sites have an estimated
environmental cost of $0.60 per tire per year.5 This gives an annual cost of $60,000,
which is considerably less than the annual costs of $250,000 that are incurred at
Østli. Hence, from a societal perspective, it makes much more sense to let Dal
Skog handle the tires than Østli. Thus, we get the costs and savings for this alter-
native, as presented in Table 6.4.
We see that the overall picture is quite similar to the recycling scenario. The
biggest difference is that disposing of the tires is less costly for Ullensaker than
recycling them. The NPV of Alternative 3 is therefore slightly better than for
Alternative 2, $23.3 million versus $23.2 million, respectively.
Interestingly, this is a direct consequence of the fact that Dal Skog is pricing its
services too low. If Dal Skog charged $2 million, which would probably better
reflect the true costs, the situation would be turned around. This clearly illustrates
the dangers of subsidies with respect to solving environmental problems. As
pointed out in numerous publications, this is not an exception; it is rather the rule
in the world today. From this we understand that LCC as a concept can hold great
promise in relation to environmental management as well as cost management.
[Figure: Frequency comparison of selected input uncertainty distributions, including environmental costs (about 231,029 to 282,369), accident costs ($5,000 to $20,000), and soil cleanup costs ($990,000 to $1,050,000).]
We model the market interest rate as a normal distribution, but at the same time we are quite sure that the market interest rate
will never drop below 4 percent in Norway. Hence, we chop the distribution off at
4 percent. At the other end of the distribution, we think that it is very unlikely that
the interest rate will pass 12 percent. The last time that happened was in the mid-
1980s, but that was under a more socialistic monetary regime. Given the trend in
Norwegian politics during the last 30 years, it is unlikely that Norway will ever go
back to such regimes. Also, a return to a socialist monetary regime is unlikely in
light of the worldwide globalization process.
In the upper-right corner, we see a triangular uncertainty distribution. We pre-
fer to model the uncertainty as a triangular distribution if we suspect the variable to
be normally distributed but the uncertainty is quite large. When the uncertainty is
quite large, a normal distribution puts too little weight on the tails of the
distribution, and that is not desirable if we are very unsure. Also, triangular distri-
butions handle asymmetry better than normal ones.
But if we are highly uncertain about a variable and have virtually no preference
for even an expected value, the uniform distribution is chosen. We see that in this
case even though the deterministic value of the accident cost is $10,000, we have
no idea concerning the real cost of an accident; we just know that it will probably
lie in the range of $5,000 to $20,000.
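The three modeling choices described above are easy to express in code. Here is a sketch using Python's random module; the 4 and 12 percent cutoffs and the $5,000 to $20,000 accident range come from the text, while the remaining parameters are assumptions:

    import random

    def truncated_normal(mu, sigma, low, high):
        # A normal distribution "chopped off" at the bounds, here by
        # simple rejection sampling.
        while True:
            x = random.gauss(mu, sigma)
            if low <= x <= high:
                return x

    # Market interest rate: normal, chopped at 4 and 12 percent
    # (the mean and standard deviation are assumed here).
    interest_rate = truncated_normal(0.06, 0.02, 0.04, 0.12)

    # Large but roughly bell-shaped uncertainty: triangular (illustrative bounds).
    cleanup_cost = random.triangular(990_000, 1_050_000, 1_020_000)

    # No preference beyond a range: uniform, as for the accident cost.
    accident_cost = random.uniform(5_000, 20_000)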
This way of handling uncertainty is clearly far more realistic than being forced
to come up with a single number or even a range of numbers. But more impor-
tantly, because uncertainty can be handled with such ease, we can concentrate on
finding likely uncertainty distributions instead of chasing the wind after the one
magic number. Thus, we increase the mathematical uncertainty in the model but
reduce the risk of giving bad or wrong decision support.
After deciding how to model the uncertainty, we proceed to the second step in
the Monte Carlo method, namely identifying the forecast variables. The forecast
variables are the variables we want to study; hence, they could be of any sort.
These variables are important to name properly so that after the simulation, it is
easy to find the results for them and interpret them. This is, of course, crucial in
large models with several hundred forecast variables.
The next step is to choose a sampling procedure and how many times the sim-
ulation is going to recalculate the model (trials). In this step, it is important to iden-
tify the capabilities of the computer first. Five years ago, it was often our experience
that the computer had to stop too early (with respect to keeping the random error
low) because the Random Access Memory (RAM) was too small. Today, however,
RAM is so affordable that this is no longer an issue for Monte Carlo simulations
used for LCC purposes. We therefore always choose the best sampling procedure,
which is Latin Hypercube Sampling (LHS) despite the fact that it demands much
more RAM than Simple Random Sampling (SRS). The advantage is that we do not
need as many trials as before to achieve a certain level of confidence, but again,
due to the power of modern computers we always run a high number of trials
(10,000). In fact, the last time we were forced to cut down on the number of trials
due to computer limitations was in 1996 when the case presented in Chapter 7 was
first run. Then a Pentium 120 MHz Laptop computer with 16 MB of RAM and uti-
lizing a SRS Monte Carlo method had to stop at around 4,000 trials.
When the sampling procedure and number of trials have been chosen, the sim-
ulation is run. Depending on the size of the model and the usage of logical tests
and macros, a simulation can take anywhere from 10 seconds to several hours.
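As an illustration of this step, the sketch below draws Latin Hypercube samples with SciPy's quasi-Monte Carlo module and maps them onto two input distributions; the distribution parameters are assumptions, not the case's values:

    from scipy.stats import norm, qmc, uniform

    sampler = qmc.LatinHypercube(d=2)    # two input variables
    u = sampler.random(n=10_000)         # 10,000 stratified trials in [0, 1)^2

    # Map the stratified uniform samples onto the input distributions via
    # the inverse cumulative distribution functions.
    interest_rate = norm(loc=0.06, scale=0.01).ppf(u[:, 0])
    accident_cost = uniform(loc=5_000, scale=15_000).ppf(u[:, 1])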
In Figures 6.1 and 6.2, we see the results of such a simulation. From Figure 6.1,
we see that the uncertainty distribution of Alternative 1 is skewed toward the left,
which means that it is more likely that the NPV will be higher (than the mean) than
lower. This is good news for Ullensaker because it shows that if it acts now, it can
benefit from this likely upside.
From the sensitivity chart in Figure 6.6, we see that the possible elimination of
liability costs associated with fire and its impact on Oslo Gardermoen International
Airport is the primary reason for this upside. This is evident from the fact that if
the fire cost increases, the NPV increases. In plain words, the higher the fire costs,
the happier Ullensaker should be since it has avoided the liabilities (as a conse-
quence of recycling the tires). Also, the way we modeled the market interest rate
plays a significant role in the skewness as well as the overall amount of uncertainty.
The market interest rate is unfortunately beyond Ullensaker’s control. This is a fac-
tor that Ullensaker will just have to live with and adjust to.
Unlike the tornado and spider charts presented earlier, the sensitivity chart in
Figure 6.6 is generated by measuring the statistical response (in this case by the rank
correlation method) of the forecast variable given the uncertainty in all the input vari-
ables (to the left in the chart).
[Figure 6.6: Sensitivity chart for the target forecast Alt. 2 NPV of annual margins, measured by rank correlation.]
It therefore has none of the limitations that the tornado
and spider charts have. There is, however, a caveat for the untrained practitioner.
Since the sensitivity chart is generated using statistical information, random
errors occur, as mentioned earlier. These errors are negligible for variables with a
rank correlation coefficient larger than roughly 0.05 (or –0.05), such as the mar-
ket interest rate, whose rank correlation coefficient is –0.47. Therefore, the fact
that environmental costs turn up in Figure 6.6 might be due to random errors
because the rank correlation coefficient is only 0.03. To find out whether the rank-
ing of environmental costs is real or due to randomness, we must remove the two
dominating variables from the model and rerun it. Then the other variables will
show up as in Figure 6.7.
From Figure 6.7, we identify another random effect we must be aware of when
utilizing statistical approaches: variables that have absolutely nothing to do
with a certain forecast variable might still show up. This happens when a spurious sta-
tistical correlation arises during that particular simulation between the variable
in question and the forecast variable. Luckily, such effects only appear for variables
with a very low rank correlation coefficient (less than 0.05), such as for the disposal
costs of tires in Figure 6.6, and will disappear or remain insignificant (see Figure 6.7)
if we rerun the model once more after removing the most dominant variables. A
dominant variable is a variable with a rank correlation coefficient of more than 0.10.
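The mechanics behind such a sensitivity chart are easy to reproduce: rank-correlate each input variable's samples with the forecast variable's samples and flag the dominant variables. A sketch, assuming SciPy is available (the 0.10 threshold is the one given above):

    from scipy.stats import spearmanr

    def sensitivity_chart(inputs, forecast, dominance_threshold=0.10):
        # inputs: dict mapping variable names to lists of sampled values;
        # forecast: the forecast variable's values from the same trials.
        chart = {name: spearmanr(samples, forecast)[0]
                 for name, samples in inputs.items()}
        dominant = [n for n, rho in chart.items() if abs(rho) > dominance_threshold]
        # Coefficients below roughly 0.05 in magnitude may be random error:
        # remove the dominant variables, rerun the model, and check again.
        return chart, dominant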
Both the aforementioned caveats are due to randomness. However, if you are
aware of the effects randomness may produce, you cannot go wrong using
statistical sensitivity analysis.
[Figure 6.7: Sensitivity chart for the target forecast Alt. 2 NPV of annual margins after removing the dominant variables, measured by rank correlation.]
Just remember to rerun models after eliminating the
dominant variables to check whether the variable that shows up is real or not.
Almost ironically, the same property that produces these random effects, namely
that statistical approaches do not rely upon direct relationships between
input variables and forecast variables, is also what makes statistical approaches the only ones
capable of measuring relations between variables that are loosely coupled, as in
complex systems. That is why statistics play such an important role in many fields;
setting up a system of equations is simply not possible due to the sheer complexity.
This means that our way of performing an uncertainty analysis can handle not only
cause-and-effect relations but also weak relations between multiple variables. This
is something a tornado chart or a spider chart cannot do because they rely upon rela-
tions being modeled as systems of equations. This makes sensitivity charts the per-
fect tools for identifying critical success factors, which in this case primarily
means avoiding fires. Thus, if Ullensaker decides not to act, it should at least make plans for
a rapid response to fires. It cannot rely on the luck it had in 1995.
Table 6.6 Preliminary Østli Tire Disposal Site Decision Activity Hierarchy
Level 1 Level 2 Level 3
(Decision Level) (Consequence Level 1) (Consequence Level 2)
Do nothing A1 Tires may catch fire A11
Accidents may happen A12
The environment degrades A13
Do something A2 Load tires A21
Transport the tires A22
Clean up site A23
Terminate the tires A24 Recycle the tires A241
Dispose tires A242
We first establish the basis, which is to do nothing, and then identify the various decisions. Conceptually,
we can create a decision hierarchy that describes what decisions are involved, but
that would be overkill for this case. Nevertheless, we see that essentially two deci-
sions are involved (in Table 6.6, the corresponding activity hierarchy is presented):
1. The primary decision concerns whether or not Ullensaker should do some-
thing, denoted as D1.
2. We may have to decide how to remove the tires; this decision is denoted as D2.
Unlike an ordinary activity network whose purpose is to present the various
routings and process options in relation to the cost objects, a decision activity net-
work depicts the consequences of a decision in terms of the various routings and
process options that may occur. It is a simple tool for understanding the process
consequences of a decision.
The situation at Østli can be presented as shown in Figure 6.8, given the final activ-
ity definitions in Table 6.5. The activities between D1 and D2 are simply activities
that are necessary for D2 to take place. These activities are therefore non-value added
from the decision maker’s point of view. The same is true for all base case activities.
It is important to realize that activity networks say nothing about the sequenc-
ing between decision nodes. In other words, activity A22 does not have to follow
A21. In this case it does, however. We also see that activities A241 and A242 are
mutually exclusive: only one of them will follow decision D2 because they
are the two possible outcomes of that decision. In other words, all activities that are
downstream of a decision are consequences of that particular decision.
The decisions that incur the highest number of activities are therefore more
risky than those with few activities, in the sense that more things can go wrong.
However, this says nothing about the potential impact. In this case, we clearly see
this difference: although recycling has more activities attached to it, it is
less risky than doing nothing because the potential impact of activity A11 (tires
may catch fire) overshadows all other activities.
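As an illustration, the preliminary hierarchy in Table 6.6 can be represented as a simple nested structure in Python, with the downstream activity count as a rough risk indicator (the representation is mine, not from the case):

    hierarchy = {
        "A1 Do nothing": {
            "A11 Tires may catch fire": {},
            "A12 Accidents may happen": {},
            "A13 The environment degrades": {},
        },
        "A2 Do something": {
            "A21 Load tires": {},
            "A22 Transport the tires": {},
            "A23 Clean up site": {},
            "A24 Terminate the tires": {
                "A241 Recycle the tires": {},    # mutually exclusive
                "A242 Dispose of tires": {},     # outcomes of decision D2
            },
        },
    }

    def count_downstream(node):
        # More downstream activities means more things can go wrong,
        # but it says nothing about the potential impact.
        return sum(1 + count_downstream(child) for child in node.values())

    for decision, consequences in hierarchy.items():
        print(decision, "->", count_downstream(consequences), "activities")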
[Figure 6.8: Decision activity network for the Østli tire disposal site, including activities A141 and A142.]
Step 6
Step 6 is omitted because neither design issues nor activity drivers are used in
this case. We therefore have all the input data we need in order to set up the model.
We just need to model the uncertainty first, which is done next.
In larger models, the uncertainty distributions are usually not presented
because there is too much information to present, but this model is so small that
here the uncertainty distributions can be presented without problems.
In addition to modeling the uncertainty as accurately as possible, we use sym-
metric, bounded, ±10 percent triangular uncertainty distributions for tracing
purposes. We introduce uncertainty into the model in order to trace the actual impor-
tance of the various assumption cells (input variables). This is important with respect
to finding the critical success factors regardless of whether they are uncertain or not.
We can then proceed to the results that are generated during Steps 7 through 9.
First, however, we should evaluate the quality of the modeling and decide whether
we must iterate (Step 10) or not.
We think that the current model captures all the relevant economic aspects of
decisions D1 and D2 and consequently fulfills Ullensaker’s needs. The greatest
weakness of the analysis is the amount of uncertainty surrounding the potential lia-
bilities, but we can’t do anything about that except model the uncertainty as accu-
rately as possible.
The Results
First, we compare the various alternatives to decide which is the best before we
can discuss the best alternative in greater detail. From Figure 6.9, we see that
Alternative 1 is vastly inferior to the two others, which are virtually identical.
The reasons for this can be seen from the sensitivity chart in Figure 6.10. Alternative
1 is simply associated with too many liability risks. It may be that none of these risks
materialize; that is, no fires or unfavorable weather occur, but we do not know.
However, which is better: Alternative 2 or 3? It turns out that Alternative 3 has
a slightly better margin than Alternative 2. However, recall that Ullensaker has to
apply for a special permit from SFT to dispose of the tires at Dal Skog and that
permit is unlikely to be granted. Also, since Ullensaker partially co-owns Dal
Skog, it would be unwise to use a disproportionate amount of capacity in the
disposal site when the potential savings are minor. Alternative 2 therefore appears
to be the overall best alternative.
[Figure 6.9: Overlay chart (frequency comparison) of the margin uncertainty distributions for Alternatives 1, 2, and 3; the margins range from about –$60 million to $60 million.]
[Table: Costs per activity and total savings for each alternative (in $). Alternative 1: 40,000; 1,020,000; 965,000; total savings 2,025,000. Alternatives 2 and 3: 36,208,000; 200,000; 5,000,000; 60,000; total savings 41,468,000 each. The column headings in the original are the activity codes A01, A02, A03, A11, A12, A13, A141, A142, and Savings.]
[Figure 6.10: Sensitivity chart for the target forecast Margin Alternative 1, measured by rank correlation.]
[Figure: Margin uncertainty distribution for Alternative 2; the margin ranges from about $21.8 million to $57.6 million.]
The sensitivity chart for Alternative 2 is essentially a mirror image of Figure
6.10; that is, the rank correlation coefficients that are negative in Figure 6.10 are
positive for Alternative 2 and vice versa. Therefore, the chart is omitted here.
Managing Alternative 2
In order to ensure that Alternative 2 indeed becomes the best option, Ullensaker
must manage its resources well. To assist it in that, we can employ the sensitivity
chart in Figure 6.12.
Unfortunately, Ullensaker can actually influence only one factor, namely the
duration of a fire. Ullensaker can also intensify the surveillance of the Østli dis-
posal site in order to prevent fires from starting until it has recycled all the tires,
thus reducing the probability of occurrence as well as the duration.
Recall from earlier discussions that the reason major costs such as airport shut-
down cost ($/day) show up with positive rank correlation coefficients is that they
are potential liability costs of doing nothing. Hence, if you recycle the tires, you
avoid these potential future liabilities completely, and that has a positive economic
value. The reason this line of argument may seem confusing is that often the base
case is implicitly assumed to be zero; this is, however, a faulty assumption in most
cases, as explained in the “Moving Baseline” section in Chapter 5. Also note that
we now look at the effect of the time frame, that is, the effect that extending or
shortening the time span has on the margins. We see that the longer the time hori-
zon is, the larger the margins. This is evidently sensible because the longer the time
horizon, the more likely it is that a liability will occur.
If we summarize our findings, we see that the greatest benefits to Ullensaker
arise from doing something, not necessarily doing “the right thing.” The point is
that Ullensaker must do everything in its power to prevent fires from occurring,
given the potentially enormous liability costs involved.
[Figure 6.12: Sensitivity chart for the target forecast Margin Alternative 2, measured by rank correlation.]
DISCUSSION
The discussion of this case has four main purposes. The primary purpose is sim-
ply to illustrate how LCC can aid decision making concerning issues in the public
sector. The public sector expands as societies become more affluent, because “as
countries grow richer, they want to spend more of their incomes on it (social
spending), and can afford to.”9 It therefore becomes increasingly dif-
ficult to manage. Yet the tools at its disposal were designed for times
when the public sector consisted only of the bare bones: defense, police, schools,
hospitals, and so forth. But what is the cost of a new school today? Politicians and
other decision makers today typically discuss in great depth the construction costs
but give downstream costs little, if any, notice. The airtight walls between the var-
ious budget posts further reinforce such thinking; the construction costs would be
in one budget post while maintenance would be in another post, not to mention the
costs of wages that would probably be in a third post. The result is that decision
makers miss the total costs and nobody is responsible for them. It is the perfect
recipe for wasteful spending or, perhaps more often, for misplaced spending or
cost cutting.
A perfect example of this is the Norwegian nursing home system. Because many
counties have poor finances, they try to save money where they can, and the
nursing homes must take their share. I was fortunate enough to help one nurs-
ing home become more competitive as the city of Ålesund opened up for pri-
vate nursing home initiatives. The nursing home had a process cost of $24.7
million in addition to direct costs related to food, medicine, and so on that
amounted to roughly $3 million. In another city, they tried to save money by no
longer feeding the elderly hot dinners. But for the nursing home that participated
in the project in Ålesund, more than $1.2 million (5 percent of the total process
costs) was spent on just making reports and in total 17 percent ($4.2 million) was
overhead costs. However, according to the General Ledger, overhead costs should
be only around $1.5 million. Thus, significant hidden overhead costs exist, even in
such a small organization. In addition, all the meals combined only cost $900,000.
Thus, no longer serving hot dinners in order to save costs is bad management, to
put it mildly. It also illustrates the danger of focusing on direct costs.
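A quick check of the figures quoted above makes the point stark:

    process_cost = 24_700_000
    overhead = 0.17 * process_cost        # about $4.2 million of total overhead
    ledger_overhead = 1_500_000           # overhead visible in the General Ledger
    hidden_overhead = overhead - ledger_overhead
    meals = 900_000
    print(round(hidden_overhead))         # roughly 2,700,000 of hidden overhead
    print(hidden_overhead / meals)        # about 3: the hidden overhead dwarfs the meals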
The question, of course, is why such things take place so often in the public sec-
tor. A part of the answer is simple: Decision makers are aided by accounting
schemes where labor, food, and electricity are typical accounts, but no accounts
exist for either the processes or for the services provided. Furthermore, no costing
system is utilized beyond the General Ledger. In other words, no reliable product
cost estimates are made; even the old volume-based costing systems would rep-
resent a massive improvement over today’s situation.
Hence, when costs are cut, the largest costs are axed first with no concern for
the product or processes. As in the previous example, food is a separate account
and therefore easy to spot, whereas no process cost estimates were made (until I
made them). The process costs are simply hidden, particularly the labor costs that
dominate the cost structure. With no costing system, such costs are completely
invisible to the manager of the nursing home. It is, in other words, very difficult
to improve the process efficiencies beyond reducing the inputs, and it is not feasi-
ble to cut labor costs. The logical conclusion is to try cutting somewhere else, but
this yields such bizarre results as trying to save costs by no longer serving hot
meals. Luckily, LCC ignores the artificial boundaries made by organizations. For
Activity-Based LCC, only actual, total costs count and the boundaries are set by
what actually goes on (the process), regardless of what department does what. This
is exactly what the public sector needs to produce a holistic picture.
A secondary purpose of this case study is to show how uncertainty analysis can
be carried out as discussed in this book. Also, we compare it to more traditional
ways of handling uncertainty. In the literature, many talk about handling uncer-
tainty, but very few actually do it, and in the industry even fewer do it. This may
be because the traditional ways of managing risk and uncertainty are so sim-
plistic that decision makers do not trust them. Another explanation might be that
these methods are too laborious to produce reliable decision support.
The fact that the traditional methods are very simplistic is beyond doubt since
they rest upon a very dangerous assumption, namely that “everything else remains
constant,” or ceteris paribus in Latin. This is an assumption that is virtually never
true. The only question is to what extent it is faulty, but that is exactly the type of
question that makes a decision maker distrust the approach because he or she has
spent money on it but is left with a major question that is unresolved. Resolving
this question adequately using traditional approaches is very laborious because we
must try out many different scenarios manually, and if the model is large with sev-
eral hundred variables, this task is daunting.
A totally different approach is to use the Monte Carlo methods. They are very
simple yet sample the entire solution space without a ceteris paribus constraint. So,
instead of wasting time on trying to find one single variable’s impact, we just vary
all the variables, store the results, and perform a statistical analysis in the end. We
perform in many ways a virtual experiment in a virtual world described by the
model and then perform an analysis in the end. Basically, the computer picks num-
bers in the input variables and stores the results of the corresponding forecast
variables. It is apparently a crude method, which is why it was, and still is by many,
considered a “stupid” approach. However, it is, in our opinion, an ingenious method
because it is simple, efficient, reliable, and highly effective. Or as it was said in a
Chinese fortune cookie: “Simplicity of character is the result of profound thought.”
The efficiency of the Monte Carlo methods comes partially from the fact that
they are simple, but also from the fact that they can so easily be implemented in
computers. We simply model the problem as accurately as possible and crank it
through a Monte Carlo simulation; it does not matter much whether the number of
variables in the model is 100 or 1,000. Afterward we can spend time on under-
standing and implementing the results. We do not have to waste time on thinking
out smart ways of simplifying problems, solving complicated mathematics, or won-
dering about whether our model is valid or not. Because Monte Carlo methods are
reliable, we can concentrate on solving the actual problem and not on solving a
fancy, but possibly deceptive, mathematical formulation of the actual problem.
Because of the comprehensiveness (many variables, many trials, little need for
simplifying the problem, and so on) of Monte Carlo methods, they are effective as
well. Monte Carlo methods simply enable us to spend time on making the deci-
sion because the mathematics and problem formulation parts are solved so
straightforwardly and with a minimum of assumptions. In this respect, this case
was not an ideal one because we could not contrast the power of Monte Carlo
methods to the traditional approaches. In many ways, the case was so simple that
even the traditional approaches were feasible. That is, however, the point of using
this case to illustrate the ways uncertainty can be handled: It is so simple, so easy
to follow, that we can show both the traditional approaches and the Monte Carlo
methods and discuss all approaches in relation to the case. Now you just have to
imagine a more complex case with maybe 100 or 1,000 variables and then ask
yourself how you would handle the uncertainty. We all would agree that the way
to go is with the Monte Carlo methods.
The third purpose of this case study is to illustrate two important aspects of
Activity-Based LCC: the process orientation and the use of drivers. The traditional
approaches lack a systematic process orientation and drivers. They do not incor-
porate any explicit way of securing process orientation, and drivers are usually for-
gotten. The traditional approaches are basically too cost focused, and the drivers
behind costs are largely ignored. That is not easy to see in this case, but it can be
seen, for example, from the fact that the traditional approach does not explicitly
consider the duration of a fire. It rather focuses on the cost of a certain time period.
Activity-Based LCC, however, focuses on the duration of the fire and the unit price
of the duration, that is, the cost of one day. Then, by multiplying the duration by
the unit price of the duration, the cost is found. Activity-Based LCC is in other
words a more systematic and general approach that decomposes complex cost
structures into relevant elements (drivers, resources, cost objects, and activities)
that are interrelated in a web of cause and effect.
The process orientation is obvious in Activity-Based LCC because we are
forced to define the various decisions and processes or activities that may take
place. By doing this, we discovered, for instance, that two decisions are to be
made: one concerning whether to do something or not, and one concerning what to do if
we choose to do something. This is useful because it divides up a bigger problem
into smaller, more easily graspable problems.
In a large and complex case, these seemingly academic differences have a huge
impact on the outcomes of an analysis, but that will be discussed later in Chapter
9. Here it suffices to acknowledge that these differences exist and that they have
had some impact even in this little, simple case.
The fourth, and last, purpose of this case study is to illustrate how LCC can be
used purely in providing decision support. In the rest of the book, however, the
LCC models are mostly used for cost management purposes. The difference is that
LCC models for decision making are attuned toward a specific purpose: providing
decision support for a specific decision or a set of decisions; however, managerial
LCC models have a broader view, a distinction that will become clearer in later
chapters.
CLOSURE
Things are not always what they seem, particularly if your cost information is irrel-
evant or highly distorted for the issue under investigation. For the county of
Ullensaker, it may seem tempting to do nothing about the tire disposal site at Østli.
After all, no costs result from having it there, according to the county’s account-
ing system. Removing the tires will therefore be an act of benevolence or, at worst,
a response to threats from the local population, industry, and others.
Luckily, LCC uncovers a more interesting and challenging truth. The fact is
that although no costs exist in the books associated with the disposal site, hidden,
potentially large costs can be unleashed by a little child, a small flame, or simply
bad luck. These costs are liability costs, imaginary today, but maybe very real
tomorrow. After all, parents may sue Ullensaker for not securing a well-known
hazardous area close to residential areas. Airline companies may demand com-
pensation from Ullensaker in case of fire, arguing that Ullensaker should have pre-
vented fires at a disposal site so close to Oslo Gardermoen International
Airport and should have acted accordingly. The local population
may hold Ullensaker responsible for environmental degradation. All such poten-
tial costs, and more, are simply not included in ordinary accounting practices, but
they lie at the very center of LCC and in this case they make all the difference. The
LCC analysis in this case, regardless of approach, discloses that doing nothing is
outright risky and very uneconomic. As John F. Kennedy said:
There are risks and costs to a program of action, but they are far less than
the long-range risks and costs of comfortable inaction.
The analysis of what should be done about the tires is less clear: The choice is
between recycling and disposal. Disposal is slightly more profitable, but that may
also be due to the apparently erroneous underpricing of disposing of the tires, as
discussed earlier. The difference in the margin of the two options is consequently
minuscule and probably lies within the range of the inherent uncertainty of the mod-
eling. Therefore, the decision of what to do about the tires will probably be deter-
mined according to considerations other than economic ones. Because
disposing of the tires requires special permits that are hard to obtain, and disposal
also wastes the rubber, recycling appears to be the better choice.
EPILOGUE
After Carlsen and Røyset made the county of Ullensaker fully aware of the pend-
ing legal, environmental, and liability issues it faced, Ullensaker acted quite
swiftly. Both LCC models clearly pictured the potential economic consequences
of further delay in the matter. The new and creative environmental consultant at
Ullensaker, Mari Kristel Gederaas, subsequently gained support and the tires were
removed right before snowfall in November and December of 2001.
RagnSells AS, the company now responsible for operating the tire return sys-
tem, took on the job for about $250,000. The low fee was attached to the condition
that Ullensaker would find a use for the shredded tires within the county within one
year. Currently, there are some pending issues concerning the fact that it is illegal
to dispose of shredded tires, and the rules and regulations concerning potential
applications of shredded tires are far from clear. Once these issues are resolved, the
disposed tires at Østli will finally be relegated to history.
The case illustrates that proper decision support can be a catalyst for effective
decision making because it is much easier to act on facts than to rely on anecdotal
guesstimates or worse.
NOTES
1. I would like to thank Randi Carlsen for providing this case study and her valuable
contributions to it. For a complete overview of the project, see R. Carlsen and J.A.
Røyset, “Dekkdeponiet På Østli: Uønsket Nabo” (The Tire Disposal Site at Østli:
Unwanted Neighbor), Oslo University, Oslo, 2000, p. 78. The report is written in
Norwegian.
2. The Norwegian Pollution Control Authority (SFT) is a directorate under the Ministry
of the Environment and was established in 1974. Its main goal is to promote sus-
tainable development. It is the Norwegian equivalent of the U.S. Environmental
Protection Agency (EPA).
3. For more details, see Tellus Institute’s “Disposal Cost Fee Study: Final Report,”
Boston, MA, 1991.
4. According to a study by Mepex Consult.
5. According to the aforementioned report by the Tellus Institute.
6. This is a common condition in all traditional approaches and it implies that “all else
remains the same or constant” except the single variable whose value is being
changed.
7. R.A. Lutz, “Lutz’s Laws: A Primer for the Business Side of Engineering.” Atlanta,
GA: George W. Woodruff Annual Distinguished Lecture. Georgia Institute of
Technology, 1998.
8. The details of price quotes and so on are found in the report (see note 1).
9. “Globalisation and Its Critics: A Survey of Globalisation.” The Economist, 2001,
p. 34.
7
ACTIVITY-BASED LIFE-CYCLE
COSTING FOR PLATFORM
SUPPLY VESSELS1
If you need a machine and don’t buy it, you pay for it without getting it.
Henry Ford
As organizations become increasingly aware of both environmental costs and cus-
tomer service costs, life-cycle costs will become more and more important to assess,
predict, and trace. This growing concern led to a project between Møre Research
and Farstad Shipping ASA (Farstad for short) in which the operation of a Platform
Supply Vessel (PSV) that operates in the North Sea was studied. In this project, the
economic and environmental aspects of the PSV were analyzed. We also looked at
the choice of propulsion machinery and support systems. The choice of fuel is par-
ticularly important due to the tradeoffs between low fuel costs and high mainte-
nance costs. In this chapter, only the costing part is discussed.
Note that all the input information is from 1995 and is probably not accurate
anymore, so the results are outdated. The purpose of this case study is therefore to
illustrate a complete Activity-Based Life-Cycle Costing (LCC) implementation for
a large and complicated system (the PSV). This case is, however, quite simple in
terms of, for example, overhead cost allocation and product-mix complexity
because it covers only one product.
Before we discuss the LCC modeling and its results, we must first understand
a little bit more about operating a PSV, which is the next issue.
A further complicating factor is that the contracts are of different lengths. Due
to the tight margins in the industry, only a high degree of utilization of their ves-
sels translates into a high profitability for the shipowners. The most sought after
contracts are therefore the long ones (five years or so) because they offer less
uncertainty and revenue risk. The short-term contracts, or spot contracts, are more
profitable, but it is difficult to secure a high-enough degree of utilization. Spot con-
tracts are therefore the last resort.
Even with stable revenues, significant problems related to the maintenance,
service, and repair costs exist. The maintenance, service, and repair activities are
defined as:
● Maintenance All maintenance activities that can be done while the vessel
is in service, or operating as planned.
● Service Planned maintenance activities that require the vessel to be out of
service. This happens every time the vessel is docking. How often the vessel
docks depends on the policy of Farstad, but the vessel has to be docked
at least once every five years to renew its classification with Det Norske
Veritas (DNV).
● Repair Unplanned service.
The aforementioned challenges usually translate into a few crucial questions,
such as:
● How can the amount of off-hire be reduced? Off-hire occurs whenever the ves-
sel is incapable of fulfilling the contract and it results in a direct revenue loss.
In the contract between the shipowners and the charter group, off-hire condi-
tions are specified in detail. Planned services on dock (kept within the lay
days) are not considered off-hire. The lay days are the number of days speci-
fied in the contract between the shipowners and charter group when necessary
repair, service, and maintenance can be done without any revenue loss.
● How can the life-span costs be reduced?
● How can profitable contracts be acquired?
Depending on the contract, off-hire will occur in different situations. For FAR
Scandia, which is the PSV followed in this case, the contract states that:
● Planned service on dock is not considered off-hire.
● Unplanned repairs are considered off-hire.
● The shipowners are given one lay day per month, but the maximum annual
aggregated number of lay days is set to be six. Hence, the annual service time
that exceeds six days is considered off-hire, as illustrated in the sketch below.
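A minimal Python sketch of this off-hire logic, under the contract terms just listed (the day rate is a hypothetical figure, not from the case):

    def off_hire_days(planned_service_days, repair_days, max_lay_days=6):
        # Planned service counts against the lay days; anything beyond the
        # annual maximum, plus all unplanned repairs, is off-hire.
        excess_service = max(0, planned_service_days - max_lay_days)
        return excess_service + repair_days

    def revenue_loss(planned_service_days, repair_days, day_rate=10_000):
        return off_hire_days(planned_service_days, repair_days) * day_rate

    print(off_hire_days(8, 3))     # 2 excess service days + 3 repair days = 5
    print(revenue_loss(8, 3))      # 5 days at $10,000 a day = $50,000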
Because of this situation, it is important for shipowners to project the costs as
reliably as possible so that they know what their margins are in bidding situa-
tions. The life-span costs (the total costs of the operational phase) must there-
fore be predicted many years ahead before negotiating with an oil company or a
charter group, or before new ships are designed. During contract negotiations,
the primary issue is therefore what price the shipowner should offer the charter
group, which usually pays the fuel costs, particularly on longer contracts. This
is beneficial for the shipowner because it eliminates the substantial risk of fuel
price fluctuations.
When new ships are designed and built, however, the issue is far more complex.
Typically, the shipowner would only order a new ship if it had a solid first contract.
The problem is that these contracts may last only up to 5 years, but the life span
of a PSV is about 20 years before it is sold. Thus, 15 years are completely uncer-
tain. During these 15 years, the PSV may change from being used in a time char-
ter to a line charter or vice versa.
A time-charter contract is a contract under which the shipowner puts the PSV at
the charter group’s full-time disposal for a certain period of time, whereas in a line
charter, the PSV performs a certain transportation job for a charter group from point
A to point B. The difference between these two contract types is that a time-char-
ter contract typically is longer, involves less activity for the main machinery because
much of the time is usually spent waiting either at the oil platform or onshore (see
Table 7.5), and the fuel costs are paid by the charter group. This means that the fuel
consumption is relatively lower than on a line-charter job, but the shipowner must
carefully manage the maintenance costs to avoid off-hire and too high costs that
would erode margins. The shipowner will therefore typically prefer Marine Gas Oil
(MGO), which is a fuel that keeps maintenance to a minimum.
In a line-charter job, on the other hand, the fuel consumption is high, the fuel
costs are incorporated in the bid, and the shipowner manages the maintenance
costs (as always). In this situation, it is advantageous for the shipowner to have
machinery that balances fuel costs and maintenance costs. Typically, heavy fuel
oil (HFO) is dirt cheap but causes many maintenance costs because such fuel is
very thick, almost like asphalt. Hence, HFO is only profitable when the fuel con-
sumption is high, such as on a line-charter job. In this light, it is clear that the
machinery installed in a new PSV can greatly alter the profitability depending on
the contracts the shipowner may acquire in the future.
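The tradeoff can be illustrated with a simple break-even sketch in Python; all fuel prices, consumption levels, and maintenance costs below are hypothetical, not Farstad's figures:

    def annual_cost(fuel_price_per_ton, consumption_tons, maintenance):
        return fuel_price_per_ton * consumption_tons + maintenance

    for consumption in (500, 1_500, 3_000):              # tons of fuel per year
        mgo = annual_cost(600, consumption, maintenance=300_000)
        hfo = annual_cost(300, consumption, maintenance=900_000)
        print(f"{consumption} t/year: MGO {mgo:,}, HFO {hfo:,}"
              f" -> {'HFO' if hfo < mgo else 'MGO'} is cheaper")

With these made-up numbers, HFO becomes the cheaper choice only above roughly 2,000 tons of fuel a year, mirroring the point that HFO pays off only when consumption is high.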
In the project, the use of tributyltin (TBT) Self-Polishing Copolymer antifoul-
ing paint on the hull was also discussed. This paint has very positive cost effects.
According to studies,2 the savings compared to other painting types are “at the very
least £700 million annually (or roughly $1.1 billion) for the Deep Sea fleet due to
lesser fuel costs, lower repair costs, and less time in dock.” Unfortunately, TBT
causes environmental problems, such as imposex of the dogwhelk snail, Nucella
lapillus (female snails develop a penis), and reduced growth and enhanced mor-
tality of larvae of bivalves.3 The costs of such environmental problems are unfor-
tunately impossible to estimate because they affect animals that have no economic
value.
[Figure: The UT 705 Platform Supply Vessel and its system breakdown (e.g., SFI Group 72, Cooling System).]
Before describing how the Activity-Based LCC model was built, the informa-
tion sources are discussed.
INFORMATION SOURCES
Many types of information are needed to design a reliable Activity-Based LCC
model for Farstad. Table 7.1 presents an overview of the information sources and
their contributions. This gives an idea of the comprehensive amount of informa-
tion that must be gathered for this costing model.
Source: A.M. Fet, J. Emblemsvåg, and J.T. Johannesen, Environmental Impacts and Activity
Based Costing during Operation of a Platform Supply Vessel. Ålesund, Norway: Møreforsking,
1996.
Normally, activity costs would then be distributed among the cost objects pro-
portionately. In this case, however, only one cost object is used and the step is
therefore omitted.
The activity hierarchy covers all the activities the
shipowner needs to provide the PSV service. The difference between A2 and A3
is that Activity A2 is according to the contract, whereas A3 is extraordinary and
hence results in off-hire and therefore loss of revenues for Farstad.
Also, the activities have four different levels. For example, the Repair activity
(Activity A3) consists of five level 2 activities: Tow Ship (A31), Quay Ship (A32),
Dock Ship (A33), Repair Machinery (A34), and Repair Propellers (A35). The five
level-2 Repair activities are far from the only unwanted incidents that may occur,
but according to historical data, these unwanted incidents occur relatively fre-
quently compared to other incidents like fire, collision, and war. The system
boundary of the project also limits which incidents to consider.
In Table 7.4, the information used for the Repair activities is shown. Note that
the occurrences are projected for a 10-year period.
After the hierarchy is made, an activity network is designed as shown in Figure
7.2. The circular nodes are activities, while the diamond-shaped nodes are design-
decision nodes.
[Figure 7.2: Activity network; the decision node A represents the choice of fuel (MGO versus HFO) and links activities such as A111–A115, A131–A1425, A232, and A24.]
The letter in a decision node represents a specific design deci-
sion. In the activity network, we use the lowest-level activities from the activity
hierarchy in Table 7.3, that is, the shaded cells.
[Figure 7.3: Modeling the A1416 Job 0.61 resource driver for components 601.1–2.01.]
● Maintenance hours The number of maintenance man-hours is the resource driver the overhead is distributed by, but it will not be
significantly affected by any design changes.
● Fuel consumption This resource driver keeps track of the fuel costs for the
vessel, is highly affected by different fuel types, and is consequently a design-
dependent driver.
Note that since only one cost object exists in this model, the UT 705 PSV, activ-
ity drivers are not needed.
User-defined assumptions are uncertainty distributions that are needed to model the
uncertainty, and they can be changed whenever the user wants to. In this model,
many user-defined assumptions5 are made, but only a small sample is presented
(see Figure 7.3 and Table 7.5) for brevity.
The inherent assumptions are made by the designer of the model (that is, me)
to simplify the modeling based on the user preferences and budget. These assump-
tions are therefore embodied in the framework of the model and thus cannot be
easily changed. In this model, only a few of these assumptions are made:
● The historical data are used as a good guideline for the future development.
That is, we assume the future will proceed similarly to the past, a common
assumption in all forecasting, as discussed in Chapter 1.
● Real revenues and costs are assumed constant. In other words, inflation is not
an issue.
● The maintenance program for the vessels is followed accurately so that the
jobs in the maintenance program are done as listed. According to technical
manager Jan H. Farstad, these programs are followed meticulously. The
model nonetheless allows ±10 percent variability.
● The technical condition of the machinery and hull remains adequate as long
as maintenance and service programs are followed. Thus, costs other than
depreciation due to aging will not exist. Since only the first 10 years of the life
span were considered, this assumption has very little influence on the results.
For vessels older than 20 years, the situation may be significantly different.
With these assumptions and design decision A embodied in the model, the
Monte Carlo simulations took place using the commercially available software
Crystal Ball®, which works as an add-in to Microsoft Excel. A high number of tri-
als are used in the simulations to reduce the random error to a minimum.
See the section on reviewing the results later in this chapter for the actual results. Since this
was a pure costing exercise, no additional performance measures are done, except
risk measures, which are discussed in the “Identifying the Major Operational
Risks” section at the end of this chapter.
Internal Validity. The most basic element in checking the internal validity of the
model is to evaluate its logic. In other words, has the model been built using all
External Validity. The external validity of the model can best be investigated by
showing the model to technical managers Farstad and Nygaarden because they
know their business very well, have extensive experience both technically and eco-
[Figure 7.4 Year 7 maintenance cost uncertainty distribution using MGO fuel
(probability/frequency versus 1,000 USD, roughly $450,000 to $950,000).]
[Figure 7.5 Year 7 maintenance cost sensitivity chart using MGO fuel (measured
by rank correlation).]
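The sensitivity charts in this chapter are all measured by rank correlation, that is, the Spearman correlation between each assumption's sampled values and the forecast across all trials. A minimal sketch in plain Python (ignoring ties for brevity):

```python
def rank(values):
    """Return 1-based ranks for a list of numbers (ties broken by order)."""
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0] * len(values)
    for r, idx in enumerate(order, start=1):
        ranks[idx] = r
    return ranks

def rank_correlation(xs, ys):
    """Spearman rank correlation: the Pearson correlation of the ranks."""
    rx, ry = rank(xs), rank(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# For each assumption, correlate its sampled values against the forecast
# across all Monte Carlo trials; sort by |coefficient| to build the chart.
```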
[Figure 7.6 Year 7 maintenance cost uncertainty distribution using MGO fuel
(probability/frequency versus 1,000 USD, roughly $760,000 to $840,000).]
nomically, and have been involved in the project from start to finish. We met and
presented the model and its results to them and they asked questions. They were
very pleased with the model because they found it logical and sound, and it matched
well with their experience. Thus, we had every reason to believe that the model
captures the real issues well. After checking the model for internal and external
validity, the next step is to predict future costs and revenues.
[Figure 7.7 Trend chart: UT705 MGO main component cost development, Year 1
through Year 7 (unit is $1,000; certainty bands from 5 to 100 percent).]
Figure 7.8 Profitability for the vessel the first 10 years (unit is $1,000).
The maintenance costs for the main components fluctuate from approximately $400,000 to
over $800,000 per year. If we look at the 70 percent confidence interval, for
example, we see a certain periodicity: every third year is the most costly
from a maintenance point of view. This is not surprising, since the machinery has cer-
tain maintenance and service intervals that involve more work than others. Again,
this seems to indicate that the model is sound.
However, the trend chart in Figure 7.8 shows that these fluctuations have little
effect on the overall profitability of the PSV. The shift in profitability from year 4
to year 5 is due to the refinancing of the Farstad fleet in 1995.
Of course, predicting the future is nice, but the real strength of Activity-Based
LCC does not, in fact, lie in superior prediction and uncertainty analysis capabilities.
It lies in the process orientation and use of drivers because that enables us to find
the future critical success factors. When we have identified the critical success fac-
tors, it is much easier to work toward future goals because now we know what mat-
ters most.
In my opinion, too much effort is wasted every year in most organizations on
chasing “how much” and “how to do it” when they really should be concerned
about “what to do” and “why.” It is as Drucker says:8
Not in a very long time—not, perhaps, since the late 1940s or early 1950s—
have there been as many new major management techniques as there are
today: downsizing, outsourcing, total quality management, economic value
analysis, benchmarking, reengineering. Each is a powerful tool. But, with
the exceptions of outsourcing and reengineering, these tools are designed
primarily to do differently what is already being done. They are “how to do”
tools.
Yet “what to do” is increasingly becoming the central challenge facing
managements. Especially those of big companies that have enjoyed long-
term success.
Although activity-based techniques in general are mainly “how to do it” tools,
they also incorporate a “what to do” aspect in that they direct attention toward what
drives the business. Activity-Based LCC directs attention even better than standard
ABC due to improved forecasting and simulation capabilities and tracing mecha-
nisms. The critical success factors for Farstad are discussed next.
[Figure 7.9 Sensitivity chart, target forecast: aggregated profit (measured by
rank correlation).]
For the shipowner, however, the situation is slightly different, because the char-
ter group provides the fuel. The difference between the vessel’s profitability and
the shipowner profitability is roughly $3.5 million, or the annual fuel costs. It
should be noted that not all the costs are included due to the system boundaries, as
was discussed in the “Problem Statement and System Boundaries” section.
So what determines the profitability for the shipowner? For Farstad, the
profitability sensitivity chart is shown in Figure 7.10. Compared to Figure 7.9, we
see that the main drivers are more or less the same, except that the fuel-related driv-
ers have dropped out in Figure 7.10.
However, the sensitivity chart in Figure 7.10 may be unreliable, except for the
top three contributors: how can anything fuel related affect the profitability
when the fuel cost is not included in the shipowner’s profitability? The reason for
the unreliable identification of the small critical success factors in Figure 7.10 is
that the top three are very dominant, especially daily revenue, whose correlation
coefficient is 0.94. To eliminate this problem, all the large critical success factors
already identified are eliminated:
● Daily revenue.
● Interest rate.
● Crew costs.
[Figure 7.10 Sensitivity chart, target forecast: shipowner’s aggregated profit
(measured by rank correlation). Daily Revenue (NOK/day) .94; Annual Crew Costs
(NOK/yr) -.28; An. Inv. Overhead first 5 yr (NOK/yr) -.14; Reimbursement (NOK/yr)
.03; Other Annual Overhead (NOK/yr) -.03; Hull/Machinery Insurance (NOK/yr) -.03;
Interest Rate (%) -.03; Annual Running Hours (h/yr) -.03; Unload Fuel Consumption
(tonn/day) -.02.]
● Fuel-related drivers (if the shipowner pays the fuel as in line chartering).
● Annual running hours.
● All the different maintenance and service intervals. These do not affect the
total costs for the 10-year period significantly, but they highly affect the peri-
odicity of the costs, as mentioned earlier.
● Reimbursement for the usage of Norwegian seamen.
● Insurance.
The model is run once more and the results are presented in Figure 7.11. Now
the less important critical success factors can be identified.
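In code, this eliminate-and-rerun step might look as follows. Everything named here (the distributions, profit_model, and the rank_correlation helper from the earlier sketch) is an illustrative placeholder, not Farstad's actual model:

```python
import random

# Illustrative assumptions only; the real model has 501 assumption cells.
MEAN_VALUES = {"daily_revenue": 80_000, "crew_costs": 25_000,
               "interest_rate": 0.07, "paint_layers": 3}

def draw_assumptions() -> dict:
    """One fresh draw per assumption (placeholder distributions)."""
    return {
        "daily_revenue": random.triangular(60_000, 95_000, 80_000),
        "crew_costs":    random.normalvariate(25_000, 2_000),
        "interest_rate": random.uniform(0.05, 0.09),
        "paint_layers":  random.choice([2, 3, 4]),
    }

def profit_model(a: dict) -> float:
    """Placeholder for the real Activity-Based LCC profit model."""
    return (a["daily_revenue"] - a["crew_costs"]
            - a["interest_rate"] * 500_000 - a["paint_layers"] * 1_000)

def run_sensitivity(frozen: set) -> dict:
    """Freeze dominant assumptions at their mean so they stop masking the
    smaller critical success factors; rank-correlate the remaining ones."""
    samples = {name: [] for name in MEAN_VALUES}
    forecasts = []
    for _ in range(10_000):
        a = draw_assumptions()
        for name in frozen:
            a[name] = MEAN_VALUES[name]   # eliminated factor: no variation
        forecasts.append(profit_model(a))
        for name, value in a.items():
            samples[name].append(value)
    # rank_correlation as sketched after Figure 7.5
    return {name: rank_correlation(vals, forecasts)
            for name, vals in samples.items() if name not in frozen}

# First pass: run_sensitivity(set()) shows daily revenue dominating (~0.94).
# Second pass: run_sensitivity({"daily_revenue", ...}) exposes the rest.
```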
Table 7.6 presents the 20 expected largest cost and revenue factors to the ship-
owner’s profitability.9 The table starts with the expected largest. Note that:
● The ordering of the critical success factors may be slightly wrong, and there
may be other factors that should have been listed instead of some of the con-
tributors listed at the end of the table due to random effects in the Monte
Carlo simulation. Nevertheless, the listed critical success factors are always
important.
● All the cost factors from the A3 Repair activity are excluded due to the large
amount of inherent uncertainty in those activities. Farstad simply cannot
manage a vessel to avoid, say, a collision. Farstad can only launch some pro-
grams that can reduce the risk of a collision.
[Figure 7.11 Sensitivity chart for the shipowner’s aggregated life-span profitability
after eliminating the largest factors (measured by rank correlation). Exchange Rate
USD/NOK -.70; Serviced Tank Area (m2) -.32; 601.1-2.10 A1418 Job 0.50 Comp. -.31;
Lay days pr. year (day/yr) .27; Serviced Bottom/Hull Area (m2) -.23; 601.1-2.25
A1416 Comp. (NOK) -.19; 601.1-2.70 A1418 Comp. (NOK) -.13; A221 Hours (h) -.11;
Number of Paint Layers -.11; 634.33.1-2 A232 Comp. (NOK) -.08; A233 Hours (h) -.08;
Intersmooth HISOL 900 (NOK/l) -.08; 601.1-2.05 A1416 Comp. (NOK) -.07; 601.1-2.01
A1416 Comp. (NOK) -.07; Consumed Paint (Partial Service) (l) -.07.]
Table 7.6 Twenty Most Critical Success Factors for the Shipowner’s
Aggregated Life-Span Profitability

Ranking | Cost/Revenue Contributors | Type
1 | Daily revenue | Revenue
2 | Capital cost | Cost
3 | Crew cost (including reimbursement) | Cost
4 | Annual running hours | Cost
5 | Other annual overhead costs | Cost
6 | Insurance | Cost
7 | Different maintenance and docking intervals | Cost
8 | Classification (exchange rate USD/NOK) | Cost
9 | Serviced tank area | Cost
10 | 601.1-2.10 A1418 Job 0.50 component cost | Cost
11 | Lay days per year | Revenue
12 | Serviced bottom/hull area | Cost
13 | 601.1-2.25 A1416 component cost | Cost
14 | 601.1-2.70 A1418 component cost | Cost
15 | A221 hours | Cost
16 | Number of paint layers | Cost
17 | 634.33.1-2 A232 component cost | Cost
18 | A233 hours | Cost
19 | Intersmooth HISOL 900 | Cost
20 | 601.1-2.05 A1416 component cost | Cost
Only two revenue factors, daily revenue and the number of annual lay days, can
be identified from Table 7.6. During negotiations with a possible charter group, the
different daily revenue and the number of annual lay days options can be simu-
lated using the model presented here. Farstad can then, aided by a trend chart as
in Figure 7.8, determine which option is preferable, or it can determine before
negotiations how low the daily revenue can go and the fewest lay days it can
agree upon. In both cases, part of the strength of the model presented here, com-
pared to ordinary LCC models, is that future uncertainties can be modeled real-
istically and taken into account during the negotiations.
Table 7.6 can furthermore be used to identify potential areas of saving. After study-
ing the model and identifying the critical success factors, the effect of using dif-
ferent fuel, MGO versus IF 40 HFO, is investigated.
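A sketch of how such a savings forecast can be set up: both fuel alternatives are evaluated on the same drawn assumptions in each trial, so uncertainty common to both (running hours, for instance) partially cancels, and the savings distribution reflects the genuine fuel-related differences. The operating_cost function and all numbers are placeholders for the real model:

```python
import random

def draw_assumptions() -> dict:
    """Shared operating assumptions (illustrative distributions)."""
    return {"running_hours": random.uniform(5_500, 6_500),      # h/yr
            "time_full_speed": random.uniform(0.3, 0.6),        # share
            "mgo_price": random.normalvariate(2.05, 0.15),      # NOK/l
            "if40_price": random.normalvariate(900.0, 90.0)}    # NOK/tonn

def operating_cost(a: dict, fuel: str) -> float:
    """Placeholder cost function; the real model also adds the extra
    maintenance that machinery running on HFO requires."""
    hours = a["running_hours"] * a["time_full_speed"]
    if fuel == "MGO":
        return a["mgo_price"] * 450.0 * hours          # assumed l/h
    return a["if40_price"] / 1_000.0 * 520.0 * hours   # assumed kg/h

def fuel_savings_trial() -> float:
    """One trial of aggregated savings from running on IF 40 instead of MGO."""
    a = draw_assumptions()                 # same draw for both alternatives
    return operating_cost(a, "MGO") - operating_cost(a, "IF40")

savings = [fuel_savings_trial() for _ in range(10_000)]
probability_of_saving = sum(s > 0 for s in savings) / len(savings)
```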
[Figure 7.12 Uncertainty distribution for the aggregated savings of using IF 40
from the vessel’s perspective (probability/frequency versus 1,000 USD, roughly
-$6,000 to $8,000).]
[Figure 7.13 Sensitivity analysis for the vessel aggregated savings when using
IF 40 (measured by rank correlation). IF 40 (SO2 < 2.5%) (NOK/tonn) -.67; MGO Price
(NOK/l) .48; IF 40 Tax (SO2 < 2.5%) (NOK/tonn) .28; MGO Tax (NOK/l) -.26; Nom. Fuel
Consump. Full Speed .18; HFO/MGO Consumption Ratio -.14; Annual Running Hours (h)
-.12; Time at Full Speed (h) .11; Time along Platform (h) -.08.]
○ The vessel will be running steadily with a high fuel consumption. This can
only be achieved when the vessel is going back and forth on full speed,
because then the positive effect of high fuel consumption will dominate the
negative effect of more annual running hours.
[Figure 7.14 Sensitivity chart for the aggregated savings of using IF 40 from the
shipowner’s perspective (measured by rank correlation). 601.1-2.01 A1415 Job 1.01
Comp. (NOK) HFO -.34; 601.1-2.10 A1418 Job 0.50 Comp. (NOK) .27; 601.1-2.01 A1416
Job 4.60 Comp. (NOK) HFO -.23; 601.01.06 A1416 Comp. (NOK) HFO -.23; 601.1-2.01
A1416 Job 5.60 Comp. (NOK) HFO -.23; 601.1-2.25 A1416 Job 0.25 Comp. (NOK) HFO -.21;
601.1-2.25 A1416 Job 0.24 Comp. (NOK) HFO -.21; 601.1-2.01 A1416 Job 3.60 Comp.
(NOK) HFO -.20; 601.1-2.01 A1415 Job 0.64 Comp. (NOK) HFO -.20; 601.1-2.01 A1416
Job 2.60 Comp. (NOK) HFO -.20; 601.1-2.01 A1416 Job 6.60 Comp. (NOK) HFO -.20;
601.1-2.25 A1416 Job 0.23 Comp. (NOK) HFO -.19; 601.1-2.25 A1416 Job 0.26 Comp.
(NOK) HFO -.19; 601.1-2.25 A1416 Job 0.22 Comp. (NOK) HFO -.19; 601.1-2.25 A1416
Job 0.21 Comp. (NOK) HFO -.18; 601.1-2.25 A1416 Comp. (NOK) .17; 601.1-2.70 A1418
Comp. (NOK) .14; 601.1-2.10 A1418 Job 0.50 Comp. (NOK) HFO -.13; 601.1-2.10 A1418
Job 0.20 Comp. (NOK) HFO -.12; 601.1-2.70 A1418 Comp. (NOK) HFO -.11.]
● From the shipowner’s point of view, it is favorable to use MGO, except when
the shipowner must pay the fuel and the previous points occur. This situation
is only likely to occur when the vessel is on line-charter contracts.
In summary, for the shipowner, MGO is best in most cases (unless the charter
group specifies something else). The only case where IF 40 will in general bene-
fit the shipowner is when the vessel is on line-chartering and the IF 40 price is
expected to be favorable.
It is important to understand that design decision A was not investigated for
procurement/design purposes; that is, we did not try to find out the financial
consequences of purchasing and installing machinery that uses HFO. If that had
been the purpose, we should have used a discounting factor, because the initial
investment would lie at the opposite end of the timeline from the operating costs
and benefits. The purpose was rather to find out under what operating circum-
stances HFO is preferable to MGO on a general and continuous basis. If we
had used discounting factors, we would have distorted the sensitivity analysis in
Figure 7.14, which would have led to erroneous conclusions regarding the critical
operating factors. Thus, understanding the purpose of the LCC model is pivotal.
Before we close this case, a few issues about risk management should be men-
tioned, since LCC is inherently about future profitability, which is associated
with risks.
Table 7.7 Three Most Critical Success Factors for the Shipowner’s
Aggregated Life-Span Profitability and Their Risks

Ranking | Cost/Revenue Contributors | Risks | Type
1 | Daily revenue | Contract is discontinued for various reasons; poor customer-relationship management inhibits contract extension. | Revenue
2 | Capital cost | Fixed interest rates inhibit interest rate reductions; market premium risk increases/decreases; Farstad Shipping ASA beta increases/decreases; interest on long-term government bonds increases/decreases. | Cost
3 | Crew cost (including reimbursement) | Social costs increase/decrease; wages increase; regulations require more employees. | Cost
The rest of what constitutes sound risk management practices can be done quite
straightforwardly, as briefly explained in Chapter 3.
CLOSURE
The results from the Activity-Based LCC model include much more information
than presented here. But the results presented do illustrate the comprehensiveness
and the effectiveness of Activity-Based LCC, which is the main purpose of this
chapter. That Activity-Based LCC works well should be beyond any doubt.
The big question that may arise after reading this case is why no discounting
factors were used. Clues to the answer can be found in the “Problem Statement
and System Boundaries” section, where the twofold problem statement is pre-
sented. Basically, Farstad wanted an LCC model that would provide decision sup-
port concerning:
● The total costs of operating a PSV for up to 10 years.
● The circumstances under which machinery that runs on HFO is preferable to
machinery that uses MGO.
The point is that both issues are related to the continuous operation of a PSV.
In such a situation, we simply cannot discount the future because operational/man-
agement issues are continuous in character and the time value is consequently the
same regardless of where you are in a timeline. That is, it is just as important to
operate a PSV well in year 5 as in year 1 provided that Farstad is interested in being
in business after year 5.
It is also important to find out how rapidly the industry is moving. The offshore
shipping industry is a mature industry and one year in it is therefore not compara-
ble to one year in a rapidly changing industry such as the IT industry. For Farstad,
a lag of one month between costs and revenues does not warrant using discount
factors except in extraordinary circumstances. Thus, what is continuous and what
is not is a matter of industry characteristics and the relative size of the costs or the
cash flows. All such issues should be weighed when considering discounting
factors. Be sure that the assumptions that accompany the usage of discounting fac-
tors represent the reality of the case.
In what kind of situation could discount factors rightfully be used? For Farstad,
many investments (financial and real) can be supported by an Activity-Based LCC
model, but it all boils down to whether a significant time difference exists between
costs and revenues. For example, if Farstad decides to rebuild a PSV by changing
machinery from using MGO to HFO, it would have an initial cost that must be
recovered from a series of annual profitability improvements. Then it would be
necessary to use a discounting factor because the company has spent resources
today that will produce benefits for years to come. If Farstad simply wants to know
how to increase its annual profitability, it would be wrong to use a discounting fac-
tor because then it would be a matter of improving the daily operation, a continu-
ous activity where costs and revenues follow each other.
NOTES
1. I gratefully acknowledge the very good cooperation and financial support from
Farstad Shipping ASA and Møre Research in Ålesund, Norway, that made the work
presented in this chapter possible. I would particularly like to mention that unless I
had been fortunate enough to be invited by Annik Magerholm Fet to participate in
the project, I would never have had the opportunity to present this case in the first
place. The project is described fully in A.M. Fet, J. Emblemsvåg, and J.T.
Johannesen, Environmental Impacts and Activity-Based Costing during Operation of
a Platform Supply Vessel (Ålesund, Norway: Møreforsking, 1996).
2. See, for example, J. Brown, “Copolymer Antifoulings: Look Beneath the Surface,”
Propeller Direct 3 (1), 1996, pp. 1-5.
3. For more information, see S.K. Bailey and I.M. Davies, “The Effects of Tributyltin
on Dogwhelks (Nucella Lapillus) from Scottish Coastal Waters,” Journal of Marine
Biology Association UK 63, 1989, pp. 335-354, and A.R. Beaumont and M.D. Budd,
“High Mortality of the Larvae of the Common Mussel at Low Concentrations of
Tributyltin,” Marine Pollution Bulletin 15, 1984, pp. 402-405.
4. That something has no economic value does not imply that it is worthless in itself.
It simply indicates that the current transactions in the economic system do not include
it because there is no market for it.
5. There are 501 assumption cells and 65 forecast cells. The term cells is used because
the model is implemented in MS Excel. A more generic term would be variable.
6. See, for example, K. Pedersen, J. Emblemsvåg, R. Bailey, J.K. Allen, and F. Mistree,
“Validating Design Methods and Research: The Validation Square,” the 2000 ASME
Design Engineering Technical Conference, Baltimore, Maryland, ASME.
7. Due to the Central Limit Theorem, the sum would be approximately a bell-
shaped curve, a normal distribution.
8. P.F. Drucker, “The Theory of the Business.” Harvard Business Review, September-
October, 1994.
9. By running more simulations and eliminating the most dominant factors in each sim-
ulation, this table could have been expanded to capture all the 501 assumption cells.
8
ACTIVITY-BASED
LIFE-CYCLE COSTING AT
WAGONHO!1
There are risks and costs to a program of action, but they are far less
than the long-range risks and costs of comfortable inaction.
John F. Kennedy
In Chapter 4, WagonHo! was discussed extensively within the context of identify-
ing ways to improve the company. In this chapter, a complete Activity-Based Life-
Cycle Costing (LCC) implementation for all WagonHo!’s products, and for the
entire product life cycle, is discussed. That is, the Activity-Based LCC model will
include all the stages shown in Figure 2.1 from manufacturing to use/service to
downstream.
The primary purpose of this chapter is to illustrate how Activity-Based LCC
can be implemented to cover an entire product life cycle. A simple case like
WagonHo! is ideal for several reasons:
● Very few organizations are doing such exercises. Most organizations use
LCC for procurement and design. A very limited number of for-profit organ-
izations include end-of-life issues, as far as I know.
● A complete real-life case would have been too large and complex to com-
prehensibly present in a book format. For example, in one of the projects I
worked on recently, the model had about 4,000 variables, 100-plus product
lines, about 50 activities per manufacturing site, of which there were three,
and so forth. The model provided detailed decision support for deciding
between four major restructuring scenarios for a $150 million manufacturing
company and several process and product outsourcing alternatives. Because
of the complicated information system in the company combined with sim-
ulating the introduction of three new major product lines, we spent 14 days
checking and tuning it, which is twice the time usually spent. Clearly,
presenting such a case in a comprehensible manner to someone who does not
know the company would be virtually impossible.
● The WagonHo! case has all the elements needed (multiple products, detailed
process information, substantial overhead costs, and so on) to illustrate the
points, yet it is simple enough to provide value to the readers.
Since WagonHo!, Inc. was discussed in Chapter 4, where much background
information was presented, this chapter starts by reiterating some of the most
important issues. It then discusses how to implement Activity-Based LCC and
presents the results.
[Figure: the CW1000 wagon.]
Later, however, redesign of the products may also become an option, but for
now management realizes that it cannot make too many big changes. More details
concerning products, the Bill of Materials (BOM), hourly labor rates, and so on
can be found in Chapter 4.
To facilitate decision support of these issues and more, both a costing model
and a cash flow model must be built. The reason for making a cash flow model is
that during the transition phase, it is important to keep track of liquidity to avoid
insolvency.
The Activity-Based LCC model implemented at WagonHo! is a full absorption
model. That is, all the costs are traced to the products. Such full absorption mod-
els are particularly useful with respect to evaluating the pricing of the products.
To provide the decision support needed for the downstream operations, we must,
in addition to estimating the costs and profitability of the three products, also
assess the costs of the main processes, as depicted in Figure 2.1. These generic
processes are:
1. Mining (raw material extraction)
2. Material processing
3. Product manufacture
4. Distribution
5. Use and service
6. Product take-back
7. Product demanufacture and disassembly, including the remanufacturing of
reusable components and the reprocessing of recycled materials
8. Materials demanufacturing, including energy recycling, fuel production, and
materials regeneration
9. Disposal
The first four processes are already covered in the standard ABC model pre-
sented in Chapter 4. The main focus of the Activity-Based LCC model is there-
fore processes 5 through 9 to the degree they are applicable. It is crucial to realize,
however, that since the product properties are established early in the product life
cycle yet have large consequences for the entire life cycle, the Activity-Based
LCC model must include all applicable processes. To give decision support with
respect to organizational issues, the model must therefore treat every activity as
a cost object.
Inflation could be included. However, inflation will affect both expenses and sav-
ings in a similar fashion in this particular case, because the relative difference
remains the same as before. To make life simple, inflation is therefore omitted.
The changes in expense levels combined with the old cost structure have pro-
duced the current cost base, as shown in Table 8.4, after the three-year transition
period following the investment. Each cost element is broken further down into
greater detail in the model to fit the activity definitions as closely as possible. In
other words, the standard General Ledger, which is organized according to cost
categories (depreciation, direct labor costs, indirect labor costs, rent, and so on)
has been rearranged and become object oriented. This makes the distortion from
the resource level to the activity level in the model as small as possible. More
importantly, however, the object orientation makes process cost assessments eas-
ier and more accurate, and cause-effect relations are easier to identify.
The manufacturing costs were quantified using information from the BOM and
the existing cost accounting system. The cost of demanufacturing resources
is based on guesswork with respect to the kind of personnel needed and what
processes and activities are needed to sustain a viable demanufacturing process.
Even though this is guesswork, the point is that the Activity-Based LCC model
makes it possible to investigate the consequences (costs and risks) and what the crit-
ical success factors are regarding the entire demanufacturing process. All equipment
costs are handled as depreciation to which the annual maintenance costs have been
added. The building is an aggregated resource that includes the annual gas and elec-
tricity costs. The resources in Table 8.4 also include all new employees.
In this case, it is important to notice that since the products are estimated to have
an expected life in the market of three years before WagonHo! can take them back,
there will be a transition period of roughly three years. Note that Year 0 appears
two years after the new strategy has been implemented. The reason is that it takes
three years before any products are taken back due to the life expectancy and the
expiration of leasing contracts. To monitor this period, a cash flow model is needed
to avoid liquidity problems.
The cash flow model is a comparative model; that is, it compares the cash flow
of the new strategy to not implementing the new strategy, and not to the total cash
flow in the company. This suffices in this case, and in most investment analyses,
the baseline is the “do nothing” option. This is a commonly chosen baseline, but
it is rarely the correct one because doing nothing has hidden implications. The rea-
son is that the baseline is always moving; the business environment as well as the
organization itself is always evolving, and doing nothing therefore means the
situation will get worse, as explained in Chapter 5’s “Moving Baseline” section. For
WagonHo!, however, the new strategy will most likely have a positive impact. Doing
nothing as a baseline will therefore be a conservative choice because the future real-
ity will most likely be better than today since WagonHo! has already undergone sub-
stantial positive changes the last two years, as discussed in Chapter 4.
[Figure: WagonHo!’s facility layout, with six manufacturing cells (stations labeled
M, SA, and FA), a conveyer, bar-code scanners, and finished goods storage, plus the
take-back flow: receive take-back products, take apart and sort, remove fasteners,
clean, inspect, reuse good components, ship materials for recycling, or incinerate.]
WagonHo! does not need any internal, dedicated transportation system other than
the conveyer system.
No cost pools exist other than the activities themselves. Also, many activities have
two activity drivers. One is related to the variable activity consumption, such as
labor hours, whereas the other is related to the fixed activity consumption, such as
area usage.
It should also be noted that because production at WagonHo! is not automated
or performed in large batches, many activity drivers are volume related. One might
argue in such a situation that a volume-based costing system would suffice. The
reality is, however, that the existing volume-based costing system does not provide
correct enough cost assignments and is insufficient in providing decision support
in terms of what to do about the situation.2 Thus, even in a situation where the cost
estimates of a volume-based system and an ABC system are the same, the ABC
256 ACTIVITY-BASED LCC AT WAGONHO!
system would still be preferable due to its superior tracing capabilities, which are
essential for providing decision support.
Figure 8.5 Two assumption cells in the WagonHo! Activity-Based LCC model.
Table 8.8 First-Stage Cost Assignment for the Two First Activities (Continued)

Resources | Resource Drivers | Total Resource | Consumption Intensity | A111 Driver / Activity Cost | A112 Driver / Activity Cost
Office equipment:
● Computer systems | Acquisition costs | 39,500 | 0.23 ($/$) | 3,000.00 ($) / 685 | 5,000.00 ($) / 1,142
● Furniture and so on | Acquisition costs | 4,000 | 0.12 ($/$) | 500.00 ($) / 62 | 1,000.00 ($) / 123
● Reception | No. of communications | 2,150 | 0.03 ($/comm.) | 2,500 (comm.) / 80 | 4,800 (comm.) / 153
Facility overhead costs | Labor costs | 80,000 | 0.05 ($/$) | 17,000 ($) / 804 | 7,500 ($) / 355
The second-stage assignment (to the cost objects, the CW1000, CW4000, and
CW7000 in this case) is exactly the same as the first-stage assignment. The only
differences are what is being assigned, how it is being assigned, and to what:
● In the first stage, resources ($) are assigned to activities using resource drivers.
● In the second stage, activity consumption (various measures of activity lev-
els) is assigned to cost objects using activity drivers. Cost objects are there-
fore assigned costs indirectly.
With this in mind, we see the similarity between Table 8.8 and Table 8.9. Note
that Table 8.9 should also include the two other products, cost objects CW4000
and CW7000, but due to space constraints this is not possible.
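The symmetry between the two stages can be captured in a few lines. The sketch below uses the computer-systems row of Table 8.8; the small differences from the printed activity costs (690 versus 685) stem only from the rounding of the printed consumption intensity, and the stage 2 volumes are hypothetical:

```python
def assign(driver_volumes: dict, intensity: float) -> dict:
    """Both ABC stages are the same operation: driver volume x intensity.
    Stage 1 assigns resource costs to activities via resource drivers;
    stage 2 assigns activity consumption to cost objects via activity drivers."""
    return {key: volume * intensity for key, volume in driver_volumes.items()}

# Stage 1, computer systems (Table 8.8): the $39,500 resource is spread by
# acquisition costs at the printed intensity of 0.23 $/$.
stage1 = assign({"A111": 3_000.0, "A112": 5_000.0}, intensity=0.23)
# -> {'A111': 690.0, 'A112': 1150.0}; the table's 685 and 1,142 differ only
#    because the printed 0.23 intensity is rounded.

# Stage 2 looks identical, except activity consumption is assigned to the
# cost objects (driver volumes and intensity below are hypothetical).
stage2 = assign({"CW1000": 1_200.0, "CW4000": 5_400.0}, intensity=1.75)
```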
An additional performance measure in this model is the Net Present Value
(NPV) of the cash flow. As explained earlier, in the current situation for
WagonHo!, keeping track of cash requirements is important to avoid insolvency.
It is also the chosen measure for the management at WagonHo! to decide whether
to invest in the new business idea or not.
Note that at the bottom of Table 8.9 we see the deterministic results for
WagonHo!, Inc. as a whole (to the left) and for CW1000 (to the right). The results
should be used with care because they are deterministic. Basically, the results do
not take uncertainty into account. To do that, we must run a Monte Carlo simula-
tion, as discussed next.
[Table fragment: cost assignment for activities A33 and A34.
A33 | Annual products sold | 90,323 | 3.25 ($/unit sold) | 2,550 (units sold) | 8,300 | 8,431
    | Annual production | 1,439 | 0.05 ($/unit) | 2,500 (units) | 131 |
A34 | Total labor hours | 157,472 | 2.99 ($/h) | 8,000 (h) | 23,882 | 24,013
    | Annual production | 1,439 | 0.05 ($/unit) | 2,500 (units) | 131 | ]
To compute the Weighted Average Cost of Capital (WACC), the financing
side of the balance sheet is used. That is evident from Figure 8.6, where we can
see that we need data on the average cost of debt, both short-term and long-term,
and the cost structure, which is found by dividing total equity by the sum of equity
and debt, and likewise for total debt. In addition to those numbers, some macro-
economic numbers are needed, such as the tax rate, the return on long-term gov-
ernment bond, the beta, which is 1 for WagonHo! because it is not a publicly listed
company, and the market-risk premium.
The computations are:
● Cost of equity = long-term government bond + (beta × market-risk premium) = 13 percent
● Average cost of debt = ((avg. cost of short-term debt × sum short-term debt) + (avg. cost of long-term debt × sum of long-term debt)) ÷ sum debt = ((8 percent × $109,255) + (9 percent × $217,004)) ÷ $326,259 = 8.3 percent
● Avg. cost of debt after tax = (1 − tax rate) × avg. cost of debt = (1 − 28 percent) × 8.3 percent = 6 percent
● Cost structure, equity = sum equity ÷ sum of equity and debt = $233,850 ÷ $560,109 = 41.8 percent
● Cost structure, debt = sum debt ÷ sum of equity and debt = $326,259 ÷ $560,109 = 58.2 percent
● WACC = (cost of equity × equity) + (average cost of debt after tax × debt) = (13.0 percent × 41.8 percent) + (6 percent × 58.2 percent) = 8.9 percent
[Figure 8.6 fragment: cost structure, equity 41.8 percent, debt 58.2 percent;
WACC 8.9 percent.]
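A compact restatement of the computation above, as a sketch (the function is generic; the arguments are WagonHo!'s numbers from the text):

```python
def wacc(cost_equity: float, cost_debt: float, tax_rate: float,
         equity: float, debt: float) -> float:
    """Weighted Average Cost of Capital from the financing side of the
    balance sheet, with the cost of debt taken after tax."""
    total = equity + debt
    after_tax_debt = (1.0 - tax_rate) * cost_debt
    return cost_equity * equity / total + after_tax_debt * debt / total

rate = wacc(cost_equity=0.13, cost_debt=0.083, tax_rate=0.28,
            equity=233_850.0, debt=326_259.0)
print(f"WACC = {rate:.1%}")   # -> WACC = 8.9%
```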
Note that the 8.9 percent includes the newly purchased equipment. In general,
the purchase of assets and financing them by loans is likely to decrease the WACC
somewhat because equity normally produces a higher WACC than debt. The rea-
son is that most investors expect around a 15 percent return on equity, whereas the
interest on debt is usually less than 10 percent, at least in countries with a sound
economic policy.
The side of the balance sheet that lists the assets is used to compute the cost of capi-
tal, which could have been used to compute the EP, or the Economic Value Added
(EVA), as Stern Stewart & Company in New York calls it. For WagonHo!, the cal-
culation would have been as follows:

WagonHo! operating profit, found in Table 8.9 = $288,507
− Taxes (assumed to be 28 percent) = $ 80,782
− Cost of capital, found in Table 8.10 = $ 49,980
= Economic Profit (EVA) = $157,745
An economic profit of $157,745, or 5.8 percent in terms of sales, is very good
for WagonHo!, because as long as the EP is larger than zero it indicates that share-
holder value is generated. We should, however, not be overly joyous concerning
the result because WagonHo! is labor intensive and the cost of capital is conse-
quently low. Also, since the cost of capital is so low compared to the annual prof-
its, it is not necessary to include EP calculations on a product and process level. If
we did, however, we would expect all products to become less profitable accord-
ing to their use of capital, in which case the CW1000 would probably become
more costly than the two others.
In any case, with a WACC of 8.9 percent, a time horizon of five years, and net
cash flows, as shown in Table 8.11, we get an NPV of $81,012 (deterministically).
This NPV suggests that the investment in new equipment and the process is worth
pursuing because it provides a higher return than WACC.
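The NPV computation itself is standard; a minimal sketch follows. The cash flows below are illustrative placeholders only; the actual Year 0 through Year 5 net cash flows are in Table 8.11, which with the 8.9 percent WACC gives the $81,012 quoted:

```python
def npv(rate: float, cash_flows: list) -> float:
    """Net Present Value; cash_flows[0] is Year 0 and is not discounted."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical example shaped like the WagonHo! case: an initial outlay
# followed by five years of net inflows, discounted at the WACC.
example = npv(0.089, [-150_000, 40_000, 60_000, 70_000, 75_000, 80_000])
```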
[Figure 8.7: trend chart of the net cash flows from Year 0 onward (certainty bands
from 10 to 100 percent, roughly -$150,000 to $56,250). Figure 8.8: uncertainty
distribution of the NPV (probability/frequency axes).]
time, luckily, is not associated with any risk. But to assess whether this cash flow
is sufficient or not in generating enough positive cash flow, the NPV is computed.
Table 8.11 shows that the NPV is positive in the deterministic case, and from
Figure 8.8 we can see the uncertainty distribution of the NPV. The NPV is clearly
positive most of the time; that is, a small probability exists that the NPV will be
negative. To better investigate the NPV, we turn to Table 8.12 where the percentiles
are shown. Clearly, the probability of loss is a little less than 10 percent. The poten-
tial upside, however, is very substantial: If everything went as well as possible, the
NPV would in fact reach about $220,000.
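Given the simulated NPV trials, both the loss probability and the percentiles of Table 8.12 are one-liners; a sketch:

```python
import statistics

def loss_probability(npv_trials: list) -> float:
    """Fraction of Monte Carlo trials in which the NPV is negative."""
    return sum(v < 0 for v in npv_trials) / len(npv_trials)

def npv_percentiles(npv_trials: list, cuts=(5, 10, 50, 90, 95)) -> dict:
    """Selected percentiles of the simulated NPV distribution (as in Table 8.12)."""
    points = statistics.quantiles(npv_trials, n=100)  # 99 cut points
    return {p: points[p - 1] for p in cuts}

# For WagonHo!, loss_probability(...) comes out a little below 0.10, and the
# extreme upside, max(npv_trials), is around $220,000.
```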
[Figures 8.9 and 8.10: sensitivity charts, target forecast: Net Present Value
(measured by rank correlation).]
on material costs. From Figure 8.10, we can even identify which components are
most important to redesign, such as the CW1373 bed.
The last important thing to notice is that although the degree of heating-related
energy consumption introduces large amounts of uncertainty into the model (refer
to Figure 8.9), it is not an important variable because the rank correlation coeffi-
cient is only 0.05. Further work, if initiated, should therefore not include the issue
of heating. It simply will not pay off, although it is a very uncertain issue.
The analysis so far suggests that the new strategy is an economically sound strat-
egy in the sense that the cash flows and NPV will be satisfactory. In the next two sec-
tions, costs and profitability are discussed, but first the activity costs are discussed.
Activity Costs
All activity-based frameworks consist of two stages in the cost assignment. The
activity costs are an intermediate stage when it comes to estimating cost and
profitabilities of products, for example. However, the activity costs can provide a
useful insight into business process reengineering, process design, and other
process-focused efforts.
Figure 8.11 presents a profile of the activity costs that includes the uncertainty.
Normally, it would have been organized according to descending activity costs,
but in a simulation model that is not feasible. In any case, what we would nor-
mally look for are the highest costs, which in this case are associated with
[Figure 8.11: trend chart of the activity cost profile, $0 to $500,000 with 50 to
100 percent certainty bands, for activities A111 through A34.]
Activities A122, A124, A32, and so on. Then we can investigate more closely
whether ways exist for reducing the costs of all value-adding activities, such as
A122 and A124. The nonvalue-added (NVA) activities such as A32 should ideally
be eliminated. But everybody knows that we need some production management
and other NVA activities. In any case, Figure 8.11 is useful in identifying which
activities to start improving.
Often the sum of the activity costs for a certain process is different from what
we might expect, based on the organizational chart. For example, many companies
may have a few people in their organization working with taking orders from cus-
tomers, but once the activity costs for that process are added up, the costs are sev-
eral times larger than what those few people could cost. What is detected by the
ABC system is that many people work with customers’ orders in some fashion
throughout the organization. Such costs are useful to identify because they are
good indications of misallocated capacity, or most often they indicate that the sys-
tem does not work as planned and that people have to make telephone calls to
correct errors, find missing information, and so on. For WagonHo!, a simi-
lar problem may exist because activity A32, run production, is very large: Is it sen-
sible to spend over $200,000 annually on managing the production for such a small
company? This may indicate severe system problems and further investigations
should be undertaken.
Finally, it should be noted that the trend chart is somewhat misleading in this
context because it may look as if there are costs between the activities. That is, of
course, not the case. The reason for using a trend chart for this purpose at all is that
it provides a simple overview of the activity costs.
[Figure: uncertainty distribution (probability/frequency versus $/year, $0 to
$600,000).]
manage the risks. That can be seen in Figure 8.13. We see that the six largest con-
tributors to the uncertainty are all related to sales-related factors (price and sales
volume). Unfortunately, we cannot do much about the prices because the market
is the master, but we can try to manage the operational risks associated with the
CW4000 production volume. One way of doing that is to ensure that, for example,
the CW4000 has a production priority above the two other products.
Another risk is related to the sales rebates. We see from Figure 8.13 that if the
sales rebates increase, the profitability decreases. Here we are in a classical
dilemma. Salespeople are often rewarded based on volume because a substantial
body of research from decades ago shows that a strong correlation exists between
[Figure 8.13: sensitivity chart, target forecast: Result WagonHo! (measured by
rank correlation).]
market share and Return on Investment (ROI).3 This has, however, been taken to
the extreme in many cases, and the result is that salespeople push volume without
thinking of profitability. To manage this risk, however, WagonHo! should reward
salespeople using profit incentives and not volume incentives.
It might be tempting to reduce the uncertainty of such variables in the model in
order to narrow down the forecast. However, if we reduce the uncertainty without
reducing the real uncertainty, we increase the risks of making the decision support
deceitful, and that can be a major decision risk.
Of the factors we can do something about, we see that the A122 labor cost is
substantial, as is the degree of reuse of nonmetallic parts. The latter is also para-
mount for the NPV. Other important factors are the A32, A124, A34, and A142
labor costs. You may wonder how labor costs can be such important sources of
uncertainty, but the fact is that costs, like quality, are statistical in nature. Just think
of all the factors that may impact the labor costs: wage increases, overtime, lay-
offs, use of temporary employees, illness, and so on. All these factors are unknown
prior to the cost estimation.
When costs are statistical on the aggregate level, imagine how much more sta-
tistical they are on the product level, which is evident from Figure 8.14. The costs
[Figure 8.14: overlay chart comparing the profitability uncertainty distributions
of the CW1000 and the CW7000 (-25.00 to 30.00 percent).]
on the product level cannot, as on the aggregate level, be simply summed up. The
costs on the product level must first undergo a cost assignment process, as shown
in the “Step 7: Model the Uncertainty” and “Step 8: Estimate the Bill of Activities”
sections, which use resource and activity drivers, whose nature is also statistical.
The entire cost assignment process is thus highly statistical, and so is its
input (the resources, which are measured as costs).
From this, we understand that to estimate costs with a single number is a major
simplification that ignores the fact that costs are statistical. The result is that deci-
sion-makers overtrust or reject the cost estimates and instead use gut feelings as a
gauge. Needless to say, when we have ways of presenting costs as they are, we
should use them, because guessing about such important fundamentals of compa-
nies is gambling with shareholder values to an unacceptable extent.
In any case, the profitability of the products of WagonHo! is associated with
substantial amounts of uncertainty. As Table 8.14 shows, if everything went really
badly, all the products would go into the red (unprofitable). Luckily for WagonHo!,
the probability of that occurring is small (less than 5 percent) for both CW4000
and CW7000.
According to Table 8.9, the profitability of the CW1000 is −2.85 percent in the
deterministic case. However, from Table 8.14 we see that in fact a 30 percent prob-
ability of positive profitability exists, which should be good news for the manage-
ment of WagonHo! A substantial downside must be avoided, however, and to do
that the sensitivity chart in Figure 8.15 is helpful.
Finally, it should be noted that the CW1000 is also associated with the largest
uncertainty range (43.54 percent). This indicates that the CW1000 is also the most
difficult product to manage. In fact, the Japanese quality guru Genichi Taguchi
devised a loss function where the variability of a factor is viewed as an important
measure of quality. The larger the variability, the worse the quality, given the same
expected value. Eliminating the variance in all significant processes is also a major
objective in the successful Six Sigma approach, which is essentially a Total Quality
Management (TQM) methodology that focuses heavily on statistical measures.
If we apply that idea to the CW1000, we soon understand that the CW1000 is
inherently more difficult to make profitable, since its costs carry a wider uncer-
tainty range than those of the two other products. This, like earlier findings, sup-
ports the advice to significantly redesign the CW1000. Simply too many things
can go wrong, which means that too many resources are spent ensuring that they
do not go wrong, which ultimately leads to poor profitability.
So far, the uncertainty of the profitabilities has been discussed; next, the under-
lying factors that determine the profitability are identified and discussed.
[Figure 8.15: sensitivity chart, target forecast: Result WagonHo!. Figure 8.16:
sensitivity chart, target forecast: CW1000 profitability (both measured by rank
correlation).]
Thus, the products must also be redesigned in order to change the leasing strat-
egy and eliminate the use of rebates altogether. For that to happen, WagonHo! must
most likely be capable of arguing that its new redesigned products are better than
before. How that can be achieved is unclear, at least for the time being.
Due to the similarity of the products, it suffices to discuss the two extreme
points: the most unprofitable CW1000 and the most profitable CW4000. The sen-
sitivity chart for the CW1000 is shown in Figure 8.16. As expected, sales price and
sales volume are the most important factors. The reason annual production turns
out negative is that the model handles inventory changes: an increase in produc-
tion then results in reduced profitability because it is the number of units sold
that counts. Of course, over time production and sales must match.
It is interesting to note that an increase in the production levels of the CW4000
and the CW7000 increases the CW1000 profitability. This is the good, old
economies-of-scale effect; basically, the more units produced, the less overhead
costs will be traced to each unit on average. Another interesting factor to note is
the relatively low importance of the degree of reused components. This is due to
two factors.
First, the CW1000 has many special components and screws that make reuse
less feasible than for the two other products. Second, but more important, the other
costs in Figure 8.15 are large and substantial, so the reuse of components there-
[Figure 8.17: sensitivity chart, target forecast: CW1000 direct labor cost
(measured by rank correlation).]
fore plays a relatively lesser role although substantial enough to result in an over-
all improvement of the WagonHo! result of roughly $75,000 annually. Many of the
labor costs are very substantial and significantly impact the profitability of the
CW1000 as is shown in Figure 8.16. These costs can be studied in greater detail
in Figure 8.17.
Three main sources for the costs exist:
1. The time it takes to manufacture the CW1000 is too long, particularly the milling
activity (A122) and the assembly activity (A124). This is a further indication of
the need to redesign the CW1000 completely both with respect to the way the
parts are manufactured (milling) and the way they are joined (assembly).
2. The CW1000 uses too many overhead resources, particularly production
planning and so on (A32). This is a result of the CW1000’s complexity,
which calls for product redesign via simplification.
3. The cost of the workers is important, but not easy to do anything about. The
best that can be done is to try to automate some particularly time-consum-
ing tasks. To do that, so-called action charts can be deployed,4 as explained
briefly in Chapter 5’s “Step 3 Issues: Role of the General Ledger” section.
The reason for not using action charts in this particular implementation is that
the products need a complete redesign and not just improvements with respect to
some functions or tasks.
The most profitable product is, as stated earlier, the CW4000, and Figure 8.18
presents the corresponding sensitivity analysis. As usual, sales-related factors are
[Figure 8.18: sensitivity chart, target forecast: CW4000 profitability (measured
by rank correlation).]
crucial, but compared to the CW1000 sensitivity chart in Figure 8.16, we see three
major and interesting things concerning the CW4000 in contrast to the CW1000:
1. It is relatively less labor intensive. The fact that the CW4000 is less labor
intensive gives a direct improvement on the profitability because labor is a
direct cost.
2. The degree of reused components plays a relatively more significant role.
This means that the direct materials become less costly because they are
reused to a significant extent.
3. The use of overhead resources is significantly less despite the absolute vol-
ume being several times larger (2,500 CW1000 units versus 14,000 CW4000
units). That the CW4000 production is more than five times larger also gives
it a better economies-of-scale effect, which is of course highly beneficial for
the unit cost of the CW4000, but not necessarily for the overall CW4000 cost
unless the CW4000 is a simpler product, which it is in this case.
All in all, the CW4000 seems to be a much better product than the CW1000,
although the price and traditional margin are less. The CW4000 goes through the
company without any extra work, complicating parts, procedures, and so on, and
that translates into lower costs and better profitability. The CW4000 is, however,
also plagued by the sales rebate.
The CW7000 product is somewhat between the CW1000 and the CW4000 in
most respects. That is important because it shows that if WagonHo! embarks on a
total redesign of all products to eliminate complexities (to cut overhead resources),
to reduce manufacturing times (to cut direct labor costs), and to make the products
easier to disassemble and reuse (to cut direct material costs), the rewards will be
positive. If it manages to redesign the products sufficiently, it may even get around
the sales rebate problem, because sales rebates are often a sign that the products are
not quite what customers want. The potential WagonHo! result will be, in
other words, much better than it is today. By implementing the new business
strategy and then embarking on the redesign program, as shown in this
Activity-Based LCC analysis, major profitability improvements can be made.
CLOSURE
This case study is special since it concerns a simulated company and not a real one.
On one hand, this has bearing on the reliability of both the input information and
the way the output from the model is interpreted. On the other hand, the case study
is illustrative enough to point out several issues concerning LCC in general and
Activity-Based LCC in particular. For this book, the latter is more important than
the former, particularly as other real-world case studies are presented elsewhere in
the book. In any case, what can be learned from this case study is worth discussing,
and that is done in the next two sections.
LCC in General
As explained in Chapter 2, most LCC approaches are not costing methods but cash
flow methods. In this case study, both a cash flow analysis and a costing analysis
have been conducted.
The cash flow analysis is clearly useful in that it provides insight concerning
liquidity, the NPV of the investment, and the financial risk exposure. It cannot,
however, provide any insight into the profitability of the products, the company as a
whole, or the underlying success factors. In this case, as often, the success factors
of cash flows and operating profits are somewhat related. Both the costing analy-
sis and the cash flow analysis point out the degree of reuse as relatively important,
although it is relatively much more important for the cash flow than the operating
costs; however, the cash flow analysis totally misses the labor costs and most of
the overhead costs.
This goes to show that a cash flow analysis will not, and cannot, substitute for
a costing analysis. Unfortunately, many, particularly in the environmental man-
agement domain, believe that cash flow analyses are costing analyses. As shown
here, they are not. The consequences of that misinterpretation are that significant
areas of improvements can be missed.
One may, however, argue that the cash flow analysis in this case study was only
comparative and not absolute; that is, not all cash flows were incorporated in the
model. That is true in the sense that if all cash flows were incorporated, the cash
flow model would have had a wider scope, but many costs are simply not repre-
sented by cash flows and would have been missed in any case. Also, costs repre-
sent the demand for jobs to be done, whereas cash flows represent some of the
capacity for doing the jobs. Good management practice is to match capacity to
demand, and that cannot be achieved by a cash flow model.
NOTES
1. I would like to thank research engineer Greg Wiles at the Center for Manufacturing
Information Technology in Atlanta for his cooperation and the data he provided,
which made this case possible.
2. This has been discussed extensively in J. Emblemsvåg and B. Bras, “ISO 14000 and
Activity-Based Life-Cycle Assessment in Environmentally Conscious Design and
Manufacturing: A Comparison,” 1998 American Society of Mechanical Engineers
(ASME) Design Engineering Technical Conference, Atlanta, GA.
3. According to F. Allvine, Marketing: Principles and Practices, Boston, MA:
Irwin/McGraw-Hill, 1996.
4. See, for example, J. Emblemsvåg and B. Bras, “The Use of Activity-Based Costing,
Uncertainty, and Disassembly Action Charts in Demanufacture Cost Assessments,”
1995 American Society of Mechanical Engineers (ASME) Advances in Design
Automation Conference, DE-Vol. 82, Boston, MA, pp. 285-292.
9
FROM HINDSIGHT
TO FORESIGHT
Life-Cycle Costing (LCC) is a tool for engineers, managers, and others who care
about downstream costs and total costs. As with most tools, the success of LCC
models is the result of balancing understanding on the one hand and craftsmanship
on the other. I have tried to find a suitable balance in order to avoid lengthy dis-
cussions about academic differences and also to avoid presenting many examples
with little reflection and insight. Experience without reflection is not worth much;
in fact, it can be outright dangerous. As Confucius said (The Analects, 2:16):
Study without thinking, and you are blind; think without studying, and you
are in danger.
Thus, some theoretical foundation is clearly needed to clarify what LCC is and
what it should be, what the difference is between traditional LCC and Activity-
Based LCC, and so forth. Similarly, the three cases were chosen to complement
each other and to complement the presented theory as well. This book should
therefore provide a practical guide to a new, powerful method of conducting LCC,
risk management, and uncertainty analysis that opens up many new avenues for
engineers, managers, and others.
[Figure 9.1 fragment. Life-Cycle Costing contributes: life-cycle perspective, total
costs, cash flows, and discounting factors. Activity-Based Costing contributes:
overhead costs, relevant cost assignment, cause and effect, multiple cost objects,
process orientation, cost versus expense, cost versus cash flow, and links to TQM
and EP.]
Characteristics
Characteristics refer to what is novel about Activity-Based LCC, both in relation
to traditional cost management as a whole and to traditional LCC approaches in
particular. As already discussed, these characteristics are closely linked to the three
layers of Activity-Based LCC, as shown in Figure 9.1.
Third, traditional LCC does not emphasize the need for establishing cause-and-
effect relationships to the extent that Activity-Based LCC does. This is partly a
consequence of the fact that traditional LCC is not process oriented. Therefore,
identifying cause-and-effect relationships is limited to issues such as material and
labor costs and product characteristics. Activity-Based LCC, in contrast, is sys-
tematically built up from definitions of activities, resources, and cause-and-effect
relationships (resource drivers, cost drivers, and activity drivers). Needless to say,
this gives an entirely different push toward understanding the cause-and-effect
relationships.
Fourth, traditional LCC ignores overhead cost. This was not a major problem in
the 1960s when it was invented by the U.S. Department of Defense, but today when
many companies face 30 to 40 percent overhead costs, the need for handling such
costs is obvious. Handling overhead costs realistically is one of the strong sides of
Activity-Based LCC and other activity-based frameworks. The fact that traditional
LCC ignores overhead costs makes it less useful as systems today become increas-
ingly capital intensive and require more support.
Fifth, traditional LCC is incapable of handling several products at the same
time. This is partly because it is rarely attempted, but more importantly, because
traditional LCC is unable to handle overhead costs realistically, it simply cannot
handle multiple products. The problem with this product-by-product approach is that
it can, and often does, lead to suboptimization. Although the products may have a
satisfactory life-cycle cost when estimated in isolation, when put together, the
overall product line, product family, or product portfolio will have an unnecessary
complexity in terms of unique parts and practices and is therefore still too costly.
For Activity-Based LCC, handling many products is no problem at all. In fact,
activity-based frameworks are rarely used on a single product because that would
be overkill. That said, it should be mentioned that the process perspective of activity-based frameworks adds value that the traditional concepts cannot provide, as shown in this book.
Traditional LCC has many shortcomings that make the concept increasingly
unsuitable for management purposes and engineering. For example, as technolo-
gies grow increasingly complicated, the need for support and capital increases. The
time when engineers could design a system or product without giving much
thought to the impact on overhead costs is gone. Likewise, the inability of tradi-
tional LCC to establish cause-and-effect relationships renders it incomplete in pro-
viding relevant decision support. Thus, I feel confident in claiming that the
characteristics of Activity-Based LCC outlined previously clearly indicate that it
is a more effective and efficient approach than traditional LCC. In fact, due to the
aforementioned five characteristics, the use of Monte Carlo methods becomes
much more potent than it would otherwise be.
Benefits
The “absolute” benefits of Activity-Based LCC arise from the individual charac-
teristics of Activity-Based LCC as well as their totality, whereas the benefits over traditional LCC are due to the five differences explained in the “Activity-Based LCC
and Traditional LCC” section. Since this book is about LCC and not cost man-
agement in general, the discussion will be limited to benefits in comparison to tra-
ditional LCC.
In the literature, many practitioners and researchers discuss cash flow and costs
as if they were the same thing. Cash flow and costs are, however, different, as
explained in Chapter 2. Cash flows concern liquidity, financial measures, and
financial risk exposure; they are therefore important in cases where a significant
time difference exists between expenses and revenues. Costs, however, concern
resource consumption and determine the profitability for a given level of revenues.
Naturally, it can be useful to present both perspectives in order to provide decision-
makers with wide support regarding liquidity, net present value, profitability, and
other important economic measures of performance. Activity-Based LCC facili-
tates both perspectives. This versatility makes the approach useful in a wide array
of economic considerations even beyond life-cycle thinking.
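As a minimal numeric sketch of the distinction (the purchase price, lifetime, and straight-line depreciation below are invented for illustration and are not taken from the book's cases):

# Minimal sketch contrasting the cash flow view and the cost view of the
# same purchase; all figures are invented for illustration.
purchase_price = 100.0
life_years = 4
discount_rate = 0.10

# Cash flow view: the whole outlay hits year 0; it drives liquidity and NPV.
cash_flows = [-purchase_price] + [0.0] * life_years
npv = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Cost view: straight-line depreciation spreads the resource consumption
# over the years of use; it drives the reported profitability per year.
annual_cost = purchase_price / life_years

print(f"NPV of the cash flows: {npv:.1f}")             # -100.0, all in year 0
print(f"Cost charged per year of use: {annual_cost}")  # 25.0

The same transaction thus answers a liquidity question and a profitability question very differently, which is why a model must state which perspective it takes.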
The process orientation of Activity-Based LCC ensures a close link to other
well-established process-oriented methods, such as Business Process Reen-
gineering (BPR), Total Quality Management (TQM), Six Sigma, and so on. More-
over, the process orientation is important because we cannot manage costs directly;
we can only manage them indirectly, by understanding how the activities (pro-
cesses) impact costs, as explained in Chapter 4. Also, process orientation has
numerous more subtle benefits that are beyond the scope of the discussion here.2
The primary need for process orientation is to establish reliable cause-and-
effect relationships. Process orientation would be worth little if it were not for the
fact that causes are almost by definition found in the production processes, the
work processes, or the management processes. The reliance on cause-and-effect
relationships is crucial because it ensures accuracy and, more importantly, rele-
vance. After all, we cannot find a root cause unless we have some ideas about
causes and effects. We can measure the effects, but it is the causes we must man-
age. The benefits from this characteristic are numerous: They include accurate cost
assignment, accurate cost estimates, the correct handling of overhead costs, the
tracing of critical success factors/root causes, a clearer understanding of the under-
lying causes of cost formation, and superb forecasting capabilities. Activity-Based
LCC is an attention-directing tool. Unless attention is paid to the causes, relevant
decision-support information will never materialize. Thus, the systematic usage of
cause-and-effect relationships is at the very core of effective and efficient design
and management.
Due to rapid technological innovation fueled by modern capitalism, the whole economy is changing into what many refer to as a “knowledge economy.” The capital base of a company, including intellectual capital, is becoming an increasingly larger part of the company’s wealth. For example,
even though the book values of IBM and Microsoft are only $16.6 billion and $930
million, respectively, their market values are $70.7 billion and $85.5 billion.3 The
difference can largely be attributed to the perceived value of their intellectual
capital and its ability to generate future profits. The effect of these changes on all
forms of cost management is that it has become increasingly important to under-
stand the formation of overhead costs and assign such costs to cost objects. Other-
wise, decision-making would be based on an increasingly smaller part of the total
picture, thus increasing the risk of deceptive analyses and erroneous conclusions.
Activity-Based LCC is tailor-made in this respect because its root, ABC, allows
the effective handling of overhead costs, as discussed in Chapter 4.
Overhead costs are like a big clump of gel on top of the processes in an organ-
ization. By simply removing a product or an activity without any further reduction
in the overhead costs, the gel will simply flow over onto the remaining products
and activities. This is the malaise of the traditional approaches; they simply focus
on the direct costs and ignore the overhead costs. Similarly, estimating the cost of a product in isolation from the other products induces gross errors because every
product in an organization is ultimately related to the other products in that they
share a common overhead cost structure and processes. Activity-Based LCC
avoids such problems by estimating the costs for all products and all activities in
a given business unit at the same time. Thus, we get the whole picture established
simultaneously and no room is left for costs to hide (at least in principle).
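To make the point concrete, here is a toy sketch of the two-stage assignment with the whole overhead pool estimated for all products simultaneously (the pool size, activities, and drivers are invented, not taken from the book's cases):

# Toy two-stage activity-based assignment; all numbers are invented.
overhead_pool = 1000.0

# Stage 1: resources are assigned to activities via resource drivers
# (here simply fixed shares of the pool).
activity_cost = {"setups": 0.40 * overhead_pool, "support": 0.60 * overhead_pool}

# Stage 2: activities are assigned to cost objects via activity drivers.
setups = {"P1": 3, "P2": 7}           # activity driver: number of setups
support_hours = {"P1": 50, "P2": 50}  # activity driver: support hours

def assign(total, drivers):
    """Distribute an activity's cost in proportion to driver quantities."""
    whole = sum(drivers.values())
    return {obj: total * qty / whole for obj, qty in drivers.items()}

product_overhead = {"P1": 0.0, "P2": 0.0}
for share in (assign(activity_cost["setups"], setups),
              assign(activity_cost["support"], support_hours)):
    for obj, cost in share.items():
        product_overhead[obj] += cost

print(product_overhead)  # {'P1': 420.0, 'P2': 580.0}; the pool is fully assigned

Because the assignment always exhausts the pool, dropping P1 without also cutting the underlying activities would simply push the same overhead onto P2; the gel has nowhere to hide.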
These benefits translate into a profound advantage, namely, the fact that
Activity-Based LCC provides decision-makers with the relevant information they
need to make better decisions. Traditional cost management and traditional LCC
are both too fragmented to provide relevant decision support. Activity-Based LCC
incorporates all costs, all cost objects, all activities, and the entire business unit—
basically, a complete cost picture.
We have all heard the story about the group of blind people who tried to describe an elephant. Each person felt a different part of the animal and described it differently. Only when these descriptions were combined could the full picture be seen. Activity-Based LCC provides the sight, but the decision-makers
must decide what to do about what they see.
IDEAS FOR THE FUTURE

Often the whole budgeting exercise is a simple increase from last year’s budget by some percentage points. Some of the most progressive companies have already left the old ways of doing budgeting and embraced a new forecasting-oriented approach where the company only estimates capacity for the next quarter or so. However, the basic problem is more or less the same, namely, that we chase numbers about future issues based on hindsight. One advantage of the new approach is that it makes it easier to update the prognoses because we only forecast one quarter at a time. This is definitely a step forward for industries undergoing rapid change. The main problem, however, persists: Errors in budgeting will only be detected when it is too late, and cost management becomes a matter of damage control.
If we use the ideas presented in this book, we can greatly reduce the chance of
budget errors and unforeseen financial troubles. The overall objective of a budget
is to ensure that economic and financial goals are met. In the traditional budgeting
world, this is achieved by controlling spending, or so it is believed. The problem
is that a company is an open system that interacts with an environment where
changes are frequent and surprises lurk around the corner. The greatest obstacle to
reaching budgeted goals is not a company’s control of spending, but rather its lack
of control over risks. Truly effective budgeting should therefore be risk-based;
hence, we can talk about Risk-Based Budgeting (RBB).
With the RBB idea in mind, we can introduce a second idea: Activity-Based
Budgeting (ABB). ABB was originally developed by the consultants Coopers & Lybrand
Deloitte.5 Its root is obviously in ABC, but it also brings in aspects from other
established techniques as appropriate, including Zero-Based Budgeting (ZBB) and
Priority-Based Budgeting. It focuses the budgets on activities “by defining the
activities underlying the financial figures in each function and using the level of
activity to determine how much resource should be allocated, how well it is being
managed and to explain variances from budget.”6 One major objective and benefit
of ABB is its ability to provide an ideal7 interface between long-term planning and
budgetary control, but it is data intensive and it lacks risk capabilities. Hence, a
conceptual merger between RBB and ABB would produce a truly viable solution.
To further boost performance while at the same time reducing the need for accu-
rate numbers, Monte Carlo simulations should be used. Monte Carlo simulations,
as shown in this book and elsewhere, offer the great benefit of turning uncertainty
into an asset and not a liability.
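As a minimal sketch of how the merged RBB/ABB idea might look (the activities, driver volumes, rates, and uncertainty bounds below are all invented for illustration): each budget line is an activity whose driver volume and rate are uncertain, and the simulation turns the budget into a distribution from which a funding level at a chosen confidence can be read.

import numpy as np

rng = np.random.default_rng(1)
N = 20_000  # Monte Carlo trials

# Invented activity budget lines: (expected driver volume, cost per unit).
activities = {
    "order handling":   (1200, 15.0),  # driver: number of orders
    "machine setups":   (300, 80.0),   # driver: number of setups
    "customer support": (5000, 4.0),   # driver: number of support calls
}

total = np.zeros(N)
for volume, rate in activities.values():
    # Driver volumes are assumed uncertain by +/-20%, rates by +/-10%.
    v = rng.triangular(0.8 * volume, volume, 1.2 * volume, N)
    r = rng.triangular(0.9 * rate, rate, 1.1 * rate, N)
    total += v * r

print(f"Expected activity-based budget: {total.mean():,.0f}")
print(f"Budget meeting a 90% confidence level: {np.percentile(total, 90):,.0f}")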
This way of doing budgeting, or continuously updating forecasts, will greatly reduce the costs of performing the budgeting/forecasting tasks.
All this translates into a more effective, relevant, and less costly budgeting/
forecasting process. With a risk-based budgeting process, the next step would be
to also make some of the cost accounting information risk-based. Of course, the
cost accounting activities revolving around calculating the costs after the fact need
not be risk-based; they should, however, incorporate measures of uncertainty.
What I am talking about is when the Bill of Materials (BOM) and Bill of
Activities (BOA) are set up before production. Then it would be useful to under-
stand which risks are present in terms of either underestimating the costs prior to
production or increasing the costs during production. Such information is largely
ignored today, and all the missed budgets should therefore come as no surprise.
Organizations simply do not operate according to plans, because of either internal problems and variations or external factors. In any case, it would be helpful to
think about risks before it is too late so that the worst problems can be either pre-
vented or mitigated.
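A sketch of the same thinking applied to a BOM set up before production (the line items, quantities, and the asymmetric uncertainty are invented): each item's unit cost gets an uncertainty distribution, and the simulation reports how likely the deterministic estimate is to be too low.

import numpy as np

rng = np.random.default_rng(7)
N = 20_000

# Invented BOM lines: (quantity, estimated unit cost). Unit costs are assumed
# more likely to rise than to fall, hence the (-5%, +15%) triangular bounds.
bom = {"casting": (2, 40.0), "fasteners": (30, 0.5), "assembly hour": (3, 60.0)}

deterministic = sum(qty * cost for qty, cost in bom.values())
simulated = np.zeros(N)
for qty, cost in bom.values():
    simulated += qty * rng.triangular(0.95 * cost, cost, 1.15 * cost, N)

p_low = (simulated > deterministic).mean()
print(f"Deterministic estimate: {deterministic:.0f}")
print(f"Probability the estimate is too low: {p_low:.0%}")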
So far, this is only an idea. Future research may prove it wrong or right, or
something in between. My feeling is that it may well prove useful, because after
all it is not a particularly novel idea; it is simply a matter of applying a body of
knowledge from one field to another field and making some minor adjustments. It
is time to stop restricting risk management ideas to engineering and finance, and
to employ them on a wider basis where they can add value. Cost management
seems to be one obvious place, and some people have probably already thought
about it to some extent.
However, the greatest impact from introducing risk management is probably on
the management process itself, because the management process is the pivotal
point of all major decisions in organizations. As Peter F. Drucker said:
The first duty of business is to survive and the guiding principle of business
economics is not the maximization of profits; it is the avoidance of loss.
Avoiding loss is the primary objective of risk management. Thus, risk man-
agement must become an integral part of the entire management process—from
the definition of objectives and strategies to the follow-up part. By using the
approach discussed in this book, significant decision support can be provided, but
this is not just a question of management tools. Risk management also requires
education and a change of mind-set. As Dag Hammarskjöld said:
The longest journey is the journey inward.
A market forecast may prove wrong because the future turned out not to resemble the past, as IBM experienced. A product launch may fail because customer preferences were misjudged, as happened with the New Coke flop by Coca-Cola. A cost estimate may turn out to be
way too low because hidden costs and risks were not considered, such as when
Exxon chose to use single-hull tankers to save money and ended up paying billions
in cleanup and fines. These are all examples of a small, unpredictable issue turn-
ing a situation upside down. Numerous examples exist; what they all have in com-
mon is that the complexity of the situation was not considered well enough prior
to the catastrophe. In hindsight, it is easy to see what went wrong: IBM failed because they assumed the future would be like the past (a common assumption in most forecasting); Coca-Cola thought that people bought Coke for the taste alone.
A simple risk analysis would have shown Exxon the madness of its actions.
Unfortunately, no means exists for eliminating such erroneous analyses and
decisions. However, I believe that one reason such problems arise is that many
analyses tend to appear more precise than they are, so decision-makers believe they
know more than they actually do. Such analyses focus on accuracy and answers
rather than approximations and understanding. The result is that organizational self-delusion sets in and hidden assumptions become prolific. It is useful to think in terms of
Zadeh’s Law of Incompatibility:
As complexity rises, precise statements lose meaning and meaningful state-
ments lose precision.
This is a profound insight. It implies that since uncertainty is inevitable, it is
outright dangerous and deceptive to reduce the uncertainty, for example, in an
analysis by making it appear precise. Similarly, grand plans for strategies and budgets are bound to fail because the more precise they are, the less meaningful they become. It is, for example, interesting to notice8 that “CEO Superman”9 Jack Welch of General Electric (GE), during his 20-year reign, focused on relatively
simple yet adaptable strategies without grand action plans and the like. Yet, accord-
ing to Stern Stewart & Company’s new Wealth Added Index™ (WAI), GE added
the most wealth ($226.8 billion) of all listed companies in the world from 1996 to
2000. Instead of viewing uncertainty as an enemy, GE used uncertainty to its
advantage by rapidly responding to new opportunities or threats. A good strategy
is more concerned about what not to do than what to do, in my opinion. Similarly,
a cost assessment should be carried out according to the succinct phrase: “It is bet-
ter to be approximately right than exactly wrong.” The point is that we should seek
a reliable solution space (an approximation) and not a point solution (exact and
accurate). After all, with a solution space, we know what is likely and what is not,
whereas with a point solution, we have no idea except that it is definitely wrong.
The interesting thing is that, on the one hand, an Activity-Based LCC model is
built up around cause-and-effect relationships, while on the other hand, the Monte
Carlo methods actually introduce uncertainty on purpose. Hence, the uncertainty is turned from a liability into an asset.
NOTES
1. According to H.T. Johnson, “It’s Time to Stop Overselling Activity-Based Concepts,” Management Accounting, September 1992.
2. See, for example, J. Emblemsvåg and B. Bras, “Process Thinking: A New Paradigm for Science and Engineering,” Futures 32 (7), 2000, pp. 635–654.
3. L.A. Joia, “Measuring Intangible Corporate Assets: Linking Business Strategy with Intellectual Capital,” Journal of Intellectual Capital 1 (1), 2000, pp. 68–84.
4. M.C. Jensen, “Corporate Budgeting Is Broken: Let’s Fix It,” Harvard Business Review 79, No. 10, November 2001, pp. 94–101.
5. According to J. Brimson and R. Fraser, “The Key Features of ABB,” Management Accounting, January 1991.
6. See M. Morrow and T. Connolly, “The Emergence of Activity-Based Budgeting,” Management Accounting, February 1991.
7. According to M. Harvey, “Activity-Based Budgeting,” Certified Accountant, July 1991, pp. 27–30.
8. J. Welch and J.A. Byrne, Jack: Straight from the Gut. New York: Warner Business Books, 2001, p. 479.
9. According to P.F. Drucker, “The Next Society: A Survey of the Near Future,” The Economist 361 (8246), 2001.
10. K.J. Arrow, Social Choice and Individual Values. New Haven, CT: Yale University Press, 1963.
11. D. Jankowicz, “Why Does Subjectivity Make Us Nervous? Making the Tacit Explicit,” Journal of Intellectual Capital 2 (1), 2001, pp. 61–73.
12. S. Sailsbury, “Failures of My Lending Career,” Journal of Commercial Lending 67 (2), 1984.
APPENDIX A
MONTE CARLO SIMULATION
EXAMPLE
Do not put your faith in what statistics say until you have carefully con-
sidered what they do not say.
William W. Watt
This appendix illustrates how Monte Carlo methods can be used to aid informa-
tion management, uncertainty analysis (and consequently risk management), and
cost management. To do that, a very simple example is used, which is structured
in three parts: (1) the problem definition, (2) the hypotheses to be tested, and finally (3) the results
and discussion.
PROBLEM DEFINITION
Assume that Company X has two products, P1 and P2, with the costs of materials and labor as shown in Table A.1.

Table A.1 Material and Labor Costs of Products P1 and P2

Product    Material Cost    Labor Cost
P1         1.00             1.00
P2         1.00             2.00
Company X wants to manage its costs from three different perspectives: uncer-
tainty/forecasting, information management, and continuous improvement. To do
that, Monte Carlo methods are employed in two different ways:
1. For cost management and the corresponding continuous improvement efforts, the assumption cells are modeled as shown in Figure A.1, where all the uncertainty distributions are modeled as triangular distributions with ±10 percent upper and lower bounds, respectively. Note that the bound can, in principle, be any value; a ±5 percent triangular distribution is just as good as a ±10 percent one. The point is to be consistent and for the chosen distributions to be symmetric and bounded. The purpose is to find out which factors have the greatest impact on the total costs and, subsequently, what information is most critical (and should therefore be paid extra attention). This is done by tracing the different contributors using sensitivity analyses. This case is consequently referred to as the tracing case.

Figure A.1 Modeling assumption cells for cost and information management purposes. (Four triangular distributions: 0.90–1.10 around the P1 material, P1 labor, and P2 material costs of 1.00, and 1.80–2.20 around the P2 labor cost of 2.00.)
2. For uncertainty analysis and the corresponding information management, we try to find out how the uncertainty affects the forecasts and what information should be pursued in order to reduce this uncertainty. In this case, which is referred to as the uncertainty case, we model the uncertainty as accurately as possible. In this example, we simply choose triangular distributions so that we can compare the two different ways of employing Monte Carlo simulations. From Figure A.2, we see that we have chosen two ±10 percent triangular distributions and two (−5 percent, +10 percent) triangular distributions; a sketch of how both cases can be set up follows this list.
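The book builds such models in a spreadsheet with assumption and forecast cells; as a rough stand-in, the two cases might be set up as follows (a Python sketch; the helper tri and the dictionary names are mine, not the book's):

import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # trials; Table A.2 repeats the exercise with 100,000

def tri(center, low_pct, high_pct):
    # Triangular distribution around `center` with relative bounds, so
    # tri(2.0, -0.05, 0.10) spans 1.90 to 2.20 with mode 2.00.
    return rng.triangular(center * (1 + low_pct), center,
                          center * (1 + high_pct), N)

# Tracing case: consistent, symmetric +/-10 percent bounds everywhere.
tracing = {
    "P1 material": tri(1.0, -0.10, 0.10), "P1 labor": tri(1.0, -0.10, 0.10),
    "P2 material": tri(1.0, -0.10, 0.10), "P2 labor": tri(2.0, -0.10, 0.10),
}

# Uncertainty case: the labor costs carry asymmetric (-5, +10) percent
# uncertainty, per Figure A.2.
uncertainty = {
    "P1 material": tri(1.0, -0.10, 0.10), "P1 labor": tri(1.0, -0.05, 0.10),
    "P2 material": tri(1.0, -0.10, 0.10), "P2 labor": tri(2.0, -0.05, 0.10),
}
# The forecast cells are simply material + labor for each product.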
We note that information management in the two cases has two distinct roles.
In the tracing case, the information management revolves around the issue of what
information is most crucial to have with respect to managing costs, while in the
uncertainty case, the issue is what information generates the most uncertainty, and
risk for that matter, in the model.
Next, we put forth some hypotheses that we intend to test using Monte Carlo
simulations.
Figure A.2 Modeling assumption cells for uncertainty and information management purposes. (Triangular distributions: 0.90–1.10 around the P1 and P2 material costs, 0.95–1.10 around the P1 labor cost, and 1.90–2.20 around the P2 labor cost.)
HYPOTHESES TO BE TESTED
The question is how will these two different ways of using Monte Carlo simula-
tions affect the forecast cells? Since this example is simple, we pose the following
four hypotheses:
1. For P1, in the tracing case, the material cost and the labor cost should be
found equally important since the material cost is equal to the labor cost.
2. For P2, in the tracing case, the labor cost should be found twice as impor-
tant as the material cost since the labor cost is twice the material cost.
3. For P1, in the uncertainty case, the material cost should be found more
important than the labor cost since the material cost is equal to the labor cost
in magnitude but associated with a larger uncertainty.
4. For P2, in the uncertainty case, the material cost and the labor cost should
be found unequally important because the magnitudes and uncertainties are
different.
Furthermore, in Monte Carlo simulations, the number of trials is very impor-
tant for the accuracy. Hence, we put forth two more hypotheses:
5. When we increase the number of trials, the random effects that appear in the
sensitivity charts should be reduced.
6. When we add two symmetric distributions, the forecast distribution should
be symmetric as well. That is, the skewness should be zero.
If we find these six hypotheses to be fulfilled, we can conclude that the Monte Carlo methods can be employed as claimed, provided we are aware that their accuracy depends on the number of trials performed. The results are discussed next.
RESULTS AND DISCUSSION

(Sensitivity charts for the tracing case: target forecasts Tracing: P1 Cost and Tracing: P2 Cost, measured by rank correlation.)
Table A.2 How the Number of Trials Affects Random Effects and Skewness

                        Uncertainty       Tracing           Uncertainty        Tracing
Statistics              P1      P2        P1      P2        P1       P2        P1       P2
Trials                  10,000  10,000    10,000  10,000    100,000  100,000   100,000  100,000
Mean                    2.02    3.03      2.00    3.00      2.02     3.03      2.00     3.00
Median                  2.02    3.03      2.00    3.00      2.02     3.03      2.00     3.00
Standard deviation      0.05    0.07      0.06    0.09      0.05     0.07      0.06     0.09
Variance                0.00    0.01      0.00    0.01      0.00     0.01      0.00     0.01
Skewness                0.07    0.18      0.02    0.00      0.05     0.18      0.00     0.00
Range minimum           1.86    2.83      1.83    2.74      1.86     2.81      1.81     2.72
Range maximum           2.18    3.28      2.18    3.27      2.19     3.29      2.19     3.28
Range width             0.32    0.45      0.35    0.53      0.34     0.48      0.38     0.56
Mean statistics error   0.00    0.00      0.00    0.00      0.00     0.00      0.00     0.00
(Sensitivity charts for the uncertainty case: target forecasts Uncertainty: P1 Cost and Uncertainty: P2 Cost, measured by rank correlation.)
Before we leave this topic, let’s summarize the different models we can have:
● Tracing models: Models where uncertainty is added consistently using bounded and symmetric distributions only. The uncertainty distributions in tracing models provide information regarding the possible distortion problems in the models. The sensitivity charts are used to identify the critical success factors and what data are most important for the model in terms of cause and effect.
● Uncertainty models: Models where the true uncertainty is modeled as accurately as possible. The uncertainty distributions in uncertainty models give information with respect to how the uncertainty in the assumption cells affects the forecast cells. The sensitivity charts are employed to identify what information should be gathered to reduce the uncertainty in the model and what data are most uncertain.
Tracing models are important because they show that adding uncertainty to a
model can make it more useful in identifying improvements. That is, by increas-
ing the uncertainty, we can lower the risk of making ill-fated decisions.
In short, we do not need accurate data. We need satisfactory process descriptions
that reflect the cause-and-effect relationships and data that are roughly correct.
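Continuing the sketch of the assumption cells given after the two-case list above, the sensitivity and skewness figures reported in this appendix might be reproduced as follows (assuming SciPy is available; exact values will differ from Table A.2 by sampling error):

from scipy.stats import spearmanr, skew  # assumes SciPy is installed

def report(case_name, cells):
    # Skewness of each forecast cell and rank-correlation sensitivities,
    # the same measure the sensitivity charts report.
    for product in ("P1", "P2"):
        total = cells[f"{product} material"] + cells[f"{product} labor"]
        print(f"{case_name} {product} cost: skewness = {skew(total):+.3f}")
        for name in (f"{product} material", f"{product} labor"):
            rho, _ = spearmanr(cells[name], total)
            print(f"  rank correlation with {name}: {rho:+.2f}")

report("Tracing", tracing)
report("Uncertainty", uncertainty)

In the tracing case, the two P1 contributors should come out roughly equally important and the P2 labor cost clearly more important than the material cost, while noticeable skewness should appear only in the uncertainty case, in line with the hypotheses.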
APPENDIX B
SFI GROUP SYSTEM
The SFI group system is used in the shipping industry to categorize vessel parts
and components. SFI is distributed by Norwegian Shipping and Offshore Services
AS, and it is the most frequently used system in Norway.
(Table: the SFI main groups 1 through 8 with two-digit subgroups such as 27 Material protection, external; 28 Material protection, internal; 88 Common electrical system; and 89 Electrical distribution systems.)
GLOSSARY
For reference information concerning the various definitions, contact the author of
this book.
Action The various units of work of which activities are composed, such as individual tasks, jobs, steps, operations, or any other possible division of work.
Activity-Based Costing (ABC) A methodology that measures the cost and
performance of activities, resources, and cost objects. Resources are
assigned to activities, and then activities are assigned to cost objects based
on their use. ABC recognizes the causal relationships of cost drivers to
activities. ABC also adopts an attention-focusing, long-term, resource-
consumption orientation.
Activity-Based Management (ABM) A discipline that focuses on the
management of activities as the route to improving the value received by the
customer and the profit achieved by providing this value. This discipline
includes cost driver analysis, activity analysis, and performance
measurement. ABM draws on Activity-Based Costing (ABC) as its major
source of information.
Activity driver A measure of the consumption of an activity by another
activity or an assessment object. Activity drivers that measure the
consumption by an assessment object are also referred to as final activity
drivers, whereas activity drivers that measure the consumption of activities
by other activities are called intermediate activity drivers. Examples of
activity drivers are the amount of labor, the weight of a product, the number
of products, and so on.
Allocation 1. An apportionment or a distribution. 2. A process of assigning
cost to an activity or a cost object when a direct measure does not exist. For
example, assigning the cost of power to a machine activity by means of
machine hours is an allocation, because machine hours is an indirect
measure of power consumption. In some cases, allocations can be converted
to tracing by incurring additional measurement costs. Instead of using
machine hours to allocate power consumption, a company can place a
power meter on machines to measure actual power consumption. Note that
considerable confusion about this topic exists due to the early descriptions of ABC.
Value chain costing An activity-based cost model that contains all activities in
the value chain.
Variable cost Two distinct definitions exist depending on whether the term is
applied in volume-based systems or in activity-based systems:
1. Variable costs are costs that vary with the amount of output. Like fixed costs,
variable costs are also divided into two categories:
● The cost of goods sold, which covers materials, labor, and factory over-
head applied directly to production
● Costs that are not directly tied up in production but nevertheless vary
directly with volume, such as sales commissions, discounts, and delivery
expense
2. A cost element of an activity varies with changes in the volume of cost driv-
ers or activity drivers.
Volume-based costing An umbrella term for all costing methods that rely on the distinction between variable and fixed costs to determine the product costs. Because variable costs vary with the amount of output, and only a single, volume-related allocation base is used, the product costs strongly correlate with the production volume. Contribution margin costing and standard costing are two well-known volume-based costing methods.
ACRONYMS
Acronym Explanation
ABB Activity-Based Budgeting
ABC Activity-Based Costing
ABM Activity-Based Management
AHP Analytic Hierarchy Process
AVA Activity Value Analysis
BOA Bill of Activities
BOM Bill of Materials
BPR Business Process Reengineering
CAP Critical Assumption Planning
EMV Expected Monetary Value
EP Economic Profit
EU Expected Utility
EVA Economic Value Added
GAAP Generally Accepted Accounting Principles
HFO Heavy Fuel Oil
ISO (Greek for equal) International Organization for Standardization
JIT Just-in-Time
LCA Life-Cycle Assessment or Life-Cycle Analysis
LCC Life-Cycle Costing
LHS Latin Hypercube Sampling
MGO Marine Gas Oil
NPV Net Present Value
NVA Non-Value Added
PSV Platform Supply Vessel
QFD Quality Function Deployment
SPC Statistical Process Control
SQC Statistical Quality Control
RF Risk Function
SRS Simple Random Sampling
TQM Total Quality Management
VA Value Added
WACC Weighted Average Cost of Capital
INDEX
Johnson, H. Thomas, 26, 39, 83, 145
Just-In-Time (JIT) Costing, 41–43
Kaplan, Robert S., 26, 39, 83, 118, 294, 309
Kaufmann, Arnold, 59
Kennedy, John F., 215
Law of Incompatibility, 65, 283, 295
Law of large numbers, 86
Life cycle, 4, 16–24
  Customer perspective, 16–17
  Market, 18, 21–23
  Marketing perspective, 16
  Production perspective, 16
  Product, 17–21, 310
  Societal perspective, 17
Life Cycle Assessment (LCA), 27
Life cycle cost, 171, 287
  Categories of, 30–34
  Definitions, 29
Life Cycle Costing (LCC), 4, 23, 118, 145, 284–291
  Analogy, 36–37
  Cost accounting, 39–47
  Environmental, 19
  Engineering tool, 24–25
  Engineering cost method, 38
  Environmental tool, 26–27
  Industrial engineering approach, 38
  Management tool, 25–26
  Parametric, 37–38
  Purpose of, 24–27
Likelihood, 52
Linear goal programming, 41
Linguistic variables, 78
London Stock Exchange, 4
Lorenz, Edward, 63
Lutz, Robert, 7, 45, 200
Maintenance, defined, 219
Malcolm Baldrige Award, 45, 108
Management process, 293–294
Market risk premium, 163
Market value, 169, 289
Material demanufacture, 20
Mathematical functions, 159
Microsoft, 169, 289, 294
Microsoft Excel®, 130, 146
Minimum Attractive Rate of Return (MARR), 162
Monte Carlo simulation/methods, 10, 13, 61, 78, 81–89, 288, 291–296, 311
  Case, 196–200, 213–214
  Defined, 85
  Error, 82, 199, 301
  Example, 160–161, 298–304
  Number of trials, 86–87, 198, 300–301
  Simple Random Sampling (SRS), 83, 85, 88, 197
  Latin Hypercube Sampling (LHS), 83, 88–89, 197
  Time of simulation, 190
  Variance reduction sampling techniques, 87–88
Møre Research, 218
Moving baseline, 170, 251
National Bureau of Standards, 84
Net assets, 163
Net Present Value (NPV), 162, 169, 190, 267–270
New Coke, 33, 54, 295
New South Wales Government Asset Management Committee, 38
Non-governmental organizations (NGO), 165
Norske Kroner (NOK) versus USD ($), 184, 224
Norton, David P., 294
NRC Governing Board on the Assessment of Risk, 77
Numerical methods, 82
Nursing home, 212
Nygaaren, Bjarne, 228
Objectives, of model, 151
Off-hire, 219
Ohno, Taiichi, 45, 118
Open systems, 24, 37, 62–63, 310
Opportunity, 52–53
Oslo International Airport Gardermoen, 184–188
Pareto analysis, 118
Payback method, 170
Performance measures, 3, 11, 103, 110–111, 160