
Forecast Value Added Analysis: Step-by-Step

WHITE PAPER


Table of Contents
Introduction
What Is Forecast Value Added?
The Naïve Forecast
Sample Results
Why Is FVA Important?
FVA Analysis: Step-by-Step
Mapping the Process
Collecting the Data
Analyzing the Process
Reporting the Results
Interpreting Results
Further Application of FVA Analysis
Case Studies
Academic Research
Home Furnishings Manufacturer
Pharmaceutical
Automotive Supplier
Technology Manufacturer
Specialty Retailer
Food and Beverage
Lean Approach to Forecasting
Bibliography
Appendix: Sample SAS Code for Creating FVA Report
Sample SAS Code for FVA Report

Content for this white paper was provided by Michael Gilliland, Product Marketing Manager at SAS.


Introduction
Traditional forecasting performance metrics, such as mean absolute percent error (MAPE), tell you the size of your forecast error. However, these metrics tell you nothing about how efficient you are at forecasting, what your error should be, or whether your efforts are making the forecast better or worse. To determine whether your forecasting efforts are making things better, I advocate using a simple metric called forecast value added, or FVA. There is nothing earth-shattering about this metric; it just gives a name to a fundamental method of science that is too often overlooked in business.

Consider this example: Suppose a pharmaceutical company announces a pill for colds and touts that after 10 days, your cold will be gone. Does this sound like a grand medical breakthrough? Will you be buying the pill, or investing in the company? Or has the description of the pill's curative power raised some suspicion? It should. Doesn't a cold go away after 10 days anyway? What value is such a pill adding? Shouldn't we require demonstration that the pill does something worthwhile?

This is exactly the kind of situation we face in forecasting; but in forecasting, we aren't nearly suspicious enough. Why do we assume all our elaborate systems and processes are adding value by making the forecast better? What would happen if we did away with them and used just the simplest of forecasting methods; what results would we achieve then? These are the sorts of things MAPE, by itself, will not tell you. But these are the sorts of things that FVA analysis lets you investigate.

This white paper defines and illustrates the FVA calculation and provides the details for conducting FVA analysis at your organization. Because forecasting is often a visible and politicized function, and FVA results can be embarrassing, I also discuss how to most effectively present your findings to colleagues and management. The white paper concludes with case studies of several companies that have applied FVA analysis and publicly presented their results.


What Is Forecast Value Added?


So, what is FVA analysis, and why do we do it? FVA is a metric for evaluating the performance of each step and each participant in the forecasting process. It is defined as the change in a forecasting performance metric (whatever metric you happen to be using, such as MAPE, forecast accuracy or bias) that can be attributed to each particular step and participant in your forecasting process. FVA is a common-sense approach that is easy to understand. It expresses the results of doing something versus having done nothing. FVA can be either positive or negative, telling you whether your efforts are adding value by making the forecast better, or whether you are just making things worse!

FVA is the change in a forecasting performance metric that can be attributed to a particular step or participant in the forecasting process.

[Figure 1: Simple forecasting process. Demand History feeds a Statistical Model, followed by an Analyst Override.]

Let's look at an example with a very simple forecasting process. Perhaps the simplest process is to read the demand history into a statistical forecasting model that generates a forecast, and then have an analyst review and (if necessary) override the statistical forecast. In FVA analysis, you would compare the analyst's override to the statistically generated forecast to determine if the override makes the forecast better. FVA analysis also compares both the statistical forecast and the analyst forecast to what's called a naïve forecast. (We discuss naïve forecasts in the next section.)

Suppose you found that the statistical forecast achieved a mean absolute percent error of 25 percent, and that the analyst overrides actually reduced MAPE to 24 percent. In this case, we'd say that the extra step of having an analyst review and adjust the statistical forecasts is adding value by making the forecast better.

The reason we measure FVA is to identify waste and inefficiency in the forecasting process. When FVA is negative (that is, when a process activity is making the forecast worse), that activity is clearly a waste and should be eliminated. Eliminating waste saves company resources that can be directed to more productive activities, and you get better forecasts by eliminating those activities that just make the forecast worse. But when FVA is positive, as in this example, do we conclude that the step or participant (here, the analyst override) should be kept in the process? Not necessarily.
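The arithmetic behind this example is simple enough to sketch in a few lines of SAS code. This is a minimal sketch only, not part of any SAS product; the data set work.forecasts and its variables (actual, stat_fcst, override_fcst) are hypothetical names for illustration.

/* Compute MAPE for the statistical forecast and the analyst override, */
/* then FVA as the change in MAPE attributable to the override step.   */
data ape;
   set work.forecasts;                        /* one row per item/period */
   if actual > 0 then do;                     /* avoid dividing by zero  */
      ape_stat     = abs(stat_fcst     - actual) / actual;
      ape_override = abs(override_fcst - actual) / actual;
   end;
run;

proc means data=ape noprint;
   var ape_stat ape_override;
   output out=mape mean=mape_stat mape_override;
run;

data _null_;
   set mape;
   /* Positive FVA: the override step reduced MAPE, i.e., added value */
   fva_override = 100 * (mape_stat - mape_override);
   put "Override FVA (percentage points): " fva_override 6.1;
run;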


The mere fact that a process activity has positive FVA doesn't necessarily mean that you should keep it in your process. You need to compare the overall financial benefits of the improvement to the cost of that activity. Is the extra accuracy increasing your revenue, reducing your costs or making your customers happier? In this example, the analyst override did reduce error by one percentage point. But having to hire an analyst to review every forecast can get costly, and if the improvement is only one percentage point, is it really worth it?

The Naïve Forecast


FVA analysis is based on a simple, scientific method. When a pharmaceutical company comes up with a new pill, it must demonstrate that the pill is safe and effective. Part of this demonstration is to run a controlled experiment, such as finding 100 people with colds, randomly dividing them into two groups, and giving one group the new pill and the other a placebo. If you find that those who get the pill overcome their cold much faster, and suffer less severe symptoms, then you may conclude that the pill had an effect. If there is little difference between the groups (perhaps everyone overcomes the cold within 10 days, whether they received the pill or not), you can probably conclude that the pill adds no value.

The nice thing about applying this approach to forecasting is that we have a placebo: something called the naïve forecast. Per the glossary of the Institute of Business Forecasting (IBF), a naïve forecast is something simple to compute, requiring the minimum of effort and manipulation to prepare a forecast. There are several commonly used examples, illustrated in the sketch that follows:

• The random walk, also called the no-change model, just uses your last-known actual value as the future forecast. For example, if you sold 12 units last week, your forecast is 12. If you sell 10 this week, your new forecast becomes 10.

• For the seasonal random walk, you can use something such as the same period from a year ago as your forecast for this year. Thus, if last year you sold 50 units in June and 70 units in July, your forecast for June and July of this year would also be 50 and 70.

• A moving average, or other simple statistical formula, is also suitable to use as your naïve model, because it is also simple to compute and takes minimal effort. The duration of the moving average is up to you, although a full year of data (12 months or 52 weeks) has the advantage of smoothing out any seasonality.
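All three naïve models can be computed in a single data step. This is an illustrative sketch under stated assumptions: work.history, month and sales are hypothetical names, and the data is assumed to be monthly and sorted by date.

/* Naive forecasts for the next period, built from lagged actuals */
data naive;
   set work.history;
   nf1  = lag(sales);               /* random walk: last known actual        */
   nf2  = lag12(sales);             /* seasonal random walk: year-ago actual */
   ma12 = mean(lag1(sales), lag2(sales),  lag3(sales),  lag4(sales),
               lag5(sales), lag6(sales),  lag7(sales),  lag8(sales),
               lag9(sales), lag10(sales), lag11(sales), lag12(sales));
                                    /* 12-month moving average of past actuals */
run;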


Consider the following graphical views of naïve forecasts with monthly data (Figures 2-5): If you use the random walk as your naïve model (as shown in Figure 2), then the forecast for all future periods is the last-known actual, which in this case is 40 units in May 2008.

Figure 2: Random walk model (forecast = last known actual).

If you use a seasonal random walk as your naïve model (as shown in Figure 3), then the forecast for all future periods is the actual from the same period in the prior year. Therefore, June 2008 through May 2009 is forecast to look exactly like June 2007 to May 2008.

Figure 3: Seasonal random walk model (forecast = actual from same period last year).

Figure 4 shows a 12-month moving average for the naïve forecast, which happens to be 55.4 for this sales data.


Figure 4: Moving average model (forecast = moving average of actuals).

As you can see, you may get wildly different forecasts from different choices of the naïve model (shown in Figure 5). Depending on the nature of the pattern you are trying to forecast, some naïve models may forecast much better than others. So which one to choose? One suggestion, from Emily Rodriguez, is to use a composite of naïve models.

Figure 5: Forecasts from different naïve models.


Sample Results
Figure 6 gives an example of an FVA report, showing how you would compare each process step to the naïve model.

Figure 6: An FVA report.

In this case, the naïve model was able to achieve MAPE of 25 percent. The statistical forecast added value by reducing MAPE five percentage points to 20 percent. However, the analyst override actually made the forecast worse, increasing MAPE to 30 percent. The override's FVA was negative: five percentage points worse than the naïve model, and 10 percentage points worse than the statistical forecast. You may wonder how adding human judgment to the statistical forecast could possibly make it worse. This actually happens all the time, and recent academic research (discussed later in this paper) investigates the topic.

Why Is FVA Important?


We've seen how FVA is defined and measured, and an example of how you can report it. But why is FVA such an important metric? FVA is important because it helps you identify waste in your forecasting process. By identifying and eliminating the activities that do not add value (those activities that are not making the forecast better), you can streamline your process. FVA helps you ensure that any resources you've invested in the forecasting process, from computer hardware and software to the time and energy of management, are helping. If they are not helping, then redirect the resources and the time to activities that are doing something worthwhile. The nice thing about FVA is that when you eliminate those activities that are just making the forecast worse, you can actually get better forecasts for free!


FVA can also be used as a basis for performance comparison. Suppose you are a forecasting manager and have a bonus to give to your best forecast analyst. The traditional way to determine the best one is to compare their forecast errors. Based on this traditional analysis, as shown in Figure 7, Analyst A is clearly the best forecaster and deserves the bonus. But is the traditional analysis the correct analysis?

Figure 7: Comparing analyst performance traditional approach.

What if we consider additional information about each analyst and the types of products that they have been assigned to forecast?

Figure 8: Comparing analyst performance FVA approach.

As shown in Figure 8, Analyst A had the lowest MAPE, but we must note the kinds of products that were assigned: long-running, basic items with no seasonality or promotional activity, no new items and low demand variability. In fact, an FVA analysis might reveal that a naïve model could have forecast this sort of demand with a MAPE of only 10 percent, and that Analyst A only made the forecast worse! For Analyst B, demand was more difficult to forecast, with factors such as promotional activity and new items that make forecasting so difficult. FVA analysis reveals that this analyst added no value compared to a naïve model, but at least this person didn't make the forecast worse.


What FVA analysis reveals is that only Analyst C deserves the bonus. Even though Analyst C had the worst forecasts, with a MAPE of 40 percent, Analyst C was challenged with items that are very difficult to forecast: short-lifecycle fashion items with high promotional activity and high demand variability. Only Analyst C actually added value compared to a naïve model and made the forecast better.

This example reveals another thing to be wary of in traditional performance comparison, as you see in published forecasting benchmarks. Don't compare yourself, or your organization, to what others are doing. The organization that achieves best-in-class forecast accuracy may do so because it has easier-to-forecast demand, not because its process is worthy of admiration. The proper comparison is your performance versus a naïve model. If you are doing better, then that is good. But if you or your process is doing worse than a naïve model, then you have some serious (but fixable) problems.

MAPE is probably the most popular forecasting performance metric, but by itself it is not legitimate for comparing forecasting performance. MAPE tells you the magnitude of your error, but it does not tell you what error you should be able to achieve. By itself, MAPE gives no indication of the efficiency of your forecasting process. To understand these things, you need to use FVA analysis.

FVA Analysis: Step-by-Step


We've seen why FVA is an important metric. Now we tackle the nuts and bolts of how to conduct FVA analysis at your organization.

Mapping the Process


The first step in FVA analysis is to understand and map your overall forecasting process. The process may be very simple (as shown in Figure 9A), with just a statistically generated forecast and a manual override. Or (as shown in Figure 9B), it can be an elaborate consensus or collaborative process, with participation from different internal departments like sales, marketing and finance. It might also include inputs from customers or suppliers, if you are using the collaborative planning, forecasting and replenishment (CPFR) framework.


[Figures 9A-9B: Simple and complex forecasting processes. 9A: Demand History feeds a Statistical Model, followed by an Analyst Override. 9B: Demand History and Causal Factors feed the Statistical Model; the Analyst Override is followed by a Consensus step with input from Sales, Marketing, Finance and P&IC; Exec Targets and Supply Constraints feed an Executive Review, which produces the Approved Forecast.]

Many organizations also have a final executive review step, where general managers, division presidents, or even CEOs get a chance to change the numbers before approving them. This can translate into a great deal of high-cost management time spent on forecasting. But does it make the forecast any better? That is what we are trying to find out.

Collecting the Data


After you identify all of the steps and participants in your forecasting process and map the process flow, you must gather data. The data for FVA analysis is the forecast provided by each participant and each step of the process. You need to gather this information at the most granular level of detail (such as an item at a location), as shown in the Level of Forecasting Hierarchy columns in Figure 10. You also need to record the time bucket of the forecast, which is typically the week or month that you are forecasting. In addition, you must record the actual demand in the time bucket that you were trying to forecast.

[Figure 10: FVA data elements for simple process. Columns: level of forecasting hierarchy (item, location), time bucket, forecasts of process steps and participants, and actual demand.]


The Forecast of Process Steps and Participants columns contain the forecasts provided by each step and participant in the process. In this example, for the very simple process we showed earlier, you only need to gather the naïve forecast (using whatever naïve model you decide to use), the statistical forecast generated by your forecasting software, and the final forecast that includes any manual overrides made by the forecast analyst. Figure 10 also shows what an FVA data set would look like, with variable fields across the top and data records in each row.

If you want to do a one-time FVA report for just a few items, you could do this much in Excel. However, for a thorough and ongoing FVA analysis, and to make FVA a routine metric reported every period to management, you need much more powerful data handling, data storage, analytics and reporting capabilities than Excel provides. Both SAS Analytics Pro and SAS Visual Data Discovery are perfect entry-level solutions for FVA analysis. For SAS Forecast Server customers, the Appendix contains sample code for generating a simple FVA report as a stored service.

A thorough and ongoing FVA analysis requires you to capture the forecast for each participant, at each step and in every period for all of your item and location combinations. This will quickly grow into a very large amount of data to store and maintain, so you will need software with sufficient scalability and capability. Analysis on this scale is definitely not something you do in Excel.
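As a concrete illustration of the layout sketched in Figure 10, the step below builds a tiny FVA data set inline. All names and numbers are made up for illustration.

/* One row per item/location/time bucket, with the forecast from each */
/* process step and the actual demand for that bucket.                */
data fva_input;
   input item $ location $ month :monyy7. naive_fcst stat_fcst final_fcst actual;
   format month monyy7.;
   datalines;
A100 DC01 JUN2012 120 115 110 108
A100 DC01 JUL2012 108 112 125 109
A100 DC02 JUN2012  45  50  50  61
;
run;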

Analyzing the Process


You can do FVA analysis with whatever traditional performance metric you are currently using, be that MAPE, forecast accuracy or anything else. Because FVA measures the change in the metric, it isn't so important which metric you use. You must also decide what to use as your naïve forecasting model. The standard examples are the random walk, commonly known as Naïve Forecast 1 (NF1), and the seasonal random walk, known as Naïve Forecast 2 (NF2). A purist might choose NF1 or NF2, but it is fine to use other models, such as a moving average or a simple exponential smoothing model. You might also follow Emily Rodriguez's suggestion and use a composite for your naïve forecast, such as the average of NF1 and NF2. Recall that per the definition of a naïve forecast, it should be something simple to compute, requiring the minimum amount of effort. Some organizations have gone so far as to interpret this to mean that any automatically generated statistical model is suitable to use as a naïve model. Their argument is that once these more sophisticated models are created, there is no additional effort or cost to use them.
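Extending the earlier naïve-model sketch, a composite naïve forecast is just the period-by-period average of NF1 and NF2 (variable names carried over from that sketch; all hypothetical):

data naive;
   set naive;
   nf_composite = mean(nf1, nf2);   /* average of random walk and seasonal random walk */
run;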


Comparing results to an automatically generated statistical forecast is a good practice. But it is always worthwhile to use a truly naïve model, such as NF1, for your ultimate point of comparison. You can't just assume that your statistical model is better than a random walk. Naïve models can be surprisingly difficult to beat. Some statistical forecasting software uses unsound methods, such as blindly picking models that best fit the historical data rather than selecting models that are most appropriate for good forecasting.

Reporting the Results


When reporting your results, remember that there are many comparisons to make. You probably don't have to report every single pair-wise comparison, but you should at least report FVA for the major chronological steps in the process. Thus you would probably want to show FVA for:

• Statistical forecasts versus naïve forecasts.
• Analyst overrides versus the statistical forecasts.
• Consensus or collaborative forecasts versus the analyst overrides.
• Executive-approved forecasts versus the consensus forecasts.

You will probably find that some of these steps add value, and others don't. When a particular step or participant is not adding value, you should first try to understand why. For example, do your statistical models need updating for better performance? Does your analyst need additional experience or training on when to make judgmental overrides, and when it's best to leave the statistical forecast alone? Do certain participants in your consensus process bias the results because of their own personal agendas? Do your executives only approve forecasts that are meeting the operating plan, and revise those forecasts that are falling below plan?

In some cases, it may be possible to improve performance with some education or technical training for the participants. In other cases, the only solution is to eliminate the problematic step or participant from the process. Most people are not going to complain when they are excused from the forecasting process. There aren't that many people who actually like being responsible for forecasting!

There is no rigid, fixed way to report your FVA results, and you are encouraged to be creative in your presentation. However, the stairstep table in Figure 11 is a good way to start. On the left side, you list the process steps or participants and their performance in terms of MAPE, or accuracy, or whatever metric you are using. Columns to the right show the FVA from step to step in the process. For a more elaborate process, the report layout is the same. You simply use more rows to show the additional process steps, and more columns to show the additional comparisons between steps. These reports should also indicate the hierarchical level at which you are reporting, such as the individual item and location, or an aggregation (such as product category by region). You would also indicate the time frame covered in the data. (A sketch of code for computing the stairstep comparisons follows Figure 11.)


[Figure 11: FVA report for complex process (ITEM=xxx, LOCATION=xxx, TIME: 06/2011-05/2012). Rows list the process steps from Figure 9B, from the naïve forecast through the approved forecast, with MAPE for each step and stairstep FVA comparisons between steps.]
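The stairstep numbers themselves are straightforward to compute once each step's absolute percentage errors are in one data set. This is a minimal sketch, reusing the hypothetical ape_* naming from the earlier example and assuming it has been extended with consensus and approved steps:

/* Average each step's APE into a MAPE, then take differences between */
/* successive steps to get the stairstep FVA columns.                 */
proc means data=ape noprint;
   var ape_naive ape_stat ape_override ape_consensus ape_approved;
   output out=mape mean= / autoname;
run;

data stairstep;
   set mape;
   fva_stat_vs_naive    = 100*(ape_naive_mean     - ape_stat_mean);
   fva_override_vs_stat = 100*(ape_stat_mean      - ape_override_mean);
   fva_consensus_vs_ovr = 100*(ape_override_mean  - ape_consensus_mean);
   fva_approved_vs_cons = 100*(ape_consensus_mean - ape_approved_mean);
run;

proc print data=stairstep noobs;
run;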

This style of report should be easy to understand. We see that the overall process is adding value compared to the naïve model, because in the bottom row the approved forecast has a MAPE 10 percentage points lower than the MAPE of the naïve forecast. However, it also shows that we would have been better off eliminating the executive review step, because it actually made the MAPE five percentage points worse than the consensus forecast. It is quite typical to find that executive tampering with a forecast just makes it worse.

As mentioned previously, FVA versus the naïve forecast can vary depending on which naïve model you choose. For example, if you are dealing with seasonal demand, then a seasonal random walk may provide much better forecasts than a plain random walk. The right thing to do is decide which naïve model, or composite of naïve models, you are going to use, and then use it consistently throughout your analysis. Also, be aware that naïve forecasts can be surprisingly difficult to beat.

When you report your results, they may be rather embarrassing to those participants who are failing to add value. Therefore, present the results tactfully. Your objective is to improve the forecasting process, not to humiliate anyone. You may also want to present initial results privately, to avoid public embarrassment for the non-value adders.


Interpreting Results
The FVA approach is intended to be objective and scientific, so you must be careful not to draw conclusions that are unwarranted by the data. For example, measuring FVA over one week or one month does not provide enough data to draw any valid conclusions. Period to period, FVA will go up and down, and over short time frames FVA may be particularly high or low simply due to randomness. When you express the results in a table, as we've shown up to this point, be sure to indicate the time frame reported, and make sure that time frame has been long enough to provide meaningful results.

It is ideal if you have a full year of data from which to draw conclusions. If you've been thoroughly tracking inputs to the forecasting process already, then you probably have the data needed to do the analysis right now. You can look at the last year of statistical forecasts, analyst overrides, consensus forecasts, executive-approved forecasts and actual results, and then compute the FVA. Because naïve models are always easy to reconstruct for the past, you can see how well a naïve model would have done with your data last year.

While a full year of data is ideal, if you are just starting to collect forecast data, then you might not have to wait a full year to draw conclusions. Graphical presentation of this data, using methods from statistical process control, is a big help here. Let's suppose that you just recently started gathering the data needed for FVA analysis, and so far you have 13 weeks of data. Depending on what you find, this may be enough information to draw some conclusions. We'll look at two situations that you might encounter.

For additional examples and ideas on how to interpret and report data using an approach from statistical process control, see Donald Wheeler's excellent book Understanding Variation. Wheeler delivers a savage criticism of normal management analysis and reporting, exposing the shoddiness of typical business thought and decision making, and the general lack of appreciation for things like randomness and variation.

Following the spirit of Wheeler's message, let's look at a situation you might encounter with 13 weeks of FVA data. Figure 12 shows MAPE for the statistical forecast in the solid pink line, MAPE for the consensus forecast in the dotted dark blue line and FVA for the consensus process in the dashed yellow line. Over the entire 13 weeks, MAPE for the consensus forecast is 3.8 percentage points lower than MAPE for the statistical forecast, so FVA is positive. It would appear that the consensus step is adding value by delivering a forecast that has lower error than the statistical forecast. But is this enough data to draw a definite conclusion that the consensus process is a good use of resources?


[Figure 12: Situation 1. Weekly MAPE for the consensus and statistical forecasts, and consensus-versus-statistical FVA, over 13 weeks. Statistical MAPE = 25.1%, Consensus MAPE = 21.3%, Consensus vs. Statistical FVA = 3.8%.]

In this situation, you probably can't yet draw that conclusion. As you see from all the lines, there is quite a large amount of variation in the performance of the statistical model, the consensus process, and the resulting FVA. You also see that the FVA is positive in only six of the 13 weeks. Wheeler's book provides methods for assessing the amount of variation. Because the overall difference between statistical and consensus performance is relatively small, and there is so much variability in the results, the positive FVA may just be due to randomness. In a case like this, you probably need to gather more data before drawing any conclusions about the efficacy of the consensus process.
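One simple way to assess that variation, in the spirit of Wheeler's process behavior charts, is to put natural process limits around the weekly FVA series. This is a hedged sketch of one such calculation (an XmR-style chart), not something prescribed in this paper; work.weekly_fva and its variables are hypothetical.

/* Natural process limits for weekly FVA: mean +/- 2.66 * average moving  */
/* range. FVA values inside the limits are indistinguishable from routine */
/* variation.                                                             */
data mr;
   set work.weekly_fva;             /* variables: week, fva                  */
   mr = abs(dif(fva));              /* moving range between successive weeks */
run;

proc means data=mr noprint;
   var fva mr;
   output out=limits mean=fva_bar mr_bar;
run;

data _null_;
   set limits;
   ucl = fva_bar + 2.66*mr_bar;
   lcl = fva_bar - 2.66*mr_bar;
   put "Average FVA: " fva_bar 6.1 "  Natural limits: [" lcl 6.1 ", " ucl 6.1 "]";
run;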


[Figure 13: Situation 2. Weekly MAPE for the consensus and statistical forecasts, and consensus-versus-statistical FVA, over 13 weeks. Statistical MAPE = 10.6%, Consensus MAPE = 28.8%, Consensus vs. Statistical FVA = -18.2%.]

In Figure 13, we again see MAPE for the statistical forecast in the solid pink line, MAPE for the consensus forecast in the dotted dark blue and FVA in the dashed yellow line. Here, we find that the consensus forecast has consistently done worse than the statistical forecast. In this case, the FVA is very negative (averaging -18.2 percentage points), with positive FVA in only two of the 13 weeks. The data seems to indicate that the consensus step is not adding value, and is in fact making the forecast worse. At this point, you may want to bring these findings to your management and try to understand why the consensus process is having this effect. You can start to investigate the dynamics of the consensus meeting and the political agendas of the participants. Ultimately, you must decide whether the consensus process can be fixed to improve the value of the forecast, or whether it should be eliminated. FVA analysis lets you take an objective, scientific and data-driven approach to process analysis.

The point of all this is to encourage you to at least conduct a rudimentary FVA analysis and determine whether your process is beating a naïve model. This can be done quite easily; most organizations will have the data necessary for a limited-scale, quick-and-dirty analysis in Excel. Thorough and ongoing FVA reporting takes more effort, more data and more robust software tools (and perhaps even IT department involvement). However, several organizations are now doing this, or are in the midst of building their own FVA tracking and reporting systems. SAS users can take advantage of the sample code provided in the Appendix to create their own reports. Simply put, the message is this: If you don't know that you are beating a naïve forecast, then maybe you're not.



Further Application of FVA Analysis


We conclude this section with some ideas for extending the application of FVA analysis into other areas. These are all things that organizations, across many industries, are already doing today. We've seen how FVA can be used to evaluate your forecasting process and to decide between alternative process methods. Should you have a consensus meeting? Should you do CPFR with your customers or suppliers? Should executive management have final say over the forecast? FVA can be used as an ongoing metric, tracking statistical model performance and indicating when models need to be recalibrated.

The FVA idea can also extend to setting performance expectations. Some organizations pull forecasting objectives out of the air or base them on what the organization wants or needs to achieve. However, setting arbitrary objectives (such as "MAPE must be less than 20 percent") without any consideration of the underlying forecastability of your demand is completely wrong. Simply wanting or needing to hit a certain error target does not mean it is even possible, and unreachable targets just demoralize your forecasting staff and encourage them to cheat. Setting objectives based on industry benchmarks is also completely wrong. Benchmark data based on survey responses is highly suspect. Even when an organization achieves best-in-class forecasting performance, is it because it has an exemplary forecasting process, or because it has easy-to-forecast demand?

With FVA, you realize that perhaps the only reasonable goals for forecasting performance are to beat a naïve model and continuously improve the process. Improvements can be reflected through a reduction in forecast errors, or a leaner forecasting process (minimizing the use of company resources in forecasting). If good automated software can give you usable forecasts with little or no management intervention, why not rely on the software and invest management time in other areas that have the potential to bring more value to the organization? Let your production people produce and let your sales people sell; don't encumber them with forecasting unless you truly must. You want to eliminate waste and streamline your process for generating forecasts as accurately and efficiently as possible.

Case Studies
FVA has been used at a wide range of organizations, across several major industries. These include pharmaceuticals, retail, technology manufacturing, home furnishings, transportation, apparel, and food and beverage. All of these organizations have spoken publicly about their use of FVA analysis and their findings. But before turning to the case studies, let's look at recent academic research on the real-world application of FVA.


Academic Research
Robert Fildes and Paul Goodwin from the UK reported on a study of 60,000 forecasts at four supply chain companies and published the results in the Fall 2007 issue of Foresight: The International Journal of Applied Forecasting. They found that about 75 percent of the time, the statistical forecasts were manually adjusted meaning that 45,000 forecasts were changed by hand!

Figure 14: Research results.

Perhaps the most interesting finding was that small adjustments had essentially no impact on forecast accuracy. The small adjustments were simply a waste of time. However, large adjustments, particularly downward adjustments, tended to be beneficial. In this diagram, the vertical axis, labeled "Percent Improvement," indicates the value added. As you can see from the data in Figure 14, only the larger, downward adjustments meaningfully improved accuracy.
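A rough sketch of how such an analysis could be set up follows, with illustrative names only. This is not the researchers' code; it simply classifies each override by direction and size and measures the change in absolute error.

/* improve > 0 means the override reduced absolute error */
data adjust;
   set work.forecasts;                       /* hypothetical data set        */
   adj = override_fcst - stat_fcst;
   if adj = 0 then delete;                   /* keep adjusted forecasts only */
   direction = ifc(adj > 0, "up", "down");
   size = ifc(abs(adj) > 0.1*stat_fcst, "large", "small");
   improve = abs(stat_fcst - actual) - abs(override_fcst - actual);
run;

proc means data=adjust mean n;
   class direction size;
   var improve;
run;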

Home Furnishings Manufacturer


In our first case study, a home furnishings manufacturer uses a collaborative forecasting process wherein the baseline statistical forecast is manually updated with market intelligence, resulting in the final collaborative forecast. This company incorporated FVA analysis to provide visibility into its process and to identify areas for improvement. Through this analysis, the company realized that the best way to leverage the knowledge of sales people was to appeal to their competitive nature. The sales representatives needed a challenge. Therefore, rather than just ask the sales people to forecast, the company challenged them to "beat the nerd in the corner" by adding value to the nerd's computer-generated forecasts.


Pharmaceutical
A major pharmaceutical company reports FVA as part of a forecast quality dashboard. The dashboard includes metrics for process governance, assessing whether the forecast was on time and complete. It includes metrics on organizational behavior, assessing if the forecast was followed, ignored or changed. The dashboard also includes metrics on forecast accuracy, bias and value added. This company is also at the leading edge of applying process-control techniques to forecasting. It pays special attention to the forecastability of its products and differentiates those with stable versus unstable demand. Analysts use the forecastability assessment to evaluate risk in their forecasts, and then build plans accounting for the risk. The lesson here is that not all forecasts are created equal; some are a great deal more reliable than others. You shouldn't bet the company on forecasts that don't merit that degree of confidence.

Automotive Supplier
An automotive supplier knows the value of assessing the costs of forecast inaccuracy. When forecasts are too high, the supplier runs the risk of excess inventory and all the costs that go with it. When forecasts are too low, the supplier is confronted with the risk of unfilled orders, lost revenue and loss of credibility as a supplier. This company used FVA to evaluate management adjustments to the forecast, and then applied its Cost of Inaccuracy metric to determine whether those efforts were cost-effective. The company found that even in cases where management adjustments make slight improvements to forecast accuracy, resulting in positive value added, these small improvements may not be worth the cost of management time and resources. By eliminating the management participation that didn't provide sufficient financial benefit, the company has streamlined its forecasting processes.

Technology Manufacturer
Armed with good historical data, the long-range planning department of a large technology manufacturer spearheaded an FVA initiative to review performance over the past six years. The analysts found that half the time a naïve model did as well as or better than their official forecast. When the official forecast was better, the value added was less than 10 percent. They also found that the naïve models were more stable and less biased, and therefore not chronically too high or too low. These initial FVA findings showed that generally the same or better results could be achieved with much less cost and effort. The analysis provided the data to help shift management thinking and open the organization to creative process re-engineering. The company continues to use FVA analysis to evaluate proposed process changes and new forecasting methods. These new proposals can be tested to determine whether they are improving performance and whether the improvement justifies the cost of a full implementation.


Specialty Retailer
This retailer hired a new inventory and forecasting director with a background in manufacturing and familiarity with process control. Observing that his forecast analysts constantly revised forecasts based on the latest bits of sales information, he decided to assess the value of their efforts. He compared the analysts' accuracy with a five-week moving average. Only 25 percent of the time did the analyst overrides beat the moving average! Overreacting to new information, just as these analysts obsessed over sales, is a common occurrence. But if your statistical forecasting software is doing well enough, leave it alone and don't try to second-guess it. Don't create extra work for yourself by revising numbers based on last week's sales data. There is always going to be randomness in any sales number. It is important to understand what variability is natural and react only when something out of the ordinary occurs. Like the technology manufacturer, this retailer uses FVA to validate process improvement prototypes. The retailer rolls out a prototype companywide only if it provides sufficient FVA results. The retailer's inventory director also recommended using FVA to evaluate competing software packages and, of course, to compare software-generated statistical forecasts to judgmental overrides.

Food and Beverage


The last case study is a food manufacturer that had a good statistical forecasting process, with overall accuracy that beat a naïve model by four percentage points. The manufacturer tried to improve this by requiring sales people to provide item-level forecasts, per customer, for those major customers that accounted for 80 percent of volume. After tracking the data for several months, analysts determined that the input from sales had no impact on forecast accuracy. The manufacturer returned to the original forecasting process, freeing sales people to spend more time building relationships with their customers and making sales. This company also found that the value was coming from its statistical forecasting models, not from manual overrides. Overrides were found to make the forecast worse 60 percent of the time.


Lean Approach to Forecasting


The lean approach consists of identifying and eliminating waste in any process. FVA analysis is one tool to use in a lean approach to your business. The objectives should be to generate forecasts that are as accurate and unbiased as anyone can reasonably expect, and to do this as efficiently as possible. Don't gauge success strictly by forecast accuracy; you may have little control over that. Forecast accuracy is determined, more than anything else, by the nature of the demand you are trying to forecast. If 50 percent accuracy is the best you can achieve (such as forecasting heads or tails with the flip of a fair coin), then don't waste resources trying to forecast better than that. Instead, use good software that automates your processes as thoroughly as possible and minimize resource commitments to forecasting.

Rather than making the forecast better, overly elaborate forecasting processes that have many management touch points tend to make the forecast worse. More touch points mean more opportunities for people to add their own biases and personal agendas. Not everyone cares about getting the most accurate forecast. Some will want to bias the forecast high so that plenty of inventory will be available to sell. Others will want to bias the forecast low in order to lower cost projections or sandbag their quotas. You shouldn't necessarily trust your forecasting process participants.

While you can't always control the accuracy of your forecasts, you can control the process used and the resources you invest. Ensure that every part of your process is adding value and eliminate activities that make the forecast worse. Just by removing those non-value-adding activities, you can find yourself getting better forecasts for free!

Bibliography
Case Study References:

Fildes, Robert and Goodwin, Paul. "Good and Bad Judgment in Forecasting." Foresight: The International Journal of Applied Forecasting, Fall 2007.

Gilliland, Michael. "Is Forecasting a Waste of Time?" Supply Chain Management Review, July-August 2002.

Gilliland, Michael (2010). The Business Forecasting Deal. New York: John Wiley & Sons.

Additional Sources:

Makridakis, Spyros, Steven Wheelwright and Robert Hyndman (1998). Forecasting Methods and Applications (3rd Edition). New York: John Wiley & Sons.

Wheeler, Donald (2000). Understanding Variation: The Key to Managing Chaos. SPC Press.

SAS Forecasting Web Series (available for free, on-demand review):

Forecast Value Added Analysis: Step-by-Step (sas.com/reg/web/corp/4385)


Appendix: Sample SAS Code for Creating FVA Report

To understand how significantly advanced statistical models can improve forecast accuracy, you can compare the accuracy achieved using SAS Forecast Server with the accuracy achieved using a simple model. The following stored service code (also known as stored process code) creates a graph that compares the forecast accuracy achieved using SAS Forecast Server and the forecast accuracy achieved using a naïve (random walk) model. The graph also shows the relationship between forecast accuracy and variation in the data. (Note: This code was provided by Snurre Jensen, SAS Denmark.)

Installation

1. Enter the sample SAS code from the following pages into a file: FVA Analysis.sas.
2. Save the SAS program and use SAS Management Console to register it as a stored service that can be used in SAS Forecast Server.
3. Alternatively, you can simply use the code as a starting point to write your own report, and forgo registering it as a stored service.

Usage

1. First, create your forecasts using SAS Forecast Server. The number of series doesn't matter, and hierarchies can be included.
2. Then, select this stored service from the list of available reports.

Notes

• The code only compares the accuracy at the most detailed level; that is, at the lowest level of the hierarchy.
• Although this code is written as a SAS stored service, you may want to strip out some of the macro language and use the code as a basis for writing your own custom report.


Sample SAS Code for FVA Report

*ProcessBody;
%stpbegin;
options mlogic symbolgen mprint;

%macro fva_report;
*%include &HPF_INCLUDE;

proc hpfarimaspec
   /* Model: RANDWALK
    * Label: Y ~ D=(1) NOINT */
   modelrepository=work.models
   specname=RandomWalk
   speclabel="Random Walk"
   spectype=RANDWALK
   specsource=FSUI;
   forecast transform=NONE noint dif=(1);
   estimate method=CLS converge=0.0010 maxiter=50 delta=0.0010
            singular=1.0E-7;
run;

proc hpfselect modelrepository=work.models selectname=naive
               selectlabel="Naive Models List";
   spec RandomWalk;
   diagnose seasontest=NONE intermittent=100;
run;

proc sort data=&hpf_input_libname..&hpf_input_dataset. out=sortdata;
   %if &hpf_num_byvars. > 0 %then %do;
      by &hpf_byvars &hpf_timeid.;
   %end;
   %else %do;
      by &hpf_timeid.;
   %end;
run;

proc hpfengine data=sortdata out=_null_ outfor=naive back=&hpf_back.
               lead=&hpf_lead. modelrepository=work.models
               globalselection=naive;
   id &hpf_timeid. interval=&hpf_interval. accumulate=total;
   %if &hpf_num_byvars. > 0 %then %do;
      by &hpf_byvars.;
   %end;
   forecast &hpf_depvars.;
run;

data _null_;
   if &hpf_back=0 then wheredate=%unquote(&hpf_dataend.)-365;
   else wheredate=intnx("&hpf_interval.",%unquote(&hpf_dataend.),-&hpf_back);
   call symput('wherecls',wheredate);
run;

data history (where=(&hpf_timeid. ge &wherecls.)
              keep=&hpf_byvars. &hpf_timeid. predict actual abserr max);
   set naive;
   if predict=. then delete;
   if predict lt 0 then predict=0;
   abserr=abs(predict-actual);
   max=max(predict,actual);
   %if &hpf_num_byvars. > 0 %then %do;
      by &hpf_byvars.;
   %end;
run;

proc summary nway data=history;
   %if &hpf_num_byvars. > 0 %then %do;
      class &hpf_byvars.;
   %end;
   var actual abserr max;
   output out=summary sum=sumactual sumabserr summax
          cv=cvactual cvabserr cvmax;
run;

data results1 (keep=&hpf_byvars. fa_naive cv);
   set summary;
   cv=cvactual;
   fa_naive=100*(1-sumabserr/summax);
   format cv fa_naive 4.1;
run;

%if &hpf_num_levels>1 %then %do;
   libname _leaf "&HPF_PROJECT_location/hierarchy/&&hpf_byvar&hpf_num_byvars.";
%end;
%else %do;
   libname _leaf "&HPF_PROJECT_location/hierarchy/leaf";
%end;

data history (where=(&hpf_timeid. ge &wherecls)
              keep=&hpf_byvars. &hpf_timeid. predict actual abserr max);
   set _leaf.outfor;
   if predict=. then delete;
   if predict lt 0 then predict=0;
   abserr=abs(predict-actual);
   max=max(predict,actual);
run;

proc summary nway data=history;
   %if &hpf_num_byvars. > 0 %then %do;
      class &hpf_byvars.;
   %end;
   var abserr max;
   output out=summary sum=sumabserr summax;
run;

data results2 (keep=&hpf_byvars. fa_stat);
   set summary;
   fa_stat=100*(1-sumabserr/summax);
   format fa_stat 4.1;
run;

data all;
   merge results1 results2;
   %if &hpf_num_byvars. > 0 %then %do;
      by &hpf_byvars.;
   %end;
   fva=fa_stat-fa_naive;
run;

legend1 label=("Models: ");
symbol2 color=red value=x height=0.75;
symbol1 value=star color=blue height=0.75;
axis1 order=(0 to 100 by 10) label=("Forecast Accuracy");
axis2 label=("Volatility");
title "Comparison of forecast accuracy";

proc gplot data=all;
   plot (fa_stat fa_naive)*cv / overlay vaxis=axis1 haxis=axis2
                                frame legend=legend1;
run;
quit;

symbol1; axis; title;

%mend fva_report;
%fva_report;
%stpend;


About SAS
SAS is the leader in business analytics software and services, and the largest independent vendor in the business intelligence market. Through innovative solutions, SAS helps customers at more than 60,000 sites improve performance and deliver value by making better decisions faster. Since 1976 SAS has been giving customers around the world THE POWER TO KNOW.

SAS Institute Inc. World Headquarters +1 919 677 8000


To contact your local SAS office, please visit: sas.com/offices
SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration. Other brand and product names are trademarks of their respective companies. Copyright © 2013, SAS Institute Inc. All rights reserved. 106186_S94636_0113
