Forecast Value Add Technique FVA
WHITE PAPER
Table of Contents
Introduction
What Is Forecast Value Added?
The Naïve Forecast
Sample Results
Why Is FVA Important?
FVA Analysis: Step-by-Step
  Mapping the Process
  Collecting the Data
  Analyzing the Process
  Reporting the Results
Interpreting Results
Further Application of FVA Analysis
Case Studies
  Academic Research
  Home Furnishings Manufacturer
  Pharmaceutical
  Automotive Supplier
  Technology Manufacturer
  Specialty Retailer
  Food and Beverage
Lean Approach to Forecasting
Bibliography
Appendix: Sample SAS Code for Creating FVA Report
  Sample SAS Code for FVA Report
Content for this white paper was provided by Michael Gilliland, Product Marketing Manager at SAS.
Introduction
Traditional forecasting performance metrics, such as mean absolute percent error (MAPE), tell you the size of your forecast error. However, these metrics tell you nothing about how efficient you are at forecasting, what your error should be, or whether your efforts are making the forecast better or worse. To determine whether your forecasting efforts are making things better, I advocate using a simple metric called forecast value added, or FVA. There is nothing earth-shattering about this metric; it just gives a name to a fundamental method of science that is too often overlooked in business.

Consider this example: Suppose a pharmaceutical company announces a pill for colds and touts that after 10 days, your cold will be gone. Does this sound like a grand medical breakthrough? Will you be buying the pill, or investing in the company? Or has the description of the pill's curative power raised some suspicion? It should. Doesn't a cold go away after 10 days anyway? What value is such a pill adding? Shouldn't we require demonstration that the pill does something worthwhile?

This is exactly the kind of situation we face in forecasting; but in forecasting, we aren't nearly suspicious enough. Why do we assume all our elaborate systems and processes are adding value by making the forecast better? What would happen if we did away with them and used just the simplest of forecasting methods; what results would we achieve then? These are the sorts of things MAPE, by itself, will not tell you. But these are the sorts of things that FVA analysis lets you investigate.

This white paper defines and illustrates the FVA calculation and provides the details for conducting FVA analysis at your organization. Because forecasting is often a visible and politicized function, and FVA results can be embarrassing, I also discuss how to most effectively present your findings to colleagues and management.
The white paper concludes with case studies of several companies that have applied FVA analysis and publicly presented their results.
Figure 1: A very simple forecasting process: Demand History → Statistical Model → Analyst Override.
What Is Forecast Value Added?

Let's look at an example with a very simple forecasting process. Perhaps the simplest process is to read the demand history into a statistical forecasting model that generates a forecast, and then have an analyst review and (if necessary) override the statistical forecast. In FVA analysis, you would compare the analyst's override to the statistically generated forecast to determine if the override makes the forecast better. FVA analysis also compares both the statistical forecast and the analyst forecast to what's called a naïve forecast. (We discuss naïve forecasts in the next section.)

Suppose you found that the statistical forecast achieved a mean absolute percent error of 25 percent, and that the analyst overrides actually reduced MAPE to 24 percent. In this case, we'd say that the extra step of having an analyst review and adjust the statistical forecasts is adding value by making the forecast better.

The reason we measure FVA is to identify waste and inefficiency in the forecasting process. When FVA is negative, that is, when a process activity is making the forecast worse, that activity is clearly a waste and should be eliminated. Eliminating waste saves company resources that can be directed to more productive activities, and you also get better forecasts by removing activities that just make the forecast worse. But when FVA is positive, as in this example, do we conclude that the step or participant (here, the analyst override) should be kept in the process? Not necessarily.
The mere fact that a process activity has positive FVA doesn't necessarily mean that you should keep it in your process. You need to compare the overall financial benefits of the improvement to the cost of that activity. Is the extra accuracy increasing your revenue, reducing your costs or making your customers happier? In this example, the analyst override did reduce error by one percentage point. But having to hire an analyst to review every forecast can get costly, and if the improvement is only one percentage point, is it really worth it?
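To make the arithmetic concrete, here is a minimal sketch in Python (the paper's own sample code is SAS) of the MAPE and FVA calculation for this two-step process. All forecast and actual values are invented for illustration.

```python
def mape(forecasts, actuals):
    """Mean absolute percent error, in percent."""
    errors = [abs(f - a) / a for f, a in zip(forecasts, actuals)]
    return 100.0 * sum(errors) / len(errors)

actuals     = [100, 120, 80, 110]
statistical = [90, 150, 70, 120]   # software-generated forecast
override    = [95, 140, 75, 115]   # after analyst review

stat_mape = mape(statistical, actuals)
over_mape = mape(override, actuals)

# FVA of the override step = error of the input forecast minus error of the
# output forecast; positive means the step made the forecast better.
fva = stat_mape - over_mape
print(f"Statistical MAPE: {stat_mape:.1f}%")
print(f"Override MAPE:    {over_mape:.1f}%")
print(f"Override FVA:     {fva:+.1f} points")
```

With these made-up numbers the override adds value; whether that value is worth the analyst's time is the separate, financial question raised above.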
The Naïve Forecast

Consider the following graphical views of naïve forecasts with monthly data (Figures 2-5): If you use the random walk as your naïve model (as shown in Figure 2), then the forecast for all future periods is the last-known actual, which in this case is 40 units in May 2008.
If you use a seasonal random walk as your naïve model (as shown in Figure 3), then the forecast for all future periods is the actual from the same period in the prior year. Therefore, June 2008 through May 2009 is forecast to look exactly like June 2007 through May 2008.
Figure 3: Seasonal random walk model (forecast = actual from same period last year).
Figure 4 shows a 12-month moving average for the naïve forecast, which happens to be 55.4 for this sales data.
As you can see, you may get wildly different forecasts from different choices of the naïve model (shown in Figure 5). Depending on the nature of the pattern you are trying to forecast, some naïve models may forecast much better than others. So which one should you choose? One suggestion, from Emily Rodriguez, is to use a composite of naïve models.
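For readers who want to experiment, the three naïve models above can be sketched in a few lines of Python (illustrative only; the monthly sales values are invented and do not reproduce the paper's figures):

```python
def random_walk(history):
    """NF1: forecast every future period as the last known actual."""
    return history[-1]

def seasonal_random_walk(history, period=12):
    """Forecast each future period as the actual from the same period last year."""
    return history[-period:]          # the next 12 months repeat the last 12

def moving_average(history, window=12):
    """Forecast every future period as the mean of the last `window` actuals."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Invented monthly actuals; the last known value is 40 units.
monthly_sales = [55, 60, 70, 65, 50, 45, 40, 42, 48, 58, 68, 72,
                 58, 62, 75, 68, 52, 47, 40]

print(random_walk(monthly_sales))              # 40
print(seasonal_random_walk(monthly_sales))     # the last 12 actuals, repeated
print(round(moving_average(monthly_sales), 1))
```

A composite naïve forecast, as suggested above, could simply average the outputs of these models for each future period.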
Sample Results
Figure 6 gives an example of an FVA report, showing how you would compare each process step to the naïve model.
In this case, the naïve model was able to achieve a MAPE of 25 percent. The statistical forecast added value by reducing MAPE five percentage points to 20 percent. However, the analyst override actually made the forecast worse, increasing MAPE to 30 percent. The override's FVA was five percentage points less than the naïve model's, and 10 percentage points less than the statistical forecast's. You may wonder how adding human judgment to the statistical forecast could possibly make it worse. This actually happens all the time, and recent academic research (discussed later in this paper) investigates the topic.
FVA can also be used as a basis for performance comparison. Suppose you are a forecasting manager and have a bonus to give to your best forecast analyst. The traditional way to determine the best one is to compare their forecast errors. Based on this traditional analysis, as shown in Figure 7, Analyst A is clearly the best forecaster and deserves the bonus. But is the traditional analysis the correct analysis?
What if we consider additional information about each analyst and the types of products that they have been assigned to forecast?
As shown in Figure 8, Analyst A had the lowest MAPE, but we must note the kinds of products that were assigned: long-running, basic items with no seasonality or promotional activity, no new items and low demand variability. In fact, an FVA analysis might reveal that a naïve model could have forecast this sort of demand with a MAPE of only 10 percent, and that Analyst A only made the forecast worse! For Analyst B, demand was more difficult to forecast, with factors such as promotional activity and new items. FVA analysis reveals that this analyst added no value compared to a naïve model, but at least this person didn't make the forecast worse.
What FVA analysis reveals is that only Analyst C deserves the bonus. Even though Analyst C had the worst forecasts, with a MAPE of 40 percent, Analyst C was challenged with items that are very difficult to forecast: short-lifecycle fashion items with high promotional activity and high demand variability. Only Analyst C actually added value compared to a naïve model and made the forecast better.

This example reveals another thing to be wary of in traditional performance comparison, as you see in published forecasting benchmarks. Don't compare yourself, or your organization, to what others are doing. The organization that achieves best-in-class forecast accuracy may do so because it has easier-to-forecast demand, not because its process is worthy of admiration. The proper comparison is your performance versus a naïve model. If you are doing better, then that is good. But if you or your process is doing worse than a naïve model, then you have some serious (but fixable) problems.

MAPE is probably the most popular forecasting performance metric, but by itself, it is not legitimate for comparing forecasting performance. MAPE tells you the magnitude of your error, but it does not tell you what error you should be able to achieve. By itself, MAPE gives no indication of the efficiency of your forecasting process. To understand these things, you need to use FVA analysis.
Figure 9: A more elaborate forecasting process. Demand history and causal factors feed the statistical model; analyst overrides incorporate input from Sales and Marketing along with executive targets; a consensus step (subject to supply constraints, with input from Finance and P&IC) and an executive review then produce the approved forecast.
Many organizations also have a final executive review step, where general managers, division presidents, or even CEOs get a chance to change the numbers before approving them. This can translate into a great deal of high-cost management time spent on forecasting. But does it make the forecast any better? That is what we are trying to find out.
The "Forecast of Process Steps and Participants" columns contain the forecasts provided by each step and participant in the process. In this example, for the very simple process we showed earlier, you only need to gather the naïve forecast (using whatever naïve model you decide to use), the statistical forecast generated by your forecasting software, and the final forecast that includes any manual overrides made by the forecast analyst. Figure 10 also shows what an FVA data set would look like, with variable fields across the top and data records in each row.

If you want to do a one-time FVA report for just a few items, you could do this much in Excel. However, for a thorough and ongoing FVA analysis, and to make FVA a routine metric reported every period to management, you need much more powerful data handling, data storage, analytics and reporting capabilities than Excel provides. Both SAS Analytics Pro and SAS Visual Data Discovery are perfect entry-level solutions for FVA analysis. For SAS Forecast Server customers, the Appendix contains sample code for generating a simple FVA report as a stored service.

A thorough and ongoing FVA analysis requires you to capture the forecast for each participant, at each step and in every period for all of your item and location combinations. This will quickly grow into a very large amount of data to store and maintain, so you will need software with sufficient scalability and capability. Analysis on this scale is definitely not something you do in Excel.
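As a rough illustration of the data layout described above, the following Python sketch builds a few FVA records and writes them out flat, one row per item, location and period. The field names are hypothetical, not a required schema.

```python
import csv
import io

# One record per item/location/period, holding each process step's forecast
# plus the actual demand (field names are illustrative).
fields = ["item", "location", "period", "actual",
          "naive_fcst", "stat_fcst", "override_fcst"]

records = [
    {"item": "A1", "location": "DC1", "period": "2008-06",
     "actual": 100, "naive_fcst": 90, "stat_fcst": 95, "override_fcst": 98},
    {"item": "A1", "location": "DC1", "period": "2008-07",
     "actual": 120, "naive_fcst": 100, "stat_fcst": 110, "override_fcst": 105},
]

# Writing the records out as CSV mimics the flat, one-row-per-record layout
# shown in Figure 10.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fields)
writer.writeheader()
writer.writerows(records)
print(buf.getvalue())
```

At full scale (every item, location, step and period), this layout grows quickly, which is exactly the scalability point made above.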
Comparing results to an automatically generated statistical forecast is a good practice. But it is always worthwhile to use a truly naïve model, such as NF1, for your ultimate point of comparison. You can't just assume that your statistical model is better than a random walk. Naïve models can be surprisingly difficult to beat. Some statistical forecasting software uses unsound methods, such as blindly picking models that best fit the historical data rather than selecting models that are most appropriate for good forecasting.
Figure 11: Stairstep FVA report spanning the full process: causal factors, sales input, analyst override, executive targets, marketing, consensus, supply constraints, finance, executive review, P&IC, and the approved forecast.
This style of report should be easy to understand. We see that the overall process is adding value compared to the naïve model, because in the bottom row the approved forecast has a MAPE 10 percentage points less than the MAPE of the naïve forecast. However, it also shows that we would have been better off eliminating the executive review step, because it actually made the MAPE five percentage points worse than the consensus forecast. It is quite typical to find that executive tampering with a forecast just makes it worse.

As mentioned previously, FVA versus the naïve forecast can vary depending on which naïve model you choose. For example, if you are dealing with seasonal demand, then a seasonal random walk may provide much better forecasts than a plain random walk. The right thing to do is decide which naïve model, or composite of naïve models, you are going to use, and then use it consistently throughout your analysis. Also, be aware that naïve forecasts can be surprisingly difficult to beat.

When you report your results, they may be rather embarrassing to those participants who are failing to add value. Therefore, present the results tactfully. Your objective is to improve the forecasting process, not to humiliate anyone. You may also want to present initial results privately, to avoid public embarrassment for the non-value adders.
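The stairstep logic of this kind of report is simple enough to sketch in Python (the MAPE figures below are invented, not the paper's): each step's MAPE is compared both to the naïve model and to the step immediately before it.

```python
steps = [             # (process step, MAPE in percent), in process sequence
    ("Naive",       30.0),
    ("Statistical", 24.0),
    ("Consensus",   20.0),
    ("Executive",   25.0),
]

naive_mape = steps[0][1]
rows, prior = [], None
for name, m in steps:
    fva_vs_naive = naive_mape - m                        # vs the naive model
    fva_vs_prior = (prior - m) if prior is not None else 0.0  # vs previous step
    rows.append((name, m, fva_vs_naive, fva_vs_prior))
    prior = m

print(f"{'Step':<12}{'MAPE':>7}{'vs naive':>10}{'vs prior':>10}")
for name, m, fn, fp in rows:
    print(f"{name:<12}{m:>6.1f}%{fn:>+9.1f}{fp:>+10.1f}")
```

With these invented numbers, the executive step shows negative FVA versus the consensus step, mirroring the finding described above.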
Interpreting Results
The FVA approach is intended to be objective and scientific, so you must be careful not to draw conclusions that are unwarranted by the data. For example, measuring FVA over one week or one month does not provide enough data to draw any valid conclusions. Period to period, FVA will go up and down, and over short time frames FVA may be particularly high or low simply due to randomness. When you express the results in a table, as we've shown up to this point, be sure to indicate the time frame reported, and make sure that time frame has been long enough to provide meaningful results. It is ideal if you have a full year of data from which to draw conclusions.

If you've been thoroughly tracking inputs to the forecasting process already, then you probably have the data needed to do the analysis right now. You can look at the last year of statistical forecasts, analyst overrides, consensus forecasts, executive-approved forecasts and actual results, and then compute the FVA. Because naïve models are always easy to reconstruct for the past, you can see how well a naïve model would have done with your data last year.

While a full year of data is ideal, if you are just starting to collect forecast data, then you might not have to wait a full year to draw conclusions. Graphical presentation of this data, using methods from statistical process control, is a big help here. Let's suppose that you just recently started gathering the data needed for FVA analysis, and so far you have 13 weeks of data. Depending on what you find, this may be enough information to draw some conclusions. We'll look at two situations that you might encounter. For additional examples and ideas on how to interpret and report data using an approach from statistical process control, see Donald Wheeler's excellent book Understanding Variation.
Wheeler delivers a savage criticism of typical management analysis and reporting, exposing the shoddiness of much business thought and decision making and the general lack of appreciation for randomness and variation. Following the spirit of Wheeler's message, let's look at a situation you might encounter with 13 weeks of FVA data. Figure 12 shows MAPE for the statistical forecast in the solid pink line, MAPE for the consensus forecast in the dotted dark blue line and FVA for the consensus process in the dashed yellow line. Over the entire 13 weeks, MAPE for the consensus forecast is 3.8 percentage points lower than MAPE for the statistical forecast, so FVA is positive. It would appear that the consensus step is adding value by delivering a forecast that has lower error than the statistical forecast. But is this enough data to draw a definite conclusion that the consensus process is a good use of resources?
Figure 12: Weekly MAPE for the statistical forecast and the consensus forecast, and weekly consensus FVA, over 13 weeks. (Statistical MAPE = 25.1%, Consensus MAPE = 21.3%, Consensus vs. Statistical FVA = 3.8%.)
In this situation, you probably can't yet draw that conclusion. As you see from all the lines, there is quite a large amount of variation in the performance of the statistical model, the consensus process and the resulting FVA. You also see that the FVA is positive in only six of the 13 weeks. Wheeler's book provides methods for assessing the amount of variation. Because the overall difference between statistical and consensus performance is relatively small, and there is so much variability in the results, the positive FVA may just be due to randomness. In a case like this, you probably need to gather more data before drawing any conclusions about the efficacy of the consensus process.
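One simple way to operationalize Wheeler's individuals-chart idea for FVA data is sketched below in Python. It computes natural process limits from the average moving range (the 2.66 constant is the standard individuals-chart factor); the weekly FVA values are invented. If zero lies inside the limits, the observed average FVA may be nothing more than noise.

```python
# 13 weeks of invented consensus FVA values, in percentage points.
weekly_fva = [5.2, -3.1, 8.0, -6.4, 2.2, 9.5, -4.8, 1.1,
              7.3, -2.0, 6.6, -1.5, 3.9]

mean_fva = sum(weekly_fva) / len(weekly_fva)

# Average moving range: mean absolute week-to-week change.
moving_ranges = [abs(b - a) for a, b in zip(weekly_fva, weekly_fva[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Individuals-chart natural process limits: mean +/- 2.66 * average moving range.
lower = mean_fva - 2.66 * avg_mr
upper = mean_fva + 2.66 * avg_mr

print(f"Mean FVA {mean_fva:+.1f}, natural limits [{lower:.1f}, {upper:.1f}]")
print("Signal" if not (lower <= 0 <= upper) else "Possibly just noise")
```

With these numbers the average FVA is positive but zero sits comfortably inside the natural limits, so, as argued above, the apparent value added could be randomness.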
Figure 13: Weekly MAPE for the statistical forecast and the consensus forecast, and weekly consensus FVA, over 13 weeks. (Statistical MAPE = 10.6%, Consensus MAPE = 28.8%, Consensus vs. Statistical FVA = -18.2%.)
In Figure 13, we again see MAPE for the statistical forecast in the solid pink line, MAPE for the consensus forecast in the dotted dark blue line and FVA in the dashed yellow line. Here, we find that the consensus forecast has consistently done worse than the statistical forecast. In this case, the FVA is very negative (averaging -18.2 percentage points), with positive FVA in only two of the 13 weeks. The data seems to indicate that the consensus step is not adding value and is in fact making the forecast worse. At this point, you may want to bring these findings to your management and try to understand why the consensus process is having this effect. You can start to investigate the dynamics of the consensus meeting and the political agendas of the participants. Ultimately, you must decide whether the consensus process can be fixed to improve the value of the forecast, or whether it should be eliminated. FVA analysis lets you take an objective, scientific and data-driven approach to process analysis.

The point of all this is to encourage you to at least conduct a rudimentary FVA analysis and determine whether your process is beating a naïve model. This can be done quite easily; most organizations will have the data necessary for a limited-scale, quick-and-dirty analysis in Excel. Thorough and ongoing FVA reporting takes more effort, more data and more robust software tools (and perhaps even IT department involvement). However, several organizations are now doing this, or are in the midst of building their own FVA tracking and reporting systems. SAS users can take advantage of the sample code provided in the Appendix to create their own reports. Simply put, the message is this: If you don't know that you are beating a naïve forecast, then maybe you're not.
Case Studies
FVA has been used at a wide range of organizations, across several major industries. These include pharmaceuticals, retail, technology manufacturing, home furnishings, transportation, apparel, and food and beverage. All of these organizations have spoken publicly about their use of FVA analysis and their findings. But before turning to the case studies, let's look at recent academic research on the real-world application of FVA.
Academic Research
Robert Fildes and Paul Goodwin from the UK reported on a study of 60,000 forecasts at four supply chain companies and published the results in the Fall 2007 issue of Foresight: The International Journal of Applied Forecasting. They found that about 75 percent of the time, the statistical forecasts were manually adjusted meaning that 45,000 forecasts were changed by hand!
Perhaps the most interesting finding was that small adjustments had essentially no impact on forecast accuracy. The small adjustments were simply a waste of time. However, large adjustments, particularly downward adjustments, tended to be beneficial. In Figure 14, the vertical axis, labeled "Percent Improvement," indicates the value added. As you can see from the data, only the larger, downward adjustments meaningfully improved accuracy.
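The Fildes and Goodwin style of analysis can be mimicked with a small Python sketch: classify each manual adjustment by size and direction, then average the accuracy improvement within each group. The records and the 5 percent size threshold below are invented for illustration.

```python
records = [   # (statistical forecast, adjusted forecast, actual), all invented
    (100, 102, 101), (80, 79, 80), (120, 90, 95), (60, 61, 55),
    (200, 150, 160), (50, 52, 50), (90, 70, 72), (110, 108, 112),
    (70, 85, 80),
]

def pct_improvement(stat, adj, actual):
    """Reduction in absolute percent error achieved by the adjustment."""
    return 100 * (abs(stat - actual) - abs(adj - actual)) / actual

groups = {"small": [], "large up": [], "large down": []}
for stat, adj, actual in records:
    size = abs(adj - stat) / stat          # adjustment size relative to forecast
    if size < 0.05:                        # illustrative "small" threshold
        key = "small"
    else:
        key = "large up" if adj > stat else "large down"
    groups[key].append(pct_improvement(stat, adj, actual))

for key, vals in groups.items():
    avg = sum(vals) / len(vals) if vals else 0.0
    print(f"{key:>10}: n={len(vals)}, avg improvement {avg:+.1f} points")
```

With these invented records, the small adjustments average out to roughly nothing while the large downward ones help, echoing the study's pattern; a real analysis would of course use the company's own forecast history.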
Pharmaceutical
A major pharmaceutical company reports FVA as part of a forecast quality dashboard. The dashboard includes metrics for process governance, assessing whether the forecast was on time and complete. It includes metrics on organizational behavior, assessing if the forecast was followed, ignored or changed. The dashboard also includes metrics on forecast accuracy, bias and value added. This company is also at the leading edge of applying process-control techniques to forecasting. It pays special attention to the forecastability of its products and differentiates those with stable versus unstable demand. Analysts use the forecastability assessment to evaluate risk in their forecasts, and then build plans accounting for the risk. The lesson here is that not all forecasts are created equal; some are a great deal more reliable than others. You shouldn't bet the company on forecasts that don't merit that degree of confidence.
Automotive Supplier
An automotive supplier knows the value of assessing the costs of forecast inaccuracy. When forecasts are too high, the supplier runs the risk of excess inventory and all the costs that go with it. When forecasts are too low, the supplier is confronted with the risk of unfilled orders, lost revenue and loss of credibility as a supplier. This company used FVA to evaluate management adjustments to the forecast, and then applied its Cost of Inaccuracy metric to determine whether those efforts were cost-effective. The company found that even in cases where management adjustments make slight improvements to forecast accuracy, resulting in positive value added, these small improvements may not be worth the cost of management time and resources. By eliminating the management participation that didn't provide sufficient financial benefit, the company has streamlined its forecasting processes.
Technology Manufacturer
Armed with good historical data, the long-range planning department of a large technology manufacturer spearheaded an FVA initiative to review performance over the past six years. The analysts found that half the time a naïve model did as well as or better than their official forecast. When the official forecast was better, the value added was less than 10 percent. They also found that the naïve models were more stable and less biased, and therefore not chronically too high or too low. These initial FVA findings showed that generally the same or better results could be achieved with much less cost and effort. The analysis provided the data to help shift management thinking and open the organization to creative process re-engineering. The company continues to use FVA analysis to evaluate proposed process changes and new forecasting methods. These new proposals can be tested to determine whether they are improving performance and whether the improvement justifies the cost of a full implementation.
Specialty Retailer
This retailer hired a new inventory and forecasting director with a background in manufacturing and familiarity with process control. Observing that his forecast analysts constantly revised forecasts based on the latest bits of sales information, he decided to assess the value of their efforts. He compared the analysts' accuracy with a five-week moving average. Only 25 percent of the time did the analyst overrides beat the moving average! Overreacting to new information, as these analysts did by obsessing over the latest sales, is a common occurrence. But if your statistical forecasting software is doing well enough, leave it alone and don't try to second-guess it. Don't create extra work for yourself by revising numbers based on last week's sales data. There is always going to be randomness in any sales number. It is important to understand what variability is natural and react only when something out of the ordinary occurs. Like the technology manufacturer, this retailer uses FVA to validate process improvement prototypes. The retailer rolls out a prototype companywide only if it provides sufficient FVA results. The retailer's inventory director also recommended using FVA to evaluate competing software packages and, of course, compare software-generated statistical forecasts to judgmental overrides.
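The retailer's test is easy to replicate. This Python sketch (with invented weekly sales and overrides) counts how often an analyst override beats a five-week moving average of prior sales:

```python
# Invented weekly sales; overrides cover the weeks after the first five,
# so each one can be compared against a 5-week moving average.
sales = [100, 94, 108, 90, 105, 99, 112, 96, 103, 101, 95, 110, 98, 104]
overrides = [99, 85, 130, 103, 115, 125, 90, 118, 82]

wins = 0
for i, override in enumerate(overrides, start=5):
    ma5 = sum(sales[i-5:i]) / 5        # moving average of the prior 5 weeks
    actual = sales[i]
    if abs(override - actual) < abs(ma5 - actual):
        wins += 1

print(f"Override beat the 5-week moving average in {wins} of {len(overrides)} weeks")
```

With these invented numbers the overrides win only a small fraction of the weeks, which is the kind of result that prompted the director's conclusion above.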
Bibliography
Case Study References:

Fildes, Robert and Paul Goodwin. "Good and Bad Judgment in Forecasting." Foresight: The International Journal of Applied Forecasting, Fall 2007.

Gilliland, Michael. "Is Forecasting a Waste of Time?" Supply Chain Management Review, July-August 2002.

Gilliland, Michael. (2010), The Business Forecasting Deal. New York: John Wiley & Sons.

Additional Sources:

Makridakis, Spyros, Steven Wheelwright and Robert Hyndman. (1998), Forecasting Methods and Applications (3rd Edition). New York: John Wiley & Sons.

Wheeler, Donald. (2000), Understanding Variation: The Key to Managing Chaos. SPC Press.

SAS Forecasting Web Series (available for free, on-demand review):

Forecast Value Added Analysis: Step-by-Step (sas.com/reg/web/corp/4385)
Appendix: Sample SAS Code for Creating FVA Report

To understand how significantly advanced statistical models can improve forecast accuracy, you can compare the accuracy achieved using SAS Forecast Server with the accuracy achieved using a simple model. The following stored service code (also known as stored process code) creates a graph that compares the forecast accuracy achieved using SAS Forecast Server with the forecast accuracy achieved using a naïve (random walk) model. The graph also shows the relationship between forecast accuracy and variation in the data. (Note: This code was provided by Snurre Jensen, SAS Denmark.)

Installation

1. Enter the sample SAS code from the following pages into a file: FVA Analysis.sas.
2. Save the SAS program and use SAS Management Console to register it as a stored service that can be used in SAS Forecast Server.
3. Alternatively, you can simply use the code as a starting point to write your own report, and forgo registering it as a stored service.

Usage

1. First, create your forecasts using SAS Forecast Server. The number of series doesn't matter, and hierarchies can be included.
2. Then, select this stored service from the list of available reports.
Notes

- The report only compares accuracy at the most detailed level, that is, at the lowest level of the hierarchy.
- Although this code is written as a SAS stored service, you may want to strip out some of the macro language and use the code as a basis for writing your own custom report.
Sample SAS Code for FVA Report

*ProcessBody;
%stpbegin;
options mlogic symbolgen mprint;

%macro fva_report;

*%include &HPF_INCLUDE;

/* Model: RANDWALK
 * Label: Y ~ D=(1) NOINT */
proc hpfarimaspec
   modelrepository=work.models
   specname=RandomWalk
   speclabel="Random Walk"
   spectype=RANDWALK
   specsource=FSUI;
   forecast transform=NONE noint dif=(1);
   estimate method=CLS converge=0.0010 maxiter=50
            delta=0.0010 singular=1.0E-7;
run;

proc hpfselect modelrepository=work.models
   selectname=naive
   selectlabel="Naive Models List";
   spec RandomWalk;
   diagnose seasontest=NONE intermittent=100;
run;

proc sort data=&hpf_input_libname..&hpf_input_dataset. out=sortdata;
   %if &hpf_num_byvars. > 0 %then %do;
      by &hpf_byvars. &hpf_timeid.;
   %end;
   %else %do;
      by &hpf_timeid.;
   %end;
run;

proc hpfengine data=sortdata out=_null_ outfor=naive
   back=&hpf_back. lead=&hpf_lead.
   modelrepository=work.models globalselection=naive;
   id &hpf_timeid. interval=&hpf_interval. accumulate=total;
   %if &hpf_num_byvars. > 0 %then %do;
      by &hpf_byvars.;
   %end;
   forecast &hpf_depvars.;
run;

data _null_;
   if &hpf_back=0 then wheredate=%unquote(&hpf_dataend.)-365;
   else wheredate=intnx("&hpf_interval.",%unquote(&hpf_dataend.),-&hpf_back);
   call symput('wherecls',wheredate);
run;

data history (where=(&hpf_timeid. ge &wherecls.)
              keep=&hpf_byvars. &hpf_timeid. predict actual abserr max);
   set naive;
   if predict=. then delete;
   if predict lt 0 then predict=0;
   abserr=abs(predict-actual);
   max=max(predict,actual);
   %if &hpf_num_byvars. > 0 %then %do;
      by &hpf_byvars.;
   %end;
run;

proc summary nway data=history;
   %if &hpf_num_byvars. > 0 %then %do;
      class &hpf_byvars.;
   %end;
   var actual abserr max;
   output out=summary sum=sumactual sumabserr summax
          cv=cvactual cvabserr cvmax;
run;

data results1 (keep=&hpf_byvars. fa_naive cv);
   set summary;
   cv=cvactual;
   fa_naive=100*(1-sumabserr/summax);
   format cv fa_naive 4.1;
run;

%if &hpf_num_levels. > 1 %then %do;
   libname _leaf "&HPF_PROJECT_location/hierarchy/&&hpf_byvar&hpf_num_byvars.";
%end;
%else %do;
   libname _leaf "&HPF_PROJECT_location/hierarchy/leaf";
%end;

data history (where=(&hpf_timeid. ge &wherecls.)
              keep=&hpf_byvars. &hpf_timeid. predict actual abserr max);
   set _leaf.outfor;
   if predict=. then delete;
   if predict lt 0 then predict=0;
   abserr=abs(predict-actual);
   max=max(predict,actual);
run;

proc summary nway data=history;
   %if &hpf_num_byvars. > 0 %then %do;
      class &hpf_byvars.;
   %end;
   var abserr max;
   output out=summary sum=sumabserr summax;
run;

data results2 (keep=&hpf_byvars. fa_stat);
   set summary;
   fa_stat=100*(1-sumabserr/summax);
   format fa_stat 4.1;
run;

data all;
   merge results1 results2;
   %if &hpf_num_byvars. > 0 %then %do;
      by &hpf_byvars.;
   %end;
   fva=fa_stat-fa_naive;
run;

legend1 label=("Models: ");
symbol1 value=star color=blue height=0.75;
symbol2 color=red value=x height=0.75;
axis1 order=(0 to 100 by 10) label=("Forecast Accuracy");
axis2 label=("Volatility");
title "Comparison of forecast accuracy";

proc gplot data=all;
   plot (fa_stat fa_naive)*cv / overlay vaxis=axis1 haxis=axis2
                                frame legend=legend1;
run;
quit;

symbol1; axis1; title;

%mend fva_report;
%fva_report;
%stpend;
About SAS
SAS is the leader in business analytics software and services, and the largest independent vendor in the business intelligence market. Through innovative solutions, SAS helps customers at more than 60,000 sites improve performance and deliver value by making better decisions faster. Since 1976, SAS has been giving customers around the world THE POWER TO KNOW.