MEF-Harry Hatry Performance Measurement
Results-Based Budgeting
From
Harry P. Hatry
The Urban Institute Press
Washington, DC
Chapter 13
Results-Based Budgeting
Look at life through the windshield, not the rearview mirror.
Budgeting is the annual (sometimes biennial) process by which organizations
estimate their resource needs and allocations for the future. This chapter focuses on how
performance measurement information, particularly outcome data, can be used to assist
budget formulation and review.
For many governments, making the budget process more results-based has been
the primary motivation for legislating performance measurement. A budget process
should encourage, if not demand, a performance orientation.
Identify poorly performing programs, thereby signaling the need to make changes and
allocate less or more funds. (Other information is needed to determine which changes
to make.)
Identify programs that are performing well and presumably need no significant
changes. (Even here, other information is needed to determine what, if any, changes
may be desirable.)
Assess new programs for what they are expected to accomplish, not just their costs or
general statements of their expected value. Are the new programs worth their
expected costs?
Compare different proposed options on their expected outcomes and costs.
Help identify agency activities that have similar outcome indicators and, thus, are
candidates for coordination and perhaps revised funding needs.
Justify budget choices more effectively to agency and elected officials, and to the
public.
Provide the basis for greater agency accountability, if reasonable performance targets
are set for the budget year and achieved values are subsequently compared to targets.
The first three points are discussed at length in chapter 10, primarily in the
context of analyzing performance information for changing programs and policies during
the budget year. This chapter focuses on using the same information to establish and
examine budgets.
Results-based budgeting supports an overall agency focus on outcomes. Here is
an example of its use in helping justify budgets:
The Massachusetts Department of Environmental Protection sought to obtain
funding from the state legislature to line unlined landfills. It justified the
expenditure by reporting the product of the expenditure as the number of acres
expected to be lined. This did not move the legislature, which turned down the
request. The department then switched to a more outcome-based approach and
justified the request in terms of gallons of leachate prevented. Legislators asked
for a definition of leachate. When they found that it referred to potential
pollutants leaked into the groundwater and water supply, they approved the
funding request.1
The U.S. Office of Management and Budget has instituted probably the most
extensive use of outcome information as part of its PART process. OMB reviews each
major federal program on a number of performance factors, including results achieved.
OMB has emphasized that the ratings are not the only factor in decisions and that low (or
high) scores do not necessarily mean decreased (or increased) funding. Nevertheless, the
ratings appear to have affected, or at least supported, some funding decisions.2
In a performance-based budgeting system, agencies need to select targets (make
projections) for the budget year for each output, outcome, and efficiency indicator, as
well as for expenditures.3
A key problem for results-based budgeting, especially at the state and federal
levels, is to persuade legislators and legislative staffs to switch from primary dependence
on line-item budgeting to an outcomes focus. At the very least, legislators and their staffs
need to address outcomes during appropriation hearings. The executive branch is
responsible for providing meaningful, reliable, important outcome information to its
legislators, in a user-friendly format. When some state governments initiated their
results-based budgeting efforts, they loaded legislators with large numbers of indicators
and data (sometimes including outputs and outcomes mixed in together) presented
unattractively, thus discouraging their use.
This book does not cover budgeting in general. Instead, it addresses the new
dimension of outcome information. The key issues in results-based budgeting are listed in
exhibit 13-1 and discussed later in this chapter, after a bit of history.
A Bit of History
Performance budgeting has been around at least since the 1960s. At that time, it
focused primarily on the relationship between inputs and outputs. Some communities
(such as Milwaukee, Wisconsin, and Nassau County, New York) produced budgets that
contained hundreds of unit-cost measurements linking costs or employee-hours to
outputs. Sunnyvale (California) has used such measurements since the early 1970s,
converting unit costs into productivity indices focusing on outputs. These indices permit
comparisons across services and across years. More recently, Sunnyvale has begun
focusing on outcomes.
A typical output-based performance indicator would be, for example, the cost (or
number of employee-hours) per ton of asphalt laid. In some cases, these output-based
indicator reports were dropped because the number of unit-cost indicators overwhelmed
the external users of the information. Nevertheless, such unit-cost information can be
useful to managers and supervisors (and elected officials, if they wish) for tracking the
technical efficiency of their activities.
The major new dimension is to relate outcomes to budget requests. The term
results-based budgeting reflects this new focus.4 At both the federal and state levels,
recent legislation has emphasized the concept that budget decisions should be made not
based on dollars alone, nor on physical outputs, but in relation to outcomes.
Of the many recent performance measurement systems that have been initiated to
provide some form of results-based budgeting, the Government Performance and Results
Act (GPRA) of 1993 is a prime example (recently expanded to include the PART process
discussed briefly above). This federal budget action was unique in (a) having support
from both political parties and both the executive and legislative branches and (b) being
explicitly embodied in legislation, unlike earlier approaches such as the Planning-Programming-Budgeting System (PPBS), Zero-Based Budgeting (ZBB), and
Management by Objectives (MBO).
The Texas, Oregon, and Louisiana legislatures were among the first to
legislate a form of results-based budgeting, sometimes including selected outcome
indicators with recent and projected data in their appropriations acts.
It reduces the temptation for agencies and their programs to focus all their funding
decisions on the short term.
For some programs, achievement of the hoped-for outcomes will require funds not
only from the current year's budget but from future budgets as well.
When important outcomes will not occur until after the proposed-budget period, the
outcome targets for the budget year will not reflect those effects.
A partial solution is to build into the budget process any important outcomes
expected to occur because of the proposed new funding. Programs would be asked to
estimate the values for each outcome indicator for each out-year the proposed budget
funding is expected to significantly affect. Requiring outcome projections in the budget
development process is likely to encourage agencies to consider multiyear effects.
For some programs, this forecasting can be done readily. For example, a federal
program to help residential housing might request funds for rehabilitating the homes of a
certain number of families. The program can probably predict the years in which those
rehabs will occur and the number of families occupying the housing units. A program
that provides drug treatment funding will find it more difficult to estimate the number of
clients who will become drug free and in which years. Performance measurement data on
past success rates will likely help those preparing or reviewing the budget to estimate
such outcomes.
A less demanding option is to ask for estimated future outcomes without requiring
that they be distributed by year.
The need to consider future outcomes of the current year's budget is less frequent
for local than for federal and state programs. But even at the local level, some programs,
such as school and health programs, will have long-term outcome goals.
Most governments have not addressed the problem of long-term outcomes. A
partial exception is that some state governments separate expansion requests (including
new programs) from requests for continuation of current programs. For expansion
requests, these governments require out-year projections of future outcomes.
Linking inputs to intermediate outcomes. Past data on the relationship between
inputs and intermediate outcomes usually provide only rough indications of what will happen in the budget year. For
example, federal agencies such as the departments of Education, Housing and Urban
Development, Health and Human Services, and Labor as well as the Environmental
Protection Agency provide much of their assistance to state and local agencies rather than
to the ultimate customers. If these state and local agencies undertake promising steps that
the federal department has encouraged, the steps can be considered intermediate
outcomes for that department. Data on the past relationship between the amounts of
federal funds and assistance, on the one hand, and the extent to which the state and local
governments undertook promising initiatives, on the other, are likely to be useful. But the
past relationship provides only a rough estimate of what state and local agencies will do
in the budget year.
Some intermediate outcomes can be estimated relatively accurately. For example,
agencies can make fairly accurate estimates of such intermediate outcomes as future
response times, given particular amounts of staff and dollar resources.7 Even here,
however, a number of outside factors over which the program has little control can
intervene. For example, an unexpectedly large number of requests for service or changes
in the proportion of complex requests can have major effects on response times.
Here are some examples of difficult-to-predict intermediate outcomes:
Number of businesses (or households) that alter their handling of waste to be more
environmentally prudent after receiving assistance from state or local programs
Number and percentage of parents who take special parenting classes and then alter
their behavior in ways that encourage their children's learning in school
Customer satisfaction
All these outcomes are driven not only by agency efforts to seek certain customer
behaviors and perceptions but also by many aspects of the behavior and circumstances of
the customers themselves, as well as outside factors.
The bottom line is that agencies should expect historical data on costs and
intermediate outcomes to be useful in preparing cost and intermediate outcome
information for budgets. In many cases, however, agencies will be able to make only
rough projections about the future relationship between costs and intermediate outcomes.
Linking inputs to end outcomes. As a rule, agencies should not expect to have
solid, known relationships between inputs and end outcomes, no matter how good the
historical data are. (In more economic terms, little information is available about the
production function that relates the inputs to the end outcomes.) Nevertheless, these
relationships are extremely important and need to be considered, at least qualitatively, in
any budget process.
Some end outcomes are easier to relate to inputs than others. For example, the
number and percent of a state or local jurisdiction's roads that are in satisfactory
condition can be considered an end outcome indicator for road maintenance services.
These numbers relate closely to the funds that the agency applies to road maintenance
and repair. Past data on this relationship can be used to estimate the expenditures needed
in order to achieve a certain value for this outcome indicator (or conversely, to estimate
the percent of road miles in satisfactory condition given a particular funding level). In
contrast, how much a clients condition is improved by expenditures of particular
amounts of federal, state, local, or private funds to reduce substance abuse or to enhance
elementary education is considerably more difficult to estimate.
Projecting how well budgeted resources will achieve prevention (whether of
crime, disease, family problems, or so on) is extremely difficult. At best, the historical
data will provide very rough clues about the relationship between resources and
prevention. In-depth studies can provide evidence, but decisionmakers may need to rely
more heavily on qualitative information and subjective judgments on the prevention
outcomes expected from a particular level of budgeted resources.
In general, the more direct a program's influence over an outcome, the greater the
program's ability to develop numerical relationships between inputs and the outcome.
Local governments and private agencies generally have more direct influence on end
outcomes than state or federal agencies; therefore, the relationships between their inputs
and outcomes (both intermediate and end) are likely to be clearer. Nevertheless, for many
end outcome indicators, the relationship will inevitably be imprecise. How many more
resources would be needed to increase the percentage of customers satisfied with their
recreation experiences by 5 percentage points (such as from 65 percent to 70 percent)?
The answers to questions like this usually can be estimated only very roughly, at best.
If identifying the quantitative (or even qualitative) relationships between size and
type of input, type of intervention, and amount of outcomes achieved is likely to be
crucial to future major budget decisions about an existing program, agencies should seek
an in-depth program evaluation.
Agencies can systematically track changes in resources to assess the resulting
differences in outcomes and then use that information to help make future budget estimates.
Agencies and their programs might also be able to intentionally alter the amount of input
to certain activities to see how more or fewer resources affect outcomes, and then use
such information for future estimates.
Linking outputs to outcomes. Outcomes presumably flow from outputs. For
example, the number of calls answered is an output for a service (whether these calls
relate to police, fire, sewage backups, travel information, or any other service request).
This output leads to outcomes, such as what resulted and whether the requests were
fulfilled to the customers' satisfaction. Some outcome indicators explicitly relate outputs
to outcomes, such as the percent of those to whom services were provided (an output)
who had successful outcomes.
Staff preparing or reviewing budget proposals should examine the amount of
output expected in the budget year and assess what outcomes can be expected from that
number, and when. If X customers are expected to be served during the budget year (an
output), how many customers (and what percent) can be expected to be helped to achieve
the desired outcomes that year and in future years (an outcome)? For example,
how many persons are expected to find employment after receiving training services,
and when?
what percentage of babies born to low-income women who received appropriate
prenatal care will be healthy?
Those preparing the budget request and those subsequently examining it should
ascertain that the outcome numbers make sense relative to the amount of output. For
services that have lengthy lag times between outputs and outcomes, the outcome numbers
for the budget year need to be compared to output numbers in the relevant previous
years.
Linking intermediate outcomes to end outcomes. It is likely to be difficult to
provide quantitative relationships between intermediate and end outcomes, but it is often
easier than directly estimating the relationships between input and end outcomes. For
example, a state agency might provide funds or technical assistance to local agencies to
undertake an environmental protection regulation designed to lead to cleaner air. The
relationship between the local agency's successfully getting businesses to adopt better
practices for handling hazardous wastes (an intermediate outcome for both the state and
local agencies) and the extent to which cleaner air results (an end outcome for both
agencies) is uncertain. Some relationships are clearer, such as the extent to which
increased percentages of children vaccinated against a disease can be expected to lead to
reduced incidence of the disease among the vaccinated population.
How to make these links? For most programs, knowledge about most of the
above links is lacking. Historical data from the performance measurement process, even
if it has been implemented for only one or two years, can provide clues. But there will
almost always be considerable uncertainty about projections of outcomes, especially end
outcomes, for given budget levels. A key is to be able to make plausible connections
between the amount of budgeted funds and the outcomes projected. These connections
can be based on past performance and modified by information on changes in either
internal or external factors expected in the budget year.
5. The Role of Efficiency Indicators
Efficiency is an important consideration in the budget process. As noted earlier,
efficiency is traditionally measured as the ratio of inputs to outputs. The new type of
indicator added in results-based budgeting is ratios of inputs to outcomes. An example of
this is cost per person served whose condition improved significantly after receiving the
service. The more traditional output-based efficiency indicator is cost per person
served.
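Both ratios can be illustrated with a few hypothetical figures; the dollar amount and counts below are invented for the sketch, not drawn from the text:

```python
# Hypothetical illustration of the two efficiency ratios named above:
# the traditional output-based ratio (cost per person served) and the
# outcome-based ratio (cost per person whose condition improved).

cost = 500_000.0   # assumed program expenditure for the year
served = 1_000     # output: persons served
improved = 400     # outcome: persons whose condition improved significantly

cost_per_served = cost / served      # output-based efficiency: 500.0
cost_per_improved = cost / improved  # outcome-based efficiency: 1250.0

print(cost_per_served, cost_per_improved)
```

The outcome-based ratio is necessarily the larger of the two whenever fewer people improve than are served, which is why the two indicators answer different budget questions.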
6. Setting Performance Targets
Two special target-setting options are available to programs that are highly
uncertain about the future values of one or more outcome indicators: variable targets and
target ranges.
The variable target option applies to outcome indicators whose values are
believed to be highly dependent on a characteristic of the incoming workload and where
major uncertainty exists about that characteristic. In this procedure, the expected
relationship between the characteristic and outcome is identified first. The final outcome
target is determined after the fact, depending on the workload characteristics that actually
occurred in the budget year.
For example, if an outcome is expected to be highly sensitive to the mix of
workload (e.g., customers) coming in, and the mix for the budget year is subject to
considerable uncertainty, the program can set targets for each category of workload
without making assumptions about the workload mix. The aggregate target is determined
after the budget year closes and the mix is known.
For the indicator percent of people who leave welfare for work, the program
might set separate targets for groups defined by their amount of formal education.
Suppose the program estimated that 75 percent of people coming in with at least a high
school diploma would find jobs and get off welfare in the budget year, but only 30
percent of those with less than a high school education would do so. These targets would
be presented in the budget. The aggregate percent, which might also be included, would
be based on the program's estimated mix of clients.
At the end of the year, the aggregate target for the year would be calculated for
the actual education mix and compared to the actual aggregate percent achieved. If 420 people who had
not completed high school and 180 people who had completed high school entered the
program during the year, the aggregate target would be 44 percent: 30 percent of 420
(126) plus 75 percent of 180 (135), equaling 261. Dividing 261 by the total number in the
program that year (600) yields the aggregate target for the share expected to go off
welfare, 44 percent.
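The arithmetic above can be sketched as a small calculation; the group labels are hypothetical names, while the rates and counts are the figures from the welfare example:

```python
# Variable-target option: per-group targets are set before the budget
# year; the aggregate target is computed only after the year closes,
# once the actual workload mix is known.

def aggregate_target(group_targets, actual_counts):
    """Weight each group's target rate by its actual workload count."""
    total = sum(actual_counts.values())
    expected = sum(group_targets[g] * n for g, n in actual_counts.items())
    return expected / total

targets = {"no_diploma": 0.30, "diploma": 0.75}  # set in the budget
counts = {"no_diploma": 420, "diploma": 180}     # known at year end

print(round(aggregate_target(targets, counts), 3))  # 0.435, about 44 percent
```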
The target might also be linked to the national unemployment rate. For example,
the program target might be 15 percent of enrollees off welfare if the national
unemployment rate turned out to be over 5.4 percent and 25 percent off welfare if the
national unemployment rate turned out to be less than 5.0 percent. The program would
not know if it achieved the target until the national figure became available. Another
option is to use a formula that relates expected outcome to the value of the external
factorin this example, a formula that relates the expected percentage off welfare to the
national unemployment rate.
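A formula-style link to the external factor might look like the sketch below. The 15 and 25 percent rates and the 5.4 and 5.0 percent thresholds are the text's example numbers; the straight-line interpolation across the band between them is purely an assumption:

```python
# One possible formula relating the expected percentage off welfare to
# the realized national unemployment rate (known only after the year).

def welfare_exit_target(unemployment_rate):
    if unemployment_rate > 5.4:
        return 0.15              # weak labor market: lower target
    if unemployment_rate < 5.0:
        return 0.25              # strong labor market: higher target
    # assumed linear interpolation across the 5.0-5.4 percent band
    return 0.25 - (unemployment_rate - 5.0) / 0.4 * 0.10

print(welfare_exit_target(5.6), welfare_exit_target(4.8))  # 0.15 0.25
```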
The target range option applies to any outcome indicator with highly uncertain
future values. A range of values, rather than one number, is given as the target for the
indicator. Many programs might benefit from this approach, especially for their end
outcomes. Here are some examples of target ranges:
In some instances, executives and elected officials will prefer unclear, fuzzy
goals. For example, school districts have debated whether they should include precise
objectives on student test improvement (such as increasing the overall scores by 5
percentage points or reducing the difference in performance between the minority and
majority student population by 7 percentage points during the year). These officials might
be willing to accept a target range.
Note: Agency personnel sometimes are reluctant to provide targets that are lower
than the previous year's targets, even if budget-year resources are lower in real terms
(i.e., after allowing for cost increases). They fear this will make them look bad. Even so,
it is important that agencies and their individual programs realistically estimate the
consequences of reduced resources. Agencies should encourage such reporting if it can
be justified. Not being able to do everything they did in the previous year is not a basis
for applying blame to programs if resources are cut. Upper management may believe that
productivity improvements can make up for the reduced resources (and this may be
trueup to a point). If political pressure requires that a program establish published
targets that are higher than the program believes are achievable, the distinction should at
least be made clear internally.
Setting performance targets is an excellent management tool for agencies,
particularly if the targets are provided and progress is examined periodically during the
year, such as monthly or quarterly. Even if an agency does not use outcome targets in its
budget process, the agency can choose to retain an internal outcome-targeting process.
7. Use of Explanatory Information
As discussed in chapters 10 and 11, agency programs should be encouraged to
provide explanatory information along with their past performance measurement data
when developing and submitting budget requests.
Staff preparing budgets should examine such information for insights into why
the program performed well or poorly and for any suggestions about what is needed to
improve it. This information can also help identify program changes likely to affect cost
and outcome estimates.
As already noted, the results of any relevant program evaluations should be part
of budget preparation and review. The findings on outcomes and the extent to which the
program has been instrumental in producing the outcomes are important for judging the
value of the current program. Persons who review the program's proposed budget can use
later performance data to assess whether the proposed budget reflects the changes
suggested by the evaluation. Program evaluation findings should typically take
precedence over findings from the agency's performance measurement system.9
For target values that deviate substantially from past results, agency programs
should be encouraged to provide explanations for those targets, especially on key
outcome indicators. Such information should identify the basic assumptions used to
develop the outcome projections and any important external factors expected to make the
outcome value deviate from past performance levels.
Explanatory information on past performance, including any available findings
from recent program evaluations, can help identify the reasons for success or lack of it
that is, program strengths and weaknesses. Budget preparers and reviewers can then
assess the extent to which steps have been taken, or are needed, to correct problems.
technology (as has been done by the states of North Carolina and Oregon and by
Multnomah County, Oregon). Agency programs that contribute to several outcomes can
be coded to identify which programs contribute to which outcomes.15 Such crosswalks
can at least trigger the need for coordination and cooperation among programs, and they
will help budget examiners detect the need for across-program budget reviews.
approaches and then using that information as the basis for estimating outcomes and
costs. These procedures are worthwhile if the agencies can wait to make their final
decision until the test has been completed and the findings have become available.
Agencies should use the findings from such analyses and experiments to help formulate
and subsequently justify budget proposals.
As the use of performance measurement, and particularly results-based budgeting,
grows, the need to project outcomes systematically will also grow. The field of program
analysis may then stage a comeback.
Cost-benefit analysis. Cost-benefit analysis goes one step further than program
analysis. It provides a monetary estimate of the value of a program. (Cost-benefit
analysis can also help evaluate the value of a programs past performance.) Its key
characteristic is that it translates nonmonetary outcomes into monetary ones. The costs
are compared to the estimated dollar benefits to produce cost-benefit ratios and estimated
differences in the monetary values of the costs and benefits. Before these conversions to
monetary values can be performed, the basic outcome values, usually measured in
nonmonetary units, are needed. That is, program analysis needs to be done first. Cost-benefit analysis adds an additional, usually difficult, step to the process.
The monetary value of the outcomes has to be imputed in some way. For example,
an estimate that X number of traffic accidents could be avoided by a particular activity
might be converted into monetary estimates of the costs of those accidents, including
damage repair, hospital and other health care, time lost from work, and the economic
value of any lives lost. The costs of the activity being considered would then be
compared to these dollar valuations and a cost-benefit ratio calculated.
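As a sketch of that calculation, with every dollar figure an invented assumption:

```python
# Hypothetical cost-benefit calculation for a traffic-safety activity:
# each avoided accident is valued by summing its assumed cost components,
# and total benefits are compared to the activity's cost.

accidents_avoided = 50
cost_per_accident = {      # assumed dollar value of one avoided accident
    "damage_repair": 8_000,
    "health_care": 15_000,
    "lost_work_time": 5_000,
}

benefits = accidents_avoided * sum(cost_per_accident.values())
activity_cost = 700_000
ratio = benefits / activity_cost

print(ratio)  # 2.0: benefits estimated at twice the activity's cost
```

Note that this sketch omits the economic value of lives lost, which is precisely the most controversial valuation step.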
Sound cost-benefit analysis, whether of past program accomplishments or
projected program value, can provide major backup information for program budget
requests. Such calculations can also appeal to public and private officials, because most
outcomes are converted into dollars and summarized in one number (the cost-benefit
ratio), which can be interpreted as the value of the program. One summary number is
much easier for decisionmakers to handle. The usual application of this approach is to
compare options within a single service area, but it could also be used to compare
programs across services.
Cost-benefit analysis has a number of drawbacks. The calculations of monetary
value usually require numerous assumptions that can be quite controversial. For example,
how should the value of lost work time or of deaths be determined? (The value of lives
lost has sometimes been estimated based on the economic potential of human beings at
particular ages. This approach sounds reasonable, but giving older people little or no
value in the calculations implies that it is all right to "knock off" the elderly.) Another
problem is that the monetary values often accrue to different populations from the
populations that pay the costs. For example, revenues for most government expenditures
are raised by taxes from the public and businesses, but the benefits often accrue primarily
to particular groups.
their major objectives (such as those included in strategic plans) or descriptions of the
strategies the programs propose to use to achieve the objectives, the reviewers will need
to ask the program to construct these, or construct them themselves, discussing them
with program personnel as necessary.
For example, federal, state, or local litigation offices may emphasize deterrence of
future criminal behavior in their formal mission statements. Litigation programs,
however, usually have not included indicators that explicitly address deterrence. The
outcome indicators tracked will probably focus on bringing offenders to justice. From the
program's viewpoint this focus is reasonable, but reviewers should consider whether it is
feasible to track deterrence using counts of nondeterrence as a surrogate (i.e., the amount
of reported criminal behavior) or be content to seek qualitative information. (Note:
Measuring deterrence directly is usually best done, if done at all, through in-depth studies
and not through a performance measurement process.) Reviewers might also decide that
the litigation program does not in fact have the responsibility or the capacity for
estimating prevention. They might determine that the mission statement was overstated
and that the program's focus on the number of offenders brought to justice is appropriate.
3. If the program is seeking increased resources, assess whether it has provided
adequate information on the amount each output and outcome indicator is expected to
change over recent levels. The changes might be expressed as a special table showing
pluses or minuses for each affected indicator. Programs need to make clear what effects
their special proposals are expected to have on outputs and outcomes, not merely on
funding and personnel resources.
4. Examine the programs projected workload, outputs, intermediate outcomes,
and end outcomes, as well as the amount of funds and personnel. Make sure these
numbers are consistent with each other (e.g., that the amount of output is consistent
with the projected workload). Determine whether the program has included data on the
results expected from the outputs it has identified. Use steps such as those listed in
exhibits 13-2 and 13-3 to develop and examine the targets. Output indicators normally
should be included in the budget submission for each major category of workload. (Note:
outputs represent completed work. Workload includes work in progress and items that are
pending.) Intermediate outcomes should be consistent with outputs and end outcomes
consistent with intermediate outcomes. If such information has not been included, the
program can be asked to provide the needed data.
The data on outputs and outcomes should be checked for consistency with each
other. For example, does the number of successes for a reporting period exceed the number
of cases completed during that period?
Note, however, that substantial time lags can occur between the time a customer
comes in for service and the outcomes. For example, the outcome indicator percent of
cases that were successful should be derived by dividing the number of cases expected
to be successfully completed during the budget year by the number of cases completed
during the year, regardless of the year the case was initiated, not by the number of cases
worked on or started during the budget year. Another example: A budget-year estimate
for the outcome indicator percent of child adoption cases in which the child was placed
with adoptive parents within 24 months of the child's entry into the system would need
to be based on the number of children who came into the child welfare system two years
before the budget year. Where appropriate outcome indicators and/or outcome data have
not been provided, ask the program to provide them.
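The lag rule in these examples (compute the budget-year rate over cases completed that year, whatever year they began) can be sketched with hypothetical case records:

```python
# Percent of cases successful for a budget year: the denominator is cases
# *completed* that year, regardless of the year each case was started.

cases = [
    # (year_started, year_completed, successful)
    (2004, 2006, True),
    (2005, 2006, True),
    (2006, 2006, False),
    (2005, 2005, True),  # completed before the budget year: excluded
]

budget_year = 2006
completed = [c for c in cases if c[1] == budget_year]
success_rate = sum(c[2] for c in completed) / len(completed)

print(round(success_rate, 2))  # 0.67: 2 of 3 cases completed in 2006
```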
Two reminders:
Outcomes can result from activities undertaken before the budget year. Also, some
outcomes intended to result from the proposed budget might not occur until after the
budget year. The budget submission should identify such situations.
In the initial years of the performance measurement system, programs may not be
able to provide data on some outcome indicators.
5. Compare the latest data on actual performance to those for previous reporting periods
and to the proposed budget targets.
Compare historical data on individual outcome indicators to the past targets set for
those indicators to assess the program's accuracy in setting targets. In light of this
past experience, assess the program's proposed targets. Some agencies may have a
pattern of being highly optimistic about their ability to achieve outcomes; others may
have a pattern of overly conservative targets. Budget analysts should take this into
account as they interpret target achievement. Ideally, targets should be set at a level
that encourages high, but achievable, performance. (The budget analysis office should
attempt to track the proclivities of individual program managers to set their targets
overly high or low.)
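Such tracking reduces to comparing each year's target with the value actually achieved. As a minimal sketch, with entirely hypothetical figures for one outcome indicator:

```python
# Sketch: tracking a program's target-setting bias from past (target, actual)
# pairs for one outcome indicator. All values are hypothetical.
history = [  # (fiscal year, target, actual)
    (2004, 70.0, 62.0),
    (2005, 72.0, 64.5),
    (2006, 74.0, 66.0),
]

# Mean of (actual - target): a persistently negative value suggests a pattern
# of optimistic targets; a persistently positive one, overly conservative targets.
bias = sum(actual - target for _, target, actual in history) / len(history)
print(f"average deviation from target: {bias:+.1f} points")
```

A budget analysis office could keep one such figure per program manager and consult it when interpreting each year's proposed targets.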
6. Examine the explanatory information, especially for outcome indicators whose
past values fell significantly below expectations and for any performance targets
that appear unusually high or low. This step should be given special attention when any
of the earlier steps indicate that the performance levels projected need further
examination. Explanatory information should be examined before any conclusions are
drawn about the performance of the program and its resource implications.
Explanations can be substantive, or they can be merely rationalizations or excuses. To
assess the value of the explanations, the analysts may need to follow up with the program
to clarify and/or obtain more information.
7. For programs likely to have delays or backlogs that might complicate
program services, be sure the data adequately cover the extent of delays, backlogs, and
lack of coverage. Buildups of such problems can be a major justification for added
resources. The size of any delays or backlogs, and how these may be growing, can be
important customer-focused, quality-of-service performance indicators for social, health,
welfare, loan, licensing, and many other programs. For legal prosecutions and court
cases, justice delayed is justice denied.
Conversely, if a program's indicators show no evidence of significant delays, then
existing resource levels appear adequate for the future, unless the program provides
evidence that a significant buildup of its future workload is likely. Programs, where
possible, should systematically categorize their incoming caseloads by level of difficulty
or complexity (see chapter 8). Programs should also project the size of their caseload by
difficulty or complexity as a factor in determining their proposed budget. Is there any
evidence that the program is now getting or expects to get more complex and/or more
difficult cases? Such changes would offer justification for additional resources.
Indicators that programs can be asked to provide include the following:
Counts of the number of cases pending and projected at the end of each year (tracked
over time, this will indicate buildups)
Indicators of the time it has taken and is expected to take, given proposed budget
resources, to complete various activities
Estimates of the number of cases that will have to be turned away (for programs that
have the discretion to turn them away)
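The first of these indicators, tracked over time, is a simple year-end tally. A sketch, with hypothetical backlog figures:

```python
# Sketch: tracking year-end pending-case counts to spot a backlog buildup.
# All figures are hypothetical.
pending_at_year_end = {2004: 410, 2005: 455, 2006: 540, 2007: 660}

years = sorted(pending_at_year_end)
growth = [pending_at_year_end[y] - pending_at_year_end[y - 1] for y in years[1:]]
print(growth)  # year-over-year increases in the backlog

# A backlog that grows every year is the kind of buildup that can justify
# added resources in the budget request.
building_up = all(g > 0 for g in growth)
print("backlog building up:", building_up)
```

Here the annual increases themselves are growing, which is stronger evidence of a buildup than the raw counts alone.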
8. For regulatory programs, be sure that adequate coverage is provided for
compliance outcomes (not merely the number of inspections). Does the budget submission
include such compliance estimates for the budget period? Do the monitoring resources
proposed in the budget appear too small or too large compared to the expected outcomes?
9. Ascertain that the program has sufficiently considered possible changes in
workload that are likely to affect outcomes (such as higher or lower proportions of
difficult workload). Programs may not report such breakouts in their budget submissions,
but they are often able to supply such information. (Programs should be encouraged, for
their own data analyses, to break out their outcome data by various work and customer
characteristics, such as type of case, its difficulty, and different locations or facilities.)
For example, federal and state correctional facilities will probably have internal reports
on individual facilities and facility categories, such as security level and type of prisoner.
Health and human services programs can probably provide some service data on
individual facilities or offices and on various demographic groupings of clients.
Examine whether the outcomes differ substantially for some service
characteristics (such as for some facilities or regions) over others. If so, examine why.
This information can be very helpful in interpreting a program's projected outcome data.
For example, certain types of locations or cases may be considerably more difficult to
handle than others, suggesting that lower-than-desired projected performance results
from an increase in the proportion of difficult cases, thus providing a supportable case
for lower outcomes. Budget reviewers should look for evidence that substantially more
difficult (or easy) cases are likely to come in during the budget year.
Comparing outcomes among demographic groups is also important in assessing
equity and fairness. Are some groups underserved? Should additional resources be
applied to those groups? Even though identifying who loses and who gains can be a
political hazard, the information is basic to resource allocation. It needs to be addressed.
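Such breakouts amount to grouping outcome data by case or customer characteristics and computing the rate within each group. A minimal sketch, with hypothetical categories and records:

```python
# Sketch: breaking out an outcome rate by case difficulty, to see whether a
# shift in case mix explains a lower overall rate. All records are hypothetical.
from collections import defaultdict

cases = [
    {"difficulty": "routine", "successful": True},
    {"difficulty": "routine", "successful": True},
    {"difficulty": "routine", "successful": False},
    {"difficulty": "complex", "successful": True},
    {"difficulty": "complex", "successful": False},
    {"difficulty": "complex", "successful": False},
]

totals = defaultdict(lambda: [0, 0])  # difficulty -> [successes, case count]
for c in cases:
    totals[c["difficulty"]][0] += c["successful"]
    totals[c["difficulty"]][1] += 1

for difficulty, (wins, n) in sorted(totals.items()):
    print(f"{difficulty}: {100.0 * wins / n:.0f}% successful ({n} cases)")
```

The same grouping works for facilities, regions, or demographic groups; a gap between groups is the signal that prompts the equity questions raised above.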
10. If recent outcomes for a program have been substantially worse than
expected, make sure the program has included in its budget proposal the steps, and
resources, it plans to take toward improvement. If the program projects improved
performance, are the resources and planned steps commensurate? If not, why not? (For
example, substantial time may elapse between the approval of funding, its
implementation, and the point at which the funded activities affect certain
outcomes.)
11. Examine findings from any program evaluations or other special studies
completed during the reporting period. Assess whether these findings have been
adequately incorporated into the budget proposals. This includes studies produced by
other organizations. Such information may provide added support for the activities and
budget proposed by the program, or it may contradict the findings produced by the
program to support its proposed activities and budget.
12. Determine whether the program has developed and used information on the
relationship between resource requirements, outputs, and outcomes (e.g., the added
money estimated to increase the number of successfully completed cases by a specified
amount). Assess that information for plausibility. Few programs are likely to have
undertaken much systematic analysis of this relationship. Programs should be encouraged
to do so to help substantiate future budget requests.
Relating expenditures and resources to outcomes (both intermediate and end
outcomes) is usually difficult and uncertain. However, to the extent that additional dollars
and staff enable the program to take on more work (more customers, more investigations,
more road repairs, more inspections, etc.), the program can probably estimate roughly
how much additional work it can handle based on past performance information. For
example, a program may be able to estimate the percent of cases or incidents it might not
be able to handle (such as identifying illegal immigrants) without the added funding
requested.
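One rough way to make such an estimate from past performance data is to divide last year's spending by last year's completed work and apply the resulting unit cost to the requested increase. A sketch, with entirely hypothetical figures:

```python
# Sketch: rough marginal estimate of the added work an increment of funding
# might buy, based on past cost per completed case. All figures are hypothetical.
past_spending = 2_400_000.0   # dollars spent last year
past_completed = 1_200        # cases completed last year
requested_increase = 300_000.0

cost_per_case = past_spending / past_completed      # $2,000 per case
added_cases = requested_increase / cost_per_case    # 150 more cases
print(f"~{added_cases:.0f} additional cases, at ${cost_per_case:,.0f} each")

# Caveat, as the text notes: this assumes the new cases resemble past ones;
# a more difficult incoming caseload would raise the cost per case.
```

Estimates of this kind are crude, but they give budget analysts a starting point for testing whether a program's projected outputs are commensurate with the funds requested.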
Many, if not most, programs will be unlikely to have investigated the cost-to-output and cost-to-outcome relationships that underlie their budget requests. However,
these relationships are at the heart of resource allocation decisions, implicitly if not
explicitly, and the program should be pushed to be as explicit as possible about them.
After all, the projected targets the program sets each year based on its outcome
indicators by definition imply such relationships, however rough the estimates may be.
A program seeking additional resources will tend to be overly optimistic about the
outcomes that will result. Budget analysts should look for supportable estimates of the
relationships between resource requirements (dollars and personnel) and at least
approximate values for each outcome indicator.
Over the long run, programs should be encouraged to develop information about
these relationships. The analysis needed for such studies usually requires special
background, however, which is not likely to be in place in most programs. Analytical
staff, whether attached to each program or to a central analysis office, should be helpful
for this purpose.
13. Identify indicators with significantly reduced outputs or outcomes projected
for the budget year (compared to recent performance data) and no decrease in funding
(adjusted for projected price increases) or staffing. Identify and assess the program's
rationale. Reduced funding or staffing projections are obviously plausible rationales for
reduced outcome projections, as is a more difficult or complex workload in the new year.
If the program has been systematically categorizing its incoming caseload by level of
difficulty or complexity, it should be able to provide evidence supporting a reduction.
The program might already have in its pipeline many especially difficult cases. For
example, litigation or investigation programs may be working on several cases that are
highly complex and require additional program resources.
Other possible reasons for lower outcome targets include (a) an unexpected jump
in workload during the budget year without an accompanying increase in resources,
leading to reductions in the percent of cases for which the program can produce
successful outcomes; (b) new legislative or agency policies that add complications or
Exhibit 13-1
Key Issues in Results-Based Budgeting
1. Need to increase focus on outcomes, not only inputs and outputs
2. Limitations in the usefulness of performance measurement information for results-based budgeting
3. Time frame that should be covered by results-based budgeting, especially considering
that outcomes often occur years after the one in which the funds were budgeted
4. Whether proposed inputs can be linked to outputs and outcomes
5. The role of efficiency indicators
6. Setting performance targets in budgets
7. Use of explanatory information
8. Strength of program influence over future outcomes
9. Using performance information in formulating and examining budget requests
10. Applying results-based budgeting to internal support services
11. Using results-based budgeting for capital budgeting
12. Budgeting-by-objectives and budgeting for outcomes
13. Special analytical techniques for projections
14. The role of qualitative outcome information in results-based budgeting
Exhibit 13-2
Suggested Steps in Developing Outcome Targets
1. Examine the agency's strategic plan (if one exists). Targets contained in the budget
should be compatible with targets in the strategic plan.
2. Analyze the historical relationships between inputs (expenditures and staffing),
outputs, and outcomes. Examine any explanatory information that accompanied the
historical data. Use that combination of information to provide an initial estimate of
targets compatible with the amount of resources being considered for the program's
proposed budget.
3. Consider each factor listed in exhibit 13-3 (such as outside resources, environmental
factors, changes in legislation or requirements, and expected program delivery changes)
and adjust the targets accordingly.
4. Consider the level of outcomes achieved by similar organizations or under various
conditions (as discussed in chapter 9). For example, the outcomes achieved by better-performing offices or facilities that provide similar services are benchmarks the
program may want to emulate.
5. Review the findings and recommendations from any recent program evaluations to
identify past performance levels and past problems. Consider their implications for the
coming years.
6. Use program analysis, cost-effectiveness analysis, and/or cost-benefit analysis to
estimate the future effects of the program.
Exhibit 13-3
Factors to Consider When Selecting Specific Outcome Targets
Past outcome levels I. The most recent outcomes and time trends provide a starting
point for setting the outcome targets. (For example, recent trends may indicate that
the values for a particular outcome indicator have been increasing annually by 10
percent in recent years; this would indicate that the next year's number should be
increased by a similar percentage.)
Past outcome levels II. If the values for an outcome indicator are already high, only
small improvements in the outcome level can reasonably be expected. If the values
for an outcome indicator are low, future improvements can be expected to be larger
(there is more room for improvement).
Amount of dollar and personnel resources expected to be available through the target
period. If staff and funds are being reduced or increased, how will this affect the
programs ability to produce desired outcomes?
Factors likely to be present in the wider environment through the target period. These
include such factors as the economy, population demographics, weather, major
changes in industries in the area (such as major new industries scheduled to begin or
depart), and major changes in international competition.
Likely lag times from the time budgets are approved until the outcomes are expected
to occur. This applies both to the effects of past years' expenditures on the outcome
values targeted for the budget year and to the likely timing of outcomes produced
with the funds allocated in the budget year. (For some outcome indicators, effects will
be expected in the budget year, but for others, effects will occur primarily in years
after the budget year.)
Political concerns. Politics may at times push for reporting outcome targets that
exceed feasible levels. (Even so, the program and budget analysts should provide
those selecting the targets with estimates of the likely achievable levels of outcomes.)
Exhibit 13-4
Steps for Examining Performance Information in Budget Requests
1. Examine the budget submission to ascertain that it provides the latest information and
targets on the workload, output, intermediate outcomes, and end outcomes, as well as
the funds and personnel resources requested.
2. Assess whether the outcome indicators and targets are consistent with the mission of,
and strategies proposed by, the program and adequately cover that mission.
3. If the program is seeking increased resources, assess whether it has provided adequate
information on the amount each output and outcome indicator is expected to change
over recent levels.
4. Examine the program's projected workload, outputs, intermediate outcomes, and end
outcomes, as well as the amounts of funds and personnel. Make sure these numbers
are consistent with one another (e.g., that the amount of output is consistent with the
projected workload). Determine whether the program has included data on the results
expected from the outputs it has identified.
5. Compare past data on workload, output, intermediate outcomes, and end outcomes
with the proposed budget targets. Identify unusually high or low projected outputs or
outcomes.
6. Examine the explanatory information, especially for outcome indicators whose past
values fell significantly below expectations and for any performance targets that
appear unusually high or low.
7. For programs likely to have delays or backlogs that might complicate program
services, be sure the data adequately cover the extent of delays, backlogs, and lack of
coverage.
8. For regulatory programs, be sure that adequate coverage is provided for compliance
outcomes (not merely number of inspections).
9. Ascertain that the program has sufficiently considered possible changes in workload that
are likely to affect outcomes (such as higher or lower proportions of difficult
workload).
10. If recent outcomes for a program have been substantially worse than expected, make
sure the program has included in its budget proposal the steps, and resources, it plans
to take toward improvement.
11. Examine findings from any program evaluations or other special studies completed
during the reporting period. Assess whether these findings have been adequately
incorporated into the budget proposals.
12. Determine whether the program has developed and used information on the
relationship between resource requirements, outputs, and outcomes (e.g., the added
money estimated to increase the number of successfully completed cases by a
specified amount).
13. Identify indicators with significantly reduced outputs or outcomes projected for the
budget year (compared to recent performance data) and no decrease in funding
(adjusted for projected price increases) or staffing. Identify and assess the program's
rationale for these reductions.
14. Identify outcome indicators with significantly improved outcomes projected by the
program for the budget year (compared to recent performance data) and no increase in
staffing, funding (adjusted for projected price increases), or output. Identify and assess
the program's reasons for these increases.
15. Identify what, if any, significant outcomes from the budgeted funds are expected to
occur in years beyond the budget year. Assess whether they are adequately identified
and support the budget request.
16. Identify any external factors not considered in the budget request that might
significantly affect the funds needed or the outcomes projected. Make needed
adjustments.
17. Compare the latest program performance data to those from any other programs with
similar objectives for which similar past performance data are available. Assess
whether projected performance is compatible with that achieved by similar programs.
18. Identify any overarching outcome indicators that can provide a more meaningful and
comprehensive perspective on results. Consider coordinating with other programs,
other agencies, and other levels of government.
14. This approach is presented in The Price of Government by David Osborne and Peter Hutchinson (New
York: Basic Books, 2004), especially chapter 3.
15. An example of this is the crosswalk developed by the Oregon Progress Board and the Department of
Administrative Services, 1999 Benchmark Blue Books: Linking Oregon Benchmarks and State Government
Programs (Salem, May 1999).
16. Few publications are available that suggest specific steps for reviewing budget proposals that include
examining the outcome consequences of the proposed budget levels. A recent document prepared for state
legislative analysts nevertheless appears also applicable to budget examinations for any level or branch of
government: Asking Key Questions: How to Review Program Results (Denver, CO: National Conference
of State Legislatures, 2005). However, it primarily focuses on past results, rather than also including an
examination of the proposed outcomes.
17. The U.S. Office of National Drug Control Policy has been a leading agency in attempting to work out such
cooperative efforts among federal, state, local, and foreign governments. See, for example, National Drug
Control Strategy: FY 2007 Budget Summary (Washington, DC: The White House, 2006).