Pre-Publication, September 2006, Version of Chapter 13

Results-Based Budgeting
From

Performance Measurement: Getting Results

Harry P. Hatry
The Urban Institute Press
Washington, DC

Chapter 13
Results-Based Budgeting
Look at life through the windshield, not the rearview mirror.
Budgeting is the annual (sometimes biennial) process by which organizations
estimate their resource needs and allocations for the future. This chapter focuses on how
performance measurement information, particularly outcome data, can be used to assist
budget formulation and review.
For many governments, making the budget process more results-based has been
the primary motivation for legislating performance measurement. A budget process
should encourage, if not demand, a performance orientation.

Results- or Performance-Based Budgeting: Widely Acclaimed, But Is Anyone Doing It?
Results-based budgeting, more frequently called performance-based budgeting,
gives outcomes central attention in the budget process. Presumably, it emphasizes the
importance of outcome data in both formulating and justifying proposed budgets. Much
lip service has been paid to the topic. Unfortunately, it is not clear how much real
attention has been devoted to the actual use of outcome information in budgeting. The
suggestions in this chapter identify procedures for incorporating outcomes into a budget
process. Individual agencies will need to devote time to refining the process.
Results-oriented performance measurement provides basic information to the staff
formulating, and subsequently justifying, a budget. The information helps locate
problems and successes that may need additional or reduced resources. The key element
of results-based budgeting is that it attempts to consider, if only roughly, the future
values of performance indicators (the amount of outcomes expected from proposed resources) and projected outputs.
Agencies preparing performance budgets project values for each performance
indicator for the forthcoming year(s). Projections of outputs and outcomes are intended to
reflect the estimated consequences of the resources budgeted. These projections represent
what the agency is getting for its money. The data from an agency's performance
measurement system should provide basic information for developing budget proposals
and subsequently help justify the budget proposals that have already been developed.
This information is likely to have even greater weight if linked to strategic plans.
By focusing systematically on the results sought, results-based budgeting should
better enable decisionmakers to achieve the following:

• Identify poorly performing programs, thereby signaling the need to make changes and allocate less or more funds. (Other information is needed to determine which changes to make.)
• Identify programs that are performing well and presumably need no significant changes. (Even here, other information is needed to determine what, if any, changes may be desirable.)
• Assess new programs for what they are expected to accomplish, not just their costs or general statements of their expected value. Are the new programs worth their expected costs?
• Compare different proposed options on their expected outcomes and costs.
• Help identify agency activities that have similar outcome indicators and, thus, are candidates for coordination and perhaps revised funding needs.
• Justify budget choices more effectively to agency and elected officials, and to the public.
• Provide the basis for greater agency accountability, if reasonable performance targets are set for the budget year and achieved values are subsequently compared to targets.

The first three points are discussed at length in chapter 10, primarily in the
context of analyzing performance information for changing programs and policies during
the budget year. This chapter focuses on using the same information to establish and
examine budgets.
Results-based budgeting supports an overall agency focus on outcomes. Here is
an example of its use in helping justify budgets:
The Massachusetts Department of Environmental Protection sought to obtain
funding from the state legislature to line unlined landfills. It justified the
expenditure by reporting the product of the expenditure as the number of acres
expected to be lined. This did not move the legislature, which turned down the
request. The department then switched to a more outcome-based approach and
justified the request in terms of gallons of leachate prevented. Legislators asked
for a definition of leachate. When they found that it referred to potential
pollutants leaked into the groundwater and water supply, they approved the
funding request.1

The U.S. Office of Management and Budget has instituted probably the most
extensive use of outcome information as part of its PART (Program Assessment Rating Tool) process. OMB reviews each
major federal program on a number of performance factors, including results achieved.
OMB has emphasized that the ratings are not the only factor in decisions and that low (or
high) scores do not necessarily mean decreased (or increased) funding. Nevertheless, the
ratings appear to have affected, or at least supported, some funding decisions.2
In a performance-based budgeting system, agencies need to select targets (make
projections) for the budget year for each output, outcome, and efficiency indicator, as
well as for expenditures.3

A key problem for results-based budgeting, especially at the state and federal
levels, is to persuade legislators and legislative staffs to switch from primary dependence
on line-item budgeting to an outcomes focus. At the very least, legislators and their staffs
need to address outcomes during appropriation hearings. The executive branch is
responsible for providing meaningful, reliable, important outcome information to its
legislators, in a user-friendly format. When some state governments initiated their
results-based budgeting efforts, they loaded legislators with large numbers of indicators
and data (sometimes including outputs and outcomes mixed together) presented
unattractively, thus discouraging their use.
This book does not cover budgeting in general. Instead, it addresses the new
dimension of outcome information. The key issues in results-based budgeting are listed in
exhibit 13-1 and discussed later in this chapter, after a bit of history.

A Bit of History
Performance budgeting has been around at least since the 1960s. At that time, it
focused primarily on the relationship between inputs and outputs. Some communities
(such as Milwaukee, Wisconsin, and Nassau County, New York) produced budgets that
contained hundreds of unit-cost measurements linking costs or employee-hours to
outputs. Sunnyvale (California) has used such measurements since the early 1970s,
converting unit costs into productivity indices focusing on outputs. These indices permit
comparisons across services and across years. More recently, Sunnyvale has begun
focusing on outcomes.
A typical output-based performance indicator would be, for example, the cost (or
number of employee-hours) per ton of asphalt laid. In some cases, these output-based
indicator reports were dropped because the number of unit-cost indicators overwhelmed
the external users of the information. Nevertheless, such unit-cost information can be
useful to managers and supervisors (and elected officials, if they wish) for tracking the
technical efficiency of their activities.
The major new dimension is to relate outcomes to budget requests. The term
"results-based budgeting" reflects this new focus.4 At both the federal and state levels,
recent legislation has emphasized the concept that budget decisions should be made not
based on dollars alone, nor on physical outputs, but in relation to outcomes.
Of the many recent performance measurement systems that have been initiated to
provide some form of results-based budgeting, the Government Performance and Results
Act (GPRA) of 1993 is a prime example (recently expanded to include the PART process
discussed briefly above). This federal budget action was unique in (a) having support
from both political parties and both the executive and legislative branches and (b) being
explicitly embodied in legislation, unlike earlier approaches such as the Planning-Programming-Budgeting System (PPBS), Zero-Based Budgeting (ZBB), and
Management by Objectives (MBO).

Texas, Oregon, and Louisiana were among the first states to
legislate a form of results-based budgeting, sometimes including selected outcome
indicators with recent and projected data in their appropriations acts.

Key Issues in Results-Based Budgeting


1. Need to Increase Focus on Outcomes, Not Only Inputs and Outputs
Using outcome information for budgeting seems quite sensible on the surface, but
in fact, its use in budgeting for agency operations is controversial. It has been, for
example, the major subject of debates comparing the New Zealand to the Australian and
U.S. approaches to budgeting at the federal level. New Zealand's approach had been to hold its operating departments responsible for outputs but not outcomes, and to use performance agreements with department heads to hold them accountable for outputs. New Zealand's rationale was that agencies control outputs, but too many other factors
beyond the control of the operating departments affect outcomes. Only ministers were
held responsible for outcomes. New Zealand has recently changed back to including
outcomes in department responsibilities.
The counterargument to the view that agencies should be held responsible only
for producing outputs is that outcomes are the fundamental reasons for establishing an
agency in the first place. The activities of operating agencies clearly contribute to
program outcomes, even if no single group of people, whether operating personnel or the
policymakers themselves, fully controls results. Take as an example income maintenance
and public assistance programs. The primary policy issues are who is eligible for assistance
and what level of payments is to be provided. These decisions are made by the legislature
and the upper echelon of the executive branch of government. However, if the policy is
not implemented well (that is, if program personnel do not execute the policy properly and get the correct checks to the right people quickly), the desired outcomes will be
compromised, and program personnel can be at least partly responsible for the failure.
Encouraging agency personnel to work to improve service outcomes seems a
much better way to go. Many, if not most, outcomes are produced by many agencies and
sectors of the economy, and responsibility is thus inherently shared. This is the implicit
philosophy of the Australian and U.S. governments. As suggested in chapter 10, agencies
can rate the extent of their influence over individual performance indicators in their
performance reports. This alerts users to the inherent limitations of outcome information
while retaining a degree of responsibility for each agency.
The controversy over output versus outcome responsibilities has been less an
issue at lower levels of government (especially at the local level).

2. Limitations in the Usefulness of Performance Measurement Information for Results-Based Budgeting
Performance measurement looks backward. It attempts to provide the best possible data
on what happened in the past. Past outcome data provide important information for
projections, but estimating future outcomes differs radically from assessing past
performance. Past trends are only one among many influences on future outcomes. The
future effects of those other influences are inevitably a matter of uncertainty, particularly
in cases where little is known about the quantitative relationship between inputs and
outcomes.
Suggestion: For outcome forecasts that are particularly uncertain, provide a range
of values instead of a single value. This range is likely to be more realistic and
informative.

3. Time Frame to Be Covered by Results-Based Budgeting


Typically, budgets only present data for the current budget year(s). Some central
governments, such as those of the United States and Australia, now also include out-year
funding estimates (perhaps for three additional years) but not outcome projections, except
in separate long-run strategic plans. Including out-year forecasts for outcomes can be
important for some programs, particularly at the federal and state levels, for three
reasons:

• It reduces the temptation for agencies and their programs to focus all their funding decisions on the short term.
• For some programs, achievement of the hoped-for outcomes will require funds not only from the current year's budget but from future budgets as well.
• When important outcomes will not occur until after the proposed budget period, the outcome targets for the budget year will not reflect those effects.

Therefore, budget proposals, especially those of the higher levels of government, should include out-year estimates for some outcomes, regardless of whether this
information is included in the final appropriation document. For many programs,
organizations will be able to better allocate resources when they explicitly consider
expected costs and outcomes for out-years. For example, the results of funding a new
federal or state program to reduce alcohol abuse might not be apparent for two or more
years. (Time will be needed to gain acceptance by localities, train staff members in the
program, publicize and run the program, make sure clients receive the program's
services, and then measure the outcomes.)
Even for intermediate outcome indicators, measurable effects may not be
expected until after the budget year. Another example: Road construction can reduce
accidents and congestion over several years. Should not estimates of the magnitude of
these future improvements be included in the budget justification?

A partial solution is to build into the budget process any important outcomes
expected to occur because of the proposed new funding. Programs would be asked to
estimate the values for each outcome indicator for each out-year the proposed budget
funding is expected to significantly affect. Requiring outcome projections in the budget
development process is likely to encourage agencies to consider multiyear effects.
For some programs, this forecasting can be done readily. For example, a federal
program to help residential housing might request funds for rehabilitating the homes of a
certain number of families. The program can probably predict the years in which those
rehabs will occur and the number of families occupying the housing units. A program
that provides drug treatment funding will find it more difficult to estimate the number of
clients who will become drug free and in which years. Performance measurement data on
past success rates will likely help those preparing or reviewing the budget to estimate
such outcomes.
A less demanding option is to ask for estimated future outcomes without requiring
that they be distributed by year.
The need to consider future outcomes of the current years budget is less frequent
for local than for federal and state programs. But even at the local level, some programs,
such as school and health programs, will have long-term outcome goals.
Most governments have not addressed the problem of long-term outcomes. A
partial exception is that some state governments separate expansion requests (including
new programs) from requests for continuation of current programs. For expansion
requests, these governments require out-year projections of future outcomes.

4. Whether Proposed Inputs Can Be Linked to Outputs and Outcomes


In analyzing performance information for budgeting, a critical step is to link
information on proposed costs to the projected amount of output and outcomes. Results-based budgeting and similar resource allocation efforts (including strategic planning) enter into this new dimension: estimating the link between inputs and expected future
results. Such estimates can be subject to considerable uncertainty.
Part of the uncertainty relates to lack of good historical cost, output, and
(particularly) outcome information. This problem is potentially curable. More difficult is
estimating future costs. Even more difficult is estimating the amount of expenditures
needed to increase outcomes, especially end outcomes, by specific amounts. Typically,
programs do not know with any certainty how much more (or less) funding or staffing is
needed to increase (or reduce) an end outcome by a certain amount. As programs gain
experience with their outcome data, they should be able to better estimate this
relationship, although it will never be as predictable as the relationship between funding
or personnel and output indicators.

Projecting accurately becomes increasingly difficult and uncertain as programs move from linking inputs to outputs, to linking inputs to intermediate outcomes, and,
finally, to linking inputs or outputs to end outcomes. The following sections discuss the
links between inputs, outputs, intermediate outcomes, and end outcomes. Little past work
has examined these latter relationships.
Linking inputs to outputs. The amount of output expected in the budget year can
be used to estimate the associated costs and personnel requirements or vice versa. If the
amounts of dollars and personnel are the starting point for the budget, the amount of
output achievable can be estimated.5 Many, if not most, programs can estimate somewhat
accurately how much workload they are likely to have, and thus the amount of output
they can accomplish, given particular amounts of staff and funds. Programs will likely
have reasonably accurate counts of past outputs and the direct costs of employee time. If
they do not currently record such information, they can obtain it.6
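To make the mechanics concrete, here is a minimal sketch of such a unit-cost projection (the $95-per-ton figure and the paving context are hypothetical, not drawn from this chapter; real estimates would also adjust for workload mix and price changes, as discussed below):

```python
# Minimal sketch (hypothetical figures): projecting budget-year output from a
# historical unit cost, or funding needs from an expected workload.

def projected_output(budget_dollars: float, cost_per_unit: float) -> float:
    """Output achievable for a given funding level."""
    return budget_dollars / cost_per_unit

def required_budget(expected_workload_units: float, cost_per_unit: float) -> float:
    """Funds needed to handle an expected workload."""
    return expected_workload_units * cost_per_unit

# Assumed historical figure: $95 per ton of asphalt laid.
cost_per_ton = 95.0
print(projected_output(500_000, cost_per_ton))  # ~5,263 tons from a $500,000 budget
print(required_budget(6_000, cost_per_ton))     # $570,000 to lay 6,000 tons
```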
If funding needs are developed from estimates of the workload, estimates of
future expenditures of employee time and money will be affected by the program's
ability to estimate accurately the magnitude and character of the budget year workload
and the effects of any new service procedures or technology. For example, school
systems try to estimate the next year's school population in order to decide about school
buildings, classrooms, teachers, and purchases of books and other teaching materials.
Inaccurate projections have been known to embarrass school officials.
Performance measurement information from earlier years normally provides the
basis for projecting the relationship between inputs and outputs for the current budget
year. However, if the complexity of the workload during the forthcoming budget year is
likely to differ substantially from that in previous years, this change needs to be
considered when developing the budget. For example, the Internal Revenue Service can
tabulate the number and complexity of tax returns that come in each year. However,
many factors, such as revisions to the tax code, can alter the future mix of tax-return
difficulty and thus the amount of time required to review and process returns.
External factors can also affect future workload. For example, at the state and
local levels, weather conditions, such as freeze-thaw cycles, can have substantial effects on costs, outputs, and outcomes. Agencies can obtain projections of these conditions for the budget year and use them to adjust such performance indicators as estimates of future roadwork and
costs, and accident and injury rates. Similarly, the number and characteristics of
incoming clients, such as their need for employment, health, and social service programs,
can be highly unpredictable because they are affected by many economic and social
factors, and projections based on past data are by no means certain. For some programs,
agencies can use reasonably reliable estimates of their client populations, but these are
also subject to uncertainties, such as increased immigration from countries in crisis.
Linking inputs to intermediate outcomes. Precise relationships between past
input data and past intermediate outcome data can be developed for some outcome
indicators. Even so, past relationships between the intermediate outcomes and inputs will usually provide only rough indications of what will happen in the budget year. For
example, federal agencies such as the departments of Education, Housing and Urban
Development, Health and Human Services, and Labor as well as the Environmental
Protection Agency provide much of their assistance to state and local agencies rather than
to the ultimate customers. If these state and local agencies undertake promising steps that
the federal department has encouraged, the steps can be considered intermediate
outcomes for that department. Data on the past relationship between the amounts of
federal funds and assistance, on the one hand, and the extent to which the state and local
governments undertook promising initiatives, on the other, are likely to be useful. But the
past relationship provides only a rough estimate of what state and local agencies will do
in the budget year.
Some intermediate outcomes can be estimated relatively accurately. For example,
agencies can make fairly accurate estimates of such intermediate outcomes as future
response times, given particular amounts of staff and dollar resources.7 Even here,
however, a number of outside factors over which the program has little control can
intervene. For example, an unexpectedly large number of requests for service or changes
in the proportion of complex requests can have major effects on response times.
Here are some examples of difficult-to-predict intermediate outcomes:

• Number of businesses (or households) that alter their handling of waste to be more environmentally prudent after receiving assistance from state or local programs
• Number and percentage of parents who take special parenting classes and then alter their behavior in ways that encourage their children's learning in school
• Customer satisfaction

All these outcomes are driven not only by agency efforts to seek certain customer
behaviors and perceptions but also by many aspects of the behavior and circumstances of
the customers themselves, as well as outside factors.
The bottom line is that agencies should expect historical data on costs and
intermediate outcomes to be useful in preparing cost and intermediate outcome
information for budgets. In many cases, however, agencies will be able to make only
rough projections about the future relationship between costs and intermediate outcomes.
Linking inputs to end outcomes. As a rule, agencies should not expect to have
solid, known relationships between inputs and end outcomes, no matter how good the
historical data are. (In more economic terms, little information is available about the
production function that relates the inputs to the end outcomes.) Nevertheless, these
relationships are extremely important and need to be considered, at least qualitatively, in
any budget process.
Some end outcomes are easier to relate to inputs than others. For example, the
number and percent of a state or local jurisdiction's roads that are in satisfactory condition can be considered an end outcome indicator for road maintenance services. These numbers relate closely to the funds that the agency applies to road maintenance
and repair. Past data on this relationship can be used to estimate the expenditures needed
in order to achieve a certain value for this outcome indicator (or conversely, to estimate
the percent of road miles in satisfactory condition given a particular funding level). In
contrast, how much a clients condition is improved by expenditures of particular
amounts of federal, state, local, or private funds to reduce substance abuse or to enhance
elementary education is considerably more difficult to estimate.
Projecting how well budgeted resources will achieve prevention (whether of
crime, disease, family problems, or the like) is extremely difficult. At best, the historical
data will provide very rough clues about the relationship between resources and
prevention. In-depth studies can provide evidence, but decisionmakers may need to rely
more heavily on qualitative information and subjective judgments on the prevention
outcomes expected from a particular level of budgeted resources.
In general, the more direct a program's influence over an outcome, the greater the program's ability to develop numerical relationships between inputs and the outcome.
Local governments and private agencies generally have more direct influence on end
outcomes than state or federal agencies; therefore, the relationships between their inputs
and outcomes (both intermediate and end) are likely to be clearer. Nevertheless, for many
end outcome indicators, the relationship will inevitably be imprecise. How many more
resources would be needed to increase the percentage of customers satisfied with their
recreation experiences by 5 percentage points (such as from 65 percent to 70 percent)?
The answers to questions like this usually can be estimated only very roughly, at best.
If identifying the quantitative (or even qualitative) relationships between size and
type of input, type of intervention, and amount of outcomes achieved is likely to be
crucial to future major budget decisions about an existing program, agencies should seek
an in-depth program evaluation.
Agencies can systematically track changes in resources to assess the differences
on outcomes and then use that information to help make future budget estimates.
Agencies and their programs might also be able to intentionally alter the amount of input
to certain activities to see how more or fewer resources affect outcomesand then use
such information for future estimates.
Linking outputs to outcomes. Outcomes presumably flow from outputs. For
example, the number of calls answered is an output for a service (whether these calls
relate to police, fire, sewage backups, travel information, or any other service request).
This output leads to outcomes, such as what resulted and whether the requests were
fulfilled to the customers' satisfaction. Some outcome indicators explicitly relate outputs
to outcomes, such as the percent of those to whom services were provided (an output)
who had successful outcomes.
Staff preparing or reviewing budget proposals should examine the amount of
output expected in the budget year and assess what outcomes can be expected from that number, and when. If X customers are expected to be served during the budget year (an
output), how many customers (and what percent) can be expected to be helped to achieve
the desired outcomes that year and in future years (an outcome)? For example,

• How many persons are expected to find employment after receiving training services, and when?
• What percentage of babies born to low-income women who received appropriate prenatal care will be healthy?

Those preparing the budget request and those subsequently examining it should
ascertain that the outcome numbers make sense relative to the amount of output. For
services that have lengthy lag times between outputs and outcomes, the outcome numbers
for the budget year need to be compared to output numbers in the relevant previous
years.
Linking intermediate outcomes to end outcomes. It is likely to be difficult to
provide quantitative relationships between intermediate and end outcomes, but it is often
easier than directly estimating the relationships between inputs and end outcomes. For
example, a state agency might provide funds or technical assistance to local agencies to
undertake an environmental protection regulation designed to lead to cleaner air. The
relationship between the local agency's successfully getting businesses to adopt better
practices for handling hazardous wastes (an intermediate outcome for both the state and
local agencies) and the extent to which cleaner air results (an end outcome for both
agencies) is uncertain. Some relationships are clearer, such as the extent to which
increased percentages of children vaccinated against a disease can be expected to lead to
reduced incidence of the disease among the vaccinated population.
How to make these links? For most programs, knowledge about most of the
above links is lacking. Historical data from the performance measurement process, even
if it has been implemented for only one or two years, can provide clues. But there will
almost always be considerable uncertainty about projections of outcomes, especially end
outcomes, for given budget levels. A key is to be able to make plausible connections
between the amount of budgeted funds and the outcomes projected. These connections
can be based on past performance and modified by information on changes in either
internal or external factors expected in the budget year.
5. The Role of Efficiency Indicators
Efficiency is an important consideration in the budget process. As noted earlier,
efficiency is traditionally measured as the ratio of inputs to outputs. The new type of
indicator added in results-based budgeting is the ratio of inputs to outcomes. An example is "cost per person served whose condition improved significantly after receiving the service." The more traditional output-based efficiency indicator is "cost per person served."


When reasonably solid numerical relationships exist between outputs or outcomes and the associated inputs, past data can be used to develop historical unit-cost figures,
such as the cost per lane-mile of road maintained or the cost per lane-mile rated as in
good condition. These figures can then be used to make estimates for the budget year.
Likely future changes need to be factored in. For example, road maintenance budget
estimates should consider any planned price changes, any new technologies that might be
used and their cost, and any indications that repairs will be more extensive or more
difficult than in past years.
Some outcomes, such as road condition, can reasonably be numerically related to
outputs, such as the number of lane-miles expected to be repaired during the budget year
for a given dollar allocation. Budget preparers and reviewers can then examine various
levels of the number of lane-miles to be repaired for various levels of expenditures and
estimate the number of lane-miles that will be in satisfactory condition for each
expenditure option. These estimates will inform decisionmakers of the trade-offs between
costs and outcomes, so they can select their preferred combination.
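The following sketch illustrates such a trade-off table. The unit cost, network size, baseline condition, and deterioration rate are all invented for the example, and it simplifies by assuming every repaired lane-mile was previously unsatisfactory:

```python
# Hypothetical trade-off table: lane-miles repaired and resulting road
# condition for several expenditure options. All figures are assumptions.

COST_PER_LANE_MILE = 40_000     # historical repair cost (assumed)
TOTAL_LANE_MILES = 2_000        # size of the road network (assumed)
BASELINE_SATISFACTORY = 1_400   # lane-miles now in satisfactory condition (assumed)
ANNUAL_DETERIORATION = 120      # lane-miles falling below standard each year (assumed)

for budget in (2_000_000, 4_000_000, 6_000_000):
    repaired = budget / COST_PER_LANE_MILE
    # Simplification: every repaired lane-mile was previously unsatisfactory.
    satisfactory = BASELINE_SATISFACTORY - ANNUAL_DETERIORATION + repaired
    percent = 100 * satisfactory / TOTAL_LANE_MILES
    print(f"${budget:>9,}: {repaired:5.0f} lane-miles repaired, "
          f"{percent:4.1f}% of network satisfactory")
```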
In police investigative work, "number of cases cleared per police dollar or per investigation hour" is an outcome-based efficiency indicator. Past data on clearances can
be used to make estimates for the forthcoming budget. However, the number and percent
of crimes cleared in the budget year will also depend significantly on the number of
crimes reported (many more crimes may mean investigators have less time to spend on
individual cases), the types of crimes (for example, burglaries have substantially lower
clearance rates than robberies), and the amount of evidence available at the scene. Factors
largely outside the control of the police department (such as case difficulty) as well as
internal factors (such as the amount of investigator turnover and the quality and quantity
of investigative effort) can significantly affect clearance rates. Trends in such factors
should be considered when projecting clearance rates from past efficiency data.
The use of unit costs in which the units are outputs is common in budgeting.
However, the use of unit costs in which the units are outcomes is rare. One reason for this
is that outcome data have not often been part of the budget preparation process. This is
changing. In the future, the primary reason for limited use of costs per unit of outcome
will be not lack of outcome data but rather lack of solid numerical relationships between
inputs and outcomes.
6. Setting Performance Targets in Budgets
The projected values for individual outcome indicators are important numbers in
results-based budget submissions. In view of the considerable uncertainty surrounding
future conditions and the links between agency resources and indicator values, how
should agencies develop these targets? Suggested steps are listed in exhibit 13-2. The
specific factors to consider are listed in exhibit 13-3.8 Additional suggestions are provided
in chapter 9.


Two special target-setting options are available to programs that are highly
uncertain about the future values of one or more outcome indicators: variable targets and
target ranges.
The variable target option applies to outcome indicators whose values are
believed to be highly dependent on a characteristic of the incoming workload and where
major uncertainty exists about that characteristic. In this procedure, the expected
relationship between the characteristic and outcome is identified first. The final outcome
target is determined after the fact, depending on the workload characteristics that actually
occurred in the budget year.
For example, if an outcome is expected to be highly sensitive to the mix of
workload (e.g., customers) coming in, and the mix for the budget year is subject to
considerable uncertainty, the program can set targets for each category of workload
without making assumptions about the workload mix. The aggregate target is determined
after the budget year closes and the mix is known.
For the indicator "percent of people who leave welfare for work," the program
might set separate targets for groups defined by their amount of formal education.
Suppose the program estimated that 75 percent of people coming in with at least a high
school diploma would find jobs and get off welfare in the budget year, but only 30
percent of those with less than a high school education would do so. These targets would
be presented in the budget. The aggregate percent, which might also be included, would
be based on the programs estimated mix of clients.
At the end of the year, the aggregate target for the year would be calculated for the actual education mix and compared to the actual aggregate percent achieved. If 420 people who had
not completed high school and 180 people who had completed high school entered the
program during the year, the aggregate target would be 44 percent: 30 percent of 420
(126) plus 75 percent of 180 (135), equaling 261. Dividing 261 by the total number in the
program that year (600) yields the aggregate target for the share expected to go off
welfare, 44 percent.
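The calculation is mechanical once the per-group targets and the actual mix are known. The short sketch below simply recomputes the chapter's numbers (the group labels are shorthand introduced here):

```python
# Recomputing the chapter's variable-target example. Per-group targets are set
# in advance; the aggregate target is computed after the actual mix is known.

targets = {"no_hs_diploma": 0.30, "hs_diploma": 0.75}   # per-group targets
actual_mix = {"no_hs_diploma": 420, "hs_diploma": 180}  # actual enrollees

expected = sum(targets[g] * n for g, n in actual_mix.items())  # 126 + 135 = 261
aggregate_target = expected / sum(actual_mix.values())         # 261 / 600
print(f"Aggregate target: {aggregate_target:.1%}")  # 43.5%, reported as 44 percent
```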
The target might also be linked to the national unemployment rate. For example,
the program target might be 15 percent of enrollees off welfare if the national
unemployment rate turned out to be over 5.4 percent and 25 percent off welfare if the
national unemployment rate turned out to be less than 5.0 percent. The program would
not know if it achieved the target until the national figure became available. Another
option is to use a formula that relates expected outcome to the value of the external
factorin this example, a formula that relates the expected percentage off welfare to the
national unemployment rate.
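Such a formula might look like the sketch below. The linear rule and its coefficients are purely illustrative assumptions, loosely anchored to the two bands above; an actual program would derive the relationship from its own historical data:

```python
# Hypothetical formula-based target: expected share of enrollees leaving
# welfare, falling as the national unemployment rate rises. Illustrative only.

def expected_off_welfare(unemployment_rate: float) -> float:
    """Expected fraction leaving welfare at a given unemployment rate."""
    # Assumed linear rule: 25% at 5.0% unemployment, dropping 2.5 percentage
    # points for each 0.1-point rise in unemployment (so ~15% at 5.4%).
    return max(0.0, 0.25 - 0.25 * (unemployment_rate - 5.0))

print(expected_off_welfare(5.0))  # 0.25
print(expected_off_welfare(5.4))  # ~0.15
```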
The target range option applies to any outcome indicator with highly uncertain
future values. A range of values, rather than one number, is given as the target for the
indicator. Many programs might benefit from this approach, especially for their end
outcomes. Here are some examples of target ranges:


• The customer satisfaction level is expected to be in the range of 80 to 87 percent.
• The percentage of clients who will be off illegal drugs 12 months after program completion is expected to be between 40 and 50 percent.

The danger of gaming and ways to alleviate it. As higher-level administrators and elected officials begin to use targets in budget documents, the temptation to game
targets will inevitably grow. Such gaming can occur at any level. Program managers and
upper-level officials might set targets so their projected outcomes will look good. (This is
an argument for legislators to ask independent audit offices to review and comment on
proposed budget targets, especially at the state and federal levels.) Elected officials might
manipulate targets for political purposes.
Setting targets that are easy to achieve will be tempting to those whose funding or
individual compensation is based significantly on achieving targets. The opposite approach, setting very optimistic, if not impossible, targets, is tempting to those seeking support
for high budgets.
The following are some ways to alleviate gaming:

• Establish a multilevel review process in which executive personnel check targets to identify values that appear overly optimistic or overly conservative.
• Examine the past relationships between inputs, outputs, and outcomes to see if the proposed targets are consistent with those relationships.
• Use one of the special target-setting options noted above to avoid a single-number target. These ranges can still be gamed, but the effects of gaming should be reduced.
• Explicitly identify in performance reports any future outcomes that are particularly difficult to estimate. Budget documents should also identify new outcome indicators, pointing out that setting targets for them is particularly difficult because there is no experience on which to base estimates.
• Ask programs to provide explanations for unusual-looking targets.
• Reduce reliance on major incentives that link funding or salary compensation to target achievement. However, pressure to link compensation to target achievement is likely to increase as agencies switch to outcome-based target-setting procedures. In such cases, an in-depth examination of the reasons for highly successful or highly unsuccessful outcomes should be undertaken before final funding or salary decisions are made.

In some instances, executives and elected officials will prefer unclear, fuzzy
goals. For example, school districts have debated whether they should include precise
objectives on student test improvement (such as increasing the overall scores by 5
percentage points or reducing the difference in performance between the minority and
majority student population by 7 percentage points during the year). These officials might
be willing to accept a target range.


Note: Agency personnel sometimes are reluctant to provide targets that are lower
than the previous year's targets, even if budget-year resources are lower in real terms
(i.e., after allowing for cost increases). They fear this will make them look bad. Even so,
it is important that agencies and their individual programs realistically estimate the
consequences of reduced resources. Agencies should encourage such reporting if it can
be justified. Not being able to do everything they did in the previous year is not a basis
for blaming programs if resources are cut. Upper management may believe that
productivity improvements can make up for the reduced resources (and this may be
trueup to a point). If political pressure requires that a program establish published
targets that are higher than the program believes are achievable, the distinction should at
least be made clear internally.
Setting performance targets is an excellent management tool for agencies,
particularly if the targets are provided and progress is examined periodically during the
year, such as monthly or quarterly. Even if an agency does not use outcome targets in its
budget process, the agency can choose to retain an internal outcome-targeting process.
7. Use of Explanatory Information
As discussed in chapters 10 and 11, agency programs should be encouraged to
provide explanatory information along with their past performance measurement data
when developing and submitting budget requests.
Staff preparing budgets should examine such information for insights into why
the program performed well or poorly and for any suggestions about what is needed to
improve it. This information can also help identify program changes likely to affect cost
and outcome estimates.
As already noted, the results of any relevant program evaluations should be part
of budget preparation and review. The findings on outcomes and the extent to which the
program has been instrumental in producing the outcomes are important for judging the
value of the current program. Persons who review the programs proposed budget can use
later performance data to assess whether the proposed budget reflects the changes
suggested by the evaluation. Program evaluation findings should typically take
precedence over findings from the agency's performance measurement system.9
For target values that deviate substantially from past results, agency programs
should be encouraged to provide explanations for those targets, especially on key
outcome indicators. Such information should identify the basic assumptions used to
develop the outcome projections and any important external factors expected to make the
outcome value deviate from past performance levels.
Explanatory information on past performance, including any available findings
from recent program evaluations, can help identify the reasons for success or lack of it, that is, program strengths and weaknesses. Budget preparers and reviewers can then
assess the extent to which steps have been taken, or are needed, to correct problems.

8. Strength of Program Influence over Future Outcomes


Agency managers are usually quite apprehensive about including outcome
indicators as a part of their performance measurements. As discussed in previous
chapters, managers often have only partial control over outcomes, especially end
outcomes.
To alleviate this concern in budget preparation, and to give budget reviewers a
better perspective on the projected outcome data, agencies should consider categorizing
each outcome indicator by the extent of the agencys influence over it (see chapter 6).
This will identify the extent to which the agency can affect each indicator relative to
outside factors likely to affect the program's outcomes.10 Note, however, that agencies
and their programs may have more influence than they think. In many instances,
innovative approaches to their missions might influence outcomes in meaningful ways,
including making recommendations for legislative changes.
Indicators can be slotted into a small number of broad categories, such as "considerable influence," "some influence," or "little influence." (If the program has no
influence over the value of a performance indicator, then it should not be considered a
performance indicator. For budget examination purposes, however, programs should be
asked to identify the reasons they think they have no influence.)
Lack of influence may indicate that the program is not doing the right things,
perhaps requiring major program changes.

9. Using Performance Information in Formulating and Examining Budget Requests
The budget preparation and review process is intended to help ensure that needed
resources are budgeted for the most cost-effective purpose. Data on past inputs, outputs,
outcomes, and efficiency, as well as explanatory information, allow analysts to
formulate and examine program budget proposals much more comprehensively and
meaningfully than in the past. Outcome information, even if relatively crude and partial,
enables analysts to consider both resource needs and likely outcomes from those
resources, and under what conditions results have been good or bad. This adds much
more substance to a budget process.
Chapter 10 described how to analyze past performance. Similar approaches are
useful in results-based budgeting. A later section of this chapter lists and discusses 18
steps for using performance information to examine budget requestswhether inside an
operating agency, by a central office such as a budget office, or by elected officials.


10. Applying Results-Based Budgeting to Internal Support Services


Governments at all levels and private agencies support a variety of administrative
functions, such as building maintenance, facilities maintenance, information technology,
human resources, risk management, purchasing, and accounting. The link between the
products of such activities and public service outcomes is distant and usually extremely
difficult or impossible to determine, even roughly.11
These activities are nonetheless important in providing needed support for
operating programs. Good management requires that administrative services track their
own internal intermediate outcomes (such as the quality of their services to other agency
offices). The principles and procedures described in earlier chapters can be readily
adapted to administrative services. For example, the types of data collection described in
chapter 7 (agency records, customer surveys, and trained observer ratings) can be used
to obtain data on service quality.12

11. Using Results-Based Budgeting for Capital Budgeting


Many state and local governments prepare separate capital budgets, sometimes in
the form of multiyear capital improvement programs. Capital budgets typically list
proposed projects and the estimated capital funds required for each project in the budget
year. Multiyear plans usually contain such information for each out-year. These plans
may include general statements about the purposes of the expenditures, but they seldom
contain information about their expected effects on outcomes.
There is no reason results-based budgeting should not apply to capital budgets.
The agency should gain experience with results-based budgeting and then call for the
explicit estimation of the effects of major capital expenditures on outcomes. For example,
planned capital expenditures for road rehabilitation might be justified in terms of their
expected effects on future road conditions, such as added rideability and safety, compared
with the conditions that would occur without the capital expenditures. Similarly, funds
for water and sewer purposes should be related to projected improvements in water
quality and health protection. For capital projects that primarily benefit particular
segments of the community, estimates should be provided on which, and how many,
citizens are expected to benefit.
Many agencies are also faced periodically with the need to invest in information
technology. These investments should be assessed not only on their costs but also on their
expected benefits. For example, how does the proposed technology reduce response times
to customers or change the accuracy of service delivery?
Some capital expenditures, such as those for administrative services, do not link
well with end outcomes. New construction of office buildings is a good example. For this construction, a performance measurement system might track such internal outcomes as
work completed on time, work completed within budget, ratings of the quality of the
facilities built, and any added efficiencies or improved working conditions for
employees.
Decisionmakers and the public should not be expected to make capital
investment decisions without information on the benefits expected from these
expenditures.

12. Budgeting-by-Objectives and Budgeting for Outcomes


Conceptually, it makes sense for a department to submit budgets with proposed
funding grouped by major objectives.13 For example, child abuse prevention, alcohol
abuse reduction, unemployment assistance, and traffic accident reduction might be major
objectives. All activities related to the particular objective would be included, regardless
of which program or agency is involved. Budgeting-by-objectives was a characteristic of
the original program budgeting and PPBS (Planning-Programming-Budgeting System) in
the late 1960s and 1970s. However, this approach has rarely been used.
The major question that confronts organizations that try this approach is how to
sort out objectives and programs. Most programs have multiple objectives, and their
personnel and other resources simultaneously affect more than one objective. The
crosswalk between objectives and programs or agencies can be cumbersome. If some
activities simultaneously affect more than one objective, how should costs be split
between them, or should they be split at all? For example, transportation programs can
influence multiple objectives across a wide range of the policy spectrum, including
making transportation quick and convenient, enhancing health and safety, and protecting
the environment.
A recent variation of budgeting-by-objectives is "budgeting for outcomes." Here,
service organizations estimate how much outcome they will provide and at what cost.
The focus is not on programs but on the results the organization says it will achieve.
Budgeting for outcomes encourages innovation in the way outcomes will be produced,
and can even encourage providers outside the government to bid. The government
might also pre-select the major outcomes it wants and establish a total expenditure level
for each outcome.14 The State of Washington has experimented with this approach, but it
is too early to assess its long-term success. The approach has some major hurdles,
including the need to have good outcome information and be able to judge the claims of
bidders. In addition, most government organizations seek many outcomes, and sorting
them all out and determining allocations for each outcome (and what to do about
outcomes that are not considered major) present major difficulties.
At this point, it is by no means clear whether budgeting by objectives or
budgeting for outcomes can be made practical. Providing crosswalks linking activities to
each outcome, however, does seem a reasonable approach for modern information
technology (as has been done by the states of North Carolina and Oregon and by
Multnomah County, Oregon). Agency programs that contribute to several outcomes can
be coded to identify which programs contribute to which outcomes.15 Such crosswalks
can at least trigger the need for coordination and cooperation among programs, and they
will help budget examiners detect the need for across-program budget reviews.
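As a rough illustration, the sketch below tags each program with the outcomes it contributes to and then inverts the mapping to flag outcomes served by more than one program. All program and outcome names are hypothetical:

```python
# Hypothetical crosswalk: code programs by the outcomes they contribute to,
# then invert the mapping to find outcomes shared across programs.

from collections import defaultdict

program_outcomes = {
    "highway_maintenance": ["safe_travel", "quick_travel"],
    "driver_education":    ["safe_travel"],
    "transit_subsidies":   ["quick_travel", "clean_air"],
}

outcome_programs = defaultdict(list)
for program, outcomes in program_outcomes.items():
    for outcome in outcomes:
        outcome_programs[outcome].append(program)

for outcome, programs in sorted(outcome_programs.items()):
    if len(programs) > 1:  # candidates for an across-program budget review
        print(f"{outcome}: {', '.join(programs)}")
```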

13. Special Analytical Techniques for Projections


Budgeting, like strategic planning (and unlike performance measurement),
involves projecting costs and outcomes into the future. Estimating future costs, and
especially future outcomes, can be very difficult, as already emphasized. Program
analysis (sometimes called cost-effectiveness analysis) and cost-benefit analysis can help
agencies select service delivery variations. The findings should help the agency select the
service option that should be budgeted, help estimate the outcomes, and then help justify
the budget proposal. These techniques have been around for many years, but their use in
budget preparation and review is rare.
This book does not detail these techniques, but the following review briefly
identifies the major features of each that can help in results-based budgeting.
Program (cost-effectiveness) analysis. This term applies to special quantitative
analyses used to estimate the future costs and effectiveness of alternative ways to deliver
a service. While program evaluation is retrospective, program analysis is prospective.
The Department of Defense is one of the few agencies in the country that has designated
personnel to undertake regular program analysis. Otherwise, systematic program analysis
has not taken hold in the public sector or in nongovernmental organizations. The
Department of Health, Education, and Welfare (now the Department of Health and
Human Services) and the state of Pennsylvania had special offices with such expertise in
the 1960s, when these were fashionable as part of PPBS efforts, but later discontinued
them.
While some agencies have policy analysis shops, these are usually heavily
qualitative. Program evaluation offices, which primarily examine past performance, may
sometimes take on this role, since some of the same technical skills are involved.
Information from program evaluations can be valuable when the past data can be used to
help decide about the future.
For results-based budgeting, program analysis is particularly helpful when an
agency proposes to introduce a new service delivery approach or a significant variation of
an existing approach. Unless the delivery approach proposed closely resembles an
approach for which relevant past data are available, projecting costs and outcomes from
past data may not be very useful.
Agencies can consider doing pilot tests or experiments (as discussed in chapter 9),
using the performance measurement system for data on the old and the new service
approaches and then using that information as the basis for estimating outcomes and
costs. These procedures are worthwhile if the agencies can wait to make their final
decision until the test has been completed and the findings have become available.
Agencies should use the findings from such analyses and experiments to help formulate
and subsequently justify budget proposals.
As the use of performance measurement, and particularly results-based budgeting,
grows, the need to project outcomes systematically will also grow. The field of program
analysis may then stage a comeback.
Cost-benefit analysis. Cost-benefit analysis goes one step further than program
analysis. It provides a monetary estimate of the value of a program. (Cost-benefit
analysis can also help evaluate the value of a programs past performance.) Its key
characteristic is that it translates nonmonetary outcomes into monetary ones. The costs
are compared to the estimated dollar benefits to produce cost-benefit ratios and estimated
differences in the monetary values of the costs and benefits. Before the calculations into
monetary values can be performed, the basic outcome values, usually measured in
nonmonetary units, are needed. That is, program analysis needs to be done first. Cost-benefit analysis adds an additional, usually difficult, step to the process.
The monetary value of the outcomes has to be imputed in some way. For example,
an estimate that X number of traffic accidents could be avoided by a particular activity
might be converted into monetary estimates of the costs of those accidents, including
damage repair, hospital and other health care, time lost from work, and the economic
value of any lives lost. The costs of the activity being considered would then be
compared to these dollar valuations and a cost-benefit ratio calculated.
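A stripped-down version of that accident example might look like the following sketch. Every monetary figure is an invented assumption; a real analysis would also discount multiyear benefits and test the sensitivity of the imputed values:

```python
# Hypothetical cost-benefit sketch for an accident-reduction program.
# All figures are assumptions invented for illustration.

accidents_avoided = 120           # projected outcome (assumed)
cost_per_accident = {             # imputed monetary value of one accident (assumed)
    "damage_repair": 6_000,
    "health_care": 14_000,
    "lost_work_time": 4_500,
}
program_cost = 1_800_000          # proposed budget for the activity (assumed)

benefit = accidents_avoided * sum(cost_per_accident.values())  # $2,940,000
ratio = benefit / program_cost                                 # benefits / costs
print(f"Estimated benefit: ${benefit:,}; benefit-cost ratio: {ratio:.2f}")  # 1.63
```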
Sound cost-benefit analysis, whether of past program accomplishments or
projected program value, can provide major backup information for program budget
requests. Such calculations can also appeal to public and private officials, because most
outcomes are converted into dollars and summarized in one number (the cost-benefit
ratio), which can be interpreted as the value of the program. One summary number is
much easier for decisionmakers to handle. The usual application of this approach is to
compare options within a single service area, but it could also be used to compare
programs across services.
Cost-benefit analysis has a number of drawbacks. The calculations of monetary
value usually require numerous assumptions that can be quite controversial. For example,
how should the value of lost work time or of deaths be determined? (The value of lives
lost has sometimes been estimated based on the economic potential of human beings at
particular ages. This approach sounds reasonable, but giving older people little or no
value in the calculations implies that it is all right to "knock off" the elderly.) Another
problem is that the monetary values often accrue to different populations from the
populations that pay the costs. For example, revenues for most government expenditures
are raised by taxes from the public and businesses, but the benefits often accrue primarily
to particular groups.

If performed and used carefully, cost-benefit calculations can provide insights
into the expected value of the proposed budget for a program. However, cost-benefit
analysis reports should always spell out the value assumptions used so readers can better
understand the basis for the findings.
Cost-benefit analysis tends to be time-consuming and expensive. As a result, it
has been used very selectively, primarily by the federal government. The Army Corps of
Engineers has undertaken many such studies when selecting major water and other
construction projects. Cost-benefit analysis has also been used, and sometimes mandated,
for federal regulatory programs.

14. The Role of Qualitative Outcome Information in Results-Based Budgeting


As discussed in chapter 6, not all outcomes can be adequately measured in
quantitative terms. An agency's budget process should at least qualitatively consider the
implications of the budget for desired (and undesired) outcomes. Even if outcomes can
only be expressed qualitatively, explicitly including them in the budget, and in the
political debate over amounts and allocations, can help improve decisions on
expenditures.
Steps for Examining Performance Information in Budget Reviews
Some basic steps for developing and examining budget requests are listed in
exhibit 13-4 and discussed below.16 Together, these steps represent a heavy workload for
those reviewing or developing budget requests. However, these steps can be used
selectively. They are also likely to be appropriate at any time during the year when a
program seeks additional resources.
1. Examine the budget submission to ascertain that it provides the latest
information and targets on workload, output, intermediate outcomes, and end
outcomes, as well as the funds and personnel resources requested. The budget
submission should include past data on each indicator, the latest available outcome data
for the current budget year, and the targets for the fiscal year(s) for which the budget is
being submitted. If an indicator is too new for data or targets to be available, the
submission should note this and indicate when data will be available (both actual data and
targets).
If the program does not believe it can obtain numerical values for important
indicators, then it should explain why and provide qualitative information on past and
expected future progress.
2. Assess whether the outcome indicators and targets are consistent with the
mission of, and strategies proposed by, the program and adequately cover that mission.
If the agency's programs do not have explicit mission statements that adequately define
their major objectives (such as those included in strategic plans) or descriptions of the
strategies the programs propose to use to achieve the objectives, the reviewers will need
to ask the program to construct these, or construct these themselves, discussing them
with program personnel as necessary.
For example, federal, state, or local litigation offices may emphasize deterrence of
future criminal behavior in their formal mission statements. Litigation programs,
however, usually have not included indicators that explicitly address deterrence. The
outcome indicators tracked will probably focus on bringing offenders to justice. From the
program's viewpoint this focus is reasonable, but reviewers should consider whether it is
feasible to track deterrence using counts of nondeterrence as a surrogate (i.e., the amount
of reported criminal behavior) or be content to seek qualitative information. (Note:
Measuring deterrence directly is usually best done, if done at all, through in-depth studies
and not through a performance measurement process.) Reviewers might also decide that
the litigation program does not in fact have the responsibility or the capacity for
estimating prevention. They might determine that the mission statement was overstated
and that the program's focus on the number of offenders brought to justice is appropriate.
3. If the program is seeking increased resources, assess whether it has provided
adequate information on the amount each output and outcome indicator is expected to
change over recent levels. The changes might be expressed as a special table showing
pluses or minuses for each affected indicator. Programs need to make clear what effects
their special proposals are expected to have on outputs and outcomes, not merely on
funding and personnel resources.
4. Examine the programs projected workload, outputs, intermediate outcomes,
and end outcomes, as well as the amount of funds and personnel. Make sure these
numbers are consistent with each other (e.g., that the amount of output is consistent
with the projected workload). Determine whether the program has included data on the
results expected from the outputs it has identified. Use steps such as those listed in
exhibits 13-2 and 13-3 to develop and examine the targets. Output indicators normally
should be included in the budget submission for each major category of workload. (Note:
outputs represent completed work. Workload includes work in progress and items that are
pending.) Intermediate outcomes should be consistent with outputs and end outcomes
consistent with intermediate outcomes. If such information has not been included, the
program can be asked to provide the needed data.
The data on outputs and outcomes should be checked for consistency with each
other. For example, does the number of successes reported for a period exceed the
number of cases completed during that period?
Note, however, that substantial time lags can occur between the time a customer
comes in for service and the outcomes. For example, the outcome indicator "percent of
cases that were successful" should be derived by dividing the number of cases expected
to be successfully completed during the budget year by the number of cases completed
during the year, regardless of the year each case was initiated, not by the number of cases
worked on or started during the budget year. Another example: a budget-year estimate
for the outcome indicator "percent of child adoption cases in which the child was placed
with adoptive parents within 24 months of the child's entry into the system" would need
to be based on the number of children who came into the child welfare system two years
before the budget year. Where appropriate outcome indicators and/or outcome data have
not been provided, ask the program to provide them.
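As a concrete illustration, the sketch below (in Python, with hypothetical figures
and an invented function name) computes the success-rate indicator on a completed-cases
basis and applies the consistency check described above.

    # Illustrative sketch: "percent of cases that were successful," computed
    # from cases completed in the period, whatever year each case began.
    # Figures and names are hypothetical.
    def percent_successful(successes: int, completions: int) -> float:
        if successes > completions:
            # Consistency check: successes cannot exceed completions.
            raise ValueError("More successes than completed cases; review the data.")
        return 100.0 * successes / completions

    # Budget-year projection: 380 of 500 completed cases expected to succeed.
    print(f"{percent_successful(380, 500):.1f}%")  # 76.0%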
Two reminders:
• Outcomes can result from activities undertaken before the budget year. Also, some
outcomes intended to result from the proposed budget might not occur until after the
budget year. The budget submission should identify such situations.
• In the initial years of the performance measurement system, programs may not be
able to provide data on some outcome indicators.
5. Compare past data on workload, output, intermediate outcomes, and end
outcomes with the proposed budget targets. Identify unusually high or low projected
outputs or outcomes. This can be done in at least two ways:
• Compare the latest data on actual performance to those for previous reporting periods
and to the proposed budget targets.
• Compare historical data on individual outcome indicators to the past targets set for
those indicators to assess the program's accuracy in setting targets. In light of this
past experience, assess the program's proposed targets. Some agencies may have a
pattern of being highly optimistic about their ability to achieve outcomes; others may
have a pattern of overly conservative targets. Budget analysts should take this into
account as they interpret target achievement. Ideally, targets should be set at a level
that encourages high, but achievable, performance. (The budget analysis office should
attempt to track the proclivities of individual program managers to set their targets
overly high or low; one simple check is sketched after this list.)
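A minimal sketch of such a bias check, assuming the office retains each year's
targets and actuals for an indicator (all figures below are hypothetical):

    # Illustrative sketch: gauging a program's target-setting tendency.
    # Average actual-to-target ratios below 1.0 suggest chronic optimism;
    # ratios above 1.0 suggest overly conservative targets.
    history = [  # (year, target, actual) for one outcome indicator
        (2003, 70.0, 62.0),
        (2004, 72.0, 64.5),
        (2005, 75.0, 66.0),
    ]
    ratios = [actual / target for _, target, actual in history]
    print(f"Average actual-to-target ratio: {sum(ratios) / len(ratios):.2f}")
    # About 0.89: this program's targets have run optimistic.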
Where projected performance values differ considerably from past values, or
appear otherwise unusual, seek explanations. Has the program provided any other
information that explains this? If not, ask for explanations. For example, if a program has
the same targets it had last year, and it fell far short of those targets, ask what has
changed to make the targets more achievable this year. If the program is requesting a
considerable increase in funds without increasing outcome targets over previous years'
actual results, ask why the added funds are needed. If a program projects lower values for
outputs or outcomes, find out why. The program might report, for example, that the
reason was reduced workload (check the related workload indicators), reduced resources
(check the related expenditure and staffing amounts), unusually difficult or complex
workload (check any evidence provided by the program), or reduced efficiency or
effectiveness in delivering the service (not likely to be reported by the program).
6. Examine the explanatory information, especially for outcome indicators
whose past values fell significantly below expectations and for any performance targets
that appear unusually high or low. This step should be given special attention when any
of the earlier steps indicate that the performance levels projected need further
examination. Explanatory information should be examined before any conclusions are
drawn about the performance of the program and its resource implications.
Explanations can be substantive or merely rationalizations or excuses. To
assess the value of the explanations, the analysts may need to follow up with the program
to clarify and/or obtain more information.
7. For programs likely to have delays or backlogs that might complicate
program services, be sure the data adequately cover the extent of delays, backlogs, and
lack of coverage. Buildups of such problems can be a major justification for added
resources. The size of any delays or backlogs, and how these may be growing, can be
important customer-focused, quality-of-service performance indicators for social, health,
welfare, loan, licensing, and many other programs. For legal prosecutions and court
cases, "justice delayed is justice denied."
Conversely, if a program's indicators show no evidence of significant delays, then
existing resource levels appear adequate for the future, unless the program provides
evidence that a significant buildup of its future workload is likely. Programs, where
possible, should systematically categorize their incoming caseloads by level of difficulty
or complexity (see chapter 8). Programs should also project the size of their caseload by
difficulty or complexity as a factor in determining their proposed budget. Is there any
evidence that the program is now getting, or expects to get, more complex and/or more
difficult cases? Such changes would offer justification for additional resources.
Indicators that programs can be asked to provide include the following (a minimal
backlog tally is sketched after this list):
• Counts of the number of cases pending and projected at the end of each year (tracked
over time, this will indicate buildups)
• Indicators of the time it has taken, and is expected to take, given proposed budget
resources, to complete various activities
• Estimates of the number of cases that will have to be turned away (for programs that
have the discretion to turn them away)
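A minimal tally of the first indicator, assuming the program records annual intake
and completions (all figures are hypothetical):

    # Illustrative sketch: pending cases at year's end, tracked over time.
    # end_pending = prior pending + new cases - completed cases.
    pending = 200  # assumed cases pending at the start of 2004
    for year, new, done in [(2004, 1000, 980), (2005, 1050, 960), (2006, 1100, 950)]:
        pending += new - done
        print(year, pending)  # 220, then 310, then 460: a growing backlog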
8. For regulatory programs, be sure that adequate coverage is provided for
compliance outcomes (not merely numbers of inspections). Examples include
environmental regulation programs, work-safety programs, civil rights programs, and
regulatory boards. The analysts should ascertain that the outputs and intermediate and
end outcomes of compliance-monitoring activities are identified. For example, does the
budget proposal report on expected outputs (such as the number of needed inspections
that are expected) and the intervals at which they are projected to be done? Do the
indicators provide past data on such outcomes as the number of organizations found in
previous years not to be in compliance, and then the number and percent that
subsequently were found to have fully corrected the problems? Do the indicators include
the incidence of problems that occurred despite the regulatory activities? Do the
budget-year projections include such estimates for the budget period? Do the monitoring
resources proposed in the budget appear too small or too large compared to the expected
outcomes?
9. Ascertain that the program has sufficiently considered possible changes in
workload that are likely to affect outcomes (such as higher or lower proportions of
difficult workload). Programs may not report such breakouts in their budget submissions,
but they are often able to supply such information. (Programs should be encouraged, for
their own data analyses, to break out their outcome data by various work and customer
characteristics, such as type of case, its difficulty, and different locations or facilities.)
For example, federal and state correctional facilities will probably have internal reports
on individual facilities and facility categories, such as security level and type of prisoner.
Health and human services programs can probably provide some service data on
individual facilities or offices and on various demographic groupings of clients.
Examine whether the outcomes differ substantially for some service
characteristics (such as some facilities or regions) compared with others. If so, examine
why. This information can be very helpful in interpreting a program's projected outcome
data. For example, certain types of locations or cases may be considerably more difficult
to handle than others, suggesting that lower-than-desired projected performance is the
result of an increase in the proportion of difficult cases and thus providing a supportable
case for lower outcomes. Budget reviewers should look for evidence that substantially
more difficult (or easy) cases are likely to come in during the budget year.
Comparing outcomes among demographic groups is also important in assessing
equity and fairness. Are some groups underserved? Should additional resources be
applied to those groups? Even though identifying who loses and who gains can be a
political hazard, the information is basic to resource allocation. It needs to be addressed.
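As a minimal illustration of such breakouts, this sketch groups hypothetical case
records by one characteristic (region) and compares success rates; the field names and
figures are invented for the example.

    # Illustrative sketch: breaking out an outcome indicator by region.
    from collections import defaultdict

    cases = [  # (region, case_succeeded) - hypothetical records
        ("North", True), ("North", True), ("North", False),
        ("South", True), ("South", False), ("South", False),
    ]
    totals = defaultdict(lambda: [0, 0])  # region -> [successes, cases]
    for region, ok in cases:
        totals[region][0] += int(ok)
        totals[region][1] += 1
    for region, (wins, n) in sorted(totals.items()):
        print(f"{region}: {100 * wins / n:.0f}% of {n} cases successful")
    # North: 67% of 3; South: 33% of 3 - a gap worth asking about.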
10. If recent outcomes for a program have been substantially worse than
expected, make sure the program has included in its budget proposal the steps, and
resources, it plans to take toward improvement. If the program projects improved
performance, are the resources and planned steps commensurate? If not, why not? (For
example, substantial time may elapse between approval of funding, implementation, and
the effects of the funded activities on the achievement of certain outcomes.)
11. Examine findings from any program evaluations or other special studies
completed during the reporting period. Assess whether these findings have been
adequately incorporated into the budget proposals. This includes studies produced by
other organizations. Such information may provide added support for the activities and
budget proposed by the program, or it may contradict the findings produced by the
program to support its proposed activities and budget.
12. Determine whether the program has developed and used information on the
relationship between resource requirements, outputs, and outcomes (e.g., the added
money estimated to increase the number of successfully completed cases by a specified
amount). Assess that information for plausibility. Few programs are likely to have
undertaken much systematic analysis of this relationship. Programs should be encouraged
to do so to help substantiate future budget requests.
Relating expenditures and resources to outcomes (both intermediate and end
outcomes) is usually difficult and uncertain. However, to the extent that additional dollars
and staff enable the program to take on more work (more customers, more investigations,
more road repairs, more inspections, etc.), the program can probably estimate roughly
how much additional work it can handle based on past performance information. For
example, a program may be able to estimate the percent of cases or incidents it might not
be able to handle (such as identifying illegal immigrants) without the added funding
requested.
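A rough sketch of that kind of estimate, assuming the program knows its historical
cost per completed case (all figures are hypothetical):

    # Illustrative sketch: rough marginal estimate of added output from
    # added funds, based on past unit cost. Figures are hypothetical.
    past_spending = 2_000_000.0              # last year's expenditures
    past_output = 4_000                      # cases completed last year
    unit_cost = past_spending / past_output  # $500 per completed case

    added_funds = 250_000.0
    print(f"Roughly {added_funds / unit_cost:.0f} additional cases")  # ~500

Such an estimate assumes the next cases will cost about what past cases did; a shifting
mix of easy and difficult cases would change it.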
Many, if not most, programs will be unlikely to have investigated the cost-to-output
and cost-to-outcome relationships that underlie their budget requests. However,
these relationships are at the heart of resource allocation decisions, implicitly if not
explicitly, and the program should be pushed to be as explicit as possible about them.
After all, the projected targets the program sets each year for its outcome
indicators by definition imply such relationships, however rough the estimates may be.
A program seeking additional resources will tend to be overly optimistic about the
outcomes that will result. Budget analysts should look for supportable estimates of the
relationships between resource requirements (dollars and personnel) and at least
approximate values for each outcome indicator.
Over the long run, programs should be encouraged to develop information about
these relationships. The analysis needed for such studies usually requires special
background, however, which is not likely to be in place in most programs. Analytical
staff, whether attached to each program or to a central analysis office, should be helpful
for this purpose.
13. Identify indicators with significantly reduced outputs or outcomes projected
for the budget year (compared to recent performance data) and no decrease in funding
(adjusted for projected price increases) or staffing. Identify and assess the program's
rationale. Reduced funding or staffing projections are obviously plausible rationales for
reduced outcome projections, as is a more difficult or complex workload in the new year.
If the program has been systematically categorizing its incoming caseload by level of
difficulty or complexity, it should be able to provide evidence supporting a reduction.
The program might already have in its pipeline many especially difficult cases. For
example, litigation or investigation programs may be working on several cases that are
highly complex and require additional program resources.
Other possible reasons for lower outcome targets include (a) an unexpected jump
in workload during the budget year without an accompanying increase in resources,
leading to reductions in the percent of cases for which the program can produce
successful outcomes; (b) new legislative or agency policies that add complications or
restrictions, reducing the probability of successful outcomes in certain categories of
cases; and (c) external events that would impair outcomes, such as the expected departure
of key industries from a community, affecting local employment and income.
14. Identify outcome indicators with significantly improved outcomes projected
by the program for the budget year (compared to recent performance data) and no
increase in staffing, funding (adjusted for projected price increases), or output. Identify
and assess the program's reasons for these increases. Budget reviewers should ask the
program how it expects to achieve the improved performance, to check the plausibility
of the higher targets. Such improvements might occur if the program plans to improve the
efficiency of its operations. Another reasonable rationale is that the program expects its
workload to be easier or less complex. The program may already have in its pipeline
cases that it expects to be successful in the budget year.
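Steps 13 and 14 lend themselves to a simple screen. The sketch below flags
indicators whose projected change diverges from the change in real resources; the
indicator names, figures, and the 5-point threshold are all invented for this example.

    # Illustrative sketch: flag indicators whose projected outcome change
    # diverges from the change in inflation-adjusted funding.
    indicators = [
        # (name, % change in projected outcome, % change in real funding)
        ("cases resolved successfully", -12.0, 0.0),  # step 13 pattern
        ("clients placed in jobs", 15.0, -1.0),       # step 14 pattern
        ("permits issued on time", 2.0, 3.0),         # unremarkable
    ]
    THRESHOLD = 5.0  # percentage points of divergence worth questioning
    for name, outcome_chg, funding_chg in indicators:
        if abs(outcome_chg - funding_chg) > THRESHOLD:
            print(f"Ask for the rationale: {name} "
                  f"(outcome {outcome_chg:+.0f}%, real funding {funding_chg:+.0f}%)")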
15. Identify what, if any, significant outcomes from the budgeted funds are
expected to occur in years beyond the budget year. Assess whether they are adequately
identified and support the budget request. As noted earlier, many programs and their
activities affect outcomes in years beyond the budget year (particularly federal and state
programs that work through other levels of government and any investment funding). To
justify expenditures for such activities, programs should project these expenditures'
effects on the various outcomes for years beyond the budget year. The program should
also provide rationales for such projections. Budget analysts should review these
rationales for plausibility.
16. Identify any external factors not considered in the budget request that might
significantly affect the funds needed or the outcomes projected. Make needed
adjustments. The persons examining the budget request may be privy to information not
available to those preparing it. For example, newly proposed or passed legislation or
recently released economic forecasts can have major effects on the outcome projections.
17. Compare the latest program performance data to those from any other
programs with similar objectives for which similar past performance data are
available. Assess whether projected performance is compatible with that achieved by
similar programs. This point and the next are resource allocation issues that cross
program lines. Agency budget analysts should consider the performance experience of
other, similar programs even if the programs are in another agency. Are the program's
past accomplishments poor relative to those of similar programs? If so, work with
program personnel to determine why and identify what can be done to improve future
performance. Make any resource judgments that such future actions might entail. Does
the program complement or overlap other programs' efforts? If they are complementary,
check whether the data are consistent among the programs. If they overlap, consider
whether altered resource allocations are appropriate to reduce the overlap.
18. Identify any overarching outcome indicators that can provide a more
meaningful and comprehensive perspective on results. Consider coordinating with
other programs, other agencies, and other levels of government. Few programs produce
outcomes alone, especially end outcomes. This is a core concern in performance
measurement. Programs related to employment, youth development, substance abuse,
crime, and so on generally involve scores of other programs that also influence the
desired ends. For example, crime control involves investigation, apprehension,
adjudication, punishment, and probably a variety of social services. Each component is
critical to final success, and each is handled by different programs and agencies.
Look for, and examine, consolidated outcome indicators that apply to all such
programs. The budget examiners should make recommendations for any needed
coordination and collaboration among programs and agencies. This would include the use
of common cross-cutting outcome indicators and determining the roles and
responsibilities of each program in achieving jointly targeted outcomes.
For example, reduced drug and alcohol abuse involves many different programs,
agencies, and sectors. Each agency with a substantial role in helping reduce substance
abuse should track the overall incidence and prevalence (but one agency would normally
be responsible for data collection), recognizing that the responsibility is shared. Each
program will likely have its own intermediate outcome indicators and focus on one part
of the overall problem (such as reducing drug abuse by one age group).17
Summary of the Relationship between Performance Measurement and
Results-Based Budgeting
The primary uses of performance data in budgeting are to help formulate the
budget and to make a more convincing case for the budget recommendations.
Performance information, especially if it includes credible outcome data, should lead to
better and more convincingly justified choices than are possible in its absence. Outcome
targets for the budget year also establish a baseline for accountability (encouraging
reviews of actual accomplishments throughout the year and at year's end).
Performance measurement of outputs, outcomes, and efficiency for past years is
important for budget allocation decisions. First, the performance information provides
baseline data on outcomes, which are fundamental for making decisions. If you do not
know where you are, you will have difficulty determining where you need to go. Second,
historical data are usually a primary basis for budget projections of future
accomplishments.
Making projections for the budget year and beyond is considerably more difficult,
and subject to much more uncertainty, than measuring past performance. The future is
very hard to predict, even for only one or two years ahead, because of the many external
factors that can affect results. This problem becomes particularly troublesome if the
program is proposing significant new program variations or new programs to tackle its
mission. Then past data will be a much less adequate guide to the future.
However uncertain the data, addressing the relationship of inputs to outcomes
should be a major issue in making resource allocation decisions and budget justifications
in any budgeting system. Even if such discussions are heavily qualitative and judgmental,
they are far better than nothing, because they encourage those making budget decisions
to focus on what is most important to achieve.
The budget review effort should be viewed as an opportunity for both the program
and the agency's budget review staff to develop the best possible budget, to make the best
possible case for budget requests, and to focus on maximizing outcomes for a given
amount of resources. The inherent tension between budget analysts who perceive their
primary job as keeping costs to a minimum and program personnel who want to obtain as
many resources as they can will inevitably pose problems. The two groups will find the
process much less difficult and less contentious if they work to make it as much of a
partnership as possible. The interests of both groups are best served if the final resource
allocation decisions forwarded to higher levels are presented as effectively as possible.
These days, that means proposals need to be justified, at least in part, based on
outcomes: the potential benefits to the public.
Exhibit 13-1
Key Issues in Results-Based Budgeting
1. Need to increase focus on outcomes, not only inputs and outputs
2. Limitations in the usefulness of performance measurement information for results-based budgeting
3. Time frame that should be covered by results-based budgeting, especially considering
that outcomes often occur years after the one in which the funds were budgeted
4. Whether proposed inputs can be linked to outputs and outcomes
5. The role of efficiency indicators
6. Setting performance targets in budgets
7. Use of explanatory information
8. Strength of program influence over future outcomes
9. Using performance information in formulating and examining budget requests
10. Applying results-based budgeting to internal support services
11. Using results-based budgeting for capital budgeting
12. Budgeting-by-objectives and budgeting for outcomes
13. Special analytical techniques for projections
14. The role of qualitative outcome information in results-based budgeting
Exhibit 13-2
Suggested Steps in Developing Outcome Targets
1. Examine the agency's strategic plan (if one exists). Targets contained in the budget
should be compatible with targets in the strategic plan.
2. Analyze the historical relationships between inputs (expenditures and staffing),
outputs, and outcomes. Examine any explanatory information that accompanied the
historical data. Use that combination of information to provide an initial estimate of
targets compatible with the amount of resources being considered for the program's
proposed budget.
3. Consider each factor listed in exhibit 13-3 (such as outside resources, environmental
factors, changes in legislation or requirements, and expected program delivery changes)
and adjust the targets accordingly.
4. Consider the level of outcomes achieved by similar organizations or under various
conditions (as discussed in chapter 9). For example, the outcomes achieved by
better-performing offices or facilities that provide similar services are benchmarks the
program may want to emulate.
5. Review the findings and recommendations from any recent program evaluations to
identify past performance levels and past problems. Consider their implications for the
coming years.
6. Use program analysis, cost-effectiveness analysis, and/or cost-benefit analysis to
estimate the future effects of the program.
Exhibit 13-3
Factors to Consider When Selecting Specific Outcome Targets
• Past outcome levels I. The most recent outcomes and time trends provide a starting
point for setting the outcome targets. (For example, recent trends may indicate that
the values for a particular outcome indicator have been increasing annually by 10
percent in recent years; this would indicate the next year's number should be
increased by a similar percentage.)
• Past outcome levels II. If the values for an outcome indicator are already high, only
small improvements in the outcome level can reasonably be expected. If the values
for an outcome indicator are low, future improvements can be expected to be larger
(there is more room for improvement).
• Amount of dollar and personnel resources expected to be available through the target
period. If staff and funds are being reduced or increased, how will this affect the
program's ability to produce desired outcomes?
• Amount of outside resources expected to supplement the program's resources.
Potential sources include other agencies, foundations, volunteers, and the business
community. If such resources can play a significant role in producing the outcomes
sought by the program, and the program has indications that these are being
significantly increased or decreased, how is this likely to affect the outcomes?
• Factors likely to be present in the wider environment through the target period. These
include such factors as the economy, population demographics, weather, major
changes in industries in the area (such as major new industries scheduled to begin or
depart), and major changes in international competition.
• Recent or pending changes in legislation and other requirements of higher-level
governments. To what extent are they likely to increase or decrease the ability of the
program to produce favorable outcomes?
• Changes planned by the program in policies, procedures, technology, and so on. It is
important to consider lead times to implement such changes.
• Likely lag times from the time budgets are approved until the outcomes are expected
to occur. This applies both to the effects of past years' expenditures on the outcome
values targeted for the budget year and to the likely timing of outcomes produced
with the funds allocated in the budget year. (For some outcome indicators, effects will
be expected in the budget year, but for others, effects will occur primarily in years
after the budget year.)
• Political concerns. Politics may at times push for reporting outcome targets that
exceed feasible levels. (Even so, the program and budget analysts should provide
those selecting the targets with estimates of the likely achievable levels of outcomes.)
Exhibit 13-4
Steps for Examining Performance Information in Budget Requests
1. Examine the budget submission to ascertain that it provides the latest information and
targets on the workload, output, intermediate outcomes, and end outcomes, as well as
the funds and personnel resources requested.
2. Assess whether the outcome indicators and targets are consistent with the mission of,
and strategies proposed by, the program and adequately cover that mission.
3. If the program is seeking increased resources, assess whether it has provided adequate
information on the amount each output and outcome indicator is expected to change
over recent levels.
4. Examine the program's projected workload, outputs, intermediate outcomes, and end
outcomes, as well as the amounts of funds and personnel. Make sure these numbers
are consistent with one another (e.g., that the amount of output is consistent with the
projected workload). Determine whether the program has included data on the results
expected from the outputs it has identified.
5. Compare past data on workload, output, intermediate outcomes, and end outcomes
with the proposed budget targets. Identify unusually high or low projected outputs or
outcomes.
6. Examine the explanatory information, especially for outcome indicators whose past
values fell significantly below expectations and for any performance targets that
appear unusually high or low.
7. For programs likely to have delays or backlogs that might complicate program
services, be sure the data adequately cover the extent of delays, backlogs, and lack of
coverage.
8. For regulatory programs, be sure that adequate coverage is provided for compliance
outcomes (not merely number of inspections).
9. Ascertain that the program has sufficiently considered possible changes in workload
that are likely to affect outcomes (such as higher or lower proportions of difficult
workload).
10. If recent outcomes for a program have been substantially worse than expected, make
sure the program has included in its budget proposal the steps, and resources, it plans
to take toward improvement.
11. Examine findings from any program evaluations or other special studies completed
during the reporting period. Assess whether these findings have been adequately
incorporated into the budget proposals.
12. Determine whether the program has developed and used information on the
relationship between resource requirements, outputs, and outcomes (e.g., the added
money estimated to increase the number of successfully completed cases by a
specified amount).
13. Identify indicators with significantly reduced outputs or outcomes projected for the
budget year (compared to recent performance data) and no decrease in funding
(adjusted for projected price increases) or staffing. Identify and assess the program's
rationale for these reductions.
14. Identify outcome indicators with significantly improved outcomes projected by the
program for the budget year (compared to recent performance data) and no increase in
staffing, funding (adjusted for projected price increases), or output. Identify and assess
the program's reasons for these increases.
15. Identify what, if any, significant outcomes from the budgeted funds are expected to
occur in years beyond the budget year. Assess whether they are adequately identified
and support the budget request.
16. Identify any external factors not considered in the budget request that might
significantly affect the funds needed or the outcomes projected. Make needed
adjustments.
17. Compare the latest program performance data to those from any other programs with
similar objectives for which similar past performance data are available. Assess
whether projected performance is compatible with that achieved by similar programs.
18. Identify any overarching outcome indicators that can provide a more meaningful and
comprehensive perspective on results. Consider coordinating with other programs,
other agencies, and other levels of government.
References and Notes
1. Personal communication with David Strauhs, Commissioner of Massachusetts Department of
Environmental Protection, December 4, 1997.
2. The OMB web site fully describes the PART process. The web site also presents the detailed ratings for
each federal program. See, for example, a PowerPoint summary of the process, Program Assessment
Rating Tool (PART): Improving Performance, March 2005.
3. The word "target" is not always used in this context. The Government Performance and Results Act of
1993 uses the term "annual goals." Another terminology problem arises for programs, such as law
enforcement, in which the word "targets" for some outputs or intermediate outcomes might be interpreted as
establishing quotas, such as on the number of arrests, prosecutions, or collections. For this reason,
programs whose missions are investigative, such as criminal investigation activities, might use another,
more neutral, label, such as "projections."
4. The terms "performance-based budgeting" and "budgeting-for-results" are also used.
5. This book does not address the many issues involved in developing comprehensive cost estimates for
particular programs, such as how to handle indirect or capital costs. For many public and private agencies,
cost accounting is deficient. Efforts such as activity-based costing may help, but they still have substantial
limitations for projecting future costs. More important for budget formulation is for agencies to have good
cost analysis capability, that is, to be able to estimate the likely additional expenditures that will be
incurred to produce particular outcome levels. One approach to cost analysis is contained in David H.
Greenberg and Ute Appenzeller, Cost Analysis Step by Step: A How-to Guide for Planners and Providers
of Welfare-to-Work and Other Employment Training Programs (New York: Manpower Demonstration
Research Corporation, October 1998).
6. This, however, can be a major problem for developing countries that have not yet established reasonably
accurate procedures for tracking expenditures and outputs.
7. Those readers who do not believe that response times to requests for services should be labeled an
"outcome" might prefer a label such as "quality-of-output indicator."
8. For the application of performance targeting in other countries, see Sylvie Trosa, "Public Sector Reform
Strategy: A Giant Leap or a Small Step?" in Monitoring Performance in the Public Sector: Future
Directions from International Experience, edited by John Mayne and Eduardo Zapico-Goni (New
Brunswick, NJ: Transaction Publishers, 1997).
9. Preferably, an agency would sponsor in-depth program evaluations for each of its major programs, say,
once every few years. New programs might be required to provide an evaluation strategy. Unfortunately,
in-depth evaluations are expensive and time-consuming. Agencies and programs with highly limited
resources might instead schedule periodic, but less comprehensive, reviews of each of their programs to
learn more about how well they are working and why.
10. Degree of influence does not refer to the ability of an agency or program to manipulate the data to its
own advantage. That is a quality control issue, discussed in chapter 14.
11. The costs of support services, however, need to be considered when analyzing the total costs of a
program and comparing its costs to its benefits.
12. Note that activities relating to collecting revenues, such as taxes and fees, are not included here as
internal activities. Performance indicators indicating the extent of success, such as "percent of owed
taxes that were collected," can be considered major outcome indicators for governments.
13. This approach is discussed in Mark Friedman, A Guide to Developing and Using Performance
Measures in Results-Based Budgeting (Washington, DC: The Finance Project, May 1997).
14. This approach is presented in The Price of Government by David Osborne and Peter Hutchinson (New
York: Basic Books, 2004), especially chapter 3.
15. An example of this is the crosswalk developed by the Oregon Progress Board and the Department of
Administrative Services, 1999 Benchmark Blue Books: Linking Oregon Benchmarks and State Government
Programs (Salem, May 1999).
16. Few publications are available that suggest specific steps for reviewing budget proposals that include
examining the outcome consequences of the proposed budget levels. A recent document prepared for state
legislative analysts nevertheless appears also applicable to budget examinations for any level or branch of
government: Asking Key Questions: How to Review Program Results (Denver, CO: National Conference
of State Legislatures, 2005). However, it primarily focuses on past results, rather than also including an
examination of the proposed outcomes.
17. The U.S. Office of National Drug Control Policy has been a leading agency in attempting to work out such
cooperative efforts among federal, state, local, and foreign governments. See, for example, National Drug
Control Strategy: FY 2007 Budget Summary (Washington, DC: The White House, 2006).