DOCUMENT RESUME

ED 457 437                                             CG 031 173

AUTHOR       Granello, Darcy H.; Granello, Paul F.
TITLE        Counseling Outcome Research: Making Practical Choices for
             Real-World Applications.
PUB DATE     2001-00-00
NOTE         12p.; In its: Assessment: Issues and Challenges for the
             Millennium; see CG 031 161.
PUB TYPE     Opinion Papers (120)
EDRS PRICE   MF01/PC01 Plus Postage.
DESCRIPTORS  Community Health Services; Counseling; *Counseling
             Effectiveness; Evaluation; *Outcomes of Treatment; *Theory
             Practice Relationship
ABSTRACT
The incorporation of published outcome data into clinical
practice plays a significant role in determining appropriate treatment
interventions and the efficacy of various modalities. If practitioners are
willing to conduct their own outcome research, the results will enhance the
quality of care for clients and improve the quality of information provided
to funding sources. When simple measures of effectiveness are implemented,
the demonstrated outcomes from such research can be a very effective tool for
providing evidence of treatment success. To begin outcome research,
counselors must have an understanding of efficacy studies and effectiveness
studies. Counselors can use published efficacy studies to make initial
choices about treatment interventions, and then conduct effectiveness studies
on their own practice to measure the success of their treatment. The results
of effectiveness studies can be useful in helping with allocation of
resources and in marketing programs to community and health care
organizations. (Contains 28 references.) (JDM)
Counseling Outcome Research: Making
Practical Choices for Real-World
Applications
By
Darcy H. Granello
Paul F. Granello
Chapter Fourteen
Counseling Outcome Research: Making
Practical Choices for Real-World
Applications
Darcy H. Granello
Paul F. Granello
Abstract
Mental health practitioners are increasingly being called upon
to demonstrate the effectiveness of their clinical interventions.
Effectiveness studies are a type of outcome research that can provide
useful information to clinicians and to managed care organizations.
In an age of managed care, counselors are increasingly being called
upon to demonstrate the effectiveness of their clinical interventions
(Granello, Granello, & Lee, 1999). The ability to demonstrate treatment
success is rapidly becoming the standard by which reimbursement is
judged (Sexton, 1996). In spite of these pressures, many counselors
have been left unprepared to meet this new standard. Historically,
mental health practitioners used professional judgment and theoretical
beliefs to determine treatment interventions. Fee-for-service policies
and insurance reimbursement were assumed, and insurance companies
rarely questioned treatment decisions (Plante, Couchman, & Diaz,
1995). In the current practice environment, however, counselors who
cannot demonstrate their successes may find themselves unable to
survive professionally (Burlingame, Lambert, & Reisinger, 1995).
Although the demonstration of treatment effectiveness is
increasing in importance, many mental health professionals and
agencies have resisted participation in outcome measures, and there is
widespread resistance among mental health professionals to beginning
their own assessment programs (Plante et al., 1995). Studies have
revealed that the vast majority of mental health practitioners report
that they do not read research or engage in research and believe that
research has little or no impact on their counseling practices (Cohen,
Sargent, & Sechrest, 1986; Falvey, 1989). In 1983, Norcross and
Prochaska found that when presented with 14 reasons to select a
particular approach or orientation with a client, the psychologists in
their study rated outcome research 10th, just above "family
experiences" and "own therapist's orientation." More recently, Norcross
(2000) noted there was little evidence that this ranking had improved
significantly during the past 17 years, although he predicted that the
recent emphasis on the importance of outcome research should result
in increased reliance on such research in the future. A recent survey
found that although the majority of the clinical diplomates of the
American Board of Professional Psychology (65%) supported the
development of empirically supported treatments, the majority of
respondents (54%) did not routinely use them in their practices (Plante,
Anderson, & Boccaccini, 1999).
Both philosophical and practical concerns have been identified at
the root of the resistance to engaging in outcome research and
incorporating research results into practice. Philosophically, some
providers have argued that the invasion of accountability into mental
health care has negatively affected therapeutic decision making
(Sherman, 1992). Some argue that the therapeutic process itself is not
quantifiable (Mirin & Namerow, 1991) or that clinical flexibility,
clinical judgment, and creative expression of theory should be valued
more than scientific method and statistical analysis (Havens, 1994).
Still others argue that time spent in evaluation could be better used in
treatment (Plante et al., 1995). Even among clinicians who are willing
to conduct outcome research, practical concerns often stand in the way.
Practitioners may erroneously believe that the task will be
overwhelming or that a program of research will necessarily be costly,
complex, and time-consuming (Granello et al., 1999). What has become
apparent is that few mental health practitioners have received the
training they need to conduct such research. Research methods courses
in university programs often focus on understanding laboratory research
with true experimental designs that are often impossible to implement
in real-world assessment (Sandell, Blomberg, & Lazar, 1997). Thus,
practitioners may be ill prepared to conduct their own outcome research,
regardless of their willingness to do so.
The incorporation of already published outcome data into clinical
practice plays a significant role in determining appropriate treatment
interventions and the efficacy of various modalities (Sexton, 2000).
Bridging the gap between research and practice is essential (Whiston
& Coker, 2000). However, if a practitioner is willing to conduct his or
her own outcome research, in conjunction with already published
research to support general clinical interventions, the result will be
enhanced quality of care for clients and improved quality of information
provided to funding sources (Granello, Granello, & Lee, in press).
Measuring treatment effectiveness need not be a difficult or
cumbersome task. Simple measures of effectiveness can be
implemented quite easily, and the demonstrated outcomes from such
research can be a very effective tool for providing evidence of treatment
success.
Methodological Considerations
To engage in outcome research, counselors must first have an
understanding of the two main types of research that are used to
demonstrate clinical success: efficacy studies and effectiveness studies.
Efficacy studies use random assignment to treatment and control groups,
manualized treatments, and participants who meet criteria for a single
diagnosed disorder (Seligman, 1995; Wampold, 1997). Additionally, there are clearly defined
inclusion and exclusion criteria for clients and an adequate sample
size to obtain the necessary statistical power (Fishman, 2000). Efficacy
studies provide useful information and are appropriate designs for
laboratory studies or settings in which highly controlled manipulation
of variables is possible (Sandell et al., 1997). However, these studies
are very expensive and time-consuming and often are funded through
a university or through a grant offered by a foundation or a
pharmaceutical company.
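As a rough illustration of the sample-size requirement mentioned above, the short Python sketch below estimates how many participants per group a hypothetical two-group efficacy trial would need. This example is not drawn from the chapter: the effect size, alpha, and power values are assumptions chosen only to show why such studies demand large samples, and the statsmodels library is simply one common tool for the calculation.

    # A minimal sketch (illustrative only) of the power analysis behind
    # sample-size decisions in efficacy studies.
    # Assumed values: medium effect size (d = 0.5), alpha = .05, power = .80.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
    print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 64

Under these assumed values, roughly 64 clients per condition would be required, which helps explain why efficacy trials are usually funded through universities, foundations, or pharmaceutical companies rather than conducted in individual practices.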
Effectiveness studies, on the other hand, attempt to answer how
well clients fare under treatment as it is actually practiced in the field.
Such studies yield useful and credible information that can empirically
validate psychotherapy (Lambert, Huefner, & Nace, 1997).
Effectiveness studies recognize that less-than-methodologically-ideal
situations exist in the field. Among these situations are that (a) therapy
is not always of fixed duration, and typically continues until the client
improves or quits or until insurance coverage runs out; (b)
psychotherapy often is eclectic rather than manualized and typically is
self-correcting (e.g., if one technique is not working, then another
usually is tried); (c) clients typically present with multiple problems,
some subclinical and some diagnosable, rather than the pure diagnoses
represented in efficacy studies; and (d) psychotherapy in the field
typically is concerned with improvements in general functioning rather
than in specific symptom relief, which is the typical measure in efficacy
studies (Seligman, 1995).
Efficacy and effectiveness studies have different strengths and
limitations. Efficacy research typically has high internal validity but
low external validity. The conditions under which efficacy research is
conducted are so structured that there is a high degree of confidence
that changes that occur are due to the treatment, not to confounding
variables. However, the conditions under which efficacy research is
conducted are often so dissimilar to what happens in the field that
there is a low degree of confidence in generalizing the results of a
particular study to field conditions. Conversely, effectiveness studies
have high external validity but low internal validity. Because they
sample a population directly from the field, there is a high level of
confidence that results can be generalized to other members of the
population (Fishman, 2000). The lack of a control group and of therapist
adherence to specific treatment interventions are noteworthy, however,
and lead to concerns about confounding variables (e.g., the passage of
time) that might affect treatment results (Granello et al., 1999). Overall,
efficacy and effectiveness studies provide complementary research
designs. Counselors can use published efficacy studies to make initial
choices about treatment interventions, then conduct effectiveness
studies on their own practice to measure the success of their treatment
(Granello & Hill, 2000).
Research Design
Research design is guided by the research questions under
investigation (Granello & Hill, 2000). What specific information does
the counselor wish to have about his or her practice or clients? Clinicians
wishing to engage in tracking the success of a single client for
reimbursement purposes would ask different research questions than
would those wishing to investigate their treatment success with their
overall client load or with clients having particular disorders (e.g.,
anxiety disorders).
Many effectiveness studies follow a pre-post or pre-post-follow-up
design. That is, clients are given an instrument or series of
instruments upon entering treatment, and the same instrument or
instrument battery is given at discharge, and if desired, at pre-designated
follow-up periods (typically 3, 6, or 12 months, or all three). Other
types of effectiveness studies track the progress of a single client at
various points in treatment (e.g., every week, every month), on a specific
rating scale, with results that can be represented graphically to
demonstrate progress. Still other studies use existing data from client
records (e.g., Global Assessment of Functioning scores) to make
comparisons over time or across client groups. Thus, for a single client,
the counselor may choose to measure the reduction of a very specific
symptom and engage in a single-case pre-post design, using a repeated
measures t-test, or may choose to forego statistical analysis in favor of
a graphic representation of multiple data points. To measure symptom
reduction in multiple clients, the clinician may wish to collect
demographic data and make repeated measures comparisons (via MANOVA) of
reduction of various types of symptoms depending on demographic data
(e.g., age, gender) or Axis I diagnosis. From this information, for
example, a clinician could learn that he or she is very effective at
helping clients with clinical depression to reduce their cognitive
symptoms of depression but not as effective at helping to reduce the
behavioral components. Likewise, she or he could discover that the
treatments implemented seem to work well for female clients but are
less successful with male clients. Clearly, all of this information
can yield valuable data for improving clinical effectiveness.
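To make the pre-post analysis described above concrete, the Python sketch below runs a repeated measures (paired) t-test on hypothetical intake and discharge symptom scores. The instrument, the score values, and the eight-client sample are invented for illustration only; they are not data from the authors' studies.

    # A minimal sketch, with invented data, of a pre-post effectiveness analysis:
    # each client completes the same symptom inventory at intake and at discharge,
    # and a repeated measures (paired) t-test asks whether scores dropped reliably.
    from scipy import stats

    # Hypothetical intake and discharge scores (higher = more symptoms).
    pre_scores = [28, 31, 24, 35, 29, 22, 33, 27]
    post_scores = [19, 25, 20, 27, 21, 18, 26, 22]

    t_stat, p_value = stats.ttest_rel(pre_scores, post_scores)
    mean_change = sum(a - b for a, b in zip(pre_scores, post_scores)) / len(pre_scores)

    print(f"Mean symptom reduction: {mean_change:.1f} points")
    print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
    # For a single client, the same scores could simply be plotted over time
    # (e.g., weekly ratings) rather than tested statistically.

The same logic extends to the group comparisons described above: once demographic data are attached to each client's scores, change can be compared across gender, diagnosis, or other groupings.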
Selecting Instruments
Instrumentation determines the type of data that can be obtained, and
thus the choices regarding instrumentation must be made with care. The
basic research questions that are being investigated should guide the
instrument selection. Clinicians are strongly encouraged to use
existing instruments with established validity and reliability
whenever possible, rather than attempting to develop their own.
Independently developed instruments require large commitments of time
and resources to ensure reliability and validity, and once data is
collected, no comparisons can be made with norming groups from
existing research (Hansen, 1999). The test manual for a published
instrument should provide norming samples that can help determine
whether the person or sample being tested should be compared with the
test norms. When selecting from existing instruments, practitioners
should consider the cost of the instruments, including time required
to administer, score, and analyze the results. Further, it is
important to consider a measure that is sensitive to changes in
symptomatology (Burlingame et al., 1995; Waxman, 1994; see Lambert,
Ogles, & Masters, 1992 for methods to select and analyze the
appropriateness of outcome instruments).

Using a small battery of instruments, rather than just one, may
provide the best information. It may be useful to collect data from
several different sources (e.g., client report, clinician rating,
family/teacher rating) to gain a clearer picture of the client's
functioning (Sexton, Whiston, Bleuer, & Walz, 1997). Counselors should
take care not to overburden their clients or to administer so many
instruments that they are overwhelmed with data, however. Two or three
short instruments, plus a demographics questionnaire, may be
sufficient (Granello et al., 1999). Clinician ratings (e.g., a Global
Assessment of Functioning score) can be an important component of
treatment evaluation, as clinicians may be in a unique position to
provide insight into patient progress. Using clinician ratings as a
stand-alone measure of progress is unwise, however, as they have been
criticized for their subjectivity (McLeod, 1994).
Using the Results
The results of effectiveness studies can be useful in a variety of
ways. In several large-scale outcome studies conducted by the authors,
data on program effectiveness were useful in marketing both adult and
child partial hospitalization programs to the community, to insurance
companies, and to managed care panels (Granello et al., 1999; Granello
et al., in press). Importantly, a measure of client satisfaction was an
essential part of this research and was highlighted in marketing
materials. In a study of an eating disorder unit, results of the
effectiveness research were used to increase hospital resources
allocated to that unit (Granello & Hill, 2000).

Conducting such research has other, less tangible results. Clinicians
with access to data can use those data to improve their treatment
interventions, and research has found that practitioners' efficacy
improves when they are involved in research (Hauri, Sanborn, Corson, &
Violette, 1988). Reports from agencies that make systematic attempts
to investigate their outcomes indicate that once clinicians become
aware of variations in client outcomes, they are in a better position
to generate ideas for improvement and hypotheses for further testing
("Authors pose," 1997). Thus, data collection and analysis may have
great clinical importance.
Tips for Implementation
Although effectiveness studies clearly have limitations, we agree with
Seligman's (1995) assertion that they are a complementary research
method to efficacy studies. They provide practitioners with research
that is clinically useful and important for negotiating managed care
contracts, while allowing meaningful research to be conducted with
minimal disruption to their work with clients. Practitioners wishing
to conduct outcome research in their own practice are encouraged to
keep a few important suggestions in mind (see Granello et al., 1999
for a more complete discussion on implementation of effectiveness
studies).
1. Effectiveness studies cannot be all things to all people. Complex
designs with multiple administrations and a large number of
instruments may so overwhelm the clinician that they are never
completed or, once completed, are never statistically analyzed in a
meaningful way. For practitioners just beginning to collect data, our
recommendation is to keep the data collection and analysis manageable.
2. Although outcome research need not be cost prohibitive, some
foresight will be necessary to set aside sufficient funds for
instruments and, if necessary, data analysis. We have found
that university-agency collaboration, although not necessary,
can provide a symbiotic relationship (data for the university,
data analysis for the agency).
3. As much as possible, the collection of data should be integrated
into clinical practice (e.g., put pretests in admissions packets
so they are not forgotten).
4. For clinicians not currently collecting data, any step, however
small, is a step in the right direction. Collecting data on
treatment effectiveness can provide both an external benefit
in terms of marketing and an internal benefit in validating and
improving clinical success.
References
Authors pose 7 questions to address in designing outcomes system.
(1997, August). Behavioral Health Outcomes, 2(7), 9-10.
Burlingame, G. M., Lambert, M. J., & Reisinger, C. W. (1995).
Pragmatics of tracking mental health outcomes in a managed care
setting. Journal of Mental Health Administration, 22, 226-236.
Cohen, L. H., Sargent, M. M., & Sechrest, L. B. (1986). Use of
psychotherapy research by professional psychologists. American
Psychologist, 41, 198-206.
Falvey, E. (1989). Passion and professionalism: Critical rapprochement
for mental health research. Journal of Mental Health Counseling,
11, 86-95.
Fishman, D. B. (2000, May 3). Transcending the efficacy versus
effectiveness research debate: Proposal for a new, electronic "journal
of pragmatic case studies." Prevention and Treatment, 3, Article 8.
[On-line]. Available: http://journals.apa.org/prevention/volume3/
pre0030008a.html.
Granello, D. H., Granello, P. F., & Lee, F. (1999). Measuring treatment
outcomes and client satisfaction in a partial hospitalization program.
The Journal of Behavioral Health Services and Research, 26, 50-63.
Granello, D. H., Granello, P. F., & Lee, F. (in press). Measuring
treatment outcome in a child and adolescent partial hospitalization
program. Administration and Policy in Mental Health.
Granello, D. H., & Hill, L. (2000). Measuring treatment outcome in
eating disorders programs: Recommendations for research design
and implementation. Manuscript submitted for publication.
Hansen, J. C. (1999). Test psychometrics. In J. W. Lichtenberg & R.
K. Goodyear (Eds.), Scientist-practitioner perspectives on test
interpretation (pp. 15-30). Boston: Allyn & Bacon.
Hauri, P., Sanborn, C., Corson, J., & Violette, J. (1988). Handbook for
beginning mental health researchers. New York: Haworth.
Havens, L. (1994). Some suggestions for making research more
applicable to clinical practice. In P. F. Tally, H. H. Strupp, & S. F.
Butler (Eds.), Psychotherapy research and practice: Bridging the
gap (pp. 88-98). New York: Basic Books.
Lambert, M. J., Huefner, J. C., & Nace, D. K. (1997). The promise and
problems of psychotherapy research in a managed care setting.
Psychotherapy Research, 7, 321-332.
Lambert, M. J., Ogles, B. M., & Masters, K. S. (1992). Choosing
outcome assessment devices: An organizational and conceptual
scheme. Journal of Counseling and Development, 70, 527-532.
McLeod, J. (1994). Doing counselling research. London: Sage.
Mirin, S. M., & Namerow, M. J. (1991). Why study treatment outcome?
Hospital and Community Psychiatry, 42, 1007-1013.
Norcross, J. C. (2000, August 25). Toward the delineation of empirically
based principles in psychotherapy: Commentary on Beutler (2000).
Prevention and Treatment, 3, Article 28. [On-line]. Available:
http://journals.apa.org/prevention/volume3/pre0030028c.html.
Norcross, J. C., & Prochaska, J. 0. (1983). Clinicians' theoretical
orientations: Selection, utilization, and efficacy. Professional
Psychology, 14, 197-208.
Plante, T. G., Anderson, E. N., & Boccaccini, M. T. (1999). Empirically
supported treatments and related contemporary changes in
psychotherapy practice: What do clinical ABPPs think? Clinical
Psychologist, 52, 23-31.
Plante, T. G., Couchman, C. E., & Diaz, A. R. (1995). Measuring
treatment outcome and client satisfaction among children and families.
Journal of Mental Health Administration, 22, 261-269.
Sandell, R., Blomberg, J., & Lazar, A. (1997). When reality doesn't fit
the blueprint: Doing research on psychoanalysis and long-term
psychotherapy in a public health service program. Psychotherapy
Research, 7, 333-344.
Seligman, M. E. P. (1995). The effectiveness of psychotherapy: The
Consumer Reports study. American Psychologist, 50, 965-974.
Sexton, T. L. (1996). The relevance of counseling outcome research:
Current trends and practical implications. Journal of Counseling and
Development, 24, 590-600.
Sexton, T. L. (2000). Restructuring clinical training: In pursuit of
evidence-based clinical training. Counselor Education and Supervision,
39, 218-227.
Sexton, T. L., Whiston, S. C., Bleuer, J. C., & Walz, G. R. (1997).
Integrating outcome research into counseling practice and training.
Alexandria, VA: American Counseling Association.
Sherman, C. F. (1992). Changing practice models in managed health
care. Psychotherapy in Private Practice, 11, 29-32.
Wampold, B. E. (1997). Methodological problems in identifying
efficacious studies. Psychotherapy Research, 7, 21-43.
Waxman, H. M. (1994). An inexpensive hospital-based program for
outcome evaluation. Hospital and Community Psychiatry, 45, 160-162.
Whiston, S. C., & Coker, J. K. (2000). Reconstructing clinical
training: Implications from research. Counselor Education and
Supervision, 39, 228-253.
About the Authors
Darcy Haag Granello is an associate professor of counselor education
at The Ohio State University. She received her Ph.D. in counselor
education from The Ohio University and her M.S. in mental health
counseling from Stetson University in DeLand, Florida. She has two
main research interests: conducting outcome research in clinical
mental health and assessing and enhancing the cognitive development of
counselor trainees. She has published more than 30 refereed articles
in national journals and five book chapters, including nine journal
articles or book chapters related to conducting outcome research. She
received a research award from ACES in 1998, and is the list owner for
COUNSGRADS, the national student mailing list server.

Paul F. Granello is an assistant professor of counselor education at
The Ohio State University. He received his Ph.D. in counselor
education from The Ohio University in Athens, Ohio, and his M.S. in
mental health counseling from Stetson University in DeLand, Florida.
His two primary areas of research are integrating wellness into mental
health counseling and conducting outcome research. He has published 12
articles in national journals and five book chapters, including eight
articles or chapters related to outcome research. He is a member of
The Ohio State University College of Education Research Committee.