
Best Practices for Conducting an RCA: Are There Any?

October 13, 2014

By Shea Polancich, PhD, RN; Linda Roussel, DSN, RN; and Patricia Patrician, PhD,
RN

Patient safety has been a priority in the healthcare industry for 15 years. Despite the call to action
provided by the Institute of Medicine (IOM) in 1999 (Kohn, Corrigan & Donaldson, 2000), studies 5
and 10 years later revealed that while improvements in safety have been made, there is still
considerable work and attention needed in this domain (Wachter, 2004; Wachter, 2010).

During these years of studying safety in the healthcare environment, methods for examining adverse
events from industries such as engineering and aviation have been borrowed and applied. Incident
analysis has become a standard for risk mitigation and proactive patient safety, and several tools
have been developed to examine or critically analyze a patient safety failure or adverse event.

Purpose of the Review

The Root Cause Analysis (RCA) is one such tool that has been used in a variety of settings to
understand flaws in systems or processes. The RCA has been used by quality and safety professionals
since the 1997 mandate by The Joint Commission (TJC) as a means of understanding the causation of
events and as a basis for making system or process changes that mitigate risk (Percarpio, Watts, & Weeks, 2008). It is
important that the information obtained from the RCA process be reliable and valid. However, it has
been noted that variation in the definition of what constitutes an adverse event, in the components of
RCA reports, and in RCA action plan follow-up persists despite calls for a more
standardized approach to the process (Wu, Lipshutz, & Pronovost, 2008).

After more than a decade of only minor to modest improvement in healthcare quality (Wachter,
2010), the question, “Why is there less than desirable progress?” is a subject for examination. One
could argue that with a greater focus on medical error there is more awareness and, perhaps, better
reporting. However, from an improvement perspective, one could counter-argue that after an event
was recognized and its causes were identified, there was a failure to “fix” the underlying problem. An
alternative hypothesis is that the analysis of the event did not produce the appropriate or most
correct factors contributing to causation. This last proposition could depend on the tool or technique
used to examine the event, and to produce the causative factors.

The purpose of this review is to examine the current literature specific to the technical performance
or conduct of the RCA, and to identify any knowledge gaps or variations that may limit the utility of
the findings from an event review to make improvements. Opportunities for improving the RCA
process will also be explored.
Methods for Incident Analysis or Event Review

There are several methods that may be used in analyzing events. These methods may include, but
are not limited to the following analyses:

• Inferential
• Failure Modes and Effects Analysis (FMEA)
• Fault Tree
• Ishikawa (Fishbone)
• Pareto
• Barrier
• Root Cause

Some of these methods have specific uses. For example, mathematical models such as inferential
techniques have been used to assign probability to factors that may result in a negative outcome or
event. The FMEA is typically a proactive process to identify failure points prior to an event, and to use
those failure points to proactively mitigate risk. Ishikawa (fishbone) diagrams have been used as a tool for
visually depicting factors considered to be causal components of an event.

However, no matter the technique or method used to examine an event or failure, the objective
remains the same. The purpose of incident analysis or RCA is to gain a comprehensive understanding
of an event, typically in sequence of steps, to examine any gaps or failures that occurred during the
steps, to ask “why” the failure or failures occurred, and then to critically explore and recommend
action items to prevent the identified cause(s) from reoccurring. One of the most common methods
or tools used to examine a patient safety event within the healthcare environment is the Root Cause
Analysis (RCA).

History of Root Cause Analysis

The RCA is a method of problem identification that ultimately helps in problem solving. The RCA
process is a systematic method of examining events in order to determine root causes or factors that
precipitated the event. The process is retrospective or reactive in nature, meaning it is performed
after an event has occurred. The process focuses not only on the apparent factors associated with an
incident, but also seeks to explore any hidden or latent factors for causation with the ultimate goal of
developing strategies to mitigate the risk for future occurrence.

The earliest uses of the RCA were in examining industrial accidents and in engineering. The method
was advanced by Sakichi Toyoda, the founder of Toyota Industries. Toyoda used the “5 Whys” to
understand factors associated with failures (Toyota, 2005). The 5 Whys technique is simply to ask “why” at
least five times until the root cause of an incident or failure is discovered.
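
A hypothetical clinical illustration (not drawn from the Toyota source) shows how the chain of questions works: a patient receives the wrong dose of an anticoagulant. Why? The dose was calculated from an outdated weight. Why? The weight in the record had not been updated on admission. Why? The admission scale was broken and the weight was deferred. Why? There was no process for flagging deferred weights. Why? The admission workflow assumed a working scale would always be available. The fifth “why” surfaces a system-level cause, a missing contingency in the admission process, rather than an individual’s error.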

From the work of Toyoda, the RCA process grew, and other industries adopted and evolved this
problem solving method. Occupational health and safety used the process to examine events
through accident analysis. Manufacturing developed the process of quality management and used
the RCA for production failures. In addition, manufacturing and business used the RCA for process-
based failures. The RCA process has continued to evolve into various forms as a result.

In the 1980s and 1990s, as industries began to adopt Six Sigma, the RCA process was incorporated
into the examination of defects or failures in production (Pande, Neuman, & Cavanagh, 2000). Since the
goal of attaining Six Sigma level performance is to decrease defects to 3.4 per million opportunities,
understanding how defects translate into failures is important to the healthcare industry. One key
milestone in the healthcare industry occurred in 1999 when the Institute of Medicine began to
publish information on the excessive number of deaths that resulted from inappropriate medical care
or clinical errors (Kohn, Corrigan, & Donaldson, 2000). Subsequently, the use of the RCA became a preferred method of
examining failures for the healthcare system.
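
As a hypothetical arithmetic illustration of that target (not drawn from the sources cited above): a hospital pharmacy that dispenses one million doses per year and operates at Six Sigma performance would expect only about three or four dispensing defects in that year. At that level each defect is rare enough that understanding its specific causes, rather than its rate alone, becomes the practical focus.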

Review of Literature

The majority of evidence in the literature regarding the performance of the RCA process comes from
descriptive studies and systematic reviews on certain aspects or components of the RCA. A search of the
most recent five years of available literature found very limited evidence specific to the technical
performance of the RCA. Therefore, this literature review focuses on specific common
themes that were noted from the small subset of articles on the technical performance or conduct of
an RCA.

Data Sources and Literature Searches

The primary strategy used to search the literature was to identify published studies associated with
the use of the RCA process. The MeSH heading of “root cause analysis” was used to examine the
literature.

The first database search was completed using PubMed for information from Medline. The search in
PubMed began with the use of the terms “Root Cause Analysis” with an additional set of qualifying
terms connected through the Boolean AND “Patient Safety.” The reason for the additional qualifier
was to limit the search to use for patient safety events and not for the global use of root cause
analysis.

The second search was completed using the Cumulative Index of Nursing and Allied Health
Literature (CINAHL) database to examine the nursing literature. CINAHL contains article citations with
abstracts, when available, from nursing and allied health disciplines. Again, the search strategy was
the same as for Medline through PubMed.

Using the MeSH heading of “Root Cause Analysis” AND “Patient Safety” the initial PubMed and
CINAHL searches combined identified over 300 articles. However, the search was further refined
using the search function limitation to include only the most recent five years and human subjects.
The result of this search yielded 76 articles from PubMed and 54 articles from CINAHL meeting these
criteria.
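
For illustration only, since the exact query syntax is not reported above, a PubMed search of the general form described might be entered as:

    "root cause analysis"[MeSH Terms] AND "patient safety"

with the database’s built-in limits then applied to restrict results to human subjects and to the most recent five years of publication dates.
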
The second step was to identify, from each of these searches, the articles relevant to the technical
performance of the RCA. Once these articles were identified, the third step was to read and
comprehensively examine the relevant papers, looking for commonalities and consistent themes
among them.

Data Synthesis and Analysis

The authors manually reviewed the 130 articles identified by the search. The majority of the articles
were related to the use of the RCA tool for incident analysis in a specific patient population or for a
specific type of clinical event, such as for patient falls or transfusion errors. The remaining articles
reviewed were specific to global discussion of incident analysis or the resulting outcomes. Out of this
latter group, eight articles were identified that specifically addressed some component of the
technical performance of the RCA. However, after manual review, two articles were excluded for falling
outside the five-year publication window, and one was excluded as unrelated. See Table 1 for the
results of the review process for the remaining six articles.

Setting and Country of Origin

All of the studies were conducted in the healthcare environment or related to the healthcare
environment. Of the six studies, three originated in the United States, two in the United Kingdom,
and one in the Netherlands.

Results

There were very few articles that were specific to the topic of the technical performance of the RCA
process, and there was very little consistency among them. In fact, only two of the articles specifically
discussed the steps for conducting the RCA process, and they did so only from a broad perspective,
without providing specific details for each step. The themes from these articles
focused on a specific aspect of the process, such as the implementation of an action plan, the
outcomes or resulting improvement, and/or the training competencies of the facilitator.

The only consistent theme in this literature was the lack of improvement resulting from the RCA
process, a finding reported in literature from both the United States and the United Kingdom. This
theme was further expanded to concepts such as organizational factors impacting improvement, the
lack of evaluation from the improvement intervention, and a perceived limitation of the current RCA
process to produce results. Essentially, the RCA process seemed to be accepted as a “given” method for
investigating untoward events; however, the resulting action or improvement interventions may
not always achieve the desired results.

The work by Card et al. (2012), Hettinger et al. (2013), and Pham et al. (2010) examined the most
common risk control strategies implemented as a result of the RCA process. In these studies, risk
mitigation strategies relied significantly on administrative controls such as changes or development
of policies, procedures, and training as opposed to design controls or engineering controls, such as
human factors engineering or process redesign. According to these authors, the ability to achieve
sustained improvement is impacted by the type and strength of the “control” prescribed for the
solution or intervention to the causes identified. In addition, the organizational attention or
willingness to accept and support the “control” selected will impact the success of the solution.
Pham et al. (2010) have even proposed a new method to address the lack of improvement achieved
using the traditional RCA process. They recommended prioritization of RCA findings using a Likert
rating scale, limiting the need to address all problems at once.

Also of significance, this review has highlighted the lack of evidence-based articles published in the
last five years on the “best practice” methodology for conducting the RCA or the technical
performance. This finding is possibly related to the view of the conduct of the RCA as a toolbox
approach as opposed to a prescribed method (Nicolini, Waring, & Mengis, 2011). From the review, it
appears that variation in the tools used to conduct an RCA is the current state globally. In the United
States, The Joint Commission provides a toolkit for the performance of the RCA process and action
plan development
(http://www.jointcommission.org/Framework_for_Conducting_a_Root_Cause_Analysis_and_Action_Plan/).
However, this approach is not consistently applied across all hospitals, nor is it prescribed by TJC
for its accredited facilities. In countries with national healthcare, such as the United Kingdom (UK),
the training and education for RCA facilitation also exist at the national level. The UK National
Health Service has trained more than 8,000 individuals through the National Patient Safety Agency
(Nicolini, Waring, & Mengis, 2011). This approach is also not a prescribed method, but promotes a
variety of tools, including barrier analysis, brainstorming, brain writing, change analysis, 5 Whys,
narrative chronology, nominal group technique, tabular timeline, time person grid, and simple
timeline.

In all of the articles examined, the authors identified some degree of variation in the traditional use
of the RCA process. Whether the variation existed in the education and competency assessment of
the personnel who performed the RCA, the actual technical performance of the process, the tools
used, the findings, the outcomes, the evaluation, or the follow up, there was some aspect of the RCA
process that was inconsistent across the articles reviewed (Bowie, Skinner, & de Wet, 2013; Smits et al.,
2009). These variations exist at a local level, a system level, or a national level and could explain the
lack of expected outcomes of the RCA process.

Table 1. Articles that Address Technical Performance of Root Cause Analysis

Smits, Janssen, De Vet, Zwaan, Timmermans, Groenewege, & Wagner (2009)
Location of Study: Netherlands; 30 hospital units
Objective of Study: Determination of inter-rater reliability in constructing causal trees and classifying root causes
Methods: Cross-sectional, two-rater comparison; descriptive study
Discussion/Findings:
• PRISMA causal tree analysis is a reliable tool for identifying root causes.
• Training in PRISMA increases reliability.

Nicolini, Waring, & Mengis (2011)
Location of Study: United Kingdom; 2 large acute NHS hospitals
Objective of Study: Review the process of incident investigation, reporting, and translation of results into practice
Methods: Ethnographic study
Discussion/Findings:
• Opportunity around contradictions between potentially incompatible organizational agendas and social norms that drive the process.
• Use of RCA as a governance tool, not necessarily a learning tool.
• Diverse aims of RCA may result in failure to achieve sustainable change and improvement.
• Identified challenges associated with collecting information, convening the RCA meeting, conducting the RCA meeting, drafting reports, and making and sustaining changes post review.

Bowie, Skinner, & de Wet (2013)
Location of Study: United Kingdom; single territorial health board, Scotland
Objective of Study: Follow up and evaluation of post-training experiences of RCA-trained staff
Methods: Cross-sectional online survey of health professionals; descriptive study
Discussion/Findings:
• Top three barriers to RCA success: lack of time, unwilling colleagues, and inter-professional differences.
• Controlled trials on the efficacy of RCA are lacking.
• Ability to achieve desired results from RCA is lacking.
• Lack of closed-loop accountability for actions.

Pham, Kim, Natterman, Cover, Goeschel, Wu, & Pronovost (2010)
Location of Study: United States
Objective of Study: Propose adapting a risk prioritization and reduction process modeled after the Commercial Aviation Safety Team (CAST)
Discussion/Findings:
• Limitations to the effectiveness of the RCA for reducing risk.
• Difficulty in forming causal statements and developing action plans.
• Political or organizational factors may impact commitment to safety solutions.
• RCA teams may not have the expertise to develop effective solutions.
• Most events are rare, thus difficult or costly to measure as rates.
• CAST prioritization may be used to rate the importance of each finding.
• Separate components of the process into different teams of experts.
• Formalize and quantify the beliefs of experts for prioritizing the interventions believed most likely to work.
• Define and measure specific process measures to ensure implementation.
• Evaluate the impact of interventions.

Card, Ward, & Clarkson (2012)
Location of Study: United States
Objective of Study: To determine what tools were being used to generate risk controls after completion of an RCA
Methods: Systematic review
Discussion/Findings:
• Majority of controls were administrative.
• Varied time and monetary investment in the RCA process.
• Lack of ability to determine the success of improvement.
• Difficulty generating and implementing risk control strategies.
• High-quality risk control plans do not reliably result from current practices.

Hettinger, Fairbanks, Hegde, Rackoff, Wreathall, Lewis, Bisantz, & Wears (2013)
Location of Study: United States; multi-healthcare institution
Objective of Study: To introduce an evidence-based model for assisting hospital-based RCA teams in developing sustainable solutions
Methods: Qualitative analysis; database review
Discussion/Findings:
• True root causes may not be addressed.
• Efficacy of the current RCA process has been questioned.
• No peer-reviewed studies examine the impact of the RCA.
• Follow up in most institutions is typically not performed.
• RCAs often focus on the individual and not the system.
• 14 standardized solution categories defined.

Conclusions

The results of the literature review and synthesis indicate that the RCA
process is not a prescribed method and that the approach will vary. In addition, the improvement
that results from the RCA process may be limited by a variety of factors. These factors range from
aspects of organizational culture to the strength of the improvement intervention.

Whether the limited improvement is related to the RCA process itself or to specific aspects associated with the
process is not clear. However, it is clear that there is a lack of current information on the process and
methods for conducting an RCA, and that the processes currently in use
are not producing the desired outcomes.

Given the variation documented in the literature regarding the analysis of events or clinical
incidents, the lack of evidence to guide standardization of the technical aspects of performing
event analysis, and the limited evidence supporting improved RCA processes, there is sufficient impetus
for examining the RCA process in more detail and for improving the results obtained from the
process in order to achieve more systemic and sustainable improvements.

Shea Polancich is currently an assistant professor at the University of Alabama at Birmingham School
of Nursing with a joint appointment in quality and patient safety at the University of Alabama at
Birmingham Medical Center. Formerly, her roles included the director for quality and patient safety at
Vanderbilt University Medical Center, director of data analysis and measurement at Texas Health
Resources, NIH/NINR research intern, and health policy fellow at George Mason University. She served
on the NQF Patient Safety Reporting Framework Steering Committee and may be contacted at
polancs@uab.edu.

Linda Roussel is a professor at University of Alabama at Birmingham, School of Nursing. She is the
DNP program director and works in the clinical nurse leader and nursing health systems administration
graduate programs. Roussel has worked with the Robert Wood Johnson Initiative Transforming Care at
the Bedside and Frontline Engagement through the Improvement Science Research Network. Quality
and safety are cornerstones of her practice and of educating graduate students.

Pat Patrician is the Donna Brown Banton Endowed Professor at the University of Alabama at
Birmingham (UAB). Previously, she served 26 years in the US Army Nurse Corps, where she held
clinical, administrative, educational, and research positions. At UAB, she teaches in the nursing and
health systems administration program and supervises PhD and Doctor of Nursing Practice
students. In addition, she is a senior nurse faculty/scholar in the Veterans Administration Quality
Scholars (VAQS) fellowship program, which focuses on the science of quality improvement, and a national
consultant in the Quality and Safety Education for Nurses program. She is also a scientist at the Center for
Outcomes and Effectiveness Research and Education and a scholar at the Lister Hill Center for Health
Policy at UAB.

REFERENCES

Bowie, P., Skinner, J., & de Wet, C. (2013). Training health care professionals in root cause analysis: A
cross-sectional study of post-training experiences, benefits, and attitudes. BMC Health Services
Research, 13, 50.

Card, A., Ward, J., & Clarkson, J. (2012). Successful risk assessment may not always lead to successful
risk control: A systematic review of risk control after root cause analysis. Journal of Healthcare Risk
Management, 31(3), 6–12.
Hettinger, Z., Fairbanks, R., Hegde, S., Rackoff, A., Wreathall, J., Lewis, V., Bisantz, A., & Wears, R.
(2013). An evidence-based toolkit for the development of effective and sustainable root cause
analysis system safety solutions. Journal of Healthcare Risk Management, 33(2), 11–20.

Kohn, L. T., Corrigan, J., & Donaldson, M. S. (Eds.). (2000). To err is human: Building a safer health
system. Washington, DC: National Academy Press.

Nicolini, D., Waring, J., & Mengis, J. (2011). Policy and practice in the use of root cause analysis to
investigate clinical adverse events: Mind the gap. Social Science & Medicine, 73, 217–225.

Pande, P. S., Neuman, R. P., & Cavanagh, R. R. (2000). The Six Sigma way: How GE, Motorola, and
other top companies are honing their performance. New York: McGraw-Hill.

Percarpio, K., Watts, B., & Weeks, W. (2008). The effectiveness of root cause analysis: What does the
literature tell us? Joint Commission Journal on Quality and Patient Safety, 34(7), 391–398.

Pham, J., Kim, G., Natterman, J., Cover, R., Goeschel, C., Wu, A., & Pronovost, P. (2010). ReCasting the
RCA: An improved model for performing root cause analyses. American Journal of Medical Quality,
25(3), 186–191.

Smits, M., Janssen, J., De Vet, R., Zwaan, L., Timmermans, D., Groenewege, P., & Wagner, C. (2009).
Analysis of unintended events in hospitals: Inter-rater reliability of constructing causal trees and
classifying root causes. International Journal for Quality in Health Care, 21(4), 292–300.

Toyota (2005). Toyota traditions: Ask ‘why’ 5 times about every matter. Retrieved from
http://www.toyota-global.com/company/toyota_traditions/quality/mar_apr_2006.html

Wachter, R. M. (2010). Patient safety at ten: Unmistakable progress, troubling gaps. Health Affairs,
29(1), 165–173.

Wachter, R. M. (2004). The end of the beginning: Patient safety five years after ‘To Err is Human.’
Health Affairs, 23(11), 534–545.

Wu, A., Lipshutz, A., & Pronovost, P. (2008). Effectiveness and efficiency of root cause analysis in
medicine. JAMA, 299(6), 685–687.
