Introduction To Safety Culture
Elina Pietikäinen
Research
2010:07
Indicators of safety culture – selection
and utilization of leading safety
performance indicators
This report concerns a study which has been conducted for the Swedish
Radiation Safety Authority, SSM. The conclusions and viewpoints presented
in the report are those of the author/authors and do not necessarily coin-
cide with those of the SSM.
SSM Perspective
Background
SSM has identified a need for an overview, analysis and evaluation of
safety performance indicators and particularly safety culture indicators in
the domain of nuclear safety. Current safety performance indicators are
usually lagging, i.e. measuring something that has happened. In order
to monitor the effects of proactive safety work as well as to anticipate
vulnerabilities, organizations should define leading indicators. These
should be able to grasp organizational practices and processes that
precede changes in the safety performance of the organization.
SSM 2010:07
The project was built on VTT’s work on the evaluation of safety-critical
organisations and safety culture, as well as IAEA’s ongoing work
concerning leading indicators of nuclear safety.
Results
The project has resulted in a broad overview of the definition of safety
performance indicators, the existing types of indicators and the utilization
of safety performance indicators in the nuclear industry. The project has
given deeper knowledge of the different kinds of safety performance
indicators (leading and lagging), including safety culture indicators, and how
they are related to safety management in the nuclear domain. A framework
for the selection and use of safety performance indicators has been
developed, supported with examples.
Project information
Project managers at SSM: Lars Axelsson and Per-Olof Sandén
Project reference: SSM 2009/2235
Project number: 1604
Summary
Safety indicators play a role in providing information on organizational
performance, motivating people to work on safety and increasing
organizational potential for safety. The aim of this report is to provide an
overview of leading safety indicators in the domain of nuclear safety.
The report explains the distinction between lead and lag indicators and
proposes a framework of three types of safety performance indicators
– feedback, monitor and drive indicators. Finally the report provides
guidance for nuclear energy organizations for selecting and interpreting
safety indicators. It proposes the use of safety culture as a leading safety
performance indicator and offers an example list of potential indicators
in all three categories. The report concludes that monitor and drive
indicators are so-called lead indicators. Drive indicators are chosen
priority areas of organizational safety activity.
They are based on the underlying safety model and potential safety acti-
vities and safety policy derived from it. Drive indicators influence control
measures that manage the sociotechnical system; change, maintain, rein-
force, or reduce something. Monitor indicators provide a view on the dy-
namics of the system in question; the activities taking place, abilities, skills
and motivation of the personnel, routines and practices – the organiza-
tional potential for safety. They also monitor the efficacy of the control
measures that are used to manage the sociotechnical system. Typically the
safety performance indicators that are used are lagging (feedback) indi-
cators that measure the outcomes of the sociotechnical system. Besides
feedback indicators, organizations should also acknowledge the important
role of monitor and drive indicators in managing safety.
Content
1 Introduction............................................................................... 3
2 Safety, performance and safety performance indicators.............. 5
2.1 What is a safety performance indicator?................................... 5
2.2 Functions of organizational safety performance indicators....... 6
2.3 Types of safety performance indicators.................................... 9
3 Utilization of safety performance indicators in the nuclear industry ........... 11
3.1 Indicating nuclear safety........................................................ 11
3.2 Indicator systems ................................................................... 12
3.3 State-of-the-art on safety performance indicators ................... 14
4 Leading and lagging indicators of safety.................................. 19
4.1 Distinguishing lead from lag .................................................. 19
4.2 Leading indicators as precursors to harm or signs of changing vulnerabilities .............. 20
5 Safety culture as a leading safety performance indicator .......... 23
5.1 Criteria for good safety culture .............................................. 23
5.2 Monitoring safety culture in the sociotechnical system .......... 24
6 Framework for the selection and use of safety performance indicators ........................ 31
6.1 The role of indicators in safety management .......................... 31
6.2 The selection of key safety performance indicators................ 32
6.3 Relation of monitor indicators to performance ....................... 37
6.4 Making inferences about the level of safety ........................... 39
7 Conclusions ............................................................................. 43
Acknowledgements ......................................................................... 44
References....................................................................................... 45
Appendix A: Examples of drive indicators ...................................... 48
Appendix B: Examples of monitor indicators .................................. 56
Appendix C: Examples of feedback indicators................................. 63
1 Introduction
The contemporary view on safety emphasises that safety-critical
organizations should be able to proactively evaluate and manage the
safety of their activities. This proactivity should be endorsed in
organizational safety management. Safety, however, is a phenomenon
that is hard to describe, measure, confirm, and manage. Technical
reliability is affected by the performance of the employees.
Furthermore, the effects of management actions, working
conditions and the culture of the organization cannot be ignored when
evaluating the overall safety of the activities.
happened (the past), what happens (the present) and what may happen
(the future), as well as knowing what to do and having the required
resources to do it.” The system should be controlled so that it
remains within the boundaries of its safe performance. If safety is
understood as something more than the absence of risk and the
negative, the indicators should also be able to focus on the positive
side of safety – on the presence of something (Hollnagel, 2008, p. 75;
Rollenhagen, 2010). This requires a model of the system as well as an
outline of how the system produces safety (Hollnagel, 2008; Reiman
& Oedewald, 2009).
2 Safety, performance and
safety performance
indicators
2.1 What is a safety performance indicator?
The literature on safety performance indicators shows that the concept
of a safety indicator is far from clear (see Safety Science, 47 (2009) for
the latest scientific discussion on the issue), and there are different
purposes for using safety indicators. For example, indicators can be
seen as national or international tools for defining political goals and
for following whether the goals are met (cf. Valtiovarainministeriö,
2005). Indicators can also be seen as tools for the authorities for
defining their regulative activities and the goals they expect safety-critical
organizations to fulfil, and for following whether these goals
are met. Indicators can also be seen as a way to communicate safety
issues to the public (cf. Karjalainen, 2009, p. 88). Finally, safety
performance indicators can be used by the organization to gain
information on its current safety level and on the efficacy of its safety
improvement efforts.
“safety” that we are talking about. What is it that we are trying to find
indications of?
We approach the safety of nuclear power plants from the point of view
of nuclear safety, as distinct from, for example, occupational safety. We
define safety as an emergent property of the entire sociotechnical
system. Thus, safety is a dynamic property or a state that includes
people and technology. It is important to realise that safety is not a
system; the organization is (Reiman & Oedewald 2008). Safety
management requires the management of the organization. Safety
performance indicators should provide information on this
organizational ability to fulfil the core task. This means that they
should provide information on the safety culture of the organization.
motivating the management and the personnel to take the necessary
action (cf. Hale, 2009, p. 479).
Motivating the management and the personnel to take the necessary
action
Safety indicators are cues for the personnel about the priorities and
interests of the management, and they can shape the personnel’s ideas
of what safety or safe behaviour is or should be like. Thus, the
indicators steer behaviour in the organization. Sometimes the
behaviour-steering power of the indicators is intensified by embedding
the indicators into the incentive system of the organization.
Unfortunately, this steering effect often remains unintentional and
might lead to problems when the explicit goal of the safety indicators
is to monitor the safety level, not to change or develop some specific
issue being measured.
management’s commitment to safety and personnel’s interest
in safety)
- Unintended effect: the personnel become more interested in
managing the indicator itself rather than the phenomenon of
which it is supposed to provide an indication. For example, the
management optimizes the number of walk-arounds and
neglects other (important) issues that are not being measured.
We will return in Section 6 to the difference between metric and
indicator. Here it is sufficient to say that a metric denotes the
operationalization of the indicator (how it is measured), whereas an
indicator denotes something that one wishes to measure with the use
of one or more metrics.
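The metric/indicator distinction lends itself to a small data model. The sketch below is illustrative only; the class names, the example metrics and the naive averaging are assumptions, not anything proposed in the report:

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    """One concrete operationalization, e.g. 'walk-arounds per month'."""
    name: str
    value: float

@dataclass
class Indicator:
    """Something the organization wishes to measure,
    observed through one or more metrics."""
    name: str
    metrics: list = field(default_factory=list)

    def reading(self) -> float:
        # A naive aggregation: the mean of the metric values.
        # A real indicator system would weight and normalize metrics.
        return sum(m.value for m in self.metrics) / len(self.metrics)

# Hypothetical example: one indicator backed by two metrics.
commitment = Indicator("management commitment to safety")
commitment.metrics += [Metric("walk-arounds per month", 4.0),
                       Metric("safety items on meeting agendas", 6.0)]
print(commitment.reading())  # mean of the two metric values
```

The point of the structure is simply that the indicator (the concept) and its metrics (the measurements) remain distinct objects, so a metric can be replaced without redefining what one is trying to indicate.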
Outcome indicators are usually similar to lagging indicators, and they
show safety performance in terms of measures of past performance,
e.g. injury rates, radiation doses, and incidents. Input indicators are
usually called leading indicators, and they monitor the processes that
are affecting and maintaining safety performance. These include
leadership, training activities and work processes. OECD’s guidance
document on safety performance indicators (2008, p. 5) argues that
“outcome indicators tell you whether you have achieved a desired
result (or when a desired safety result has failed). But, unlike activities
indicators, they do not tell you why the result was achieved or why it
was not.”
3 Utilization of safety
performance indicators in
the nuclear industry
Different types of safety indicators have been utilized in the nuclear
industry for a long time. For example, unit capability factors and
INES-events have been used to indicate the (safety) performance of
the plant. High capability factors have been used as a positive
indicator of safety performance, whereas INES-events are a
negative indicator. WANO also offers a set of performance indicators
including capability factors and unplanned reactor scrams (see below)
with trend data for several years and different power plants.
Figure 1. Indicators that the interviewees explicitly raised as signals of the safety
level of the plant (from Reiman et al. in press). The indicators have been arranged
according to general themes that emerged from the definitions – management and
owners, technical design of the plant, organizational activities, personnel, systems
and structures, and finally, the outcomes.
Many people emphasized technical data and performance measures
that can be compared to other power plants – outcomes of the
organization. Another emphasis was on the organizational activities
that produce safety. Personnel-related issues were also considered
important indicators of the level of nuclear safety. What the
respondents seemed to lack was an overview of the relation of
different indications of the safety level. A few divided nuclear safety
explicitly into a) the technical condition of the plant and b) its
operation and management. (Reiman et al. in press)
detailed basic data may change risk estimates in either direction”
(Kainulainen 2009, p. 121).
The above example also illustrates the point made in Section 2.1
that the utilization of indicators is based on an understanding
of the sociotechnical system. When this understanding deepens, it can
actually show up as a decrease in the safety level as measured by the safety
performance indicators. What actually happens then is of course not a
real decrease in safety but a calibration of the model to better
correspond with reality. In other words, the safety level has in reality
already been closer to the new, lower indicated level than to the old
indicated level, but the previous models of safety have been unable to
show it.
Chakraborty et al. (2003) point out that “PSA [the old acronym for
PRA] provides a formal and most logical means for quantifying the
safety significance of operational events, corrective actions, design
modifications, and changes in plant configuration (plant condition). In
other words, PSA appears to be a consistent framework for defining
the most meaningful set of SPIs, and for linking these with the most
effective top-level safety indicators.” PRA is focused on the
probability of the nuclear power plant being safe in the future, and thus
it is a leading indicator of nuclear safety.
- Daily average gross power for the reporting year
- Operation and operational events
- Annual maintenance outage – activities and performance
- Events during the year subject to special report
- INES-classified events (ten year trend)
- Non-compliances during the year with Technical
Specifications
- Reliability of the plant’s safety functions (failures during the
year in the plant’s safety functions and the systems, equipment
and structures implementing them)
- Failures or signs of wear in the integrity of equipment and
structures critical to plant safety
- Fuel leaks
- Events in the treatment, storage or final disposal of low- and
intermediate-level waste
- Development of the plant and its safety – activities and
performance
- Management and safety culture – activities and performance
- Functionality of the management system – activities and
performance
- Personnel resources and competence – activities and
performance
- Operational experience feedback – activities and performance
- Occupational radiation safety – activities and performance
- Collective occupational radiation doses since the start of the
operation
- Annual radiation doses to the critical groups since the start of
operation
- Radioactive nuclides originating from the plant
- Emergency preparedness
• There is no unified approach concerning terminology and definition of
“performance indicators”, “safety indicators”, and “safety performance
indicators”.
• Most widely applied is the WANO set of performance indicators (10
quantitative indicators reported annually by nearly all NPPs worldwide, in
order to monitor the safety and economic performance of NPPs).
• In many countries the WANO set, complemented by other indicators, is
used by utilities and regulators to monitor the safety performance of NPPs.
• There is practically no calibration of safety performance indicators in order
to give a quantitative measure of plant safety (or risk).
• Evaluation of safety performance indicators applies relative thresholds
which are based on past experience.
• Safety performance indicators are generally applied in combination with
other methods to monitor plant safety (e.g. inspections, PSA, precursor
studies).
• Approaches have been developed to monitor the status and trends of safety
management and safety culture by means of specific indicators. Calibration
in terms of influence on plant safety (or risk) is not available.
• Similarly it is intended to find indicators to detect early signs of
deterioration of safety. Proposals have been developed, but there is no
accepted approach. Furthermore, the relationship of “safety culture and
organizational aspects” to fundamental PSA input parameters and models
needs to be better established using actuarial plant data.
• Plant specific PSAs, taking into account actual operational experience,
produce safety performance indicators (CDF, release category frequencies)
based on an integrated view. However, the current PSA methodology does
not take into account (potential) influences from safety management or
safety culture, which have not yet been manifested in the operational
experience.
encourages the use of the safety performance indicators that WANO has
developed (see below), which form the basis for the safety performance
indicators currently used in nuclear power plants.
IAEA (2000, 23) states that the safety indicators chosen should include a
combination of indicators that reflect actual performance (sometimes
called lagging indicators) and indicators that provide an early
warning of declining performance (sometimes called leading
indicators). The Electric Power Research Institute (EPRI) in the US
also emphasizes that there is more than one type of indicator.
EPRI strongly encourages the use of leading indicators among its
member utilities and provides tools and guidelines for this (EPRI,
2000, 2001a). These tools and guidelines are constructed so that they
are also in line with the principles of INPO (Institute of Nuclear
Power Operations).
Next we will look more closely at the differences between leading and
lagging safety performance indicators.
4 Leading and lagging
indicators of safety
4.1 Distinguishing lead from lag
The distinction between leading and lagging safety performance
indicators is not clear cut. Some safety scientists and practitioners
have described them more as a continuum than two separate entities
and have even suggested that the distinction between leading and
lagging is not that important at all (Hale 2009).
there an adequate recruitment procedure?” and “Is management
actively committed to, and involved in, safety activities?”.
- they provide early warning signs of potential weak areas or
vulnerabilities in the organizational risk control system or
technology,
Typically lead and lag indicators are considered on a time scale where
lead indicators precede harm and lag indicators follow harm. On that
view, lagging indicators can be used to provide feedback on the
functioning of the system, to be fed back as further inputs into the
system. Lagging indicators would thus indicate the current safety level
of the system. We disagree with this definition.
Both Kjellén (2009, p. 486) and EPRI (2000) seem to view leading
indicators not as measures of precursors to harm but as measures of
signs of changing vulnerabilities. This means that leading indicators
should measure things that might one day become precursors to harm
or cause a precursor to harm. We agree with this perspective. All in all,
we define leading indicators as follows (cf. Dyreborg 2009):
Lead safety indicators indicate either the current state
and/or potential development of key organizational
functions or processes as well as the technical
infrastructure of the system. The current state includes a
view on the changing vulnerabilities of the organization
as well as its internal model of how it is creating safety.
The lead monitor indicators indicate the potential of the
organization to achieve safety. They do not directly
predict the safety related outcomes of the sociotechnical
system since these are also affected by numerous other
factors such as external circumstances, situational
variables and chance.
In the next chapter we present an organizational theoretical view on
safety indicators and system safety that parallels leading indicators
with safety culture.
5 Safety culture as a
leading safety
performance indicator
5.1 Criteria for good safety culture
According to our approach (see Reiman et al., 2008; Reiman &
Oedewald, 2009), the essence of safety culture is the ability and
willingness of the organization to understand safety, hazards and
means of preventing them, as well as ability and willingness to act
safely, prevent hazards from actualising and promote safety. Safety
culture refers to a dynamic and adaptive state. It can be viewed as a
multilevel phenomenon of organizational dimensions, social processes
and psychological states of the personnel. Reiman and Oedewald
(2009, 43) have stated that a nuclear industry organization has a high-
level safety culture when the following criteria are met:
- Safety is genuinely valued and the members of the
organization are motivated to put effort into achieving high
levels of safety
- It is understood that safety is a complex phenomenon. Safety is
understood as a property of an entire system and not just
absence of incidents
- People feel personally responsible for the safety of the entire
system, they feel that they can have an effect on safety
- The organization aims at understanding the hazards and
anticipating the risks in their activities
- The organization is alert to the possibility of an unanticipated
event
- There are good prerequisites for carrying out the daily work
- The interaction between people promotes a formation of
shared understanding of safety as well as situational awareness
of ongoing activities
identify those aspects of the organizational ability that have
vulnerabilities or can create vulnerabilities elsewhere in the
organization.
We argue that lagging indicators do not tell about the safety level of
the system or the dynamics of the system’s functioning. Instead, lag
indicators only tell about the outputs of the system. These outputs are
produced by the internal dynamics of the various organizational
dimensions, influenced by external variability and chance. Likewise,
leading indicators are not only indicators of something that precedes
harm, as they have been conceptualized in frameworks based on
epidemiological accident models (cf. Hale, 2009). Leading indicators
either influence safety management priorities and the chosen actions
for safety improvement, or they tell about the dynamics of the
sociotechnical system (not about the inputs to the system or merely
about the functioning of safety barriers). These leading indicators are
labelled drive indicators and monitor indicators, respectively, in this
report.
The distinction between lead and lag indicators can be illustrated with
the help of Hollnagel’s (2008, p. 70) feedforward model of safety
management. Hollnagel (2008) argues that more emphasis needs to be
put on controlling the system by anticipated or expected disturbances
and deviations (feedforward) instead of actual outcomes (feedback).
In figure 4 we have created a model loosely based on Hollnagel’s
(2008) ideas to illustrate the three types of indicators: feedforward
(leading drive) indicators, leading monitor indicators, and lagging
(feedback) indicators.
[Figure 4: only label fragments survive extraction – barriers and corrective actions; LAG: feedback indicators; potential control mechanisms; risk control; sociotechnical system.]
Safety model and safety boundaries: This means the underlying,
often implicit model of what safety is and how it is achieved in an
organizational context. Safety boundaries refer to the perceived
hazards of the organization and the space that these hazards leave for
carrying out activities safely. Even though each employee has their
own more or less uniform model of safety, the element in figure 4
refers to the model of people involved in the selection and utilization
of safety performance indicators. The safety model defines the risks
that are perceived and it is thus “the Achilles heel of feedforward
control” (Hollnagel, 2008, p. 68). Disturbances that are not
acknowledged or foreseen in the model will not be transformed into
drive indicators or corresponding safety interventions either. For
more information on safety models, see e.g. Hollnagel (2004, 2008),
Reiman and Oedewald (2009) and EPRI (2000, appendix C).
members of the organization do. Instead of constraining behaviour,
safety development aims for building up the know-how and other
prerequisites for the personnel to do their work well and safely in
changing situations. Both risk control and safety development are
needed to manage safety.
results or the situational actualization of the safety potential of the
organization. Thus, safety is not an outcome. Safety is a dynamic
non-event, and non-events cannot be characterized directly. Thus, we
have to take the term ”dynamic” seriously and look for the way the
non-event is created, acknowledging that we can never reach the
non-event itself.
[Figure 5. Sociotechnical system model of lead and lag indicators with the information transfer lines added. Only label fragments survive extraction: barriers and corrective actions; LAG: feedback indicators; potential control mechanisms; risk control; feedback on the effectiveness of risk control; sociotechnical system.]
current (and future) safety level should always go through the monitor
indicators (see also Figure 7).
Figure 6. Examples of lag and lead indicators (for more examples see
Appendices A, B and C).
6 Framework for the
selection and use of
safety performance
indicators
6.1 The role of indicators in safety management
The selection strategies for the three indicator types differ. The
monitor indicators should be chosen based on an analysis of the
functioning of the sociotechnical system in question (an operational
nuclear power plant, for example) and the identified key success
factors. Feedback indicators should be chosen based on the
identification of critical signals of increased risk as well as other
unwanted negative events. Even if occupational accidents do not
necessarily bear a relation to nuclear safety, they are unwanted
negative events and as such worth measuring. Only for the
drive indicators does the typical advice given in safety indicator
guidance documents apply: They should be selected to reflect the key
issues of concern and priority areas of the organization. In that way,
several potential drive indicators can be prioritized according to the
current needs of the organization. Each year drive indicators can be
adjusted depending on the issues to address as well as findings from
the monitor indicators.
The indicator types can also be connected: The organization can select
some key area of concern as a drive indicator, e.g. competence
management, and then identify monitor and feedback indicators that
would allow a follow-up on the progress of competence management
activities (for examples of lead drive indicators of competence
management, see Appendix A of this report). Monitor indicators could
be the amount and quality of training that the organization gives as
well as the general knowledge level of the personnel (operationalized
as e.g. number and types of degrees among the personnel, test scores,
etc). Feedback indicators could be, e.g., the types of root causes found
from incidents (whether competence related or not), annual
performance evaluations done by superiors and increase in the quality
of work.
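The competence-management example above can be sketched as a simple linked structure. The dictionary keys and example entries merely restate the text; the `follow_up` helper is a hypothetical name, not part of the framework:

```python
# A drive indicator (a chosen priority area) linked to the monitor and
# feedback indicators that follow up on it, as in the competence
# management example above.
drive = {
    "name": "competence management",
    "monitor": ["amount and quality of training given",
                "general knowledge level of personnel (degrees, test scores)"],
    "feedback": ["competence-related root causes found from incidents",
                 "annual performance evaluations by superiors"],
}

def follow_up(indicator: dict) -> list:
    """List every indicator that tracks progress on this drive indicator."""
    return indicator["monitor"] + indicator["feedback"]

print(len(follow_up(drive)))  # 4 follow-up indicators
```

The structure makes the report's point explicit: a drive indicator is not read in isolation but through the monitor and feedback indicators attached to it.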
- The indicator is valid; i.e., it measures what it is intended to
measure
of foreign particles (“trash”) in the process. Although most
power plants should, fortunately, find it difficult to make a reliable
trend out of these findings, the few instances nevertheless provide an
important lagging indicator of the state of the safety culture in the
organization.
Woods (2009, p. 499) reminds us about the lesson from the Columbia
accident investigation: “Organizations need mechanisms to assess the
risk that the organization is operating nearer to safety boundaries than
it realizes – a means to monitor for risks in how the organization
monitors its risks relative to a changing environment”. This
monitoring of how well the organization is monitoring its risks
(second-order or metamonitoring) is an important yet difficult
endeavour. Some monitor indicators provide information on the
ability of the organization to monitor its risks adequately – for
example mindfulness and vigilance (and especially the potential
discrepancy between external and internal audit findings, see
Appendix B) provide information on organizational blind spots. Also,
the “understanding of hazards” and “understanding of the
organizational core task” indicators provide information on the ability
of the organization to correctly spot the hazards and evaluate their
risks in relation to the tasks it carries out.
5a. If the results of step 4 are inconsistent, correct the indicators
Step five follows the analysis of the indicator data. At this step,
corrective or preventive actions are taken based on the findings. If the
results of the indicators are inconsistent, the indicators have to be
corrected (step 5a). This can mean, for example, that monitor indicators
show a steady decline in safety level despite drive indicators showing
successful emphasis on the chosen safety management areas, or that
the feedback indicators show an increasing number of events while the
monitor indicators have not changed. In such a situation all the
indicators have to be analysed and their rationale and underlying
model questioned. If the inconsistencies are big enough, the process
should return to step one. The feedback indicators provide
information that can be used in correcting safety management
activities (step 5b). This means, for example, conducting a root cause
analysis for an event and defining corrective measures and
corresponding drive indicators to facilitate the implementation of
the measures. Monitor indicators provide a view on the current safety
level of the organization and point to the necessary changes in
priorities if the safety level shows signs of degradation (5c). Finally, if
the three types of indicators repeatedly show inconsistent results, the
underlying model of safety might be flawed (5d). For example, if the
plant has numerous events and near-misses even when the monitor
indicators claim a high level of safety, the monitor indicators might be
based on too narrow a conception of safe performance. To conclude,
the selection and utilization of safety performance indicators is a
continuous process where all three types of indicators are analysed
and fine-tuned to better correspond with reality (cf. EPRI, 2000).
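As a rough sketch, the branching logic of step 5 could be encoded as follows. The trend encoding (+1 improving, 0 stable, -1 degrading) and the function name are assumptions made for illustration, not a method from the report:

```python
def step5_action(drive_trend: int, monitor_trend: int, feedback_trend: int) -> str:
    """Rough decision rule for step 5, with each indicator type's trend
    encoded as +1 (improving), 0 (stable) or -1 (degrading)."""
    # 5a: inconsistency between indicator types - monitors decline despite
    # successful drive emphasis, or events increase while monitors are flat.
    if (drive_trend > 0 and monitor_trend < 0) or \
       (feedback_trend < 0 and monitor_trend == 0):
        return "5a: correct the indicators, question their rationale"
    # 5b: feedback indicators point to needed corrective actions,
    # e.g. a root cause analysis with corresponding drive indicators.
    if feedback_trend < 0:
        return "5b: correct safety management activities"
    # 5c: monitor indicators show degradation -> change priorities.
    if monitor_trend < 0:
        return "5c: adjust priorities and drive indicators"
    return "continue monitoring"

print(step5_action(drive_trend=1, monitor_trend=-1, feedback_trend=0))
```

A persistent 5a outcome across reporting periods would correspond to case 5d in the text: the underlying safety model itself should be questioned.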
Drive indicators are categorized as follows (see Appendix A):
- Technology management
- Leadership
o Safety communication
- Work management
o Resource management
o Integration of competence
o Subcontractor management
- Strategic management
o Change management
- Organization and management
- Social processes
- Human factors
managing the measure instead of safety differs depending on the type
of the measure: leading, lagging, activity or outcome. Hopkins (2009)
argues that activity indicators (as opposed to outcome indicators) are
the most susceptible to management, since it is possible to reduce their
quality without sacrificing their quantity, e.g. by taking more people
into training at the same time. However, this critique presupposes that
indicators are always quantitative.
[Figure: The relationship between metrics, indicators and phenomena.
Metrics 1.1–1.n, 2.1–2.n and 3.1–3.n feed into Indicators 1, 2 and 3
respectively; the indicators reflect underlying Phenomena 1–4, which in
turn bear on the efficiency, safety and wellbeing of the organization.
Each link carries an error term (e), and the whole system is subject to
external variability.]
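The hierarchy in the figure can be sketched as a small data structure. This is purely illustrative: neither the class names nor the averaging rule come from the report.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Metric:
    name: str
    value: float  # each metric carries measurement error ("e" in the figure)


@dataclass
class Indicator:
    name: str
    metrics: List[Metric] = field(default_factory=list)

    def value(self) -> float:
        # One simple choice: an indicator summarizes its metrics as a mean.
        return sum(m.value for m in self.metrics) / len(self.metrics)


# Indicator 1 summarizes metrics 1.1 ... 1.n:
indicator_1 = Indicator("Indicator 1",
                        [Metric("Metric 1.1", 0.8), Metric("Metric 1.2", 0.6)])
```

Here `indicator_1.value()` collapses metrics 1.1 and 1.2 into a single indicator value of 0.7; the phenomenon behind the indicator remains an inference, not a direct measurement.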
- The phenomenon in question cannot be accurately measured by
one indicator; rather, multiple indicators are needed.
For example, occupational accidents can tell something about the state
of process safety as measured by, e.g., the number of reactor scrams,
about development initiatives, or about the use of human performance
tools. This is because these are all partly affected by the same
underlying phenomena. In this case the underlying phenomena could
be workplace norms concerning thoroughness and proficiency. Still,
one cannot decipher solely from an increase in occupational accidents
that there is a problem with norms. Norms are only one possible
explanation and there is need for corroborative evidence from other
indicators before making any judgments.
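This need for corroborative evidence can be sketched as follows. The indicator names are hypothetical, and the threshold of two agreeing indicators is an arbitrary choice for illustration:

```python
def corroborated(readings, threshold=2):
    """Return (judgment_warranted, supporting_indicators).

    readings maps an indicator name to True if that indicator currently
    signals a problem. A judgment about an underlying phenomenon, such
    as degrading workplace norms, is warranted only when at least
    `threshold` indicators agree.
    """
    signalling = [name for name, flagged in readings.items() if flagged]
    return len(signalling) >= threshold, signalling


# A rise in occupational accidents alone does not yet warrant the
# conclusion that workplace norms are the problem:
warranted, evidence = corroborated({
    "occupational_accidents_rising": True,
    "reactor_scrams_rising": False,
    "housekeeping_deficiencies_rising": False,
})
# warranted remains False until other indicators corroborate the signal.
```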
indicators to physical examination in health care. Body temperature is
a good indicator for a person’s health just as are pulse rate and blood
pressure. A medical examination often starts with checking these vital
signs. However, sometimes a good state of these indicators does
not suffice to be certain that nothing is wrong. For
example, a broken bone may not change these vital signs. These
signs also show large variability across individuals. An indicator is
thus always “just” an indicator. Its actual meaning needs to be thought
through carefully. As IAEA (2000, 1) points out in its TECDOC, the
actual values of the indicators are not intended to be direct measures
of safety. Instead safety performance can be inferred from the results.
EPRI (2000) also sees interpreting the meaning of indicator data as the
most essential step in the process of using leading indicators. Yet,
according to EPRI’s case studies, interpretation is also the point where
the process of using leading indicators is most likely to falter. Often
the data collection process assumes primary importance at the expense
of interpretation. EPRI recommends that leading indicator data should
be addressed in quarterly meetings of the management steering group
and other interested personnel in order to understand the big picture.
EPRI highlights the fact that data do not think, people do. The
indicator data as such are not interesting; it is the group work of
interpreting the data that produces the meaningful outcomes in the
process of utilizing leading indicators.
that few if any of the indicators are totally independent of one another.
They are all measures of safety culture and probably have some
correlation with each other.
7 Conclusions
The purpose of safety performance indicators is to provide
information on safety, motivate people to work on safety and
contribute to change towards increased safety in the organization.
Differentiation of safety performance indicators and safety culture
indicators is unnecessary, since they should measure the same
phenomena.
The selection and utilization of safety performance indicators is a
continuous process where all three types of indicators are analysed
and fine-tuned to better correspond with reality. The safety
performance of the plant is always inferred from the data from all the
indicators analysed together. There is no direct correspondence
between one indicator and nuclear safety. Rather the safety
performance indicators can provide a holistic view on the potential of
the nuclear power plant to guarantee nuclear safety and point out key
areas of concern where attention is required. This requires skill in
analysing the indicator data and interpreting the results within an
organizational-theoretical framework.
Acknowledgements
References
Ale, B. (2009). More thinking about process safety indicators. Safety
Science, 47, 470-471.
Chakraborty, S. et al. (2003). Risk-based Safety Performance Indicators for
Nuclear Power Plants. Transactions of the 17th International Conference on
Structural Mechanics in Reactor Technology (SMiRT 17) Prague, Czech
Republic, August 17 –22, 2003.
Dahlgren, K. (2008). Lessons learned from international experiences. HUSC
seminar, December 4th, 2008, Stockholm, Sweden.
Dekker, S.W.A. (2005). Ten questions about human error. A new view of
human factors and system safety. New Jersey: Lawrence Erlbaum.
Dyreborg, J. (2009). The causal relation between lead and lag indicators.
Safety Science, 47, 474-475.
EPRI (2000). Guidelines for trial use of leading indicators of human
performance: the human performance assistance package. 1000647. Palo
Alto, CA: EPRI.
EPRI (2001a). Final report on leading indicators of human performance.
1003033. Palo Alto, CA & Washington, DC: EPRI & U.S. Department of
Energy.
EPRI (2001b). Predictive validity of leading indicators: Human performance
measures and organizational health. 1004670. Palo Alto, CA: EPRI.
Flodin, Y. & Lönnblad, C. (2004). Utveckling av system för
säkerhetsindikatorer. SKI Rapport 2004:01.
Grabowski, M., Ayyalasomayajula, P., Merrick, J., Harrald, J.R., & Roberts,
K. (2007). Leading indicators of safety in virtual organizations. Safety
Science, 45, 1013−1043.
Grote, G. (2009). Response to Andrew Hopkins. Safety Science, 47, 478.
Hale, A. (2009). Why safety performance indicators? Safety Science, 47,
479−480.
Hollnagel, E. (2004). Barriers and accident prevention. Aldershot: Ashgate.
Hollnagel, E. & Woods, D.D. (2006). Epilogue – Resilience Engineering
Precepts. In E. Hollnagel, D.D. Woods and N. Leveson, eds. Resilience
engineering. Concepts and precepts. Aldershot: Ashgate.
Hollnagel, E. (2008). Safety management - looking back or looking forward.
In E. Hollnagel, C.P. Nemeth and S. Dekker (Eds.), Resilience Engineering
Perspectives, Volume 1. Remaining sensitive to the possibility of failure.
Aldershot: Ashgate.
Hopkins, A. (2009). Thinking about process safety indicators. Safety
Science, 47, 460−465.
Hopkins, A. (2009b). Reply to comments. Safety Science, 47, 508-510.
HSE. (2006). Developing process safety indicators. Health and Safety
Executive. HSE Books.
Hudson, P.T.W. (2009). Process indicators: Managing safety by the
numbers. Safety Science, 47, 483-485.
IAEA (1999). Safe management of the operating lifetimes of nuclear power
plants. INSAG-14. Vienna: IAEA.
IAEA (2000). Operational safety performance indicators for nuclear power
plants. Vienna: IAEA.
IAEA (2002). Self-assessment of safety culture in nuclear installations.
Highlights and good practices. IAEA-TECDOC-1321. Vienna: IAEA.
IAEA. (2003). Periodic safety review of nuclear power plants. Safety
Standards Series No. NS-G-2.10. Vienna: IAEA.
IAEA (2006). The management system for facilities and activities. Safety
Requirements No. GS-R-3. Vienna: IAEA.
IAEA (2008). SCART Guidelines. Reference report for IAEA Safety Culture
Assessment Review Team (SCART). Vienna, February 2008.
Kainulainen, E. (2009). (Ed.), Regulatory control of nuclear safety in Finland.
Annual report 2008. STUK-B 105. Helsinki: STUK.
Kjellén, U. (2009). The safety measurement problem revisited. Safety
Science, 47, 486-489.
Mearns, K. (2009). From reactive to proactive – can LPIs deliver? Safety
Science, 47, 491−492.
OECD. (2003). Guidance on safety performance indicators. OECD
Environment, Health and Safety Publications. Series on Chemical Accidents
No. 11. Paris: OECD Publications.
OECD. (2008). Guidance on developing safety performance indicators
related to chemical accident prevention, preparedness and response. For
industry. Second Edition. OECD Environment, Health and Safety
Publications. Series on Chemical Accidents No. 19. Paris: OECD
Publications.
Rasmussen, J. (1997). Risk management in a dynamic society: A modelling
problem. Safety Science, 27, 183-213.
Reason, J. (1997). Managing the risks of organizational accidents.
Aldershot: Ashgate.
Reiman, T. & Oedewald, P. (2007). Assessment of Complex Sociotechnical
Systems – Theoretical issues concerning the use of organizational culture
and organizational core task concepts. Safety Science 45, 745-768.
Reiman, T. & Oedewald, P. (2008). Turvallisuuskriittiset organisaatiot –
Onnettomuudet, kulttuuri ja johtaminen. Helsinki: Edita.
Reiman, T. & Oedewald, P. (2009). Evaluating safety critical organizations.
Focus on the nuclear industry. Swedish Radiation Safety Authority,
Research Report 2009:12.
Reiman, T., Pietikäinen, E. & Oedewald, P. (2008). Turvallisuuskulttuuri.
Teoria ja arviointi. VTT Publications 700. Espoo: VTT. Available from:
http://www.vtt.fi/inf/pdf/publications/2008/P700.pdf.
Reiman, T., Pietikäinen, E., Kahlbom, U. & Rollenhagen, C. (In press).
Safety Culture in the Finnish and Swedish Nuclear Industries – History and
present. NKS report.
Rollenhagen, C. (2010). Can focus on safety culture become an excuse for
not rethinking design of technology? Safety Science, 48, 268-278.
Step-Change in Safety (2001). Leading performance indicators: a guide for
effective use. Available at:
[http://www.stepchangeinsafety.net/stepchange/News/StreamContentPart.as
px?ID=1517]
Valtiovarainministeriö (2005). Indikaattorit ohjauksen ja seurannan välineinä.
Valtiovarainministeriön indikaattorityöryhmän raportti. Keskustelunaloite 73.
Valtiovarainministeriö. Kansantalousosasto.
WANO (2009). 2008 Performance Indicators. Available at:
[http://www.wano.org.uk/PerformanceIndicators/PI_Trifold/PI_2008_TriFold.
pdf]
Weick, K. E. & Sutcliffe, K.M. (2007). Managing the unexpected. Resilient
performance in an age of uncertainty. Second Edition. San Francisco:
Jossey-Bass.
Woods, D.D. (2009). Escaping failure of foresight. Safety Science, 47, 498-
501.
Woods, D.D. & Hollnagel, E. (2006). Prologue: Resilience engineering
concepts. In E. Hollnagel, D.D. Woods & N. Leveson (Eds.), Resilience
engineering. Concepts and precepts. Aldershot: Ashgate.
Wreathall, J. (2009). Leading? Lagging? Whatever! Safety Science, 47,
493−494.
Zwetsloot, G.I.J.M. (2009). Prospects and limitations of process safety
performance indicators. Safety Science, 47, 495−497.
Appendix A: Examples of
drive indicators
A concise summary list of potential leading drive indicators is presented.
The list should be considered a pragmatic tool to guide attention to the
relevant aspects, not a formal auditing check list or an indicator set. The
main categories are based on Reiman and Oedewald (2009; see also Reiman
et al 2008), and the specific contents of the categories include input from
OECD (2008), and IAEA (1999, 2000, 2002, 2003, 2008).
Organizational functions
• There is access to the appropriate tools and data for design and
engineering (METRIC)
• There is a procedure for the identification of possible degradation
mechanisms (METRIC)
• Safety goals are defined both for short and long term (METRIC)
• Reporting of deviations, worries and own mistakes is encouraged
by the management (METRIC)
• The personnel are informed about the overall safety level and
current challenges on a regular basis (METRIC)
• Information that is relevant for work is easily accessible
(METRIC)
• All areas of operation are covered by adequate and documented
procedures (METRIC)
• Competence is maintained for both new and old technology
(METRIC)
• Contractor and purchase management (INDICATOR)
events, near misses and maintenance history at the organization
(METRIC)
• The amount and pace of changes that the organization can handle
are considered when planning changes (METRIC)
Appendix B: Examples of
monitor indicators
A concise summary list of potential leading monitor indicators is presented.
The list should be considered a pragmatic tool to guide attention to the
relevant aspects, not a formal auditing check list or an indicator set. The
main categories are based on Reiman and Oedewald (2009), and the specific
contents of the categories include input from OECD (2008), IAEA (1999,
2000, 2002, 2003, 2006, 2008) and Weick and Sutcliffe (2007). The
technical condition of the plant is not dealt with in this report due to its
plant-specific nature and the fact that the focus of this report is mainly on
human and organizational factors.
There need to be fewer monitor indicators than drive indicators.
This is because all the monitor indicators should be analysed and
monitored regularly, whereas drive indicators are selected depending on
prioritization; too many indicators would thus create an information
overload. Nevertheless, the number of indicators should be sufficient to
provide a reliable view on the status of safety culture and system safety
in the organization.
• The quality and clarity of the safety policy and safety goals
(METRIC)
• The clarity with which the consideration of process safety,
HSE (health, occupational safety, environment) and security issues
is integrated (METRIC)
• Strategy and external relations (INDICATOR)
• The extent to which the workload of workers is neither too high nor
too low (METRIC)
• The extent to which the demands of the tasks are in line with the
skills of the workers (METRIC)
• The extent to which the time pressure that workers feel is not too
high (METRIC)
• The extent to which the personnel feel that they can influence
safety related issues (METRIC)
• The extent to which the personnel understand the task and goals of
the organization (METRIC)
• The extent to which the personnel know the safety policies and the
operating principles of the organization (METRIC)
• The extent to which the personnel understand the hazards that are
connected to their work (METRIC)
• Sense of personal responsibility (INDICATOR)
• The extent to which the personnel perceive that they have an
effect on the outcome of their work, and that their way of working
(incl. attitudes) influences that of others (METRIC)
Social processes
• Sensemaking and joint attribution of meaning to past, present and
future events (INDICATOR)
• The extent to which past successes are not considered as
guarantees of future success. (METRIC)
• The extent to which there exists a strong social identity that allows
the personnel to feel that they belong to the organization (METRIC)
• The extent to which habits and routines are reflected upon from
time to time (METRIC)
• Optimizing and local adaptation (INDICATOR)
• The extent to which the gap between work as prescribed and work
as actually done is known and monitored at the organization
(METRIC)
Appendix C: Examples of
feedback indicators
Systems, structures and components
• Fuel leaks
Human factors
• Sick leave
• Turnover
Strålsäkerhetsmyndigheten
Swedish Radiation Safety Authority