
Research 2010:07

Indicators of safety culture – selection
and utilization of leading safety
performance indicators

Authors: Teemu Reiman and Elina Pietikäinen

Report number: 2010:07 ISSN: 2000-0456


Available at www.stralsakerhetsmyndigheten.se
Title: Indicators of safety culture – selection and utilization of leading safety performance
indicators
Report number: 2010:07
Author: Teemu Reiman & Elina Pietikäinen; VTT, Technical Research Centre of Finland
Date: March 2010

This report concerns a study which has been conducted for the Swedish
Radiation Safety Authority, SSM. The conclusions and viewpoints presented
in the report are those of the author/authors and do not necessarily coincide
with those of the SSM.

SSM Perspective

According to the Swedish Radiation Safety Authority's Regulations concerning
Safety in Nuclear Facilities (SSMFS 2008:1), "the licensee shall ensure
that safety in the nuclear activity is routinely monitored and followed
up, deviations are identified and handled so that safety is maintained
and continuously develops according to the objectives and directives that
apply" (2 Chap., 9 §, 8 point). The deviations may concern deviations
from safety goals and directives as well as deviations from procedures and
instructions that are applied in the nuclear activity. Safety indicators can
be a suitable aid in the monitoring and follow-up of the nuclear activity.
However, safety indicators or safety performance indicators can also be an
aid in the proactive safety management of a nuclear activity.

SSM expects, as a part of safety management, that safety culture be
regularly assessed by the licensees; indicators of safety culture can be
a useful tool both for licensees and the regulators.

Background
SSM has identified a need for an overview, analysis and evaluation of
safety performance indicators and particularly safety culture indicators in
the domain of nuclear safety. Current safety performance indicators are
usually lagging, i.e., measuring something that has happened. In order to
be able to monitor the effects of proactive safety work as well as anticipate
vulnerabilities, organizations should define leading indicators. These
should be able to grasp organizational practices and processes that precede
changes in the safety performance of the organization.

Objectives of the project
The overall objective of the project was to provide an overview of the
selection and effects of leading safety performance indicators in the domain
of nuclear safety. The project should provide guidance on the selection
and interpretation of leading indicators as well as information on the
theoretical justification of the intended measures. Indicators should
be categorized on the basis of the underlying phenomena they seek to
measure as well as the nature of the data they produce. The project
should also propose a tentative model of the influence of the leading
indicators on nuclear safety in terms of their effects.

The project was built on VTT's work on the evaluation of safety critical
organisations and safety culture as well as IAEA's ongoing work concerning
leading indicators of nuclear safety.

Results
The project has resulted in a broad overview of the definition of safety
performance indicators, the existing types of indicators and the utilization
of safety performance indicators in the nuclear industry. The project has
given deeper knowledge of the different kinds of safety performance
indicators (leading and lagging), including safety culture indicators, and how
they are related to safety management in the nuclear domain. A framework
for the selection and use of safety performance indicators has been developed,
supported with examples.

Effect on SSM supervisory and regulatory tasks
This framework for the selection of safety performance indicators, and the
accompanying examples, including safety culture indicators, will give good
support for the development of regulatory indicators in the area. The project
has also given further knowledge of how to evaluate safety critical organisations
with the emphasis on the nuclear industry (see Evaluation safety-critical
organisations – emphasis on the nuclear industry, SSM Report Research
2009:12).

Project information
Project managers at SSM: Lars Axelsson and Per-Olof Sandén
Project reference: SSM 2009/2235
Project number: 1604

Summary
Safety indicators play a role in providing information on organizational
performance, motivating people to work on safety and increasing
organizational potential for safety. The aim of this report is to provide an
overview of leading safety indicators in the domain of nuclear safety.
The report explains the distinction between lead and lag indicators and
proposes a framework of three types of safety performance indicators
– feedback, monitor and drive indicators. Finally, the report provides
guidance for nuclear energy organizations on selecting and interpreting
safety indicators. It proposes the use of safety culture as a leading safety
performance indicator and offers an example list of potential indicators
in all three categories. The report concludes that monitor and drive
indicators are so-called lead indicators. Drive indicators are chosen
priority areas of organizational safety activity.

They are based on the underlying safety model and the potential safety
activities and safety policy derived from it. Drive indicators influence control
measures that manage the sociotechnical system; they change, maintain,
reinforce, or reduce something. Monitor indicators provide a view on the
dynamics of the system in question: the activities taking place, the abilities,
skills and motivation of the personnel, routines and practices – the
organizational potential for safety. They also monitor the efficacy of the
control measures that are used to manage the sociotechnical system. Typically,
the safety performance indicators that are used are lagging (feedback)
indicators that measure the outcomes of the sociotechnical system. Besides
feedback indicators, organizations should also acknowledge the important
role of monitor and drive indicators in managing safety.

The selection and use of safety performance indicators is always based
on an understanding (a model) of the sociotechnical system and safety.
The safety model defines what risks are perceived. It is important that
the safety performance indicators can help in reflecting on this model.
Key questions to ask when selecting and utilizing safety performance
indicators are 1) what is required from the nuclear power plant to perform
safely and 2) what is required from the organization in order to be aware
of its safety level and enhance its safety performance.

The indicators should provide information on whether these requirements
are met or not, where the organization should put more effort to
meet the requirements and, finally, whether the organization has an
accurate view of the requirements.

Content
1 Introduction............................................................................... 3
2 Safety, performance and safety performance indicators.............. 5
2.1 What is a safety performance indicator?................................... 5
2.2 Functions of organizational safety performance indicators....... 6
2.3 Types of safety performance indicators.................................... 9
3 Utilization of safety performance indicators in the nuclear
industry........................................................................................... 11
3.1 Indicating nuclear safety........................................................ 11
3.2 Indicator systems ................................................................... 12
3.3 State-of-the-art on safety performance indicators ................... 14
4 Leading and lagging indicators of safety.................................. 19
4.1 Distinguishing lead from lag .................................................. 19
4.2 Leading indicators as precursors to harm or signs of changing
vulnerabilities.............................................................................. 20
5 Safety culture as a leading safety performance indicator .......... 23
5.1 Criteria for good safety culture .............................................. 23
5.2 Monitoring safety culture in the sociotechnical system .......... 24
6 Framework for the selection and use of safety performance
indicators ........................................................................................ 31
6.1 The role of indicators in safety management .......................... 31
6.2 The selection of key safety performance indicators................ 32
6.3 Relation of monitor indicators to performance ....................... 37
6.4 Making inferences about the level of safety ........................... 39
7 Conclusions ............................................................................. 43
Acknowledgements ......................................................................... 44
References....................................................................................... 45
Appendix A: Examples of drive indicators ...................................... 48
Appendix B: Examples of monitor indicators .................................. 56
Appendix C: Examples of feedback indicators................................. 63

1 Introduction
The contemporary view on safety emphasises that safety critical
organizations should be able to proactively evaluate and manage the
safety of their activities. This proactivity should be endorsed in
organizational safety management. Safety, however, is a phenomenon
that is hard to describe, measure, confirm, and manage. Technical
reliability is affected by the performance of the employees.
Furthermore, the effect of management actions, working
conditions and the culture of the organization cannot be ignored when
evaluating the overall safety of the activities.

Scientists in the field of safety critical organizations state that safety
emerges when an organization is willing and capable of working
according to the demands of its task, and when it understands the
changing vulnerabilities of its work (Dekker, 2005; Woods &
Hollnagel, 2006; Reiman & Oedewald, 2007). Adopting this point of
view, we state that managing the organization and its sociotechnical
phenomena is the essence of the management of safety (Reiman &
Oedewald, 2009). Thus, the management of safety relies on systematic
anticipation, monitoring and development of organizational
performance. Various safety indicators play a key role in providing
information on current organizational safety performance. Increasing
emphasis has also been placed on the role of indicators in
providing information to be used in the anticipation and development of
organizational performance. These indicators are called leading
indicators.

The safety performance indicators that have commonly been used
have often been lagging – measuring outcomes of activities or things
and events that have already happened. In order to be able to monitor
the effects of proactive safety work as well as anticipate
vulnerabilities, organizations should define leading indicators.
These should be able to grasp organizational practices and processes
that antecede (lead) changes in the safety performance of the organization.
Hollnagel (2008) calls this kind of control feedforward control. This
kind of control relies on anticipated effects instead of past outcomes,
contrary to traditional feedback-based safety management.

Understanding and managing organizational processes and practices
has become the primary concern of safety management and science
(Reason, 1997; Reiman & Oedewald, 2007). Safety management has
been conceptualised as culminating in the problem of system control
in complex sociotechnical environments (Rasmussen, 1997; Reiman
& Oedewald, 2009). Hollnagel and Woods (2006, p. 348) summarize
that "in order to be in control it is necessary to know what has
happened (the past), what happens (the present) and what may happen
(the future), as well as knowing what to do and having the required
resources to do it." The system should be controlled in such a manner that it
remains within the boundaries of its safe performance. If safety is
understood as something more than the absence of risk and the
negative, the indicators should also be able to focus on the positive
side of safety – on the presence of something (Hollnagel, 2008, p. 75;
Rollenhagen, 2010). This requires a model of the system as well as an
outline of how the system produces safety (Hollnagel, 2008; Reiman
& Oedewald, 2009).

The aim of this report is to provide an overview of leading safety
indicators in the domain of nuclear safety. The report first aims at
clarifying the purposes and types of safety performance indicators.
The report explains the distinction between lead and lag indicators and
proposes a framework of three types of safety performance indicators
– feedback, monitor and drive indicators. Finally, the report provides
guidance for nuclear energy organizations on selecting and
interpreting leading safety indicators. It proposes the use of safety
culture as a leading safety performance indicator and offers an
example list of potential safety performance indicators in all three
indicator categories.

2 Safety, performance and
safety performance
indicators
2.1 What is a safety performance indicator?
The literature on safety performance indicators shows that the concept
of a safety indicator is anything but clear (see Safety Science, 47 (2009) for
the latest scientific discussion on the issue) and there are different
purposes for using safety indicators. For example, indicators can be
seen as national or international tools for defining political goals and
for following whether the goals are met (cf. Valtiovarainministeriö,
2005). Indicators can also be seen as tools for the authorities for
defining their regulatory activities and the goals they expect safety
critical organizations to fulfil, and for following whether these goals
are met. Indicators can also be seen as a way to communicate safety
issues to the public (cf. Karjalainen, 2009, p. 88). Finally, safety
performance indicators can be used by the organization to gain
information on its current safety level and on the efficacy of its safety
improvement efforts.

The definition of safety is itself anything but clear. In practice, the different
definitions of the measured object (safety) that are used explicitly or
implicitly affect the selection of the indicators and the interpretation of
the collected data. Many indicators embed an idea of safety as the
absence of something or of some inadequacy, e.g., the
fewer the number of unplanned scrams or INES rated events, the
higher the safety level. Another such example would be using the
number of human errors to infer the safety level, i.e., the fewer the
human errors, the higher the safety level. Often the concept of safety
remains undefined in the indicator system. This leads to the above-
mentioned examples, where interpretations about the safety level are made
based on scarce and often deficient data.

Chakraborty et al. (2003) argue that a "nuclear power plant Safety
Performance Indicator (SPI) is a basic parameter (described
qualitatively or quantitatively) that is perceived as having potential
meaning (or relationship) to plant safety". Wreathall (2009, p. 494)
defines a safety indicator as follows: “Indicators are proxy measures
for items identified as important in the underlying model(s) of safety”.
Similarly to Wreathall’s view, we see that defining safety performance
indicators and their purpose should start by defining what this
"safety" that we are talking about is. What is it that we are trying to find
indications of?

The selection and use of safety performance indicators is always based
on an understanding of the sociotechnical system and system safety.
This understanding is often at least partly implicit or tacit,
meaning more or less justifiable opinions on what is
important for nuclear safety and what things should be taken care of
when assuring nuclear safety. These opinions then affect both the
selection and the interpretation of the safety performance indicators.
In this report we use the term safety model to denote this underlying
model of how safety is created in the sociotechnical system. We argue
that in order to select and utilize safety performance
indicators in a manner that approximates the actual level
of nuclear safety, the safety model should be systemic, incorporating
people, technology and the organization.

We approach the safety of nuclear power plants from the point of view
of nuclear safety, as distinct from, for example, occupational safety. We
define safety as an emergent property of the entire sociotechnical
system. Thus, safety is a dynamic property or a state that includes
people and technology. It is important to realise that safety is not a
system; the organization is (Reiman & Oedewald 2008). Safety
management requires the management of the organization. Safety
performance indicators should provide information on this
organizational ability to fulfil the core task. This means that they
should provide information on the safety culture of the organization.

According to our definition, the essence of safety culture is the ability
and willingness of the organization to understand safety, hazards and
means of preventing them, as well as ability and willingness to act
safely, prevent hazards from actualising and promote safety. Safety
culture refers to a dynamic and adaptive state. It can be viewed as a
multilevel phenomenon of organizational dimensions, social processes
and psychological states of the personnel.

To conclude, in this report safety performance indicators are
approached from an organizational point of view. The indicators are seen
as organizational tools for the evaluation and improvement of safety,
used as part of the safety management process of the organizations.

2.2 Functions of organizational safety performance indicators
When viewed from an organizational point of view, the purposes of
safety indicators can roughly be categorized into three groups: a)
monitoring the level of safety in the organization, b) changing and
developing the means of managing safety in the organization, and c)

motivating the management and the personnel to take the necessary
action (cf. Hale, 2009, p. 479).

Monitoring the level of safety in the organization

In their documents and guidelines, both IAEA and WANO seem to
emphasize the monitoring function of safety indicators. They see
safety performance indicators primarily as a way to monitor the level
of safety performance of the plant (cf. IAEA, 2000, p. 1; WANO,
2009). Often the monitoring is accomplished by looking at trends of
the indicator data over some period of time. For example, a guideline
by IAEA (2000, p. 1) states that “specific indicator trends over a
period of time can provide an early warning to plant management to
investigate the causes behind observed changes.”
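
As a concrete illustration of such trend monitoring, the following minimal sketch (in Python; the backlog series, window length and threshold are invented for illustration and are not from the IAEA guideline) fits a least-squares slope to the most recent values of an indicator where higher means worse, and raises an early warning when the trend deteriorates faster than a chosen threshold:

    from statistics import mean

    def trend_slope(values):
        """Least-squares slope of the values against their index (change per period)."""
        n = len(values)
        xs = range(n)
        x_bar, y_bar = mean(xs), mean(values)
        num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, values))
        den = sum((x - x_bar) ** 2 for x in xs)
        return num / den

    def early_warning(series, window=8, threshold=0.5):
        """True if the indicator has been rising faster than the threshold
        over the last `window` periods (hypothetical deterioration criterion)."""
        if len(series) < window:
            return False
        return trend_slope(series[-window:]) > threshold

    # Invented quarterly values of a "higher is worse" indicator,
    # e.g. a corrective-action backlog.
    backlog = [12, 11, 13, 12, 14, 15, 17, 18, 20, 21]
    print(early_warning(backlog))  # True: the recent trend slopes upward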

The safety management process should utilize the indicators, for example,
as triggers for investigating in depth whether there is substance for
concern in the organization (Wreathall, 2009, p. 494). These
investigations can be made e.g. by a small focused audit, by a field
investigation or by a survey of the workforce. These in turn provide a
more focused and in-depth indication of the status of the area of
concern.

The challenge in using safety performance indicators for monitoring
the current safety level is the unclear causal link between past events
and the current safety performance. Monitoring should not rely solely
on lagging indicators but also on indicators of current activities and
the potential of the organization to succeed in the future. We will
return to this topic in various sections of this report.

Changing and developing the means of managing safety in the
organization

A partly distinct purpose for using safety indicators besides
monitoring the safety level is to use them for change or improvement.
First of all, safety indicators can be used as a tool for setting specific
development goals and measuring the effectiveness of improvements
(cf. IAEA, 2000; WANO, 2009). Second, safety performance
indicators can be used to facilitate change and development in the
desired direction. This can be done by selecting indicators that
promote the wanted behaviour and new practices or inhibit unwanted
activity. For example, if the organization is implementing a practice of
having pre-job briefings before safety significant tasks are started, the
number of such briefings can be selected as a safety performance
indicator to be followed annually or even more often.

Motivating the management and the personnel to take the necessary
action

Besides helping in goal setting and progress evaluation, the process of
utilizing leading indicators and the selected safety performance
indicators as such can also have an effect on the actual safety
performance. The leading indicator process itself offers intrinsic value in
helping to address the role of organizational factors in human
performance (EPRI, 2001b). This is an important point that has not
always been given sufficient attention when discussing the selection
and use of safety indicators.

Safety indicators are cues for the personnel about the priorities and
interests of the management, and they can shape the personnel's ideas
of what safety or safe behaviour is or should be like. Thus, the
indicators steer the behaviour in the organization. Sometimes the
behaviour-steering power of the indicators is intensified by embedding
the indicators into the incentive system of the organization.
Unfortunately, this steering effect often remains unintentional and
might lead to problems when the explicit goal of the safety indicators
is to monitor the safety level and not to change or develop some specific
issue being measured.

Safety performance indicators can also be used to explicitly motivate
a certain kind of behaviour from the employees or the management.
Hudson (2009, p. 484) reminds us that "to shape managers' behaviour
most organizations will require indicators that can show significant
variation on a quarterly or annual basis". Safety performance
indicators should aim at countering the focus on short-term production
effects such as cost cutting, which manifest in safety only after the
manager has probably already moved on. However, Hudson (ibid.)
points out that in order to influence motivation, the effect of the
measure on the performance of the plant should be understood. Thus,
the indicators should be experienced as meaningful by the personnel.

To summarize, safety indicators can have different types of effects on
the behaviour in the organization:
- Direct effects on the measured metric: selection of some
specific indicator increases that kind of behaviour (e.g.
counting the number of management walk-arounds per month
increases the number of management walk-arounds)
- Direct effects on the indicated phenomenon: the selection of
some specific indicator increases the underlying
(psychological) phenomena (e.g. counting the number of
management walk-arounds per month increases the
management's commitment to safety and the personnel's interest
in safety)
- Unintended effects: the personnel become more interested in
managing the indicator itself rather than the phenomenon of
which it is supposed to provide an indication. For example, the
management optimizes the number of walk-arounds and
neglects other (important) issues that are not being measured.
We will return in Section 6 to the difference between metric and
indicator. Here it is sufficient to say that a metric denotes the
operationalization of the indicator (how it is measured), whereas an
indicator denotes something that one wishes to measure with the use
of one or more metrics.
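
To make the distinction concrete, the following minimal sketch represents it as a data structure (the class and example names are our own illustration, not terminology from the report): one indicator, the phenomenon of interest, is operationalized through one or more metrics.

    from dataclasses import dataclass, field

    @dataclass
    class Metric:
        """One operationalization: how the measurement is actually taken."""
        name: str
        unit: str
        values: list = field(default_factory=list)

    @dataclass
    class Indicator:
        """What one wishes to measure, via one or more metrics."""
        phenomenon: str
        metrics: list

    commitment = Indicator(
        phenomenon="management commitment to safety",
        metrics=[
            Metric("management walk-arounds", "count per month"),
            Metric("safety-related items closed on schedule", "percent"),
        ],
    )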

2.3 Types of safety performance indicators
Safety performance indicators can measure various aspects of nuclear
safety. Sometimes safety performance indicators are focused only on
human performance or human factors and sometimes the object is
nuclear safety in general. We have emphasized that the object of
safety performance indicators should be the functioning of the
sociotechnical system and thus nuclear safety in general.

Different categorizations of safety performance indicators exist in the
literature. We can differentiate at least six typologies of indicators:

- outcome versus activity based indicators

- leading versus lagging indicators

- input versus output indicators

- process versus personnel indicators

- positive versus negative indicators

- technical versus human factors indicators

It is important to note that these categorizations are partly
overlapping, especially concerning the first three categories. For
example, the division between outcome and activity indicators is
often considered similar to the division between lagging
(outcome) and leading (activity) indicators. OECD (2003) defines
activities indicators as means for measuring actions or conditions that
should maintain or lead to improvements in safety. Outcome
indicators in turn measure the results, effects or consequences of these
activities.

Outcome indicators are usually similar to lagging indicators, and they
show safety performance in terms of measures of past performance,
e.g., injury rates, radiation doses, and incidents. Input indicators are
usually called leading indicators, and they monitor the processes that
are effecting and maintaining safety performance. These include
leadership, training activities and work processes. OECD’s guidance
document on safety performance indicators (2008, p. 5) argues that
“outcome indicators tell you whether you have achieved a desired
result (or when a desired safety result has failed). But, unlike activities
indicators, they do not tell you why the result was achieved or why it
was not.”

In this report we categorize indicators into three types:
feedback, monitor and drive indicators. The feedback and drive
indicators correspond closely with outcome and activity indicators,
respectively. The monitor indicators are a set of indicators often
neglected in previous discussions on safety performance indicators.
They indicate the current level of safety in the organization. We will
return to these indicator types in Section 4, after looking at the past
utilization of indicators in the nuclear industry.

3 Utilization of safety
performance indicators in
the nuclear industry
Different types of safety indicators have been utilized in the nuclear
industry for a long time. For example, unit capability factors and
INES events have been used to indicate the (safety) performance of
the plant. High capability factors have been used as a positive
indicator of safety performance, whereas INES events are a
negative indicator. WANO also offers a set of performance indicators,
including capability factors and unplanned reactor scrams (see below),
with trend data for several years and different power plants.

3.1 Indicating nuclear safety
In an NKS project conducted together with Carl Rollenhagen and Ulf
Kahlbom (see Reiman et al., in press), we asked 30 experts from
Finnish and Swedish nuclear organizations (power companies,
regulators, and consultants) what issues they would consider if they
were given the task of evaluating the nuclear safety of a given power
plant. Figure 1 illustrates a combination of all the answers that we
received (see Reiman et al., in press).

[Figure 1 omitted: a map of the interviewees' answers grouped by theme,
covering management and owners, organizational activities, personnel,
systems and structures, the original technical design, and outcomes.]

Figure 1. Indicators that the interviewees explicitly raised as signals of the safety
level of the plant (from Reiman et al., in press). The indicators have been arranged
according to general themes that emerged from the definitions – management and
owners, technical design of the plant, organizational activities, personnel, systems
and structures, and finally, the outcomes.

Many people emphasized technical data and performance measures
that can be compared to other power plants – outcomes of the
organization. Another emphasis was on the organizational activities
that produce safety. Personnel-related issues were also considered
important indicators of the level of nuclear safety. What the
respondents seemed to lack was an overview of the relation of
different indications of the safety level. A few divided nuclear safety
explicitly into a) the technical condition of the plant and b) its
operation and management. (Reiman et al. in press)

In terms of this study, it is noteworthy that the responses can be
categorized according to whether they indicate outcomes,
organizational activities, or current states or structures in the
organization (systems and structures as well as personnel). Clearly, the
experts in the Nordic nuclear industry considered that nuclear safety
cannot reliably be evaluated by relying on only one type of indicator.
Rather, several sources of information are needed.

3.2 Indicator systems
In Finland, the regulator, STUK, has developed an indicator system for
supervising the nuclear safety of the Finnish nuclear power plants.
The indicator system divides nuclear safety into three sectors: 1)
safety and quality culture, 2) operational events, and 3) structural
integrity. These three sectors are divided into a total of 14 indicators
(figure 2).

Figure 2. STUK’s indicator system, from Kainulainen (2009, p. 88)

An interesting indicator in terms of this study is the accident risk of
nuclear facilities. This indicator is based on the results of probabilistic
risk analyses (PRA) (figure 3). STUK reminds that “when assessing
the indicator, it must be remembered that it is affected by both the
development of the power plant and the development of the
calculation model. Plant modifications and changes in methods,
carried out to remove risk factors, will decrease the indicator value.
An increase of the indicator value may be due to the model being
extended to new event groups, or the identification of new risk factors.
In addition, developing more detailed models or obtaining more
detailed basic data may change risk estimates in either direction"
(Kainulainen, 2009, p. 121).

The above example also illustrates the point made in Section
2.1 that the utilization of the indicators is based on an understanding
of the sociotechnical system. When this understanding deepens, it can
actually show up as a decrease in the safety level as measured by the safety
performance indicators. What actually happens then is of course not a
real decrease in safety but a calibration of the model to better
correspond with reality. In other words, the safety level has in reality
already been closer to the new, decreased level than the old indicated
level, but the previous models of safety have been unable to indicate
it.

Chakraborty et al. (2003) point out that “PSA [the old acronym for
PRA] provides a formal and most logical means for quantifying the
safety significance of operational events, corrective actions, design
modifications, and changes in plant configuration (plant condition). In
other words, PSA appears to be a consistent framework for defining
the most meaningful set of SPIs, and for linking these with the most
effective top-level safety indicators." PRA is focused on the
probability that the nuclear power plant will be safe in the future, and thus
it is a leading indicator of nuclear safety.

Figure 3. PRA calculations for the Finnish plants 1999–2008, from
Kainulainen (2009, p. 121)
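
A minimal sketch of the standard PRA arithmetic behind such an indicator (the event groups and all numbers are invented for illustration): the core damage frequency (CDF) sums, over the modelled groups of initiating events, the event frequency multiplied by the conditional probability that the safety functions fail to prevent core damage. The sketch also reproduces the calibration effect discussed above: extending the model to a new event group raises the indicator value even though the plant itself is unchanged.

    def core_damage_frequency(event_groups):
        """CDF per reactor-year: sum over initiating-event groups of
        frequency * conditional core damage probability."""
        return sum(freq * p_cd for freq, p_cd in event_groups.values())

    # Invented illustrative numbers: (frequency per year, P(core damage | event)).
    model = {
        "loss of offsite power": (1e-1, 1e-5),
        "small LOCA":            (1e-3, 1e-3),
    }
    print(f"{core_damage_frequency(model):.2e}")  # 2.00e-06

    # The model is extended to a new event group - the plant is unchanged,
    # but the indicator value increases.
    model["internal fire"] = (1e-2, 2e-4)
    print(f"{core_damage_frequency(model):.2e}")  # 4.00e-06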

However, Chakraborty et al. (2003) note that the PSA
framework does not address the risk influence of management and
organizational aspects, and thus it is not easy to assess the
appropriateness of the safety performance indicators that are proposed
for the assessment of management and organizational factors.

Besides the actual safety performance indicators that were depicted in
figure 2, STUK publishes each year the following information from the
Olkiluoto 1&2 and Loviisa 1&2 nuclear power plants (Kainulainen,
2009):
- Unit capability factors / load factors (ten-year trend)
- Daily average gross power for the reporting year
- Operation and operational events
- Annual maintenance outage – activities and performance
- Events during the year subject to special report
- INES-classified events (ten-year trend)
- Non-compliances during the year with the Technical
Specifications
- Reliability of the plant’s safety functions (failures during the
year in the plant’s safety functions and the systems, equipment
and structures implementing them)
- Failures or signs of wear in the integrity of equipment and
structures critical to plant safety
- Fuel leaks
- Events in the treatment, storage or final disposal of low- and
intermediate-level waste
- Development of the plant and its safety – activities and
performance
- Management and safety culture – activities and performance
- Functionality of the management system – activities and
performance
- Personnel resources and competence – activities and
performance
- Operational experience feedback – activities and performance
- Occupational radiation safety – activities and performance
- Collective occupational radiation doses since the start of the
operation
- Annual radiation doses to the critical groups since the start of
operation
- Radioactive nuclides originating from the plant
- Emergency preparedness

This information is not explicitly considered safety performance
indicator information. However, many of the issues that STUK attends
to do indicate the safety level of the power plants, and as such they
can also be considered safety performance indicators – just qualitative
in type.

3.3 State-of-the-art on safety performance indicators
In their study on safety performance indicators in eight countries and
eleven partner organizations representing regulatory organizations,
utilities, and technical support organizations in the nuclear field,
Chakraborty et al. (2003, p. 2) summarize the state of the art of the
application of safety performance indicators as follows:
• In all countries operating nuclear power plants performance indicators are
either being tracked or are being proposed that can be applied to monitor the
safety performance of the plants.

• There is no unified approach concerning terminology and definition of
“performance indicators”, “safety indicators”, and “safety performance
indicators”.
• Most widely applied is the WANO set of performance indicators (10
quantitative indicators reported annually by nearly all NPPs worldwide, in
order to monitor the safety and economic performance of NPPs).
• In many countries the WANO set, complemented by other indicators, is
used by utilities and regulators to monitor the safety performance of NPPs.
• There is practically no calibration of safety performance indicators in order
to give a quantitative measure of plant safety (resp. risk).
• Evaluation of safety performance indicators applies relative thresholds
which are based on past experience.
• Safety performance indicators are generally applied in combination with
other methods to monitor plant safety (e.g. inspections, PSA, precursor
studies).
• Approaches have been developed to monitor status and trends of safety
management and safety culture by means of specific indicators. Calibration
in terms of influence on plant safety (resp. risk) is not available.
• Similarly it is intended to find indicators to detect early signs of
deterioration of safety. Proposals have been developed, but there is no
accepted approach. Furthermore, the relationship of “safety culture and
organizational aspects” to fundamental PSA input parameters and models
needs to be better established using actuarial plant data.
• Plant specific PSAs, taking into account actual operational experience,
produce safety performance indicators (CDF, release category frequencies)
based on an integrated view. However, the current PSA methodology does
not take into account (potential) influences from safety management or
safety culture, which have not yet been manifested in the operational
experience.

Chakraborty et al. (2003) propose that the development of risk-based
safety performance indicators "should follow the PSA hierarchy that
includes the relevant indicators representing, for instance:
• Initiating events
• Reliability of functions, systems, trains and components
• Mitigation potential of engineering systems
• Mitigation potential of emergency actions” (Ibid., p. 4).
They further note (ibid.) that organizational and management
influences should be included in the indicator framework but offer
limited guidance on how to accomplish this.

IAEA (2000, p. 1) leaves the choice of specific safety performance
indicators up to the organizations by stating that "each plant needs to
determine which indicators best serve its needs. Selected indicators
should not be static, but should be adapted to the conditions and
performance of the plant, with consideration given to the cost/benefit
of maintaining any individual indicator.” However, IAEA presents a
hierarchical structure or framework for supporting indicator selection
and utilization and provides examples of suitable indicators. It
encourages the use of the safety performance indicators that WANO has
developed (see below), which form the basis for the safety performance
indicators currently used in nuclear power plants.

The WANO Performance Indicator Programme supports the exchange
of operating experience information by collecting, trending and
disseminating nuclear plant performance data. Specific key indicator areas
are intended to give a quantitative indication of nuclear plant safety
and reliability, plant efficiency and personnel safety. In 2008,
these key indicator areas were:
- unit capability
- unplanned capability loss
- forced loss rate
- collective radiation exposure
- unplanned automatic scrams per 7 000 hours critical (see the sketch after this list)
- industrial safety accidents rate
- safety system performance
- fuel reliability
- chemistry performance
- grid-related loss factors
- contractor industrial safety accident rate (WANO, 2009).
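
As an example of the arithmetic behind one of these key indicator areas, the following minimal sketch normalizes the unplanned automatic scram count to 7 000 hours critical (roughly one year of operation, which we take to be the reason for the normalization base; the figures are invented):

    def scrams_per_7000h_critical(unplanned_scrams, hours_critical):
        """Unplanned automatic scrams normalized to 7 000 hours critical."""
        return unplanned_scrams * 7000.0 / hours_critical

    # One scram during 7 900 hours critical gives a rate just below one.
    print(round(scrams_per_7000h_critical(1, 7900.0), 2))  # 0.89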

WANO members report on most of these indicators on a quarterly
basis. The data is collected, trended and posted on the WANO
members' website. WANO published and distributed its first
performance indicator report in 1991. The level of reporting has grown
so that in 2008, 82 percent of the operating nuclear power plants
reported all eleven indicators (WANO, 2009).

In practice, WANO safety indicators are often complemented with
other indicators in the nuclear plants. For example, when Flodin &
Lönnblad (2004) reviewed safety performance indicators in use by the
Swedish utilities, they found that the selection of indicators was based
both on the WANO indicators and on indicators defined by the users
themselves. The Swedish utilities used well over 20 indicators for
follow-up of safety at the plants, including the 8 WANO indicators
that were available at that time.

IAEA (2000, p. 23) states that the safety indicators chosen should include a
combination of indicators that reflect actual performance (sometimes
called lagging indicators) and those that provide an early
warning of declining performance (sometimes called leading
indicators). The US Electric Power Research Institute (EPRI)
also emphasizes that there is more than one type of indicator.
EPRI strongly encourages the use of leading indicators by its
member utilities and provides tools and guidelines for this (EPRI,
2000, 2001a). These tools and guidelines are constructed so that they
are also in line with the principles of INPO (Institute of Nuclear
Power Operations).

Next we will look more closely at the differences between leading and
lagging safety performance indicators.

4 Leading and lagging
indicators of safety
4.1 Distinguishing lead from lag
The distinction between leading and lagging safety performance
indicators is not clear cut. Some safety scientists and practitioners
have described them more as a continuum than two separate entities
and have even suggested that the distinction between leading and
lagging is not that important at all (Hale, 2009).

The categorization of safety performance indicators into lead and lag
depends on the underlying model of safety. If one has a
mechanistic and technically oriented view of nuclear safety, near-
misses can be considered leading indicators. A more systemic and
dynamic view of the organization and system safety would not view
near-misses as leading indicators, but rather as indicators of past
safety performance. Another typical safety model emphasizes the
latent failures (pathogens) of the sociotechnical system as creating
conditions for accidents (Reason, 1997).

A working group for the UK oil and gas industry (Step-Change in
Safety, 2001, p. 3) has defined leading safety indicators as "something
that provides information that helps the user respond to changing
circumstances and take actions to achieve desired outcomes or avoid
unwanted outcomes", while lagging indicators were seen as "the
outcomes resulting from our actions". The working group used the
analogy of a sailing yacht as an example of leading and lagging
indicators. In a yacht, the compass, wind indicator and radar provide
indicators. In a yacht, the compass, wind indicator and radar provide
information that can be used to control the boat to maximise speed in
the direction that we want to go, whilst avoiding danger. They can
thus be seen as leading indicators, which provide information about
the current situation that can affect future performance. The log, on the
other hand, provides a measure of how far we have travelled. This
parallels lagging indicators, which are the outcomes of our actions.

OECD's guidance document on safety performance indicators for the
chemical industry (2008, p. 5) defined leading indicators (or, in their
usage, Activities Indicators) as follows: "Activities indicators are
usage Activities Indicators) as follows: “Activities indicators are
designed to help identify whether enterprises/organizations are taking
actions believed necessary to lower risks.” Examples of activities
indicators given in the document include “Are there systematic
procedures for hazard identification and assessment?”, “Are safety
issues adequately addressed in regular meetings of employees?", "Is
there an adequate recruitment procedure?" and "Is management
actively committed to, and involved in, safety activities?".

HSE (2006) defines leading indicators as follows: "The leading
indicator identifies failings or 'holes' in vital aspects of the risk
control system discovered during routine checks on the operation of a
critical activity within the risk control system". The definition seems
to view accidents from an epidemiological model (Hollnagel, 2004)
and to emphasize the indicators' role in identifying latent failures and
system deficiencies before they manifest. Hale (2009, p. 479)
emphasizes that an indicator is leading or lagging with respect to
whether "it leads or lags the occurrence of harm, or at least the loss of
control in the scenario leading to harm".

The health metaphor can be used to illustrate the challenges of
measuring safety. It has long been pointed out that the health of an
individual human being is something more than the absence of
illnesses or injuries. Health is an active state requiring and enabling
certain activities: the acquisition of nutrition, exercise, vitality. Often
people do not explicitly consider their health, or they take it for granted
until negative signs surface. These negative signs, such as
high blood pressure or a rise in temperature, are lagging indicators.
Safety has close parallels to health. Safety is also a state of activity,
not only the absence of accidents or incidents. Monitoring safety requires
more than monitoring the signs of "illnesses", that is, incidents,
deficiencies, errors. One must also be able to monitor the activities,
processes and mental states of the personnel that contribute to the
level of safety that the organization is producing. It is not enough just
to note that there have been no incidents during the year or that the trend
of incidents is declining. One must also know why this is so,
and how the current safety management processes are contributing
to the safety level.

4.2 Leading indicators as precursors to harm or signs of
changing vulnerabilities
Several reasons for using leading indicators have been proposed in the
literature:

- they provide information on where to focus improvement
efforts,

- they direct attention to proactive measures of safety
management rather than reactive follow-up of negative
occurrences or trending of events,

- they provide early warning signs of potential weak areas or
vulnerabilities in the organizational risk control system or
technology,

- they focus on precursors to undesired events rather than the
undesired events themselves,

- they provide information on the effectiveness of the safety
efforts underway, and

- they tell about organizational health, not only sickness or
the absence of it.

Typically, lead and lag indicators are considered on a time scale where
lead indicators precede harm and lag indicators follow harm.
According to this view, lagging indicators can be used to provide
feedback on the functioning of the system, to be used as further inputs
into the system. Lagging indicators would thus indicate the current
safety level of the system. We disagree with this definition.

Kjellén (2009, p. 486) defines a leading safety performance indicator
as an indicator that changes before the actual risk level has changed.
This definition deviates from many current usages and definitions of
the concept. The distinction between indicators that change before and
after the actual risk level changes is an important one. It also has
important implications for the requirements of leading indicators. For
the indicator to be sensitive to changes in the organizational risk
control system that predate the rise of the risk level, it cannot focus on
“failings”, “holes” or even “near-misses” or “deviations”. The
indicator has to provide information on the activities and the
organizational means of controlling risk.
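
One simple way to probe Kjellén's criterion empirically is to check whether a candidate indicator series correlates most strongly with outcomes some time later. The following minimal sketch (all data invented; a correlation peak at a positive lag is supporting evidence, not proof, of a leading relationship) shifts the outcome series by different lags:

    from statistics import correlation  # requires Python 3.10+

    # Invented quarterly series: the indicator peaks one quarter
    # before the outcome does.
    indicator = [5, 6, 8, 9, 9, 7, 6, 5, 5, 6]  # e.g. overdue maintenance items
    outcome   = [1, 1, 1, 2, 3, 3, 2, 2, 1, 1]  # e.g. reportable events

    def lead_correlation(ind, out, lag):
        """Correlation between the indicator and the outcome `lag` quarters later."""
        return correlation(ind[:len(ind) - lag], out[lag:])

    for lag in range(1, 4):
        print(lag, round(lead_correlation(indicator, outcome, lag), 2))
    # The correlation peaks at lag 1: the indicator changed before the outcome.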

EPRI's definition of leading indicators resembles Kjellén's definition
in some important aspects. According to EPRI (2000, p. A-3), "leading
indicators provide information about developing or changing
conditions and factors that tend to influence future human
performance”. Thus “effective leading indicators provide a basis for
predicting or forecasting situations in which the potential exists for a
change in human performance, either for better or worse.”

Both Kjellén (2009, p. 486) and EPRI (2000) seem to view leading
indicators not as measures of precursors to harm but as measures of
signs of changing vulnerabilities. This means that leading indicators
should measure things that might one day become precursors to harm
or cause a precursor to harm. We agree with this perspective. All in all,
we define leading indicators as follows (cf. Dyreborg, 2009):

Lead safety indicators indicate the current state
and/or potential development of key organizational
functions or processes as well as the technical
infrastructure of the system. The current state includes a
view on the changing vulnerabilities of the organization
as well as its internal model of how it is creating safety.
The lead monitor indicators indicate the potential of the
organization to achieve safety. They do not directly
predict the safety related outcomes of the sociotechnical
system since these are also affected by numerous other
factors such as external circumstances, situational
variables and chance.
In the next chapter we present an organizational theoretical view on
safety indicators and system safety that parallels leading indicators
with safety culture.

5 Safety culture as a
leading safety
performance indicator
5.1 Criteria for good safety culture
According to our approach (see Reiman et al., 2008; Reiman &
Oedewald, 2009), the essence of safety culture is the ability and
willingness of the organization to understand safety, hazards and
means of preventing them, as well as ability and willingness to act
safely, prevent hazards from actualising and promote safety. Safety
culture refers to a dynamic and adaptive state. It can be viewed as a
multilevel phenomenon of organizational dimensions, social processes
and psychological states of the personnel. Reiman and Oedewald
(2009, p. 43) have stated that a nuclear industry organization has a high-
level safety culture when the following criteria are met:
- Safety is genuinely valued and the members of the
organization are motivated to put effort into achieving high
levels of safety
- It is understood that safety is a complex phenomenon. Safety is
understood as a property of an entire system and not just the
absence of incidents
- People feel personally responsible for the safety of the entire
system; they feel that they can have an effect on safety
- The organization aims at understanding the hazards and
anticipating the risks in its activities
- The organization is alert to the possibility of an unanticipated
event
- There are good prerequisites for carrying out the daily work
- The interaction between people promotes the formation of a
shared understanding of safety as well as situational awareness
of ongoing activities

The above-mentioned dimensions can be seen as criteria in an
organizational evaluation. If an organization shows all the above-
mentioned characteristics, it has a high-level safety culture and thus a
high potential for managing its activities safely. In practice, however,
organizations vary in the degree to which they value safety and are
motivated to achieve it. Furthermore, the risk and safety conceptions
of the personnel are usually partially accurate and partially flawed. Thus
the indicators have to reach the social and structural aspects of the
organizations and provide information on how well the organization
is able and willing to carry out its core task. Especially important in
this regard is to identify those aspects of the organizational ability that have
vulnerabilities or can create vulnerabilities elsewhere in the
organization.

Reiman and Oedewald (2009) propose that when evaluating an
organization and its safety culture, four main elements of an
organization should be taken into account. Three of these are the
organizational functions, the social processes and the psychological
properties of the personnel (see also Reiman et al., 2008). The basis
for the criteria used in the evaluation is the fourth element of the
organization: the organizational core task and production technology.
This is the source of the inherent hazards of the sociotechnical system.
Organizational evaluation is one means of providing safety
performance indicator data. Thus, the criteria used in organizational
evaluation can also be used when considering the question of what the
safety performance indicators should aim at indicating.

5.2 Monitoring safety culture in the sociotechnical system
Adopting the view of organizational safety culture described in
Section 5.1 has implications for safety performance indicators. The
framework is based on the presence of certain organizational attributes
instead of the absence of indications of harm. Thus, the selected
safety indicators should also be able to show the presence of certain
dimensions and measure their level. We argue that the preoccupation
with the concepts of harm and accident in the discussion on indicators
has led to a neglect of the critical issue worth indicating: the
functioning of the sociotechnical system, including the way it is
currently producing safety (not necessarily – or hopefully – harm and
accidents).

We argue that lagging indicators do not tell about the safety level of
the system or the dynamics of the system's functioning. Instead, lag
indicators only tell about the outputs of the system. These outputs are
produced by the internal dynamics of the various organizational
dimensions, influenced by external variability and chance. Likewise,
leading indicators are not only indicators of something that precedes
harm, as they have been conceptualized in frameworks based on
epidemiological accident models (cf. Hale, 2009). Leading indicators
either influence safety management priorities and the chosen actions
for safety improvement, or they tell about the dynamics of the
sociotechnical system (not about the inputs to the system or merely
about the functioning of safety barriers). These leading indicators are
labelled drive indicators and monitor indicators in this report,
respectively.

The distinction between lead and lag indicators can be illustrated with
the help of Hollnagel’s (2008, p. 70) feedforward model of safety
management. Hollnagel (2008) argues that more emphasis needs to be
put into controlling the system by anticipated or expected disturbances
and deviations (feedforward) instead of actual outcomes (feedback).
In figure 4 we have created a model loosely based on Hollnagel's
ideas (2008) to illustrate the three types of indicators: leading drive
indicators (feedforward), leading monitor indicators, and lagging
feedback indicators.

[Figure 4 omitted: a diagram of the sociotechnical system linking the safety
model and safety boundaries, the conception of the current safety level,
risk control and safety development, drive indicators, monitor indicators,
actions and safety measures, sociotechnical activity, outcomes, and
feedback indicators, within an environment of external variability.]

Figure 4. The sociotechnical system model indicates the influence of various
organizational elements on the selection and utilization of safety performance
indicators. The model differentiates three types of safety indicators. The
"outcomes" in the model indicate situation-specific outputs of the system and
not emergent properties of the system such as nuclear safety.

Figure 4 illustrates that the safety model prevalent in the organization
creates the criteria that the organization uses in making interpretations
about the current level of nuclear safety. This conception of the current
safety level influences the goals that the top management sets for the
organization to achieve. These goals in turn influence what criteria are
selected for the drive indicators. The selection of drive indicators is
influenced by two parallel organizational functions: that of risk
control and that of safety development. Drive indicators are turned
into actions that influence the sociotechnical activity. Monitor
indicators provide a view on the dynamics of the system in question:
the activities taking place, the abilities, skills and motivation of the
personnel, routines and practices – the organizational potential for
safety. After this potential has actualized in specific situations into
outcomes, the feedback indicators can provide a view on the outputs
of the sociotechnical system. Figure 4 differentiates the following nine
elements:

Safety model and safety boundaries: This means the underlying,
often implicit model of what safety is and how it is achieved in an
organizational context. Safety boundaries refer to the perceived
hazards of the organization and the space that these hazards leave for
carrying out activities safely. Even though each employee has their
own more or less uniform model of safety, the element in figure 4
refers to the model of the people involved in the selection and
utilization of safety performance indicators. The safety model defines
the risks that are perceived and it is thus “the Achilles heel of
feedforward control” (Hollnagel, 2008, p. 68). Disturbances that are
not acknowledged or foreseen in the model will not be transformed
into drive indicators or corresponding safety interventions. For
more information on safety models, see e.g. Hollnagel (2004, 2008),
Reiman and Oedewald (2009) and EPRI (2000, appendix C).

Conception of current safety level: The conception of the current
safety level refers to the views on the level of safety at the power
plant held by the top management and other people involved in
selecting and interpreting safety indicators. As with the safety model,
the conception is seldom homogenous within the group in charge of
safety indicators, but for clarity’s sake the figure presumes these
conceptions can be grouped together. The conception of the safety
level influences the goals that are set for the drive indicators as well
as safety interventions (how much of a gap is perceived between the
present state and an ideal state).

Risk control: This means the organizational approach aimed at
controlling the variance in human behaviour and technological
performance by means of various safety barriers. Safety barriers can
be physical, functional, symbolic or incorporeal (Hollnagel, 2004).
Physical barriers include the containment building in the nuclear
power plant as well as walls, doors, valves, fences, safety belts, filters
and so on. A functional barrier system works by impeding the action
to be carried out by setting preconditions that have to be met before an
action can be carried out (e.g. a lock). Symbolic barriers require an act
of interpretation in order to achieve their purpose (e.g. signs, signals).
Finally, incorporeal barriers lack material form or substance and
depend on the knowledge of the user. Typical incorporeal barriers are
rules, guidelines, safety principles, restrictions and laws. (Hollnagel,
2004.)

Safety development: Safety development refers to the organizational
approach aimed at improving the organizational conditions for
achieving safety. Safety development can focus on improving the
processes of the organization as well as enhancing the personnel’s
awareness and understanding concerning the work that they and other
members of the organization do. Instead of constraining behaviour,
safety development aims at building up the know-how and other
prerequisites for the personnel to do their work well and safely in
changing situations. Both risk control and safety development are
needed to manage safety.

Drive indicators: Drive indicators are measures of the fulfilment of
the selected safety management activities. Thus, they are chosen
priority areas of the organizational safety activity. They are based on
potential safety activities from the safety model and the priority areas
defined by the safety policy. The drive indicators are turned into
control measures that are used to manage the sociotechnical system:
change, maintain, reinforce, or reduce something. The main function
of the drive indicators is to direct the sociotechnical activity by
motivating certain safety management activities.

Monitor indicators: These indicators reflect the potential and
capacity of the organization to perform safely. The indicators monitor
the functioning of the system, including but not limited to the efficacy
of the control measures. These indicators monitor the internal
dynamics of the sociotechnical system.

Feedback indicators: Feedback indicators measure the outcomes of
the sociotechnical system. An outcome means a temporary end result
of a continuous process or an organizational activity. An important
qualifier of an outcome is that an outcome always follows something;
it is a result or consequence of some other factor or combination of
factors and circumstances.

Sociotechnical activity: Sociotechnical activity refers to all the
activities, work, tasks and processes (physical and social) taking place
in the sociotechnical system.

Sociotechnical system: The common term for an organization
composed of people and technology. The name is a reminder that
technology is always designed, used and maintained by people, and
that people do not act in a social and technical vacuum but rather in a
sociotechnical context with its shared norms and tools. The safety
performance indicators should provide information on the
sociotechnical system and its capability for safety. The challenge
comes from the fact that safety performance indicators are always
selected and utilized within the same system that they are supposed to
measure.

In addition to the nine elements the figure includes “outcomes” as
outputs from the sociotechnical system and “environmental
influences” as inputs into the system. Outcomes are situational end
results, or the situational actualization of the safety potential of the
organization. Thus, safety is not an outcome. Safety is a dynamic
non-event, and non-events are not possible to characterize. Thus, we
have to look at the term ”dynamic” and search for the way the non-
event is created, and acknowledge that we can never reach the non-
event itself.

Environmental influences refer to deviations and disturbances beyond
the control of the organization. These deviations still have an effect on
the situational performance and outcomes of the sociotechnical
system, for better or worse.

Figure 4 illustrates that the underlying safety model provides the
potential control mechanisms as well as a view on potential safety
improvement activities. These areas are then tackled with drive
indicators in terms of priority areas of safety development, corrective
measures for deficiencies in existing safety barriers, or the
implementation of new safety barriers. What has been omitted from
figure 4 is the feedback of information from the indicators into the
safety model and the two safety management strategies. Figure 5
illustrates the information and feedback that each indicator type
provides.

[Figure 5: the same diagram as figure 4 with information-transfer
lines added: the monitor indicators feed “information on current
activities” back to safety development, and the feedback indicators
feed “feedback on the effectiveness of risk control” back to risk
control.]

Figure 5. Sociotechnical system model of lead and lag indicators with the
information transfer lines added

In figure 5, it is worthwhile to note that there are no lines from the
feedback indicators to the conception of safety level or to safety
development. In practice, feedback indicators are often used to define
safety priorities or to draw conclusions about the level of safety. That
is not a correct use of the feedback indicators. These function only
within the predefined risk control framework, fine-tuning and
adjusting the selected safety barriers and making corrective actions to
safety systems. The influence on the safety model and on the
understanding of the current (and future) safety level should always
go through the monitor indicators (see also Figure 7).

However, feedback indicators can provide clues about the functioning
of the organization - if they are analyzed from that perspective. When
used in this manner, indicators indicating a small event in terms of
outcomes (e.g. an unplanned reactor scram) might tell more about the
current functioning of the system than indicators that show a large
event (for example, a partial loss of cooling accident). This is due to
the fact that large events already change the sociotechnical system:
they have immediate consequences for the technical systems, they are
interpreted and made sense of by the personnel, and investigations and
other initiatives to prevent the event from recurring are made. Smaller
events easily go unnoticed in the sociotechnical system, and thus by
inspecting more closely (with the use of monitor indicators and other
data) what led to these events, organizations can learn a lot about the
dynamics of their organization.

Figure 6 shows examples of lagging indicators as well as the two
types of leading indicators – monitor and drive indicators.

[Figure 6: a matrix of example indicators by system level (rows) and
indicator type (columns: lagging - feedback; leading - monitor;
leading - drive). Technology: lagging - unplanned scrams, INES rated
incidents, unavailability of safety systems etc.; monitor - the current
condition of safety systems. Organization: lagging - what near-misses
have happened, how the organization has reacted, event reports etc.;
monitor - how adequate the safety management system is, how good
practices the organization has, etc.; drive - quality of organizational
safety management activities: change management, risk management,
leadership, hazard identification etc. Personnel: lagging - how good
the behaviour of the personnel regarding safety issues has been,
occupational accidents, injuries etc.; monitor - how motivated and
responsible the personnel are, how well hazards are understood etc.]

Figure 6. Examples of lag and lead indicators (for more examples see
appendixes A, B and C).

As proposed by IAEA (2000), the selection of safety performance
indicators should always start by considering what is required from an
organization or an NPP to perform safely. When focusing specifically
on leading safety indicators, the basic question is: what is required
from an organization in order to be aware of its safety level and
enhance its safety performance? Interestingly, this is what safety
culture studies have been trying to find out for years. In fact, several
writers have connected the concept of leading safety performance
indicators to the safety culture concept and proposed the use of safety
culture or climate as a leading safety indicator (cf. Mearns, 2009;
Grabowski et al., 2007, see also Zwetsloot, 2009, 495). It is both
practical and economical to consider safety indicators and safety
culture indicators together, not as separate measurement and
improvement tools that in the worst case are collected and handled by
different actors in the organization.

6 Framework for the selection and use of safety performance indicators
6.1 The role of indicators in safety management
The selection strategies for the three indicator types differ. The
monitor indicators should be chosen based on an analysis of the
functioning of the sociotechnical system in question (an operational
nuclear power plant, for example) and the identified key success
factors. Feedback indicators should be chosen based on the
identification of critical signals of increased risk as well as other
unwanted negative events. Even if occupational accidents do not
necessarily bear a relation to nuclear safety, they are unwanted
negative events and as such worth measuring. Only for the
drive indicators does the typical advice given in safety indicator
guidance documents apply: they should be selected to reflect the key
issues of concern and the priority areas of the organization. In that
way, several potential drive indicators can be prioritized according to
the current needs of the organization. Each year the drive indicators
can be adjusted depending on the issues to address as well as the
findings from the monitor indicators.

The indicator types can also be connected: The organization can select
some key area of concern as a drive indicator, e.g. competence
management, and then identify monitor and feedback indicators that
would allow a follow-up on the progress of competence management
activities (for examples of lead drive indicators of competence
management, see Appendix A of this report). Monitor indicators could
be the amount and quality of training that the organization gives as
well as the general knowledge level of the personnel (operationalized
as e.g. number and types of degrees among the personnel, test scores,
etc). Feedback indicators could be, e.g., the types of root causes found
from incidents (whether competence related or not), annual
performance evaluations done by superiors and increase in the quality
of work.
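
To make this linkage concrete, the following minimal sketch (an
illustration in Python; the names and metrics are hypothetical
examples based on the competence management case above, not a
prescribed set) shows one way of representing a drive area together
with the monitor and feedback indicators chosen for its follow-up.

from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    kind: str                 # "drive", "monitor" or "feedback"
    metrics: list = field(default_factory=list)   # operationalizations

# Hypothetical indicator triple for the competence management example
competence_management = [
    Indicator("Competence management and training", "drive",
              ["clear training objectives exist",
               "refresher courses held as planned"]),
    Indicator("Training amount and quality", "monitor",
              ["training days per person", "test scores"]),
    Indicator("Competence-related outcomes", "feedback",
              ["share of incident root causes that are competence related",
               "annual performance evaluations"]),
]

for ind in competence_management:
    print(f"{ind.kind:8s} {ind.name}: {ind.metrics}")

Kept together like this, a drive area and its follow-up indicators can
be reviewed as one unit rather than as unrelated measures.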

Characteristics of effective safety performance indicators in managing
safety are (Dupont; Hale 2009, p. 480):

- The indicator is valid; i.e. it measures what it intends to measure

- The indicator is reliable

- The indicator is sensitive to changes in what it is measuring

- The indicator is not susceptible to bias or manipulation

- The indicator is cost effective

- The indicator is interpreted by different groups in the same way

- The indicator is broadly applicable across company operations

- The indicator is easily and accurately communicated

Selection of safety indicators should always start from the
consideration of what are the key issues to monitor, manage and
change. Only after these issues have been identified should one start to
define safety management actions that seek to address the key issues
as well as indicators to help the process. The safety indicators are
utilized as part of the safety management process, not as an
independent goal or function as such. The role of the safety
performance indicators is to provide information on safety,
motivate people to work on safety and contribute to change
towards increased safety.

6.2 The selection of key safety performance indicators

When selecting the indicators it is important first to consider what
needs to be monitored and not how these are monitored (OECD 2008,
p. 17, see also EPRI, 2000). Otherwise the selection of indicators can
be biased by relying on what is considered possible or convenient
to measure, and not on what information needs to be obtained about
the safety level of the organization. The operationalization of the
indicator is herein called a “metric” (sometimes called a “measure”)
and the difference between metrics and indicators is illustrated in
figure 8.
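
As a minimal illustration of the difference (assumed example data; the
metrics and the simple averaging are illustrative assumptions, not
taken from any guidance document), several metrics can
operationalize a single indicator, whose value is then inferred rather
than observed directly:

# Aggregate normalized metric readings (0..1) into one indicator score.
def indicator_value(metric_values):
    values = list(metric_values)
    return sum(values) / len(values)

# Hypothetical metrics operationalizing the indicator "safety communication"
metrics = {
    "share of meetings with a safety topic": 0.7,
    "personnel informed of safety level (survey)": 0.6,
    "feedback given on reported near-misses": 0.8,
}
print(round(indicator_value(metrics.values()), 2))  # 0.7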

Grote warns about relying only on indicators for which data is easily
available: “focus on frequency may lead people to focus on indicators
purely because they are frequent, but which happen to be completely
irrelevant for increasing [production] safety” (Grote 2009, p. 478). An
example would be counting and trending the amount of trash found in
the plant area; it might give an indication of the housekeeping
practices of the organization, but does not necessarily bear any
relation to process safety. Another, rarer phenomenon is the presence
of foreign particles (“trash”) in the process. Although fortunately most
power plants should find it difficult to make a reliable trend out of
these findings, the few instances nevertheless provide an important
lagging indicator about the state of the safety culture at the
organization.

Woods (2009, p. 499) reminds us about the lesson from the Columbia
accident investigation: “Organizations need mechanisms to assess the
risk that the organization is operating nearer to safety boundaries than
it realizes – a means to monitor for risks in how the organization
monitors its risks relative to a changing environment”. This
monitoring of how well the organization is monitoring its risks
(second-order monitoring or metamonitoring) is an important yet
difficult endeavour. Some monitor indicators provide information on
the ability of the organization to monitor its risks adequately – for
example, mindfulness and vigilance (and especially the potential
discrepancy between external and internal audit findings, see
Appendix B) provide information on organizational blind spots. Also,
the “understanding of hazards” and “understanding of the
organizational core task” indicators provide information on the ability
of the organization to correctly spot the hazards and evaluate their
risks in relation to the tasks that they carry out.
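
One such metamonitoring metric could be sketched as follows (a
hypothetical example: the finding lists and the idea of using the share
of externally found but internally missed findings are illustrative
assumptions):

# Findings raised by internal audits versus an external review
internal_findings = {"procedure gaps", "alarm handling"}
external_findings = {"procedure gaps", "alarm handling",
                     "contractor induction", "ageing cables"}

# Findings the organization did not see itself suggest blind spots
missed = external_findings - internal_findings
blind_spot_share = len(missed) / len(external_findings)
print(missed, round(blind_spot_share, 2))  # 0.5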

The monitoring of the organizational capability for monitoring its
risks can also be done by comparing the effect of drive indicators on
the feedback and monitor indicators. If there is no effect or the effect
is not in line with the goals of the drive indicators, the indicators and
the safety management methods might be based on an inadequate
model of safety. This is illustrated in figure 7, where a process model
of the selection and utilization of safety performance indicators is
presented.

[Figure 7: process flow. Starting from the safety model, safety
boundaries and safety policy: 1. Define the key issues to manage.
2. Define safety management activities. 3. Define drive indicators
(what should be emphasized), monitor indicators (what should be in
place or taking place) and feedback indicators (what would indicate a
change in performance). 4. Collect and analyze data (have the selected
issues been emphasized; have the drive indicators changed the system;
have the monitor indicators predicted performance). 5. Act on the
findings: 5a. if the results of step 4 are inconsistent, correct the
indicators; 5b. based on feedback indicators, correct the selected
activities; 5c. based on monitor indicators, change priorities; 5d. if the
results from step 4 remain inconsistent, correct the underlying model
of safety.]

Figure 7. Process model for selection and utilization of safety performance
indicators

As illustrated in figure 7, the process for the selection and utilization
of safety performance indicators starts by defining the key issues to
manage. This definition is influenced by the underlying safety model.
The second step consists of defining safety management activities
based on the key content issues to be managed. These activities are
concrete initiatives, methods or practices that the organization carries
out in order to assure its safety.

Step three is the actual selection of indicators. Key questions to ask at
this step are 1) what issues or content areas should be emphasized in
the organization (define drive indicators for them), 2) what systems
and structures should be in place and what processes should be
happening (define monitor indicators for them) and 3) what would
indicate a change in performance (define feedback indicators for
them).

Step four is the ever-ongoing step of collecting and analysing the
indicator data. This step is challenging, and wrong conclusions from
the indicators can contribute to a decline in the safety level by e.g.
misaligned safety activities or a false belief in the efficacy of the
preventative measures already taken. Sections 6.3 and 6.4 provide
some guidance on interpreting indicator data. In terms of monitor
indicators the crucial thing is to gather information on the current
functioning of the sociotechnical system. This requires data on the
technical condition of the plant, group processes at the organization,
organizational factors and human resources (called “psychological
properties” below).

Step five follows the analysis of the indicator data. At this step
corrective or preventive actions are taken based on the findings. If the
results of the indicators are inconsistent, the indicators have to be
corrected (step 5a). This can mean, for example, that monitor
indicators show a steady decline in safety level despite drive
indicators showing successful emphasis on the chosen safety
management areas, or that the feedback indicators show an increasing
number of events while the monitor indicators have not changed. In
such a situation all the indicators have to be analysed and their
rationale and underlying model questioned. If the inconsistencies are
big enough, the process should return to step one. The feedback
indicators provide information that can be used in correcting safety
management activities (step 5b). This means, for example, conducting
a root cause analysis for an event and defining corrective measures
and corresponding drive indicators for facilitating the implementation
of the measures. Monitor indicators provide a view on the current
safety level of the organization and point to the necessary changes in
priorities if the safety level shows signs of degradation (5c). Finally, if
the three types of indicators consistently show inconsistent results, the
underlying model of safety might be flawed (5d). For example, if the
plant has numerous events and near-misses even when the monitor
indicators claim a high level of safety, the monitor indicators might be
based on too narrow a conception of safe performance. To conclude,
the selection and utilization of safety performance indicators is a
continuous process where all three types of indicators are analysed
and fine-tuned to better correspond with reality (cf. EPRI, 2000).

Dyreborg (2009, p. 475) also points out the important distinction
between the necessary countermeasures for lead and lag indicators:
“Decreasing lead indicator performance levels calls for improvement
of existing risk control parameters, whereas decreasing lag indicator
performance levels without such a lead indicator decrease, calls for a
revision of the risk control, i.e., reconsidering the causal relation
between lead and lag indicators.” In our model this means that if
feedback indicators show a decrease without explanation from the
monitor indicators, the underlying safety model might need revising.
Correspondingly, a decrease in monitor indicators requires
improvement of the safety management activities directed by the drive
indicators.
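
The interpretation rules of steps 5a-5d and Dyreborg’s distinction can
be summarized, with heavy simplification, in the following sketch
(assuming each indicator family has already been judged as
“improving”, “stable” or “declining”; that judgment itself is the hard
part and is not automated here):

def interpret(drive, monitor, feedback):
    # 5a: drive goals met while monitor declines - indicator set suspect
    if drive == "improving" and monitor == "declining":
        return "5a: re-examine and correct the indicators"
    # 5d / Dyreborg: lag decline without a lead decline - revise safety model
    if feedback == "declining" and monitor != "declining":
        return "5d: revise the underlying model of safety"
    # 5b: lag decline explained by the lead decline - correct activities
    if feedback == "declining":
        return "5b: correct the selected safety management activities"
    # 5c: lead decline alone - change priorities before outcomes suffer
    if monitor == "declining":
        return "5c: change the priorities of safety development"
    return "continue data collection and follow-up"

print(interpret(drive="improving", monitor="declining", feedback="stable"))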

Concise summary lists of potential safety performance indicators are
presented in Appendixes A, B and C. The lists should be considered a
pragmatic tool to guide attention to the relevant aspects, not a
formal auditing check list or an indicator set.

Drive indicators are categorized as follows (see Appendix A):

- Technology management

o Process for hazard identification and risk management

o Process for design and engineering

o Process for plant life management

- Leadership

o Management safety leadership

o Superiors’ safety activity

o Safety communication

- Work management

o Communication and cooperation practices

o Process for work and procedure management

o Resource management

o Practices of organizational learning

- Human resource management

o Competence management and training

o Integration of competence

o Subcontractor management

- Strategic management

o Setting of safety policy and safety goals

o Operation and maintenance of the plant

o Change management

o Contingency planning and emergency preparedness

Monitor indicators are categorized in the following manner (see
Appendix B):

- Organization and management

- Psychological states and conceptions

- Social processes

- Technical condition of the plant

Feedback indicators are grouped into four categories (see Appendix C):

- Systems, structures and components

- Human factors

- Process safety performance

- Organizational safety performance

There should be fewer monitor indicators than drive indicators in any
given organization. This is due to the fact that all the monitor
indicators should be followed regularly, whereas drive indicators are
selected depending on prioritization and the specific needs of the
organization. Too many indicators would create an information
overload. Nevertheless, the number of indicators should be sufficient
to provide a reliable view on the status of safety culture and system
safety at the organization. Thus, the indicators presented in
Appendixes A-C are not all meant to be taken into use; rather, they
represent the scope of potential indicators. Also, the indicator lists
should not be considered exhaustive in terms of covering all potential
or even necessary indicators for guaranteeing nuclear safety.

6.3 Relation of monitor indicators to performance

Safety performance indicators are just what the name implies:
indicators of safety performance. As such, the indicators themselves
are not that important. More important is what they tell about safety
performance, i.e. what they are indicating. Problems occur when
management is driven by a goal of optimizing the indicators and not
the phenomena underlying them. Hopkins (2009, p. 464) calls this
“managing the measure rather than managing safety”. In such a case
the indicators are no longer indicating what they were supposed to
indicate. They become loosely coupled to the phenomenon of interest.
This means that they still have a connection to safety performance, but
the connection is neither direct nor just one of indication – the act of
optimizing a certain indicator also has an effect on the underlying
phenomena. This effect might show in other indicators, or it might
remain hidden as a latent factor in the organization. The effect of
managing the measure instead of safety differs depending on the type
of the measure: leading, lagging, activity or outcome. Hopkins (2009)
argues that activity indicators (as opposed to outcome indicators) are
most susceptible to management, since it is possible to reduce their
quality without sacrificing their quantity, e.g. by taking more people
into training at the same time. However, this critique presupposes that
indicators are always quantitative.

[Figure 8: each indicator (1-3) is inferred, with measurement error
“e”, from several metrics (e.g. metrics 1.1-1.n for indicator 1), and
each indicator is in turn an indication, again with error, of an
underlying phenomenon (1-3). The phenomena, together with
phenomena that are not measured at all (phenomenon 4) and external
variability, influence efficiency, safety and wellbeing.]

Figure 8. The relation between metrics (how something is measured),
indicators (what is being measured), phenomena (what is the indicator an
indication of) and safety. Dotted arrows indicate that something is inferred
from something else (with the associated measurement error). Straight
arrows indicate that one thing influences the other thing.

The relation of a safety performance indicator and safety can be
clouded by various factors depicted in Figure 8:

- The indicator can be a valid indicator of the underlying phenomenon,
but the phenomenon does not bear a relationship to nuclear safety

- The indicator can be a valid indicator of the underlying phenomenon,
but the effect of the phenomenon on system safety is clouded by the
effect of other relevant phenomena (this is the problem with most
indicators)

- The indicator as such can be a valid indicator of the underlying
phenomenon, but the operationalization (metric) of the indicator is
such that the measure has a high degree of error (calibration error,
hesitancy in reporting, optimizing the score instead of attending to the
phenomenon, etc.)

- The phenomenon in question cannot be accurately measured by one
indicator; rather, multiple indicators are needed

- There are multiple causal links and directions; e.g. a careless
attitude toward personal risks causes occupational accidents (lag
indicator) that decrease overall employee wellbeing (lead indicator)
and trust in the safety management systems (lead indicator), with a
combined effect of decreased system safety and an increase in unsafe
behaviours (lag indicator)

For example, occupational accidents can tell about the state of process
safety as measured by e.g. the number of reactor scrams and
development initiatives, or the use of human performance tools. This
is due to the fact that these are all affected partly by the same
underlying phenomena. In this case the underlying phenomena could
be workplace norms concerning thoroughness and proficiency. Still,
one cannot decipher solely from an increase in occupational accidents
that there is a problem with norms. Norms are only one possible
explanation and there is need for corroborative evidence from other
indicators before making any judgments.
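
The corroboration requirement can be illustrated with a small sketch
(the signals and the threshold of two agreeing indications are
assumptions for illustration only):

# Indications that could corroborate a "degrading norms" hypothesis
signals = {
    "occupational accidents trending up": True,
    "use of human performance tools trending down": True,
    "survey result on norms of thoroughness declining": False,
}

corroborating = sum(signals.values())
if corroborating >= 2:
    print("Hypothesis worth investigating further")
else:
    print("Insufficient corroboration - look for other explanations")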

6.4 Making inferences about the level of safety

In modern, complex safety-critical organizations accidents often result
from a combination of various circumstances, deficiencies and
variabilities in performance which by themselves would have been
harmless (Hollnagel 2004). This represents a challenge for safety
performance indicators, since they are always piecemeal and
abstracted from the everyday work. If an organization where all the
indicators suggest a good level of safety can suffer a major accident,
what use are safety performance indicators in the first place? This fact
emphasizes the importance of having leading indicators that focus on
development: safety can never be guaranteed by relying on lagging
indicators; rather, it needs a continuous focus on lagging indicators of
past deficiencies, leading indicators of current technical,
organizational and human conditions, and leading indicators of
technical, organizational and human processes that drive safety
forward.

The above example also emphasizes the use of multiple indicators
to evaluate system safety and the recognition of the limitations of the
indicators used. The value of any one individual indicator may be of
no significance if treated in an isolated manner, but may be important
when considered in the context of other indicators (IAEA, 2000). As
Mearns (2009) points out, indicators do not necessarily represent
reality, but are an attempt to reflect the truth in the form of multiple
and different forms of data. Ale (2009, 470) compares industrial safety
indicators to a physical examination in health care. Body temperature
is a good indicator of a person’s health, as are pulse rate and blood
pressure. A medical examination often starts with checking these vital
statistics. However, sometimes a good state of these indicators does
not suffice to be certain that there is not something wrong. For
example, a broken bone may not change these vital statistics. These
statistics also show large variability over individuals. An indicator is
thus always “just” an indicator. Its actual meaning needs to be thought
through carefully. As IAEA (2000, 1) points out in its TECDOC, the
actual values of the indicators are not intended to be direct measures
of safety. Instead, safety performance can be inferred from the results.
EPRI (2000) also sees interpreting the meaning of indicator data as the
most essential step in the process of using leading indicators. Yet,
according to EPRI’s case studies, interpretation is also the point where
the process of using leading indicators is most likely to falter. Often
the data collection process assumes primary importance at the expense
of interpretation. EPRI recommends that leading indicator data should
be addressed in quarterly meetings of the management steering group
and other interested personnel in order to understand the big picture.
EPRI highlights the fact that data do not think, people do. The
indicator data as such are not interesting; it is the group work in
interpreting the data that produces the meaningful outcomes in the
process of utilizing leading indicators.

When making inferences, one of the biggest questions is the standard
against which the indicator is evaluated. Comparison with others is
one of the ways of interpreting the meaning of indicator results: if one
is in the worst quartile, the indicator shows a low level of safety in
comparison to the plants in the highest quartile. This necessitates that
the phenomenon that is being measured has a normal distribution
within the population of all organizations. Otherwise even a bad result
can look good if the other organizations score even worse. Thus,
relying on absolute scores is often a better option.
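
The difference between the two standards of evaluation can be seen in
a small numerical sketch (all scores and the criterion are made up):
the same result can look acceptable relative to peers while failing an
absolute criterion.

peer_scores = [42, 47, 55, 61, 63, 66, 70, 74]   # e.g. a survey score 0-100
own_score = 61
absolute_criterion = 70                           # assumed target level

share_below = sum(s < own_score for s in peer_scores) / len(peer_scores)
print(f"better than {share_below:.0%} of peers")
print("meets absolute criterion:", own_score >= absolute_criterion)  # False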

Timescale is another variable to be considered when making
inferences: often both external comparison and internal assessment are
based on trends. This means that if the performance shows a steady
regression along a certain timeline (that is not happening at peer
organizations), there is cause for concern. Again, trending is relative,
not absolute, and judgment is based on extrapolating past performance
into the future. Another way of trending is to project current
organizational activities into the future and make changes to
counteract, maintain or strengthen those projected trends. This
requires a good model of the organization and can be considered an
instance of the feed-forward strategy advocated by Hollnagel (2008).
Whatever the strategy for making inferences, it has to be remembered
that few if any of the indicators are totally independent of one another.
They are all measures of safety culture and probably have some
correlation with each other.
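
Trend-based inference itself is simple to mechanize, as in the
following sketch with made-up quarterly values of a monitor indicator
(the interpretation of the slope, of course, is not mechanical):

quarters = list(range(8))
values = [0.82, 0.80, 0.81, 0.78, 0.76, 0.77, 0.74, 0.73]

# Least-squares slope and a naive one-year extrapolation
n = len(quarters)
mx = sum(quarters) / n
my = sum(values) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(quarters, values))
         / sum((x - mx) ** 2 for x in quarters))

print(f"slope per quarter: {slope:+.3f}")
print(f"projected four quarters ahead: "
      f"{my + slope * (quarters[-1] + 4 - mx):.2f}")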

7 Conclusions
The purpose of safety performance indicators is to provide
information on safety, motivate people to work on safety and
contribute to change towards increased safety in the organization.
Differentiation of safety performance indicators and safety culture
indicators is unnecessary, since they should measure the same
phenomena.

Safety indicators are tools for an effective safety management process.
Safety management needs a continuous focus on lagging indicators of
past deficiencies, leading indicators of current technical,
organizational and human conditions and leading indicators of
technical, organizational and human processes that drive safety
forward. Drive indicators are chosen priority areas of organizational
safety activity. They are based on the underlying safety model and the
potential safety activities and safety policy derived from it. Drive
indicators influence control measures that manage the sociotechnical
system: change, maintain, reinforce, or reduce something. Monitor
indicators provide a view on the dynamics of the system in question:
the activities taking place, the abilities, skills and motivation of the
personnel, routines and practices – the organizational potential for
safety. They also monitor the efficacy of the control measures that are
used to manage the sociotechnical system. Typically the safety
performance indicators that are used are lagging (feedback) indicators.
Besides feedback indicators, organizations should also acknowledge
the important role of monitor and drive indicators in managing safety.

When selecting the indicators it is important first to consider what
needs to be monitored and what the critical goals of the organization
are, i.e. the core task that needs to be taken care of. PRA should also
be utilised in identifying the most safety significant issues to monitor.
The selection and use of safety performance indicators is always based
on an understanding (a model) of the sociotechnical system and
safety. The safety model defines what risks are perceived. It is
important that the safety performance indicators can help in reflecting
on this model. Key questions to ask when selecting and utilizing
safety performance indicators are 1) what is required from the nuclear
power plant to perform safely and 2) what is required from the
organization in order to be aware of its safety level and enhance its
safety performance. The indicators should provide information on
whether these requirements are met, where the organization should
put more effort to meet the requirements and, finally, whether the
organization has an accurate view of the requirements.

The selection and utilization of safety performance indicators is a
continuous process where all three types of indicators are analysed
and fine-tuned to better correspond with reality. The safety
performance of the plant is always inferred from the data of all the
indicators analysed together. There is no direct correspondence
between one indicator and nuclear safety. Rather, the safety
performance indicators can provide a holistic view on the potential of
the nuclear power plant to guarantee nuclear safety and point out key
areas of concern where attention is required. This requires skill in
analysing the indicator data and interpreting the results within an
organizational-theoretical framework.

Acknowledgements

The authors would like to acknowledge the contribution and valuable
comments made by Ivonne A. Herrera and Pia Oedewald to the earlier
versions of this report.

References
Ale, B. (2009). More thinking about process safety indicators. Safety
Science, 47, 470-471.
Chakraborty, S. et al. (2003). Risk-based Safety Performance Indicators for
Nuclear Power Plants. Transactions of the 17th International Conference on
Structural Mechanics in Reactor Technology (SMiRT 17), Prague, Czech
Republic, August 17-22, 2003.
Dahlgren, K. (2008). Lessons learned from international experiences. HUSC
seminar, December 4th, 2008, Stockholm, Sweden.
Dekker, S.W.A. (2005). Ten questions about human error. A new view of
human factors and system safety. New Jersey: Lawrence Erlbaum.
Dyreborg, J. (2009). The causal relation between lead and lag indicators.
Safety Science, 47, 474-475.
EPRI (2000). Guidelines for trial use of leading indicators of human
performance: the human performance assistance package. 1000647. Palo
Alto, CA: EPRI.
EPRI (2001a). Final report on leading indicators of human performance.
1003033. Palo Alto, CA & Washington, DC: EPRI & U.S. Department of
Energy.
EPRI (2001b). Predictive validity of leading indicators: Human performance
measures and organizational health. 1004670. Palo Alto, CA: EPRI.
Flodin, Y. & Lönnblad, C. (2004). Utveckling av system för
säkerhetsindikatorer [Development of a system for safety indicators]. SKI
Rapport 2004:01.
Grabowski, M., Ayyalasomayajula, P., Merrick, J., Harrald, J.R., & Roberts,
K. (2007). Leading indicators of safety in virtual organizations. Safety
Science, 45, 1013−1043.
Grote, G. (2009). Response to Andrew Hopkins. Safety Science, 47, 478.
Hale, A. (2009). Why safety performance indicators? Safety Science, 47,
479−480.
Hollnagel, E. (2004). Barriers and accident prevention. Aldershot: Ashgate.
Hollnagel, E. & Woods, D.D. (2006). Epilogue – Resilience Engineering
Precepts. In E. Hollnagel, D.D. Woods & N. Leveson (Eds.), Resilience
engineering. Concepts and precepts. Aldershot: Ashgate.
Hollnagel, E. (2008). Safety management - looking back or looking forward.
In E. Hollnagel, C.P. Nemeth and S. Dekker (Eds.), Resilience Engineering
Perspectives, Volume 1. Remaining sensitive to the possibility of failure.
Aldershot: Ashgate.
Hopkins, A. (2009). Thinking about process safety indicators. Safety
Science, 47, 460−465.
Hopkins, A. (2009b). Reply to comments. Safety Science, 47, 508-510.
HSE. (2006). Developing process safety indicators. Health and Safety
Executive. HSE Books.
Hudson, P.T.W. (2009). Process indicators: Managing safety by the
numbers. Safety Science, 47, 483-485.

IAEA (1999). Safe management of the operating lifetimes of nuclear power
plants. INSAG-14. Vienna: IAEA.
IAEA (2000). Operational safety performance indicators for nuclear power
plants. Vienna: IAEA.
IAEA (2002). Self-assessment of safety culture in nuclear installations.
Highlights and good practices. IAEA-TECDOC-1321. Vienna: IAEA.
IAEA. (2003). Periodic safety review of nuclear power plants. Safety
Standards Series No. NS-G-2.10. Vienna: IAEA.
IAEA (2006). The management system for facilities and activities. Safety
Requirements No. GS-R-3. Vienna: IAEA.
IAEA (2008). SCART Guidelines. Reference report for IAEA Safety Culture
Assessment Review Team (SCART). Vienna, February 2008.
Kainulainen, E. (Ed.) (2009). Regulatory control of nuclear safety in Finland.
Annual report 2008. STUK-B 105. Helsinki: STUK.
Kjellén, U. (2009). The safety measurement problem revisited. Safety
Science, 47, 486-489.
Mearns, K. (2009). From reactive to proactive – can LPIs deliver? Safety
Science, 47, 491−492.
OECD. (2003). Guidance on safety performance indicators. OECD
Environment, Health and Safety Publications. Series on Chemical Accidents
No. 11. Paris: OECD Publications.
OECD. (2008). Guidance on developing safety performance indicators
related to chemical accident prevention, preparedness and response. For
industry. Second Edition. OECD Environment, Health and Safety
Publications. Series on Chemical Accidents No. 19. Paris: OECD
Publications.
Rasmussen, J. (1997). Risk management in a dynamic society: A modelling
problem. Safety Science, 27, 183-213.
Reason, J. (1997). Managing the risks of organizational accidents.
Aldershot: Ashgate.
Reiman, T. & Oedewald, P. (2007). Assessment of Complex Sociotechnical
Systems – Theoretical issues concerning the use of organizational culture
and organizational core task concepts. Safety Science 45, 745-768.
Reiman, T. & Oedewald, P. (2008). Turvallisuuskriittiset organisaatiot –
Onnettomuudet, kulttuuri ja johtaminen [Safety critical organizations –
Accidents, culture and management]. Helsinki: Edita.
Reiman, T. & Oedewald, P. (2009). Evaluating safety critical organizations.
Focus on the nuclear industry. Swedish Radiation Safety Authority,
Research Report 2009:12.
Reiman, T., Pietikäinen, E. & Oedewald, P. (2008). Turvallisuuskulttuuri.
Teoria ja arviointi [Safety culture. Theory and evaluation]. VTT Publications
700. Espoo: VTT. Available from:
http://www.vtt.fi/inf/pdf/publications/2008/P700.pdf.
Reiman, T., Pietikäinen, E., Kahlbom, U. & Rollenhagen, C. (In press).
Safety Culture in the Finnish and Swedish Nuclear Industries – History and
present. NKS report.
Rollenhagen, C. (2010). Can focus on safety culture become an excuse for
not rethinking design of technology? Safety Science, 48, 268-278.

Step-Change in Safety (2001). Leading performance indicators: a guide for
effective use. Available at:
[http://www.stepchangeinsafety.net/stepchange/News/StreamContentPart.aspx?ID=1517]
Valtiovarainministeriö (2005). Indikaattorit ohjauksen ja seurannan välineinä
[Indicators as tools for steering and follow-up]. Valtiovarainministeriön
indikaattorityöryhmän raportti. Keskustelunaloite 73. Valtiovarainministeriö.
Kansantalousosasto.
WANO (2009). 2008 Performance Indicators. Available at:
[http://www.wano.org.uk/PerformanceIndicators/PI_Trifold/PI_2008_TriFold.pdf]
Weick, K. E. & Sutcliffe, K.M. (2007). Managing the unexpected. Resilient
performance in an age of uncertainty. Second Edition. San Francisco:
Jossey-Bass.
Woods, D.D. (2009). Escaping failure of foresight. Safety Science, 47, 498-
501.
Woods, D.D. & Hollnagel, E. (2006). Prologue: Resilience engineering
concepts. In E. Hollnagel, D.D. Woods & N. Leveson (Eds.), Resilience
engineering. Concepts and precepts. Aldershot: Ashgate.
Wreathall, J. (2009). Leading? Lagging? Whatever! Safety Science, 47,
493−494.
Zwetsloot, G.I.J.M. (2009). Prospects and limitations of process safety
performance indicators. Safety Science, 47, 495−497.

Appendix A: Examples of drive indicators
A concise summary list of potential leading drive indicators is presented
below. The list should be considered a pragmatic tool to guide attention to
the relevant aspects, not a formal auditing check list or an indicator set. The
main categories are based on Reiman and Oedewald (2009; see also Reiman
et al., 2008), and the specific contents of the categories include input from
OECD (2008) and IAEA (1999, 2000, 2002, 2003, 2008).
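
As one illustration of how the list below might be carried into day-to-day
use (the representation and the status values are our own example, not part
of the appendix itself), each indicator can be stored with its metrics so that
every metric receives a status in periodic reviews:

# Two abridged entries from the drive indicator list below
drive_indicators = {
    "Process for hazard identification and risk management": [
        "Proactive measures are in place to identify new hazards",
        "PRA is utilized in decision making",
    ],
    "Safety communication": [
        "Feedback is provided to personnel on near-misses and incidents",
    ],
}

# Flatten into a review sheet; None marks a metric not yet assessed
review = {metric: None
          for metrics in drive_indicators.values()
          for metric in metrics}
review["PRA is utilized in decision making"] = "in place"
print(review)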

Organizational functions

• Process for hazard identification and risk management (INDICATOR)

• Proactive measures are in place to identify new hazards and


improve existing safety measures (METRIC)

• PRA is utilized in decision making (METRIC)

• Hazard identification and risk assessments are used to develop


policies, procedures and practices (METRIC)

• Responsibilities for hazard identification are clear in the


organization (METRIC)

• Hazard identification deals with technical, human and


organizational issues in adequate depth (METRIC)

• Adequate barriers are set against the identified hazards (METRIC)

• Independent safety reviews are carried out regularly and


proactively (METRIC)

• Human performance tools are used in assessing the risks of


individual tasks (METRIC)

• Process for design and engineering (INDICATOR)

• There is an access to the appropriate tools and data for design and
engineering (METRIC)

• There is a procedure to ensure that key safety issues are addressed


in the design and engineering phase of the plant and its
components (METRIC)

• There is a procedure to maintain and update the plant design basis


documentation (METRIC)

• Process for plant life management (INDICATOR)

• Systematic ageing management programme exists (METRIC)

SSM 2010:07 48
• There is a procedure for the identification of possible degradation
mechanisms (METRIC)

• Operating experience and research are utilized in identifying plant


life management issues (METRIC)

• There is a long term plan for monitoring the condition of safety


critical components and assuring that safety functions remain
available in future (METRIC)

• There is a long term plan for maintaining the integrity of the


pressure vessel (METRIC)

• There is a procedure for repairing or replacing parts to prevent or


remedy unacceptable degradation (METRIC)

• Setting of safety goals and safety policy (INDICATOR)

• Safety policy is defined (METRIC)

• Safety policy is reviewed and updated regularly (METRIC)

• Clear safety goals are set (METRIC)

• Safety goals are relevant for the organization (METRIC)

• Safety goals are defined both for short and long term (METRIC)

• Personnel participate in setting safety goals (METRIC)

• There is an action program for reaching the safety goals


(METRIC)

• The action program includes responsibilities and accountabilities


(METRIC)

• Follow-up on safety goals is done on a regular basis (METRIC)

• Management safety leadership (INDICATOR)

• Owners of the power plant show a commitment to safety activities


(METRIC)

• Management is actively committed to, and visibly involved in,


safety activities (METRIC)

• Safety is a clearly recognized value at the organization (METRIC)

• Safety is a criteria in management decisions (METRIC)

• Conservative decision making is practiced in ambiguous situations


(METRIC)

• Positive feedback is given on safety conscious behaviour of the


personnel (METRIC)

SSM 2010:07 49
• Reporting of deviations, worries and own mistakes is encouraged
by the management (METRIC)

• Management invests financially in safety (METRIC)

• Immediate superiors’ safety activity (INDICATOR)

• Immediate superior supports the organizing of work and


management of daily routines (METRIC)

• Superior provides positive feedback on safety conscious behaviour


of the personnel (METRIC)

• Superior provides fair treatment of subordinates, understanding


that errors are natural, but not all violations can be tolerated
(METRIC)

• Superior monitors the personnel’s coping skills, stress and fatigue


levels as well as technical skills (METRIC)

• Reporting of deviations, worries and own mistakes is encouraged


by the management (METRIC)

• Safety communication (INDICATOR)

• Feedback is provided to personnel on near-misses and incidents


(METRIC)

• There is adequate information dissemination on safety issues


received from other organizations (METRIC)

• The personnel are reminded about safety issues in meetings and


internal communiqués (METRIC)

• The personnel are informed about the overall safety level and
current challenges on a regular basis (METRIC)

• Open communication on both positive and negative issues exists


in the organization (METRIC)

• There are both formal and informal communication channels for


raising safety concerns in the organization – up to the highest level
if necessary (METRIC)

• The safety significance of various rules and procedures is clearly


communicated to the personnel (METRIC)

• Communication and cooperation practices (INDICATOR)

• There are sufficient exchange opportunities for safety relevant


information within and between units (METRIC)

• Work climate supports team work and knowledge sharing


(METRIC)

SSM 2010:07 50
• Information that is relevant for work is easily accessible
(METRIC)

• The bottlenecks of information flow have been identified and


controlled (METRIC)

• Information flow in change of shifts situations is assured


(METRIC)

• Integration of the know-how of various professional groups


(INDICATOR)

• Professional groups appreciate each others’ competence and role


(METRIC)

• Variety of views and opinions are encouraged and decisions are


based on expertise not formal position (METRIC)

• Human and organizational factors are integrated into technical


investigations and projects (METRIC)

• The hands-on experience of technicians is utilised by foremen,


managers and engineers (METRIC)

• Different safety fields (occupational safety, process safety,


radiation safety, environmental safety, security) are coordinated
and their interfaces are considered (METRIC)

• Resource management (INDICATOR)

• The availability of sufficient workforce is ensured (METRIC)

• All the plant functions (maintenance, operations, engineering,


safety, administration, human resources) have sufficient resources
(METRIC)

• Tasks are allocated in a manner that promotes both work


motivation including skill development as well as the safe and
efficient carrying out of the given task (METRIC)

• Tools and instruments are appropriate and up-to-date (METRIC)

• Work conditions support safe work (METRIC)

• There is a system for ensuring that time pressure does not


compromise quality in safety-critical tasks (METRIC)

• Product and tool purchasing is based on knowledge of their


conditions of use as well as their potential hazards (METRIC)

• Human performance issues such as fatigue and communication are


taken into account in work schedule planning (METRIC)

• Process for work management and procedure management


(INDICATOR)

SSM 2010:07 51
• All areas of operation are covered by adequate and documented
procedures (METRIC)

• Procedures and instructions are up-to-date and revised accordingly


(METRIC)

• Revisions in procedures and instructions are communicated to the


users (METRIC)

• The safety relevance of the procedures and instructions is clearly


stated in them (METRIC)

• Procedures and instructions are clear and easily understood by


those who have to apply them (METRIC)

• The know-how of the “shop-floor” personnel is utilised in creating


and revising of rules and instructions (METRIC)

• Safety procedures are coordinated with or integrated in operating


procedures (METRIC)

• The discrepancy between formal rules and actual work is


monitored (METRIC)

• Work Permit System is implemented and continuously developed


(METRIC)

• The interfaces and interaction of various work processes is


identified (METRIC)

• Competence management and training (INDICATOR)

• An adequate system for identification of current competence


profiles exists (METRIC)

• There are clear objectives established for training programs


(METRIC)

• There is adequate training in (a) technical areas, (b) safety issues


including human factors and the nature of safety and accidents,
and (c) the uncertainties and potential hazards of nuclear power
(METRIC)

• There is a sufficient number of refresher courses on basic safety


and technical issues (METRIC)

• There is an adequate system for familiarization and induction of


new personnel (METRIC)

• There is a mechanism in place to ensure that the scope, content


and quality of the training programs are adequate (METRIC)

• Feedback is gathered from the trainees and it is utilized in


developing the training program (METRIC)

SSM 2010:07 52
• Competence is maintained for both new and old technology
(METRIC)

• Simulators and simulated operations are utilized in training


(METRIC)

• Operating events (own plant as well as outside) are utilized as


training material (METRIC)

• An adequate recruitment procedure exists for identifying


competence needs and selecting suitable candidates (METRIC)

• Operation and maintenance of the plant (INDICATOR)

• The plant is operated in a safe manner according to its technical


specifications (METRIC)

• There is a program of preventive maintenance in place and it is


revised according to maintenance history (METRIC)

• There is a system for documenting history data on equipment and


their maintenance actions (METRIC)

• History data is used in analysis of reliability and maintenance


needs of the equipment (METRIC)

• Condition monitoring for equipment is utilised to target preventive


maintenance (METRIC)

• Conservative decision making principle is applied in making


decisions about the operational safety of the plant (METRIC)

• External cooperation (INDICATOR)

• There are well-established channels for communication with the


national authorities (METRIC)

• There is a policy or procedure for cooperation and communication


with community organizations and the media (METRIC)

• There are well-established channels for communication and


system for supporting and funding external research on nuclear
safety related issues (METRIC)

• There is a well-developed system for communication and co-


operation with current and potential suppliers and customers to the
enterprise (METRIC)

• There is a well-developed system for sharing and discussing safety


related information with other safety-critical organizations
(METRIC)

• The organization actively participates in the international


cooperation on nuclear safety related issues (METRIC)

SSM 2010:07 53
• Contractor and purchase management (INDICATOR)

• There is a process for purchase of outside work (METRIC)

• Contractors are trained on safety culture issues and work practices


of the plant (METRIC)

• The know-how of the contractors’ personnel is ensured (METRIC)

• A record of contractor safety performance is utilised in decision


making concerning contracts (METRIC)

• Contractors have possibilities for expressing safety worries and


providing safety proposals on issues they notice (METRIC)

• The knowledge needed in-house is analysed and measures to


maintain it are taken (METRIC)

• There is a procedure for control of products including their


specifications and requirements as well as activities for inspection,
testing, verification and validation of the products (METRIC)

• Practices of organizational learning (INDICATOR)

• There is a comprehensive system for reporting incidents and other


learning experiences such as near misses (METRIC)

• There is a systematic corrective action program in place to deal


with deviations (METRIC)

• Operating experience is collected and analysed from other nuclear


power plants (METRIC)

• There exists practices for the identification of new vulnerabilities


(METRIC)

• There is a system for gathering development initiatives from the


personnel (METRIC)

• There is a system for investigation and analysis of internal


incidents that takes into account technical, human and
organizational factors in equal degree (METRIC)

• Development initiatives are carried out and followed upon


(METRIC)

• Daily work practices create an increasing awareness of the hazards


of the work (METRIC)

• Adequate reactive and proactive indicators of process safety and


safety culture have been defined and are followed up (METRIC)

• There is a system for analysing the common safety related


findings (trends, root causes, changes, variety of corrective
actions, generalizability to other components / equipment) from

SSM 2010:07 54
events, near misses and maintenance history at the organization
(METRIC)

• Internal and external safety assessments and audits are utilised to


improve safety performance (METRIC)

• Change management (INDICATOR)

• There is a clear definition of what constitutes a technical change or an organizational change (METRIC)

• The amount and pace of changes that the organization can handle is considered when planning changes (METRIC)

• There is a procedure for planning, implementing and following up on technical and organizational changes (METRIC)

• Technological changes are anticipated, and their risks are evaluated (METRIC)

• A risk assessment is done for organizational changes prior to committing to them (METRIC)

• Usability and maintainability issues of new technology, tools and modifications are considered already in the design and implementation stages (METRIC)

• Human and organizational factors are adequately considered in change management (METRIC)

• It is ensured that organizational memory is not lost in changes, through e.g. documentation and knowledge transfer (METRIC)

• Contingency planning and emergency preparedness (INDICATOR)

• The organization has an adequate on-site emergency preparedness plan (METRIC)

• There is regular training on emergencies on-site (METRIC)

• There is an adequate system for alerting within the enterprise as well as for externally alerting the authorities and the public (METRIC)

• The organization has provided adequate information on the potential hazards and accident scenarios to the public authorities such as first response personnel, police, military, medical facilities, and the environmental authorities (METRIC)

Appendix B: Examples of monitor indicators
A concise summary list of potential leading monitor indicators is presented.
The list should be considered a pragmatic tool to guide attention to the
relevant aspects, not a formal auditing checklist or an indicator set. The
main categories are based on Reiman and Oedewald (2009), and the specific
contents of the categories include input from OECD (2008), IAEA (1999,
2000, 2002, 2003, 2006, 2008) and Weick and Sutcliffe (2007). The
technical condition of the plant is not dealt with in this report due to its
plant-specific nature and the fact that the focus of this report is mainly on
human and organizational factors.

There need to be fewer monitor indicators than drive indicators, because all monitor indicators should be analysed and monitored regularly, whereas drive indicators are selected based on prioritization. Too many indicators would create an information overload. Nevertheless, the number of indicators should be sufficient to provide a reliable view of the status of safety culture and system safety in the organization.
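
To make this concrete, the sketch below shows one possible way of representing the INDICATOR/METRIC hierarchy used in this appendix and of checking that a monitor set stays small enough to be analysed regularly. It is a minimal illustration in Python; the class name, the example entries and the overload threshold of 30 metrics are assumptions made for the sketch, not part of the indicator framework itself.

    from dataclasses import dataclass, field

    @dataclass
    class Indicator:
        """One monitor indicator and the metrics followed under it."""
        name: str
        metrics: list = field(default_factory=list)

    # Illustrative subset of the monitor indicators listed in this appendix.
    monitor_set = [
        Indicator("Management system",
                  ["Quality and clarity of the safety policy and safety goals"]),
        Indicator("Sense of control",
                  ["Workload of workers is neither too high nor too low"]),
    ]

    def total_metrics(indicators, overload_limit=30):
        """Count metrics and warn when the set risks information overload."""
        total = sum(len(ind.metrics) for ind in indicators)
        if total > overload_limit:
            print(f"Warning: {total} metrics may be too many to analyse regularly")
        return total

    print(total_metrics(monitor_set))  # prints 2 for this illustrative subset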

Organization and management


• Management system (INDICATOR)

• The extent to which the management system aligns with and contributes to the achievement of organizational goals (METRIC)

• The quality and clarity of the safety policy and safety goals (METRIC)

• The quality and clarity of standards and expectations for safety behaviour (METRIC)

• The clarity of the organizational structure, including the extent to which roles and responsibilities have been clearly and unambiguously described (METRIC)

• The clarity of the description of how work is to be prepared, reviewed, carried out, recorded, assessed and improved (METRIC)

• The identification of the interaction and interfaces of the various work processes (METRIC)

• The quality of procedures for hazard identification, assessment and control (METRIC)

• The quality of the operating experience and corrective actions program (METRIC)

• The clarity of the integrated consideration of process safety, HSE (health, occupational safety, environment) and security issues (METRIC)

• The extent to which the system provides the means to support individuals and teams in carrying out their tasks safely and effectively (METRIC)

• Human resources (INDICATOR)

• Extent to which the personnel have been trained in accordance with the planned training programme (METRIC)

• Extent to which the personnel have knowledge of the work processes (METRIC)

• Extent to which the personnel have suitable skills, knowledge and experience to carry out their tasks safely and effectively (METRIC)

• Work conditions (INDICATOR)

• The quality of documentation and procedures (METRIC)

• Documentation relating to the original design basis is available and kept up to date, reflecting all the modifications made to the plant and procedures since its commissioning (METRIC)

• Time pressure and work load in safety-critical tasks (METRIC)

• The amount of slack resources to cope with unexpected or demanding situations (METRIC)

• Staffing in critical posts (METRIC)

• Work practices (INDICATOR)

• The extent to which human performance tools are utilized in daily practice (METRIC)

• The extent of personnel compliance with safety rules (METRIC)

• The extent to which work is carried out in accordance with the processes described in the management system (METRIC)

• The extent of visible management commitment to safety and the management system (METRIC)

• The extent to which the decision making in the organization utilizes all the necessary competence and is transparent in its content and progress (METRIC)

• The extent to which information is effectively communicated throughout the organization and to the external stakeholders (METRIC)

• Strategy and external relations (INDICATOR)

• The adequacy of the maintenance program (METRIC)

• The budget for safety improvements (METRIC)

• Relations to corporate headquarters are open and based on mutual trust, and organizational goals are in line with those of the headquarters (METRIC)

• Relations to the regulator are open and honest (METRIC)

Psychological states and conceptions


• Work and safety motivation (INDICATOR)

• The extent to which the personnel feel that their work is meaningful and important (METRIC)

• The extent to which the personnel are motivated to spend effort on safety related issues (METRIC)

• The extent to which the personnel are interested in safety matters and try to learn more about hazards and safety (METRIC)

• The extent to which the personnel prioritize safety over production in conflict situations or under time pressure (METRIC)

• Sense of control (INDICATOR)

• The extent to which the personnel have a realistic sense of control, which enables them to perceive their capabilities and limitations, and to learn from their job (METRIC)

• The extent to which the workload of workers is neither too high nor too low (METRIC)

• The extent to which the demands of the tasks are in line with the skills of the workers (METRIC)

• The extent to which the time pressure that workers feel is not too high (METRIC)

• The extent to which the personnel feel that they can influence safety related issues (METRIC)

• Understanding of the organizational core task (INDICATOR)

• The extent to which the personnel understand the task and goals of the organization (METRIC)

• The extent to which the personnel understand how their task relates to the overall goals of the organization (METRIC)

• The extent to which the personnel know the safety policies and the operating principles of the organization (METRIC)

• Understanding of hazards (INDICATOR)

• The extent to which the personnel understand the hazards that are connected to their work (METRIC)

• The extent to which the personnel understand the safety significance of their work along with its connections to the work of others (METRIC)

• The extent to which the personnel understand the hazards stemming from human and organizational factors related issues in addition to the inherent technological hazards (METRIC)

• The extent to which the personnel are aware of the limitations of human performance capacity (METRIC)

• The extent to which the personnel understand the safety significance of their own tasks (METRIC)

• The extent to which the personnel understand the relevant ageing phenomena of the systems, structures and components (METRIC)

• The extent of awareness of the technical / physical condition of systems, structures and components (METRIC)

• Understanding of safety (INDICATOR)

• The extent to which the complex and emergent nature of safety (a dynamic non-event) is understood, along with the fact that safety must be created every day (METRIC)

• The extent to which the organization’s contribution to safety by means of norms, practices and shared values and meanings is understood (METRIC)

• The extent to which errors are understood as being a natural part of work at all levels of the organization (METRIC)

• The extent to which human factors are considered a neutral phenomenon and not something to be avoided (i.e., a negative phenomenon) (METRIC)

• The extent to which the personnel have basic knowledge of human performance issues (METRIC)

• The extent to which the defence-in-depth principle is understood among the personnel (METRIC)

• Sense of personal responsibility (INDICATOR)

• The extent to which the personnel have a willingness to spend personal effort on safety issues and take responsibility for their actions (METRIC)

• The extent to which the personnel are able to perceive that they have an effect on the outcome of their work, and that their way of working (incl. attitudes) influences that of the others (METRIC)

• The extent to which the personnel have a sense of personal ownership for a piece of equipment, an area of the plant or the entire operations of the plant (METRIC)

• The extent to which the personnel exhibit a wider responsibility for the overall safety of the organization (METRIC)

• Mindfulness and vigilance (INDICATOR)

• The extent to which the personnel reflect on the social dynamics of the organization (METRIC)

• The extent to which the personnel continuously seek to identify new risks and enhance their view on the hazards of their work (METRIC)

• The extent to which the personnel at all levels exhibit a questioning attitude (METRIC)

• The extent to which the personnel remain humble about their knowledge of the hazards and about their own competence (METRIC)

• The extent to which the personnel are aware of the limitations of standard operating procedures and more detailed instructions (METRIC)

• The extent to which external audits provide results that are in accordance with the findings of internal audits or the prevalent conceptions of the personnel (METRIC)

• The extent to which the personnel continuously search for improvements in organizational systems and procedures (METRIC)

Social processes
• Sensemaking and joint attribution of meaning to past, present and future events (INDICATOR)

• The extent to which the organization remains open to multiple interpretations of possible future scenarios, and does not force a single truth on its employees (METRIC)

• The extent to which past successes are not considered as guarantees of future success (METRIC)

• The extent to which the history of the organization is considered as socially constructed and subject to change (METRIC)

• The extent to which argumentation is based on facts and accuracy as much as possible, instead of the formal position of the arguer or the attractiveness of the argument for the organizational self-image (METRIC)

• The extent to which the meanings given to past events do not constrain the necessary actions related to nuclear safety by, e.g., labelling safety issues in negative terms or event investigations as blame seeking (METRIC)

• Norms and values related to safety (INDICATOR)

• The extent to which nuclear safety is a shared value in the organization (METRIC)

• The extent to which safety-conscious behaviour and the expression of uncertainty are socially accepted and supported (METRIC)

• The extent to which the relationships between the management and the personnel are based on trust (METRIC)

• The extent to which the relations between various personnel groups are based on trust and shared safety norms (METRIC)

• The extent to which there is an open atmosphere concerning reporting of errors and deviations (METRIC)

• The extent to which there exists a strong social identity that allows the personnel to feel that they belong to the organization (METRIC)

• The extent to which the norms and stereotypes created by the subgroups in the organization are not counterproductive to cooperation with other groups (METRIC)

• Habit and routine formation (INDICATOR)

• The amount of routine work and routine tasks at the organization (METRIC)

• The extent to which habits and routines are reflected on from time to time (METRIC)

• The extent to which tasks and situations where routines may develop, and where they might have consequences for safety, are identified (METRIC)

• The extent to which routines are based on a good understanding of their safety significance (METRIC)

• Optimizing and local adaptation (INDICATOR)

• The extent to which tasks are adapted to the circumstances in the field, i.e., how much adaptation takes place (METRIC)

• The extent to which the local adaptations are based on an understanding of their effects on safety (METRIC)

• The extent to which there exists a management awareness of the adaptations and trade-offs taking place in the field (METRIC)

• The extent to which there exists an awareness of the adaptations and trade-offs taking place in the organization (METRIC)

• The extent to which the gap between work as prescribed and work as actually done is known and monitored at the organization (METRIC)

Appendix C: Examples of feedback indicators
Systems, structures and components

• Ratio of preventive and corrective maintenance (see the computation sketch after this list)

• Number of unplanned automatic reactor scrams

• Capability factors for the units

• Percentage of safety-critical equipment that fails inspection / test

• Fuel leaks

• Equipment forced outage rate
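
As a minimal illustration of how the first indicator in the list above can be computed, the sketch below derives the preventive-to-corrective maintenance ratio from work order counts. The figures and variable names are invented for the sketch; in practice the counts would come from the plant's work order system.

    # Invented work order counts for one follow-up period.
    preventive_orders = 420   # completed preventive maintenance work orders
    corrective_orders = 180   # completed corrective maintenance work orders

    ratio = preventive_orders / corrective_orders
    print(f"Preventive/corrective maintenance ratio: {ratio:.2f}")  # 2.33

A declining trend in this ratio can signal a drift from planned maintenance towards reactive repair, which is why the ratio is best read as a trend over successive periods rather than as a single value.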

Past process safety performance

• Availability of safety systems

• Number of INES events

• Number of safety-critical equipment items that fail to operate as designed

• Number of unplanned automatic scrams

Human factors

• Lost-time incidents (Industrial safety accident rate; see the rate computation sketch after this list)

• Sick leave

• Radiation doses / exposure

• Turnover

• Job satisfaction and work motivation scores from yearly surveys

• Number of procedure violations

• Root causes of events dealing with human behaviour issues
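
As an illustration of the industrial safety accident rate mentioned above, the sketch below normalizes lost-time incidents per 200,000 hours worked (roughly one hundred person-years), which is one common convention; some organizations normalize per 1,000,000 hours instead. The figures are invented.

    # Invented figures for one year of plant operation.
    lost_time_incidents = 3      # incidents causing absence from work
    hours_worked = 1_250_000     # total hours worked by the personnel

    lti_rate = lost_time_incidents * 200_000 / hours_worked
    print(f"Lost-time incident rate: {lti_rate:.2f} per 200,000 hours")  # 0.48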

Past organizational safety performance

• Structural / equipment anomalies discovered by planned inspections vs. by chance

• Non-compliances with Tech Specs

• Recurrence of incidents with similar root causes

• Backlog of corrective actions (see the trend sketch after this list)
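
Since feedback indicators are lagging by nature, their value lies mainly in trending. The sketch below fits a least-squares slope to invented monthly backlog counts; a persistently positive slope would suggest that corrective actions are created faster than they are closed.

    # Invented monthly backlog of open corrective actions.
    backlog = [112, 118, 121, 130, 127, 135]

    n = len(backlog)
    mean_x = (n - 1) / 2
    mean_y = sum(backlog) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in enumerate(backlog))
             / sum((x - mean_x) ** 2 for x in range(n)))
    print(f"Backlog trend: {slope:+.1f} items per month")  # about +4.3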

Strålsäkerhetsmyndigheten
Swedish Radiation Safety Authority

SE-171 16 Stockholm  Tel: +46 8 799 40 00  E-mail: registrator@ssm.se
Solna strandväg 96  Fax: +46 8 799 40 10  Web: stralsakerhetsmyndigheten.se
