IEC TR 80002-1:2009
TECHNICAL
REPORT
All rights reserved. Unless otherwise specified, no part of this publication may be reproduced or utilized in any form
or by any means, electronic or mechanical, including photocopying and microfilm, without permission in writing from
either IEC or IEC's member National Committee in the country of the requester.
If you have any questions about IEC copyright or have an enquiry about obtaining additional rights to this publication,
please contact the address below or your local IEC member National Committee for further information.
INTERNATIONAL ELECTROTECHNICAL COMMISSION
PRICE CODE XB
ICS 11.040.01    ISBN 2-8318-1061-9
–2– TR 80002-1 © IEC:2009(E)
CONTENTS
FOREWORD...........................................................................................................................4
INTRODUCTION.....................................................................................................................6
1 General ............................................................................................................................7
1.1 Scope......................................................................................................................7
1.2 Normative references ..............................................................................................7
2 Terms and definitions .......................................................................................................8
3 General requirements for RISK MANAGEMENT ......................................................................8
3.1 RISK MANAGEMENT PROCESS .....................................................................................8
3.2 Management responsibilities ................................................................................. 11
3.3 Qualification of personnel ...................................................................................... 13
3.4 RISK MANAGEMENT plan .......................................................................................... 14
3.5 RISK MANAGEMENT FILE ........................................................................................... 16
4 RISK ANALYSIS ................................................................................................................. 17
Bibliography.......................................................................................................................... 61
Index .................................................................................................................................... 62
Figure 2 – FTA showing RISK CONTROL measure which prevents incorrect software
outputs from causing HARM ................................................................................................... 28
Figure A.1 – Relationship between sequence of events, HARM and HAZARD ........................... 41
FOREWORD
1) The International Electrotechnical Commission (IEC) is a worldwide organization for standardization comprising
all national electrotechnical committees (IEC National Committees). The object of IEC is to promote
international co-operation on all questions concerning standardization in the electrical and electronic fields. To
this end and in addition to other activities, IEC publishes International Standards, Technical Specifications,
Technical Reports, Publicly Available Specifications (PAS) and Guides (hereafter referred to as “IEC
Publication(s)”). Their preparation is entrusted to technical committees; any IEC National Committee interested
in the subject dealt with may participate in this preparatory work. International, governmental and non-
governmental organizations liaising with the IEC also participate in this preparation. IEC collaborates closely
with the International Organization for Standardization (ISO) in accordance with conditions determined by
agreement between the two organizations.
2) The formal decisions or agreements of IEC on technical matters express, as nearly as possible, an international
consensus of opinion on the relevant subjects since each technical committee has representation from all
interested IEC National Committees.
3) IEC Publications have the form of recommendations for international use and are accepted by IEC National
Committees in that sense. While all reasonable efforts are made to ensure that the technical content of IEC
Publications is accurate, IEC cannot be held responsible for the way in which they are used or for any
misinterpretation by any end user.
4) In order to promote international uniformity, IEC National Committees undertake to apply IEC Publications
transparently to the maximum extent possible in their national and regional publications. Any divergence
between any IEC Publication and the corresponding national or regional publication shall be clearly indicated in
the latter.
5) IEC provides no marking procedure to indicate its approval and cannot be rendered responsible for any
equipment declared to be in conformity with an IEC Publication.
6) All users should ensure that they have the latest edition of this publication.
7) No liability shall attach to IEC or its directors, employees, servants or agents including individual experts and
members of its technical committees and IEC National Committees for any personal injury, property damage or
other damage of any nature whatsoever, whether direct or indirect, or for costs (including legal fees) and
expenses arising out of the publication, use of, or reliance upon, this IEC Publication or any other IEC
Publications.
8) Attention is drawn to the Normative references cited in this publication. Use of the referenced publications is
indispensable for the correct application of this publication.
9) Attention is drawn to the possibility that some of the elements of this IEC Publication may be the subject of
patent rights. IEC shall not be held responsible for identifying any or all such patent rights.
The main task of IEC technical committees is to prepare International Standards. However, a
technical committee may propose the publication of a technical report when it has collected
data of a different kind from that which is normally published as an International Standard, for
example "state of the art".
IEC 80002-1, which is a technical report, has been prepared by a joint working group of
subcommittee 62A: Common aspects of electrical equipment used in medical practice, of IEC
technical committee 62: Electrical equipment in medical practice, and ISO technical
committee 210: Quality management and corresponding general aspects for MEDICAL DEVICES.
Full information on the voting for the approval of this technical report can be found in the
report on voting indicated in the above table. In ISO, the technical report has been approved
by 16 P-members out of 17 having cast a vote.
This publication has been drafted in accordance with the ISO/IEC Directives, Part 2.
• Terms used throughout this technical report that have been defined in Clause 2 and
also given in the index: IN SMALL CAPITALS.
A list of all parts of the IEC 80002 series, published under the general title Medical device
software, can be found on the IEC website.
The committee has decided that the contents of this publication will remain unchanged until
the maintenance result date indicated on the IEC web site under "http://webstore.iec.ch" in
the data related to the specific publication. At this date, the publication will be
• reconfirmed,
• withdrawn,
• replaced by a revised edition, or
• amended.
IMPORTANT – The “colour inside” logo on the cover page of this publication indicates
that it contains colours which are considered to be useful for the correct understanding
of its contents. Users should therefore print this publication using a colour printer.
INTRODUCTION
Software is often an integral part of MEDICAL DEVICE technology. Establishing the SAFETY and
effectiveness of a MEDICAL DEVICE containing software requires knowledge of what the
software is intended to do and demonstration that the implementation of the software fulfils
those intentions without causing any unacceptable RISKS.
It is important to understand that software is not itself a HAZARD, but software may contribute
to HAZARDOUS SITUATIONS. Software should always be considered from a SYSTEM perspective,
and software RISK MANAGEMENT cannot be performed in isolation from the SYSTEM.
Complex software designs can permit complex sequences of events which may contribute to
HAZARDOUS SITUATIONS. Much of the TASK of software RISK MANAGEMENT consists of identifying
those sequences of events that can lead to a HAZARDOUS SITUATION and identifying points in
the sequences of events at which the sequence can be interrupted, preventing HARM or
reducing its probability.
Software sequences of events which contribute to HAZARDOUS SITUATIONS may fall into two
categories:
These categories are specific to software, arising from the difficulty of correctly specifying and
implementing a complex SYSTEM and the difficulty of completely verifying a complex SYSTEM.
Since it is very difficult to estimate the probability of software ANOMALIES that could contribute
to HAZARDOUS SITUATIONS, and since software does not fail randomly in use due to wear and
tear, the focus of software aspects of RISK ANALYSIS should be on identification of potential
software functionality and ANOMALIES that could result in HAZARDOUS SITUATIONS – not on
estimating probability. RISKS arising from software ANOMALIES most often need to be evaluated
on the SEVERITY of the HARM alone.
RISK MANAGEMENT is always a challenge and becomes even more challenging when software
is involved. The following clauses contain additional details regarding the specifics of software
and provide guidance for understanding ISO 14971:2007 from a software perspective.
This technical report is organized to follow the structure of ISO 14971:2007, and guidance is
provided for each RISK MANAGEMENT activity in relation to software.
There is some intentional REDUNDANCY in the information provided due to the iterative nature
of RISK MANAGEMENT activities in the software LIFE-CYCLE.
1 General
1.1 Scope
This technical report provides guidance for the application of the requirements contained in
ISO 14971:2007, Medical devices – Application of risk management to medical devices, to
MEDICAL DEVICE SOFTWARE with reference to IEC 62304:2006, Medical device software –
Software life cycle processes. It does not add to, or otherwise change, the requirements of
ISO 14971:2007 or IEC 62304:2006.
This technical report is aimed at RISK MANAGEMENT practitioners who need to perform RISK
MANAGEMENT when software is included in the MEDICAL DEVICE/SYSTEM, and at software
engineers who need to understand how to fulfil the requirements for RISK MANAGEMENT
addressed in ISO 14971.
It should be noted that even though ISO 14971 and this technical report focus on MEDICAL
DEVICES , this technical report may be used to implement a SAFETY RISK MANAGEMENT PROCESS
for all software in the healthcare environment independent of whether it is classified as a
MEDICAL DEVICE .
This technical report is not intended to be used as the basis of regulatory inspection or
certification assessment activities.
For the purposes of this technical report, “should” is used to indicate that amongst several
possibilities to meet a requirement, one is recommended as being particularly suitable,
without mentioning or excluding others, or that a certain course of action is preferred but not
necessarily required. This term is not to be interpreted as indicating requirements.
1.2 Normative references
The following referenced documents are indispensable for the application of this document.
For dated references, only the edition cited applies. For undated references, the latest edition
of the referenced document (including any amendments) applies.
2 Terms and definitions
For the purposes of this document, the terms and definitions given in ISO 14971:2007,
IEC 62304:2006 and the following terms and definitions apply.
2.1
DIVERSITY
a form of REDUNDANCY in which the redundant elements use different (diverse) components,
technologies or methods to reduce the probability that all of the elements fail simultaneously
due to a common cause
2.2
REDUNDANCY
provision of multiple components or mechanisms to achieve the same function such that
failure of one or more of the components or mechanisms does not prevent the performance of
the function
2.3
SAFETY-RELATED SOFTWARE
software that can contribute to a HAZARDOUS SITUATION or software used in the implementation
of RISK CONTROL measures
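The distinction drawn in definitions 2.1 and 2.2 can be sketched in code. The following is an illustrative sketch only; the function names and the numerical-integration example are not taken from this report. Two diverse implementations of the same function are cross-checked, so that a systematic fault in one method is unlikely to be replicated in the other.

```python
# Illustrative sketch (not from the report): two diverse implementations of the
# same function -- numerical integration by the trapezoid rule and by Simpson's
# rule -- whose results are cross-checked. Because the redundant elements use
# different methods (DIVERSITY), a common-cause fault is less likely than with
# two identical copies of the same code (simple REDUNDANCY).

def integrate_trapezoid(f, a, b, n=1000):
    """Trapezoid-rule integration of f over [a, b]."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

def integrate_simpson(f, a, b, n=1000):
    """Simpson-rule integration of f over [a, b] (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def integrate_checked(f, a, b, tolerance=1e-3):
    """Return the result only if the two diverse implementations agree."""
    r1 = integrate_trapezoid(f, a, b)
    r2 = integrate_simpson(f, a, b)
    if abs(r1 - r2) > tolerance:
        raise RuntimeError("diverse implementations disagree; do not trust result")
    return r2
```

The cross-check is itself a RISK CONTROL measure: disagreement between the diverse elements is detected and reported rather than silently propagated.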
3 General requirements for RISK MANAGEMENT
3.1 RISK MANAGEMENT PROCESS
3.1.1 General
The MANUFACTURER shall establish, document and maintain throughout the LIFE-CYCLE an
ongoing PROCESS for identifying HAZARDS associated with a MEDICAL DEVICE, estimating and
evaluating the associated RISKS, controlling these RISKS, and monitoring the effectiveness of
the controls. This PROCESS shall include the following elements:
- RISK ANALYSIS;
- RISK EVALUATION;
- RISK CONTROL;
- production and POST-PRODUCTION information.
Where a documented product realization PROCESS exists, such as that described in Clause 7
of ISO 13485:2003 [1] 1), it shall incorporate the appropriate parts of the RISK MANAGEMENT
PROCESS.
NOTE 1 A documented quality management system PROCESS can be used to deal with SAFETY in a
systematic manner, in particular to enable the early identification of HAZARDS and HAZARDOUS SITUATIONS
in complex MEDICAL DEVICES and systems.
—————————
1) Figures in square brackets refer to the Bibliography.
NOTE 2 A schematic representation of the RISK MANAGEMENT PROCESS is shown in Figure 1. Depending
on the specific LIFE-CYCLE phase, individual elements of RISK MANAGEMENT can have varying emphasis.
Also, RISK MANAGEMENT activities can be performed iteratively or in multiple steps as appropriate to the
MEDICAL DEVICE. Annex B contains a more detailed overview of the steps in the RISK MANAGEMENT
PROCESS.
[Figure 1 – Schematic representation of the RISK MANAGEMENT PROCESS: RISK ANALYSIS
(identification of HAZARDS; estimation of the RISK(S) for each HAZARDOUS SITUATION) together
with RISK EVALUATION forms the RISK ASSESSMENT; RISK CONTROL comprises RISK CONTROL
option analysis, implementation of RISK CONTROL measure(s), RESIDUAL RISK evaluation,
RISK/benefit analysis, RISKS arising from RISK CONTROL measures, and completeness of RISK
CONTROL.]
SAFETY is a property of the SYSTEM (in this case the whole MEDICAL DEVICE), which can include
software. RISK MANAGEMENT should be performed on the SYSTEM comprising the software and
its whole hardware environment. Software RISK MANAGEMENT activities should not be
conducted in isolation from the SYSTEM.
While software aspects of RISK MANAGEMENT cannot be effectively performed in isolation from
overall MEDICAL DEVICE RISK MANAGEMENT, there are activities that may be best performed by
software engineers as an integral part of the software LIFE-CYCLE. There are also elements of
software that require more focus and different explanation than that provided in
ISO 14971:2007 for overall MEDICAL DEVICE RISK MANAGEMENT. It is important to stress that
even the software aspects of RISK MANAGEMENT need to focus on the MEDICAL DEVICE RISK in
order to be effective.
NOTE 1 Software aspects of RISK MANAGEMENT cannot be effectively performed in isolation from overall MEDICAL
DEVICE RISK MANAGEMENT because of the interdependence of hardware failures, software failures, and hardware
and software RISK CONTROL measures.
NOTE 2 For instance, all software failures are systematic, not random (as many types of hardware
failures/breakdowns are), and their probability cannot be accurately estimated. Therefore, the way in which the
probability component of RISK is applied to software is quite different. See 4.4.3.
There are many opportunities for software engineers to contribute to the overall SAFETY of the
MEDICAL DEVICE during the early stages of MEDICAL DEVICE design. The role of software in the
SAFETY of the MEDICAL DEVICE should be considered before the SYSTEM design is finalised.
By participating in the MEDICAL DEVICE design PROCESS , the software engineer can contribute
to SAFETY -related decisions concerning software-related RISKS as the design evolves. Such
decisions should include but not be limited to:
3.1.2 Iteration
Typical software development LIFE-CYCLES often use iteration. The use of iteration allows:
IEC 62304:2006 requires iteration of RISK MANAGEMENT activities and coordination with SYSTEM
design activities throughout the software LIFE-CYCLE. For example, during software
development, Subclause 5.2.4 of IEC 62304:2006 requires the re-evaluation of the MEDICAL
DEVICE RISK ASSESSMENT when software requirements are established. This re-evaluation may
cause an update to SYSTEM requirement specifications and the MEDICAL DEVICE RISK
ASSESSMENT. RISK EVALUATION should be repeated at all stages, from requirements through
ARCHITECTURE and design to the implementation of software.
ISO 14971 does not prescribe a design and development PROCESS, and it generally only
requires RISK MANAGEMENT steps to be done before and after (not during) the implementation
of the design (including RISK CONTROL measures). For example, when a RISK CONTROL
measure has been implemented, ISO 14971 requires that it be reviewed to ensure that it has
not caused any further HAZARDS and HAZARDOUS SITUATIONS. This should not be interpreted as
an instruction to perform this review only after the implementation is complete; it is
advantageous to address any further HAZARDS as soon as they become apparent. This implies
iteration within the implementation of the RISK CONTROL measure.
It is important that all DELIVERABLES are kept consistent at all times. Iteration is a threat to the
consistency of DELIVERABLES. It is therefore important that rigorous configuration management
be used to ensure that all effects of a change are identified and all relevant DELIVERABLES are
updated after a change. This is particularly important if software is involved, since software
can be changed rapidly and an apparently small change can have unexpected side effects. All
information related to the software needs to be up to date in order to avoid miscommunication
between engineers. Proposals for software changes are examined for side-effects, especially
side-effects which affect SAFETY. This may lead to repetition of parts of the RISK MANAGEMENT
PROCESS.
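The kind of change impact analysis described above can be supported by simple tooling. The sketch below is illustrative only and assumes a hypothetical, hand-maintained dependency record between DELIVERABLES; the file and document names are invented. Given a proposed change, it lists every DELIVERABLE that may need review so that all effects of the change are identified.

```python
# Illustrative sketch (assumed data model, not defined by the report): record
# dependencies between DELIVERABLES, then, for a proposed software change,
# list every deliverable that may need review so that documents stay consistent.
from collections import defaultdict

# "X depends on Y" means a change to Y may affect X (names are hypothetical)
DEPENDS_ON = {
    "unit_tests/dose_calc": ["src/dose_calc.c"],
    "design_doc/dosing": ["src/dose_calc.c"],
    "risk_file/hazard_H3": ["design_doc/dosing"],
}

def affected_by(changed, depends_on=DEPENDS_ON):
    """Return all deliverables transitively affected by a changed item."""
    reverse = defaultdict(set)              # invert the dependency edges
    for item, deps in depends_on.items():
        for d in deps:
            reverse[d].add(item)
    seen, stack = set(), [changed]          # depth-first walk of the reverse graph
    while stack:
        for nxt in reverse[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return sorted(seen)
```

For example, `affected_by("src/dose_calc.c")` reports that the unit tests, the design document and the dependent RISK MANAGEMENT FILE entry all need review after that source file changes.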
RISK MANAGEMENT should begin early, with substantial input to the MEDICAL DEVICE
specification, taking SAFETY into consideration in early design phases; i.e. a proactive design
approach is preferable to a reactive design approach. With a proactive approach, SAFETY is
considered along with other customer needs and captured as early SAFETY requirements.
While a reactive approach is sometimes unavoidable (for example, when a legacy product is
updated), the proactive approach is usually the most effective, quickest and cheapest way to
achieve a safe MEDICAL DEVICE.
– from the outset, the SYSTEM specification not only includes what the MEDICAL DEVICE should
do but also identifies the SYSTEM behaviours that should be avoided in order to reduce the
RISK;
– from the outset, a SYSTEM ARCHITECTURE can be planned that can be demonstrated to
provide the desired features while avoiding or preventing unsafe states;
– as the ARCHITECTURE is elaborated into a full design, RISK CONTROL measures can be
developed while avoiding rework;
– the choice of SAFETY approaches and RISK CONTROL measures can be made early (for
example, inherent SAFETY by design can be maximized and information for SAFETY
minimized);
– the use of simple hardware SAFETY mechanisms to avoid excessive demands on
SAFETY-RELATED SOFTWARE ITEMS.
3.2 Management responsibilities
TOP MANAGEMENT shall provide evidence of its commitment to the RISK MANAGEMENT PROCESS
by:
- ensuring the provision of adequate resources
and
- ensuring the assignment of qualified personnel (see 3.3) for RISK MANAGEMENT.
NOTE The documents can be incorporated within the documents produced by the MANUFACTURER’S
quality management system and these documents can be referenced in the RISK MANAGEMENT FILE.
Both ISO 14971:2007 and IEC 62304:2006 assume that a quality management system is in place.
The RISK MANAGEMENT requirements for TOP MANAGEMENT are listed in Subclause 3.2 of
ISO 14971:2007.
NOTE Subclause 3.1 of ISO 14971:2007 states that RISK MANAGEMENT can be an integral part of a quality
management system, and Subclause 4.1 of IEC 62304:2006 states that the demonstration of the MANUFACTURER’S
ability to consistently meet customer requirements and applicable regulatory requirements can be by the use of a
quality management system that complies with ISO 13485 or a quality management system required by national
regulation. IEC 62304:2006 also provides guidance on the provisions of Subclause 4.1 in Annex B.4, stating that it
is necessary to establish RISK MANAGEMENT as an integral part of a quality management system as an overall
framework for the application of appropriate software engineering methods and techniques.
TOP MANAGEMENT is responsible for putting in place the necessary organizational structure,
adequate resources, accountability, and training (see 3.3) for an effective RISK MANAGEMENT
PROCESS as well as for the safe design and maintenance of MEDICAL DEVICE SOFTWARE.
If there are RISK CONTROL measures applied to outsourced PROCESSES or products, the RISK
CONTROL measures and their importance should be documented and clearly communicated to
the supplier within a contractual agreement.
3.3 Qualification of personnel
3.3.1 General
Persons performing RISK MANAGEMENT tasks shall have the knowledge and experience
appropriate to the tasks assigned to them. These shall include, where appropriate, knowledge
and experience of the particular MEDICAL DEVICE (or similar MEDICAL DEVICES) and its use, the
technologies involved or RISK MANAGEMENT techniques. Appropriate qualification RECORDS
shall be maintained.
NOTE RISK MANAGEMENT tasks can be performed by representatives of several functions, each
contributing their specialist knowledge.
Team members involved in the development and maintenance of the SOFTWARE SYSTEM
should have the knowledge and experience appropriate to the TASKS assigned to them. It is
fundamental that the person assigned to TASKS with RISK MANAGEMENT implications has the
required knowledge of RISK MANAGEMENT. The involvement of a multidisciplinary team,
including clinical experts (such as clinical support and technical service experts, and experts
on other relevant subjects), software engineers, SYSTEM designers, experts on
usability/human factors engineering, and domain experts, and the degree and type of their
interaction with the software engineering and test staff should also be considered with respect
to RISK MANAGEMENT.
This may require the development of a training program for the individuals to ensure full
understanding of the required activities.
Also, the qualification of the members of the RISK MANAGEMENT team with respect to software
should be considered and may require special training.
The following subclauses provide an overview of the fields of required knowledge which
should be considered.
At all stages of the design of a MEDICAL DEVICE, it is important to deploy knowledge of the
INTENDED USE. This is particularly important both for designers of software and for staff
carrying out RISK MANAGEMENT of software. The complex behaviour of software can easily
contribute to misuse or to confusion of the user, leading to previously unforeseen HAZARDS
and HAZARDOUS SITUATIONS. A thorough appreciation of clinical practice will allow RISK
managers to identify HAZARDS and HAZARDOUS SITUATIONS, and allow software engineers to
avoid HAZARDS and HAZARDOUS SITUATIONS or to design RISK CONTROL measures.
MANUFACTURERS should ensure that clinical experts (such as clinical support and technical
service experts, and experts on other relevant subjects) are available to participate in, or at
least advise on, both the design activities and the RISK MANAGEMENT activities.
In addition, MANUFACTURERS should consider training software engineers and RISK managers
in the clinical use of the MEDICAL DEVICE.
Experienced software developers and testers learn to be realistic about the difficulty of
discovering all software defects during testing, and hence about the density of software
defects remaining in released software.
In all these TASKS, experience will lead to an appreciation of the sort of things that can go
wrong with software and with software development PROCESSES, and an awareness of the
difficulties of making a change while maintaining the integrity of a software design.
3.4 RISK MANAGEMENT plan
3.4.1 General
RISK MANAGEMENT activities shall be planned. Therefore, for the particular MEDICAL
DEVICE being considered, the MANUFACTURER shall establish and document a RISK
MANAGEMENT plan in accordance with the RISK MANAGEMENT PROCESS. The RISK
MANAGEMENT plan shall be part of the RISK MANAGEMENT FILE.
a) the scope of the planned RISK MANAGEMENT activities, identifying and describing the
MEDICAL DEVICE and the LIFE-CYCLE phases for which each element of the plan is
applicable;
d) criteria for RISK acceptability, based on the MANUFACTURER’S policy for determining
acceptable RISK, including criteria for accepting RISKS when the probability of occurrence
of HARM cannot be estimated;
e) VERIFICATION activities;
NOTE 2 Not all parts of the plan need to be created at the same time. The plan or parts of it can be developed over
time.
NOTE 3 The criteria for RISK acceptability are essential for the ultimate effectiveness of the RISK MANAGEMENT
PROCESS. For each RISK MANAGEMENT plan the MANUFACTURER should choose appropriate RISK acceptability criteria.
- indicating in a matrix, such as Figures D.4 and D.5, which combinations of probability of HARM and SEVERITY of
HARM are acceptable or unacceptable;
- further subdividing the matrix (e.g. negligible, acceptable with RISK minimization) and requiring that RISKS first be
made as low as reasonably practicable before determining that they are acceptable (see D.8).
Whichever option is chosen, it should be determined according to the MANUFACTURER’S policy for determining criteria
for RISK acceptability and thus be based upon applicable national or regional regulations and relevant International
Standards, and take into account available information such as the generally accepted state of the art and known
stakeholder concerns (see 3.2). Refer to D.4 for guidance on establishing such criteria.
If the plan changes during the LIFE-CYCLE of the MEDICAL DEVICE, a RECORD of the changes
shall be maintained in the RISK MANAGEMENT FILE.
The RISK MANAGEMENT plan should address the fact that software is part of the MEDICAL DEVICE
by including:
– a description of the MEDICAL DEVICE including what functionality of the MEDICAL DEVICE will
be implemented in software;
– a statement that software will be developed according to IEC 62304;
– a reference to software development aspects unique to software RISK MANAGEMENT (see
Note);
– the RISK acceptance criteria for software-caused or software-controlled RISKS if they differ
from acceptance criteria for other components of the MEDICAL DEVICE.
NOTE A reference to the software development plan may be the simplest way to address the inclusion of software
development aspects unique to software RISK MANAGEMENT. See also 3.4.2 and 3.4.3, which discuss the relationship
between the RISK MANAGEMENT plan and the software development plan and also specific RISK-related topics of the
software development plan according to IEC 62304.
One reason that RISK acceptance criteria for software-caused or software-controlled RISKS
might differ from acceptance criteria for other components is that the probability of HARM
cannot be estimated. In this case the RISK acceptance criteria should be based on the
SEVERITY of HARM. (See 4.4.3 for a discussion of probability for software-caused HARM.) If it
can be concluded that the HAZARD is of little practical consequence, the RISK can be judged to
be acceptable and no RISK CONTROL measures are necessary. However, for significant
HAZARDS, that is, HAZARDS which could inflict HARM of high SEVERITY, no level of exposure can
be identified that corresponds to a RISK so low that the RISK is acceptable. In this case RISK
CONTROL measures need to be implemented.
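Where probability cannot be estimated, the SEVERITY-only evaluation described above reduces to a threshold check. The following is a minimal sketch; the severity scale and the threshold are assumptions for illustration, not values taken from ISO 14971 or this report.

```python
# Illustrative sketch (severity scale and threshold are assumed, not from the
# report): when the probability of HARM cannot be estimated, RISK acceptance
# is based on SEVERITY of HARM alone, so evaluation reduces to a lookup.
from enum import IntEnum

class Severity(IntEnum):
    NEGLIGIBLE = 1
    MINOR = 2
    SERIOUS = 3
    CRITICAL = 4
    CATASTROPHIC = 5

# hypothetical policy: anything above MINOR requires RISK CONTROL measures
ACCEPTABLE_WITHOUT_CONTROLS = Severity.MINOR

def requires_risk_control(severity: Severity) -> bool:
    """Severity-only evaluation: probability is deliberately not a factor."""
    return severity > ACCEPTABLE_WITHOUT_CONTROLS
```

Under such a policy, a NEGLIGIBLE hazard would be judged acceptable without further measures, while any SERIOUS or higher hazard would always require RISK CONTROL, regardless of how unlikely it seems.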
RISK acceptance criteria for RESIDUAL RISK where probability cannot be estimated should take
into account the RISK CONTROL measures that have been implemented and the effectiveness of
those RISK CONTROL measures in reducing the probability of occurrence of HARM. RISK
CONTROL measures should be a combination of all reasonably practicable measures, fulfil
applicable standards and regulations, and be state of the art (see Annex D.4 of
ISO 14971:2007).
When planning the activities related to collection and review of relevant production and
POST-PRODUCTION information, the following specific aspects for software should be considered:
– The MANUFACTURER should make it possible for the originator of a complaint to identify and
report the software VERSION.
3.4.2 Relationship between RISK MANAGEMENT plan and software development plan
The requirements of ISO 14971 for a RISK MANAGEMENT plan and the requirements of
IEC 62304 for a software development plan should not be taken as requiring specific
documents with specific titles. Planning elements may be embodied in any documents to suit
the MANUFACTURER’S quality management system, provided that:
– the combination of the planning documents satisfies the requirements of both standards in a
verifiable manner;
– all plans are consistent with each other;
– all plans are available for use in a timely manner;
– all plans are kept up to date to reflect changing circumstances.
3.4.3 Specific RISK -related topics of the software development plan according
to IEC 62304
The software development plan should ensure that the software development PROCESS ,
standards, methods, and tools associated with the development of software (described in the
software development plan according to Clause 5 of IEC 62304:2006) are effective RISK
CONTROL measures (see 6.2.2.6 for a discussion of PROCESS as a RISK CONTROL measure).
This may be done by provision of evidence by other organisations, suppliers, and other
projects within the organization. If such evidence is not available, the MANUFACTURER should
plan for and verify effectiveness within the project.
When establishing the MEDICAL DEVICE RISK MANAGEMENT PROCESS , aspects unique to software
RISK MANAGEMENT should be considered, such as safe coding standards, VERIFICATION
methods (e.g. formal proofs, peer reviews, walk-throughs, simulations, etc.), and use of
syntactic and logic checkers. If such aspects are considered RISK CONTROL measures, then
they would also be subject to VERIFICATION (see Table B.2 for examples of verifying RISK
CONTROL measures).
Software RISK MANAGEMENT activities should be addressed for each stage of MEDICAL DEVICE
development in plans, PROCEDURES, and training, as appropriate.
3.5 RISK MANAGEMENT FILE
For the particular MEDICAL DEVICE being considered, the MANUFACTURER shall establish and
maintain a RISK MANAGEMENT FILE. In addition to the requirements of other clauses of this
International Standard, the RISK MANAGEMENT FILE shall provide traceability for each identified
HAZARD to:
NOTE 1 The RECORDS and other documents that make up the RISK MANAGEMENT FILE can form part of other
documents and files required, for example, by a MANUFACTURER’S quality management system. The RISK MANAGEMENT
FILE need not physically contain all the RECORDS and other documents; however, it should contain at least references
or pointers to all required documentation. The MANUFACTURER should be able to assemble the information referenced
in the RISK MANAGEMENT FILE in a timely fashion.
NOTE 2 The RISK MANAGEMENT FILE can be in any form or type of medium.
TR 80002-1 © IEC:2009(E) – 17 –
The software PROCESS should set up a system that makes this TRACEABILITY possible, starting
from the software-related HAZARDS and the software RISK CONTROL measures and tracing their
implementation to the corresponding SAFETY -related software requirements and the SOFTWARE
ITEMS that satisfy those requirements.
All of the above should be traceable to their VERIFICATION (see Subclause 7.3.3 of
IEC 62304:2006).
Because software may change frequently during development, and because it may be
released in different VERSIONS , that part of the RISK MANAGEMENT FILE relating to software may
also be subject to change and multiple VERSIONS .
Table 1 lists IEC 62304:2006 requirements for documentation to be included in the RISK
MANAGEMENT FILE in addition to ISO 14971:2007 requirements.
IEC 62304:2006 subclause   RISK MANAGEMENT FILE documentation

4.3f)   Rationale for using a lower software SAFETY class (than the SOFTWARE SYSTEM) for a
        SOFTWARE ITEM in the SOFTWARE SYSTEM which does not implement SAFETY-related functions

7.1.5   Sequences of events that could result in a HAZARDOUS SITUATION that are identified in
        Subclause 7.1.2 of IEC 62304:2006

7.2.1   RISK CONTROL measures defined for each potential cause of the SOFTWARE ITEM
        contributing to a HAZARDOUS SITUATION
4 RISK ANALYSIS
RISK ANALYSIS shall be performed for the particular MEDICAL DEVICE as described in 4.2 to 4.4.
The implementation of the planned RISK ANALYSIS activities and the results of the RISK ANALYSIS
shall be recorded in the RISK MANAGEMENT FILE.
NOTE 1 If a RISK ANALYSIS, or other relevant information, is available for a similar MEDICAL DEVICE, that analysis or
information can be used as a starting point for the new analysis. The degree of relevance depends on the differences
between the devices and whether these introduce new HAZARDS or significant differences in outputs, characteristics,
performance, or results. The extent of use of an existing analysis is also based on a systematic evaluation of the
effects the changes have on the development of HAZARDOUS SITUATIONS.
NOTE 3 Additional guidance on RISK ANALYSIS techniques for IN VITRO DIAGNOSTIC MEDICAL DEVICES is given in
Annex H.
NOTE 4 Additional guidance on RISK ANALYSIS techniques for toxicological HAZARDS is given in Annex I.
In addition to the RECORDS required in 4.2 to 4.4, the documentation of the conduct and results
of the RISK ANALYSIS shall include at least the following:
b) identification of the person(s) and organization who carried out the RISK ANALYSIS;
NOTE 5 The scope of the RISK ANALYSIS can be very broad (as for the development of a new device with which a
MANUFACTURER has little or no experience) or the scope can be limited (as for analyzing the impact of a change to an
existing device for which much information already exists in the MANUFACTURER’S files).
As described in ISO 14971:2007, RISK ANALYSIS is a term used to encompass three distinct
activities:
– identification of INTENDED USE,
– identification of known or foreseeable HAZARDS (and their causes), and
– estimation of the RISK of each HAZARD and HAZARDOUS SITUATION.
For RISK ANALYSIS to be effective, it is essential to recognize that it is performed as an
integral part of the entire software development PROCESS, not as one or two discrete events,
because HAZARD and failure mode information accrues over the software development
LIFE-CYCLE PROCESS and needs to be considered at each stage of design.
Since it is very difficult to estimate the probability of software ANOMALIES that could contribute
to HAZARDOUS SITUATIONS , the focus of software aspects of RISK ANALYSIS is on identification of
potential software functionality and ANOMALIES that could result in HAZARDOUS SITUATIONS —not
on estimating probability. See 4.4.3 for more details on estimating probability.
The SEVERITY of the worst case HARM for which software is a contributing factor is a primary
input in determining the level of rigour of software development PROCESSES (see Subclause
4.3 of IEC 62304:2006). The information provided in 4.2, 4.3, and 4.4 is intended to help
identify software-specific aspects of an effective RISK MANAGEMENT PROCESS . In addition,
software aspects of the RISK ANALYSIS should be identifiable in the resulting documentation
and should include both software used to implement RISK CONTROL measures for hardware
failures and software causes for HAZARDS and their associated RISK CONTROL measures.
4.2 INTENDED USE and identification of characteristics related to the SAFETY of the
MEDICAL DEVICE
4.2.1 General
For the particular MEDICAL DEVICE being considered, the MANUFACTURER shall document the
INTENDED USE and reasonably foreseeable misuse. The MANUFACTURER shall identify and
document those qualitative and quantitative characteristics that could affect the SAFETY of the
MEDICAL DEVICE and, where appropriate, their defined limits. This documentation shall be
maintained in the RISK MANAGEMENT FILE.
NOTE 1 In this context, misuse is intended to mean incorrect or improper use of the MEDICAL DEVICE.
NOTE 2 Annex C contains questions such as those relating to use that can serve as a useful guide in identifying
MEDICAL DEVICE characteristics that could have an impact on SAFETY.
While each MEDICAL DEVICE has an INTENDED USE , the potential for misuse—intentional or
otherwise—should also be considered. While this is not a software-specific concern, the use
of software may lead to an increased RISK of misuse because:
– the MEDICAL DEVICE ’s behaviour is more complex and therefore more difficult to master or
understand;
– the user may become over-reliant on software, not understanding its limitations;
– the MEDICAL DEVICE may be configurable, and the user may be unaware of the current
configuration;
– MEDICAL DEVICES may communicate with MEDICAL and non-MEDICAL DEVICES in a manner
that cannot be anticipated in detail by the MEDICAL DEVICE MANUFACTURER.
The person responsible for producing SYSTEM requirements and the software engineer have a
joint responsibility to RECORD in the RISK MANAGEMENT FILE the INTENDED USE of the SYSTEM
including its software, together with all SYSTEM and software requirements that relate to
SAFETY and safe use. The software engineer is specifically responsible for identifying aspects
of INTENDED USE that are too subtle to be apparent at the SYSTEM level.
Software makes it possible to design flexible user interfaces, which may affect users’
behaviour, leading to new forms of reasonably foreseeable misuse. Common misuses arise
from misunderstanding of an over-complex user interface and over-reliance on software to
avoid errors and unsafe states. It is important to anticipate such misuses and to change the
design to avoid them as far as possible.
See IEC 62366 [5] for a usability PROCESS complementary to ISO 14971.
The use of software in MEDICAL DEVICES makes possible a range of interconnections and
intercommunication between MEDICAL and non-MEDICAL DEVICES. Such connections and
communications are likely to give rise to new uses (and misuses) of the SYSTEM comprising
the MEDICAL DEVICE and the interconnected devices. While it is easy to foresee that such new
uses and misuses may occur, it is not easy for the MEDICAL DEVICE MANUFACTURER to identify
all such uses and misuses if the interconnections and intercommunication are unrestricted.
It is therefore important for MANUFACTURERS to specify a limited set of INTENDED USES for the
MEDICAL DEVICE’S communication interfaces and to design the interfaces as far as possible to
limit interconnections and communications to those which are safe.
For example, the software may check treatment data that is entered using the built-in
interface of a MEDICAL DEVICE for consistency and reasonableness based on the identity of the
user and patient and the context of the data preparation. If data is prepared elsewhere and
imported into the MEDICAL DEVICE using a network connection, it may not be possible to apply
the same checks. In such cases, the MANUFACTURER may consider making the software
checks available to the network user as a network application, and/or restricting the data
import to trusted sources, and writing a comprehensive manual for those responsible for
network connections in a clinical environment.
IEC 80001-1 [6] covers the integration of MEDICAL DEVICES into IT networks in a clinical
environment. In particular, it delineates the responsibilities of the MANUFACTURER and the
person who integrates a MEDICAL DEVICE into the IT network.
NOTE The examples of possible HAZARDS in E.2 and H.2.4 can be used as guidance by the MANUFACTURER to
initiate HAZARD identification.
The objective of HAZARD identification is to permit the analysis of all foreseeable HAZARDS , and
the design and implementation of effective RISK CONTROL measures.
Unlike heat, electrical energy or suspended masses, software is not itself a HAZARD (a
potential source of HARM ); contact with software cannot cause injury. However, software may
cause a person to be exposed to a HAZARD; in other words, it may contribute to a HAZARDOUS
SITUATION . Software failures (of any kind) often facilitate the transformation of a HAZARD into a
HAZARDOUS SITUATION .
Thus, although software rarely introduces new HAZARDS , it often changes the HAZARDOUS
SITUATION . More importantly for the MANUFACTURER , it may shift the responsibility for
avoidance of HAZARDOUS SITUATIONS from the user to the MANUFACTURER.
For example, a scalpel presents an obvious cutting HAZARD . However, the MANUFACTURER
traditionally took no responsibility for this HAZARD beyond ergonomic design, as the HAZARD
was assumed to be entirely in the control of the surgeon. If the scalpel is part of a remote
surgery SYSTEM, the same HAZARD exists, but responsibility for avoiding the cutting HAZARD is
now shared by the MANUFACTURER who supplies the software that controls the scalpel.
This means that RISK CONTROL of some HAZARDS, which in the absence of software was solely
dependent on the professional use of a MEDICAL DEVICE, is now shifted to software RISK
MANAGEMENT by the MANUFACTURER.
An important case is the HAZARD of mistreatment due to the mishandling of data. This was
always a HAZARD , but when the data was handled on the clinician’s desk, it was not the
MANUFACTURER’S responsibility. Many MEDICAL DEVICES now use software to generate, store,
manipulate or use data; this makes the HAZARD partly the MANUFACTURER’S responsibility.
Software may contribute to a HAZARDOUS SITUATION in several ways, including some of the
following (see also Annex B):
– the software may correctly implement an unsafe SYSTEM requirement, leading to behaviour
whose hazardous nature is not appreciated until actual HARM occurs;
– the software specification may incorrectly implement a SYSTEM requirement, leading to
undesired behaviour that is correct according to the software specification;
– the software design and implementation may be faulty, leading to behaviour that is
contrary to the software specification. Obvious faults might arise from misunderstanding of
the software specification, and errors converting the specification into code. Less obvious
faults could arise from unforeseen interactions between SOFTWARE ITEMS and between the
software and its infrastructure, including hardware and the operating SYSTEM.
To adequately identify HAZARDS , clinical use of the MEDICAL DEVICE must be well understood.
Also, software presents the particular challenge of complexity, including possible complex
user interfaces. Therefore, software HAZARD identification cannot be done in isolation. It
should be done at the SYSTEM level by a multidisciplinary team including clinical experts (such
as clinical support and technical service experts), software engineers, SYSTEM designers, and
experts on usability/human factors engineering (see also 3.3).
HAZARD identification should consider HARM that could result from the nature of the MEDICAL
DEVICE (for example cutting, irradiating or electrocution of the patient) and also additional
HAZARDS related to the use of software. The latter could include, for example:
Identified HAZARDS should include both HAZARDS related to software that is operating in
accordance with its specification and also HAZARDS relating to software ANOMALIES (see also
6.1).
Often the user interface of the MEDICAL DEVICE is made more complex by software. In
particular a MEDICAL DEVICE that incorporates software will often handle information. While
this may be justified in terms of benefits to the patient, additional HAZARDS should be
considered relating to incorrect, or incorrectly used, information, for example:
– incorrect data entry;
– user misreading displays;
– user misunderstanding or ignoring alarms;
– overloading of users with excessive data or excessive number of alarms (see
IEC 62366 [5]).
4.4.1 General
NOTE 1 To identify HAZARDOUS SITUATIONS not previously recognized, systematic methods covering the specific
situation can be used (see Annex G).
NOTE 3 HAZARDOUS SITUATIONS can arise from slips, lapses, and mistakes.
For each identified HAZARDOUS SITUATION, the associated RISK(S) shall be estimated using
available information or data. For HAZARDOUS SITUATIONS for which the probability of the
occurrence of HARM cannot be estimated, the possible consequences shall be listed for use in
RISK EVALUATION and RISK CONTROL. The results of these activities shall be recorded in the RISK
MANAGEMENT FILE.
NOTE 4 RISK ESTIMATION incorporates an analysis of the probability of occurrence and the consequences.
Depending on the application, only certain elements of the RISK ESTIMATION PROCESS might need to be considered.
For example, in some instances it will not be necessary to go beyond an initial HAZARD and consequence analysis.
See also D.3.
NOTE 5 RISK ESTIMATION can be quantitative or qualitative. Methods of RISK ESTIMATION, including those resulting
from systematic faults, are described in Annex D. Annex H gives information useful for estimating RISKS for in vitro
diagnostic MEDICAL DEVICES.
NOTE 6 Information or data for estimating RISKS can be obtained, for example, from:
a) published standards;
c) field data from similar MEDICAL DEVICES already in use, including published reported incidents;
e) clinical evidence;
g) expert opinion;
To estimate software related RISK it is first necessary to identify the HAZARDOUS SITUATIONS
that include software. The software may be either the initiating cause of the sequence of
events leading to a HAZARDOUS SITUATION , or it may be elsewhere in the sequence, as in the
case of software intended to detect a hardware malfunction. The software could include a
SOUP component or the reuse of a previously developed component.
RISK ESTIMATION is based on the probability of HARM and the SEVERITY of HARM from each
identified HAZARDOUS SITUATION. Because it is very difficult to estimate the probability of HARM
arising from a software ANOMALY (see 4.4.3), the probability of a software ANOMALY occurring
must be used with care when estimating the RISK of a HAZARDOUS SITUATION that includes a
software ANOMALY in the sequence of events leading to HARM.
Fault tree analysis (FTA) is a traditional top-down method (see IEC 61025 [3]), often applied
beginning with the MEDICAL DEVICE as a whole. FTA is primarily used to analyze causes of
HARM. It postulates that a HARM occurs and uses Boolean logic to identify the events or
conditions that must be present for the HARM to occur. The events or conditions are analysed
in increasing detail until a point is reached where one or more RISK CONTROL measures can be
identified which would prevent the HARM . FTA can be used to identify the SOFTWARE ITEMS
involved in a sequence of events that results in a HAZARDOUS SITUATION .
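As a sketch of the Boolean logic described above, the fragment below evaluates a tiny, hypothetical fault tree. The events, gate structure, and the overdose scenario are illustrative assumptions, not taken from this report:

```python
# Minimal fault-tree evaluation sketch (hypothetical events and gates).
# True means the event or condition occurs.

def and_gate(*inputs):
    # The top event propagates through an AND gate only if every input occurs
    return all(inputs)

def or_gate(*inputs):
    # The event propagates through an OR gate if any input occurs
    return any(inputs)

def overdose_occurs(dose_calc_fault, limit_check_fails, sensor_fault):
    # Hypothetical top event: overdose requires an incorrect dose AND a
    # failed independent limit check; the incorrect dose can arise from
    # either a calculation fault or a sensor fault.
    incorrect_dose = or_gate(dose_calc_fault, sensor_fault)
    return and_gate(incorrect_dose, limit_check_fails)

# A working RISK CONTROL measure appears as a False input to the AND gate,
# preventing the top event irrespective of the other inputs:
print(overdose_occurs(dose_calc_fault=True,
                      limit_check_fails=False,
                      sensor_fault=False))   # harm prevented
```

Analysing the tree in this way also shows which SOFTWARE ITEMS (here, the hypothetical limit check) sit on every path to HARM and therefore merit the most rigorous VERIFICATION.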
Failure modes and effect analysis (FMEA) is a bottom-up approach (see IEC 60812 [2]) that
begins with a component or subsystem (for software this is a SOFTWARE ITEM in IEC 62304),
and poses the question: If this element failed, what would be the consequences?
In view of the difficulty of anticipating which software defects will be present in each
SOFTWARE ITEM , the starting point for FMEA would be to list the safety-related requirements of
each SOFTWARE ITEM and consider the question: If this requirement is not met, what would be
the consequences?
This leads to the identification of SOFTWARE ITEMS whose failure could cause HARM , and
identification of what types of failure need to be prevented.
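The requirement-based FMEA question above can be sketched as a simple worksheet. The SOFTWARE ITEMS, requirements, and consequences below are hypothetical examples chosen only to show the shape of the analysis:

```python
# Requirement-based FMEA worksheet sketch (all names are hypothetical).
requirements = [
    # (software item, safety-related requirement, consequence if not met;
    #  None means no direct patient impact was identified)
    ("dose_calculator", "computed dose never exceeds prescribed dose",
     "patient overdose"),
    ("alarm_handler", "alarm raised within 1 s of sensor fault",
     "delayed clinical response"),
    ("event_logger", "usage events written to audit log", None),
]

def items_needing_risk_control(rows):
    # Keep only items for which "requirement not met" could lead to HARM
    return [(item, req) for item, req, consequence in rows
            if consequence is not None]

for item, req in items_needing_risk_control(requirements):
    print(f"{item}: prevent failure of '{req}'")
```

The filtered list identifies the SOFTWARE ITEMS whose failure could cause HARM and the type of failure that must be prevented, which is the output the text describes.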
When identifying sequences or combinations of events that can result in a HAZARDOUS SITUATION, it
is easiest to focus on software directly related to the essential performance of the MEDICAL
DEVICE (e.g. an algorithm that calculates glucose levels in blood) and the specific causes for
related HAZARDS . It is also important to consider software causes that could result in subtle
failure modes and therefore cause one or more MEDICAL DEVICE HAZARDS . Refer to Annex B for
examples of software causes.
NOTE Specific causes are defects in software whose functionality is clearly related to the clinical functionality of
the device and that lead to one of the device HAZARDS. An example is a defect in an algorithm for calculating a test result.
While it is difficult to anticipate exactly what may fail in a SOFTWARE ITEM, it is possible to
identify categories of defects, each of which has well-known RISK CONTROL measures. For
example, corruption of data is a class of fault which could be detected and prevented by using
a checksum procedure. See Annex B for examples of software causes with suggested ways of
handling them. MANUFACTURERS should maintain their own lists of categories of software
defects relevant to their own products.
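As an illustration of the checksum PROCEDURE mentioned above, the following is a minimal sketch using a CRC-32 check. The record layout and field names are assumptions for illustration only, and a CRC is only one of several possible integrity checks:

```python
import zlib

# Sketch of a checksum-based RISK CONTROL measure for the "corruption of
# data" fault class. A CRC-32 is appended when the record is stored and
# recomputed on read; corrupt data is rejected rather than used.

def protect(record: bytes) -> bytes:
    crc = zlib.crc32(record).to_bytes(4, "big")
    return record + crc

def verify(stored: bytes) -> bytes:
    payload, crc = stored[:-4], stored[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != crc:
        raise ValueError("data corruption detected")
    return payload

stored = protect(b"dose=2.5mL")        # hypothetical stored record
assert verify(stored) == b"dose=2.5mL"

corrupted = bytearray(stored)
corrupted[0] ^= 0xFF                   # simulate a single corrupted byte
try:
    verify(bytes(corrupted))
except ValueError:
    print("corruption detected")
```

Note that, as 6.6 of ISO 14971 would require for any RISK CONTROL measure, a checksum detects the vast majority of corruption but not every conceivable fault, so its effectiveness still has to be argued as part of RESIDUAL RISK evaluation.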
4.4.3 Probability
Software ANOMALIES in a particular VERSION of software will be present in all copies of that
software. However, the probability of a software ANOMALY leading to a software failure is very
difficult to estimate, because of the random nature of the inputs to each separate copy of the
software.
[Figure: a HAZARD leads through a sequence of events (probability of exposure, P1) to a
HAZARDOUS SITUATION, which leads (probability P2) to HARM; the RISK is determined by the
SEVERITY of the HARM and the probability of occurrence of HARM, P1 × P2 – IEC 1837/09]
In many cases, estimating the probability of occurrence of HARM may not be possible, and the
RISK should be evaluated on the basis of the SEVERITY of the HARM alone. RISK ESTIMATION in
these cases should be focused on the SEVERITY of the HARM resulting from the HAZARDOUS
SITUATION .
Although it may not be possible to estimate the probability of the occurrence of a software
failure, it is obvious that many RISK CONTROL measures reduce the probability that such a
failure would lead to a HAZARDOUS SITUATION . Consider, for example, memory corruption
resulting from a software ANOMALY . A checksum of the memory could detect the failure and
reduce the probability of a HAZARDOUS SITUATION. The checksum does not guarantee that all
possible corruption would be detected. Rather, it would detect the vast majority of such
corruption and thus lower the RISK to an acceptable level. Although the probability of a
HAZARDOUS SITUATION cannot be estimated either before or after the checksum is
implemented, it can be asserted that the probability of a HAZARDOUS SITUATION after the
checksum is in place is lower than it was before implementing the checksum. It is the
responsibility of the MANUFACTURER to demonstrate that the RISK CONTROL measure is effective
in meeting the acceptability criteria for RESIDUAL RISK that were identified in the RISK
MANAGEMENT plan.
In summary, software RISK ESTIMATION should focus primarily on SEVERITY and the relative
probability of HARM if a failure should occur rather than on attempts to estimate the probability
of each possible software failure.
NOTE This helps provide distinctions between HAZARDS of the same SEVERITY to allow greater focus on those with
higher probability of actual HARM .
4.4.4 SEVERITY
The SEVERITY estimate for a software-caused RISK has an impact on the software development
PROCESS to be used. According to IEC 62304 the rigour of the PROCESS depends on the
SEVERITY of the HARM the software might cause.
While the MANUFACTURER is free to decide how SEVERITY levels are defined for the purpose of
RISK EVALUATION according to ISO 14971, it is helpful to define these SEVERITY levels such that
there is a relationship to the software SAFETY classification of IEC 62304. Otherwise it might
be necessary to classify the SEVERITY twice: once for the RISK EVALUATION needed for the overall
RISK MANAGEMENT PROCESS, and again for determining the SAFETY class of the software
according to IEC 62304.
5 RISK EVALUATION
For each identified HAZARDOUS SITUATION, the MANUFACTURER shall decide, using the criteria
defined in the RISK MANAGEMENT plan, if RISK reduction is required. If RISK reduction is not
required, the requirements given in 6.2 to 6.6 do not apply for this HAZARDOUS SITUATION (i.e.,
proceed to 6.7). The results of this RISK EVALUATION shall be recorded in the RISK MANAGEMENT
FILE.
NOTE 2 Application of relevant standards, as part of the MEDICAL DEVICE design criteria, might constitute RISK
CONTROL activities, thus meeting the requirements given in 6.3 to 6.6.
As described in 4.4.3, it is difficult to estimate the probability of software failures. When this
results in the inability to estimate the probability of HARM, the RISK should be evaluated on the
basis of the SEVERITY of the HARM alone.
6 RISK CONTROL
When RISK reduction is required, RISK CONTROL activities, as described in 6.2 to 6.7, shall be
performed.
The MANUFACTURER shall identify RISK CONTROL measure(s) that are appropriate for reducing
the RISK(S) to an acceptable level.
The MANUFACTURER shall use one or more of the following RISK CONTROL options in the priority
order listed:
NOTE 1 If implementing option b) or c), MANUFACTURERS can follow a PROCESS where reasonably practicable RISK
CONTROL measures are considered and the option providing the appropriate reduction in RISK is chosen before
determining whether the RISK is acceptable.
NOTE 2 RISK CONTROL measures can reduce the SEVERITY of the HARM or reduce the probability of occurrence of
the HARM, or both.
NOTE 3 Many standards address inherent SAFETY, protective measures, and information for SAFETY for MEDICAL
DEVICES. In addition, many other MEDICAL DEVICE standards have integrated elements of the RISK MANAGEMENT
PROCESS (e.g. electromagnetic compatibility, usability, biocompatibility). Relevant standards should be applied as part
of the RISK CONTROL option analysis.
NOTE 4 For RISKS for which the probability of occurrence of HARM cannot be estimated, see D.3.2.3.
The RISK CONTROL measures selected shall be recorded in the RISK MANAGEMENT FILE.
If, during RISK CONTROL option analysis, the MANUFACTURER determines that required RISK
reduction is not practicable, the MANUFACTURER shall conduct a RISK/benefit analysis of the
RESIDUAL RISK (proceed to 6.5).
6.2.1.1 General
In a complex SYSTEM there may be many sequences of events which can lead to HAZARDOUS
SITUATIONS. It may not be possible or necessary to apply a RISK CONTROL measure to each
event in such sequences. It is sufficient to apply RISK CONTROL measures to selected events to
reduce the overall probability of HARM to an acceptable level.
The following three subclauses give an overview of how three types of RISK CONTROL
measures can be implemented in software. They also discuss which events need RISK
CONTROL measures at all (see 6.2.1.5).
– simplifying the user interface to reduce the probability of human errors in use;
– specifying software design rules to avoid software ANOMALIES;
– using only static memory allocation to avoid software ANOMALIES related to dynamic
memory allocation;
– using a restricted VERSION of a programming language to avoid structures which are likely
to lead to programming errors.
Protective measures for a MEDICAL DEVICE that uses software can be implemented in either
hardware or software. The design of a protective measure should demonstrate that the
protective measure is independent of the function to which it is applied. This is relatively easy
to achieve if a software protective measure is applied to hardware or vice-versa.
In choosing protective measures that are implemented in software and applied to software, it
is important to avoid the possibility of multiple failures arising from one cause. If a protective
measure detects and/or prevents a HAZARDOUS SITUATION, the MANUFACTURER should
demonstrate an adequate segregation between the protective measure and software features
that provide essential performance.
For example, software which provides treatment for a patient may run on one processor, while
the software which implements software protective measures runs on a separate processor.
The use of software in a MEDICAL DEVICE is likely to lead to more complex behaviour as seen
by the user. This is likely to lead to an increased reliance on information for SAFETY , ranging
from simple on-screen warnings to complex user manuals and defined training courses. The
size and complexity of such written material can be reduced by attention to good user-
interface design (see IEC 62366 [5]).
Many sequences of events can lead to HAZARDOUS SITUATIONS . It may not be possible or
necessary to apply a RISK CONTROL measure to each event in such sequences. It is sufficient
to apply RISK CONTROL measures to carefully selected events to reduce the overall probability
of HARM to an acceptable level.
In deciding which events should be detected, prevented, or made less likely to occur, it is
obviously helpful to map out the sequences of events that could lead to HAZARDOUS
SITUATIONS . While a Fault Tree Analysis (see 4.4.2) does not show sequences of events, it
can be used to identify such sequences. Correct operation of a RISK CONTROL measure would
appear as a FALSE input to an AND gate, leading to the prevention of HARM irrespective of
the other inputs to the AND gate. Figure 2 shows part of an FTA diagram in which an
incorrect software output (itself the result of a sequence of events) is prevented from
causing injury to a patient by a RISK CONTROL measure which is designed to detect unsafe
conditions and prevent the harmful effects of the output (for example by halting an action).
The incorrect software output can only cause injury to the patient if the RISK CONTROL measure
fails. Note that the sequence of events leading to the incorrect software output does not need
to be explored in detail to ensure that it cannot lead to an injury to the patient.
[Figure 2 – Part of an FTA diagram: a correctly operating RISK CONTROL measure appears as a
False input to the AND gate, preventing the incorrect software output from causing injury to
the patient – IEC 1838/09]
A RISK CONTROL measure which limits the range of an input to the software can prevent unsafe
outputs. Less obviously, it can also reduce the probability of the input leading to HARM due to
an ANOMALY in the software, because it reduces the probability that the software will operate
in unexpected ways that may not have been tested (see 4.4.3).
Limiting the range of an input to the software can be done using a software or hardware RISK
CONTROL measure. For example:
– a software RISK CONTROL measure can check input and reject unsafe or inconsistent
values;
– a hardware RISK CONTROL measure could consist of a locked room to prevent unauthorised
people from inputting data.
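The software option above might be sketched as follows; the parameter name and range limits are hypothetical examples, not values drawn from any standard:

```python
# Sketch of a software RISK CONTROL measure that rejects unsafe input
# values before they reach the treatment software. The infusion-rate
# parameter and its limits are illustrative assumptions only.

SAFE_RATE_ML_PER_H = (0.1, 999.0)   # assumed safe infusion-rate range

def accept_rate(value: float) -> float:
    low, high = SAFE_RATE_ML_PER_H
    if not (low <= value <= high):
        # Reject rather than silently clamp, so the unsafe entry is
        # surfaced to the user instead of being quietly altered
        raise ValueError(f"rate {value} outside safe range {low}-{high}")
    return value
```

Rejecting an input also has the secondary benefit described in the text: it keeps the software from operating in untested regions of its input space.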
A RISK CONTROL measure placed at the output of a MEDICAL DEVICE or its software could check
that the software’s output values are within a safe range and internally consistent, and could
prevent HARM if they are not. This could be achieved, for example, by:
– a software RISK CONTROL measure that checks the output values and prevents them from
departing from a safe range;
– a hardware RISK CONTROL measure that limits the energy applied to a patient;
– a RISK CONTROL measure consisting of a combination of a warning label and a hard-wired
STOP switch. This RISK CONTROL measure assumes that a competent operator is able to
detect the HAZARDOUS SITUATION.
In addition to RISK CONTROL measures applied at inputs and outputs of the MEDICAL DEVICE or
its software, RISK CONTROL measures may also be applied to inputs and outputs of software
components. This allows inputs and outputs of smaller parts of the software to be checked,
and HARM prevented.
It may not be possible to specify a single range for a parameter within which the MEDICAL
DEVICE operates safely. However, it may be possible to specify a “safe operating envelope”, in
other words a combination of parameters that form a boundary within which the MEDICAL
DEVICE operates safely. Software can be used to assess whether the MEDICAL DEVICE is
operating inside the safe operating envelope. For example, software could combine the
measured temperature of an applied part and the time of exposure to detect the possibility of
burning the patient.
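The applied-part example above can be sketched as a combined check. The temperature and exposure-time limits below are illustrative assumptions, not clinical values:

```python
# Sketch of a "safe operating envelope" check combining two parameters.
# Neither parameter alone defines safety; their combination does.
# The limits here are hypothetical, for illustration only.

def inside_envelope(temp_c: float, exposure_s: float) -> bool:
    if temp_c <= 41.0:
        return True                 # assumed safe at any exposure time
    if temp_c <= 43.0:
        return exposure_s <= 600.0  # elevated temperature: brief exposure only
    return False                    # assumed never safe above 43 °C

assert inside_envelope(40.0, 3600.0)       # low temperature, long exposure
assert inside_envelope(42.5, 300.0)        # elevated but brief
assert not inside_envelope(42.5, 1200.0)   # elevated and prolonged: unsafe
```

A single-parameter range check would pass the third case; only the combined envelope detects it.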
There are cases when the safe range for the software’s inputs and outputs is known by a
clinician but cannot be anticipated in the design of a MEDICAL DEVICE . In such cases, RISK
CONTROL measures can be used to ensure that the MEDICAL DEVICE does exactly what is
specified by the clinician. A hardware or software RISK CONTROL measure can be used to
detect inconsistencies between the software’s outputs and inputs.
For example, the clinician may prescribe treatments, using a MEDICAL DEVICE , which vary
widely from one patient to the next. A HAZARDOUS SITUATION cannot be detected only by
analysing input or output values. A software RISK CONTROL measure can nonetheless be
applied that ensures that the outputs of the MEDICAL DEVICE accurately match the inputs (the
intended prescription).
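A minimal sketch of such an output-to-input consistency check follows; the prescribed value and the tolerance are hypothetical assumptions:

```python
# Sketch of a RISK CONTROL measure that checks the device output against
# the clinician's input when no absolute safe range can be specified.
# The tolerance (1 % of the prescribed value) is an assumed example.

def output_matches_prescription(prescribed: float,
                                delivered_setpoint: float,
                                tolerance: float = 0.01) -> bool:
    # Compare output against input rather than against a fixed range,
    # since prescriptions vary widely from one patient to the next.
    return abs(delivered_setpoint - prescribed) <= tolerance * prescribed

assert output_matches_prescription(200.0, 200.0)       # output matches input
assert not output_matches_prescription(200.0, 150.0)   # mismatch detected
```

The design choice here is that the reference value travels with the prescription itself, so the check remains valid for every patient without device-side clinical limits.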
6.2.2.1 Overview
It can be useful to segregate SOFTWARE ITEMS and assign software SAFETY classes to the
SOFTWARE ITEMS to distinguish highly critical SOFTWARE ITEMS (e.g. those that could result in
death if they are defective) from SOFTWARE ITEMS that could not affect SAFETY. See Subclause
4.3 of IEC 62304:2006 for software SAFETY classification.
Assigning software SAFETY classes can serve as a basis for greater rigour and focus in
VERIFICATION and configuration management activities for more critical SOFTWARE ITEMS . If this
is done, side-effects should be carefully considered and the less critical SOFTWARE ITEMS
should be rated the same as any more critical SOFTWARE ITEMS they could affect. It should
also be noted that IEC 62304 allows for different methods to be used within an ACTIVITY or
TASK (see Subclause 5.1.4 of IEC 62304:2006, which requires that methods be defined). The
MANUFACTURER may decide to create a scheme for differentiating SOFTWARE ITEMS which have
the software SAFETY classification of Class C. For example, the MANUFACTURER may use a
more formal method of VERIFICATION (i.e., code inspection versus code review) for SOFTWARE
ITEMS with high complexity.
Note that SOFTWARE ITEMS could be classified initially as SAFETY -related and then through
certain RISK CONTROL measures or design choices be treated as less critical. Properly
performed RISK MANAGEMENT can result in reducing SAFETY -related software to the smallest
possible subset through isolation and inherent safe design.
Ensuring software SAFETY requires a variety of activities throughout the product development
LIFE-CYCLE. Reliability techniques, such as formal methods for failure analysis, do not by
themselves constitute a complete RISK MANAGEMENT method. It is also important to recognize
that reliability and SAFETY, although often related, are not identical attributes. A LIFE-CYCLE
PROCESS that focuses on reliability may not achieve adequate SAFETY.
Some specific RISK CONTROL measures are described in more detail and guidance is given on
how to address specific causes for RISKS in 6.2.2.2, 6.2.2.3, 6.2.2.4, 6.2.2.5, and 6.2.2.6.
6.2.2.2.1 Overview
Software ARCHITECTURE should describe features of the software used to control RISK by
inherently safe design, as well as software mechanisms for protective measures to reduce
RISK.
A HAZARD associated with a software control function might be avoided, for example, by using
hardware to implement the function. (Similarly, a HAZARD associated with a hardware function
(wear, fatigue) might be avoided by using software.)
A particular problem in SYSTEMS using software is the perception that there is no limit to the
extent to which software can share a physical infrastructure. This perception is false.
It is a normal rule of SYSTEM design that the SYSTEM should include sufficient resources to
perform all necessary TASKS when required. This rule should be applied to software as well as
to hardware. If a SOFTWARE ITEM has a role in SAFETY, then RISK ASSESSMENT should address
the following questions:
– Can the SAFETY-RELATED SOFTWARE ITEM gain access to its processor when needed?
– Can the SAFETY-RELATED SOFTWARE ITEM be guaranteed enough processor time to
complete its TASK before an unsafe state develops into an accident?
– Can it be demonstrated that no other SOFTWARE ITEM can corrupt or otherwise interfere
with the SAFETY-RELATED SOFTWARE ITEM?
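These questions can be made concrete with a simple deadline monitor. The sketch below is an illustrative Python design, not taken from this report; all names are hypothetical. A SAFETY-RELATED task records a heartbeat each cycle so a supervisor can detect when other SOFTWARE ITEMS starve it of processor time:

```python
import threading
import time

class DeadlineMonitor:
    """Tracks whether a SAFETY-RELATED task meets its scheduling deadline.

    The task calls heartbeat() on each completed cycle; a supervisor
    polls deadline_missed() to detect starvation or a hung task.
    """
    def __init__(self, deadline_s: float):
        self._deadline = deadline_s
        self._last = time.monotonic()
        self._lock = threading.Lock()

    def heartbeat(self) -> None:
        # Called by the safety task itself at the end of every cycle.
        with self._lock:
            self._last = time.monotonic()

    def deadline_missed(self) -> bool:
        # True if the task has not run within its required period.
        with self._lock:
            return (time.monotonic() - self._last) > self._deadline
```

A supervisor detecting a missed deadline would then drive the SYSTEM to its fail-safe state; the detection itself must of course be guaranteed its own processor time.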
If SAFETY-RELATED SOFTWARE has to share a processor with non-SAFETY-RELATED SOFTWARE,
then the above questions are particularly important, as SAFETY functions will compete for
resources with non-SAFETY functions (see 6.2.2.2.4 on segregation).
Development methods should be chosen to make all of the above issues visible to the
designer. For example, it is not enough to design a SAFETY-RELATED SOFTWARE ITEM as a
PROCESS which, all being well, will run when the operating system gets around to it. The
development method should support deliberate design of scheduling, priority and timing.
Many functions of a MEDICAL DEVICE may be required to be available to assure the SAFETY of
the patient or user. Such functions may include clinical functions that cannot be interrupted or
delayed, and functions that implement protective RISK CONTROL measures.
Fault tolerant design will usually make use of REDUNDANCY . This may be a simple duplication
of a vital component to provide continued operation when one component fails, or it may
consist of additional components which detect a failure and switch operation to an alternative
mode of operation, possibly with limited functionality.
Fault tolerance may be used to continue vital functions in the event of a software failure. In
this case, simple REDUNDANCY using multiple copies of the same software will probably be
insufficient, as the same defect will be present in each copy of the software.
In such cases, DIVERSITY is required. For example, additional software may be used to detect
a software error and to perform a recovery program. The additional software should avoid
sharing any features with the software that it monitors, thereby eliminating the possibility that
one software defect will cause the failure of both.
In more critical cases, two or more SOFTWARE ITEMS may perform the same function but they
may be independently designed and implemented, starting from a common specification. This
is “ DIVERSITY programming”. Note, however, that there is a tendency for the same mistakes to
be made by different development engineers, which would invalidate the DIVERSITY . Note also
that the common specification may contain an incorrect requirement. Finally, some method,
such as voting, must be used to ensure that the malfunctioning software is prevented from
having any effect. At least 3 diverse SOFTWARE ITEMS would be needed to implement a voting
scheme.
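A minimal voting scheme over three diverse outputs might look like the following sketch (illustrative Python, assuming numeric outputs and a configurable agreement tolerance; not a design from this report):

```python
def majority_vote(outputs, tolerance=0.0):
    """2-out-of-3 (or better) voter over diverse SOFTWARE ITEMS computing
    the same value from a common specification.

    outputs: results from at least three independently implemented items.
    Returns the agreed value, or raises if no majority exists, in which
    case the SYSTEM should enter its fail-safe state.
    """
    if len(outputs) < 3:
        raise ValueError("voting needs at least 3 diverse outputs")
    for candidate in outputs:
        agreeing = [o for o in outputs if abs(o - candidate) <= tolerance]
        if len(agreeing) > len(outputs) // 2:
            # Average the agreeing channels; a disagreeing (malfunctioning)
            # channel is masked and has no effect on the result.
            return sum(agreeing) / len(agreeing)
    raise RuntimeError("no majority among diverse outputs: fail safe")
```

Note the voter itself becomes a single point of failure, which is why voters are usually kept as simple as possible.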
When using REDUNDANCY , with or without DIVERSITY , to provide fault tolerance, it is important
to signal to the user that there is a failure. Otherwise, a fault tolerant MEDICAL DEVICE may
appear to operate safely when in fact it is operating with reduced SAFETY .
It is possible for software defects to lead to errors in unrelated software running on the same
hardware. The MANUFACTURER should select methods of segregating SAFETY-RELATED SOFTWARE
ITEMS from non-SAFETY-RELATED SOFTWARE ITEMS such that the non-SAFETY-RELATED SOFTWARE
ITEMS cannot interfere with the operation of the SAFETY-RELATED SOFTWARE ITEMS (see
Subclause 5.3.5 of IEC 62304:2006), and should demonstrate that the segregation is
effective. This includes demonstrating the appropriate use of resources (physical or time) by
the SOFTWARE ITEMS to avoid unintended contention between the items.
Effective segregation between SOFTWARE ITEMS must address the following possible ways in
which SOFTWARE ITEMS may be subject to unexpected interaction.
SOFTWARE ITEMS may interact in unintended ways when they contend for time on shared
hardware (for example, processors, storage devices and other input/output devices). This can
prevent a SOFTWARE ITEM from running at the intended time. The provision of sufficient
hardware is an architectural feature (see 6.2.2.2.2) which should be subject to adequate
specification and planning to ensure that sufficient time is available to run all SOFTWARE ITEMS
when required.
SOFTWARE ITEMS may co-exist in the same memory; this can cause one SOFTWARE ITEM
to unexpectedly change data belonging to another SOFTWARE ITEM. In extreme cases, one
SOFTWARE ITEM may accidentally change the code of another SOFTWARE ITEM. Many
processors and operating systems offer hardware-assisted methods of segregating memory
usage. Where such methods exist, they should always be used. Most such methods will guard
against unintended interaction even when there is a defect in one of the SOFTWARE ITEMS.
SOFTWARE ITEMS may also interact in unintended ways when they share variables, including
global variables, environment variables, and operating system parameters; this can result in
unintended communication between SOFTWARE ITEMS if there is a defect in one of the
SOFTWARE ITEMS. The sharing of variables between SOFTWARE ITEMS should be minimised. If it
is necessary, rules should be published to all engineers to ensure that the shared variables
are only changed by a small number of specified SOFTWARE ITEMS and that all other SOFTWARE
ITEMS only read the shared variables without changing them.
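One way to enforce such a single-writer rule is sketched below (hypothetical Python for illustration; real systems would typically rely on language visibility rules, code review of the published rules, or operating-system protection instead):

```python
class SharedValue:
    """Shared variable with an enforced single-writer rule.

    Only the SOFTWARE ITEM named as owner at construction may change the
    value; every other item gets read-only access.
    """
    def __init__(self, owner: str, initial=None):
        self._owner = owner
        self._value = initial

    def write(self, writer: str, value) -> None:
        # Reject writes from any item other than the designated owner.
        if writer != self._owner:
            raise PermissionError(f"{writer} may not modify this variable")
        self._value = value

    def read(self):
        return self._value
```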
The strongest form of segregation consists of running SOFTWARE ITEMS that should not interact
on separate processors. However, careful architectural design as recommended above may
provide an appropriate degree of segregation on a single processor.
Testing the SYSTEM in a lab environment may indicate sufficient physical and time resources
for the test cases given, while the application load or the execution environment in the field
(other processes running on the same machine) makes the software fail in a way that causes
HARM.
On the other hand, when test cases in the lab do reveal low performance and ill-considered
measures are taken to hastily speed up the software, these measures may break the design
and add other RISKS through unforeseen side-effects.
a) data flow corruption is prevented: non-SAFETY-related SOFTWARE ITEMS cannot modify
SAFETY-related data;
Events that cause any of the above to be violated, e.g. hardware failure, should be detected
and cause the SYSTEM to take necessary actions to ensure continued SAFETY .
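Item a) can be supported by storing SAFETY-related data together with an integrity check, as in this illustrative sketch (the CRC-32 approach and the 8-byte float encoding are assumptions for the example, not requirements of this report):

```python
import struct
import zlib

def pack_protected(value: float) -> bytes:
    """Store a SAFETY-related value together with a CRC-32 so that
    corruption by another SOFTWARE ITEM is detectable."""
    raw = struct.pack("<d", value)                    # 8-byte float
    return raw + struct.pack("<I", zlib.crc32(raw))   # + 4-byte CRC

def unpack_protected(blob: bytes) -> float:
    """Return the value, or raise if the stored data was corrupted,
    in which case the SYSTEM should take its fail-safe action."""
    raw, (crc,) = blob[:8], struct.unpack("<I", blob[8:])
    if zlib.crc32(raw) != crc:
        raise RuntimeError("SAFETY data corrupted: enter fail-safe state")
    return struct.unpack("<d", raw)[0]
```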
In many cases, it is not practical to avoid all HAZARDS by inherent safe design or to implement
fault-tolerance for all potential failures. In those cases, protective measures are the next best
approach to managing a potential HAZARD . These measures typically operate by detecting a
potentially HAZARDOUS SITUATION , and either intervening automatically to mitigate
consequences, or generating an alarm so that the user may intervene.
For example, a therapeutic x-ray SYSTEM may have an interlock SYSTEM using software logic
or hardware that shuts down the x-ray generator if any door to the facility is opened. The
interlock function has no role in delivering the therapy. Its sole purpose is to mitigate the
HARM of unintentional radiation exposure.
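Such an interlock can be sketched as a small latching state machine (an assumed design for illustration only; a real interlock would normally also involve hardware and a defined reset procedure):

```python
class XRayInterlock:
    """Illustrative software interlock: the generator may be enabled only
    while all facility doors are closed; any door opening latches the
    beam off until an explicit reset, independent of the therapy logic."""
    def __init__(self):
        self.beam_enabled = False
        self._latched_off = False

    def door_event(self, all_doors_closed: bool) -> None:
        if not all_doors_closed:
            self.beam_enabled = False
            self._latched_off = True   # stays off even if door re-closes

    def request_beam(self) -> bool:
        # Therapy logic asks for the beam; the interlock has the last word.
        if not self._latched_off:
            self.beam_enabled = True
        return self.beam_enabled

    def reset(self) -> None:
        # Deliberate operator action required after a door event.
        self._latched_off = False
```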
In some cases (i.e., where loss of MEDICAL DEVICE functionality does not pose a HAZARD),
SAFETY may be achieved at the expense of mission. For example, a failure of a laboratory
blood analyzer to provide a result may not in some cases be hazardous, but providing an
incorrect result could be. In this example, shutting down the analyzer when defensive
programming checks indicate unexpected faults, rather than continuing to operate, reduces
RISKS. In a fail-safe ARCHITECTURE, a SYSTEM or component fault, or other hazardous
condition, may lead to a loss of function, but in a manner that preserves the SAFETY of
operators and patients. In a fail-operational SYSTEM, the SYSTEM may continue to operate
safely, but with degraded performance (e.g. reduced capacity or slower response time).
An important class of RISK CONTROL measure is one which improves the probability of
preventing a HAZARDOUS SITUATION .
Consideration should be given to the frequency with which software RISK CONTROL measures
should run. The software RISK CONTROL measure should run sufficiently frequently to detect
the condition before it causes HARM .
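As an illustration, the check period can be sized against the shortest time in which the hazardous condition could progress to HARM (the factor-of-two margin below is an arbitrary assumption, not a value from this report):

```python
def max_check_period(time_to_harm_s: float, margin: float = 0.5) -> float:
    """Illustrative sizing rule: run the software RISK CONTROL check at
    least twice within the shortest interval in which the hazardous
    condition could progress to HARM, so one missed cycle is tolerated."""
    return time_to_harm_s * margin
```

For example, if a blocked line can cause HARM within 10 s, the check should run at least every 5 s under this rule.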
RISK CONTROL measures can reduce the probability of HARM originating from software
ANOMALIES . There will usually be places in the software ARCHITECTURE where a RISK CONTROL
measure can reduce the probability of HARM irrespective of the nature of preceding events. If
this is done carefully, it is not necessary to anticipate the exact nature of software ANOMALIES
in order to prevent them from causing HARM.
In cases where this approach is not practicable, for example if a preventive measure is
implemented in software, methods should be used to assure the integrity of the software (see
6.2.2.6).
When this is not possible, an effective software development PROCESS may be used to reduce
the probability of occurrence of the software ANOMALIES . There is strong consensus that
PROCESS RISK CONTROL measures are beneficial when considered in combination with other
types of RISK CONTROL measures and if defined in detail.
If it can be established that the confidence in the software to perform its intended function
reliably is very high, and the confidence that the software is fault-free is very high, the
software may be treated as a high-integrity component. To achieve this high level of
confidence, the MANUFACTURER must demonstrate that the software development PROCESS can
predictably produce highly reliable, fault-free software. The use of such a PROCESS can then
be claimed to reduce the probability of occurrence of software ANOMALIES .
It is accepted that increasing the rigour of the software development PROCESS can reduce the
number of software ANOMALIES . It should be noted that while testing on MEDICAL DEVICE
SOFTWARE can reduce the number of software ANOMALIES , it cannot be assumed that when the
software passes all planned tests, no software ANOMALIES remain. This is because in clinical
use the inputs to the software will include sequences that were not part of the planned tests.
Since MEDICAL DEVICE SOFTWARE is too complex to test exhaustively, rigorous testing can only
be viewed as a method to lower the probability of HAZARDOUS SITUATIONS . Testing by itself,
however, is not sufficient to establish confidence that the software can be treated as a high-
integrity component.
A starting point for defining a rigorous software development PROCESS would be to include the
activities and TASKS specified in IEC 62304 in the software development PROCESS . Additional
items to consider when developing a rigorous software development PROCESS might include:
– staff competency – skills, qualifications, experience and training (who develops the
software?);
– methods – the suitability of the specification, design, coding and testing methods (what is
the PROCESS of development?);
– rigour, formality, and scope of reviews and inspections (how much static analysis is
performed?);
– tools – quality of tools such as compilers, requirements TRACEABILITY tools, and
configuration management tools (what tools are used during development of the software?).
The use of SOUP is often determined during SYSTEM design. The higher the potential RISKS
of the MEDICAL DEVICE, the more closely potential failure modes of SOUP should be analyzed
and RISK CONTROL measures identified. SOUP generally cannot be modified to incorporate new
RISK CONTROL measures, nor is there adequate internal design information available to identify
all the potential HAZARDS caused by SOUP. Therefore the SYSTEM and software ARCHITECTURE
should be designed to provide the RISK CONTROL measures needed to monitor or isolate SOUP
to prevent it from causing HAZARDS if it fails.
When SOUP is included in the MEDICAL DEVICE , care must be taken to prevent compromising
MEDICAL DEVICE SAFETY . Such a situation may call for the introduction of “wrappers” or a
middleware ARCHITECTURE . The middleware may:
a) prevent the use of features of the SOUP that one does not want to use,
b) perform logical checks to ensure correct information is transferred between the SOUP and
the MEDICAL DEVICE SOFTWARE , or
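A wrapper of this kind might be sketched as follows (illustrative Python; the feature names, the pressure range, and the blocking policy are all hypothetical assumptions for the example):

```python
class SoupWrapper:
    """Illustrative middleware 'wrapper' around a SOUP component.

    It exposes only the SOUP features the MEDICAL DEVICE SOFTWARE needs
    and performs logical checks on information crossing the boundary.
    """
    ALLOWED = {"read_pressure"}            # permitted SOUP features

    def __init__(self, soup):
        self._soup = soup

    def call(self, feature: str, *args):
        # a) prevent use of SOUP features we do not want to use
        if feature not in self.ALLOWED:
            raise PermissionError(f"SOUP feature '{feature}' is blocked")
        result = getattr(self._soup, feature)(*args)
        # b) logical check on the value returned by the SOUP
        if not (0.0 <= result <= 300.0):   # assumed plausible mmHg range
            raise ValueError("implausible SOUP output: take fail-safe action")
        return result
```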
Another critical issue related to SOUP is the use of commercial operating and communication
systems. An ARCHITECTURE should be established that permits changes to the software
platform (e.g. for stability or SECURITY), based on a thorough RISK ASSESSMENT, without
compromising SAFETY. Such a RISK ASSESSMENT should include an analysis of the frequency of
changes necessary to ensure MEDICAL DEVICE SAFETY integrity, such as installing network
SECURITY patches.
The MANUFACTURER shall implement the RISK CONTROL measure(s) selected in 6.2.
Implementation of each RISK CONTROL measure shall be verified. This VERIFICATION shall be
recorded in the RISK MANAGEMENT FILE.
The effectiveness of the RISK CONTROL measure(s) shall be verified and the results shall be
recorded in the RISK MANAGEMENT FILE.
NOTE The VERIFICATION of effectiveness can include validation activities.
Once RISK CONTROL measures are identified they need to be implemented and their
effectiveness verified.
VERIFICATION that the RISK CONTROL measures have been properly implemented and are
effective at controlling the RISK is essential for software. Both analysis and testing are likely to
be necessary. Key aspects to consider include:
a) TRACEABILITY to assure that all SAFETY-RELATED SOFTWARE ITEMS are identified and all
SAFETY-related functionality is specified, implemented, and tested in all relevant VERSIONS
and variants (e.g. for different platforms, languages, or MEDICAL DEVICE models) of the
software;
b) greater rigour and coverage when testing RISK CONTROL measures including testing under
a wide range of abnormal and stress conditions;
c) focus on regression testing RISK CONTROL measures and SAFETY -related functionality when
changes are made, even if the changes are not intended to affect SAFETY .
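Aspect a) can be partially automated. The sketch below (hypothetical requirement and test-case identifiers) reports SAFETY-related requirements that no test case covers:

```python
def untested_safety_requirements(requirements, trace):
    """Simple TRACEABILITY gap check: given a set of SAFETY-related
    requirement IDs and a mapping requirement -> list of test-case IDs,
    return the requirements with no covering test, sorted for review."""
    return sorted(r for r in requirements if not trace.get(r))
```

Running such a check on every build keeps the traceability record from silently decaying as versions and variants accumulate.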
If a process is used that is thought to be rigorous enough to create high-integrity software, the
results must still be verified.
ANOMALY information collected during VERIFICATION and validation activities is often useful if
root cause analysis is performed and anomaly criticality ratings (see Subclause 9.1 of
IEC 62304:2006) associated with SAFETY are tracked and evaluated. ANOMALIES with SAFETY
consequences can be evaluated to determine whether they were identified in the RISK
ASSESSMENT and whether the identified RISK CONTROL measures, once implemented, would
suffice for them. Data on software ANOMALIES may be used to demonstrate software
development PROCESS effectiveness, or to identify aspects of the software development
PROCESS that need improvement in order to claim that it reduces RISK.
The adequacy of software RISK CONTROL measures may not be as obvious as the adequacy of
hardware RISK CONTROL measures. For this reason, when dealing with software, one should
consider whether the documentation of the RISK ASSESSMENT requires formats other than
those traditionally used. One useful approach is to write “SAFETY cases” (see Annex E).
After the RISK CONTROL measures are applied, any RESIDUAL RISK shall be evaluated using the
criteria defined in the RISK MANAGEMENT plan. The results of this evaluation shall be recorded
in the RISK MANAGEMENT FILE.
If the RESIDUAL RISK is not judged acceptable using these criteria, further RISK CONTROL
measures shall be applied (see 6.2).
For RESIDUAL RISKS that are judged acceptable, the MANUFACTURER shall decide which
RESIDUAL RISKS to disclose and what information is necessary to include in the ACCOMPANYING
DOCUMENTS in order to disclose those RESIDUAL RISKS.
RESIDUAL RISK due to software needs to be included in the RESIDUAL RISK associated with the
MEDICAL DEVICE at the SYSTEM level. Given the difficulty in estimating the probability of
software ANOMALIES, RESIDUAL RISK EVALUATION usually involves determining whether all
sequences of events which lead to unacceptable RISK have RISK CONTROL measures to reduce
the probability of their occurrence or to limit the SEVERITY of HARM to an acceptable level as
defined in the RISK MANAGEMENT plan (see 3.4.1).
Any software ANOMALIES identified (during VERIFICATION and validation activities) that are not
corrected should be analyzed to determine if they affect SAFETY - RELATED SOFTWARE (see
Subclause 5.8.3 of IEC 62304:2006). If so, RISK from these ANOMALIES needs to be evaluated
and used in evaluating the RESIDUAL RISK of any HAZARDOUS SITUATION they can impact.
If the RESIDUAL RISK is not judged acceptable using the criteria established in the RISK
MANAGEMENT plan and further RISK CONTROL is not practicable, the MANUFACTURER may gather
and review data and literature to determine if the medical benefits of the INTENDED USE
outweigh the RESIDUAL RISK. If this evidence does not support the conclusion that the medical
benefits outweigh the RESIDUAL RISK, then the RISK remains unacceptable. If the medical
benefits outweigh the RESIDUAL RISK, then proceed to 6.6.
For RISKS that are demonstrated to be outweighed by the benefits, the MANUFACTURER shall
decide which information for SAFETY is necessary to disclose the RESIDUAL RISK.
The results of this evaluation shall be recorded in the RISK MANAGEMENT FILE.
The effects of the RISK CONTROL measures shall be reviewed with regard to:
Any new or increased RISKS shall be managed in accordance with 4.4 to 6.5.
The results of this review shall be recorded in the RISK MANAGEMENT FILE.
ISO 14971 does not prescribe a design and development PROCESS . One effect of this is that
when a RISK CONTROL measure has been implemented, ISO 14971 simply requires that it be
reviewed to ensure that it has not caused any further HAZARDS . This should not be interpreted
as an instruction to examine this question only after the implementation is complete.
For software RISK CONTROL measures, it is especially important that this review should not be
left until the software has been implemented. As soon as the software RISK CONTROL measure
has been specified, it should be placed under configuration management and reviewed to
discover adverse side effects including the inadvertent creation of new HAZARDS or HAZARDOUS
SITUATIONS .
Implementation of a RISK CONTROL measure that makes the software design significantly more
complex may increase the potential for additional software ANOMALIES or cause new
HAZARDOUS SITUATIONS. RISK CONTROL measures should be as simple as practicable and
should always be subjected to a new RISK ASSESSMENT.
This review should be repeated (as a minimum) after software design and after SOFTWARE
SYSTEM test.
The MANUFACTURER shall ensure that the RISK(S) from all identified HAZARDOUS SITUATIONS
have been considered. The results of this activity shall be recorded in the RISK MANAGEMENT
FILE.
Subclause 7.3.3 of IEC 62304:2006 should be considered when evaluating the completeness
of RISK CONTROL where software is a contributing factor in HAZARDOUS SITUATIONS.
After all RISK CONTROL measures have been implemented and verified, the MANUFACTURER
shall decide if the overall RESIDUAL RISK posed by the MEDICAL DEVICE is acceptable using the
criteria defined in the RISK MANAGEMENT plan.
If the overall RESIDUAL RISK is not judged acceptable using the criteria established in the RISK
MANAGEMENT plan, the MANUFACTURER may gather and review data and literature to determine
if the medical benefits of the INTENDED USE outweigh the overall RESIDUAL RISK. If this evidence
supports the conclusion that the medical benefits outweigh the overall RESIDUAL RISK, then the
overall RESIDUAL RISK can be judged acceptable.
For an overall RESIDUAL RISK that is judged acceptable, the MANUFACTURER shall decide which
information is necessary to include in the ACCOMPANYING DOCUMENTS in order to disclose the
overall RESIDUAL RISK.
The results of the overall RESIDUAL RISK EVALUATION shall be recorded in the RISK MANAGEMENT
FILE.
Compliance is checked by inspection of the RISK MANAGEMENT FILE and the ACCOMPANYING
DOCUMENTS.
The evaluation of overall RESIDUAL RISK requires implementation of all RISK CONTROL
measures. This includes the software being evaluated in the context of each different SYSTEM
configuration that the software is used in.
The results of SYSTEM test activities (with respect to all software functionalities and hardware
RISK CONTROL ) should be evaluated in conjunction with the acceptance criteria. All remaining
residual software ANOMALIES are to be documented in the RISK MANAGEMENT FILE and should
be evaluated to ensure that they do not contribute to an unacceptable RISK (see Subclauses 5.8.2
and 5.8.3 of IEC 62304:2006). Where necessary, the evaluation should be accepted by
independent interdisciplinary reviews with clinical/application experts. It may also be
necessary to include information in the ACCOMPANYING DOCUMENTS.
Prior to release for commercial distribution of the MEDICAL DEVICE, the MANUFACTURER shall
carry out a review of the RISK MANAGEMENT PROCESS. This review shall at least ensure that:
The results of this review shall be recorded as the RISK MANAGEMENT report and included in the
RISK MANAGEMENT FILE.
The responsibility for review should be assigned in the RISK MANAGEMENT plan to persons
having the appropriate authority [see 3.4 b)].
Clause 6 and Subclause 7.3.3 of IEC 62304:2006 should be considered for inclusion as part
of the review of the RISK MANAGEMENT PROCESS .
The MANUFACTURER shall establish, document and maintain a system to collect and review
information about the MEDICAL DEVICE or similar devices in the production and the
POST-PRODUCTION phases.
When establishing a system to collect and review information about the MEDICAL DEVICE, the
MANUFACTURER should consider among other things:
a) the mechanisms by which information generated by the operator, the user, or those
accountable for the installation, use and maintenance of the MEDICAL DEVICE is collected
and processed;
or
The system should also collect and review publicly available information about similar MEDICAL
DEVICES on the market.
This information shall be evaluated for possible relevance to SAFETY, especially the following:
- if previously unrecognized HAZARDS or HAZARDOUS SITUATIONS are present or
- if the estimated RISK(S) arising from a HAZARDOUS SITUATION is/are no longer acceptable.
1) the impact on previously implemented RISK MANAGEMENT activities shall be evaluated and
shall be fed back as an input to the RISK MANAGEMENT PROCESS and
2) a review of the RISK MANAGEMENT FILE for the MEDICAL DEVICE shall be conducted; if there is
a potential that the RESIDUAL RISK(S) or its acceptability has changed, the impact on
previously implemented RISK CONTROL measures shall be evaluated.
The results of this evaluation shall be recorded in the RISK MANAGEMENT FILE.
NOTE 1 Some aspects of POST-PRODUCTION monitoring are the subject of some national regulations. In such cases,
additional measures might be required (e.g. prospective POST-PRODUCTION evaluations).
Software RISK MANAGEMENT continues throughout the software LIFE - CYCLE , including the
software maintenance PROCESS (see Clause 6 of IEC 62304:2006) and software problem
resolution PROCESS (see Clause 9 of IEC 62304:2006).
The use of the software problem resolution PROCESS (see Clause 9 of IEC 62304:2006)
integrates the RISK MANAGEMENT activities in the investigation of the software problem and the
evaluation of the problem’s relevance to SAFETY . It is important to involve a multidisciplinary
team including clinical experts, software engineers, SYSTEM designers, and experts on
usability/human factors engineering in this investigation and evaluation of the problem (see
3.3).
SOUP is also an important aspect of the software maintenance plan and POST-PRODUCTION
RISK MANAGEMENT activities. Some SOUP, by its characteristics (e.g. virus protection software),
may be updated frequently, which the MANUFACTURER should take into consideration in the
software maintenance plan(s).
Released SOUP patches or updates from MANUFACTURERS may include additional functionality
that is not essential for the SAFETY and effectiveness of the MEDICAL DEVICE. These SOUP
updates should be analyzed for unnecessary components, which can be eliminated from the
medical software release to avoid unexpected changes that may lead to a HAZARDOUS
SITUATION.
As with any SOFTWARE ITEM change, the MANUFACTURER should know which SOFTWARE ITEMS
are affected by the SOUP update and perform regression testing (see Subclauses 7.4, 8.2, and
9.7 of IEC 62304:2006).
Annex A
(informative)
Discussion of definitions
Two key terms from ISO 14971 are HAZARD and HARM. Understanding these is
important for understanding the standard. HARM is physical injury or damage to the health of
people, or damage to property or the environment. HAZARD is defined as a “potential source of
HARM”, and there can be many causes for a HAZARD.
According to the definitions in ISO 14971:2007, a HAZARD cannot result in HARM until such
time as a sequence of events or other circumstances (including normal use) lead to a
HAZARDOUS SITUATION (see Figure A.1). This sequence of events includes both single events
and combinations of events. Annex D.2 of ISO 14971:2007 provides guidance on HAZARDS and
HAZARDOUS SITUATIONS. Annex E of ISO 14971:2007 provides examples of HAZARDS,
foreseeable sequences of events, and HAZARDOUS SITUATIONS.
[Figure A.1 – A HAZARD leads, through a sequence of events, to a HAZARDOUS SITUATION, which may result in HARM (IEC 1839/09)]
Unlike heat, electrical energy or suspended masses, software is not itself a HAZARD (a
potential source of HARM ); contact with software cannot cause injury. However, software may
cause a person to be exposed to a HAZARD, in other words it may contribute to a HAZARDOUS
SITUATION . Software failures (of any kind) often facilitate the transformation of a HAZARD into a
HAZARDOUS SITUATION .
Annex B
(informative)
Table B.1 lists functional areas of software often related to HAZARDS and provides examples of
conditions that are potential causes for HAZARDS . It also provides example questions to ask
during software development that can help lead to improved RISK CONTROL. Some of the
information in this table may not apply to all MEDICAL DEVICE SOFTWARE. The relevance for
specific MEDICAL DEVICES depends on the INTENDED USE of the MEDICAL DEVICE, the
SYSTEM-level design of the MEDICAL DEVICE, the role of the software in the MEDICAL DEVICE,
and other factors. This table is intended only as a starting point.
This table is not exhaustive but is an aid to the thought PROCESS used for developing safe and
effective software.
Functional area: Shutdown/safe mode/recovery
Example causes:
– Safe mode action is not adequate
– Safe mode action creates new HAZARDS
Questions to ask:
– Has the clinical staff reviewed safe mode scenarios?
– Is safe mode state apparent to the user?

Functional area: User interface
Example causes:
– Crowded or poor user interface masks the “real” alarm condition
– Actions to take in response to the alarm are not clear
Questions to ask:
– Has the clinical staff reviewed protective measures for usability?

Functional area: Log
Example causes:
– Persistent error is signalling a pending failure
– Logged alarms are associated with wrong patient
Questions to ask:
– Are detected errors logged?
– Is log large enough?
– Is log storage reliable?
– How is log cleared?
– Is the user aware when log is cleared?

Functional area: Audibility
Example causes:
– Background noise suppresses alarm sound
– Alarm volume so loud that operators find alternative means to disable alarm
– Audio SYSTEM has failed and user is unaware of failure
Questions to ask:
– Has INTENDED USE environment been considered for audible alarm design?
– Have users been involved in developing requirements for user interface design?
– How is audio SYSTEM verified per power-up or per patient?
Functional area: Data entry
Example causes:
– User input is an out-of-range value
– User inputs in-range but unintended value
Questions to ask:
– Does software check data entry for validity?
– Does it require supervisory user login to confirm highly critical inputs or error overrides?

Functional area: Accessibility
Example causes:
– Concealed shutoff control
– Touch screen controls do not function with surgical gloves
Questions to ask:
– How many “layers” must user navigate to access SAFETY-related function?

Functional area: Screen changes
Example causes:
– Alarm “automatically” causes display to switch screens
Questions to ask:
– Have all automatic screen switches been evaluated?

Functional area: User interface design
Example causes:
– Use of colours without regard for common forms of colour blindness
– User can’t determine which condition causes alarm
Questions to ask:
– How will colour blind operator interpret error message?
– Have users been involved in developing requirements for user interface design?
Functional area: Display – Diagnostic images
Example causes:
– Orientation reversal
– Image incorrectly associated with patient
Questions to ask:
– Is there a technique being used to ensure correct orientation of image?
– How are images associated with patient?
– What frequency content is required for display?

Functional area: Calibration/self-test, range checks, assay calibration (software-specific calibration)
Example causes:
– Poor instructions for use cause the user to fail to correctly calibrate the MEDICAL DEVICE, which results in erroneous calibration constant
– Auto-zeroing operation done with non-zero signal, i.e. unexpected pressure in cuff or in line, or force on transducer
Questions to ask:
– Does software perform a reasonableness or validity check of calibration values, i.e. slope or offset?
– Is user aware of auto-cal or auto-zero?

Functional area: Hardware fault detection
Example causes:
– Hardware fault is detectable but never reported to user; the MEDICAL DEVICE continues to be used in this condition
– Hardware fault occurs after power-up; software only checks for hardware fault at power-up
Questions to ask:
– Are all hardware faults reported to user?
– Should hardware fault be checked at power-up, before each treatment or session, or on a continuous basis such as once per second?

Functional area: Self-cleaning
Example causes:
– User aborts cleaning or disinfection PROCESS mid-cycle
Questions to ask:
– Does software enforce completion of cycle?
– Can software detection of incomplete cleaning or disinfection cycle be defeated?
Display
  Cause: Displayed value is not being updated but user is unaware
  Question: How is the user made aware of “frozen” display?
  Cause: Display writes are performed at two or more priority levels
  Question: Is video “context” saved before pre-emption?

Measurement
  Cause: Wrong data acquisition timing or sampling rate
  Questions: Is sampling rate appropriate for frequency content of signal? Is the measurement value stored in consistent units throughout software layers?
Interfaces
  Bad argument passing
    Causes: Function passes value in microlitres but driver expects value in millilitres; bad pointer passed; pointer to volatile memory passed and values are lost before processing
    Questions: Does each software function verify passed parameters? Does software language support more robust type checking? Is software designed with consistent units for values throughout the software package? Are arguments modified at higher priority processing layer?
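The questions above on parameter verification and unit consistency can be illustrated in C. Wrapping each unit in its own struct type lets the compiler reject a microlitre value passed where millilitres are expected; the type names and pump limit are hypothetical:

```c
#include <stdbool.h>

/* Distinct wrapper types: passing microlitres_t where millilitres_t
   is expected is a compile-time error, not a silent 1000x dose error. */
typedef struct { double value; } microlitres_t;
typedef struct { double value; } millilitres_t;

static millilitres_t to_millilitres(microlitres_t ul)
{
    millilitres_t ml = { ul.value / 1000.0 };
    return ml;
}

/* Illustrative driver limit; the driver verifies its passed
   parameter instead of trusting the caller. */
#define PUMP_MAX_ML 50.0

bool pump_deliver(millilitres_t volume)
{
    if (volume.value < 0.0 || volume.value > PUMP_MAX_ML)
        return false;           /* reject out-of-range argument */
    /* ... command the pump hardware here ... */
    return true;
}
```

A caller must convert explicitly, e.g. `pump_deliver(to_millilitres(dose))`, which keeps the unit decision visible at every interface.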
Network
  Causes: Software enters endless loop waiting on response from host computer; MEDICAL DEVICES on network are given identical “names” resulting in data loss; networking data processing dominates CPU cycles, starving SAFETY or INTENDED USE functions
  Questions: Has software been designed to tolerate any physical network connection condition? Can remote connection degrade SYSTEM performance by repeatedly sending commands or bogus data? Does the MEDICAL DEVICE check that the network name is not already in use?
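One way to avoid the endless-loop-waiting-on-host cause listed above is a bounded wait. This C sketch takes the poll and clock functions as parameters so the logic is testable on a host; all names are illustrative, and the fake clock and polls exist only to exercise the loop:

```c
#include <stdbool.h>
#include <stdint.h>

typedef bool (*poll_fn)(void);          /* true when host has responded */
typedef uint32_t (*clock_ms_fn)(void);  /* monotonic millisecond clock */

/* Poll for a host response with a deadline instead of looping forever.
   On expiry the caller must enter a safe state rather than hang. */
bool wait_for_host(poll_fn poll, clock_ms_fn clock_ms, uint32_t timeout_ms)
{
    uint32_t start = clock_ms();
    while ((uint32_t)(clock_ms() - start) < timeout_ms) {
        if (poll())
            return true;        /* response arrived in time */
    }
    return false;               /* deadline expired */
}

/* Demonstration stubs: a fake clock that advances one "ms" per call,
   and polls that never / always succeed. */
static uint32_t fake_now;
static uint32_t fake_clock_ms(void) { return fake_now++; }
static bool poll_never(void) { return false; }
static bool poll_ready(void) { return true; }
```

On a real device the clock would be a hardware timer and the poll a non-blocking read of the network stack; the unsigned subtraction keeps the comparison correct across counter wrap-around.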
Data
  Clinical information
    Cause: SYSTEM accesses wrong patient RECORD and user interface display does not make it apparent
    Question: Can there be display of multiple independent identifiers to put the user in the loop of detecting mix-ups?
    Cause: SYSTEM stores data from patient in wrong archive
    Question: Can critical identifiers be embedded with actual data as a cross-check?

  Reports
    Cause: Report provides incorrect data or identifies it in the wrong sequence or without units
    Questions: What reports will be used for clinical purposes? What is the SEVERITY of HARM if the data is incorrect? How likely is it that a clinician would notice the problem?

  Databases
    Cause: Data corruption due to side-effects from SYSTEM level failures or SOUP
    Questions: How can data corruption be detected prior to use of the data? Can this be done with each use instead of only at start-up?
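Detecting corruption at each use, as the questions above suggest, can be done with a CRC recomputed every time a record is read rather than only at start-up. A minimal C sketch, with an illustrative record layout:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* CRC-16/CCITT over a byte buffer. */
static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Illustrative stored record: payload plus its CRC, set when stored. */
typedef struct {
    uint8_t  payload[16];
    uint16_t crc;
} record_t;

void record_seal(record_t *r)
{
    r->crc = crc16_ccitt(r->payload, sizeof r->payload);
}

/* Called before every use of the data, not only at start-up. */
bool record_is_intact(const record_t *r)
{
    return crc16_ccitt(r->payload, sizeof r->payload) == r->crc;
}
```

Embedding a patient identifier inside the payload and checking it against the record key would additionally catch the wrong-archive cause listed above.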
Diagnostic

Performance
  Capacity/load/response time
    Causes: Critical timing is affected during peak load; sequence of transactions/inputs/outputs is affected or data is lost under peak loads; motor control is affected under peak SYSTEM loads
    Questions: During peak load or when capacity limits are reached, will data or timing be lost or affected in undetectable ways? Will inputs and outputs be queued up in a correct deterministic sequence under peak loads? Have critical functionality and RISK CONTROL measures been tested under these stress conditions? Have RISK CONTROL measures been implemented to detect limits? Can interrupts be set to segregate critical time-constrained functionality from other functionality?
Table B.2 identifies examples of these potential software causes of HAZARDS and some possible RISK CONTROL measures to consider for each example. Requirements-based SYSTEM testing is often ineffective at identifying these types of software causes, or at verifying the associated RISK CONTROL measures and their effectiveness. The columns on the right of the table provide guidance on the type of static or dynamic VERIFICATION method(s) that might be appropriate for each example. Table B.3 provides examples of static or dynamic VERIFICATION method(s).
Table B.2 – Examples of software causes that can introduce side-effects (continued)

VERIFICATION types: Inspection; Test (unit, integration)
Hardware-related
Cause: EEPROM usage: long access times, wear-out
Measures: Use special access modes (Page/Burst mode), write only when data changes, cache writes and update EEPROM only at power-loss
Verification: Ø T

Cause: CPU/Hardware failures
Measures: Power-on CPU check, program image CRC check, RAM test, clock check, watchdog check, non-volatile storage check, timeouts and reasonability checks on hardware responses, short-to-power/ground checks on sensors, test sensor response with known signal
Verification: Ø

Cause: Noise
Measures: Debounce digital inputs, filter analogue inputs, all interrupts – used and unused – have interrupt service routines (ISRs)

Cause: Peripheral interface ANOMALIES
Measures: Start-up delays for ADC/DAC, verify timing and other interface requirements are always met, reasonability checks
Verification: Ø T
Timing
Cause: Race conditions
Measures: Identify and protect (lock) shared resources during updates, implement a single, non-re-entrant PROCESS to handle all accesses to the shared resource, shared resource analyses
Verification: S

Cause: Missed time deadlines
Measures: Specified timing requirements, appropriate real-time design algorithms, eliminate priority-inversion and deadlock issues by design, avoid non-deterministic timing constructs, verify all timing assumptions in completed code
Verification: T
NOTE Non-deterministic timing constructs include: recursive routines, waiting for a response from hardware, dynamic memory allocation, and virtual memory (swapping memory pages to/from disk).
Cause: Missed interrupts
Measures: Simple interrupt ARCHITECTURE (fewest priorities), ISRs are short and fast, disabling of interrupt(s) is infrequent and of short duration, appropriate real-time design
Verification: T

Cause: Excessive jitter in outputs
Measures: ISRs are short and fast, avoid non-deterministic timing constructs, appropriate real-time design, update all outputs at beginning of periodic TASK or in high-priority ISR
Verification: T

Cause: Watchdog time-outs
Measures: Determine longest time between watchdog timer resets and set time-out appropriately; set time-out to longest period possible while still avoiding HAZARDOUS SITUATION
Verification: T
Moding
Cause: Abnormal termination
Measures: Exit traps, run-time error traps, appropriate watchdog timer design, validity checks on all program inputs, power-up self-test, remove all “debugging” code and other non-production features from program before release, ensure “special” modes (service, manufacturing) cannot be entered inadvertently
Verification: D

Cause: Power loss/recovery/sequencing problems
Measures: Power-up checks: CPU, program image CRC, RAM, clock, watchdog, non-volatile storage, peripherals, etc.; appropriate state design; initialization of time-averaged values; re-initialization of peripherals; store/recover SYSTEM state from non-volatile storage; external voltage monitor/reset circuitry

Cause: Start-up/shut-down ANOMALIES
Measures: Power-up checks (see above), proper initialization of peripherals and data, appropriate non-volatile memory usage, appropriate state design

Cause: Enter/exit of low power modes
Measures: Appropriate interrupt design
Verification: Ø T
Data issues
Cause: Volatile data
Measures: Verify volatile storage class is used for all data changed by hardware, ISRs, or different priority TASKS
Verification: Ø S

Cause: Unintended aliasing
Measures: Sample data at least 2× faster than the largest frequency component of the signal (Nyquist criterion), limit the bandwidth of the signal

Cause: Use of intermediate data
Measures: Ensure that any data which is assumed to be synchronous in time is updated all at once, shared resource analysis
Verification: Ø
NOTE A sequence of calculations should never be performed on a global (or shared) variable when the calculation can be interrupted or pre-empted. Instead, perform all the calculations on a temporary variable and update the global variable with a single, un-interruptible instruction.
Interface issues
Miscellaneous
Cause: SYSTEM deadlocks
Measures: Simple locking strategies (PROCESS can only have one resource locked at a given time), deadlock analysis
Verification: S

Cause: Re-entrancy
Measures: Ensure that all functions, including those in third-party libraries, which are called from interrupts (or multiple TASKS of different priorities) are designed to be re-entrant
Verification: D

Cause: Stack overflow
Measures: Run-time stack guards, high-water marks, stack analysis
Verification: S

Cause: Logic errors/syntax
Measures: Use source code analysis tool (such as Lint) and/or max. compiler warning level; dual DIVERSITY and cross-checks at critical control points
Verification: Ø Ø S

Cause: Infinite loops
Measures: Loop counters, loop timeouts, watchdog timer
Verification: Ø Ø
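The NOTE under "Use of intermediate data" above — compute on a temporary and commit the shared variable in one un-interruptible step — can be sketched in C. The interrupt-masking macros are placeholders for the platform's primitives (no-ops here so the sketch runs on a host), and the variable names are illustrative:

```c
#include <stdint.h>

/* Placeholders for the platform's interrupt-masking primitives. */
#define DISABLE_INTERRUPTS()  ((void)0)
#define ENABLE_INTERRUPTS()   ((void)0)

volatile int32_t g_flow_rate;   /* shared with an ISR */

void update_flow_rate(int32_t raw, int32_t slope, int32_t offset)
{
    /* All intermediate math happens on a local temporary, so an
       interrupt can never observe a half-computed value. */
    int32_t tmp = raw * slope;
    tmp += offset;

    DISABLE_INTERRUPTS();
    g_flow_rate = tmp;          /* single committed write */
    ENABLE_INTERRUPTS();
}
```

On targets where a 32-bit store is not atomic, the interrupt mask around the commit is what makes the update indivisible; on targets where it is, the temporary alone suffices.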
There are many methods available to facilitate assurance that RISK CONTROL methods are
likely to perform as intended, some more resource-intensive than others. No single method is
sufficient. Some of these methods are identified below in Table B.3.
Table B.3 – Methods to facilitate assurance that RISK CONTROL methods are likely to
perform as intended
Testing should account for a variety of types of tests (e.g. stress, boundary, timing, power failure, fault, SOUP failure, etc.) to assure SAFETY-RELATED SOFTWARE is tested under an adequate range of conditions rather than focusing exclusively on requirements-based testing.
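Boundary testing, one of the test types listed above, can be sketched in C; the rate limits and the function under test are hypothetical:

```c
#include <stdbool.h>
#include <assert.h>

/* Hypothetical range check used to illustrate boundary testing. */
#define RATE_MIN 1
#define RATE_MAX 500

bool rate_is_valid(int rate_ml_per_h)
{
    return rate_ml_per_h >= RATE_MIN && rate_ml_per_h <= RATE_MAX;
}

/* Boundary tests exercise both sides of each limit, not just nominal
   values, so off-by-one errors in the comparisons are caught. */
void boundary_tests(void)
{
    assert(!rate_is_valid(RATE_MIN - 1));
    assert( rate_is_valid(RATE_MIN));
    assert( rate_is_valid(RATE_MAX));
    assert(!rate_is_valid(RATE_MAX + 1));
}
```

The same pattern extends to timing and capacity limits: test at, just below, and just above each specified boundary.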
Annex C
(informative)
Table C.1 lists potential software-related pitfalls to avoid during RISK MANAGEMENT activities (per ISO 14971:2007 clauses) and during the software LIFE-CYCLE (per IEC 62304:2006 clauses).
• Assuming it is always sufficient to use RISK CONTROL measures at the beginning or end of software sequences of events leading to HAZARDOUS SITUATIONS.
– Running unit tests on critical components only once, early in the development PROCESS , without repeating
them as part of regression testing.
– Focusing testing exclusively on dynamic, black-box, SYSTEM level techniques and not performing static
and dynamic white-box VERIFICATION .
IEC 62304:2006 Subclauses 5.6 – 5.7 Software integration, integration testing, and
SYSTEM testing
– Failure to involve those with adequate clinical knowledge in the evaluation of residual ANOMALIES .
– Evaluating the significance of residual ANOMALIES based solely on the functional symptom detected rather than on a complete root cause analysis to determine all potential side-effects under a range of conditions.
– Overlooking the fact that once a third party is no longer distributing a specific SOUP VERSION, that VERSION will not be available for failure investigations and field corrections.
– Failing to archive a specific tool and its VERSION (such as a compiler), with the result that the ability to re-create the specific software VERSION is lost.
– The life of the MEDICAL DEVICE might well be longer than the life of the current archive media. As the
MANUFACTURER replaces old archive media with new, they should plan a migration path for old archives
onto the new media.
IEC 62304:2006 Subclauses 6.2 - 6.3 Problem and modification analysis and
implementation
– Assuming that a small functional change cannot affect SAFETY.
– Expanding the use of the MEDICAL DEVICE to new target populations, new disease indications, new types of
users (e.g. nurses instead of surgeons) or new platforms without re-examining the existing RISK CONTROL
measures and the appropriateness of the user interface.
– Setting problem resolution resource priorities based on the reported symptoms of field problems without
determining root cause and potential side-effects.
– Software designed for non-clinical purposes (e.g. billing) contains clinical data which subsequently is
distributed for clinical purposes without appropriate RISK MANAGEMENT .
Annex D
(informative)
Table D.1 lists the development activities of IEC 62304:2006 and associated software RISK MANAGEMENT activities. Note that this table is not intended to represent a strict sequential waterfall LIFE-CYCLE.
a) the scope of the planned RISK MANAGEMENT activities, identifying and describing the MEDICAL
DEVICE and the LIFE-CYCLE phases for which each element of the plan is applicable;
5.2 Software requirements analysis
• Analyze INTENDED USE and reasonably foreseeable misuse and users to identify HAZARDS
• Identify known and foreseeable HAZARDS in relationship to clinicians, patients, and service personnel and anyone else that comes in contact with the MEDICAL DEVICE
• Take into account product stages, such as installation and/or assembly, training, usage, upgrade and maintenance
• Identify sequences or combinations of events that could result in HAZARDOUS SITUATIONS

• Decide whether RISK reduction is required for the identified RISK
• Determine if software RISK CONTROL measures alone would be adequate or if hardware RISK CONTROL measures are necessary or desirable and feasible

• Identify software RISK CONTROL measures for identified RISKS (e.g. caused by hardware failures and user error)
• Initial identification of software RISK CONTROL measures for software failures that could impact design (e.g. inherent safe design or protective measures)
• Identify software to increase detectability, reduce SEVERITY and reduce probability of a HAZARDOUS SITUATION
• Review RISK CONTROL measures for new HAZARDS
5.5 SOFTWARE UNIT implementation and VERIFICATION
• Identify additional potential causes for HAZARDS
• Evaluation of each test failure for similar code implementations

• Re-evaluation of adequacy of RISK CONTROL measures by challenging RISK CONTROL measures under a range of conditions and by testing with representative users in representative environments

• Verify RISK CONTROL measures under a range of conditions on the range of platforms
• Regression testing of RISK CONTROL measures prior to final release
• Complete TRACEABILITY and coverage analysis to assure RISK CONTROL measures are implemented and tested
6 Maintenance PROCESS
6.1 Establish software maintenance plan
• Plan how RISK MANAGEMENT will be performed for changes, enhancements, and fixes and how field usage information will be monitored and analyzed to assess adequacy of RISK CONTROL and opportunities for additional RISK reduction
6.3 Modification implementation
• Similar to development PROCESS but with a focus on whether changes:
  • affect existing RISK CONTROL measures;
  • introduce new causes for HAZARDS;
  • introduce new INTENDED USE functionality that introduces new HAZARDS;
  • require regression testing of SAFETY-related code.
• Software release according to Subclause 5.8 of IEC 62304:2006
Annex E
(informative)
SAFETY cases
While the concept of SAFETY cases is widely known in industries such as military SYSTEMS, the offshore oil industry, rail transport and the nuclear industry, the technique is not mandatory for the MEDICAL DEVICE industry, nor is this annex intended to introduce additional requirements beyond ISO 14971.
This technical report proposes that a SAFETY case could be a means of structuring,
documenting, and communicating the demonstration of an adequate level of SAFETY of a
MEDICAL DEVICE . The SAFETY case could also assist in ensuring SAFETY is maintained
throughout the lifetime of the MEDICAL DEVICE .
A SAFETY case uses the results of the RISK MANAGEMENT PROCESS to articulate why the
software is safe enough for its INTENDED USE and why it meets all regulatory requirements
(and could do so in the relevant regulatory terminology).
One could view a SAFETY case as a RISK MANAGEMENT or RESIDUAL RISK summary with
references to more detailed documentation for supporting information and the evidence in the
RISK MANAGEMENT FILE . It could also include cross references to demonstrate specification and
test coverage for all RISK CONTROL measures.
For more information on elements and construction of a SAFETY case see A Methodology for
Safety Case Development [9].
Two papers that provide a good introduction to SAFETY cases and goal structured notation are
Systematic Approach to Safety Case Management [10] and The Goal Structuring Notation – A
Safety Argument Notation [11].
Bibliography
NOTE The Joint Working Group does not endorse the content of any of the technical reference books listed. They are offered by way of providing additional information relating to the guidance of applying the requirements of ISO 14971 to MEDICAL DEVICE SOFTWARE.
[1] ISO 13485, Medical devices – Quality management systems – Requirements for
regulatory purposes
[2] IEC 60812, Analysis techniques for system reliability – Procedure for failure mode and effects
analysis (FMEA)
[4] IEC 61882, Hazard and operability studies (HAZOP studies) – Application guide
[5] IEC 62366, Medical devices – Application of usability engineering to medical devices
[6] IEC 80001-1 2) , Application of risk management to information technology (IT) networks
incorporating medical devices – Part 1: Roles, responsibilities and activities
[7] PULLUM, L. Software fault tolerant techniques and implementation. Boston: Artech House,
2001
[8] BANATRE, M., LEE, P. (Eds)., Hardware and Software Architectures for Fault Tolerance:
Experiences and Perspectives. Berlin, Germany: Springer-Verlag, 1994
[9] BISHOP, P., BLOOMFIELD, R. (1998), A Methodology for Safety Case Development,
Safety-Critical Systems Symposium
http://www.adelard.co.uk/resources/papers/pdf/sss98web.pdf
[10] KELLY, T. P., Systematic Approach to Safety Case Management, Proceedings of SAE
2004 World Congress, Detroit, March 2004 (Proceedings published by the Society for
Automotive Engineers)
[11] WEAVER, R. A., KELLY, T. P., The Goal Structuring Notation – A Safety Argument
Notation, Proceedings of the Dependable Systems and Networks 2004 Workshop on
Assurance Cases, July 2004
—————————
2) To be published.
Index
ACCOMPANYING DOCUMENT, 37
ACTIVITY, 29
ANOMALY, 5, 11, 17, 20, 22, 23, 24, 26, 27, 30, 32, 33, 34, 35, 36, 37, 39, 41, 48, 52, 53, 55
ARCHITECTURE, 9, 10, 26, 29, 32, 33, 34, 48, 54
DELIVERABLE, 9
DIVERSITY, 30, 50
    Definition, 7
HARM, 5, 14, 17, 19, 20, 22, 23, 24, 26, 27, 28, 31, 32, 35, 40, 41, 46, 52, 53, 56
HAZARD, 5, 9, 12, 14, 16, 17, 19, 20, 21, 22, 23, 24, 29, 31, 32, 33, 36, 40, 41, 42, 47, 52, 53, 54, 55, 56, 57, 58
    foreseeable, 19
    identification, 19, 20
HAZARDOUS SITUATION, 5, 7, 9, 12, 16, 17, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 31, 32, 33, 35, 36, 39, 40, 41, 48, 52, 53, 54, 56
INTENDED USE, 12, 17, 18, 19, 42, 45, 52, 56, 58, 59
LIFE-CYCLE, 8, 28, 29, 39, 52, 54, 56, 57
    MEDICAL DEVICE, 11
    product development, 29
    software, 5, 9
    software development, 9, 17
MANUFACTURER, 11, 12, 14, 15, 16, 18, 19, 20, 22, 24, 26, 29, 33, 39, 55, 56
MEDICAL DEVICE, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 24, 25, 26, 28, 30, 32, 33, 34, 35, 36, 37, 38, 39, 42, 43, 44, 45, 46, 52, 53, 54, 55, 56, 59, 60
    non-, 18
MEDICAL DEVICE SOFTWARE, 11, 33, 39, 42
POST-PRODUCTION, 39
    information, 14, 38, 56
    monitoring, 14
PROBLEM REPORT, 13, 16
PROCEDURE, 15, 39, 52
PROCESS, 9, 15, 24, 29, 30, 32, 33, 36, 39, 42, 43, 44, 46, 48, 54, 55, 58
    configuration management, 36
    design, 9
    outsourced, 11
    product development, 28
    risk analysis, 52
    risk management, 6, 7, 9, 11, 15, 17, 20, 24, 38, 39, 59
    software, 11, 16
    software development, 9, 13, 15, 17, 24, 32, 33, 34
    software problem resolution, 39
    usability, 18
RECORD, 16, 18, 46
REDUNDANCY, 5, 7, 30, 57
    Definition, 7
RESIDUAL RISK, 14, 24, 35, 37, 39, 59
RISK, 5, 8, 9, 10, 13, 14, 15, 17, 18, 21, 22, 23, 24, 25, 29, 30, 31, 32, 33, 34, 35, 36, 37, 52, 54, 55, 56, 57, 58
    acceptance criteria, 14
    manager, 12
RISK ANALYSIS, 5, 17, 22, 52
RISK ASSESSMENT, 9, 29, 34, 36, 54, 55
RISK CONTROL, 20, 25, 36, 37, 42, 47, 50, 53, 55, 56, 57, 58
    measure, 7, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 22, 24, 25, 26, 27, 28, 29, 30, 32, 33, 34, 35, 36, 37, 43, 47, 52, 53, 54, 55, 56, 57, 58, 59
    methods, 28, 51
RISK ESTIMATION, 22, 23, 24
RISK EVALUATION, 9, 24, 35
RISK MANAGEMENT, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 20, 26, 28, 29, 39, 52, 53, 54, 55, 56, 57, 58, 59
    plan, 14, 15, 24, 35, 56
    process, 6, 7, 9, 11, 15, 17, 20, 24, 38, 39, 59
    report, 37
    team, 12
RISK MANAGEMENT FILE, 15, 16, 18, 37, 59
SAFETY, 5, 6, 8, 9, 10, 16, 17, 18, 20, 26, 28, 29, 30, 31, 32, 33, 34, 39, 43, 44, 45, 54, 55, 56, 57, 59
    case, 34, 59
    information for, 10, 26
    -related, 10, 16, 29, 30, 31, 34, 43, 53, 54, 57, 58
    requirements, 10
    software safety class, 24, 28, 29
SAFETY-RELATED SOFTWARE, 10, 29, 30, 31, 34, 35, 47, 51, 55
    Definition, 7
    non-, 30
SECURITY, 34, 52
SEVERITY, 14, 17, 22, 23, 24, 35, 46, 53, 54, 56
SOFTWARE ITEM, 9, 10, 16, 20, 22, 28, 29, 30, 31, 34, 39, 53
SOFTWARE SYSTEM, 12, 16, 27, 31, 36, 39
SOUP, 14, 22, 33, 34, 39, 49, 51, 52, 58
SYSTEM, 5, 6, 8, 9, 10, 18, 19, 20, 25, 26, 29, 31, 32, 33, 35, 37, 42, 44, 45, 46, 47, 48, 52, 54, 55, 59
    designer, 12, 20, 39
    requirements, 18
TASK, 5, 12, 13, 29, 33
TOP MANAGEMENT, 11
TRACEABILITY, 16, 33, 34, 54, 55, 57, 58
USE ERROR, 54
VERIFICATION, 15, 16, 29, 34, 35, 43, 47, 54, 56, 57
VERSION, 9, 14, 16, 23, 26, 49, 55, 58
___________
INTERNATIONAL
ELECTROTECHNICAL
COMMISSION
3, rue de Varembé
PO Box 131
CH-1211 Geneva 20
Switzerland
Tel: + 41 22 919 02 11
Fax: + 41 22 919 03 00
info@iec.ch
www.iec.ch