TECHNICAL SPECIFICATION
SPÉCIFICATION TECHNIQUE
TECHNISCHE SPEZIFIKATION

CLC/TS 50701
July 2021

ICS 35.030; 45.020

English Version

Railway applications - Cybersecurity

Applications ferroviaires - Cybersécurité
Bahnanwendungen - IT-Sicherheit

This Technical Specification was approved by CENELEC on 2021-05-11.

CENELEC members are required to announce the existence of this TS in the same way as for an EN and to make the TS available promptly
at national level in an appropriate form. It is permissible to keep conflicting national standards in force.

CENELEC members are the national electrotechnical committees of Austria, Belgium, Bulgaria, Croatia, Cyprus, the Czech Republic,
Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the
Netherlands, Norway, Poland, Portugal, Republic of North Macedonia, Romania, Serbia, Slovakia, Slovenia, Spain, Sweden, Switzerland,
Turkey and the United Kingdom.

European Committee for Electrotechnical Standardization


Comité Européen de Normalisation Electrotechnique
Europäisches Komitee für Elektrotechnische Normung

CEN-CENELEC Management Centre: Rue de la Science 23, B-1040 Brussels

© 2021 CENELEC All rights of exploitation in any form and by any means reserved worldwide for CENELEC Members.

Ref. No. CLC/TS 50701:2021 E



Contents Page

European foreword .................................................................................................................................. 6


Introduction .............................................................................................................................................. 7
1 Scope ...............................................................................................................................................8
2 Normative references .....................................................................................................................8
3 Terms, definitions and abbreviations...........................................................................................8
3.1 Terms and definitions ........................................................................................................................8
3.2 Abbreviations ..................................................................................................................................24
4 Railway system overview ............................................................................................................26
4.1 Introduction .....................................................................................................................................26
4.2 Railway asset model .......................................................................................................................27
4.3 Railway physical architecture model ...............................................................................................28
4.4 High-level railway zone model ........................................................................................................29
5 Cybersecurity within a railway application lifecycle ................................................................31
5.1 Introduction .....................................................................................................................................31
5.2 Railway application and product lifecycles .....................................................................................31
5.3 Activities, synchronization and deliverables ...................................................................................31
5.4 Cybersecurity context and cybersecurity management plan ..........................................................35
5.5 Relationship between cybersecurity and essential functions .........................................................35
5.5.1 General ...........................................................................................................................................35
5.5.2 Defence in depth ............................................................................................................................35
5.5.3 Security-related application conditions ...........................................................................................36
5.5.4 Interfaces between the safety and the cybersecurity processes ...................................................37
5.6 Cybersecurity assurance process ...................................................................................................38
6 System definition and initial risk assessment ..........................................................................39
6.1 Introduction .....................................................................................................................................39
6.2 Identification of the system under consideration ............................................................................40
6.2.1 Definition of the SuC.......................................................................................................................40
6.2.2 Overall functional description .........................................................................................................41
6.2.3 Access to the SuC ..........................................................................................................................41
6.2.4 Essential functions..........................................................................................................................41
6.2.5 Assets supporting the essential functions ......................................................................................42
6.2.6 Threat landscape ............................................................................................................................42
6.3 Initial risk assessment ........................................................................................................................42
6.3.1 Impact assessment.........................................................................................................................42
6.3.2 Likelihood assessment ...................................................................................................................43
6.3.3 Risk evaluation ...............................................................................................................................44
6.4 Partitioning of the SuC ....................................................................................................................45


6.4.1 Criteria for zones and conduits breakdown .................................................................................... 45


6.4.2 Process for zones and conduits breakdown .................................................................................. 45
6.5 Output and documentation ............................................................................................................. 46
6.5.1 Description of the system under consideration .............................................................................. 46
6.5.2 Documentation of the initial risk assessment ................................................................................. 46
6.5.3 Definition of zones and conduits .................................................................................................... 46
7 Detailed risk assessment ............................................................................................................ 47
7.1 General aspects .............................................................................................................................. 47
7.2 Establishment of cybersecurity requirements ................................................................................. 48
7.2.1 General ........................................................................................................................................... 48
7.2.2 Threat identification and vulnerability identification ........................................................................ 49
7.2.3 Vulnerability identification ............................................................................................................... 51
7.2.4 Risk acceptance principles ............................................................................................................. 51
7.2.5 Derivation of SL-T by explicit risk evaluation ................................................................................. 53
7.2.6 Determine initial SL ........................................................................................................................ 55
7.2.7 Determine countermeasures from EN IEC 62443-3-3 ................................................................... 56
7.2.8 Risk estimation and evaluation ...................................................................................................... 56
7.2.9 Determine security level target ....................................................................................................... 58
7.2.10 Cybersecurity requirements specification for zones and conduits ................................................. 58
8 Cybersecurity requirements ....................................................................................................... 59
8.1 Objectives ....................................................................................................................................... 59
8.2 System security requirements ........................................................................................................ 59
8.3 Apportionment of cybersecurity requirements ................................................................................ 74
8.3.1 Objectives ....................................................................................................................................... 74
8.3.2 Break down of system requirements to subsystem level ............................................................... 75
8.3.3 System requirement allocation at component level ....................................................................... 75
8.3.4 Specific consideration for implementation of cybersecurity requirement on components ............. 76
8.3.5 Requirement breakdown structure as verification .......................................................................... 76
8.3.6 Compensating countermeasures ................................................................................................... 77
9 Cybersecurity assurance and system acceptance for operation ........................................... 78
9.1 Overview ......................................................................................................................................... 78
9.2 Cybersecurity case ......................................................................................................................... 79
9.3 Cybersecurity verification ................................................................................................................ 80
9.3.1 General ........................................................................................................................................... 80
9.3.2 Cybersecurity integration and verification ...................................................................................... 80
9.3.3 Assessment of results .................................................................................................................... 82
9.4 Cybersecurity validation .................................................................................................................. 82
9.5 Cybersecurity system acceptance .................................................................................................. 83
9.5.1 Independence ................................................................................................................................. 83
9.5.2 Objectives ....................................................................................................................................... 83
9.5.3 Activities ......................................................................................................................................... 83


9.5.4 Cybersecurity handover .................................................................................................................83


10 Operational, maintenance and disposal requirements ............................................................83
10.1 Introduction .....................................................................................................................................83
10.2 Vulnerability management ..............................................................................................................84
10.3 Security patch management ...........................................................................................................85
10.3.1 General ...........................................................................................................................................85
10.3.2 Patching systems while ensuring operational requirements ..........................................................86
Annex A (informative) Handling conduits .......................................................................................... 89
Annex B (informative) Handling legacy systems .............................................................................. 92
Annex C (informative) Cybersecurity design principles .................................................................. 98
Annex D (informative) Safety and security ...................................................................................... 127
Annex E (informative) Risk acceptance methods ........................................................................... 131
Annex F (informative) Railway architecture and zoning ............................................................... 140
Annex G (informative) Cybersecurity deliverables content ........................................................... 158
Bibliography ......................................................................................................................................... 161

Figures
Figure 1 — Segregation of IT and OT .................................................................................................. 27
Figure 2 — Railway asset model (example) ........................................................................................ 28
Figure 3 — Railway physical architecture model (example) ............................................................. 29
Figure 4 — Generic high-level railway zone model (example) .......................................................... 30
Figure 5 — Defence in depth with example of measures .................................................................. 36
Figure 6 — Relationship TRA and SA.................................................................................................. 39
Figure 7 — Initial risk assessment flowchart ..................................................................................... 40
Figure 8 — Detailed risk assessment flowchart ................................................................................. 49
Figure 9 — Explicit risk evaluation flowchart ..................................................................................... 54
Figure 10 — Handling of SL-C .............................................................................................................. 77
Figure 11 — Cybersecurity assurance ................................................................................................ 78
Figure 12 — Cybersecurity case concept ........................................................................................... 79
Figure 13 — Cybersecurity assurance during integration and validation activities ...................... 81
Figure 14 — General vulnerability handling flowchart ...................................................................... 85
Figure 15 — Vulnerability and outage time during system update (maintenance phase)
[example] ................................................................................................................................... 87
Figure 16 — Vulnerability and outage time during system update with observation phases
[example] ................................................................................................................................... 88
Figure A.1 — Zones and conduits example ........................................................................................ 90
Figure D.1 — Security as an environmental condition for safety ................................................... 128
Figure F.1 — Adopted generic high-level railway zone model (example) ..................................... 148
Figure F.2 — Example of a railway system zone model .................................................................. 149


Tables
Table 1 — Security-related activities within a railway application lifecycle (EN 50126-1) ............. 32
Table 2 — Examples of function related supporting assets in regard to the defence in depth
layers ......................................................................................................................................... 36
Table 3 — Qualitative Impact Assessment example .......................................................................... 43
Table 4 — Likelihood assessment matrix – Example ........................................................................ 44
Table 5 — Risk matrix example ............................................................................................................ 44
Table 6 — System Security Requirements and Foundational Classes ........................................... 61
Table E.1 — Risk acceptance categories acc. EN 50126-1 ............................................................. 131
Table E.2 — Mapping severity categories acc. EN 50126-1 to cybersecurity severity ................ 132
Table E.3 — Likelihood assessment criteria .................................................................................... 132
Table E.4 — Mapping Likelihood to accessibility and Probability ................................................. 133
Table E.5 — Impact assessment matrix – Example 2 ...................................................................... 134
Table E.6 — Likelihood assessment matrix – Example 2 ................................................................ 135
Table E.7 — Risk acceptance matrix – Example 2 ........................................................................... 136
Table E.8 — Impact assessment matrix – Example 3 ...................................................................... 137
Table E.9 — Likelihood assessment matrix – Example 3 ................................................................ 138
Table E.10 — Likelihood conversion table – Example 3 ................................................................. 138
Table E.11 — Risk acceptance matrix – Example 3 ......................................................................... 138
Table E.12 — Risk Severity / Mitigation matrix – Example 3 ........................................................... 139
Table F.1 — Railway system glossary ............................................................................................... 140
Table F.2 — Example – Evaluating groups of criticalities for landside-landside
communication ....................................................................................................................... 143
Table F.3 — Example – Zone criticality definition for landside-landside communication........... 144
Table F.4 — Example – Landside-landside communication matrix basic structure .................... 145
Table F.5 — Example – Communication matrix - landside to landside ......................................... 146
Table F.6 — Example – Rolling stock zone model ........................................................................... 150
Table F.7 — Example – Communication matrix - rolling stock to rolling stock ............................ 151
Table F.8 — Example – Communication matrix - landside to rolling stock .................................. 154
Table F.9 — Example – Communication matrix - rolling stock to landside .................................. 155


European foreword

This document (CLC/TS 50701:2021) has been prepared by CLC/TC 9X “Electrical and electronic
applications for railways”.

Attention is drawn to the possibility that some of the elements of this document may be the subject of
patent rights. CENELEC shall not be held responsible for identifying any or all such patent rights.

Any feedback and questions on this document should be directed to the users’ national committee. A
complete listing of these bodies can be found on the CENELEC website.


Introduction

The aim of this document is to introduce the requirements as well as recommendations to address
cybersecurity within the railway sector.
Due to digitization and the need for more performance and better maintainability, previously isolated
industrial systems are now connected to large networks and increasingly use standard protocols and
commercial components. Because of this evolution, cybersecurity becomes a key topic for these industrial
systems, including critical systems such as railway systems.
The purpose of this document is that, when a railway system is compliant with this document, it can be
demonstrated that the system reflects the state of the art in terms of cybersecurity, that it fulfils its targeted
Security Level, and that its security is maintained during its operation and maintenance.
This document intends to:
— provide requirements and guidance on cybersecurity activities and deliverables

— be adaptable and applicable to various system lifecycles

— be applicable for both safety and non-safety related systems

— identify interfaces between cybersecurity and other disciplines contributing to railway system
lifecycles

— be compatible and consistent with EN 50126-1 when it is applied to the system under consideration

— due to lifecycle differences between safety and cybersecurity, separate safety approval and
cybersecurity acceptance as much as possible

— identify the key synchronization points related to cybersecurity between system integrator and asset
owner

— provide a harmonized and standardized way to express technical cybersecurity requirements

— provide cybersecurity design principles promoting simple and modular systems

— allow the usage of market products such as industrial COTS compliant with the 62443 series.


1 Scope

This document provides railway operators, system integrators and product suppliers with guidance
and specifications on how cybersecurity will be managed in the context of the EN 50126-1 RAMS lifecycle
process. This document aims at the implementation of a consistent approach to the management of the
security of railway systems. This document can also be applied to the security assurance of systems
and components/equipment developed independently of EN 50126-1:2017.
This document applies to the Communications, Signalling and Processing domain and to the Rolling Stock
and Fixed Installations domains. It provides references to models and concepts from which requirements and
recommendations can be derived and that are suitable to ensure that the residual risk from security
threats is identified, supervised and managed to an acceptable level by the railway system duty holder.
It presents the underlying security assumptions in a structured manner.
This document does not address functional safety requirements for railway systems but rather additional
requirements arising from threats and related security vulnerabilities and for which specific measures and
activities need to be taken and managed throughout the lifecycle. The aim of this document is to ensure
that the RAMS characteristics of railway systems / subsystems / equipment cannot be reduced, lost or
compromised in the case of intentional attacks.
The security models, the concepts and the risk assessment process described in this document are based
on or derived from the IEC/EN IEC 62443 series of standards. This document is consistent with the application
of security management requirements contained within IEC 62443-2-1, which in turn are based on
EN ISO/IEC 27001 and EN ISO 27002.

2 Normative references

The following documents are referred to in the text in such a way that some or all of their content
constitutes requirements of this document. For dated references, only the edition cited applies. For
undated references, the latest edition of the referenced document (including any amendments) applies.
EN 50126-1:2017, Railway Applications - The Specification and Demonstration of Reliability, Availability,
Maintainability and Safety (RAMS) - Part 1: Generic RAMS Process

EN IEC 62443-3-2:2020, Security for industrial automation and control systems - Part 3-2: Security risk
assessment for system design

EN IEC 62443-3-3:2019 1, Industrial communication networks - Network and system security - Part 3-3:
System security requirements and security levels

IEC 62443-2-1:2010, Industrial communication networks - Network and system security - Part 2-1:
Establishing an industrial automation and control system security program

3 Terms, definitions and abbreviations

3.1 Terms and definitions

For the purposes of this document, the following terms and definitions apply.
ISO and IEC maintain terminological databases for use in standardization at the following addresses:
— IEC Electropedia: available at http://www.electropedia.org/

— ISO Online Browsing Platform: available at http://www.iso.org/obp

NOTE The correspondence of the terms IACS, Solution and System used in the EN IEC 62443 series with the
terms in this document may need further clarification in future issues of this TS. Particularly, when using EN IEC 62443
definitions and requirements, the term “IACS” is to be understood as, and replaced by, “railway application” or “railway
system” as relevant in the context.

1 Document impacted by EN IEC 62443-3-3:2019/AC:2019-10.

3.1.1
acceptance
<for a product, system or process>
status achieved by a product, system or process once it has been agreed that it is suitable for its intended
purpose

[SOURCE: EN 50126-1:2017, 3.1]

3.1.2
access
<in cybersecurity>
ability and means to communicate with or otherwise interact with a system in order to use system
resources

Note 1 to entry: Access may involve physical access (authorization to be allowed physically in an area, possession
of a physical key lock, PIN code, or access card or biometric attributes that allow access) or logical access
(authorization to log in to a system and application, through a combination of logical and physical means).

3.1.3
access control
<control>
protection of system resources against unauthorized access

[SOURCE: EN IEC 62443-4-1:2018, 3.1.2]

3.1.4
access control
<process>
process by which use of system resources is regulated according to a security policy and is permitted by
only authorized entities (users, programs, processes, or other systems) according to that policy

Note 1 to entry: Access control includes identification and authentication requirements specified in other parts of
the IEC 62443 series.

[SOURCE: EN IEC 62443-4-1:2018, 3.1.3]

3.1.5
accident
unintended event or series of events that results in death, injury, loss of a system or service, or
environmental damage

[SOURCE: IEC 60050 821:2017, 821-12-02]

3.1.6
achieved security level
measure of the security level achieved in the deployed security architecture, elsewhere sometimes
referred to as the “as-built” security level

Note 1 to entry: Actual security level will vary over time based on natural degradations, induced events and
maintenance of security mechanisms.


3.1.7
application
software programs executing on the infrastructure that are used to interface with the process of the control
system itself

Note 1 to entry: Attributes include: executable; typically executes on personal computers (PCs) or embedded
controllers.

Note 2 to entry: This definition does not apply to the term “Railway Application”.

3.1.8
approval
permission for a product or process to be marketed or used for stated purposes or under stated conditions

Note 1 to entry: Approval can be based on fulfilment of specified requirements or completion of specified
procedures.

[SOURCE: IEC 60050-902:2013, 902-06-01]

3.1.9
asset
physical or logical object owned by or under the custodial duties of an organization and having either a
perceived or actual value to the organization

[SOURCE: IEC 62443-2-1:2010, 3.1.3]

3.1.10
asset owner
individual or organization responsible for one or more IACS

Note 1 to entry: In the context of this document, an asset owner is a railway duty holder.

[SOURCE: EN IEC 62443-4-1:2018, 3.1.6, modified – Note 1 to entry has been added]

3.1.11
attack
attempt to gain access to an information processing system in order to produce damage

Note 1 to entry: The damage can be e.g. destruction, disclosure, alteration, unauthorized use.

[SOURCE: IEC 60050-171:2019, 171-08-12]

3.1.12
attack surface
physical and functional interfaces of a system that can be accessed and, therefore, potentially exploited

Note 1 to entry: The size of the attack surface for a software interface is proportional to the number of methods and
parameters defined for the interface. Simple interfaces, therefore, have smaller attack surfaces than complex
interfaces.

Note 2 to entry: The size of the attack surface and the number of vulnerabilities are not necessarily related to each
other.

[SOURCE: EN IEC 62443-2-4:2019, 3.1.2]
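
EXAMPLE Informative sketch (not part of this specification) illustrating Note 1: a rough Python proxy that compares
the attack surface of software interfaces by counting public methods and their parameters. The class names and the
weighting are hypothetical.

    import inspect

    def attack_surface_proxy(cls):
        """Rough, informal proxy: count public methods and their parameters."""
        size = 0
        for name, member in inspect.getmembers(cls, predicate=inspect.isfunction):
            if name.startswith("_"):          # ignore non-public helpers
                continue
            size += 1 + len(inspect.signature(member).parameters)
        return size

    class SimpleInterface:                    # hypothetical interface
        def read_status(self): ...

    class ComplexInterface:                   # hypothetical interface
        def read_status(self, channel, timeout): ...
        def write_setpoint(self, channel, value, unit, confirm): ...

    # The simpler interface yields the smaller proxy value, as stated in Note 1.
    assert attack_surface_proxy(SimpleInterface) < attack_surface_proxy(ComplexInterface)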


3.1.13
attack vector
method or means by which an attacker can gain access to the system under consideration in order to
deliver a payload or malicious outcome

Note 1 to entry: Attack vectors enable attackers to exploit the vulnerabilities of the system under consideration,
including the human element.

Note 2 to entry: Examples of attack vectors include, but are not limited to, USB keys, e-mail attachments, wireless
connections, compromised credentials, phishing, man-in-the-middle attacks, etc.

3.1.14
audit
systematic, independent, documented process for obtaining records, statements of fact or other relevant
information and assessing them objectively to determine the extent to which specified requirements are
fulfilled

[SOURCE: IEC 60050-902:2013, 902-03-04, modified – Note 1 to entry has been removed]

3.1.15
authentication
provision of assurance that a claimed characteristic of an identity is correct

Note 1 to entry: Not all credentials used to authenticate an identity are created equally. The trustworthiness of the
credential is determined by the configured authentication mechanism. Hardware or software-based mechanisms can
force users to prove their identity before accessing data on a device. A typical example is proving the identity of a
user usually through an identity provider.

Note 2 to entry: Authentication is usually a prerequisite to allowing access to resources in a control system.

[SOURCE: EN IEC 62443-4-1:2018, 3.1.9]

3.1.16
authorization
<in cybersecurity>
right or a permission that is granted to a system entity to access a system resource

[SOURCE: IEC/TR 62443-3-1:2009, 3.1.7]

3.1.17
boundary
software, hardware, or other physical barrier that limits access to a system or part of a system

3.1.18
boundary device
communication security asset, within a zone or conduit, that provides a protected interface between a
zone and a conduit

3.1.19
communication channel
<in cybersecurity>
specific logical or physical communication link between assets

Note 1 to entry: A channel facilitates the establishment of a connection.

[SOURCE: EN IEC 62443-3-3:2019, 3.1.9]


3.1.20
communication path
logical connection between a source and one or more destinations, which could be devices, physical
processes, data items, commands, or programmatic interfaces

Note 1 to entry: The communication path is not limited to wired or wireless networks, but includes other means of
communication such as memory, procedure calls, state of physical plant, portable media, and human interactions.

3.1.21
compensating countermeasure
countermeasure employed in lieu of or in addition to inherent security capabilities to satisfy one or more
security requirements

EXAMPLE

— (component-level): locked cabinet around a controller that does not have sufficient cyber access control
countermeasures.

— (control system/zone-level): physical access control (guards, gates and guns) to protect a control room to restrict
access to a group of known personnel to compensate for the technical requirement for personnel to be uniquely
identified by the IACS.

— (component-level): a vendor’s programmable logic controller (PLC) cannot meet the access control capabilities
from an end-user, so the vendor puts a firewall in front of the PLC and sells it as a system.

[SOURCE: EN IEC 62443-4-2:2019, 3.1.9]

3.1.22
compromise
violation of the security of a system such that an unauthorized disclosure or modification of sensitive
information may have occurred, or unauthorized behaviour of the controlled physical process may have
occurred

3.1.23
conduit
<in cybersecurity>
logical grouping of communication channels connecting two or more zones that share common
security requirements

Note 1 to entry: A conduit is allowed to traverse a zone as long as the security of the channels contained within the
conduit is not impacted by the zone.

[SOURCE: EN IEC 62443-4-2:2019, 3.1.11]

3.1.24
confidentiality
<in cybersecurity>
assurance that information is not disclosed to unauthorized individuals, processes, or devices

Note 1 to entry: When used in the context of an IACS, confidentiality refers to protecting IACS data and information
from unauthorized access.

[SOURCE: EN IEC 62443-4-2:2019, 3.1.12]


3.1.25
connection
<in cybersecurity>
association established between two or more endpoints which supports the establishment of a session

[SOURCE: EN IEC 62443-4-2:2019, 3.1.13]

3.1.26
control network
time-critical network that is typically connected to equipment that controls physical processes

Note 1 to entry: The control network can be subdivided into zones, and there can be multiple separate control
networks within one company or site.

3.1.27
control system
<in industrial automation and control system>
hardware and software components of an IACS

Note 1 to entry: Control systems are composed of field devices, embedded control devices, network devices, and
host devices (including workstations and servers).

[SOURCE: EN IEC 62443-3-3:2019, 3.1.16, modified – Note 1 to entry has been added]

3.1.28
countermeasure
action, device, procedure, or technique that reduces a threat, a vulnerability, or an attack by eliminating
or preventing it, by minimizing the harm it can cause, or by discovering and reporting it so that corrective
action can be taken

Note 1 to entry: The term “control” is also used to describe this concept in some contexts. The term
countermeasure has been chosen for this standard to avoid confusion with the term “control” in the context of “process
control” and “control system”.

[SOURCE: EN IEC 62443-3-3:2019, 3.1.17]

3.1.29
cybersecurity
<in railway application>
set of activities and measures taken with the objective to prevent, detect and react to unauthorized access or
cyberattack which could lead to an accident, an unsafe situation, or railway application performance
degradation

Note 1 to entry: It is recognized that the term “cybersecurity” has a broader meaning in other standards and
guidance, often including non-malevolent threats, human errors, and protection against natural disasters. Those
aspects, except human errors degrading security controls, are not included in this document.

3.1.30
data diode
boundary device which ensures that data between two separate networks is only transmitted in one
direction

Note 1 to entry: A data diode can be either of the physical or logical type (e.g. a firewall).


3.1.31
defence in depth
approach to defend the system against any particular attack using several independent methods

Note 1 to entry: Defence in depth implies layers of security and detection, even on single systems, and provides
the following features:

— is based on the idea that any one layer of protection may, and probably will, be defeated;

— attackers are faced with breaking through or bypassing each layer without being detected;

— a flaw in one layer can be mitigated by capabilities in other layers;

— system security becomes a set of layers within the overall network security; and

— each layer should be autonomous and not rely on the same functionality nor have the same failure modes as the
other layers.

[SOURCE: EN IEC 62443-4-1:2018, 3.1.15, modified – defense has been replaced by defence]

3.1.32
demilitarized zone
common, limited network of servers joining two or more zones for the purpose of controlling data flow
between zones

Note 1 to entry: Demilitarized zones (DMZs) are typically used to avoid direct connections between different zones.

[SOURCE: EN IEC 62443-3-3:2019, 3.1.19]

3.1.33
denial of service
prevention or interruption of authorized access to a system resource or the delaying of system operations
and functions

[SOURCE: IEC/TR 62443-3-1:2009, 3.1.21]

3.1.34
digital signature
result of a cryptographic transformation of data which, when properly implemented, provides the services
of origin authentication, data integrity, and signer non-repudiation

[SOURCE: IEC/TR 62443-3-1:2009, 3.1.22]
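
EXAMPLE Informative sketch (not part of this specification): a minimal Python illustration of signing and verifying
data, assuming the third-party "cryptography" package; the key variables and the message content are hypothetical.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()       # held only by the signer
    verify_key = signing_key.public_key()            # distributed to verifiers

    message = b"hypothetical status report"
    signature = signing_key.sign(message)

    try:
        verify_key.verify(signature, message)        # passes: origin and integrity confirmed
        verify_key.verify(signature, message + b"x") # tampered data raises InvalidSignature
    except InvalidSignature:
        print("signature check failed: data altered or wrong signer")

A successful verification provides the origin authentication and data integrity services named in the definition;
non-repudiation additionally relies on the signer being the sole holder of the private key.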

3.1.35
encryption
<of data>
transformation of data in order to hide their semantic content using cryptography

Note 1 to entry: The reverse process is called decryption.

[SOURCE: IEC 60050-171:2019, 171-08-09]


3.1.36
essential function
function or capability that is required to maintain health, safety, the environment and availability for the
equipment under control

Note 1 to entry: Essential functions include, but are not limited to, the safety instrumented function (SIF),
the control function and the ability of the operator to view and manipulate the equipment under control. The loss of
essential functions is commonly termed loss of protection, loss of control and loss of view respectively. In some
industries additional functions such as history may be considered essential.

[SOURCE: EN IEC 62443-4-2:2019, 3.1.20]

3.1.37
firewall
functional unit that mediates all traffic between two networks and protects one of them or some part
thereof against unauthorized access

[SOURCE: IEC 60050-732:2010, 732-06-01, modified – The notes to entry have been omitted]

3.1.38
gateway
<for computer networks>
functional unit that connects two computer networks with different network architectures and protocols

[SOURCE: IEC 60050-732:2019, 732-01-17, modified – The notes to entry have been omitted]

3.1.39
handover
<from system integrator to asset owner>
act of turning a railway solution over to the asset owner

Note 1 to entry: Handover effectively transfers responsibility for operations and maintenance of a railway solution
from the integration service provider to the asset owner and generally occurs after successful completion of system
test, often referred to as Site Acceptance Test (SAT).

3.1.40
host computer
host
in a computer network, a computer that provides end users with services such as computation and data
base access and that may perform network control functions

[SOURCE: IEC 60050-732:2010, 732-01-14]

3.1.41
impact
evaluated consequence of a particular event

Note 1 to entry: Impact may be expressed in terms of numbers of injuries and/or fatalities, extent of environmental
damage and/or magnitude of losses such as property damage, material loss, loss of intellectual property, lost
production, market share loss, and recovery costs.

[SOURCE: EN IEC 62443-3-3:2019, 3.1.27, modified – Note 1 to entry has been added]


3.1.42
incident
<in cybersecurity>
event that is not part of the expected operation of a system or service that causes, or may cause, an
interruption to, or a reduction in, the quality of the service provided by the control system

[SOURCE: EN IEC 62443-3-3:2019, 3.1.28]

3.1.43
integration service provider
service provider that provides integration activities for an automation solution including design,
installation, configuration, testing, commissioning, and handover

Note 1 to entry: Integration service providers are often referred to as integrators or Main Automation Contractors
(MAC).

[SOURCE: EN IEC 62443-2-4:2019, 3.1.9]

3.1.44
integrity
<of data>
property that sensitive data has not been modified or deleted in an unauthorized and undetected manner

[SOURCE: IEC 60050-171:2019, 171-08-05, modified – “of data that have not been altered or destroyed”
has been replaced with “that sensitive data has not been modified or deleted”]

3.1.45
intrusion
<in cybersecurity>
unauthorized act of compromising a system

Note 1 to entry: See “attack”.

3.1.46
intrusion detection
security service that monitors and analyses system events for the purpose of finding, and providing real-
time or near real-time warning of, attempts to access system resources in an unauthorized manner

3.1.47
least privilege
basic principle that holds that users (humans, software processes or devices) should be assigned the
fewest privileges consistent with their assigned duties and functions

Note 1 to entry: Least privilege is commonly implemented as a set of roles in an IACS.

[SOURCE: EN IEC 62443-4-2:2019, 3.1.28]
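
EXAMPLE Informative sketch (not part of this specification) illustrating Note 1: least privilege implemented as roles
that carry only the privileges needed for the assigned duties. The role and privilege names are hypothetical; the
privilege examples mirror those listed under 3.1.53.

    # Hypothetical roles mapped to the minimal set of privileges they need.
    ROLE_PRIVILEGES = {
        "operator":   {"acknowledge_alarm", "view_status"},
        "maintainer": {"view_status", "install_patch"},
        "engineer":   {"view_status", "change_setpoint", "modify_control_algorithm"},
    }

    def is_permitted(role: str, privilege: str) -> bool:
        """Deny by default: only privileges explicitly assigned to the role are allowed."""
        return privilege in ROLE_PRIVILEGES.get(role, set())

    assert is_permitted("operator", "acknowledge_alarm")
    assert not is_permitted("operator", "modify_control_algorithm")  # outside assigned duties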

3.1.48
legacy system
any kind of system which is already in operation

3.1.49
likelihood
<of occurrence>
weighted factor based on a subjective analysis of the probability that a given threat is capable of exploiting
a given vulnerability or a set of vulnerabilities

[SOURCE: NIST SP 800-30: September 2012]


3.1.50
non-repudiation
<in cybersecurity>
ability to prove the occurrence of a claimed event or action and its originating entities

Note 1 to entry: The purpose of non-repudiation is to resolve disputes about the occurrence or non-occurrence of
the event or action and involvement of entities in the event.

[SOURCE: EN IEC 62443-3-3:2019, 3.1.33]

3.1.51
patch management
area of systems management that involves monitoring, acquiring, testing, scheduling and installing
software patches (code change) to a product

Note 1 to entry: See IEC TR 62443-2-3 for additional information.

Note 2 to entry: Patch management also applies to the process of keeping included 3rd party libraries current
before releasing a product.

[SOURCE: EN IEC 62443-4-1:2018, 3.1.21]

3.1.52
penetration
successful unauthorized access to a protected system resource

3.1.53
privilege
authorization or set of authorizations to perform specific functions, especially in the context of a computer
operating system

Note 1 to entry: Examples of functions that are controlled using privilege include acknowledging alarms, changing
setpoints and modifying control algorithms.

3.1.54
protection profile
<in cybersecurity>
generic, implementation-independent cybersecurity requirement specification for a class or type of
components or specific configuration setting of different components which is typically created by a user
or user community

Note 1 to entry: A protection profile document is a combination of threats, security objectives, assumptions,
security functional requirements, security assurance requirements and rationales.

3.1.55
Purdue model
model that was adopted from the Purdue Enterprise Reference Architecture (PERA) model by ISA-99,
and used as a concept model for ICS network segmentation

Note 1 to entry: It is an industry adopted reference model that shows the interconnections and interdependencies
of all the main components of a typical ICS.


3.1.56
railway application
part of a running railway system which consists of a technical system with its organizational and
procedural measures

Note 1 to entry: It corresponds in the railway domain to the concept of “IACS in operation” of 62443.

[SOURCE (for IACS): IEC/TS 62443-1-1:2009, 3.2.57]

3.1.57
railway duty holder
body with the overall accountability for operating a railway system within the legal framework

Note 1 to entry: Railway duty holder accountabilities for the overall system or its parts and lifecycle activities are
sometimes split between one or more bodies or entities. For example:

— the owner(s) of one or more parts of the system assets and their purchasing agents;

— the operator of the system;

— the maintainer(s) of one or more parts of the system;

Note 2 to entry: Typically, the railway duty holders are railway undertakings and the infrastructure managers. Such
splits are based on either statutory instruments or contractual agreements. Such responsibilities are defined at the
earliest stages of a system lifecycle.

[SOURCE: EN 50126-1:2017, 3.48]

3.1.58
reference model
structure that allows the modules and interfaces of a system to be described in a consistent manner

[SOURCE (for IACS): IEC/TS 62443-1-1:2009, 3.2.81]

3.1.59
remote access
access to a control system by any user (human, software process or device) communicating from outside
the perimeter of the zone being addressed

[SOURCE: EN IEC 62443-3-3:2019, 3.1.35]

3.1.60
residual risk
<in cybersecurity>
risk that remains after existing countermeasures are implemented (such as, the net risk or risk after
countermeasures are applied)

[SOURCE: EN IEC 62443-3-2:2020, 3.1.13]

3.1.61
risk
<in cybersecurity>
expectation of loss expressed as the likelihood that a particular threat will exploit a particular vulnerability
with a particular consequence

[SOURCE: EN IEC 62443-3-2:2020, 3.1.14]
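
EXAMPLE Informative sketch (not part of this specification): a minimal Python illustration of combining a qualitative
likelihood rating and impact rating into a risk rating via a lookup matrix, in the spirit of the risk matrix examples in
Table 5 and Annex E. The category names and matrix values are hypothetical and not prescribed by this document.

    LIKELIHOOD = ["rare", "possible", "likely"]           # hypothetical categories
    IMPACT = ["minor", "major", "severe"]                 # hypothetical categories

    # Hypothetical risk matrix: RISK[likelihood][impact]
    RISK = [
        ["negligible",  "tolerable",   "undesirable"],    # rare
        ["tolerable",   "undesirable", "intolerable"],    # possible
        ["undesirable", "intolerable", "intolerable"],    # likely
    ]

    def risk_rating(likelihood: str, impact: str) -> str:
        """Evaluate a threat/vulnerability pair: risk = f(likelihood, impact)."""
        return RISK[LIKELIHOOD.index(likelihood)][IMPACT.index(impact)]

    print(risk_rating("likely", "major"))   # -> "intolerable" in this hypothetical matrix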


3.1.62
risk assessment
<in cybersecurity>
process that systematically identifies potential vulnerabilities to valuable system resources and threats to
those resources, quantifies loss exposures and consequences based on probability of occurrence, and
(optionally) recommends how to allocate resources to countermeasures to minimize total exposure

Note 1 to entry: Types of resources include physical, logical and human.

Note 2 to entry: Risk assessments are often combined with vulnerability assessments to identify vulnerabilities and
quantify the associated risk. They are carried out initially and periodically to reflect changes in the organization's risk
tolerance, vulnerabilities, procedures, personnel and technological changes.

3.1.63
risk mitigation
prioritizing, evaluating, and implementing the appropriate risk-reducing controls/countermeasures
recommended from the risk management process

3.1.64
role
set of connected behaviours, privileges and obligations associated with all users (humans, software
processes or devices) of an IACS

Note 1 to entry: The privileges to perform certain operations are assigned to specific roles.

[SOURCE: EN IEC 62443-3-3:2019, 3.1.36]

3.1.65
safety
freedom from unacceptable risk

Note 1 to entry: Risk related to human health or to the environment.

[SOURCE: IEC 60050-903:2013, 903-01-19, modified – Note 1 to entry has been added]

3.1.66
safety-related application condition
those conditions which need to be met in order for a system to be safely integrated and safely operated

Note 1 to entry: Application conditions can be e.g.: operational restrictions (e.g. speed limit, maximum duration of
use), operational rules, maintenance restrictions (e.g. requested maintenance intervals) or environmental conditions.

[SOURCE: EN 50126-1:2017, 3.75]

3.1.67
secret
condition of information being protected from being known by any system entities except those intended
to know it

[SOURCE: IEC/TS 62443-1-1:2009, 3.2.98]


3.1.68
security
<in cybersecurity>
condition of system resources being free from unauthorised access and from unauthorised or accidental
change, destruction or loss

[SOURCE: IEC/TS 62443-1-1:2009, 3.2.99]

3.1.69
security architecture
<in cybersecurity>
plan and set of principles describing the security services that a system is required to provide to meet the
needs of its users, the system elements required to implement the services, and the performance levels
required in the elements to deal with the threat environment

Note 1 to entry: In this context, security architecture would be an architecture to protect the control network from
intentional or unintentional security events.

[SOURCE: IEC/TS 62443-1-1:2009, 3.2.100]

3.1.70
security assurance
<in cybersecurity>
grounds for confidence that the set of intended security controls/countermeasures in an information
system are effective in their application and that an entity meets its security objectives

[SOURCE: NIST SP 800-39, amended]

3.1.71
security incident
<in cybersecurity>
security compromise that is of some significance to the asset owner or failed attempt to compromise the
system whose result could have been of some significance to the asset owner

Note 1 to entry: The term “of some significance” is relative to the environment in which the security compromise is
detected. For example, the same compromise may be declared as a security incident in one environment and not in
another. Triage activities are often used by asset owners to evaluate security compromises and identify those that
are significant enough to be considered incidents.

Note 2 to entry: In some environments, failed attempts to compromise the system, such as failed login attempts,
are considered significant enough to be classified as security incidents.

[SOURCE: EN IEC 62443-2-4:2019, 3.1.16]

3.1.72
security level
<in cybersecurity>
measure of confidence that the zone, conduit or a component thereof is free from vulnerabilities and
functions in the intended manner

Note 1 to entry: Vulnerabilities can either be designed into the SuC, inserted at any time during its lifecycle or
result from changing threats. Designed-in vulnerabilities may be discovered long after the initial deployment of the
SuC, e.g. an encryption technique has been broken or an improper policy for account management such as not
removing old user accounts. Inserted vulnerabilities may be the result of a patch or a change in policy that opens a
new vulnerability.

[SOURCE: EN IEC 62443-3-3:2019, 3.1.38, modified]
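
EXAMPLE Informative sketch (not part of this specification): a minimal Python illustration of comparing a zone's
target security level (SL-T) with a component's capability security level (SL-C), expressed, as in the EN IEC 62443
series, as a vector over the seven foundational requirements. UC (use control) and SI (system integrity) abbreviate
foundational requirements from that series; all numeric values are hypothetical.

    FRS = ["IAC", "UC", "SI", "DC", "RDF", "TRE", "RA"]   # the seven foundational requirements

    def sl_gaps(sl_c: dict, sl_t: dict) -> list:
        """Return the FRs for which the capability (SL-C) falls short of the target (SL-T)."""
        return [fr for fr in FRS if sl_c.get(fr, 0) < sl_t.get(fr, 0)]

    # Hypothetical values: zone target vs. candidate component capability.
    sl_t = {"IAC": 2, "UC": 2, "SI": 2, "DC": 1, "RDF": 2, "TRE": 1, "RA": 2}
    sl_c = {"IAC": 2, "UC": 2, "SI": 1, "DC": 1, "RDF": 2, "TRE": 1, "RA": 2}

    print(sl_gaps(sl_c, sl_t))   # -> ['SI']: a gap to close, e.g. by compensating countermeasures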


3.1.73
security objective
<in cybersecurity>
aspect of security whose purpose is to use certain mitigation measures, such as confidentiality, integrity,
availability, user authenticity, access authorization, accountability, etc.

[SOURCE: IEC/TS 62443-1-1:2009, 3.2.109]

3.1.74
security patch
<in cybersecurity>
software patch that is relevant to the security of a software component

Note 1 to entry: For the purpose of this definition, firmware is considered software.

Note 2 to entry: Software patches may address known or potential vulnerabilities, or simply improve the security
of the software component, including its reliable operation.

[SOURCE: EN IEC 62443-2-4:2019, 3.1.17]

3.1.75
security policy
set of rules that specify or regulate how a system or organization provides security services to protect its
assets

[SOURCE: IEC/TS 62443-1-1:2009, 3.2.112]

3.1.76
security program
<in cybersecurity>
portfolio of security services, including integration services and maintenance services, and their
associated policies, procedures, and products that are applicable to the IACS

Note 1 to entry: The security program for IACS service providers refers to the policies and procedures defined by
them to address security concerns of the IACS.

[SOURCE: EN IEC 62443-2-4:2019, 3.1.18]

3.1.77
security-related application condition
<in cybersecurity>
those conditions which need to be met in order for a system to be securely integrated and securely
operated

Note 1 to entry: Application conditions can be e.g.: operational restrictions (e.g. access control usage), operational
rules, maintenance rules (e.g. antimalware update periodicity) or environmental conditions (e.g. external PKI
infrastructure).

3.1.78
security services
<in cybersecurity>
mechanisms used to provide confidentiality, data integrity, authentication, or non-repudiation of information

3.1.79
sensitive data
data that is likely to cause its owner some adverse impact if either it becomes known to others when not
intended or it is modified without the consent of the affected stakeholder; it thus requires protection from
unauthorized disclosure or modification


3.1.80
service provider
<in cybersecurity>
individual or organization (internal or external organization, manufacturer, etc.) that provides a specific
support service and associated supplies in accordance with an agreement with the asset owner

Note 1 to entry: This term is used in place of the generic word “vendor” to provide differentiation.

[SOURCE: EN IEC 62443-2-4:2019, 3.1.19]

3.1.81
session
semi-permanent, stateful, interactive information interchange between two or more communicating
devices

Note 1 to entry: Typically, a session has a clearly defined start process and end process.

[SOURCE: EN IEC 62443-3-3:2019, 3.1.40]

3.1.82
system integrator
person or company that specializes in bringing together component subsystems into a whole and
ensuring that those subsystems perform in accordance with project specifications

[SOURCE: EN IEC 62443-4-1:2018, 3.1.34]

3.1.83
system under consideration
collection of assets that are needed to provide and operate a railway application including any relevant
network infrastructure assets

Note 1 to entry: A SuC consists of one or more zones and related conduits. All assets within a SuC belong to either
a zone or conduit.

[SOURCE: EN IEC 62443-3-2:2020, 3.1.19, modified to be adapted to railway context]
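
EXAMPLE Informative sketch (not part of this specification) illustrating Note 1: a minimal Python check that a
partitioned SuC assigns every asset to a zone or conduit (see 3.1.23 and 3.1.90). The asset, zone and conduit names
are hypothetical.

    # Hypothetical SuC: assets grouped into zones and conduits.
    zones = {
        "signalling_zone":  {"interlocking", "rbc"},
        "maintenance_zone": {"diagnostics_server"},
    }
    conduits = {
        "signalling_conduit": {"security_gateway"},
    }
    suc_assets = {"interlocking", "rbc", "diagnostics_server", "security_gateway"}

    def unassigned_assets(assets, groupings):
        """Assets of the SuC not yet allocated to any zone or conduit."""
        allocated = set().union(*groupings)
        return assets - allocated

    groupings = list(zones.values()) + list(conduits.values())
    print(unassigned_assets(suc_assets, groupings))   # -> set(): partitioning is complete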

3.1.84
threat
<in cybersecurity>
circumstance or event with the potential to adversely affect operations (including mission, functions,
image or reputation), assets, control systems or individuals via unauthorized access, destruction,
disclosure, modification of data and/or denial of service

[SOURCE: EN IEC 62443-3-3:2019, 3.1.44]

3.1.85
threat environment
<in cybersecurity>
summary of information about threats, such as threat sources, threat vectors and trends,
that have the potential to adversely impact a defined target (e.g. company, facility or SuC)

[SOURCE: EN IEC 62443-3-2:2020, 3.1.19]


3.1.86
threat source
<in cybersecurity>
intent and method targeted at the intentional exploitation of a vulnerability, or a situation and method that
may accidentally trigger a vulnerability

[SOURCE: EN IEC 62443-3-2:2020, 3.1.20]

3.1.87
threat vector
<in cybersecurity>
path or means by which a threat source can gain access to an asset

[SOURCE: EN IEC 62443-3-2:2020, 3.1.21]

3.1.88
verification
confirmation, through the provision of objective evidence, that specified requirements have been fulfilled

Note 1 to entry: The term “verified” is used to designate the corresponding status.

Note 2 to entry: Design verification is the application of tests and appraisals to assess conformity of a design to
the specified requirement.

Note 3 to entry: Verification is conducted at various lifecycle phases of development, examining the system and its
constituents to determine conformity to the requirements specified at the beginning of that lifecycle phase.

[SOURCE: EN 50126-1:2017, 3.83, modified – “life cycle” has been replaced by lifecycle]

3.1.89
vulnerability
flaw or weakness in a system's design, implementation, or operation and management that could be
exploited to violate the system's integrity or security policy

[SOURCE: EN IEC 62443-3-2:2020, 3.1.24]

3.1.90
zone
grouping of logical or physical assets based upon risk or other criteria, such as criticality of assets,
operational function, physical or logical location, required access (e.g. least privilege principles) or
responsible organization

Note 1 to entry: Collection of logical or physical assets that represents partitioning of a system under consideration
on the basis of their common security requirements, criticality (e.g. high financial, health, safety, or environmental
impact), functionality, logical and physical (including location) relationship.

[SOURCE: EN IEC 62443-3-2:2020, 3.1.25]


3.2 Abbreviations

For the purposes of this document, the following abbreviations apply:

ANSSI agence nationale de la sécurité des systèmes d’information


API application programming interface
APN access point name
ATP automatic train protection
CCTV closed-circuit television
CIA confidentiality, integrity, and availability
CISO chief information security officer
COTS commercial off the shelf
CPU central processing unit
CR component requirement
CRS cybersecurity requirements specification
CVE common vulnerabilities and exposures
DAS driver advisory system
DC data confidentiality
DMI driver machine interface
DMZ demilitarized zone
DoS denial of service
EMS energy management system
ENISA European Network and Information Security Agency
ERA European Railway Agency
ERP enterprise resource planning
ERTMS European Rail Traffic Management System
ETCS European Train Control System
EVC European Vital Computer
FR foundational requirement
HVAC heating, ventilation and air-conditioning
HW hardware
IAC identification and authentication control
IACS industrial automation and control system(s)
ICS industrial control systems
ID identifier
IDS intrusion detection system
IEC International Electrotechnical Commission
I/O input/output


IP internet protocol
ISO international organization for standardization
IT information technology
IXL Interlocking
LAN local area network
MAC media (or medium) access control
MCG mobile communication gateway
NDR network device requirement
NIDS network intrusion detection system
NIS network and information systems
NIST [US] national institute of standards and technology
NMS network monitoring system
OSI open systems interconnection
OT operational technology
PACIS Public Address and Customer Information System
PAS public address system
PIN personal identification number
PIS passenger information system
PKI public key infrastructure
RA resource availability
RAM reliability availability maintainability
RAMS Reliability Availability Maintainability Safety
RBC radio block centre
RDF restricted data flow
SAT site acceptance test
SecRAC security-related application condition
SG security gateway
SIEM security information and event management
SIL safety integrity level
SL security level
SL-A achieved security level
SL-C capability security level
SL-T target security level
SR system requirement
SRAC safety-related application condition
STES safety tunnel earthing system


SuC system under consideration


SW software
TCMS train control management system
TCN train communication network
TMS traffic management system
TRA threat risk assessment
TRE timely response to events
TS technical specification
TSI technical specification for interoperability (from ERA)
USB universal serial bus
VPN virtual private network
WLAN wireless local area network

4 Railway system overview

4.1 Introduction

This Clause describes the railway system covered by this document. There might be deviations in
different countries where the NIS directive is applied, or even between different railway operators. The
‘system under consideration’ (SuC) referred to in several parts of this document is always a part of the
railway system.
The following figure shows the segregation of enterprise IT in an industrial environment as proposed
by the IEC 62443 series. A key element of this architecture is the segregation of IT (information
technology) and OT (operational technology) by technical means (e.g. a DMZ for data exchange between the
business zone and the control zone) as well as procedural means (e.g. different user management
policies in the two zones). In this document the railway system is defined as the equivalent of the control
zone in IEC/TS 62443-1-1. The proposed segregation of IT and OT in the railway system is shown in
Figure 1.


Figure 1 — Segregation of IT and OT

The continuous lowering of the cost of modern solutions is leading to the substitution of the classic
pyramidal or hierarchical structure of automation and control systems with more flexible and adaptable
architectures, which span multiple locations and independent organizations with very different
responsibilities and competences. The reference model in Figure 1 is thus only used to match the
definition of ‘Industrial Automation and Control Systems’ to the ‘Railway system’, which is described in
this Clause.
The main goal of railway cybersecurity is to protect the railway system's essential functions, even when
they are threatened by malicious attackers. In other words, cybersecurity of a railway is effective
when no hardware or software assets contributing to essential functions can be exfiltrated, modified,
corrupted or deleted by unauthorized people or processes. Since modification of data or processes may
force systems to malfunction, a lack of cybersecurity can result in operational interruptions, damage to
assets or, even worse, death or injury of human beings.
4.2 Railway asset model

The railway operator shall define an asset model of its railway system. Assets shall be divided in groups
corresponding to physical areas and functional criticality level (e.g. signalling, command and control,
auxiliary, comfort, public). The resulting model is an input to define the SuC.
Figure 2 shows an informative example of a railway asset model. Assets are divided into four groups in
order to show their physical area. Each asset is identified by its functional name (e.g. “Traffic management
system”) and coloured using a five-colour scheme to indicate its function class. Assets of the same
function class are more likely to be interconnected, though they are often separated into different virtual or
physical networks (i.e. public network, comfort network, auxiliary network, command and control network,
signalling network in trains). Nevertheless, there are also logical connections between different functional
classes; for instance, between a traffic management system and an interlocking.


Figure 2 — Railway asset model (example)

Railway operators can identify the scope - or the assets - of a specific SuC as a subset in the Railway
asset model shown in Figure 2 or a similar documentation.
Annex F.1 includes a brief description of the functionality of the assets.
NOTE Besides the assets shown in Figure 2, there are many network devices (switches, routers, etc.) spread
all over the railway system (including trains) that are also regarded as assets to be protected from cybersecurity
attacks. These components can be considered either as part of the communications domain or as a part of the
specific system (e.g. signalling systems). For example, the MCG in Figure 2, which is shown as a part of ‘Cmd and
control’, could also be regarded as part of the communication domain that provides conduits for applications with
different criticality.

4.3 Railway physical architecture model

One of the main challenges of the railway system is its large geographical extent, ranging from a few
kilometres up to several thousand kilometres. Therefore, the network types used range from local area
networks up to wide area networks and can also include the use of public network connections.
Furthermore, many subsystems include a variety of products and communication protocols, demanding a
carefully designed network architecture for the railway system.
Figure 3 shows an example of a domain-oriented view of the railway system within the scope of this
technical specification.
From a cybersecurity perspective, the distributed locations of the different components and subsystems
as well as their physical security features are to be considered, especially in risk analysis.
For instance, assets located along the track are more prone to a direct physical attack than assets in a
control centre. On the other hand, a traffic management system, which may have interfaces to the enterprise
world (ERP systems, mail servers and other office systems), may be more exploitable by a denial-of-service
attack or other threats in the IT domain (e.g. ransomware such as CryptoLocker).
Railway operators and system integrators can identify the network-oriented connections between the
components of the SuC in the Railway physical architecture model.


Figure 3 — Railway physical architecture model (example)

Concerning the ‘Train to ground’ communication, the security level attributed to related subsystems
depends on the supported application (e.g. signalling, public services). The set of security measures (e.g.
private APN, secured exchange between train to ground over a public network, hardening of exposed
components, etc.) should fulfil the security needs of the supported applications.
4.4 High-level railway zone model

The railway operator should establish a high-level railway zone model to be used as input for the SuC
identification, initial risk assessment and SuC zoning activities. The zoning principles used should be
similar to those described in Clause 6.
The definition of zones includes measures for encapsulation of functionality to keep a particular service
alive in case of an incident in another zone, as well as capabilities to isolate an incident by closing
the gateways to the affected zone.
The combination of zones, conduits, subsystems and zone priorities results in a generic zoning model
including communication rules (see F.2). Figure 4 shows the generic railway zoning reference according
to the design principles of IEC/TS 62443-1-1:2019, the zoning principles of EN IEC 62443-3-2:2020 and
the risk-based approach.
Data flow between rolling stock and land-based subsystems should be reduced to a minimum of conduits
to control and detect unallowed data flow and malware by security devices.


Figure 4 — Generic high-level railway zone model (example)


The final zone model depends on existing systems and zones, the result of the threat risk assessment
and the target architecture of each railway operator.
The following two topics are part of the zoning concept:
— Communication and human interactions in high criticality zones should be monitored, logged and
stored for forensics at least at the subsystem boundaries (see also EN IEC 62443-3-3 / SR 2.8).

— Security devices between zones with different criticality that protect the zone with the higher criticality
should be managed by the responsible organization of the higher criticality zone (see also
EN IEC 62443-3-2, ZCR 3.1).

F.2 contains a practical example of how to evaluate the criticality of a zone.

5 Cybersecurity within a railway application lifecycle

5.1 Introduction

This Clause provides an overview of the cybersecurity activities to be carried out during the lifecycle of a
railway application. It is given within the framework of the lifecycle described in EN 50126-1, but different
lifecycles can be applied depending on the system under consideration (SuC).
To comply with this Clause when a different lifecycle is used for an entire SuC or parts of it, the
cybersecurity activities as presented in this Clause shall be undertaken and deliverables shall be
exchanged during defined synchronization points.
5.2 Railway application and product lifecycles

In the IEC 62443 framework, which is the basis of this document, the lifecycle of an Industrial Automation
and Control System (IACS) can be distinguished from the lifecycle of products that are integrated into the
system under consideration during the Integration phase. For the purpose of this document, the IACS is
to be understood as the system under consideration of the railway application.
The integration of products designed and possibly certified in accordance with different cybersecurity
standards is an important option to ensure flexibility and cost-effectiveness of the railway SuC.
Thus, product lifecycle (EN IEC 62443-4-1) is not in the scope of this document, and therefore no
synchronization points or deliverables are prescribed for the corresponding lifecycle phases (6 and 7) of
EN 50126-1.
5.3 Activities, synchronization and deliverables

The following table provides an overview of:


— Basic descriptions of the security activities relevant to the lifecycle of the railway SuC

— Necessary synchronization points required to achieve coordination between the security activities
and all the stakeholders: system engineering, safety, RAM, V&V, T&C activities.

— Deliverables to be exchanged.

Many activities in the table below include references to subsequent clauses of this document. Within the
referenced clauses, additional guidance and rationale is provided for the corresponding activities,
including roles and responsibilities.
In Table 1, the relevant security-related activities as well as synchronization points and corresponding
deliverables are allocated along the lifecycle phases of EN 50126-1. To reach conformity with this Clause,
the listed activities need to be performed, synchronization points checked, and deliverables exchanged,
independently of the lifecycle to be applied. In order to allow for the use of pre-developed cybersecurity
components, no synchronization points or deliverables are prescribed in phases 6 and 7.


Table 1 — Security-related activities within a railway application lifecycle (EN 50126-1)

Columns: Phase (EN 50126-1) | Synchronization points and deliverables | Cybersecurity activities

Phase 0 (Prerequisites)
Synchronization points and deliverables: none
Cybersecurity activities:
— Railway operator's security program is established
— Manufacturer's and integrator's secure development process is established
— Legal and regulatory framework is identified

Phase 1 (Concept)
Synchronization points and deliverables:
SuC Identification:
→ Operational environment incl. existing security-related controls and High-Level zone model (see Clause 4)
→ Applicable security standards
→ Purpose and scope
← Project cybersecurity management plan (incl. cybersecurity context, goals and lifecycle activities, see Annex G, G.2)
Cybersecurity activities:
— Review of the level of security achieved up to now
— Analysis of the project's security implication and context (incl. generic threats) (see 5.4)
— Alignment with railway operator / asset owner and stakeholder's security goals
— Consideration of security lifecycle aspects (patch management, monitoring, etc.) (see Clause 10)
— Plan cybersecurity-related activities

Phase 2 (System definition and operational context)
Synchronization points and deliverables:
System definition:
→ System boundaries
→ Initial system architecture, incl. list of functions, interfaces and generic systems
→ Logical and physical network plans
← Logical and physical network plans review
Operational context and criticality:
→ Essential functions
← Initial risk analysis results
← Zones and Conduits
Cybersecurity activities:
— * Initial Risk Assessment for the SuC (see 6.3)
— * Partitioning of the SuC into zones and conduits (see 6.4)
— * Documentation of components, interfaces and characteristics for each zone and conduit (see 6.5)
— Review of the logical and physical network plans
*: This activity and the corresponding synchronization point may also be conducted in phase 3.

Phase 3 (Risk analysis and evaluation)
Synchronization points and deliverables:
DRA:
→ Functional Requirements (linked to essential functions)
← Initial Threat Log
← Potential Updates (Zones and Conduits, network plans)
Cybersecurity activities:
— Detailed Risk Assessment (DRA) (see Clause 7): derive technical (e.g. SL-T), physical and organizational countermeasures or assumptions for zones and conduits
— Consider business continuity aspects (incl. incidence response and recovery) for the SuC

Phase 4 (Specification of system requirements)
Synchronization points and deliverables:
CRS release:
← System Cybersecurity Requirements Specification incl. security-related application conditions
Cybersecurity activities:
— SuC-specific refinement of normative requirements (see Clause 8)
— Definition of organizational and physical requirements
— Definition of security-related application conditions (see Clause 7)

Phase 5 (Architecture and apportionment of system requirements)
Synchronization points and deliverables:
CRS breakdown:
→ System architecture breakdown to components, incl. SuC inventory
← Subsystem Cybersecurity Requirements Specification incl. security-related application conditions, technical and organizational compensating countermeasures
Cybersecurity activities:
— DRA update, incl. assessment of the SL-C for components and definition of compensating countermeasures including security-related application conditions (see 8.3)
— Assigning the technical security requirements to components (see 8.3)
— Assigning responsibilities for the organizational and physical requirements for the railway operator/maintainer or asset owner
— Establish third party management for security aspects, including supplier security capabilities and support contracts

Phase 6 (Design and implementation)
Synchronization points and deliverables: no synchronization
Cybersecurity activities:
— Follow product lifecycle (EN IEC 62443-4-1 and 4-2 or other adequate cybersecurity standards)
— Consider and prevent potential conflicts between component cybersecurity functionality and functional architecture

Phase 7 (Manufacture or Procurement)
Synchronization points and deliverables: no synchronization
Cybersecurity activities:
— Follow product lifecycle (EN IEC 62443-4-1 and 4-2 or other adequate cybersecurity standards)
— Consider and prevent potential conflicts between component cybersecurity functionality and functional architecture

Phase 8 (Integration)
Synchronization points and deliverables:
Component integration:
← Qualification of security components
→ Integration test documentation including SuC environment information
Cybersecurity activities:
— Review of logical and physical network plans, list of systems and installed applications after implementation
— Qualification of security components and functions from test integration results, penetration testing and vulnerability scanning (see 9.3)
— Check and ensure removal of unnecessary software, hardware and services
— Update of threat risk assessment based on the results of phases 6 "Design & implementation" to 8 "Integration"

Phase 9 (System Validation)
Synchronization points and deliverables:
Cybersecurity validation:
← Cybersecurity case incl. security-related application conditions
← Security guidelines
Cybersecurity activities:
— Verification of security guidelines
— Validation of configuration and security functionality (see 9.4)
— Applicability verification of organizational requirements and security-related application conditions (see 9.4)

Phase 10 (System acceptance)
Synchronization points and deliverables:
Cybersecurity acceptance:
← Cybersecurity case updated incl. security-related application conditions
← Security guidelines updated
Cybersecurity activities:
— Finalization of cybersecurity case and security-related guidelines (see 9.2 and 9.5)
— Security Handover between System Integrator and railway operator
— Review of business continuity aspects (incl. incidence response and recovery) for the SuC

Phase 11 (Operation, maintenance and performance monitoring)
Synchronization points and deliverables:
Cybersecurity case updates:
← Continuous updates of the cybersecurity case based on vulnerabilities, incidents and changes including security patches
Cybersecurity activities:
— Maintenance of the logical and physical network plan and also the list of IT systems and installed applications
— Perform security and vulnerability monitoring and risk-based incidence response including risk analysis updates (see Clause 10)
— Perform patch and/or configuration management (see Clause 10)
— Perform data backup and auditing procedures to enable data recovery
— Maintenance of restrictive access authorizations
— If applicable, prepare and perform security (compliance) audits

Phase 12 (Decommissioning)
Synchronization points and deliverables:
Secrets Disposal:
← Disposal report
Cybersecurity activities:
— Disposal of components taking security criteria into account (will be addressed later in Clause 10)

Key

→ Information to be provided by all stakeholders contributing to the development, delivery, operation,
maintenance and disposal of the SuC (e.g. System Engineering, Safety, RAM, V&V, etc.) to the
cybersecurity process
← Information to be provided by the cybersecurity process to the other stakeholders
The labelled headings within the "Synchronization points and deliverables" entries (e.g. "SuC Identification:", "CRS release:") are synchronization points


NOTE All inputs provided during one phase are by default available for the following phases.

The railway operator security program shall be consistent with the application of the security management
requirements contained within IEC 62443-2-1, which are based on EN ISO/IEC 27001.
5.4 Cybersecurity context and cybersecurity management plan

Asset owners shall define the applicable cybersecurity context. They:


a) shall develop and document an understanding of their cybersecurity threats, which includes
identifying threat sources, systems, architectures of systems, organizational structures and
interfaces relevant to cybersecurity, and legal and regulatory requirements (Cybersecurity context)

b) shall communicate the cybersecurity context to their senior managers and board members and
ensure they are informed about cybersecurity risk

c) should regularly (at least annually) review the threat context to detect and highlight any emerging
trends that can change the risk context that the asset owner operates in;

d) shall implement a systematic methodology for the identification, evaluation and management of cyber
threats. Any risk assessment framework satisfying these requirements may be selected, e.g.
ISO 27005, NIST SP 800-30, ANSSI (EBIOS 2010 and EBIOS RM).

The system integrator shall define its cybersecurity management plan with the applicable security context
and the risk assessment framework.
See informative Annex G, G.2 for typical content of the cybersecurity management plan.
5.5 Relationship between cybersecurity and essential functions

5.5.1 General

For a railway application to operate in a safe and fully functional manner, its essential functions need to
be protected. Essential functions are defined as functions or capabilities which are required to maintain
the safety (including health and environment) and availability of the system. For railway applications, a
loss of protection, loss of control or loss of view would be considered as a loss of essential functions.
Since attacks on the system can lead to the loss of any of these properties, security countermeasures
need to be implemented to provide appropriate protection without affecting these functions negatively.
In contrast to the engineering domain of functional safety, the availability of railway applications needs to
be ensured at the same level of priority when considering security functions. While losses of availability
for trains or railway networks might be considered safe in the scope of functional safety, continuous
operation is one of the primary goals of security. Civil unrest as well as public relations and financial
damage to the operating entity due to loss of availability all need to be considered as part of the scope of
security.
5.5.2 Defence in depth

“Defence in depth” is one of the guiding principles to provide appropriate security for the essential
functions of all systems. At its core, it reduces the susceptibility of systems to attacks by eliminating single
points of failure within the systems’ protections. Layered security mechanisms increase the security of
the system as a whole, since attacks that successfully cause one mechanism to fail still need to penetrate
one or several remaining mechanisms.
The principle of defence in depth is illustrated in Figure 5 below. When applying defence in depth to
railway essential functions, integrity and availability are to be considered with the highest priority.


Figure 5 — Defence in depth with example of measures

The table below provides examples of areas concerned by each layer of defence in depth for PACIS
and ERTMS.
Table 2 — Examples of function related supporting assets in regard to the defence in depth layers

Layer | PACIS (information for passengers on board) | ERTMS (control and command)
Policy | Maintenance procedure of the PACIS | Organizational procedure for management of keys
Physical | Cabinets for server; physical ports not exposed to outside for displays or HMI | Equipment room, RBC cabinet, on-board cabin for EVC
Perimeter network | Firewall with other Ethernet on-board networks | Segregation of critical networks
Internal network | Ethernet on-board PACIS network | RBC network
Host | On-board server; displays; HMI | Vital computer (RBC, EVC)
Application | Information for passenger SW (PIS) | ETCS SW
Data | Packmedia (including pictures, text, etc.) | Movement authority

5.5.3 Security-related application conditions

Security-related application conditions (SecRACs) are a set of conditions that need to be satisfied to fulfil
the cybersecurity requirements.
In case a zone, conduit or SuC cannot inherently meet the targeted security requirements,
compensating countermeasures shall be defined. If their implementation takes place externally to the
considered SuC, they are exported as security-related application conditions (SecRACs).
Security-related application conditions (SecRACs) can be either:

— Technical countermeasures introduced outside of the considered SuC, or

— Organisational and procedural countermeasures, or

— Any combination of these.

NOTE IEC 62443-2-1 and EN IEC 62443-2-4 are possible sources.

For secure operation, it is necessary for the asset owner to be compliant with the security-related
application conditions.
SecRACs to be fulfilled by the end user (railway duty holders) shall be documented in the cybersecurity
case of the SuC as the primary document and in the security guidelines as secondary documents. Security
guidelines for installation, operation, maintenance and disposal of the SuC shall be referenced by the
security case and handed over during the SuC handover phase.
SecRACs are an essential part of the cybersecurity case (refer to Clause 9 and Annex G, G.2).
It is important that SecRACs are communicated to the stakeholders of essential functions. In particular,
SecRACs may be used as a basis for the coordination of the cybersecurity and safety processes.
5.5.4 Interfaces between the safety and the cybersecurity processes

5.5.4.1 Principles

To ensure the necessary stability and viability of safety-related documentation and approval, it is
advisable to separate cybersecurity and safety issues as far as possible and coordinate them adequately
in order to decouple the different lifecycles and the approval processes. Otherwise, each change affecting
the security of the system may trigger a new safety approval.
However, communication and interaction between safety and cybersecurity teams should be
implemented throughout the lifecycle and appropriate review processes and intervals should be
maintained.
Particularly, the safety process shall inform the cybersecurity process about the safety-related functions
or assets to be protected. This information shall be documented and serves as an input for the
cybersecurity risk assessment.
The cybersecurity process deals with the risks stemming from attacks and malicious interventions,
implementing cybersecurity requirements in the SuC or exporting some security-related application
conditions (SecRAC).
If cybersecurity threats have potential impacts on the safety of the system, the cybersecurity case shall
include or refer to evidence on how security threats with the potential to affect safety-related functions have
been evaluated and how protection against the adverse influence has been achieved.
The cybersecurity case and its SecRACs shall be communicated to the Safety management. If Safety
management considers SecRACs to potentially impact safety-related functions, they can then be
considered as SRACs.
5.5.4.2 Possible implementation through high level cybersecurity objectives

One possible solution to achieve the necessary levels of separation and coordination between
cybersecurity and safety processes is to define only a limited number of coordinated cybersecurity
objectives on a high level. These objectives need to be fulfilled through security requirements or security-
related application conditions.
EXAMPLE 1 For safety-critical communication, the high-level cybersecurity objective could be “The safety-critical
messages have to be protected against manipulation”.

EXAMPLE 2 The high-level cybersecurity objectives can also be related to a safety-critical item list. In this case,
the high-level cybersecurity objective could be “The functions of the safety-critical items have to be protected against
manipulation.”


The fulfilment of the high-level cybersecurity objectives is demonstrated as part of the cybersecurity case.
They are fulfilled either by the cybersecurity functions or under certain security conditions and
assumptions (SecRACs).
The cybersecurity case shall be maintained and updated regularly. If the cybersecurity functions are
changed, it has to be demonstrated that the safety-related cybersecurity objectives still hold (including
the exported SecRAC). If necessary, the cybersecurity case can be assessed and certified according to
the relevant cybersecurity standards.
As a result of this documentation structure, the documentation regarding functional safety can be
considered stable and viable so long as the cybersecurity process is properly adapted to changing threat
scenarios. Hence, while the security documentation may be subject to frequent updates as a result of the
volatile threat landscape, the safety approval can remain valid.
5.6 Cybersecurity assurance process

The cybersecurity assurance and acceptance process for railway applications has the goal of successfully
validating the implemented security functions and completing the handover process from the system integrator
to the asset owner. It is based on the confidence placed in the implemented countermeasures required by
the cybersecurity requirements specification (CRS) for the SuC. The evidence delivered to the asset
owner is the cybersecurity case (incl. the cybersecurity validation report). More detailed information and
guidance on the structure and handling of the cybersecurity case can be found in Clause 9.
A generic overview of the relationship between the threat risk assessment (TRA) and the system
cybersecurity assurance is shown in Figure 6 below. Identification of threats and vulnerabilities during the
TRA considers technical, organisational as well as physical aspects of the SuC. The CRS defines security
countermeasures which enable the achievement of an acceptable level of residual risk, as
demonstrated by the detailed risk assessment (DRA). The CRS, once implemented, has to be verified.
This is achieved by applying assurance techniques. In the next step, system cybersecurity validation is
performed, assuring that the SuC fulfils the requirements demanded by the operational environment
where it is implemented (configuration). The results of all these verification and validation
activities are documented in the cybersecurity case.


Figure 6 — Relationship TRA and SA

6 System definition and initial risk assessment

6.1 Introduction

This Clause provides the overview of the system definition and its scope. It defines the rules and the
process for handling cybersecurity activities at system level with the aim of identifying:
— the system under consideration (SuC)

— the essential functions of the SuC

— the assets supporting the essential functions

— possible threats for the given SuC

— initial risk assessment of the unmitigated cybersecurity risks

— the separation into zones and conduits.

Figure 7 illustrates the different steps to be applied to perform the initial risk assessment and to derive a
security architecture in “zones and conduits” (see also the lifecycle phase 2 in Clause 5). The process
corresponds to EN IEC 62443-3-2 and the references relate to the zone and conduit requirements (ZCR)
defined there.


Figure 7 — Initial risk assessment flowchart

6.2 Identification of the system under consideration

6.2.1 Definition of the SuC

The system under consideration (SuC) is a constituent part of the railway system (see Clause 4), which
can be understood as a system of nested systems, each comprising subsystems and components, which
together provide the required functionality. Cybersecurity threats may endanger the correct execution of
this functionality as well as the integrity of safety-related functions and therefore endanger the secure
behaviour of the system, with potential impact on safety, finances or system availability. The identification
of these cybersecurity threats (ref. subclause 6.2) requires a description of the SuC, the functions this SuC
provides and all interfaces of the SuC. The following rules apply for the description of the SuC:
— The definition of the SuC shall contain the scope and boundary of the system, which is to be
developed and assessed.

— The SuC is part of an operational system to control railway related technical systems.

— The functional and architectural description of the SuC should follow the hierarchical approach given
in EN 50126-1:2017, 5.2, to identify it exactly, including borders and external interfaces.

NOTE Generally, a draft of the SuC is expected to be supplied by the overall system engineering process (see
phase 2 in 5.2).


6.2.2 Overall functional description

The location of the SuC within the railway architecture model (see Clause 4) shows to which kind of
system the SuC belongs (e.g. rolling stock, signalling, infrastructure) and which main functions are
supported. The cybersecurity threat identification of the SuC (including known vulnerability identification)
shall be based on a SuC description in which the functions, the system boundaries and the provided functionality
are defined. Therefore, this identification of the main functions should be detailed by giving information
related to:
a) the objective (intended purpose) and the mission profile of the SuC comprising the definition of the
system functions, the system borders and the interfaces

b) the operational scenarios, which define how the SuC will be used and which actors are interfering
or interfacing with the SuC

c) the context of implementation and use

d) the planned lifetime and therefore possibly necessary system updates in HW and SW

e) maintenance plans and concepts for the SuC

f) constraints by the environment which integrates the SuC.

6.2.3 Access to the SuC

The SuC can be compromised via access to it (e.g. through interfaces where access is
possible). Such access includes human-machine interfaces as well as technical interfaces which enable
a rogue device to be added to the system, and communication interfaces via a network. A complete list of
the SuC interfaces should be provided with the definition of:
— Function for which the interface is used

— Protocol for the transmission via networks (if already available)

— Functional data being used

— Impact (loss of confidentiality, integrity or availability)

— Function of neighbouring systems

— Organizational interfaces.
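
As a non-normative illustration of the interface list described above, the following sketch shows one possible machine-readable record per interface. The class name, field names and the example values (e.g. the diagnostic interface "IF-DIAG-01") are assumptions made for this sketch only and are not prescribed by this document.

# Illustrative sketch only: a possible record structure for the SuC interface
# inventory described above. Field names and values are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class SucInterface:
    name: str                      # identifier of the interface
    function: str                  # function for which the interface is used
    protocol: str                  # transmission protocol, if already available
    functional_data: List[str]     # functional data being used
    impact: List[str]              # impact: loss of confidentiality, integrity or availability
    neighbouring_function: str     # function of the neighbouring system
    organizational_interface: str  # responsible organization on the other side

# Hypothetical example entry for a diagnostic interface of an on-board unit
diag_if = SucInterface(
    name="IF-DIAG-01",
    function="Remote diagnostics of the on-board unit",
    protocol="TLS over TCP/IP (assumed)",
    functional_data=["diagnostic logs", "software version information"],
    impact=["integrity", "availability"],
    neighbouring_function="Ground-based maintenance server",
    organizational_interface="Maintainer of the rolling stock",
)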

6.2.4 Essential functions

The essential functions of the SuC are the key functions needed for the operation of the railway
application. They include, but are not limited to, functions related to safety, availability or control. For
example, these may include traction and braking, door control, signalling, passenger information and
communication functions.
If the essential functions are compromised, this normally means loss of protection, loss of confidential
data, loss of control, loss of data integrity and status or loss of connection to the device.
NOTE The security functions protect the essential functions.

Knowing the essential functions of the SuC is crucial to protect them and to avoid imposing security
requirements that could limit or even compromise them. The essential functions shall be identified by the
overall system engineering process and shall be provided as input for the cybersecurity process (see
phase 2 in 5.2). The essential functions may be identified by requirements from the system engineering
process that are labelled with the appropriate properties e.g. safety-related.


6.2.5 Assets supporting the essential functions

Assets supporting the essential functions of the SuC shall be clearly identified in the system definition
with their relationship to their corresponding essential function.
6.2.6 Threat landscape

To build an appropriate cybersecurity strategy, shared between all the stakeholders, the first step is to
identify and agree on a consistent list of generic cybersecurity threats that could jeopardize the railway
application. Agreement on the threat landscape is crucial, as discrepancies in the sets of threats considered
by the different stakeholders will lead to risk underestimation and a lack of control measure implementation.
Hence, all stakeholders should participate in a process to agree on a generally accepted threat landscape.
The threat landscape should be based on recognized and accepted threat libraries or reports and built with
a high-level approach providing an overview of the threats applicable to the railway sector.
Threat libraries and reports like the following should be taken as inputs:
— ENISA Threat Landscape Yearly report

— ISO/IEC 27005

— NIST SP 800-30.

Finally, the threat landscape should:


— be defined or at least approved by the asset owner

— be updated at least once a year (or according to contractual requirements)

— provide mapping to the input threat libraries or reports

— provide a rationale for threats that are not considered.

It should be noted that natural hazards are out of scope of this threat landscape.
If the stakeholders agree that a detailed risk assessment is necessary, then in the initial risk assessment
the threats can be grouped into classes of generic threats, e.g. threats against integrity. All identified threats
shall be entered into the threat log (usually created in lifecycle phase 3). The threat landscape is the most
important input to be extended in the detailed threat identification (see 7.2.1).
6.3 Initial risk assessment

6.3.1 Impact assessment

The first step of the initial risk assessment is the impact assessment: for each asset supporting
the essential functions of the SuC, the consequences of losing the integrity, availability or confidentiality
of the asset shall be evaluated.
The impact evaluation in the initial risk assessment shall measure the worst global impact in case of loss
of the asset properties. The global impact shall be evaluated not only at the business level, but also at
the environmental, societal and human impact levels.
To enable a consistent approach for threat risk assessment between asset owners, system integrators and
product suppliers, it is important that all the stakeholders share the same impact assessment reference.
For the railway application, at least the following criteria should be considered:
— Human health and safety

— Operational availability

— Financial impact


Table 3 is given as an example for qualitative impact assessment (see Annex E for more examples).
Table 3 — Qualitative Impact Assessment example

Impact | Human health and safety | Operational availability | Financial impact
A | One or several fatalities | Most of operations disturbed during more than 1 week | Could lead to organization bankruptcy
B | Several severe or critical injuries | Most of operations disturbed between 1 day and 1 week; important operations disturbed during more than 1 week | Impacts in a significant way the organization annual budget (>10 % of revenue)
C | One severe injury or several injuries requiring hospitalization | Most of operations disturbed between 1 h and 1 day; important operations disturbed between 1 day and 1 week | Impacts in a significant way the organization annual benefits
D | One injury requiring hospitalization or several light injuries (not requiring any hospitalization) | Important operations disturbed less than 1 day | Impact not visible on annual basis

In the initial risk assessment, the worst-case consequence shall be used to determine the considered
impact for each asset property.
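
As a non-normative illustration of the worst-case rule stated above, the following sketch evaluates, for one asset, the impact of losing each property and keeps the most severe rating of Table 3 ("A" most severe, "D" least severe). The asset name and the per-property ratings are invented for this example.

# Illustrative sketch of the worst-case impact rule described above.
# Impact categories follow Table 3: "A" (most severe) to "D" (least severe).
SEVERITY_ORDER = ["D", "C", "B", "A"]  # increasing severity

def worst_case_impact(impacts_per_property: dict) -> str:
    """Return the most severe impact rating over confidentiality,
    integrity and availability for one asset."""
    return max(impacts_per_property.values(), key=SEVERITY_ORDER.index)

# Hypothetical example: an interlocking asset assessed per property
interlocking = {"integrity": "A", "availability": "B", "confidentiality": "D"}
print(worst_case_impact(interlocking))  # -> "A"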
6.3.2 Likelihood assessment

The threat landscape may change suddenly, and experience is of limited use for cybersecurity. Therefore,
likelihood can be evaluated only on a qualitative or semi-quantitative scale. Because of that, asset owners
shall make use of intelligence reports and other information sources to determine the potential attackers
by which they might be targeted.
Likewise, this information should be shared with the system integrators before performing any threat risk
assessment, since it will have a direct impact on the cybersecurity requirements of the final product.
For a railway SuC, at least the following aspects should be considered when estimating the likelihood of
a threat:
— The exposure of the asset to attack, depending e.g. on its ease of informational or physical
access

— The physical location of the asset

— Capability needed to perform the attack, e.g. sophistication and effort

The choice of the factors that influence likelihood should be justified (see Annex E for examples).
Subjective factors as well as highly dependent or correlated factors should be avoided.
As an example, likelihood could be estimated from scales based on the exposure and vulnerability of the asset
(see Table 4). In this example the resulting likelihood L is calculated from Exposure and Vulnerability by
L = EXP + VUL - 1.


Table 4 — Likelihood assessment matrix – Example

Rating 1
Exposure (EXP): Highly restricted logical or physical access for attacker, e.g.
— highly restricted network and physical access, or
— product or components cannot be acquired by attacker or only with high effort
Vulnerability (VUL):
— Successful attack is only possible for a small group of attackers with high hacking skills (high capabilities needed)
— Vulnerability is only exploitable with high effort, and if strong technical difficulties can be solved, non-public information about inner workings of system is required
— State of the art security measures to counter the threat
— High chance for attacker to be traced and prosecuted

Rating 2
Exposure (EXP): Restricted logical or physical access for attacker, e.g.
— internal network access required, or
— restricted physical access, or
— product or components can be acquired by attacker with medium effort
Vulnerability (VUL):
— Successful attack is feasible for an attacker with average hacking skills (medium capabilities needed)
— Vulnerability is exploitable with medium effort, requiring special technology, domain or tool knowledge
— Some security measures to counter the threat
— Medium chance for attacker to be traced and prosecuted

Rating 3
Exposure (EXP): Easy logical or physical access for attacker, e.g.
— Internet access sufficient, or
— public physical access, or
— attacker has access as part of daily work, operation, or maintenance activities, or
— product or components can be acquired by attacker with low effort
Vulnerability (VUL):
— Successful attack is easy to perform, even for an unskilled attacker (little capabilities needed)
— Vulnerability can be exploited easily with low effort, since no tools are required, or suitable attack tools freely exist
— No or only weak security measures to counter the attack caused by the threat
— Low chance for attacker to be traced and prosecuted

In the initial risk assessment, for each asset the worst-case result (i.e. the highest likelihood without any
cybersecurity countermeasure) shall be used to determine the likelihood of the occurrence of a threat.
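
A minimal sketch of the example scale above is given below, assuming the combination rule L = EXP + VUL - 1 with EXP and VUL rated 1 to 3 as in Table 4; the numeric values used in the call are hypothetical.

# Illustrative sketch of the example likelihood scale described above,
# where L = EXP + VUL - 1 and both EXP and VUL are rated 1 to 3 (Table 4).
def likelihood(exposure: int, vulnerability: int) -> int:
    """Combine exposure and vulnerability ratings (1..3) into a
    likelihood rating (1..5), as in the Table 4 example."""
    if not (1 <= exposure <= 3 and 1 <= vulnerability <= 3):
        raise ValueError("EXP and VUL ratings are defined on a 1..3 scale")
    return exposure + vulnerability - 1

# Worst case without countermeasures: easy access (EXP=3), weak protection (VUL=3)
print(likelihood(3, 3))  # -> 5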
6.3.3 Risk evaluation

The risk evaluation may be determined by a risk matrix in which the likelihood and the impact of the threat
are related. The risk acceptance criteria shall be defined or at least approved by the asset owner.
The risk matrix in Table 5 is a general example. It should be calibrated, justified and agreed with the asset
owner. Detailed examples can be found in Annex E.
Table 5 — Risk matrix example

Threat impact \ Likelihood | 1 | 2 | 3 | 4 | 5
D | Low | Low | Low | Medium | Significant
C | Low | Low | Medium | Significant | High
B | Low | Medium | Significant | High | High
A | Medium | Significant | High | High | Very high

The initial risk evaluation shall be performed for each asset supporting the essential functions of the SuC
as defined by the system definition, considering the mission profile and the identified threat landscape. It
is permitted to focus the initial risk evaluation on the most critical property lost by the
asset (integrity, availability or confidentiality). In the initial risk assessment, the risk ranking of the assets
is determined by the risk matrix, because in the initial risk assessment the evaluation is performed as a
worst-case evaluation without any countermeasures.
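
As a non-normative sketch, the example risk matrix of Table 5 can be looked up as follows; the matrix values are copied from Table 5 and, like the table itself, have to be calibrated and agreed with the asset owner. The impact and likelihood values in the usage example continue the hypothetical interlocking asset used earlier.

# Illustrative lookup of the example risk matrix in Table 5 (not normative).
RISK_MATRIX = {
    # threat impact: risk per likelihood rating 1..5
    "D": ["Low", "Low", "Low", "Medium", "Significant"],
    "C": ["Low", "Low", "Medium", "Significant", "High"],
    "B": ["Low", "Medium", "Significant", "High", "High"],
    "A": ["Medium", "Significant", "High", "High", "Very high"],
}

def risk(impact: str, likelihood: int) -> str:
    """Return the risk rating for a given impact (A..D) and likelihood (1..5)."""
    return RISK_MATRIX[impact][likelihood - 1]

# Worst-case example for the hypothetical interlocking asset: impact "A", likelihood 5
print(risk("A", 5))  # -> "Very high"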
6.4 Partitioning of the SuC

6.4.1 Criteria for zones and conduits breakdown

On the basis of the output of the initial risk assessment, the assets shall be assigned to consistent security
zones connected by conduits. This means all assets in the same zone and all data sent by the same
conduit share the same or similar cybersecurity requirements.
NOTE 1 In case of a pre-existing system, it is possible to reuse the results of previous security analyses as input.

The following criteria should be used to partition the SuC into Zones and Conduits:
a) Risk of the assets, in terms of Integrity, Availability and Confidentiality

b) Type of interfaces or connection to the other parts of the SuC (e.g. wireless)

c) Physical or logical location

d) Access requirements

e) Operational function

f) Organization responsibilities for each asset

g) Safety aspect

h) Technology lifecycle, e.g. product lifecycle, obsolescence.

In the railway framework, “risk”, “physical location” and “safety aspect” are commonly used criteria to
break down the SuC into Zones and Conduits.
NOTE 2 In the railway context, examples of operational functions are braking, traction control, doors open/close,
train control, diagnostic, maintenance.

6.4.2 Process for zones and conduits breakdown

The objective of grouping the assets into zones and conduits is to identify the assets that share
common cybersecurity requirements and to group them so that they can share the mitigation means.
The following aspects shall be considered according to EN IEC 62443 for the process to define zones
and conduits:
a) The business assets (IT) and control assets (OT) shall be separated in different zones.

Refer to EN IEC 62443-3-2 ZCR-3.2.

b) Safety-related assets shall be grouped in dedicated zones which are logically or physically separated
from zones which are not safety-related. However, if non-safety assets are allocated to such a zone,
the complete zone is considered as safety-related.

Refer to EN IEC 62443-3-2 ZCR-3.3.

c) Temporarily connected devices should be included in zones separated from assets that are intended
to be permanently connected.

Refer to EN IEC 62443-3-2 ZCR-3.4.


d) Wireless devices should be included in zones separated from the ones with the wired devices.

Refer to EN IEC 62443-3-2 ZCR-3.5.

e) (Remote) Devices that are permitted to make connections to the SuC via networks external to the
SuC should be grouped into a separate zone or zones.

Refer to EN IEC 62443-3-2 ZCR-3.6.

f) Zones shall contain the security device protecting the perimeter at conduit edges.

Refer to EN IEC 62443-3-3, SR 5.2.

Exceptions from the above requirements shall be justified by risk analysis.


As a refinement according to the rules above, the following design rules should be considered for
maintenance application:
— Direct (maintenance) access from business zones to control zones without control by a security
device or similar (e.g. proxy server) should not be allowed.

— External maintenance access (e.g. via Internet) should be grouped in a separate zone.
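
As a non-normative illustration of the zone definition rules above, the following sketch checks a proposed asset-to-zone allocation against rules a) and b) (IT/OT separation and safety designation). The data layout and the example zone "Z-SIG-01" are assumptions made for this sketch only.

# Illustrative, non-normative consistency checks for a proposed zoning,
# reflecting rules a) and b) above. Not a complete implementation.
def check_zoning_rules(zones: dict) -> list:
    """zones maps a zone name to a list of assets; each asset is a dict with
    'name', 'domain' ('IT' or 'OT') and 'safety_related' (bool)."""
    findings = []
    for zone, assets in zones.items():
        domains = {a["domain"] for a in assets}
        if {"IT", "OT"} <= domains:
            findings.append(f"{zone}: mixes IT and OT assets (rule a, ZCR-3.2)")
        safety_flags = [a["safety_related"] for a in assets]
        if any(safety_flags) and not all(safety_flags):
            # Rule b): the complete zone is then considered safety-related
            findings.append(f"{zone}: contains non-safety assets; the whole zone "
                            "is considered safety-related (rule b, ZCR-3.3)")
    return findings

# Hypothetical example
zones = {
    "Z-SIG-01": [
        {"name": "IXL", "domain": "OT", "safety_related": True},
        {"name": "Diagnostic server", "domain": "OT", "safety_related": False},
    ]
}
print(check_zoning_rules(zones))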

6.5 Output and documentation

6.5.1 Description of the system under consideration

A high-level description of the SuC shall include the name, a high-level description of the function and the
intended usage of the SuC, as well as a description of the equipment or process under control as it is
known at the stage of the initial risk assessment.
At a later stage, during the detailed risk assessment, this SuC description should be completed to achieve
a detailed description of all assets (reference and version).
6.5.2 Documentation of the initial risk assessment

The documentation shall include at least:


— Threat Landscape

— Risk Matrix

— Risk Evaluation.

6.5.3 Definition of zones and conduits

The following items shall be identified and documented for each defined zone and conduit:
a) Name and/or unique identifier indicating also the type (zone or conduit)

b) Accountable organization(s)

c) Definition of logical boundary

d) Definition of physical boundary, if applicable

e) Safety designation

f) List of all logical access points


g) List of all physical access points, if applicable

h) List of data flows associated with each access point

i) Connected zones or conduits

j) List of assets and their risk classification and business value.
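
As a non-normative sketch, the documentation items a) to j) above could be captured per zone or conduit in a structured record. The class name, field names and the example values are assumptions for this sketch only, not a prescribed format.

# Illustrative, non-normative record covering documentation items a) to j)
# for a zone or conduit. Names and example values are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class ZoneOrConduit:
    identifier: str                   # a) name and/or unique identifier
    kind: str                         # a) "zone" or "conduit"
    accountable_orgs: List[str]       # b) accountable organization(s)
    logical_boundary: str             # c) definition of logical boundary
    physical_boundary: str            # d) definition of physical boundary, if applicable
    safety_related: bool              # e) safety designation
    logical_access_points: List[str]  # f) list of all logical access points
    physical_access_points: List[str] # g) list of all physical access points, if applicable
    data_flows: List[str]             # h) data flows associated with each access point
    connected_to: List[str]           # i) connected zones or conduits
    assets: List[str]                 # j) assets with risk classification and business value

signalling_zone = ZoneOrConduit(
    identifier="Z-SIG-01", kind="zone",
    accountable_orgs=["Infrastructure manager"],
    logical_boundary="Signalling VLAN behind security gateway SG-01",
    physical_boundary="Equipment room and trackside cabinets",
    safety_related=True,
    logical_access_points=["SG-01 maintenance port"],
    physical_access_points=["Equipment room door"],
    data_flows=["IXL <-> RBC data exchange via C-SIG-TMS-01"],
    connected_to=["C-SIG-TMS-01"],
    assets=["IXL (risk: high)", "RBC (risk: high)"],
)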

7 Detailed risk assessment

7.1 General aspects

Clause 7 describes the detailed risk assessment, which should be performed for each zone and each
conduit (or cluster of zones and conduits), resulting in the definition of the cybersecurity requirements
specification as the central outcome of this activity.
NOTE 1 It is assumed that for railway applications a detailed risk assessment is almost always necessary. In the
event that the outcome from the initial risk assessment is that all risks are sufficiently mitigated without any additional
countermeasures (e.g. because there is very strong physical and organisational protection, see ZCR 4 of EN
IEC 62443-3-2), the detailed risk assessment could be skipped except for the documentation of the cybersecurity
requirements, see 7.2.5.

In EN IEC 62443-3-2, ZCR 5.6, different approaches for the derivation of SL-T are described. The first is
based directly on the need for protection against a particular kind of attacker (e.g. hacker, criminal
organization, state-sponsored group), including the estimation of the effort needed by an attacker, also
known as the attack vector. In this approach it is determined which types of attack by which kind of attacker
a zone or conduit of the SuC should withstand, taking into account the identified threats and vulnerabilities
(see 7.2.1) and legal constraints, resulting directly in an SL-T; see 7.2.4 for more detail. Also in this
approach, it still has to be checked that the risk is acceptable.
EXAMPLE 1 A railway duty holder decides that a particular zone of the SuC should be protected against hacker
groups or criminal organizations that have system knowledge and may apply sophisticated attacks but have only
moderate motivation and resources. By the definition of SL-T this is well represented by SL-T = 3 (see 7.2.4) and this
would be the overall requirement. By EN IEC 62443-3-3 (see Clause 8) the corresponding requirements can be
derived for this SL-T. Finally, the risk evaluation would have to confirm the SL-T.

The second approach is based on the difference between the unmitigated cybersecurity risk (as derived in
Clause 6 as a basis for the zoning) and the acceptable risk (as defined in a risk matrix like Table 5). It is
described in detail in this Clause. It has to be mentioned as a precondition that the zones and conduits
to be assessed by a detailed risk assessment should have reached a certain level of maturity of the
architecture and possibly of its (planned) implementation. Generally, the second approach leads to a more
appropriate SL-T value but takes more effort.
Generally, the detailed risk assessment described here is proactive, i.e. it is not triggered by an incident
or vulnerability (see Clause 10).
There may be a need to update the detailed risk assessment, e.g. when compensating
countermeasures are to be evaluated (see Clause 8) or if new threats or vulnerabilities become known.
There is a need to emphasize the importance of embedding cybersecurity threat risk assessment as part
of the systematic management of risk in rail systems. In particular, cybersecurity risks affecting safety,
availability and other important business objectives of the railway system have to be addressed.
The detailed risk assessment should be reviewed and updated:
— at each lifecycle phase (if necessary) by the responsible stakeholder for this product or system, and

— at regular intervals or whenever triggered (e.g. when new security threats or vulnerabilities become
known), to identify new threats and vulnerabilities of the product or system.


Threat risk assessment may not override mandatory requirements for security protection, e.g. by
regulation.
NOTE 2 Clause 7 starts with the requirements for the railway duty holder. It is well understood that other
stakeholders also have their role in the detailed risk assessment and that there is a need for exchange of information with
respect to security risks.

The dynamic nature of cybersecurity may lead to newly identified vulnerabilities being exploitable by an
attacker.
EXAMPLE 2

— Increase of attacker capabilities resulting in a higher likelihood of a threat

— Growth of networked systems offering a larger attack surface with new attack vectors

— Digitization of railway assets

— etc.

The risk assessment principles in this Clause shall be applied accordingly during the operation,
maintenance and performance monitoring to address the risks and decide on further actions to be taken.
In the case where zones or conduits are similar from a risk perspective, the detailed risk assessment
may be performed only once, but the results need to be applied consistently to all zones and conduits
affected.
7.2 Establishment of cybersecurity requirements

7.2.1 General

The general process for the detailed risk assessment by the railway duty holder (or by the system
integrator by contractual agreement) is depicted in the flowchart in Figure 8. It adapts ZCR 5 from EN
IEC 62443-3-2 to the railway environment.
The zone and conduit cybersecurity requirements are derived in lifecycle phase 3 as a result of the
detailed risk assessment. These requirements are detailed and apportioned to components in lifecycle
phases 4 and 5.
The basis of the process is the output from the initial risk assessment, in which the zones and conduits are
identified. In a first step the threat and vulnerability identification is detailed. Then, for all identified
threats, an appropriate risk acceptance principle is chosen and approved by the railway duty holder. Note
that for each threat only one principle is applied, but a set of threats can be treated by the same principle.
Depending on this choice, the cybersecurity requirements are determined. The process is described in
more detail in the following clauses.
It is acknowledged that the easiest application of the process is when only one principle is applied for each
zone or conduit, namely when a complete zone or conduit is covered either by a code of practice, by a
reference system or by explicit risk evaluation. But for complex systems a mixture of the principles may
be necessary.
NOTE 1 The process as described here is used to determine the cybersecurity targets, e.g. the target security level
SL-T, as part of the security requirements specification. When the architecture is completed, a similar assessment is
carried out to derive the capability security level SL-C. Finally, after integration of the complete system, the
achieved security level SL-A can be assessed; see Clause 9 for more information.

NOTE 2 In some cases, e.g. application of codes of practice or reference systems, security targets are not
expressed by SL-T. In other cases the SL-T applies only to some threats, as others are controlled by other principles.
Finally, in the cybersecurity requirements specification all partial requirements are collected.


NOTE 3 The process is similar to EN 50126-1 for safety-related hazards. It aims to integrate legacy solutions as
well as national or international codes of practice and ensures a continuity of security practices towards application
of IEC 62443 series.

Figure 8 — Detailed risk assessment flowchart

When applying either a code of practice or a reference system to cope with a subset of the SuC threat
landscape, it shall be demonstrated at the end that all the threats of the SuC threat landscape are covered
either by codes of practice, reference systems or explicit risk evaluation.
Criteria for applicability of codes of practice and/or reference systems shall be evaluated for each update
of the detailed risk assessment.
7.2.2 Threat identification and vulnerability identification

7.2.2.1 Overview

The initial threat identification takes place in the form of the identification of the threat landscape in phase
2 (6.2.6); it is detailed and checked here, in particular for completeness.


7.2.2.2 Objectives

Identify in a threat log all the threats towards each zone/ conduit.
7.2.2.3 Activities/ Requirement or Recommendation

Threat sources can be broken down into the following categories:


a) Internal actors (staff, contractors and service providers)

— Operational Staff

— Maintenance Staff

— IT and OT Engineering Staff

— Contractors and Service Providers

— Suppliers

— Other staff

b) External actors

— Cyber terrorists

— Issue-motivated groups

— Former staff and contractors

— Cybercrime groups

— Nation state actors

— Hackers

— Others (e.g. passengers, e.g. with infected devices).

Each of the categories has different motivations (financial, political, personal), capabilities (from using
simple tools to development of novel malware) and freedom of action. The choice of considered actors
depends on the context of application and is documented by the entity executing the detailed risk
assessment.
In assessing a threat, an organization should consider the dimensions of type, motivation, capability and
freedom of action, remembering that cyber defence shall be built on an assessment such that it is
efficient, economical and effective against the most probable and most dangerous threats.
For each threat, at least the following information shall be documented in the threat log:
a) the threat sources

b) the capability or skills or motivation of the threat source

c) the possible threat scenarios and actions

d) the potentially affected assets (as identified in the initial risk assessment)

e) the vulnerabilities of the SuC (if known).

EXAMPLE It is recommended to systematically name the threats, e.g. mnemonically in the form of


T.<attack>{<attacker>.<further attributes>}

Here T stands for threat, <attack> for the specific variant of the threat and <attacker> for the role of the person
performing the attack, e.g. an (external) attacker, user, etc. It often suffices to deal with T.<attack>, especially if the
cybersecurity goals are not differentiated according to further attributes or if it is clear from the assumptions which
attacker is meant.
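As an illustration only, the following sketch (Python, not part of this specification) shows how a threat log entry covering items a) to e) and the mnemonic naming scheme above could be structured; all field names and the example threat are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ThreatLogEntry:
    """Hypothetical threat log entry covering the minimum items a) to e)."""
    name: str                       # mnemonic name, e.g. "T.<attack>.<attacker>"
    threat_sources: List[str]       # a) threat sources
    capability_motivation: str      # b) capability, skills or motivation (qualitative class)
    scenarios: List[str]            # c) possible threat scenarios and actions
    affected_assets: List[str]      # d) assets identified in the initial risk assessment
    vulnerabilities: List[str] = field(default_factory=list)  # e) known vulnerabilities of the SuC, if any

threat_log = [
    ThreatLogEntry(
        name="T.Tampering.Maintainer",
        threat_sources=["Maintenance Staff"],
        capability_motivation="simple means, low motivation",
        scenarios=["Unauthorized change of configuration data via a maintenance port"],
        affected_assets=["Maintenance workstation"],
    ),
]
```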

Due to the high number of possible combinations, the following items may be classified into adequate
qualitative classes:
— Cyber capability/skills and resources

— Interest/Motivation of each attacker

— Knowledge of target

— Vulnerability of the SuC (if known)

— Risk.

The threat log shall be a living document, which is maintained and updated by the security engineer
during the design phase and, whenever needed, during the operation phase.
7.2.2.4 Deliverables

Threat Log (Threats with respect to zone and conduit).


7.2.3 Vulnerability identification

7.2.3.1 Overview

If the vulnerabilities of solutions or assets are known, then these shall be considered in the detailed risk
assessment.
For new designs, e.g. HW or SW, specific vulnerabilities may not be known; in that case the analysis shall be
performed based on generic assumptions or on vulnerabilities of similar systems or products. It may be
updated and detailed in later lifecycle phases.
7.2.3.2 Objectives

Identify vulnerabilities related to each asset of the zone or conduit.

7.2.3.3 Activities / requirement or recommendation

This activity needs a concise identification of the assets of the zone or conduit as well as their HW and
SW elements, e.g. operating systems.
NOTE Known vulnerabilities can be extracted from appropriate vulnerability databases including their criticality
classification by the vendor.

7.2.3.4 Deliverables

List of vulnerabilities (related to each asset).


7.2.4 Risk acceptance principles

7.2.4.1 General

The following risk acceptance principles may all be applied; for each hazard or threat one of them can be
chosen deliberately, in the following order:
1. Application of codes of practice


2. Analysis of similarity with reference systems

3. Explicit risk evaluation

NOTE As it is infeasible to quantify cybersecurity risks, all risk acceptance criteria are understood and applied
in a qualitative or semiquantitative manner.

7.2.4.2 Application of codes of practice

If relevant codes of practice apply to the threats identified, compliance with these codes of practice is
sufficient to consider that the risks associated with these threats are acceptable.
The requirements for a code of practice are that it shall be widely recognized in the railway domain (or
its use explicitly justified) and that it shall be relevant and effective in ruling out the threats for which it is applied.
This means that laws, regulations or standards can be consulted, but also internal codes of practice, e.g.
protection profiles. TSIs are also an important source of codes of practice.
EXAMPLE EN 50159 used as a code of practice to cope with threats related to safety-related communication.

In the application of a code of practice the following points shall be checked:


a) Is the code of practice widely recognized as a relevant cybersecurity code of practice in the railway
domain? If not, e.g. because it comes from another application domain, its application shall be
explicitly justified.

b) Is it applicable and still valid to the particular threats (or vulnerabilities) under consideration?

c) Is there a justification that it rules out the particular threats (or vulnerabilities)? If not, its application
should be justified by a particular detailed risk assessment.

d) Has the code of practice been applied correctly and completely? Any deviations shall be justified by
a particular detailed risk assessment.

NOTE A code of practice can rule out a set of threats; different codes of practice can also be applied to
different sets of threats.

7.2.4.3 Reference systems

If no relevant code of practice exists for a threat or set of threats, comparison with a reference system
can help to determine requirements for which the risk can be acceptable.
The requirements are that a reference system shall still be acceptable according to the current
cybersecurity state of the art and that, although operating and environmental conditions are not identical,
functions and interfaces are so similar that the differences are insignificant. If the latter is not the
case, the differences can be subjected to an additional risk evaluation.
EXAMPLE Security gateways have been used for decades to couple operational control centres and sub-
centres. The method is approved and is currently still suitable for approval. Now, if similar coupling is to be used in a
different application, the requirements, and also the cybersecurity requirements, can be transferred for similar
functions, interfaces and also for operating and environmental conditions. The risk is acceptable if these requirements
are fulfilled.

For the application of a reference system the following points shall be checked:
a) Would the reference system still be acceptable to the railway duty holder according to the current
state of art? This rules out legacy systems that may not be introduced into operation anymore or
systems with grandfather’s rights.

b) Are functions and interfaces similar? If in doubt a detailed risk assessment shall be carried out to
show that differences are acceptable.


c) Are the operating environment and the environmental conditions similar? If in doubt a detailed risk
assessment shall be carried out to show that differences are acceptable.

d) Is there a security requirement specification for the reference system? If not, the security
requirements shall be collected from the documentation of the reference system and checked for
correctness and completeness.

e) Are all the threats considered effectively treated by the reference system?

f) Have the security requirements for the reference system been applied correctly and completely? Any
deviations shall be justified by a particular detailed risk assessment.

If a code of practice (e.g. a standard or a protection profile) or reference system exists for a particular
zone or conduit, then the explicit risk evaluation does not need to be carried out.
Note that security requirements from a reference system can cover more than a single threat, and that
multiple reference systems may also be applied to different sets of threats.
7.2.5 Derivation of SL-T by explicit risk evaluation

For all remaining threats that have not been covered by the application of Code of Practice or Reference
Systems an explicit risk evaluation is performed.
The explicit risk evaluation has the task to derive the appropriate SL-T vector, see Figure 8. In this risk
assessment only the remaining threats are considered and only for these threats a SL-T is derived.
In general, the security levels (SLs) for zones and conduits are defined generically in EN IEC 62443-3-3:2019,
Annex A, in relation to the threat type against which they are to offer protection:

SL 0 No security protection necessary

SL 1 Protection against casual or coincidental violation

SL 2 Protection against intentional violation using simple means with few resources, generic skills and a low degree of motivation

SL 3 Protection against intentional violation using sophisticated means with moderate resources, IACS-specific skills and a moderate motivation

SL 4 Protection against intentional violation using sophisticated means with extended resources, IACS-specific skills and high motivation
EN IEC 62443-3-3 defines countermeasures according to the SL-T rating and so the SL-T can be
interpreted as a qualitative means of risk reduction in correlation with the needed risk reduction.
NOTE 1 For simplicity the notation SL-T is used. In general SL-T is a vector, see 7.2.6.1.

Figure 9 shows the flowchart for an explicit risk assessment. The relation to EN IEC 62443-3-2 is
annotated by the ZCR.
The basic procedure starts with an initial SL-T value as a starting point, which may be determined based
on experience or the attacker profile. Then initially the corresponding countermeasures are chosen from
EN IEC 62443-3-3 (considering the guidance from Clause 8) and the risk is evaluated. If the risk is not
acceptable, countermeasures are added (with the SL-T adjusted appropriately). This is repeated until the
risk becomes acceptable and the final SL-T is derived, that determines the security requirements.
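The iteration just described can be pictured with the following minimal sketch (Python, illustrative only); the threat objects, the countermeasure selection and the risk evaluation functions are placeholders supplied by the caller, and the acceptance threshold is an assumed value, not one defined by this specification.

```python
from typing import Callable, Dict, Iterable, List, Tuple

MAX_SL = 4
ACCEPTABLE_RISK = 2   # assumed acceptance threshold on a 0-8 scale (cf. 7.2.8), not normative

def derive_sl_t(threats: Iterable,
                initial_sl_t: Dict[str, int],
                select_countermeasures: Callable[[Dict[str, int]], List[str]],
                evaluate_risk: Callable[[object, List[str]], int]) -> Tuple[Dict[str, int], List[str]]:
    """Iterate as in Figure 9: select countermeasures for the current SL-T,
    evaluate the remaining risk per threat, and raise the SL of the affected
    foundational requirements until all risks are acceptable or no further
    technical strengthening is possible."""
    threats = list(threats)                  # each threat is assumed to expose .related_frs
    sl_t = dict(initial_sl_t)
    while True:
        countermeasures = select_countermeasures(sl_t)   # from the EN IEC 62443-3-3 catalogue
        unacceptable = [t for t in threats
                        if evaluate_risk(t, countermeasures) > ACCEPTABLE_RISK]
        if not unacceptable:
            return sl_t, countermeasures                  # final SL-T reached
        raised = dict(sl_t)
        for threat in unacceptable:
            for fr in threat.related_frs:                 # strengthen only the affected FRs
                raised[fr] = min(raised[fr] + 1, MAX_SL)
        if raised == sl_t:
            # no further technical strengthening possible; organizational or physical
            # countermeasures, risk transfer or explicit acceptance are needed instead
            return sl_t, countermeasures
        sl_t = raised
```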
If the first approach to derive SL-T has to be used (see 7.1), then the process described in Figure 9 can
be shortened. Only the steps “Determine (initial) SL-T” (based on the definition above), “Determine
Countermeasures…” and “Evaluate Risk (and Countermeasure Effectiveness)” are applied, but no
iteration is performed.


Figure 9 — Explicit risk evaluation flowchart

NOTE 2 As quantitative evaluation of security risks is infeasible, explicit risk evaluation is usually qualitative or
semiquantitative. This means that risks are categorized into different levels, but nevertheless the risk is explicitly
stated.


7.2.6 Determine initial SL

7.2.6.1 Overview

Cybersecurity requirements are selected according to EN IEC 62443-3-3, where the cybersecurity
requirements are grouped into seven foundational requirements classes (FR):
a) Identification and authentication control (IAC)

b) Use control (UC)

c) System integrity (SI)

d) Data confidentiality (DC)

e) Restricted data flow (RDF)

f) Timely response to events (TRE)

g) Resource availability (RA).

NOTE Normally, only the objectives of integrity, availability and confidentiality are considered in cybersecurity,
and the foundational requirements can be mapped to these objectives. With regard to the cybersecurity objectives
confidentiality, integrity and availability (CIA), the priority in the railway domain lies on availability and integrity.

In IEC 62443 a 5-stage security level is defined for each of these seven groups. The SL values for all
seven basic areas are combined in a vector, called the SL vector.
EXAMPLE For a particular zone (3, 3, 3, 1, 2, 3, 2) could be defined as an SL-T vector. Once this SL-T vector
is defined, EN IEC 62443-3-3 provides the catalogue of generic cybersecurity requirements for the system under
consideration. The final result of explicit risk evaluation should be the derivation of the SL-T vector.

7.2.6.2 Objectives

Determine a starting point for the explicit risk evaluation


7.2.6.3 Activities / Requirement or Recommendation

The minimum cybersecurity requirements SL1 = (1,1,1,1,1,1,1) should be fulfilled and can be used as a
general starting point if no additional information is available.
NOTE The final SL-T does not depend on the starting point, but the effort needed to determine SL-T depends
on the number of iterations. It is recommended to start with a low initial SL rather than one that is too high, to
ensure that adequate requirements are derived by the procedure in Figure 9.

As a starting point, a candidate SL vector for a zone or conduit is chosen. This can be based on the initial
risk assessment, directly on the type of threats assumed, or on particular approaches taking
into account railway-specific parameters such as the location from which the attack can be launched, or
traceability. It is also possible to split between confidentiality, integrity and availability security objectives.
EXAMPLE Assuming that for all integrity aspects a high protection is needed, while availability and
confidentiality need less protection, this might lead to SL = (3,3,3,1,3,3,2).

7.2.6.4 Deliverables

Initial SL-T vector.


7.2.7 Determine countermeasures from EN IEC 62443-3-3

7.2.7.1 Overview

This step is invoked either at the start after determination of initial SL-T or after an unsuccessful iteration
which has shown that the risk was not acceptable.
7.2.7.2 Objectives

To find additional countermeasures (and update SL-T) which are based on the cybersecurity
requirements from EN IEC 62443-3-3.
7.2.7.3 Activities / Requirement or Recommendation

At the start, only the countermeasures based on the initial SL-T are determined. In all
subsequent iterations, it shall be analysed for which threats, and why, the risk was not acceptable. For
these threats additional requirements from EN IEC 62443-3-3 shall be added (considering the guidance
from Clause 8) and the SL-T shall be adapted accordingly.
7.2.7.4 Deliverables

Updated SL-T and countermeasures to be implemented.


7.2.8 Risk estimation and evaluation

7.2.8.1 Overview

NOTE 1 Assessment of the likelihood of a threat manifesting is particularly challenging and differs from traditional
assessment of environmental hazards as there can be little historical evidence to predict a threat and no current
evidence of such a threat developing within a control system. For this reason, many risk assessment methodologies
assume all threats are manifest and assess the impacts rather than likelihood.

Often the exposure of the system to a threat or its vulnerability is evaluated instead of likelihood. This is
sometimes also called attack surface. For a threat to be successful, it is necessary to exploit one or more
vulnerabilities in an asset. It can include items like the type of access needed for the threat (e.g. physical,
local or remote network access) or the type of knowledge or privileges that are necessary to launch a
successful attack. Thus, the zone or conduit shall be analysed in order to identify and document the
known vulnerabilities in the assets contained within the zone or conduit including the access points.
The railway duty holder may choose any particular method for risk evaluation, see Annex E for examples.
However, to be compliant with EN IEC 62443-3-2 a SL-T as a result shall be derived.
Railway duty holders shall define an agreed risk acceptance method, see e.g. 6.3.3. The same method
as for initial risk assessment should be applied to detailed risk assessment.
EXAMPLE 1 A typical risk matrix is known from ISO 27005. For each threat the assessment would lead to the
assignment of a semiquantitative risk score on a scale of 0 to 8. Often also a colour code is used to group the results
into different categories, e.g.

— 0-2 Risk is acceptable

— 3-5 Risk is only acceptable if no additional countermeasures exist or if additional countermeasures are not
proportionate

— 6-8 Risk is not acceptable

EXAMPLE 2 In the risk matrix from 6.3.3 (Table 4) five different text labels are used which may be interpreted as

— Low: Risk is acceptable

— Medium: Risk is only acceptable if additional countermeasures are not proportionate and the risk is explicitly
signed off by the security manager of the railway duty holder


— Significant: Risk is only acceptable if no additional countermeasures exist and the risk is explicitly signed off by
the senior management of the railway duty holder

— High and Very High: Risk is not acceptable

NOTE 2 Annex E and EN IEC 62443-3-2 have more examples.
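A minimal sketch of the semiquantitative scoring described in EXAMPLE 1 is given below (Python, illustrative only), assuming likelihood and impact are each rated on a 0 to 4 index and combined additively; this combination rule is an assumption, not a requirement of this specification.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Semiquantitative score on a 0-8 scale, assuming likelihood and impact
    are each rated on a 0-4 index and combined additively."""
    return likelihood + impact

def risk_category(score: int) -> str:
    # Thresholds follow EXAMPLE 1 above.
    if score <= 2:
        return "acceptable"
    if score <= 5:
        return "acceptable only if no additional proportionate countermeasures exist"
    return "not acceptable"

print(risk_category(risk_score(likelihood=2, impact=4)))   # -> not acceptable
```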

7.2.8.2 Objectives

— evaluation of unmitigated risks

— estimation of countermeasure effectiveness and

— evaluation of reduced risks

7.2.8.3 Activities / requirement or recommendation

— Evaluate unmitigated risk for every threat (including guidance esp. on likelihood estimation
possibilities)

— Identify relevant countermeasures for every threat

— Estimate countermeasure effectiveness for every threat (including guidance on effectiveness criteria)

— Evaluate risk reduction for every threat through countermeasures

— Deal with residual risks (after risk reduction) through avoidance, acceptance or transfer (including an
explanation regarding those options)

Now for all threats that are related to this particular zone or conduit the cybersecurity risk is assessed by
the risk matrix. In this assessment all countermeasures are taken into account, in particular those that
are defined by EN IEC 62443-3-3 for the particular SL, but also organizational and physical
countermeasures.
EXAMPLE

— For threats with a score of 0-2 or low risk no additional measures are necessary.

— For those that have a score between 3 and 5 or a medium risk, additional countermeasures need to be discussed
considering the proportionality principle.

— If there exist threats with a score of 6 to 8 or the risk is at least significant then usually additional countermeasures
need to be defined. As an example, there might be an unacceptable risk related to resource availability (RA) for a
zone where SL-T= (3,3,3,1,3,3,1) was assumed. So an additional countermeasure is necessary to improve RA. If
this countermeasure is present in SL-T 2 for RA, then the SL-T could be lifted to SL-T= (3,3,3,1,3,3,2).

The risk is then re-evaluated until all threats are either acceptable or no additional adequate and
proportionate countermeasures can be derived. The final result for the zone or conduit is its SL-T together
with necessary organizational or physical countermeasures.
7.2.8.4 Deliverables

— Updated likelihood and impact

— Residual cybersecurity risk

— Updated list of countermeasures


7.2.9 Determine security level target

7.2.9.1 Overview

This step is invoked only after an acceptable risk has been evaluated. It is a kind of final documentation
and housekeeping activity including the documentation of exceptions and assumptions.
7.2.9.2 Objectives

To give SL-T and related security requirements.


7.2.9.3 Activities / Requirement or Recommendation

The cybersecurity requirements needed to achieve an acceptable risk shall be compared to the full set of
cybersecurity requirements for the resulting SL-T. Some, usually only a few, of the cybersecurity
requirements from EN IEC 62443-3-3 (i.e. a subset of the foundational requirements, but not the complete
foundational requirement group) related to the SL-T may not be necessary to achieve acceptable risk.
Such requirements need not to be implemented but the exception of particular requirements shall be
documented, to be justified and to be communicated to the user by a particular release notice or security-
related application conditions.
All assumptions made in the detailed risk assessment shall be recorded and traced. If they cannot be
fulfilled by the technical system itself, they need to be exported as security-related application conditions.
Assumptions often relate to the operational environment or the operational staff. Often the following
assumptions can be made in railway applications (which need to be justified by the railway operator):
— A.PhysicalAccess The system components, but in particular the workstations, are situated within
controlled premises, access to which is monitored and denied to unauthorised persons.

— A.Installation Measures are taken to ensure that the technical system is delivered and installed in
a way that does not compromise security.

— A.OperatorsTraining Operators are adequately trained for the tasks assigned to them, to be able
to apply the cybersecurity functions used by them correctly and in compliance with the security policy.

— A.OperatorsTrusted Within the scope of the tasks assigned to them, the operators may be
considered to be trustworthy.

This does not exclude internal threats from the threat risk assessment, e.g. for high-security-risk tasks
there may be additional requirements. Also, different types of operators may be defined.
The aspect of physical access shall be considered especially carefully in the context of rolling stock, as
attackers may disguise as regular passengers.
7.2.9.4 Deliverables

SL-T and related cybersecurity requirements


7.2.10 Cybersecurity requirements specification for zones and conduits

The final step of the detailed risk assessment is to collect the cybersecurity requirements for a zone or
conduit related to all threats or vulnerabilities from the different sources:
— Requirements stated by used codes of practice (for threats covered by this principle)

— Requirements from Cybersecurity Requirements Specifications of applicable reference systems (for threats covered by this principle)

— Requirements from EN IEC 62443-3-3 for the derived SL-T for the remaining threats arising from
explicit risk evaluation


In many cases the requirements may result from a single source, e.g. if a code of practice is directly
related to a zone or conduit or if a reference system is similar to a zone or conduit or if only a SL-T was
derived for a zone or conduit by explicit risk evaluation, but in general requirements from different sources
need to be aligned.
The result is a Cybersecurity Requirements Specification (CRS) for each zone and conduit. It shall be
reviewed against general cybersecurity requirements based upon company or application specific
policies, standards and relevant regulations and shall be approved by the railway duty holder.
As a minimum, the CRS shall include or refer to the following:
— List of detailed security requirements, including SL-T, assumptions and security-related application
conditions

— SuC description (see 6.2)

— Zone or conduit drawings (see 6.4)

— Zone or conduit characteristics (see 6.4)

— Operating environment assumptions (see 6.2 and 7.2.4.4.3)

— Threat environment (see 6.2 and 7.2.1)

— Risk Acceptance (see 6.5.3)

— Regulatory requirements.

EN IEC 62443-3-2 contains more detailed information on the contents of the CRS.
Cybersecurity requirements and security-related application conditions shall be communicated to all the
stakeholders of the SuC, including engineering, RAM, safety team, asset owner, etc.
For requirement tracing, cybersecurity requirements that are necessary to protect essential functions shall
be clearly identified.
CRS for similar zones or conduits may be combined in a single document.
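For illustration, the minimum CRS content listed above could be captured in a structure such as the following hypothetical sketch (Python); the field names are illustrative and not defined by this specification.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CybersecurityRequirementsSpecification:
    """Hypothetical skeleton mirroring the minimum CRS content listed above."""
    zone_or_conduit: str
    detailed_requirements: List[str]             # incl. SL-T, assumptions, security-related application conditions
    suc_description: str                         # see 6.2
    drawings: List[str]                          # zone or conduit drawings (see 6.4)
    characteristics: Dict[str, str]              # zone or conduit characteristics (see 6.4)
    operating_environment_assumptions: List[str]
    threat_environment: List[str]
    risk_acceptance: str
    regulatory_requirements: List[str] = field(default_factory=list)
```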

8 Cybersecurity requirements

8.1 Objectives

The key objective of this Clause is the identification and structuring of security requirements for zones
and conduits of a given SuC in order to provide an acceptable level of protection from all identified threats
and known vulnerabilities.
The security requirements for a SuC are mainly based on the System Security Requirements of the
IEC 62443-3 which are referenced in 8.2 with additional guidance for railway applications.
Further requirements may arise from adoption of codes of practice, implementation of reference designs
or from the detailed risk analysis for the SuC. Other sources of requirements could originate from
assessments, stakeholders or regulations.
The complete requirements for all included zones and conduits shall be documented in the Cybersecurity
Requirements Specification (CRS) of the SuC.
8.2 System security requirements

The normative System Security Requirements as defined in EN IEC 62443-3-3 are structured by the
seven Foundational Requirements classes defined in IEC/TS 62443-1-1.


As explained in 7.2.4, to determine the SL-T of a threatened zone, it is necessary to perform one or more
iterations through the EN IEC 62443-3-3 to find out which requirements shall be listed in the SuC
cybersecurity system requirements (CRS) in order to reduce the residual risk to an acceptable level.
All system security requirements from EN IEC 62443-3-3 are generally applicable to railway applications
according to the security levels (SL-T) of the zones and conduits in the SuC. Sometimes, due to the
peculiarity of the railway application, they shall be adapted based on the guidance in EN IEC 62443-3-3.
The System Security Requirements in seven Foundational Classes are depicted in Table 6.
For legacy railway systems, some guidance is also provided in Annex B.
To assist with performing the iteration and the potential adaptations in railway applications the following
information is embedded in the table:
— Req, SL and Title lists all the EN IEC 62443-3-3 cybersecurity requirements,

— Railway notes informs about the existence of railway-specific considerations as guidance,

— Relevant design principles show the design principles underpinning each requirement,

— Stakeholder and Type offers a classification in terms of principal duty holders and type of content.

Please note that the columns ‘Railway notes’, ‘Relevant design principle’, ‘Stakeholder’ and ‘Type’
are informative.


Table 6 — System Security Requirements and Foundational Classes

FR 1 Identification and authentication control (IAC)

SR 1.1 (SL 1): Human user identification and authentication
  Railway notes: This includes application interfaces such as web server, file transfer protocol (FTP) server, OPC, and remote desktop interfaces that provide network access to human users and that do not securely convey the authenticated IACS user identity to the application during connection. It is acceptable to implement this requirement in combination with other external authentication solutions including physical security measures in railways.
  Relevant design principles: 4 - Grant least privilege; 6 - Authenticate requests; 7 - Control access | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 1.1 RE(1) (SL 2): Unique identification and authentication
  Relevant design principles: 6 - Authenticate requests; 13 - Precautionary principle | Stakeholder: Sys, Sup | Type: Tech

SR 1.1 RE(2) (SL 3): Multifactor authentication for untrusted networks
  Railway notes: The feasible multifactor authentication solutions outside the IT system in railways are generally external and could comprise a badge or a physical recognition of presence for the human user, e.g. by a phone call. This could equally apply to regularly planned maintenance activities.
  Relevant design principles: 6 - Authenticate requests; 12 - Proportionality principle | Stakeholder: Sys, Sup | Type: Tech

SR 1.1 RE(3) (SL 4): Multifactor authentication for all networks
  Railway notes: The feasible multifactor authentication solutions outside the IT system in railways are generally external and could comprise a badge or a physical recognition of presence for the human user, e.g. by a phone call. This could equally apply to regularly planned maintenance activities.
  Relevant design principles: 6 - Authenticate requests; 12 - Proportionality principle | Stakeholder: Op, Sys | Type: Tech

SR 1.2 (SL 2): Identification and authentication of software processes and devices
  Railway notes: Note that in the equivalent requirement IEC 62443-2-1 / IEC 62443-2-4 USER-07 "sw services" are considered instead of "sw processes": USER-07: All software services shall be identified and authenticated prior to their execution. Identification of internal software processes/services and devices is not a common practice in railway applications or railway systems. White list application management supports integrity of the running processes as a workaround for this requirement. This capability should be incorporated by the operating system. For implementation please refer to the Proportionality Security Design Principle.
  Relevant design principles: 4 - Grant least privilege; 6 - Authenticate requests; 7 - Control access | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 1.2 RE(1) (SL 3): Unique identification and authentication of software processes and devices
  Relevant design principles: 6 - Authenticate requests; 12 - Proportionality principle; 13 - Precautionary principle | Stakeholder: Sys, Sup | Type: Tech

SR 1.3 (SL 1): Account management
  Railway notes: Railways have mostly a distributed system supported by simple passwords. Full account management requires a major change in the existing control/command and IT infrastructure. The issue of maintenance and multiple commercial contractors complicates the issue.
  Relevant design principles: 5 - Economize mechanism; 6 - Authenticate requests; 7 - Control access | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 1.3 RE(1) (SL 3): Unified account management
  Railway notes: The operators should install a unique account and user administration system in combination with physical security in fulfilment of this requirement. The decision for the appropriate solution is left to the operators, ideally in a standardized manner across the railway industry.
  Relevant design principles: 6 - Authenticate requests; 9 - Make security usable | Stakeholder: Sys, Sup | Type: Tech

SR 1.4 (SL 1): Identifier management
  Railway notes: An identifier is a construct (username, tag) which is associated closely to a user or subscriber. Authentication is a procedure based on a secret given by an identified user or subscriber which enables its verification, maybe uniquely. There are currently no solutions for railway environments for this requirement as an inherent system solution. Railways have mostly a distributed system supported by simple passwords. Full account management can often only be supported with an external solution in the form of compensating countermeasures. Example of a compensating countermeasure for rolling stock: for some railway control systems, the capability for the driver or the supervisor to quickly interact with such systems is critical. Local emergency actions for the control system should not be hampered by identification requirements. Access to these systems may be restricted by compensating countermeasures. E.g. drivers can have free access to cab control systems, without need of further identification, once they have entered the cabin (with a key) and have inserted their driver licence card. On the contrary, they have to identify themselves on their wireless tablet, even if they are inside the cab, because wireless devices can work outside the restricted area, making the countermeasure useless.
  Relevant design principles: 6 - Authenticate requests; 9 - Make security usable | Stakeholder: Sys, Sup | Type: Tech

SR 1.5 (SL 1): Authenticator management
  Railway notes: In case a unique authentication management system for the railway system is not feasible, an external solution based on compensating countermeasures should be considered. Example of a compensating countermeasure for rolling stock: a possible solution is to delegate the authentication responsibility to the system in charge of identification. E.g. the driver's card reader not only identifies the driver, but also performs the authentication of the card and provides this service for all the subsystems connected to it, which are not able to authenticate the driver but trust the card reader.
  Relevant design principles: 6 - Authenticate requests; 8 - Assume secrets not safe; 9 - Make security usable | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 1.5 RE(1) (SL 3): Hardware security for software process identity credentials
  Railway notes: Credentials used as a base of trust shall be stored with a hardware secure mechanism. If this is not possible for meaningful reasons, the chain of trust should be supported by a logically secure mechanism plus a compensating countermeasure (e.g. monitoring).
  Relevant design principles: 8 - Assume secrets not safe | Stakeholder: Sys, Sup | Type: Tech

SR 1.6 (SL 1): Wireless access management
  Relevant design principles: 1 - Secure the weakest link; 6 - Authenticate requests; 7 - Control access; 12 - Proportionality principle | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 1.6 RE(1) (SL 2): Unique identification and authentication
  Relevant design principles: 1 - Secure the weakest link; 6 - Authenticate requests; 12 - Proportionality principle | Stakeholder: Sys, Sup | Type: Tech

SR 1.7 (SL 1): Strength of password-based authentication
  Railway notes: This will be a big challenge especially for the maintainers. They are used to very simple passwords today and to non-personal (role specific) login. But SR 1.7 is a must because most components cannot be protected against brute force login attempts. In case this requirement cannot be fulfilled, the integrity of authentication information should be supported by external means and compensating countermeasures (e.g. use of personal card readers to control access to restricted areas and functions).
  Relevant design principles: 8 - Assume secrets not safe; 9 - Make security usable; 16 - Secure Defaults | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 1.7 RE(1) (SL 3): Password generation and lifetime restrictions for human users
  Railway notes: Lifetime restriction for human users is not currently consistently implemented in railways. To support this, an external solution with additional system capability can be used, e.g. for authentication control as a compensating countermeasure.
  Relevant design principles: 8 - Assume secrets not safe; 9 - Make security usable | Stakeholder: Sup | Type: Tech

SR 1.7 RE(2) (SL 4): Password lifetime restrictions for all users
  Railway notes: In addition, the railway application should enforce password minimum and maximum lifetime restrictions for human users.
  Relevant design principles: 6 - Authenticate requests; 8 - Assume secrets not safe; 9 - Make security usable | Stakeholder: Op, Sup | Type: Proc, Tech

SR 1.8 (SL 2): Public Key Infrastructure (PKI) Certificate
  Relevant design principles: 8 - Assume secrets not safe; 9 - Make security usable | Stakeholder: Op, Sup | Type: Proc, Tech

SR 1.9 (SL 2): Strength of public key authentication
  Railway notes: In case the given requirements for validation of certificates cannot be supported in the railway system, an external solution (e.g. an offline copy of the CA that is updated every 24 h) should be established as a compensating countermeasure.
  Relevant design principles: 6 - Authenticate requests; 8 - Assume secrets not safe; 9 - Make security usable | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 1.9 RE(1) (SL 3): Hardware security for public key authentication
  Relevant design principles: 8 - Assume secrets not safe; 9 - Make security usable | Stakeholder: Sup | Type: Tech

SR 1.10 (SL 1): Authenticator feedback
  Relevant design principles: 8 - Assume secrets not safe; 9 - Make security usable | Stakeholder: Sys | Type: Tech

SR 1.11 (SL 1): Unsuccessful login attempts
  Railway notes: For mission or safety critical systems that deliver essential railway functions, limitation on login attempts may result in system or function unavailability and adversely affect safety. Implementation of this requirement should be fully cognisant of safety and operational availability implications. For example, a freezing of a system (e.g. safe stop in an ETCS Level 2 system) can lead to an unsafe situation. Thus, the required system reaction shall be defined with respect to safety and availability.
  Relevant design principles: 6 - Authenticate requests; 7 - Control access; 9 - Make security usable | Stakeholder: Op, Sys, Sup | Type: Tech

SR 1.12 (SL 1): System use notification
  Relevant design principles: 6 - Authenticate requests; 9 - Make security usable; 10 - Promote privacy | Stakeholder: Sys, Sup | Type: Tech

SR 1.13 (SL 1): Access via untrusted networks
  Relevant design principles: 11 - Audit and monitor | Stakeholder: Op, Sys, Sup | Type: Proc, Tech

SR 1.13 RE(1) (SL 2): Explicit access request approval
  Relevant design principles: 6 - Authenticate requests; 13 - Precautionary principle | Stakeholder: Op, Sys, Sup | Type: Proc, Tech

FR 2 Use control (UC)

SR 2.1 (SL 1): Authorization enforcement
  Relevant design principles: 4 - Grant least privilege; 6 - Authenticate requests; 7 - Control access | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 2.1 RE(1) (SL 2): Authorization enforcement for all users
  Relevant design principles: 4 - Grant least privilege; 6 - Authenticate requests; 7 - Control access | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 2.1 RE(2) (SL 2): Permission mapping to roles
  Relevant design principles: 4 - Grant least privilege; 6 - Authenticate requests; 7 - Control access | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 2.1 RE(3) (SL 3): Supervisor override
  Railway notes: Railway operators do not generally have a supervisor to oversee the integrity of their actions and decisions. There is always one person fully responsible for a task (e.g. cab driver for his train running). Supervisor override is common in railways in order to manually accept special situations. These are documented in a juridical way.
  Relevant design principles: 4 - Grant least privilege; 6 - Authenticate requests; 7 - Control access | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 2.1 RE(4) (SL 4): Dual approval
  Railway notes: This may conflict with time critical activities/functions. In case of such conflicts, this requirement should be implemented with alternative approaches to establish the chain of trust efficiently. For example, changing a set point in the TCMS that can affect the computation of the speed of the train should require a dual approval; bypassing the ETCS control of the speed of the train may require a pre-defined sequence of actions, carefully chosen to minimize the risk of accidental execution, performed by the driver in case of danger.
  Relevant design principles: 4 - Grant least privilege; 6 - Authenticate requests; 7 - Control access | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 2.2 (SL 1): Wireless use control
  Relevant design principles: 5 - Economize mechanism; 6 - Authenticate requests; 7 - Control access | Stakeholder: Op, Sys, Sup | Type: Tech

SR 2.2 RE(1) (SL 3): Identify and report unauthorized wireless devices

SR 2.3 (SL 1): Use control for portable and mobile devices
  Railway notes: Portable and mobile devices are already widely used in railway infrastructure, e.g. for diagnostic purposes but also for safety relevant purposes such as shunting / track works. A secure management of mobile access is crucial in the safety critical railway environment.
  Relevant design principles: 6 - Authenticate requests; 7 - Control access; 16 - Secure Defaults | Stakeholder: Op, Sys | Type: Tech, Proc

SR 2.3 RE(1) (SL 3): Enforcement of security status of portable and mobile devices

SR 2.4 (SL 1): Mobile code
  Relevant design principles: 6 - Authenticate requests; 7 - Control access; 16 - Secure Defaults | Stakeholder: Sys, Sup | Type: Tech

SR 2.4 RE(1) (SL 3): Mobile code integrity check

SR 2.5 (SL 1): Session lock
  Railway notes: In view of the safety critical nature of the railway environment, session locks should be carefully applied so as not to interact adversely with system availability and access to essential functions. For example, the user screen could be locked upon user request or after a configured period of inactivity and require re-authentication if an authorized user wants to unlock it.
  Relevant design principles: 5 - Economize mechanism; 7 - Control access | Stakeholder: Op, Sys, Sup | Type: Tech

SR 2.6 (SL 2): Remote session termination
  Relevant design principles: 5 - Economize mechanism; 7 - Control access | Stakeholder: Op, Sys, Sup | Type: Tech

SR 2.7 (SL 3): Concurrent session control
  Relevant design principles: 5 - Economize mechanism; 7 - Control access | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 2.8 (SL 1): Auditable events
  Relevant design principles: 11 - Audit and monitor | Stakeholder: Sys, Sup | Type: Tech

SR 2.8 RE(1) (SL 3): Centrally managed, system-wide audit trail
  Railway notes: Components and sub-systems that log events locally should ensure the monitoring and logging information are transferred to a centrally managed system. There may be a time delay between the local logging of data and the transfer to the central system.
  Relevant design principles: 11 - Audit and monitor | Stakeholder: Sys, Sup | Type: Tech

SR 2.9 (SL 1): Audit storage capacity
  Relevant design principles: 11 - Audit and monitor | Stakeholder: Sys, Sup | Type: Tech

SR 2.9 RE(1) (SL 3): Warn when audit record storage capacity threshold reached
  Relevant design principles: 11 - Audit and monitor | Stakeholder: Sys, Sup | Type: Tech

SR 2.10 (SL 1): Response to audit processing failures
  Relevant design principles: 11 - Audit and monitor | Stakeholder: Op, Sys, Sup | Type: Tech

SR 2.11 (SL 2): Timestamps
  Relevant design principles: 11 - Audit and monitor | Stakeholder: Sys, Sup | Type: Tech

SR 2.11 RE(1) (SL 3): Internal time synchronization
  Relevant design principles: 11 - Audit and monitor | Stakeholder: Sys, Sup | Type: Tech

SR 2.11 RE(2) (SL 4): Protection of time source integrity
  Relevant design principles: 11 - Audit and monitor | Stakeholder: Sys, Sup | Type: Tech

SR 2.12 (SL 3): Non-repudiation for human users
  Relevant design principles: 6 - Authenticate requests | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 2.12 RE(1) (SL 4): Non-repudiation for all users
  Relevant design principles: 6 - Authenticate requests | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

FR 3 System integrity (SI)

SR 3.1 (SL 1): Communication integrity
  Relevant design principles: 6 - Authenticate requests; 14 - Continuous protection | Stakeholder: Sys, Sup | Type: Tech

SR 3.1 RE(1) (SL 3): Cryptographic integrity protection
  Railway notes: The cryptographic mechanisms employed should be secure. Rolling stock and restricted network segments with an adequate boundary, intrusion detection and diverse transmission channel (e.g. use of Ethernet and MVB) can use these as compensating countermeasures. Open untrusted networks without compensating countermeasures require this extra protection in railways.
  Stakeholder: Sys, Sup

SR 3.2 (SL 1): Malicious code protection
  Railway notes: Prevention means, i.e. mechanisms such as removable media control and workstation and laptop management policies, used in conjunction with means of detection at railway system entry points (e.g. USB cleaning stations, IDS, etc.), may be preferred as a compensating security measure over detection mechanisms deployed on all railway embedded devices. Use of USB ports should be strictly limited. When no other solution is available, USB controller and OS driver hardening should be employed to prevent execution of code from USB devices. A secure boot mechanism and a white list application management at operating system/firmware and application layers is required to ensure only authorized software is permitted and executed.
  Relevant design principles: 1 - Secure the weakest link; 2 - Defence-in-depth; 11 - Audit and monitor; 14 - Continuous Protection; 17 - Trusted Components | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 3.2 RE(1) (SL 2): Malicious code protection on entry and exit points
  Railway notes: This RE should be managed at SL1 level to be consistent with SR 3.2 in railways.
  Relevant design principles: 7 - Control Access; 12 - Proportionality principle | Stakeholder: Op, Sys, Sup | Type: Tech, Proc, Env

SR 3.2 RE(2) (SL 3): Central management and reporting for malicious code protection
  Railway notes: Malware and malicious code protection should be centrally managed for integrity and consistency in railways. SIEM protection is largely a dynamic anomaly detection/protection mechanism and may prove inadequate for malicious code protection.
  Relevant design principles: 9 - Make security usable | Stakeholder: Op, Sys | Type: Tech, Proc

SR 3.3 (SL 1): Security functionality verification
  Relevant design principles: 3 - Fail secure; 11 - Audit and monitor | Stakeholder: Sys, Sup | Type: Tech

SR 3.3 RE(1) (SL 3): Automated mechanisms for security functionality verification
  Relevant design principles: 9 - Make security usable | Stakeholder: Sys, Sup | Type: Tech, Proc

SR 3.3 RE(2) (SL 4): Security functionality verification during normal operation
  Railway notes: This RE needs to be carefully implemented to avoid detrimental effects. It may not be suitable for safety systems.
  Relevant design principles: 9 - Make security usable | Stakeholder: Sys | Type: Tech, Proc

SR 3.4 (SL 2): Software and information integrity
  Relevant design principles: 8 - Assume secrets not safe; 11 - Audit and monitor; 13 - Precautionary principle | Stakeholder: Sys, Sup | Type: Tech

SR 3.4 RE(1) (SL 3): Automated notification about integrity violations
  Relevant design principles: 9 - Make security usable; 14 - Continuous Protection | Stakeholder: Sup | Type: Tech

SR 3.5 (SL 1): Input validation
  Relevant design principles: 2 - Defence-in-depth; 7 - Control Access; 14 - Continuous Protection | Stakeholder: Sup | Type: Tech

SR 3.6 (SL 1): Deterministic output
  Relevant design principles: 3 - Fail secure; 12 - Proportionality principle; 13 - Precautionary principle | Stakeholder: Sys, Sup | Type: Tech, Proc

SR 3.7 (SL 2): Error handling
  Relevant design principles: 9 - Make security usable; 10 - Promote privacy; 11 - Audit and monitor | Stakeholder: Sys, Sup | Type: Tech, Proc

SR 3.8 (SL 2): Session integrity
  Relevant design principles: 6 - Authenticate requests; 7 - Control Access | Stakeholder: Sup | Type: Tech

SR 3.8 RE(1) (SL 3): Invalidation of session IDs after session termination
  Relevant design principles: 9 - Make security usable | Stakeholder: Sup, Op | Type: Tech

SR 3.8 RE(2) (SL 3): Unique session ID generation
  Stakeholder: Sup | Type: Tech

SR 3.8 RE(3) (SL 4): Randomness of session IDs
  Stakeholder: Sup | Type: Tech

SR 3.9 (SL 2): Protection of audit information
  Relevant design principles: 4 - Grant least privilege | Stakeholder: Sys, Sup | Type: Tech

SR 3.9 RE(1) (SL 4): Audit records on write-once media
  Relevant design principles: 13 - Precautionary principle | Stakeholder: Sup | Type: Tech

FR 4 Data confidentiality (DC)

SR 4.1 (SL 1): Information confidentiality
  Relevant design principles: 4 - Grant least privilege; 8 - Assume secrets not safe; 15 - Secure Metadata Management; 16 - Secure Defaults | Stakeholder: Sup, Sys | Type: Tech

SR 4.1 RE(1) (SL 2): Protection of confidentiality at rest or in transit via untrusted networks
  Relevant design principles: 4 - Grant least privilege; 6 - Authenticate requests; 7 - Control access; 8 - Assume secrets not safe; 10 - Promote privacy | Stakeholder: Sys, Op | Type: Tech, Proc

SR 4.1 RE(2) (SL 4): Protection of confidentiality across zone boundaries
  Relevant design principles: 4 - Grant least privilege; 6 - Authenticate requests; 7 - Control access; 8 - Assume secrets not safe; 10 - Promote privacy | Stakeholder: Sup, Sys | Type: Tech, Proc

SR 4.2 (SL 2): Information persistence
  Railway notes: Current railway applications do not implement extensive permission management, so implementation of this requirement for information purging in components and systems is challenging. Read authorization should not be the only criterion for data/information criticality that qualifies for purging.
  Relevant design principles: 7 - Control access; 15 - Secure Metadata Management; 16 - Secure Defaults | Stakeholder: Sup, Sys, Op | Type: Tech, Proc

SR 4.2 RE(1) (SL 3): Purging of shared memory resources
  Relevant design principles: 8 - Assume secrets not safe | Stakeholder: Op

SR 4.3 (SL 1): Use of cryptography
  Railway notes: The railway application product supplier should document the practices and procedures relating to cryptographic key establishment and management. The railway application should utilize established and tested encryption and hash algorithms, such as the advanced encryption standards.
  Relevant design principles: 2 - Defend in depth; 6 - Authenticate requests; 8 - Assume secrets not safe | Stakeholder: Sup, Sys | Type: Tech, Proc

FR 5 Restricted data flow (RDF)

SR 5.1 (SL 1): Network segmentation
  Railway notes: In response to an incident, it may be necessary to break the connections between different network segments. In that event, the services necessary to support essential operations should be maintained in such a way that the devices can continue to operate properly and/or shut down in an orderly manner. This may require that some servers be duplicated on the control system network to support normal network features, e.g. dynamic host configuration protocol (DHCP), domain name service (DNS) or local CAs. It may also mean that some critical control systems and safety-related systems be designed from the beginning to be completely isolated from other networks.
  Relevant design principles: 1 - Secure the weakest link; 2 - Defend in depth | Stakeholder: Sys, Op | Type: Tech

SR 5.1 RE(1) (SL 2): Physical network segmentation
  Railway notes: Independence from non-control networks is required at SR 5.1 RE(1). In case physical segregation is technically not feasible, or would even increase cybersecurity risks, a logical segregation concept is explicitly acceptable if the following associated SRs [SR 1.2, SR 1.8, SR 1.9, SR 3.1/SR 3.1 RE 1, SR 3.7, SR 4.1/SR 4.1 RE 1, SR 6.2, SR 1.5 RE 1] are fulfilled.
  Relevant design principles: 1 - Secure the weakest link; 2 - Defend in depth | Stakeholder: Sys, Op | Type: Tech

SR 5.1 RE(2) (SL 3): Independence from non-railway application networks
  Relevant design principles: 1 - Secure the weakest link; 2 - Defend in depth | Stakeholder: Sup, Sys | Type: Tech

SR 5.1 RE(3) (SL 4): Logical and physical isolation of critical networks
  Railway notes: The criticality of a railway application is determined by the risk assessment, and that should influence the logical and physical isolation. The usage of segmentation methods like different fibres or colours for fibre-optic cables, or the usage of cryptographic measures like those mentioned in EN 50159, are ways to implement this requirement in railway applications.
  Relevant design principles: 1 - Secure the weakest link; 2 - Defend in depth | Stakeholder: Sup, Sys | Type: Tech

SR 5.2 (SL 1): Zone boundary protection
  Relevant design principles: 1 - Secure the weakest link; 2 - Defend in depth | Stakeholder: Op, Sup, Sys | Type: Tech

SR 5.2 RE(1) (SL 2): Deny by default, allow by exception
  Relevant design principles: 7 - Control Access | Stakeholder: Sup, Sys | Type: Tech

SR 5.2 RE(2) (SL 3): Island mode
  Type: Tech

SR 5.2 RE(3) (SL 3): Fail close
  Railway notes: Railway safety architectures do not generally permit such behaviours on networks. All essential functions should continue, and non-essential functions be stopped, in the event of boundary protection violations.
  Type: Tech

SR 5.3 (SL 1): General purpose person-to-person communication restrictions
  Relevant design principles: 2 - Defend in depth | Stakeholder: Sys | Type: Tech

SR 5.3 RE(1) (SL 3): Prohibit all general-purpose person-to-person communications
  Stakeholder: Sys | Type: Tech, Tools

SR 5.4 (SL 1): Application partitioning
  Type: Tech

FR 6 Timely response to events (TRE)

SR 6.1 (SL 1): Audit log accessibility
  Relevant design principles: 4 - Grant least privilege; 6 - Authenticate requests; 7 - Control access; 9 - Make security usable; 11 - Audit and monitor; 14 - Continuous Protection | Stakeholder: Sys, Op | Type: Tech, Proc

SR 6.1 RE(1) (SL 3): Programmatic access to audit logs
  Relevant design principles: 6 - Authenticate requests; 7 - Control access; 9 - Make security usable; 11 - Audit and monitor | Stakeholder: Sys, Op | Type: Tech, Proc

SR 6.2 (SL 2): Continuous monitoring
  Relevant design principles: 6 - Authenticate requests; 7 - Control access; 11 - Audit and monitor; 14 - Continuous Protection | Stakeholder: Sys, Op | Type: Tech, Proc, Tools

FR 7 Resource availability (RA)

SR 7.1 (SL 1): Denial of service protection
  Relevant design principles: 2 - Defend in depth; 7 - Control access | Stakeholder: Sys, Sup | Type: Tech, Proc

SR 7.1 RE(1) (SL 2): Manage communication loads
  Relevant design principles: 2 - Defend in depth; 7 - Control access | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 7.1 RE(2) (SL 3): Limit DoS effects to other systems or networks
  Relevant design principles: 2 - Defend in depth; 7 - Control access | Stakeholder: Sys, Sup | Type: Tech, Proc

SR 7.2 (SL 1): Resource management
  Railway notes: Watchdog based time allocation and scheduling is prevalent in the safety critical railway environment and applications. This is largely applied at monitoring rather than control level.
  Relevant design principles: 2 - Defend in depth; 4 - Grant least privilege; 6 - Authenticate requests; 7 - Control access | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 7.3 (SL 1): Control system backup
  Railway notes: Configuration management based on baselines is employed in most railway products. The identity and location of critical files should be known at application level. The ability to conduct backups, specifically of the critical information and files, should be supported by the railway application.
  Relevant design principles: 3 - Fail secure; 8 - Assume secrets not safe; 14 - Continuous Protection; 15 - Secure Metadata Management; 16 - Secure Defaults | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 7.3 RE(1) (SL 2): Backup verification
  Relevant design principles: 3 - Fail secure; 8 - Assume secrets not safe; 14 - Continuous Protection; 15 - Secure Metadata Management; 16 - Secure Defaults | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 7.3 RE(2) (SL 3): Backup automation
  Railway notes: Automate backup based on a configurable frequency.
  Relevant design principles: 8 - Assume secrets not safe; 15 - Secure Metadata Management; 16 - Secure Defaults | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 7.4 (SL 1): Control system recovery and reconstitution
  Railway notes: In view of the safety critical nature, railways have strict policies on recovery and reconstitution to ensure a safe state in addition to a secure state. A threat risk assessment shall be carried out in order to not breach safety mechanisms for vital data in case of system recovery from backed up information.
  Relevant design principles: 3 - Fail secure; 8 - Assume secrets not safe; 14 - Continuous Protection; 15 - Secure Metadata Management; 16 - Secure Defaults | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 7.5 (SL 1): Emergency power
  Relevant design principles: 14 - Continuous Protection | Stakeholder: Op | Type: Tech

SR 7.6 (SL 1): Network and security configuration settings
  Relevant design principles: 3 - Fail secure; 8 - Assume secrets not safe; 14 - Continuous Protection; 15 - Secure Metadata Management; 16 - Secure Defaults | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 7.6 RE(1) (SL 3): Machine-readable reporting of current security settings
  Relevant design principles: 3 - Fail secure; 8 - Assume secrets not safe; 14 - Continuous Protection; 15 - Secure Metadata Management; 16 - Secure Defaults | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 7.7 (SL 1): Least functionality
  Relevant design principles: 1 - Secure the weakest link; 4 - Grant least privilege; 16 - Secure defaults | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

SR 7.8 (SL 2): Control system component inventory
  Relevant design principles: 3 - Fail secure; 13 - Precautionary principle | Stakeholder: Op, Sys, Sup | Type: Tech, Proc

8.3 Apportionment of cybersecurity requirements

8.3.1 Objectives

The approach to the allocation of cybersecurity requirements to Subsystem level or Component level
followed by further refinement of requirements is addressed in this Clause.


Additionally, the issue of compensating countermeasures is discussed in the context of cybersecurity
requirements, where these are needed to complete a cybersecurity requirement or to substitute one for
technical reasons or limitations.
The railway domain comprises a conglomerate of network-based systems and components stretching
across signalling, rolling stock, fixed installations and additional services. These are also connected
in a horizontal and vertical manner by a myriad of diverse interfaces.
Because of this existing legacy railway information and communications ecosystem, a clear allocation of
cybersecurity requirements and the implemented countermeasures are essential to ensure completeness
and robustness when it comes to system cybersecurity.
Cybersecurity is by nature a new area of concern and an additional property or service next to the
functional and safety requirements of a system or component. This implies that the primary focus is always
on safety and operational functions, supported by cybersecurity.
Finally, the benefit of a clear requirement breakdown structure as part of verification of the requirements
and supporting evidence for completeness will be demonstrated.
8.3.2 Break down of system requirements to subsystem level

The high-level requirements structured by the foundational requirements detailed in tables in 8.2 have
specific applicability according to their characteristic, e.g.:
— Network requirements (address schema, network bandwidth, access resolution, etc.)

— Host requirements (allocation of computer resources, call stack)

— Application requirement (allocated on different device types like mobile/embedded/networks/cloud)

— Interface properties (robustness, parameter range checks, buffer principles)

— Additional security function integrated (to enforce rule and policies).

The cybersecurity requirements shall be refined from foundational requirements and allocated to
subsystems and further refined and allocated to components and devices. Traceability of requirements
should be ensured. A top-down approach together with a bottom-up verification of requirements
(integrating approach) is recommended for completeness.
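A minimal, hypothetical sketch of such a top-down allocation with a bottom-up completeness check follows (Python); the requirement identifiers are invented for illustration and carry no normative meaning.

```python
from typing import Dict, List

# Hypothetical allocation tables: system requirement -> subsystem requirements -> component requirements.
system_to_subsystem: Dict[str, List[str]] = {"SR 3.1": ["SUB-NET-01", "SUB-HOST-02"]}
subsystem_to_component: Dict[str, List[str]] = {"SUB-NET-01": ["CR-GW-07"], "SUB-HOST-02": []}

def unallocated() -> List[str]:
    """Bottom-up completeness check: report subsystem requirements that were
    refined top-down but never allocated to any component."""
    missing = []
    for sys_req, sub_reqs in system_to_subsystem.items():
        for sub_req in sub_reqs:
            if not subsystem_to_component.get(sub_req):
                missing.append(f"{sys_req} -> {sub_req}")
    return missing

print(unallocated())   # -> ['SR 3.1 -> SUB-HOST-02']
```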
8.3.3 System requirement allocation at component level

The derivation of component requirements from zone requirements is not direct. Component
requirements deriving from the zone SL-Ts and system requirements are allocated so that the zone meets
the SL requirement for protecting its access points (internal components with no access point may not
inherit all the CRs).
In IEC 62443 this is reflected in different parts of the standard:
— IEC 62443-3-3 for System level

— EN IEC 62443-4-2 for Component level.

This subclause gives guidance on how system requirement allocation at component level can be handled.
At the textual requirement level, all Component Requirements reflect the Foundational Requirement structure
from IEC 62443-3-3, but where needed those requirements are allocated to the different device types to
address the different characteristics of the device technology or platform adequately.
For example, the security requirements for the management of mobile code derived from SR 2.4 are associated
with CR 2.4 and refined in EN IEC 62443-4-2:2019, Clauses 12 to 15, for their device-specific
implementation:


— SAR 2.4 Software Application Requirement


— EDR 2.4 Embedded Device Requirement
— HDR 2.4 Host Device Requirement
— NDR 2.4 Network Device Requirement.
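As an illustration of the mapping just listed, the following hypothetical sketch (Python) derives the device-specific requirement identifier from a system requirement identifier; the actual applicability of each requirement is defined by EN IEC 62443-4-2, not by this sketch.

```python
# Device-type requirement prefixes of EN IEC 62443-4-2 (see the list above).
DEVICE_PREFIXES = {
    "software_application": "SAR",
    "embedded_device": "EDR",
    "host_device": "HDR",
    "network_device": "NDR",
}

def component_requirement(sr_id: str, device_type: str) -> str:
    """Map e.g. 'SR 2.4' to its device-specific counterpart, e.g. 'EDR 2.4'.
    Illustrative only; applicability is defined by EN IEC 62443-4-2 itself."""
    number = sr_id.split(" ", 1)[1]
    return f"{DEVICE_PREFIXES[device_type]} {number}"

print(component_requirement("SR 2.4", "embedded_device"))   # -> EDR 2.4
```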

For traceability and completeness of the implementation of cybersecurity system requirements, a clear top-down perspective from high-level system requirements to component and implementation level shall be adopted. Traceability of requirements through the different levels of refinement is a prerequisite for verification activities.
When apportioning system and component requirements, the architecture of the security framework within a zone, which is defined as an outcome of the risk assessments (initial and detailed), shall be considered. The harmonization of the dedicated security design within the zone and the functionality itself shall also be addressed.
The functions hosted within a zone shall in no case be compromised by the additional needs of the security functions implemented according to the principle of the 7 Foundational Requirements within that zone.
For example, a complex functionality should not be distributed between different zones. This should only be considered if clearly segregated sub-functionalities with loose coupling exist for such a function, so that the complexity of the design can be controlled adequately.
Network-related cybersecurity system requirements may additionally be implemented and allocated for zone protection:
a) dedicated gateways for the security of the zone, controlling the communication load in a bidirectional way;

b) monitoring and logging capabilities to support anomaly detection, which can be hosted on a centralized server with a system for incident and event detection;

c) support of a unique system time for logging, to make the zone monitoring consistent from a time perspective.

8.3.4 Specific considerations for the implementation of cybersecurity requirements on components

Performance and segregation aspects shall be considered at host level, especially when it comes to the integration of security functions in parallel with operational functions.
A clear segregation using virtual machine or logical container mechanisms may be needed at the host level, ensuring adequate management of shared computing resources such as memory and time.
Partitioning is also a helpful mechanism for host security redundancy, offering the opportunity for upload or patching mechanisms, as are shared-memory mechanisms based on MMI, as well as the obscuring of memory content with morphologic mechanisms on the memory and data layout to hide the operational context between different applications on the host.
Inter-process and inter-partition links sharing information based on global variables are critical from a functional context perspective (integrity), as well as from a cybersecurity perspective.
8.3.5 Requirement breakdown structure as verification

In order to facilitate test cases, which are specified at different levels of requirements for verification purposes (see Clause 9), it is essential that the cybersecurity requirements are verifiable and traceable, especially in the railway domain, where the majority of the functionality relates to distributed components. To this aim, complete and correct traceability of the cybersecurity requirements shall be ensured.


8.3.6 Compensating countermeasures

Compensating countermeasures are required in cases where the security level inherently provided by a specific zone or component does not match the SL-T. This inherent security level, the Capability SL (SL-C), states which security level can be provided natively, without compensating countermeasures, when properly configured and integrated. If an SL-C is found not to meet the allocated SL-T for a cybersecurity requirement in a specific zone or on component level, the workflow according to Figure 10 applies. Compensating countermeasures usually lead to security-related application conditions (SecRAC), which are described in Clause 5.
The need for compensating countermeasures may arise due to technical limitations (e.g. contradictory requirements from system engineering with higher priority) or resource limitations. Compensating countermeasures should be seen in relation to cybersecurity requirements and are therefore traceable to them.

Figure 10 — Handling of SL-C
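As an informal illustration of the workflow shown in Figure 10, the following sketch (in Python; the requirement identifiers and SL values are assumptions for illustration only) shows the comparison of SL-C against SL-T per cybersecurity requirement:

# Illustrative decision logic per requirement of a zone or component.
def handle_requirement(req_id, sl_t, sl_c):
    if sl_c >= sl_t:
        # Capability is sufficient when properly configured and integrated.
        return (req_id, "requirement met natively")
    # SL-C below SL-T: a compensating countermeasure is needed; its
    # effectiveness is shown by risk evaluation and it usually leads to a
    # security-related application condition (SecRAC).
    return (req_id, "compensating countermeasure and SecRAC required")

for req_id, sl_t, sl_c in [("SR 1.1", 2, 2), ("SR 2.4", 3, 1)]:
    print(handle_requirement(req_id, sl_t, sl_c))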

Illustrative examples are given below:


EXAMPLE 1 For remote access to a technical system, two-factor authentication is demanded at a workstation, e.g. an ID card with PKI plus a password. However, the product used does not support this, e.g. only authentication by password at the workstation is possible. The solution here could be that, instead of a technical implementation of the security requirement, it is implemented by an additional physical or organizational countermeasure. Its effectiveness against the corresponding threats is shown by risk evaluation.


EXAMPLE 2 An additional countermeasure is restricting access to the room where the workstation is placed. This can be performed either by organizational measures, e.g. a door guard checking the IDs of employees, or by an additional physical and technical measure, e.g. the workstation is put into a separate room with a locked door. This can be either a restrictive management of the physical keys to the room or an additional access protection at the door, e.g. access only for particular ID cards or other factors such as fingerprints, retina scanners, etc. The effectiveness is shown for the particular implementation when it results in the same risk as if the original security requirement had been implemented.

Specific guidance on the design of compensating countermeasures is given in EN IEC 62443-3-3.


However, additional guidance is given based on the cybersecurity design principles (Annex C) and the management of the compensating countermeasures (see the proportionality and precautionary principles):
— The effectiveness of the selected compensating countermeasures shall be ensured through implementation of an adequate cybersecurity management system in which the compensating countermeasures are embedded.

— The selection of compensating countermeasures and their cybersecurity strength shall ensure that any breach or loss of the countermeasures is revealed or follows the defence in depth approach.

NOTE Further compensating countermeasures are described also in Annex B.

9 Cybersecurity assurance and system acceptance for operation

9.1 Overview

Cybersecurity assurance activities are all the measures taken to ensure the confidence in the effective
implementation of cybersecurity requirements of the SuC against the selected threat landscape. It
consists of three types of activities:
— Cybersecurity verification: Continuously applied during all phases of the railway application lifecycle.

— Cybersecurity validation: Performed during phase 4 “Specification of system requirements” by the asset owner and phase 9 “System validation” by the System Integrator.

— Cybersecurity system acceptance: Achieved by a successful SuC Handover.

Figure 11 — Cybersecurity assurance

Cybersecurity assurance activities are performed throughout the various lifecycle phases and include
verification as well as validation. The overall objective is to obtain system acceptance by the railway
operator by successfully completing the SuC Handover as shown in Figure 11.
This Clause contains the following activities on system level to achieve acceptance for operation of the SuC:
a) Cybersecurity verification (see 9.3)

Applying assurance techniques to evaluate the correctness and completeness of cybersecurity requirements implemented at each phase of the lifecycle.
b) Cybersecurity validation (see 9.4)


Validation of the Cybersecurity Requirements Specification of the SuC for its intended use and its
operational configuration (SL-A = SL-T).
c) Cybersecurity system acceptance and handover (see 9.5)

Determines SuC compliance and if it is ready for operation. Focusses on the handover from System
Integrator to asset owner.
For these activities, the concept of a cybersecurity case is introduced and explained in 9.2.
An overview of the assurance tasks is given in Figure 11 and further explained in the following
subclauses.
9.2 Cybersecurity case

The cybersecurity case (see Figure 12) contains all assurance evidence (documented information) of the
verification and validation activities for the SuC and addresses any remaining open issues by security-
related application conditions.
The cybersecurity case provides the evidence and argumentation that the system as designed and developed can be operated at the expected security confidence level, i.e. that the cybersecurity objectives identified in the threat risk assessment and resulting in the Cybersecurity Requirements Specification (e.g. SL-T) are satisfied.

Figure 12 — Cybersecurity case concept

With the cybersecurity case concept, distinct lifecycle phases are documented. The cybersecurity case
is provided by the System Integrator and is updated once the SuC has been validated for its intended
use. If proven successful, the railway operator accepts the updated cybersecurity case during the SuC
Handover.
The cybersecurity case of the railway operator can refer to several cybersecurity cases from different
System Integrators. During the operation, the railway operator shall demonstrate that all security-related
application conditions are fulfilled, e.g. respect of SecRACs applicable during maintenance activities,
integration of different SuC provided by different System Integrators.
While operational, the update of the cybersecurity case provided by the System Integrator is performed according to the contractual agreement between the asset owner and the System Integrator.
The cybersecurity case consists in general of:
— The result of the threat risk assessment and the Cybersecurity Requirements Specification (CRS) as defined by Clause 7, describing the zones and conduits and their SL-T.

— The cybersecurity assurance evidence provided as an outcome of performing assurance methods (e.g. penetration testing). Assurance evidence has the objective to demonstrate fulfilment of the security countermeasures defined by EN IEC 62443-3-3 and/or compensating countermeasures, depending on the SL-T required.


NOTE 1 IEC 62443-2-4 and 4-1 (Practice 5) give guidance on assurance methods.

NOTE 2 Independence requirements for assurance methods are defined in EN IEC 62443-4-1.

— The security-related application conditions (SecRAC) required to be fulfilled. This approach ensures
an important synchronisation point with the lifecycle according to EN 50126-1 (see Table 1).

NOTE 3 Ideally all security-related application conditions are handled within the security domain. For
patches claiming not to impact safety requirements or functions an impact analysis supports the argumentation,
but this exclusion of safety impact is not always possible.

NOTE 4 If EN 50129 is applicable to the SuC, further information on the integration of IT-Security and safety
is described in the Technical Safety Report.

An example of the structure of a cybersecurity case is shown in Annex G.


The cybersecurity case is a living document and updated (whenever necessary) during the entire lifecycle
of the SuC.
NOTE 5 A cybersecurity case of a SuC may also refer to cybersecurity cases of products/components, as long as a holistic view of the SuC is considered (e.g. the attack surface of a SuC is likely to be larger than the sum of the individual attack surfaces of the components it is made of).

9.3 Cybersecurity verification

9.3.1 General

This subclause describes the cybersecurity verification activities during the lifecycle phases of the SuC. The cybersecurity verification task is performed within each lifecycle phase; it supports and provides input to security validation (see 9.4).
The objective of cybersecurity verification is to demonstrate that the cybersecurity requirements of each lifecycle phase have been fulfilled.
In each lifecycle phase, the cybersecurity verification task shall deal with:
a) correctness and adequacy of security risk assessment, where specified;

b) compliance of the cybersecurity deliverables of the phase with the cybersecurity deliverables of
former phases;

c) adequacy of specified methods, tools and techniques used within the lifecycle phase, where
specified;

d) correctness, consistency and adequacy of test specifications and executed test, as appropriate.

Errors or deficiencies found may require the re-application of some or all activities of one or more previous
lifecycle phases.
9.3.2 Cybersecurity integration and verification

When the system has been integrated, it shall be evaluated to judge whether the security requirements
and the security-related application conditions are fulfilled.
The integrated system (SuC) is subject to inspection and test procedures that verify the integration of the security requirements as well as the compensating countermeasures (if present). Based on the inspection and testing results, the SL-A (for zones and conduits) is determined and compared to the respective SL-T according to the CRS. If the SL-A is lower than the SL-T, then either compensating countermeasures are determined and implemented or the SuC returns to the design phase to address the issues. If the SL-A is equal to or greater than the SL-T, the cybersecurity case can be updated accordingly.
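The comparison of SL-A with SL-T can be illustrated by the following sketch (in Python; the zone name and the SL vectors per foundational requirement FR 1 to FR 7 are assumptions for illustration only):

# Illustrative comparison of achieved and target security levels per zone.
SL_T = {"onboard-control": (3, 3, 2, 2, 2, 2, 3)}   # target, from the CRS
SL_A = {"onboard-control": (3, 3, 2, 1, 2, 2, 3)}   # achieved, from inspection and testing

def verify_zone(zone):
    gaps = [f"FR{i + 1}" for i, (a, t) in enumerate(zip(SL_A[zone], SL_T[zone])) if a < t]
    if gaps:
        return f"{zone}: SL-A below SL-T for {gaps} -> compensating countermeasures or re-design"
    return f"{zone}: SL-A meets SL-T -> update the cybersecurity case"

print(verify_zone("onboard-control"))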


Figure 13 gives an overview of the verification process:

Figure 13 — Cybersecurity assurance during integration and validation activities

Objective:
— Confirmation that cybersecurity countermeasures have been implemented correctly based on the CRS and the security-related application conditions.

Inputs:
— Cybersecurity Requirements Specification (CRS).

— Cybersecurity Assurance Procedures (e.g. inspection and test procedures).

Output:
— Cybersecurity case (provided by System Integrator).

Activities:
It is highly recommended to integrate both system functions and cybersecurity functions in a stepwise, systematic approach with a dedicated test scheme for both. The definition and acceptance of the test coverage is the duty of the asset owner.


The following security test types are important methods in evaluating the cybersecurity performance and compliance of the SuC. They shall be performed with a coverage and depth that is argued based on the targeted security level (SL-T):
— Security requirements testing

Verification of implemented Cybersecurity Requirements Specification (CRS).

— Threat mitigation testing

Verification of countermeasures implemented to mitigate threats and testing the resilience of these
implemented countermeasures.

— Vulnerability Testing

Vulnerability scanning identifies systems and system attributes and attempts to detect known
vulnerabilities.

— Penetration testing

Penetration testing is security testing in which penetration testers mimic real-world attacks to identify
methods for circumventing the security features of a system or network. Penetration testing can be
seen as the final step within a series of tests for cybersecurity similar to the system test on a railway
system. The target of a penetration testing shall be the demonstration of the resilience against the
identified threats and risks for the railway SuC.

Strict independence of the testers performing these activities is required for penetration testing, which shall be performed by a department or organization independent from the development team.
Further guidance on inspection and test procedures is given in EN IEC 62443-4-1.
9.3.3 Assessment of results

If the tests fail to show proof of the successful implementation of the CRS, then design review activities
shall be performed resulting in either:
a) additional compensating countermeasures (security-related application conditions) being added

or
b) returning to the design phase of the SuC

9.4 Cybersecurity validation

Objectives
a) Assess by examination and provision of objective evidence that the SuC in combination with its
security-related application conditions complies with the Cybersecurity Requirements Specification
(CRS of EN IEC 62443-3-2).

b) Confirm or update the cybersecurity case for the SuC according to the results of the validation.

Inputs:
— Cybersecurity Requirements Specification from the threat risk assessment (TRA).

— Cybersecurity case.

— Assurance evidence.


Output:
— Updated System Integrator cybersecurity case (incl. Cybersecurity Assessment Report)

Activity:
Validating the SuC in its specific configuration to determine its operational readiness.
9.5 Cybersecurity system acceptance

9.5.1 Independence

If an independent cybersecurity system assessment is required, or other regulatory requirements call for the cybersecurity system assessment to be performed independently, an independent cybersecurity assessor shall be appointed and given the authority to perform the independent security assessment of the SuC.
In any case, the cybersecurity assessor shall always be independent from the project manager and shall be a different entity from those undertaking other roles in the project.
9.5.2 Objectives

— Assess compliance of the system with the overall security requirements.

— Accept the system for the entry into service by completing the handover.

9.5.3 Activities

Assessment of the System Integrator cybersecurity case, resulting in an acceptance report, the (independent) Cybersecurity Assessment Report, which shall draw a conclusion about the fitness of the SuC for operation.
9.5.4 Cybersecurity handover

SuC handover is based on a valid cybersecurity case created, ensured and documented within the previous railway application lifecycle steps and phases. In order to commission and hand over the railway application to the asset owner, the operational readiness of the security functions shall be demonstrated by the integration service provider or, when specified by the asset owner, by an independent party, e.g. to perform the Site Acceptance Tests.
All asset owner activities to be performed in the context of the SuC handover shall be planned beforehand and later on documented in the System Integrator cybersecurity case. The results of the demonstration of operational readiness shall be based on the asset owner's asset configuration management baseline and thereby be unique and repeatable.
The risks and the security constraints for the demonstration of the operational readiness of the SuC shall be identified by the integration service provider or independent party, and communicated to and approved by the asset owner before handover. Whenever necessary, the asset owner's staging environment shall be used, or the tests shall be limited to a tolerable risk, e.g. passive methods.
NOTE Further guidance is given by the IEC 62443-2-4 “Security program requirements for IACS service
providers” in particular all requirements of the subtopic “Verification”.

With completion of the SuC handover, the asset owner receives the cybersecurity case along with the
security guidelines.

10 Operational, maintenance and disposal requirements

10.1 Introduction

It shall be ensured that the cybersecurity of the SuC is maintained throughout operation, maintenance,
and decommissioning activities.


The following subclauses deal with requirements on processes and procedures for operating and maintaining railway applications in a secure state.
As a precondition for ensuring secure operation, maintenance and disposal, the asset owner shall have an OT Security Program (see Clause 5) and all parts of the concerned railway application should be in its scope.
The asset owner's OT security program shall be applied by the railway operator and the service provider.
As legal requirements differ from country to country and this Clause only provides a basic set of
requirements, every railway operator is required to check for additionally needed requirements.
For this version of this Technical Specification, specific advice is provided for vulnerability management and patch management activities. In later versions, security monitoring, incident management, business continuity and crisis management will also be addressed.
A process for vulnerability management and security patch management shall be established and
contractually agreed between asset owner and System Integrator/product supplier.
Examples of possible contractual agreement content:
— rules to address vulnerability watch (e.g. frequency of watching, trigger conditions for specific risk
analysis)

— rules to assess the vulnerability criticality level (e.g. using as entry CVSS from a public CVE)

— trigger conditions to perform additional risk assessment

— rules to determine patch deployment.

Vulnerability Management and security patch management rely on an asset inventory, system
architecture and change control process.
10.2 Vulnerability management

This subclause deals with the handling of vulnerabilities and the interfaces between the stakeholders.
The main objective of vulnerability management is to proactively detect, classify, prioritize and remediate
vulnerabilities.
Product suppliers / system Integrator should notify and report vulnerabilities to the asset owner in a timely
fashion.
Cooperating railway operators shall establish an exchange for quick response on new vulnerabilities with
each other.
Depending on the legal framework it may also be necessary to report to government agencies or other
bodies. Best practices of responsible disclosure are applied.
NOTE In order to keep the notation simple, the description here deals with a single exploit or vulnerability only,
but it will be equivalent for a group of several vulnerabilities or exploits.

A risk assessment (RA) may become necessary at any point in the lifecycle if a new vulnerability becomes known that could affect a system in operation. This may be viewed as a triggered RA. It differs from the detailed risk assessment in that decisions should be made quickly on whether the concerned system may continue operation, perhaps with additional countermeasures. This does not exclude using the detailed risk assessment as an alternative method.
First, it should be evaluated whether a productive system is affected. In this case, ad hoc immediate countermeasures should be applied which ensure a sufficient level of security while the vulnerability and the associated risk are assessed. Essential functions should continue to be supported, if possible.
Then a RA of the system including the new vulnerability should be performed and countermeasures are
added to the system until an acceptable level of risk is achieved. In this process the ad hoc
countermeasures may also be changed or substituted. During this process a patch schedule should be


determined. The time duration until the patch is rolled out may be a factor in the risk assessment.
Depending on the risk acceptance of the railway operator a risk can be accepted.
The implementation should be monitored. If changes in the schedule occur or if new threats arise from
the vulnerability, then the RA should be updated.

Figure 14 — General vulnerability handling flowchart

A risk-based decision matrix for application of patches should be established.
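Such a decision matrix could, for instance, combine the vulnerability criticality (e.g. a CVSS score) with the exposure of the affected zone, as in the following sketch (in Python; all thresholds, categories and deadlines are assumptions to be defined and contractually agreed by the parties):

# Illustrative risk-based patch decision matrix (assumed values only).
def patch_decision(cvss_score, zone_exposure):
    # zone_exposure: "external" (reachable from outside) or "internal".
    if cvss_score >= 9.0:
        return "immediate ad hoc countermeasures and triggered risk assessment"
    if cvss_score >= 7.0:
        return "patch within 30 days" if zone_exposure == "external" else "patch within 90 days"
    if cvss_score >= 4.0:
        return "schedule with next maintenance window"
    return "accept risk and document the decision"

print(patch_decision(9.8, "internal"))   # -> immediate ad hoc countermeasures ...
print(patch_decision(7.5, "external"))   # -> patch within 30 days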


In some cases, depending on the SuC context, a global software upgrade may be preferable to applying a list of software patches.
10.3 Security patch management

10.3.1 General

If security patches are necessary, the product supplier should proactively inform the asset owner. The product supplier should test and send a patch or new firmware to the asset owner within a bilaterally agreed time frame. The patch should be tested according to the patch state model from IEC 62443-2-3 or a similar process.
The application should inform the operator or engineer (e.g. by a pop-up) if a reboot is necessary after the update. The reboot should only be triggered manually by an operator or engineer, with the acceptance of the operator.


10.3.2 Patching systems while ensuring operational requirements

The standard lifecycle provides the phases of operation and maintenance for productive use of the system and for keeping it working. Running a security lifecycle in parallel might lead to additional maintenance phases for security updates, beyond those intended for standard maintenance based on the availability and/or safety requirements.
The patch process for each system/component should follow the same steps as for other standard patches, which might include:
— Installation on test environment

— Comparative patch tests (behaviour before and after patching) to verify the functionality of the patch
itself

— Regression tests to ensure that the patch has no influence or negative side effects on other
functionality

— Rollback tests

— Documentation and logs on results of the process.

The following clustering of systems is to be considered:


— Railway systems with a certain expected availability

— Systems under RAM considerations according to EN 50126-1 or similar

— Systems under additional considerations like functional safety.

(Standard) railway systems should receive security updates on request, following their timelines and in cooperation with the operative users.
For systems under RAM or other availability constraints, the outages due to security updates should be evaluated and calculated in order to reach and keep their availability requirements.
For systems under additional considerations like functional safety, extended procedures to ensure the continuous correct behaviour of the system are expected to be performed, e.g. observation phases, which also lead to reduced availability of the system.
Application of a security patch impacting safety functions shall be coordinated with safety management.
Figures 15 and 16 below are examples describing the additional outage time and the timeframes with reduced availability (running in non-redundant mode) which should be considered for the calculation of the availability of a system.
During the additional update cycles, the arising timeframes of reduced availability should also be considered, e.g. during the update of the redundant node the system runs without redundancy, so an outage of the primary system would lead to a total outage.
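The impact of such update cycles on availability can be estimated with a simple calculation, as in the following sketch (in Python; the number of updates per year and the outage and non-redundant durations are assumptions for illustration only):

# Illustrative availability calculation for a 1:1 active/passive system.
HOURS_PER_YEAR = 8760
updates_per_year = 4
outage_per_update_h = 0.5           # switch-over time, system out of service
non_redundant_per_update_h = 6.0    # one node being patched or observed

planned_outage_h = updates_per_year * outage_per_update_h
availability = 1 - planned_outage_h / HOURS_PER_YEAR
non_redundant_fraction = updates_per_year * non_redundant_per_update_h / HOURS_PER_YEAR

print(f"availability including update outages: {availability:.5f}")
print(f"fraction of time without redundancy:   {non_redundant_fraction:.5f}")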
Figure 15 gives an overview of these timeframes due to a security update on a 1:1 active/passive system. This is similar to the standard maintenance phase.
The figures show the following states of the system:
— Operational State: System is operational or out of service

— Availability State: System runs with redundancy, without redundancy, or is out of service

— Security State: System is vulnerable (before patching the active system), or secure, or out of service.


Figure 15 — Vulnerability and outage time during system update (maintenance phase) [example]

Figure 16 shows a similar split-mode update on a 1:1 system with the introduction of additional observation phases, as they might be required for systems under additional requirements.
For systems under additional safety requirements, updates applied for security reasons have to be evaluated additionally against the Safety Certification Case. An additional observation phase after the update might be defined.
A typical sequence is: update the redundant (mirror) node, run the redundant node in parallel to the primary and compare its behaviour for a certain timeframe, e.g. a whole business day, before switching the primary role to the security-updated node; keep another observation phase of, e.g., another business day before updating the second node.
Such an observation depends on the capability of the system; the comparison may also be done against a virtual reference system, a digital twin, or whatever is expected and in the normal case verified to act and react synchronously with the active operative system.
These observation timeframes increase the time of reduced availability of the overall system.


Figure 16 — Vulnerability and outage time during system update with observation phases
[example]


Annex A
(informative)

Handling conduits

A.1 Introduction

In IEC 62443, conduits are the links or channels between zones. Similar concepts have also been discussed in EN 50159, but within that standard communication is only discussed from a safety perspective. Railway-specific recommendations for partitioning a SuC are given in 6.4.
In principle, only three different types of conduits are necessary to connect zones, depending on the SL of the zones:
— Conduit implementing a transparent gateway (connecting zones of same security level),

— Filtering conduit as firewall appliance (allowing a zone of lower or equal security level to communicate
with a zone of a higher security level), and

— Unidirectional conduit as data diode or network tap (allowing output from a higher security level zone
to others).

NOTE The gateway protects the integrity and potentially the confidentiality of the data flow between two gateways. Its major drawback is that it connects two networks transparently without separation, segmentation or filtering. For the filtering conduit, the filter rules can get very complex; in particular, firewall devices are complex and require frequent security patches. Also, filters are not effective against masquerading attacks. For the data diode, it has to be distinguished how it is implemented. When realized in software, the security is only as good as the hardening and patch level. When realized in hardware using physically unidirectional flow principles, it is hardly possible to breach higher security levels from remote.

In EN 50159 only the case of a gateway is considered, as in EN 50159 two safety zones are connected
transparently, which means that they should have the same security level.
Some conduits have been handled successfully by protection profiles e.g. gateways by DIN VDE V 0831-
102 (based on Common Criteria). ANSSI has already worked out protection profiles for all three types of
conduits (gateway, data diode, filter) for industrial automation (https://www.ssi.gouv.fr/guide/profils-de-
protection-pour-les-systemes-industriels/).
Figure A.1 shows a simple example of four zones connected by three different types of conduits.


Figure A.1 — Zones and conduits example

This annex aims at clarifying the requirements for conduits from IEC 62443 and their relation to existing standards such as EN 50159 and to cybersecurity codes of practice, e.g. protection profiles for conduits.

A.2 Requirements for conduits in IEC 62443

In general, the requirements from EN IEC 62443-3-3 and EN IEC 62443-4-2 fit zones better than conduits. A reason for this may be that a conduit is more a logical than a physical asset, i.e. the security functionality is mainly performed by one or two components (gateways, firewalls or data diodes). It is also debatable whether such components belong only to the zone they protect, only to the conduit, or to both; e.g. in Figure A.1 the gateway is necessary for the boundary protection of the zones connected to it, but is also necessary as a component of the conduit. Generally, such devices at the boundary of zones should be considered to belong to both the zone and the conduit.
In EN IEC 62443-4-2, requirements are stated explicitly for so-called network devices (ND). These Network Device Requirements (NDR) should be fulfilled in addition to the general component requirements (CR).

A.3 Protection profiles for conduits

A protection profile can be understood as a generic cybersecurity requirement specification (CRS) for a
class or type of components or specific configuration setting of different components. Its intent is to ease
the re-use and tailoring of cybersecurity requirements. Protection profiles might also act as Codes of
Practice.
The general table of contents of a protection profile is:
1. Description of the component including features, intended use, users and assumptions

2. Asset protection incl. environment and essential functions

3. Threat model

4. Security objectives (high-level requirements).

As protection profiles related to components used for the protection of conduits already exist, the question arises of how such protection profiles can be used in relation to railway cybersecurity. The first option is to use existing protection profiles as a code of practice. But to support standardization, a second option may be more useful: the cybersecurity objectives included in the protection profile should be traceable to the IEC 62443 standard and the associated SL-T. With this approach, only three protection profiles would be necessary to cover all types of conduits, which would simplify the specification of the CRS for conduits.
EXAMPLE If the security objective for a gateway were O.SecureSecretsStorage (meaning that credentials have to be stored securely), then this objective might be mapped to CR 4.1 (Information confidentiality), CR 4.2 (Information persistence) and CR 4.3 (Use of cryptography) from EN IEC 62443-4-2.

In this way the objectives can not only be mapped to the requirements, but they can also be tailored to the SL-T needed in the particular case. Additionally, requirements that are not applicable in the specific context would be excluded, as they would not be necessary to fulfil any security objective.
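The following sketch (in Python; the SL values at which each CR becomes applicable are assumptions for illustration and are not taken from EN IEC 62443-4-2) shows how such a mapping could be represented and tailored to the SL-T:

# Illustrative mapping of a protection profile objective to component requirements.
objective_to_crs = {
    "O.SecureSecretsStorage": ["CR 4.1", "CR 4.2", "CR 4.3"],
}

# Assumed minimum SL at which each CR becomes applicable (illustrative only).
cr_min_sl = {"CR 4.1": 1, "CR 4.2": 2, "CR 4.3": 2}

def tailored_requirements(objective, sl_t):
    # Return the CRs needed to fulfil the objective at the given SL-T.
    return [cr for cr in objective_to_crs[objective] if cr_min_sl[cr] <= sl_t]

print(tailored_requirements("O.SecureSecretsStorage", sl_t=1))  # -> ['CR 4.1']
print(tailored_requirements("O.SecureSecretsStorage", sl_t=3))  # -> all three CRs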


Annex B
(informative)

Handling legacy systems

B.1 Introduction

In the short- and medium-term future, there will be few components which implement a set of security requirements compliant with the EN IEC 62443-4-2 standard. Most current products were designed through processes that did not incorporate dedicated cybersecurity activities. However, a set of security measures can still be defined to ensure a basic security level for an installation including such products.
This annex defines the technical and organizational countermeasures to achieve a certain basic security level for systems which do not achieve a security level according to IEC 62443. This basic security level, however, does not fulfil any IEC 62443 security level and therefore can prevent only some cyber-attacks. Detection of most cyber-attacks is possible thanks to a mature level of security operation, e.g. an organization managing and operating a security program according to EN ISO/IEC 27001 or IEC 62443-2-1. Some measures may also support limited activities to recover from a cyber-attack.

B.2 Basic security risks

B.2.1 Denial of service attacks and vulnerability exploits

Denial of service (DoS) attacks and vulnerability exploitation are typically possible if an attacker gets access to the operational network. The attacker sends either malformed data or huge volumes of data that make the targeted devices unavailable (e.g. unresponsive). When an attacker exploits one or more vulnerabilities of the attacked devices, the attacker can render the device unavailable or compromise the integrity of the device (i.e. change data and code). An attacker can also use a compromised device as a new attack device.
Such attacks can be achieved when the attacker gets physical or logical network access to the operational
network to create or attach an attack device. Attaching attack devices to the physical network can be
impeded by physical security of the installation as detailed in B.3.1. Detection of such attack devices can
be achieved by regular inspections of the installation and by network monitoring.
Compromising an existing device via logical access requires remote access to the operational network.
This can be mitigated by closed network design and network segmentation. If a closed network is not
possible, the operational network should be separated from the non-operational network by a data diode
(when data are only leaving the operational network) or a Demilitarized Zone (DMZ) when bi-directional
communication is required.
B.2.2 Impersonation attack

During an impersonation attack, an attacker sends a message with correct syntax to a target. The attacker typically forges all required data (such as IP addresses, sequence numbers, identifiers, etc.). Simple attacks just replay a previously sent message; more sophisticated attacks emulate the interface protocol. One variation is the Man-in-the-Middle attack, where arbitrary data from and to the attacked device can be altered.
Since a legacy device might not strongly authenticate the sender of the request, it cannot distinguish
between a permitted message and a specifically crafted and forged message.
In order to execute an impersonation attack, the attacker requires either physical access (to place an
attack device in the operational network or compromise an existing device) or remote network access.
Adding an attack device can be detected by regular physical inspections and by network monitoring.
Compromising an existing device via remote network access typically involves several network activities


that can be detected by an intrusion detection system. The problem is much more complex if rogue
devices are only temporarily attached.

B.3 Basic process activities

B.3.1 General

The following process activities enhance a legacy system's protection against cyber-attacks and complement the basic technical measures already present. It is assumed here that no activities in earlier lifecycle phases can be carried out, e.g. because of pre-developed components or legacy systems already in place.
B.3.2 Zoning

Even if no SL is assigned, components of similar functions and security requirements should be integrated
in one security zone. The boundaries of security zones should be protected by security gateways, firewalls
or data diodes.
As a default, the Purdue model can be used to group components into zones (an illustrative sketch is given after the level descriptions below):
Level 0: All sensors (e.g. axle counters, track circuits, odometers) and actuators (e.g. point machines, signals, brakes) that provide the basic input and output of the control system.
Level 1: Local Control: All elements that receive input from sensors or provide output to actuators, elements that process data and elements that send or receive data to or from an area control element.
Level 2: Area Control: All elements that are required for area control or train control functions.
Level 3: Overall Control: All elements that are needed for central control and business logic (such as planning and disposition).
Level 4+5: The Enterprise/Office network of the asset owner.
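A minimal sketch of such a grouping is given below (in Python; the asset names and their level assignments are assumptions for illustration only):

# Illustrative grouping of railway assets into zones along the Purdue levels.
purdue_zones = {
    0: ["axle counter", "track circuit", "point machine", "signal"],
    1: ["object controller", "onboard vital computer"],
    2: ["interlocking", "radio block centre"],
    3: ["traffic management system"],
    4: ["enterprise/office network"],
}

def zone_of(asset):
    for level, assets in purdue_zones.items():
        if asset in assets:
            return f"zone at level {level}"
    raise KeyError(f"asset not inventoried: {asset}")

print(zone_of("interlocking"))  # -> zone at level 2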
B.3.3 Defence in depth

The principle of the defence in depth approach is to ensure that countermeasures are still in place even
if a security breach has occurred. The following NIST principles of security provide one possible solution
to achieve defence in depth.
Prevention – Prevent attacks against assets to ensure the Availability, Integrity and Data Confidentiality
of systems and information.
Detection – Detect abnormal behaviour and trigger alerts for the rapid identification of a security breach,
incident or suspect activity.
Respond – Respond to a detected security incident by taking appropriate actions for recovery.
Application of the defence in depth principle can be based on the system and component requirements
of IEC 62443. The requirements of IEC 62443 can be taken as a guideline to achieve compliance with
the NIST principles of security:
— Prevention

o Authentication of users (human users, devices, SW)

o Access Control - Control of access to devices

o System Integrity (SW and HW)

o Segmentation of the network (separation of essential/safety devices from non-essential)

o Comprehensive software patching process


— Detection

o System monitoring (situational awareness)

o Diversity (Safety and Security concept essential Devices, redundancy)

o System and network segmentation

— Response

o System monitoring (situational awareness)

o Diversity (Safety and Security concept essential Devices)

o Incident reporting.

As a result, there should be more than one defence that needs to be overcome to breach the system; these defences can be selected and weighted from the variety of security functions according to feasibility and cost.
B.3.4 Basic risk analysis

Like newly developed systems, legacy systems should be analysed with respect to security in a structured and comprehensive manner. The main difference is that the legacy system is already completely defined, while the approaches for risk assessment in Clauses 6 and 7 are more targeted at systems to be developed. So, for legacy systems, the possible attack scenarios through known weaknesses in the design or known vulnerabilities may be addressed directly.
Attack trees are one possible way to systematically identify attack vectors for legacy systems and possible
mitigations to underlying vulnerabilities. Attack trees are used to analyse the system in a top-down
approach, starting from an abstract “loss of assets” scenario and resulting in possible threats at specific
attack vectors.
Dedicated vulnerability databases, e.g. based on Common Vulnerability and Exposures (CVE), are
suitable sources for the identification of vulnerabilities in utilized software modules and third-party
libraries. Another approach to find vulnerabilities within the system can be penetration testing.
A qualification of attack vectors helps to establish an attack cost model. Based on the outcome of the
analysis, additional countermeasures may be prioritized. Measures to reduce the attack surface should
be considered.
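The following sketch (in Python; the tree content and the cost figures are assumptions for illustration only) shows how an attack tree combined with a simple attack cost model can point to the attack vector that should be addressed first:

# Illustrative attack tree for a legacy system with a simple cost model.
attack_tree = {
    "goal": "manipulate field element command",   # abstract loss scenario
    "type": "OR",                                  # any child path suffices
    "children": [
        {"goal": "attach rogue device to field cabinet", "cost": 3},
        {"goal": "compromise maintenance notebook",      "cost": 2},
        {"goal": "break protocol encryption",            "cost": 9},
    ],
}

def cheapest_attack(node):
    # Cost of the easiest path: leaf cost, or minimum over the OR branches.
    if "children" not in node:
        return node["cost"]
    return min(cheapest_attack(child) for child in node["children"])

# The cheapest path (here: the maintenance notebook) indicates where
# additional countermeasures should be prioritized.
print(cheapest_attack(attack_tree))  # -> 2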
B.3.5 (Re-)Commissioning

The following activities are recommended during (re-)commissioning:


— Check of applied basic security mechanisms (e.g. a subset from B.4)

— Create a complete list of all network capable assets

— Create a restoration point / backup of all assets.

B.3.6 Site acceptance test (SAT)

At SAT, all implemented security mechanisms should be tested. This includes the following list:
— If hardening measures have been implemented, the effectiveness should be demonstrated (e.g.
disabled services, changed default passwords, etc.).

— Restoration of assets should be demonstrated.


— Forged attacks by penetration testers should be visible in the IDS / SIEM.

Additionally, photos of the final installations should be taken and archived for later use.
B.3.7 Operation

The following activities are recommended during operation


— Visual inspection of installed systems (with help of installed-time photos)

— Validation of list of network capable assets

— Check of restoration capabilities (is the backup still accessible and still up-to date?)

— Security operation according to EN ISO/IEC 27001 or IEC 62443-2-1.

Security operators should monitor the SIEM (or IDS) as a minimum during normal office hours. Alarms
should be analysed and investigated.
If an incident is identified, a standard defined procedure for handling the incident should be executed. This typically involves activities such as triage (listing indicators, type of compromise, number, criticality and location of affected devices), investigation (evidence collection, analysis of evidence), communication (internal, external), and remediation (network device shutdowns, clean-up, rebuild planning, prevention planning).
Special care should be taken with maintenance activities, especially on legacy systems where no cyber controls are available: in this case, maintenance activities, potentially interfacing with the unprotected core of the system, can be a threat vector. A dedicated risk analysis considering the maintenance operations could help manage those risks.
B.3.8 Training of personnel

Advanced attacks need escalation of privileges and interaction with legitimate users e.g. phishing attacks.
Personnel should be regularly trained. Awareness of cybersecurity risks should be kept at a high level.
B.3.9 Asset inventory

It should be ensured that systems are known in depth and that it can be analysed which versions are in use and where.

B.4 Basic security countermeasures

B.4.1 General

This Clause describes the suggested cybersecurity measures for legacy devices.
B.4.2 Protect installation

In order to prevent unauthorized access to the operational network, the access should be physically
restricted.
Access to installations, especially on the operational network, should be restricted to authorized
personnel only.
Any installation should be protected according to the protection classes of EN 50600-2-5. The standard
lists a set of physical and technical access controls according to protection classes (starting from the
outer zone or fence, moving inwards to the building, then the inner building zones and individual room).
The technical measures include security lighting, video surveillance, intruder alarm system, access
control and alarm monitoring.
Trackside installations or installations in other open areas (e.g. on-board installations on trains) should be secured by metal cases according to resistance class 3 of EN 1627. If the specific installation allows for

access to the operational network, additional sensors to detect excessive motion or vibration should be
considered as a means to initiate a visual inspection.
B.4.3 Regular inspection of installation

Installations of equipment in the operational network, especially the locations of installed equipment such
as cabinets, racks and cable routes, should be inspected visually for modifications and additions on a
regular basis.
Photos of the installed equipment help to identify modifications. It is therefore recommended to have
access to the photos of the original installation during the visual inspection (e.g. as print-outs or on a
mobile device).
Seals can be used to reveal modification and tampering of installations. They can reduce the need for
detailed inspection unless the seal is broken.
B.4.4 Closed network / perimeter protection

Assets of a railway system are least susceptible to cyber-attacks when operated in a closed network. Any
access to or from the operational network is then prohibited by the network design (strict physical
separation).
If data needs to be sent from within the operational network, a data diode (allowing only uni-directional
data flow) should be used. Such a device prevents access to the operational network from the outside,
but still allows the sending of data outside to the external network. This allows for remote diagnosis,
export of data to cloud systems, and external intrusion detection analysis.
If bi-directional data flow is required between the operational network and an external network, a demilitarized zone (DMZ) is required. A DMZ usually consists of two application-level firewalls and at least one bastion host. The bastion host is a hardened server that terminates the data transfer between the two networks. It restricts transmission based on a deny-all principle, allowing only explicitly permitted address ranges, protocols or commands.
B.4.5 Network segmentation / restricted data flow

Operational networks should be segmented to limit the consequences of a successful attack on one part
of the network, impeding access to other parts.
Network segmentation requires a detailed analysis of the existing network and the data flows of the installed devices. This analysis results in a communication matrix, which can be used to restrict the routing of the network, resulting in a segmented network.
A network blueprint can be created for standard system configurations. It allows the use of configuration tools that generate the required configuration files for the network elements.
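The following sketch (in Python; the zone names, protocols and ports are assumptions, and the rule syntax is generic rather than that of any specific product) illustrates how filter rules can be generated from a communication matrix:

# Illustrative generation of filter rules from a communication matrix.
communication_matrix = [
    # (source zone, destination zone, protocol, destination port)
    ("interlocking", "object-controller", "tcp", 50051),
    ("maintenance", "interlocking", "tcp", 443),
]

def generate_rules(matrix):
    rules = [f"permit {proto} from {src} to {dst} port {port}"
             for src, dst, proto, port in matrix]
    rules.append("deny any any")  # default deny between segments
    return rules

for rule in generate_rules(communication_matrix):
    print(rule)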
B.4.6 Network monitoring system

A network monitoring system (NMS) can be used to detect new devices on the network (when such
devices use a different MAC address than the existing ones of the installation). Additionally, configuration
changes of network devices (as managed switches, routers and firewalls) can be detected.
An NMS should be installed in conjunction with managed switches. It should be configured to monitor all
network devices and to create alerts when unknown devices appear in the network or when the
configuration of network devices changes. If a Security Incident and Event Management System (SIEM)
is used, the alerts should be forwarded to the SIEM.
The operator should monitor the alerts generated by the NMS and react to those alarms (e.g. activities to
find and inspect the new device, find the reason for configuration change, etc.).
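A minimal sketch of such a check (in Python; the MAC addresses are examples only) compares the devices observed on the network against the asset inventory:

# Illustrative comparison of observed devices against the asset inventory.
known_assets = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}                    # inventory
observed = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f", "de:ad:be:ef:00:01"}   # seen on network

unknown_devices = observed - known_assets
missing_devices = known_assets - observed

if unknown_devices:
    print(f"ALERT: unknown device(s) on the network: {sorted(unknown_devices)}")
if missing_devices:
    print(f"WARNING: inventoried device(s) not seen: {sorted(missing_devices)}")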
B.4.7 Intrusion detection / SIEM

An intrusion detection system (IDS) is used to detect anomalous network traffic. An IDS requires the analysis of the network data (or at least the metadata of the transferred data) from relevant locations in


the network. Depending on the network architecture, this data can be retrieved by means of one or more
network taps (e.g. data diodes) within the operational network.
An IDS typically works with a defined or trained baseline of normal operation and then reports deviations
from this baseline as alerts. These alerts can be picked up by a Security Incident and Event Management
System (SIEM). The SIEM provides an overview of security alerts for security operators and can correlate
these events with log entries from network devices such as switches, routers and firewalls.
B.4.8 Virtual private networks (VPN)

If site-to-site connectivity is required over an open network (e.g. public networks such as the Internet), VPN technology should be used. Typically, access routers or wireless 3G modems provide integrated VPN capabilities. VPN functionality should be enabled to set up a secure channel over a public network.
VPNs have a security drawback since they essentially bridge across and combine two distinct networks. It is therefore advisable to include the VPN connections in the overall network analysis and look for network segmentation and filtering opportunities at the VPN end points.
B.4.9 Redundant communication

If redundant communication channels are used in the operational network, this can be used to further
enhance the detection rate of a Network Intrusion Detection System (NIDS).
If an attacker influences only one of the two channels, the NIDS can detect this attack instantly.
An alarm will be triggered when one of the channels is not available. If such an alarm is triggered, a
physical inspection of the communication path (from device to the communication end point) is
recommended, since this can be an indication of an attacker inserting an attack device in the
communication path (e.g. for a Man-in-the-Middle attack).
B.4.10 Security gateway

A security gateway (SG) can be added to each communication channel of a system or behind a media
converter in a field cabinet. An SG should be placed at each end of the communication path. The SG
shields a non-secure legacy device from unauthorized access and protects its communication with other
devices. Man-in-the-Middle attacks from the network are successfully prevented. Also, vulnerabilities of
the device cannot be exploited from remote locations.
Security gateways typically provide confidentiality through encryption (e.g. on the transport layer by TLS/DTLS) for all outgoing and incoming network traffic in an n-to-m fashion. This allows the SG to be used not only at the field level, but also at central locations.
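As an informal illustration of such transport-layer protection, the following sketch (in Python, using the standard ssl module; the addresses, ports and file names are assumptions, and a real deployment additionally needs certificate management, error handling and continuous forwarding) shows a gateway that terminates a mutually authenticated TLS channel and forwards the traffic to an unprotected legacy device:

import socket
import ssl

CA_CERT = "ca.pem"                      # assumed file names for illustration
GW_CERT = "gateway_cert.pem"
GW_KEY = "gateway_key.pem"
LEGACY_DEVICE = ("192.0.2.10", 2404)    # plain-text legacy endpoint (example address)
LISTEN_ADDR = ("0.0.0.0", 8443)         # TLS side towards the peer gateway

def run_gateway_once():
    # TLS server context with mutual authentication towards the peer gateway.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(GW_CERT, GW_KEY)
    ctx.load_verify_locations(CA_CERT)
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject peers without a valid certificate

    with socket.create_server(LISTEN_ADDR) as listener:
        with ctx.wrap_socket(listener, server_side=True) as tls_listener:
            conn, _peer = tls_listener.accept()
            with conn, socket.create_connection(LEGACY_DEVICE) as legacy:
                # Forward one request/response pair between the TLS channel
                # and the unprotected legacy device (simplified, no framing).
                legacy.sendall(conn.recv(4096))
                conn.sendall(legacy.recv(4096))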
The communication path between two security gateways is protected. However, the path between SG
and the legacy device is not protected. Therefore, additional protection mechanisms (e.g. door contacts
and other physical access restrictions) should be in place.
SGs can be equipped with digital I/Os that can be used for door contacts or other tamper protection devices. When the I/O status changes, an alarm is sent via the diagnostic interface.
B.4.11 Handling mobile devices

In legacy systems, some data might be transferred by using mobile devices like USB devices. To protect
the system against malware infections, those devices should be checked for malware continuously or
limited to only a one-time use.


Annex C
(informative)

Cybersecurity design principles

C.1 Introduction

The design of cybersecure systems is dependent on the set of empirical and scientific principles that
essentially provide a roadmap to the achievement of desired security objectives, whatever the medium.
The principles are not rules or requirements but in general terms, they represent a mind-set that
influences and underpins the design and architecture of a system. It is also true that the design principles
are the foundation of the more detailed requirements that are in turn translated into functionalities.
The cybersecurity design principles therefore represent the wisdom of the design and architecture of
complex systems that are derived scientifically or empirically and are deemed to lead to a higher level of
attainment of security in an intended application.
The cybersecurity design principles presented in this annex in the following clauses have been derived
from best practice and review of existing sources and standards. They are developed and presented in a
templated format to ensure consistency and coverage. The selection of the principles relevant to a
development is left to the designer but it is recommended that a suite of synergistic principles is selected
together before the commencement of design activities and the security requirements are mapped to the
chosen set for traceability and consistency.

C.2 Secure the weakest link

1. Rationale and scope

o Since attackers are supposed to look for and attack first the weakest parts of a system, system
designers should consider the weakest links and the least protected aspects in their system and
ensure that they are secure enough. Security can be seen as a chain, which is only as strong
as the weakest link. This design principle aims to push the designer to consider the security of
all the components of the system and not only the most obvious, such as the protocols or the
interfaces.

o This principle is relevant to railway cybersecurity.

o Railway IT systems normally operate for many years in complex, multi-vendor, transnational and mutable environments. Therefore, it is highly probable that the robustness of their security chains will be tried over time. Securing the weakest link at design time should prevent the chain from being easily broken when this happens.

o Principal stakeholders: Product Provider, System Integrator.

2. Guidelines for implementation in a railway environment

o All components, boundaries (internal and external) and data flows need to explicitly be captured
and transparently identified and described before the weakest link can be identified in a system
or architecture.

o A good risk analysis is essential to ensure a good level of security to the system. Standard
methods are available to define, implement and assess target security levels for an industrial
system.


o This document provides a guide on how to use risk analysis to achieve the desired security level in railway applications. On the other hand (paradoxically), risk analysis methods sometimes tend to prioritize major problems, with the risk of leaving the easiest ones out of scope.

o The secure-the-weakest-link design principle requires the designer to pay attention even to the components that are not identified by the risk analysis as the most difficult/expensive to protect.

o For example, it is unlikely that a hacker tries to decrypt an encrypted communication from train to ground if he can more easily compromise the notebook of a maintainer (using social engineering, for instance), with the final aim of installing malicious software in the train-to-ground communication server.

o In many cases, the weakest link of a system is the human one and this should be taken into
serious consideration when designing every user interaction with the railway applications, for
instance, giving the minimum privileges to each user, as stated by “Grant the least privilege”
design principle.

o Designers should not find constraints in applying this design principle, even if the system already exists. Legacy systems may contain many weak links. The principle asks the designers to identify them with an open mind, avoiding the bias of thinking that firewalls, encrypted communications and antivirus software are all they need to secure a system.

3. Tailoring and prioritization

o Considering and implementing this principle is recommended for all components and systems, whatever SL is selected.

o At every iteration of the risk analysis process, weakest links should be identified and secured
with appropriate countermeasures such that the overall system risk is acceptable.

4. Relationship with other principles

o This principle is related to the following principles for their robust implementation and
effectiveness:

o Defend in Depth,

o Fail Secure,

o Grant Least Privilege,

o Authenticate Requests,

o Control Access,

o Make Security Usable,

o Assume Secrets Not Safe,

o Promote Privacy,

o Audit and Monitor,

o Proportionality.

o When prioritising effort for the implementation, all related and synergistic principles should be
considered and weighed.


5. Validation rules/requirements for implementation

o The use of this principle and supporting arguments should be documented transparently in the
risk analysis.

o A verification, monitoring and inspection process should be established for every component and interface.

o The dynamic behaviour of the protocols should be monitored to seek and identify anomalies.

o Advanced Penetration Testing should be used to find out which weakest link has not been
adequately secured.

C.3 Defence-in-depth

1. Rationale and scope

o The defence in depth principle is a system principle that derives from the observation that no single
protection would be enough to definitively stop an attack. Two main drivers lead to the
implementation of this principle:

— Ensure that no single vulnerability or breach would endanger the system.

— Provide response capabilities in the system by mixing

o preventive measures that would slow down the attacker

o detective measures that would allow the response team to detect, analyse and provide
adequate response to stop or mitigate the attack.

o The basis of defence in depth is the conjunction of several diverse protections applied in
sequence. To access targeted data, one has to pass through protection and access controls at
each logical frontier. It is often represented as concentric protections around the data; however,
data are dispersed throughout the system, so this representation should be understood as starting
from every device, with protections pooled by zones and entities.

o The first barrier should be physical protection; with no trust in who can access the hardware,
there can be no trust in the data. This typically consists of fences, doors and locks, cameras and guards.

o The second barrier is perimeter protection. When messages or data enter the network or the
system, they have to be verified to ensure that they cause no harm to the system, either by bringing
malicious content or by providing irrelevant data (spoofing, forgery, etc.). This is typically
implemented through firewalls, proxies inside a Demilitarized Zone, data diodes, Intrusion Detection
Systems and honeypots.

o Next protection is network access control within a zone. This protection ensures that no
unauthorized device is inside the perimeter, bypassing perimeter protection, and able to
maliciously interact with legitimate devices. This is typically implemented through asset
management, network access control (802.1x, etc.), secure network protocols, Intrusion
Detection System, probes and Honey pots.

o Host protection and integrity comes next, that is, protection at device interfaces and system
level. This protection level has to ensure that no interaction with the host may modify the normal
behaviour of the host and its guest applications. So, it has to protect device interfaces through
access control and hardening, but also detect any anomaly inside the host itself such as an
abnormal modification of host integrity. This is typically implemented through host-based firewall,
service access control, host Intrusion Detection System, integrity protection and detection
system, hardening and security logging.


o Application protection, the fifth defence layer, protects the manipulation of the actual data. It
ensures that data manipulation is performed by a rightful agent, and no application input may
modify the computing of the data in an uncontrolled manner. This is typically input validation and
authentication, access controls, code hardening and event logging.

o The last layer is data protection. This level ensures that there can be no access to the data in an
uncontrolled manner. This is typically hardware data protection through security modules and
CPU modes, Operating System protection such as access control, and protection of data at rest
or in transit through cryptographic means.

o Principal Stakeholder: System Integrator.

2. Guidelines for implementation in a railway environment

o This principle is to be used at every level of design, from the widest view including physical and
operational protection, down to host, application and data level.

o The very meaning of this principle (layered protection), makes it relevant at any stage.

o Defence in Depth is an architectural principle that applies at every stage. Its purpose is to ensure that
protections are placed around the data to be protected. In the railway environment, those protection
implementations need to be adapted to the specific context, but the protection typology is
applicable.

o Security measures will require balancing at different layers to enable the prioritization of safety
critical functions.

o The full implementation of this principle in railways needs to bear in mind the safety critical
functions of the operational environment as well as the availability of resources for safety-related
functions. This may imply incomplete implementation of host protection mechanisms that may
need to be delegated to network level.

o The correct timing requirements for the safe execution of the safety critical functions may be
adversely affected by the security mechanisms and that could require careful architectural
implementation of this principle.
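
As an illustration only, and not as part of the guidance of this document, the following minimal Python sketch shows how successive layers (perimeter, network access control, application input validation, data access control) can each contribute an independent check, so that no single bypass is sufficient to reach the data. All names, addresses and command values are hypothetical.

    # Illustrative sketch only: each layer applies its own check; a request is
    # accepted only if every successive layer agrees. All values are hypothetical.
    ALLOWED_SOURCES = {"10.0.1.20"}               # perimeter protection: boundary allow-list
    REGISTERED_DEVICES = {"obu-042", "ixl-007"}   # network access control within the zone
    VALID_COMMANDS = {"STATUS", "SET_ROUTE"}      # application protection: closed command set
    READ_ONLY_ROLES = {"viewer"}                  # data protection: role-based access to data

    def accept(source_ip, device_id, role, command):
        if source_ip not in ALLOWED_SOURCES:      # layer 1: perimeter
            return False
        if device_id not in REGISTERED_DEVICES:   # layer 2: zone / network access control
            return False
        if command not in VALID_COMMANDS:         # layer 3: application input validation
            return False
        if role in READ_ONLY_ROLES and command != "STATUS":
            return False                          # layer 4: data access control
        return True

    assert accept("10.0.1.20", "obu-042", "operator", "SET_ROUTE")
    assert not accept("192.0.2.99", "obu-042", "operator", "SET_ROUTE")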

3. Tailoring and prioritization

o Defence In Depth is the association of different countermeasures along any attack path, from
the external world down to the targeted data. The actual implementation of this principle has to
be refined and defined through secure architecture design principles and has to be coordinated,
harmonized and globally accepted.

o That is, the choice of the precise robustness, complexity and protection level of those controls may be
defined and reused, as typical architectures, from one system to another. The zoning principle,
with conduits, provides a good way to view and manage the implementation of Defence in Depth.
This implementation would then make it possible to justify the achievement of a Security Level Target in
every zone of the system.

4. Relationship with other principles

o The principles most strongly associated with “Defence in Depth” are the “Control Access” principle and
the “Audit and Monitor” principle. Defence in Depth basically ensures that every successive access
toward the data is controlled, so that it is authorized and rightful, and that any deviation from the
controlled state is detected and traced.


o A correctly implemented “Defence in Depth” principle should provide a good overview for applying the
“Economize mechanism” principle. “Defence in Depth” provides a clear justification and measurable
efficiency of the countermeasures that are placed in the system at each level.

o The layered structure of “Defence in Depth” principle can assist with the implementation of the
“Secure the weakest link” principle.

5. Validation rules/requirements for implementation

o Achievement of Defence in Depth is directly linked to the risk mitigation shown in a risk analysis and
attack path analysis. The countermeasures placed along any attack path would limit the
likelihood of a threat occurring, and additional detection mechanisms would mitigate the
associated risk. Penetration testing and other appropriate testing methodologies would provide
confidence in the effectiveness of the implementation of this principle.

C.4 Fail secure

1. Rationale and scope

o The Fail secure principle means a security function is designed in such a way that, in case of a failure,
the security function or the secure system delivering the function remains in the secure state.

For example, in the case of loss of power and following the Fail secure principle, the door has to
remain locked, that is, remain secure. This contrasts with the safety approach, which requires the door
in this example to be unlocked.

Because of the strict requirements for product safety imposed by standards and legislation within the railway
domain, the fail secure principle can be followed only where no product safety requirement is
undermined or contradicted. In all other cases the product safety requirements and architecture
should be dominant.

o This principle is relevant to railway cybersecurity and applicable in areas with no safety
requirements and architecture, or, as a minimum, with a risk analysis showing no indication that any
safety requirement is compromised.

o Principal Stakeholders: Product Provider, System Integrator, Operator and Maintenance.

2. Guidelines for implementation in a railway environment

The definition of some system requirements of IEC 62443-3-3 could be interpreted as an
implementation of the Fail secure principle. Explicitly:

o SR 5.2 RE 3 Fail close

The control system has to provide the capability to prevent any communication through the control
system boundary when there is an operational failure of the boundary protection mechanisms (also
termed fail close).

Under the preconditions discussed before (product safety), this is a broadly accepted security
requirement at system level, applicable not only as a result of an attack but also as a fall-back condition
in case of a failure of the security function itself.

o SR 3.6 Deterministic output

The control system has to provide the capability to set outputs to a predetermined state if normal
operation cannot be maintained as a result of an attack or failure state.


Under the preconditions discussed before (product safety), this is a system requirement that
addresses the outputs of a component, so the system level fall-back mechanism will be affected by
this principle. In a second step, this may lead to the predetermined state of outputs being broken
down to component level (refer to CR 3.6 Deterministic output).

In conclusion, system level requirements and component requirements should be carefully defined
and allocated in the fail secure case.

o Fail secure design always correlates with the robustness of a system or a security function. The
variety of concepts ranges from fall-back solutions and restart conditions in case computation
performance is lost (e.g. timeouts, power cuts) to complex redundancy concepts.

o Reduction of complexity should always be considered. It is in the end more practicable to have one
clear fall-back strategy than a variety of different local approaches. In the case of existing safety
requirements, this architecture is always applicable and may be reused as part of a fail secure
concept.
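
A minimal, purely illustrative sketch of the fail close behaviour referred to above (SR 5.2 RE 3) is given below, assuming a hypothetical boundary filter: if the protection mechanism itself fails, traffic is denied rather than forwarded. The function names and the acceptance rule are invented for the example.

    # Illustrative sketch only: a boundary filter that "fails close". If the
    # inspection mechanism itself fails, communication is blocked by default.
    # inspect_telegram() and its acceptance rule are hypothetical placeholders.
    def inspect_telegram(telegram: bytes) -> bool:
        if not telegram:
            raise RuntimeError("inspection engine unavailable")   # simulated failure
        return telegram.startswith(b"OK")

    def boundary_filter(telegram: bytes) -> bool:
        try:
            return inspect_telegram(telegram)
        except Exception:
            return False   # operational failure of the protection: deny (fail close)

    assert boundary_filter(b"OK status report") is True
    assert boundary_filter(b"") is False   # inspection failed, traffic is blocked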

3. Tailoring and prioritization

o As mentioned before, security functions do not stand on their own but support a functional or safety
architecture of a system for a dedicated application. The requirements for this function must
not be compromised. For this reason, those requirements have to always be considered and, if
needed, given priority.

4. Relationship with other principles

o This principle is related to the design and implementation of all other principles discussed within
this Clause:

o Defend in Depth,

o Economize mechanism,

o Proportionality.

5. Validation rules/requirements for implementation

o All failure modes of a system or component should be exercised and tested to verify fail-secure behaviour.

6. Criteria for validation

o All system or component failure modes should be identified, analysed and recorded as a basis
for verification and system level validation.

C.5 Grant least privilege

1. Rationale and scope

o Each component should have only those privileges to accomplish its specified functions, but no
more.

o During a typical advanced attack, a hacker looks for a software component that has enough
privileges to read confidential data, download malicious software, write and run scripts, send
commands impersonating an authorized user, etc. Once such a component is discovered, there are
many ways to substitute or augment its benevolent code with a malicious one and use its
privileges to do whatever is in the scope of the attack. The fewer privileges the component has,
the less interest it holds for an attacker.


o This principle is relevant to railway cybersecurity.

o Railway systems have some peculiar aspects that can greatly benefit from components
designed according to the least privilege principle. One of these lies in the very nature of
transportation: to move people and goods through a vast technological network, made of
multitudes of components, many of which are equipped with firmware, software and
communication interfaces.

o Interactions between components from different suppliers and even from different owners are
necessary and frequent. This greatly increases the likelihood of a railway software component
coming into contact with malicious software in search of high privileges to prey on. Railway
components and systems should be designed with this in mind.

o Principal Stakeholders: System Integrator, Operator and Maintenance.

2. Guidelines for implementation in a railway environment

o Least privilege is a pervasive principle and is reflected in all aspects of the system.

o For instance, different users of the same railway application should be presented with different
interfaces, carefully designed to give them all the tools they need to accomplish their tasks and
nothing more. The choice of the right interface for a given user is possible only after the user
has been identified and authenticated by the system and the correct privileges have been retrieved
and assigned to the user. The user simply cannot do what the interface does not provide.

o Least privileges principle is not limited to giving users the right authorisations. It is also related
to the notions of modularity and encapsulation. A good system design is normally characterized
by a high level of modularity. A module is designed to do some specific functions and nothing
more. Even if internally it could do something else, because it has full access to low-level
components, it exposes to other modules only what it has been designed to do. In this way, the
module reduces the privileges of its user to the minimum required. In a railway environment, this
is a typical way to design safety-related systems.

o When using COTS, the least privilege principle should be confronted with the fact that COTS are
normally designed to meet the largest possible range of application needs. Commercial operating
systems running in many industrial and railway COTS normally have many software components
that are not really needed for the specific application but are nevertheless available to the user.
The effort to apply the least privilege principle to COTS should be carefully considered, and COTS
should be hardened such that unwanted or unused features are not made accessible.

o Privileges have to be allocated on a need-to-know basis, independent of the access privileges to
the system.
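
The following sketch is illustrative only; the roles and operations are hypothetical. It shows the basic idea of granting each role exactly the operations it needs, with everything else denied by default.

    # Illustrative sketch only: each role is granted exactly the operations it
    # needs and nothing more. Roles and operations are hypothetical examples.
    ROLE_PRIVILEGES = {
        "driver":     {"view_speed", "acknowledge_alarm"},
        "maintainer": {"view_diagnostics", "export_logs"},
        "admin":      {"view_diagnostics", "export_logs", "update_configuration"},
    }

    def authorize(role, operation):
        # Default deny: anything not explicitly granted to the role is refused.
        return operation in ROLE_PRIVILEGES.get(role, set())

    assert authorize("maintainer", "export_logs")
    assert not authorize("driver", "update_configuration")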

3. Tailoring and prioritization

o Considering and implementing this principle is recommended for all components and
systems, whatever SL is selected.

4. Relationship with other principles

o This principle is related to the following principles for their robust implementation and
effectiveness:

o Defend in Depth,

o Fail Secure,


o Grant Least Privilege,

o Authenticate Requests,

o Control Access,

o Make Security Usable,

o Assume Secrets not Safe,

o Promote Privacy,

o Audit and Monitor,

o Proportionality.

o When prioritising effort for the implementation, all related and synergistic principles should be
considered and weighed.

5. Validation rules/requirements for implementation

o The use of this principle should be documented in the system or component configuration
procedures and in the user manual.

o Advanced Penetration Testing should be used to find out if privilege escalation is still possible
on any component of the system.

6. Criteria for validation

Review according to:

o Self-Reliant principle (means functions or system are well defined and implemented and the
reliance on other systems is minimized).

o the keep it simple rule (means the solution is transparent, functionality easy to understand).

C.6 Economize mechanism

1. Rationale and scope:

o To reduce the attack surface for a potential attack in a significant way from a long-term perspective, a
security in depth approach is recommended as state of the art. To support this in an efficient
and demonstrable way, the integration of selected cybersecurity countermeasures
has to be realized with elegance (clarity, simplicity, necessity, expandability) of the interfaces,
together with a precise definition of the functional behaviour to support ease of analysis,
inspection and testing. Avoiding redundancies and overlapping of functionalities is the essence
of the economize mechanism principle. This design principle is consistent with the often referenced and
well-known “keep it simple” rule.

o This principle is relevant to railway cybersecurity. Functions within the railway domain are
nowadays realized with computer-based solutions, often including a variety of explicit
requirements for RAMS. For that reason, reduction of complexity is key for efficient and verifiable
solutions.

o Principal stakeholders: Product Provider and System Integrator.


2. Guidelines for implementation in a railway environment

To reach this objective, the following methods have to be considered:

o Abstraction: a method to reduce complexity. The outcome has to be the identification of commonality of
security services for different functions or components, which can be implemented once and
reused or instantiated multiple times (avoiding redundant implementation by design). It can also be
achieved through a specific configuration of useful parameters for different instantiations of such
services. Take the client-server architecture principle into account. The interfaces should be
defined with elegance (clarity, simplicity, sufficiency), combined with a precise definition of the
function triggered by events and time.

o Encapsulation, or the information hiding paradigm. Following the abstraction step described before,
start with the principal definition of an API for the planned services or functions before the internal
functionality of a service is specified. Be aware that the degree of encapsulation is a significant
indicator of the quality of the design.

o Transparency and traceability of system requirements versus component requirements.

It has to be clearly identified which security function is an implementation of a system requirement.
Every component requirement has to be a traceable implementation of a system level requirement.
Components must not impose independent security requirements that are not related to the system
level. Be aware that every system requirement has to be implemented on components. For system
requirements, distributed solutions are often required. “Design once and deploy anywhere” should be
the preferred code of practice.

o Allocation of cybersecurity functions within the different defensive architectural layers of the
system and component. As a prerequisite, a layered architecture model has to be established,
to enable a clear allocation of the functional properties within the architecture and a clear
allocation of the cybersecurity functionality. For example, the implementation of a filter on
OSI layer 2 (MAC layer) and OSI layer 4 (transport protocol layer) has to be done simultaneously. It is
very helpful to separate these filtering functions according to the appropriate layers.

o Allocation in time of cybersecurity functions. A state-diagram-based model of the functional
architecture is helpful to add the specific security functionality in its timing context.

NOTE In safety-related operating systems, strict monitoring of computation time (watchdog mechanism) is often
implemented. To avoid time overrun conditions, the execution of additional security functions needs
sufficient spare time, especially for worst-case scenarios.

Both allocation principles, for function and for time, support a maximum of transparency from an overall
system perspective and ease the integration, in time and context, with the operational function.

o Robustness of Implementation

For security functions (not limited to monitoring), sufficient memory allocation and memory
management have to be foreseen. Memory overwriting, timeout conditions and missing-event scenarios
along all identified task phases have to be considered.

Simple fall-back solutions and restart conditions in case of degradation of computational
performance (e.g. timeouts, power cuts) have to be devised to avoid inconsistencies of data and
data streams. If not otherwise possible, mark those events for further management of data integrity
during forensic analysis.
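
To illustrate the abstraction and encapsulation methods above, the sketch below (illustrative only; component names and keys are hypothetical) defines one message integrity service behind a small API and instantiates it for several components, instead of each component re-implementing its own variant.

    # Illustrative sketch only: one security service implemented once, behind a
    # small API, and instantiated per component. Keys and names are hypothetical.
    import hashlib
    import hmac

    class IntegrityService:
        """Single reusable implementation of message integrity protection."""
        def __init__(self, key: bytes):
            self._key = key
        def seal(self, payload: bytes) -> bytes:
            return hmac.new(self._key, payload, hashlib.sha256).digest()
        def verify(self, payload: bytes, tag: bytes) -> bool:
            return hmac.compare_digest(self.seal(payload), tag)

    # The same service is configured for different links rather than re-coded.
    interlocking_link = IntegrityService(key=b"demo-key-interlocking")
    tcms_link = IntegrityService(key=b"demo-key-tcms")

    tag = interlocking_link.seal(b"route request 12")
    assert interlocking_link.verify(b"route request 12", tag)
    assert not tcms_link.verify(b"route request 12", tag)   # different instance, different key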


3. Tailoring and prioritization

o Always be aware that security functions are not isolated. Security functions support a functional or
safety architecture of a system for a dedicated function. The requirements for this system
function must not be compromised through the implementation of security measures. For this reason,
those requirements have to always be considered and, if needed, given priority.

4. Relationship with other principles

o This principle is related to the design and implementation of all other principles discussed within
this Clause:

o Defend in Depth,

o Fail Secure,

o Grant Least Privilege,

o Authenticate Requests,

o Control Access,

o Make Security Usable,

o Assume Secrets not Safe,

o Promote Privacy,

o Audit and Monitor,

o Proportionality.

o This principle is relevant when it comes to design and implementation of security functions
derived from the other principles herein or derived from cybersecurity requirements itself to
realize robust and resilient security functions.

5. Validation rules for implementation

o Indicators of a good design and implementation following the economize mechanism principle are:

o Traceability - requirements to implementation,

o Transparency and correlation of function to time,

o Encapsulation of functions, reduced access/usage of shared information,

o Design of data structures, support of data consistency (avoid fragmentation of important
control data, also in relation to time).

6. Criteria for validation

Review according to:

o Self-Reliant principle (means functions or system are well defined and implemented and the
reliance on other systems is minimized).

o the keep it simple rule (means the solution is transparent, functionality easy to understand).


C.7 Authenticate requests

1. Rationale and scope:

o Considering the environment hostile, it is necessary to check the identity of users (human users,
components/devices and processes), first to protect against unauthorized access, and also to ensure
the identity of the sender of a network message.

o Intent and relevance to railway cybersecurity: This principle is one of the best practices applied
for defence in depth, recommended for critical systems, in order to reduce the attack surface.
The distributed nature of control/command as well as the mobile nature of rolling stock makes
implementation of this principle challenging in the railway environment.

o Principal Stakeholders: Product Provider, System Integrator, Operator and Maintenance.

2. Guidelines for implementation in a railway environment

o Where technically feasible, it is useful to identify and authenticate all users (humans, software
processes/agents and devices) attempting to access the system.

Authentication can take several forms:

o ID and password (often used for users in order to assign role and associated privileges; ID
should be assigned at individual level (agent identity) when possible, in particular for high
privileges / high critical systems. Generic role ID are possible in few cases for low privileges.),

o Physical token (sometimes for users; e.g. smart card),

o Certificate (often for portable/mobile devices; for wireless access with e.g. IEEE 802.1X
protocol),

o Authentication codes (for communications between control system components),

o Digital signatures (for software images or patches),

o Biometrics or location-based authentication can also be used for users,

o In case of remote access, it is recommended to use strong / multifactor authentication (e.g. token
and PIN code for VPN access),

o In case of critical systems, it is recommended to protect all external access: other linked
systems, user access (e.g. maintenance port), wireless access, etc.,

o In any case, credentials (certificate, password, shared key, etc.) have to be updatable following
security policy,

o For ground networks, it is easier to use IT components such as a directory server, PKI or RADIUS server
to check authentication,

o For embedded networks, it is harder to use such IT components, so authentication based on a
username and a secret password is a very commonly used mechanism, often with shared
accounts. Interfaces that are not capable of integration into the authentication solution should
preferably be disabled where possible.
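
As one purely illustrative example of the “authentication codes” listed above for communications between control system components, the sketch below uses a keyed hash (HMAC) over the message; the pre-shared key and message content are hypothetical, and key management would follow the applicable security policy.

    # Illustrative sketch only: a message authentication code between two control
    # system components. The pre-shared key shown here is a hypothetical demo value.
    import hashlib
    import hmac

    SHARED_KEY = b"demo-pre-shared-key"

    def protect(payload: bytes) -> str:
        return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

    def authenticate(payload: bytes, mac_hex: str) -> bool:
        expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, mac_hex)   # rejects forged or altered messages

    message = b"brake test report #17"
    mac = protect(message)
    assert authenticate(message, mac)
    assert not authenticate(b"tampered payload", mac)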


3. Tailoring and prioritization

o Need and level of authentication have to be determined by the security level applied on the
considered zone, and by risk analysis (according with the data sensitivity inside the zone). The
target security level first, and the technical and organisational possibilities too, have to be taken
into account to define the best form of authentication.

o Develop a deep understanding of the related design principles (such as control access and assume
secrets not safe) and of the organization during system operation (internal and/or external
stakeholders, maintenance process, connectivity, etc.), in order to determine the best choice of
authentication technology.

o Relationship with attaining a desired Security Level.

o Security measures have to be defined in accordance with the security level targeted.

4. Relationship with other principles

o Defence in depth: Authenticated requests is a part of defence in depth.

o Control access.

o Grant least privilege: Authentication of requests is commonly used to control access (a user on a
component or on a network access point; a component on the network) in order to allow
communication and to assign privileges on the system.

o Assume secret not safe: Authenticated requests often employ secrets (as certificate).

5. Validation rules/requirements for implementation

o During concept stage, it is useful to specify the functional validation tests for authentication.

o During deployment, it is useful to monitor vulnerabilities of the authentication components used in
the system, to review the associated traceability (authentication requests succeeded and failed) and
to carry out penetration testing regularly.

6. Validation criteria and effectiveness at system level

o Prove effectiveness by using certified components or by carrying out penetration testing.

o Brute force attacks can be used to validate the effectiveness of the implementation of this
authentication principle.

o Review the duration of validity of historic user accounts.

C.8 Control access

1. Rationale and scope

o Access to all resources, assets and objects in a railway application has to be controlled in order
to grant access only to authorized entities (users, programs, processes or other systems).

o As access control is strongly dependent on the operational concept, the system designer and the
operator have to work out, in close collaboration, a role-based access model in which the origin of
requests can be authenticated.

o The account management system should be unified and unique.


o Access to an asset can be physical as well as remote (LAN, WLAN, Internet).

o Physical access protection is in general insufficient due to the open nature of the railway
environment.

o Principal Stakeholders: Product Provider and System Integrator.

2. Guidelines for implementation in a railway environment

o Access control consists of two main measure types: a security policy and technical measures to
implement this security policy.

o The Security policy contains a set of rules that specify or regulate how a system or organization
provides security services to protect its assets (definition taken from IEC 62443).

o Implementation is based on one or more of the following means:

— Authentication and Authorization (e.g. IAM, passwords, PKI certificates),

— Network access controls (e.g. firewalls, 802.1x network access control),

— Physical countermeasures (e.g. fences, locks).

o In order to define and prioritize mitigation measures, the attack surface has to be analysed in
the risk analysis.

o The responsibilities of train drivers, signallers and maintenance staff in the system under
operation have to be supported by the security policy. For instance, it is not acceptable for a train
driver to be unable to stop the train because access to the speed control is blocked in the cab.

o The persistence of authentication is a major issue (essential functions with reference to the
railway cybersecurity reference architecture, such as control/command or safety critical functions,
must not be hampered by authentication).

o Continuous authentication (e.g. via a biometric mouse) should be considered. Additionally, a system-
inherent security approach is recommended, meaning that security countermeasures are part of
the integrated electronic and computer components.

o Guidance to the implementation of this principle is mainly given in the following clauses of the
IEC 62443:

— IEC/TR 62443-3-1:2009, Clause 5 - Authentication and authorization technologies

— IEC/TR 62443-3-1:2009, Clause 10 - Physical countermeasures
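
The sketch below is illustrative only and shows how a written security policy can be enforced technically as explicit (role, asset, action) rules with a default-deny decision; the roles, assets and actions are hypothetical examples.

    # Illustrative sketch only: technical enforcement of a security policy as
    # explicit (role, asset, action) rules. All entries are hypothetical examples.
    POLICY_RULES = {
        ("signaller", "route_setting", "execute"),
        ("maintainer", "event_log", "read"),
        ("maintainer", "field_element", "test"),
    }

    def is_access_granted(role, asset, action):
        # Deny unless the combination is explicitly authorized by the policy.
        return (role, asset, action) in POLICY_RULES

    assert is_access_granted("signaller", "route_setting", "execute")
    assert not is_access_granted("signaller", "event_log", "read")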

3. Implementation constraints

o Due to the large geographical spread of railway infrastructure, it is a challenge to physically
protect every part (e.g. cable ducts along the track). Thus, it is crucial to monitor unprotected
areas by subsystems such as Intrusion Detection Systems or Video Surveillance Systems which
in turn are located in physically protected areas.

4. Tailoring and prioritization

o Considering and implementing this principle is recommended for all components and systems,
whatever SL-T is required.


o Defence against Safety relevant threats has highest priority. But maintaining the Availability of
operational services (e.g. route setting as an essential function) is also a top priority target (see
also essential safety and operational functions as defined by IEC 62443).

5. Relationship with other principles

o This principle is closely related to the following design principles:

— Authenticate Requests,

— Audit and Monitor

o This principle is an essential precondition for the application of the following ones:

— Grant Least Privilege

— Audit and monitor

o Since access control involves the storage of personal information, a special care should be given
to privacy implications and a potential conflict should be considered with:

— Promote privacy

6. Validation rules/requirements for implementation

o The use of this principle and supporting arguments should be documented in the component
and system level security requirements

o Requirements for the operation of this principle are specified in the following clauses of the
IEC 62443:

a) IEC 62443-2-1:2010, Clause 7 - SPE 3 – Network and communications security

b) IEC 62443-2-1:2010, Clause 10 - SPE 6 – User access control

o Requirements for the system design are specified in the following Clause of the IEC 62443:

a) EN IEC 62443-3-3:20191, Clause 5 - FR 1 – Identification and authentication control

b) EN IEC 62443-3-3:20191, Clause 6 - FR 2 – Use control

o Advanced Penetration Testing should be used to find out which weakest link has not been
adequately secured.

o A Test Case for establishing the effectiveness of the Security Access Control requirements
should be developed and implemented

o Additionally, check cybersecurity policies for Access Control e.g. key management

C.9 Assume secrets not safe

1. Rationale and scope

o Cybersecurity design needs to assume that an attacker would know all the details of the system.
Even if an attacker does not have all the design information of the system at the start, it is trivially easy to
determine obscure information. Public sources, social engineering on internal sources, mapping
tools, decompilers and disassemblers are standard and efficient means for an attacker to get
any information that was thought to be hidden in the design.

o Then, at any level, security can rely neither on the secrecy of the inner design nor on magic
values that would be planted into the system, such as hidden keys or private accesses. This is
especially relevant when choosing communication protocols and technologies, where people may
think that a proprietary design would be sufficient to protect the transmitted data, or for component
access control, where people may be tempted to plant full debugging access on the
component, relying on the secrecy of this access or its hardcoded authenticator.

o Principal Stakeholders: Product Provider, System Integrator

2. Guidelines for implementation in a railway environment

o The first key to proper cybersecurity design with “Assume secrets are not safe” principle in mind
is “[to] always assume that an attacker knows everything that you know – assume the attacker
has access to all source code and all designs. Even if this is not true [at start]” 2

o With this understanding, the security of the system should rely on algorithms and protocols that minimize
or even nullify the need for secret data. One-way functions or asymmetric protocols have been
designed with this goal in mind:

— A salted and hashed password database is a good reference to check a user's identity claims
(through password authentication), though it carries no direct information on the passwords.

— A public key, signed by a certificate authority, is a practical piece of data allowing the establishment
of a secure communication channel, and it can be distributed all over the system or even
disclosed without fear for the secrecy of the secure channel.

o The remaining secret data, however small, hold all the security of the system and
should be protected as such. Typical protections are:

— Key lifetime, where secrets are changed as soon as their secrecy is not guaranteed or
regularly,

— Forward secrecy, where the secret is present in the system for a limited time and is not
recoverable afterwards,

— Hardware secure container (Trusted Protected Module) where the secret may enter, be
used, but never be extracted,

— Secret sharing among individuals, where n people need to be together for the secret to be
usable.
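
The salted and hashed password database mentioned above can be illustrated by the following sketch (illustrative only; the key-derivation parameters are indicative examples, not requirements): the stored record allows an identity claim to be checked without keeping the password itself.

    # Illustrative sketch only: a salted, hashed password record. Only the salt
    # and the derived hash are stored; the password itself is never kept.
    # The iteration count is an indicative example value.
    import hashlib
    import hmac
    import os

    def make_record(password: str):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, digest)

    salt, digest = make_record("correct horse battery staple")
    assert verify("correct horse battery staple", salt, digest)
    assert not verify("wrong guess", salt, digest)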

3. Tailoring and prioritization

o This principle is about the technology and protocol choices for a system.

o Those choices should rely on standard designs that are shared among industries and project
implementations. This will ensure that the security is not based on a hidden technical feature,
whose discovery would endanger the whole system, and that any key data whose confidentiality
is mandatory are well identified and managed.

2 Howard and LeBlanc, Never Depend on Security Through Obscurity Alone, Clause 3, Security Principles
to Live By


o Once implemented, the technology and the implementation of this principle are easily reused across other
system instantiations and projects.

4. Relationship with other principles

o The implementation of the “Assume secrets are not safe” principle is key to the technical design
decisions used in access control (needed by “Control access”), communication protection (needed by
“Authenticate requests”) and data confidentiality (needed by “Promote privacy”).

o Those security functions need data confidentiality. “Assume secrets are not safe” principle
implies that the functions or the data (access control, authentication mechanism, private data)
cannot be regarded as secure over the entire lifecycle. Standard cryptographic technology
enables the protection or confidentiality of information through a limited small secret key data
set.

o Limiting the portion of secret data is key to achieving both the “Economize mechanism” and “Make
security usable” principles. If the entire system or a large amount of data had to be protected at the same
level, operational constraints and access control protection would have to be deployed globally.

5. Validation rules/requirements for implementation

o Security functions should be based on standard technology and protocols, whose design and
principles are public and understood, so that they rely on no secret mechanisms.

o System documentation should describe the data whose exposure undermines the security
assumptions. Protection mechanisms and use cases (who can access these data, under which
control) should also be provided in the design documentation.

o The validation requires that a strict, traceable process has been followed to implement the
principle, which may include various tests, the evaluation of evidence and conclusions.

C.10 Make security usable

1. Rationale and scope

o Avoid annoying and painful mechanisms, and do not compromise usability for security.

o Intent and relevance to railway cybersecurity: this principle is important because, if security
makes performing jobs really difficult (e.g. for operators or maintainers), people will try to bypass
the security; consequently the security measure becomes ineffective and the risk it covers is not
mitigated. In addition, security measures should not degrade the performance constraints (e.g. real-
time constraints, etc.).

o Principal Stakeholders: Product Provider, System Integrator, Operator and Maintainer.

2. Guidelines for implementation in a railway environment

o For a railway project/application, security risk analysis has to be carried out.

o All those security measures arising from the risk analysis which are technical or organisational
should have a person responsible for their implementation.

o Regarding organisational aspects, the measures should be accepted by all the people
concerned.

o Regarding technical aspects, tests of end-to-end chain, comprising the security equipment, have
to be carried out in order to validate that all the performance constraints are respected.


o For example, an approval committee with security responsibility, the project managers and all those
responsible for implementing the security measures can be tasked with the acceptance of exported
constraints: analysis of the usability of measures (technical and organisational).

o If usability undermines security, the committee with the overall security responsibilities should
be notified in order to identify corrective measures.

3. Tailoring and prioritization

o Need of security should be determined by the security level applied to the considered
zone/conduit, and by risk analysis. In regard to the target security level first, the technical and
organisational requirements have to be defined, taking into account usability.

o Some types of projects will require more usability (e.g. project with Man-Machine Interface for
operation and/or maintenance).

o Demonstration of usability varies depending on the nature of the project; project with high
performance constraints will demand much demonstration by tests.

o Usability should be implemented with a view to avoid compromising essential operational and
safety functions.

4. Relationship with attaining a desired security level

o usability is relevant to all SLs but for high SLs, security should be given priority over usability.

5. Relationship with other principles

o This principle has dependencies with all security principles, for it has to make all security
principles usable.

o If a security measure has impact on organisational or performance aspects, this principle has
priority.

6. Validation rules/requirements for implementation

o During development phase, it is essential to conduct functional validation for technical security
measures alongside usability for the functions.

o During the development phase, it is useful to involve the maintainer and the operator when presenting the
security measures, in order to consult them and gain their acceptance.

o The full implementation of the security mechanism by all operators is a good opportunity to check
for usability. Any violations should be investigated to identify usability issues as a root cause.

o An approval committee with security responsibility, the project managers and all those responsible for
implementing the security measures can be tasked with the action plans: to analyse and deal with any
issue or non-conformity arising from the validation tests.

7. Validation criteria in a railway project at system level

o The validation criterion is the formal acceptance by the user of the security measures (technical
or organisational) and successful tests concerning the respect of performance constraints.


C.11 Promote privacy

1. Rationale and scope

o This principle applies to user as well as system data availability and accessibility. Only collect a user's
personally identifiable information for a given task, store it securely and delete it after the purpose
is fulfilled. For system level privacy, eliminate or minimize the information given by system services.

o This principle is relevant to railway cybersecurity.

o The case of driver data entry is an illustration of the application.

o Principal Stakeholders: Product Provider, System Integrator, Operator and Maintenance.

2. Guidelines for implementation in a railway environment:

o Collect only the minimal personally identifiable data for a given user category in a given
application,

o Store and limit access to any user derived/sourced data,

o Remove or limit system services status and data display/reports such as IP addresses, Version
numbers and Operating System, System Configuration parameters on a given target machine,

o Deliberate obfuscation or misreporting of system/service configuration data should be
considered for SL3-4,

o Use a firewall to block access to other services not relevant to the required transaction,

o Encrypt all critical/sensitive data stored and the keys maintained on a different machine. Also
ensure encryption and decryption take place on a different machine to where the data are stored,

o Enforce request for additional information before granting access to sensitive data,

o Once the purpose for elicitation, storage and processing is achieved, securely delete the user
identifiable data,

o Where user identifiable data are required for a duration of time beyond single instance, protect
the data and limit access to authorized system operators such as for cases for non-repudiation
needs.
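
As a purely illustrative sketch of the data minimization and partial masking described above, the example below keeps only the fields needed for the task, replaces the identity by a pseudonym and blanks most of the identifier when it has to be displayed; the field names and record contents are hypothetical.

    # Illustrative sketch only: collect the minimum identifiable data and mask
    # what is displayed. Field names and values are hypothetical examples.
    import hashlib

    def minimise(driver_record):
        # Keep only what the application needs; replace the name by a pseudonym.
        pseudonym = hashlib.sha256(driver_record["staff_id"].encode()).hexdigest()[:12]
        return {"pseudonym": pseudonym, "shift": driver_record["shift"]}

    def mask(staff_id):
        # Partial display so that a disclosed value is of little use for misuse.
        return "*" * (len(staff_id) - 2) + staff_id[-2:]

    record = {"staff_id": "DRV-19-0432", "name": "J. Doe", "shift": "late"}
    print(minimise(record))            # no name and no raw staff identifier
    print(mask(record["staff_id"]))    # "*********32"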

3. Tailoring and prioritization

o User privacy is often traded off against system usability and convenience.

o Assurance of privacy should be balanced against data criticality and traceability should the
security of the system be compromised. Any tailoring should be principally based on risk of
misuse should data security be compromised by malicious intent. Alternative approach to
tailoring is to show partial user identifiable data with some aspects blanked off to make this
virtually useless for misuse.

o Observing and implementing this principle is recommended for all security levels for components
and systems.


4. Relationship with other principles

o This principle is related to the following principles for its robust implementation and effectiveness:

a) Defence in Depth,

b) Grant Least Privilege,

c) Authenticate Requests,

d) Make Security Usable,

e) Assume Secrets not Safe,

f) Proportionality.

o When prioritising effort for the implementation, all related and synergistic principles should be
considered and weighed.

5. Validation rules/requirements for implementation

o Two classes of evidence for implementation are required namely:

a) User Privacy,

b) System Privacy.

o Develop Test Scripts and conduct penetration testing for User and System data based on the
implementation guidelines provided above.

o Consolidate test performance and capture for validation of Privacy properties of the system.

o The system integrator should document the user and system private data, their locations,
protection measures and outcome of validation tests.

o Promote Privacy for users, system and all the applications.

C.12 Audit and monitor

1. Rationale and scope

o To fulfil the requirement for incident reporting for critical infrastructures, which is additionally
required by the EU NIS Directive, monitoring and audit capabilities for system integrity
(statically for components and dynamically for data flows) are a fundamental service for the operation
and maintenance of all railway control systems.

o This principle is relevant to railway cybersecurity.

o Principal Stakeholders: Product Provider, System Integrator, Operator and Maintenance.

2. Guidelines for implementation in a railway environment

To reach this objective the following capabilities should be supported:

o Distribution of a Unique System time (distribution of time for a unique time reference),

o Unique Identification/location of Components/Network/Users within the log and monitoring data,


o Management of Storage space for temporary and archive (Warning level and report of memory
underrun conditions),

o Implementation of an overload strategy, with marking/identification of loss of log/monitoring data,

o Implementation of User authentication and Access control for log and monitoring data,
equivalent to the defined SL-T of the zone as originator of those data,

o Implementation of the data confidentiality for log and monitoring data, proportionate with the SL-
T of the zone.
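
The capabilities listed above can be illustrated by the following sketch (illustrative only; the field set is a hypothetical example, not a log format defined by this document) of an audit record carrying a unique UTC timestamp and an unambiguous identification of zone, component and user.

    # Illustrative sketch only: an audit record with a single (UTC) time reference
    # and unique identification of zone, component and user. The field set is a
    # hypothetical example, not a normative log format.
    import json
    from datetime import datetime, timezone

    def audit_record(zone, component_id, user, event, outcome):
        return json.dumps({
            "time_utc": datetime.now(timezone.utc).isoformat(),  # unique system time
            "zone": zone,
            "component": component_id,
            "user": user,
            "event": event,
            "outcome": outcome,
        })

    print(audit_record("trackside-zone-3", "ixl-007", "maintainer-12", "login", "failed"))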

3. Tailoring and prioritization

o Be aware that security functions are auxiliary and not the main functions of a railway system. Security
functions support a functional or safety architecture of a system for a dedicated application.

o The requirements for the critical functions should not be compromised by enhanced monitoring.
For this reason, those requirements should always be considered and if needed get priority. For
example, upload of comprehensive monitoring data can reduce communications bandwidth and
capabilities in a railway application.

4. Relationship with other principles

o For the realization of the monitoring and audit principle, those principles listed hereafter are
explicitly helpful to generate secure solutions as part of security in depth approach:

o Grant Least Privilege,

o Authenticate Requests,

o Control Access,

o Make Security Usable,

o Assume Secrets Not Safe,

o Promote Privacy,

o Proportionality.

o Monitoring and Audit belong to the key properties for a security in depth approach and serve as
a prerequisite to verify authentication of users, use control and verification of the Integrity of the
System.

o Monitoring generates the forensic evidence for successful implementation of other principles.
The extent of monitoring implemented therefore should be proportionate to the evidence
required for protection/successful implementation of other principles.

5. Validation rules/requirements for implementation

o The following key properties are identified for the verification of a sufficient and robust Monitoring
and audit solution of a railway control system.

— A system review to set out the criteria for adequacy of monitoring.

— Completeness (are data available for all controllers and interfaces?).


— Unique time (correlated to all entries).

— Recovery concept for robustness (fall-back solution for memory limitation, overload,
temporary and final storage).

— Lack of misleading monitoring (mismatch of events and data).

C.13 Proportionality principle

1. Rationale and scope:

o Balance security and utility; security is about trade-offs. This principle is intended to answer whether
a given security measure is worth it, weighing security against utility/usability.

o This principle is highly relevant to the railway cybersecurity.

o This is a universal principle that relates to all security risks in the railway environment from
SCADA and Passenger Information Systems to the safety critical ERTMS and CBTC systems.

o Principal Stakeholders: System Integrator.

2. Guidelines for implementation in a railway environment:

o Identify security versus usability/experience conflicts in the system design,

o Establish and record value, sensitivity and criticality of information and control-command assets,

o Identify and assess security vulnerabilities for component and system services and evaluate
economic loss associated with modification, denial of service or disclosure/misuse of information
versus cost of implementation for corresponding risk control mechanisms,

o Risk control/mitigation mechanisms comprise four broad strategies namely:

o Avoid,

o Treat (eliminate, mitigate, control, etc.),

o Transfer (to other entities),

o Tolerate/Accept,

o In assessing information security risks, consider vulnerability, probability/frequency of
attacks/violations and expected loss,

o Balance Security versus Utility bearing in mind the attackers’ resources, the cost of risk control
and acceptable level of residual risk,

o Security is concerned with the effectiveness of a defence mechanism in a specific environment
and application. Thus, only those security prevention or risk control mechanisms that entail less
cost than the untreated risk should be considered for implementation,

o Prudent assessment of “Due Care” and implementation of broadly accepted good/best practice
information security safeguards may also be an alternative to economic cost-benefit evaluation
approach,

o The constraints for this principle will arise from an economic risk evaluation or contrasting with
the best practice approaches, bearing in mind the mission and the environment as appropriate.
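
A minimal, purely illustrative calculation of the kind such an economic evaluation may rely on is sketched below, using the common annual-loss-expectancy comparison; all figures are hypothetical and the formula is one possible approach, not one prescribed by this document.

    # Illustrative sketch only: comparing expected annual loss with the cost of a
    # risk control mechanism. All figures are hypothetical example values.
    single_loss_expectancy = 250_000      # estimated loss per successful attack (EUR)
    annual_rate_of_occurrence = 0.2       # estimated successful attacks per year
    annual_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence   # 50 000 EUR/year

    control_cost_per_year = 30_000        # yearly cost of the proposed risk control
    residual_loss_with_control = 10_000   # expected remaining annual loss with the control

    net_benefit = annual_loss_expectancy - residual_loss_with_control - control_cost_per_year
    print(net_benefit)   # 10 000 EUR/year > 0: in this example the control is proportionate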


3. Tailoring and prioritization:

o This principle is fundamentally tailorable since only security risk control mechanisms are
selected that entail costs proportionate to the loss arising from untreated risks,

o The principle promotes rationality versus over-reaction to irrational fears about security,

o This principle applies to all Security Levels. However, due to uncertainties in risk evaluation, the
cost may not be the deciding factor for SL3-4 applications, and the precautionary principle should
also be a decision criterion for risk control.

4. Relationship with other principles

o This principle is relevant to the viability of all other security risk principles.

o Proportionality should be an underpinning rule in implementing all other security principles.

5. Validation rules/requirements for implementation

o The evidence for compliance is the existence and records relating to a transparent and objective
economic evaluation of the loss arising from untreated risks versus costs associated with the
viable risk control mechanisms.

o If a risk control mechanism is worse or costlier than the problems it strives to solve, it is not
desirable or rational bearing in mind all uncertainties. No such cases should be permitted unless
supported by the Precautionary Principle.

C.14 Precautionary principle

1. Rationale and scope:

o This principle concerns making decisions on cybersecurity design in the face of high uncertainty or a lack
of adequate scientific knowledge. When an activity or threat raises a risk of harm to humans or the
environment, precautionary measures should be taken even if some cause and effect
relationships are not fully established scientifically/empirically.

o This principle is relevant to railway cybersecurity because of the long deployment life in railway
domain. Therefore, it is recommended to apply at least the precautionary principle for essential
devices instead of the proportionality principle.

o The likelihood of terror related cyber-attack on railway signalling and control command could
justify adopting this principle.

o Principal stakeholders: System Integrator, Operator and Maintenance

2. Guidelines for implementation in a railway environment

o This principle underpins the concerns that may result in implementation of preventative and
protection mechanisms in the design of railway IT systems and services.

o The principle can be applied in Strong and Weak variants.

o The Strong Precautionary Principle justifies security measures and costs in the face of serious
concerns over risk to health, safety, or the environment, even if the supporting evidence is
speculative.


o The Weak Precautionary Principle still applies when certain mechanisms are deemed necessary
but as yet unsupported by empirical evidence.

o The case for major concerns over known control-command vulnerabilities or major threats
should be documented in support of adopting this principle.

o The protection and response mechanisms devised under this principle pertinent to the
perceived, yet unproven risks of attack should be stated.

o The implementation of current best practice mechanisms in the specific context of application,
based on a Reference Model constitute one approach to compliance with Precautionary
Principle.

o The Precautionary Principle can be employed to justify implementing a suite of mechanisms or
adopting a suite of cybersecurity design principles in response to perceived threats or known
vulnerabilities. This applies even when no historic precedent regarding risk to health, safety, or
the environment can be cited or the supporting evidence is speculative.

3. Tailoring and prioritization

o Considering and implementing this principle is highly recommended for SL3-4 components and
systems.

4. Relationship with other principles

o This principle is related to and could underpin all other cybersecurity design principles for their
robust implementation and effectiveness:

1) Secure the weakest link,

2) Defend in Depth,

3) Fail Secure,

4) Grant Least Privilege,

5) Economize mechanism,

6) Authenticate Requests,

7) Control Access,

8) Make Security Usable,

9) Assume Secrets not Safe,

10) Promote Privacy,

11) Audit and Monitor,

12) Proportionality.

o When prioritising effort for the implementation, all related and synergistic principles should be
considered and weighed.


5. Validation rules/requirements for implementation

o The use of this principle and supporting arguments should be documented in the Case for
Security or equivalent records in the application.

C.15 Continuous protection

1. Rationale and scope

o Scope: this principle applies across the entire range of railway information technology, control
and command systems that require security protection.

o This is a universal principle that relates to all security risks in the railway environment on a
continual time basis.

o Principal Stakeholders: Product Provider, System Integrator, Operator and Maintenance.

2. Guidelines for implementation in a railway environment

o All components and data used to enforce the security policy should have uninterrupted
protection that is consistent with the security policy and the security architecture assumptions.

o No assurances that the system can provide the specified confidentiality, integrity, availability,
and privacy protections for its design capability can be made if there are gaps in the protection.

o Any assurances about the ability to secure a delivered capability require that data and
information are continuously protected.

o Employ a continuous data protection (CDP) mechanism that allows the railway organizations to
continuously capture and track data modifications, automatically saving every version of
the data that is created locally or at a target repository.

o Ensure that there are no time periods during which data and information are left unprotected
while under control of the system.

o Data and information should be protected during the creation, storage, processing or
communication.

o Data and information should be protected during system initialization, execution, failure,
interruption and shutdown.

o Data and information should be protected during system and network maintenance and
upgrades.

o Continuity of protection should be ensured at Data Stack, Application Stack, Server Stack,
Network Stack, Physical infrastructure and policies and procedures.

o Data and information should be completely erased, and erasure verified during system
decommissioning and disposal.
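
The continuous data protection mechanism mentioned above can be sketched, for illustration only, as a store that records every modification as a new timestamped version instead of overwriting the previous state; the in-memory repository and key names are hypothetical stand-ins.

    # Illustrative sketch only: every modification is captured as a new,
    # timestamped version, so earlier states remain recoverable.
    from datetime import datetime, timezone

    class VersionedStore:
        def __init__(self):
            self._versions = {}   # key -> list of (timestamp, value)

        def write(self, key, value):
            stamp = datetime.now(timezone.utc).isoformat()
            self._versions.setdefault(key, []).append((stamp, value))

        def history(self, key):
            return list(self._versions.get(key, []))

    store = VersionedStore()
    store.write("working_timetable", "version 1")
    store.write("working_timetable", "version 2 (modified)")
    print(store.history("working_timetable"))   # both versions remain available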

3. Tailoring and prioritization

o This principle is sensitive to the security level and for SL3-4, almost 100 % protection is required
during all operational phases in all zones and conduits.

o Continuity of protection is proportionate with the SL for all zones and conduits.


4. Relationship with other principles

o This principle is related to the following secure design principles:

1 - Secure the weakest link,

2 - Defend in depth,

3 - Fail secure,

4 - Grant least privilege,

6 - Authenticate requests,

7 - Control access,

8 - Assume secrets not safe,

9 - Make security usable,

11 - Audit and monitor,

12 - Proportionality principle,

13 - Precautionary principle,

15 - Secure Metadata Management,

16 - Secure Defaults.

5. Validation rules/requirements for implementation

o Penetration testing should be carried out during system initialization, execution, failure,
interruption, maintenance and shutdown to ensure continuity of protection for all data and
information.

6. Criteria for validation

o Acceptable penetration testing outcomes.

C.16 Secure metadata

1. Rationale and scope

o The Scope (from NIST SP 800-160, F.2.2): This principle states that metadata has to be
considered as top-class objects with respect to security policy when the policy requires complete
protection of information or it requires the security subsystem to be self-protecting.

o This principle is driven by the recognition that a system, subsystem, or component cannot
achieve self-protection unless it protects the data it relies upon for correct execution.

o Data are generally not interpreted by the system that stores it. It may have semantic value (i.e.
it comprises information) to users and programs that process the data. In contrast, metadata are
information about data, such as a file name or the date when the file was created.

o Principal responsible Duty Holders: Product Provider, System Integrator.


2. Guidelines for implementation in a railway environment

o All data and metadata relating to critical items of infrastructure and rolling stock should be
protected.

o Access to and modification of meta-data should be restricted to the highest level of access
control.

3. Tailoring and prioritization

o The securing of metadata pertaining to critical sub-systems and components of the railway
system has the highest priority since the trust in the security of the operation of such sub-
systems is dependent on the integrity of the metadata.

4. Relationship with other principles

o This principle is related to and could underpin following cybersecurity design principles:

1. Grant Least Privilege,

2. Authenticate Requests,

3. Secure Defaults.

5. Validation rules/requirements for implementation

o Access rights of different roles to metadata (e.g. file date) should be organized according to
‘least privilege principle’.

o If feasible, access to file metadata should be restricted to the operating system itself. Otherwise
it should be restricted to the administrator role.

o The respective hardening and the resulting access rights should be documented.

6. Criteria for validation

o Penetration testing to ensure metadata in all critical sub-systems and components is protected
against unauthorized access.

C.17 Secure defaults

1. Rationale and scope

o This principle states that the default configuration of a system should reflect a restrictive and
conservative enforcement of security policy.

o It applies to the initial configuration of a system as well as to the security engineering and design
of access control and other security functions that should follow a “deny unless explicitly
authorized” strategy.

o A good design of the initial status of each component is essential to the resilience of the system
against cyber-attacks.

o Principal Stakeholders: Product Provider, System Integrator, Operator and Maintenance.


2. Guidelines for implementation in a railway environment

o Any “as shipped” configuration of a railway system, subsystem, or component should not aid in
the violation of the security policy.

o If the protection provided by the “as-shipped” product is inadequate, the stakeholder should
assess the risk of using it prior to establishing a secure initial state.

o Examples of inadequate initial state are:

a) built-in accounts with high privileges (e.g. root, admin or superuser);

b) availability of account details (address, username, passwords) in the installation procedure documents or in the user manuals;

c) minimal or absent default security policy (e.g. strong password policy disabled by default).

o A system designed according to secure defaults principle will operate “as-shipped” in such a
way to prevent security breaches before the intended security policy of the system is
established.

o A valid implementation of this principle can be to prevent the system from operating at all, until
the security policy is fully configured by the operational user.

o Adherence to the principle of secure defaults guarantees a system is established in a secure state upon successfully completing initialization. Moreover, in situations where the system fails to complete initialization, either it will perform a requested operation using secure defaults or it will not perform the operation (see the illustrative sketch after this list).
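
As an illustration only (not part of this specification), the following minimal sketch shows a “deny unless explicitly authorized” access decision: in the “as-shipped” state the allow list is empty, so every request is rejected until the operational user has configured the security policy. All names in the sketch are hypothetical.

from typing import NamedTuple, Set, Tuple

class AccessRequest(NamedTuple):
    subject: str   # e.g. a user role
    action: str    # e.g. "read" or "write"
    resource: str  # e.g. a device or data object

# Shipped default: an empty allow list, i.e. nothing is authorized yet.
ALLOW_RULES: Set[Tuple[str, str, str]] = set()

def is_authorized(request: AccessRequest) -> bool:
    """Default-deny decision: only explicitly configured requests pass."""
    return (request.subject, request.action, request.resource) in ALLOW_RULES

# Hypothetical usage after the operator has configured the policy:
# ALLOW_RULES.add(("maintainer", "read", "diagnostic-log"))
# is_authorized(AccessRequest("maintainer", "read", "diagnostic-log"))   # True
# is_authorized(AccessRequest("guest", "write", "interlocking-config"))  # False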

3. Tailoring and prioritization

o Considering and implementing this principle is highly recommended for all security levels of
components and systems.

4. Relationship with other principles

o This principle is related to and could underpin following cybersecurity design principles:

— Fail Secure,

— Grant Least Privilege,

— Authenticate Requests,

— Control Access,

— Make Security Usable,

— Continuous protection.

o When prioritising effort for the implementation, all related and synergistic principles should be
considered and weighed.

5. Validation rules/requirements for implementation

o Initial configuration of the system and, recursively, of each subsystem and components should
be fully documented.


o Procedures to modify such initial configurations should be available only in a controlled distribution environment (e.g. not freely available online or distributed during tender phase).

o Initial configuration management should be documented in the Case for Security or equivalent
records in the application.

6. Criteria for validation

o Testing of all operational scenarios should prove the component and system defaults are secure
against unauthorized access.

C.18 Trusted components

1. Rationale and scope

o The principle of trusted components states that a component has to be trustworthy to at least a
level commensurate with the security dependencies it supports. This underpins the degree that
a component or sub-system is trusted to perform its share of security functions by other
components.

o Principal stakeholders: Product Provider, System Integrator, Operator and Maintenance.

2. Guidelines for implementation in a railway environment

o The Trusted Component principle is universal and largely domain independent. In this spirit, it applies to railway applications in their entirety.

o All new railway systems incorporating communications and computing and processing
components should ensure trusted components are employed in the realization of the service,
control and command functions.

o The principle enables the composition of components such that trustworthiness is not
inadvertently diminished and where consequently the trust is not misplaced.

o The principle demands metrics by which the trust in a component and the trustworthiness of a
component can be measured on the same abstract scale.

o The principle is particularly relevant in systems and components in which there are complex
chains of trust dependencies.

o The principle also applies to a compound component that consists of several subcomponents
(e.g. a subsystem), which may have varying levels of trustworthiness.

o The overall trustworthiness of a compound component is that of its least trustworthy subcomponent as a conservative measure. It may be possible to provide a security engineering rationale that the trustworthiness of a particular compound component is greater than the conservative assumption; however, any such rationale should reflect logical reasoning based on a clear statement of the trustworthiness goals, and relevant and credible evidence.

3. Tailoring and prioritization

o This principle is fundamental to security assurance and should be an integral consideration in the adoption of security principles pertinent to any railway application from the viewpoint of security and RAMS performance.

o Trustworthiness should not be misconstrued as enhanced availability or layering in a defence in depth approach to the secure design of components and systems.


4. Relationship with other principles

o As a fundamental principle, trusted component is related to all other cybersecurity design principles in this annex.

5. Validation rules/requirements for implementation

o The trustworthiness metrics should be derived and employed in the system design and
configuration.

6. Criteria for validation

o The least trustworthy component or sub-system should be regarded as the weakest link in the
security architecture that would undermine the existence of higher trustworthy components and
systems.


Annex D
(informative)

Safety and security

D.1 Introduction

The discussion on the relationship between cybersecurity and safety has produced many different and
contradictory recommendations. In CLC IEC/TR 63069 some general guidance for standardization has
been worked out, which is used as the basis in this Annex D, which aims at a more specific derivation
and justification of basic principles for the railway field.
Concerning terminology, ‘security’ is used in this annex synonymously for cybersecurity unless physical
security or other issues are meant. In the same way, ‘safety’ is used for functional safety. It is assumed
that the reader is familiar with the basic safety and security concepts as stated, e.g., in standards such
as EN 50126 or IEC 62443 series.

D.2 The differences between safety and security

Safety and security have:

— complementary goals: safety mainly seeks to protect people or the environment from malfunctions
of automation systems, while security aims to protect the technical systems from attacks from the
environment

— different regulatory authorities, e.g. the Federal Railway Authority (EBA) and Federal Office for
Information Security (BSI) in Germany, the National Cybersecurity Agency (ANSSI) in France, the
European Union Agency for Railways (ERA) and the European Union Agency for Network and
Information Security (ENISA) in Europe, etc.

— different concepts e.g. what is a hazard in safety is a threat in security

— different communities, e.g. journals, conferences and standardization committees are mostly
separate

— different standards, e.g. the EN 50126 series for RAMS (including safety) and the ISO 27000 or
IEC 62443 series for security.

In safety, frequent changes should be avoided because of the cost of the safety demonstration. In security, updates should be easy in order to be able to patch the system in a timely manner. There is therefore a trend to segregate security from safety as far as possible.
Methods and solutions are also different, as are requirements, which are often conflicting. Let us take as a simple example an emergency message (e.g. to immediately shut down or stop a system). From the safety perspective the message should be transmitted as fast as possible and the reaction should be executed immediately. From a security perspective the message should be authenticated to prevent masquerade, which might lead at least to denial of service if an attacker could simply send emergency messages. But the calculation and checking of cryptographic codes consumes time and delays the emergency message and the reaction. Alternatively, emergency messages could be pre-calculated at the sender side to save some time, but this may open the door for replay attacks. Another possibility might be the cyclic sending of heartbeat messages, which would trigger an emergency reaction if they are not received in time; the sender would then simply stop sending heartbeats to raise the alarm, but the reaction delay would depend on the cycle time. In summary, the trade-off in safe and secure design is not easy and it can sometimes be hard to find an optimal solution.
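
Purely to illustrate this trade-off (and not as a recommendation of a particular design), the following minimal sketch authenticates an emergency message with an HMAC and a sequence number; the key management, message format and timing budget are simplifying assumptions.

import hmac
import hashlib

SHARED_KEY = b"example-key-distributed-out-of-band"  # hypothetical key management

def protect(sequence: int, payload: bytes) -> bytes:
    """Build an authenticated emergency message: the sequence counter guards
    against replay, the HMAC guards against masquerade."""
    header = sequence.to_bytes(4, "big") + payload
    return header + hmac.new(SHARED_KEY, header, hashlib.sha256).digest()

def accept(message: bytes, last_sequence: int) -> bool:
    """Verify authenticity and freshness before triggering the reaction;
    this verification is the additional latency discussed above."""
    header, tag = message[:-32], message[-32:]
    if not hmac.compare_digest(tag, hmac.new(SHARED_KEY, header, hashlib.sha256).digest()):
        return False  # masqueraded or corrupted message
    return int.from_bytes(header[:4], "big") > last_sequence  # reject replayed messages

# Hypothetical usage:
# msg = protect(42, b"EMERGENCY_STOP")
# accept(msg, last_sequence=41)  # True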


So, we should conclude that safety and security are different and that they cannot easily be merged.
Furthermore, security cannot simply be regarded as an add-on to safety or vice versa.
Principle 1: Safety and security are different and should be treated as such.

D.3 Security from a safety perspective

Safety relies on several environmental conditions or influences that need to be controlled in order to
guarantee safety. These are listed in 7.2 of EN 50129:2018 and form a mandatory subclause, “assurance
of safety with adverse external influences”, in the technical safety report, see Figure D.1. One of these
aspects to be covered is access protection and this is where security has its interface with safety.

Figure D.1 — Security as an environmental condition for safety

The view from a security perspective, e.g. IEC 62443, is similar. Here safety is viewed as an essential
function that needs to be protected. Other essential functions are operational functions or availability. This
means that safety functions can only fulfil their intended use in an appropriate security environment. And
this also explains why the UK Department of Transport is promoting “If it is not secure, it is probably not
safe.” This leads to
Principle 2: The security environment should protect essential functions, incl. safety.

D.4 Co-engineering of safety and security

Because of the many differences it is not reasonable to integrate safety and security. However, the
processes and lifecycles need to be coordinated and appropriate interfaces need to be established.
In particular, in safety risk analysis, hazards resulting from security problems need to be identified, and
they are then treated as threats in the cybersecurity threat risk assessment. Here the safety engineer
needs to provide support in order to assess the safety implications during the cybersecurity assessment,
but the derivation of appropriate security countermeasures is the responsibility of security engineers in
accordance with security standards. This gives


Principle 3: Cybersecurity Threat and Risk Analysis is the main interface with Safety Analysis.
Finally, conflicts between the identified safety and security measures should be resolved. During the
safety risk assessment, the safety assessor assesses the safety implications of the SuC design which
includes the implementation of its security requirement (please note that the Safety Assessor does not
assess the security of the designed solution). Here it can be helpful if the security management supplies
evidence in a manner compatible with safety management, e.g. trusted verification documents with clearly
stated assumptions and application rules, so that safety and security assessments can be decoupled.
This generally results in
Principle 4: Separate security and safety as far as possible but coordinate them effectively.
This also holds for architecture principles or maintenance processes such as SW updates. If safety and security were tightly integrated, then any change in security functions might invalidate the safety case.
Here an effective strategy could be to rely from a safety case point of view only on those parts of the
security functionality that create a secure environment and on the application rules. So if both the security
functionality and the application rules remain unchanged, the safety case may remain valid even if the
security SW is updated. Nevertheless, a justification that the changes have no effect on safety should be
provided.
This is also recommended by the revised EN 50129, which recommends referencing security analyses in the safety case only. In order to ease the integration, as well as compatibility, it is recommended to base security considerations on established international standards such as ISO 27000 or IEC 62443. Several analyses, e.g. by CENELEC SGA16 or Shift2Rail, have also recommended IEC 62443 as the future baseline security standard for railway automation.
Principle 5: Security should be evaluated on the basis of international standards, e.g. IEC 62443.

D.5 Quantification of security

Safety-related security problems occur because of threats to the integrity of the system. These threats
arise from attackers who exploit vulnerabilities in the security environment. Attackers act intentionally,
using all the information about the system that they can obtain, according to a certain state of the art in
attacking or hacking. The degree might be different, depending on the attacker. So, differently from safety,
no probability or rate of an attack exists. The similarity to safety is that the causes of security threats are
similar to systematic faults in safety. Vulnerabilities often originate from errors in the security functionality,
mainly SW, which is similar to SW faults in safety. It follows that
Principle 6: It is infeasible to evaluate the Security Risk probabilistically.
The major difference is that in security an attacker is needed to exploit the vulnerability (and the SL is
related to the type of attacker), while in safety certain conditions in the operational environment trigger
the SW fault, resulting in a system failure. So security requirements need to be established in a similar
way to safety integrity requirements, i.e. a scheme of target levels similar to safety integrity levels (SIL).

D.6 The relationship between safety integrity levels and security levels

Security levels (SL) according to IEC 62443 are defined with respect to the type of attacker. SL 1
represents unintentional errors or foreseeable misuse only, while SL 2, SL 3 and SL 4 relate to intentional
attacks in which the attacker possesses increasing levels of knowledge, motivation and resources. As
safety considers security as an environmental condition it is immediately evident that measures according
to any particular SIL do not cover measures against intentional attacks. However, errors and foreseeable
misuse also need to be addressed by safety systems, so any safety system should also cover SL 1, at
least for requirements related to integrity. But for other SLs there is no automatic correspondence
between SL and SIL as the SL will always depend on the security environment. And it should also be
noted that security requirements cannot be fulfilled only by IT measures, but physical security measures
are also necessary. In summary the following principle is established
Principle 7: Safety and Security Target measures should not be coupled.


However, there is a general relation between safety and security approaches. In safety there is the
general rule that the first fault should not be hazardous, see e.g. EN 50129. Depending on the system
design only a second similar fault may cause a failure. So many safety designs rely on detection and
negation of the first fault.
In security a similar concept exists: defence in depth. This means that no single security measure should
be regarded as sufficient. There should always be a second line of defence which protects against an
attack. This does not mean that both security measures need to have the same effectiveness, but even
for the most effective security measure there should be a fall-back. This implies that security measures
should also be monitored for their effectiveness.

D.7 Responsibility for security

As in safety, there is usually no single individual or body fully responsible for all security aspects. It is a
joint effort of the operators (often called asset owners in security), the system integrators (who supply
complete systems) and the suppliers (who sell components). But unlike safety, the evaluation processes
operate at a higher frequency in security. Even without any incident it is good practice to update threat
risk assessments at least once per year and to feed the results forward and backward to the stakeholders
at the interfaces. So, the conclusion is
Principle 8: Security is a collaborative continuous effort.
And similarly to safety, effective security protection relies heavily on the company culture. Many successful attacks show a similar pattern:
— first, the attacker gains access to the system (network),

— then the attacker explores the system, often trying to gain higher privileges, until

— finally, the attacker carries out the attack.

Access or higher privileges can be obtained by exploiting vulnerabilities (e.g. weak passwords) or by
social means such as phishing. Often, the attacker cannot achieve his goals without operators or
employees who breach security rules or are complacent. So it is very important that security awareness
is promoted and trained as part of the company culture.


Annex E
(informative)

Risk acceptance methods

E.1 Introduction

This annex contains examples of risk acceptance methods that may be used in initial or detailed risk
assessment e.g. risk matrices (see 6.4 and 7).
For each method the following information should be documented:
— Impact Assessment

— Likelihood Assessment

— Risk Acceptance

— Justification

Justification and references may also be documented elsewhere but should be publicly available.

E.2 Example based on EN 50126-1

E.2.1 Introduction

The risk is the combination of the likelihood and the severity. Table E.1 below has been taken from EN 50126-1; note that EN 50126-1 uses the terms “frequency” or “probability”, whereas cybersecurity uses the term “likelihood”.
Table E.1 — Risk acceptance categories acc. EN 50126-1

Frequency of occurrence of an accident (caused by a hazard) is given from top to bottom; severity of an accident (caused by a hazard) from left to right: Insignificant | Marginal | Critical | Catastrophic. The cells give the risk acceptance categories.

Frequent          | Undesirable | Intolerable | Intolerable | Intolerable
Probable          | Tolerable   | Undesirable | Intolerable | Intolerable
Occasional        | Tolerable   | Undesirable | Undesirable | Intolerable
Rare              | Negligible  | Tolerable   | Undesirable | Undesirable
Improbable        | Negligible  | Negligible  | Tolerable   | Undesirable
Highly improbable | Negligible  | Negligible  | Negligible  | Tolerable

In order to use these risk categories in security, a mapping of frequency and severity to the corresponding cybersecurity categories has to be performed.
E.2.2 Impact assessment

Table E.2 below shows a mapping between Severity from EN 50126-1 and security consequence
expressed in terms of Railway Control Priority.


Table E.2 — Mapping severity categories acc. EN 50126-1 to cybersecurity severity

Severity category (EN 50126-1) | Severity description (EN 50126-1) | Severity description (Cybersecurity)
no impact | No injury | No impact
Insignificant | single minor injury | confidentiality
Marginal | multiple minor injuries | Availability, moderate impact on service
Critical | single fatality and/or single severe injury | System integrity and/or major impact on service
Catastrophic | fatalities and/or multiple severe injuries | System integrity and/or severe service impact

E.2.3 Likelihood assessment

Instead of frequency or probability, in cybersecurity the term likelihood is used. The likelihood is evaluated in terms of accessibility, by assessing the following criteria, which are detailed in Table E.3:
— Expertise level (EXP)

— Equipment Needed (EQP)

— Window of Opportunity (WOO)

— Time needed (TIM).

Table E.3 — Likelihood assessment criteria

EXP | EQP | WOO | TIM | Rating / likelihood
Experts | multiple bespoke equipment needed | short | long | low
Expert | specialized equipment | moderate | moderate | medium
Proficient | specialized COTS | long | long | high
Laity | standard equipment | unlimited | very short | very high

Table E.4 below shows the mapping between the likelihood in terms of accessibility and the probability according to EN 50126-1. The likelihood can only be estimated based upon the accessibility; the rationale is to estimate the likelihood of a successful attack. The mapping is indicated in Table E.4:


Table E.4 — Mapping Likelihood to accessibility and Probability

Likelihood in terms of accessibility | Probability according to EN 50126-1
public access | frequent
very easy | probable
easy | occasional
medium | rare
hard | improbable
very hard | highly improbable

The likelihood (accessibility) is derived from the ratings of the four parameters EXP, EQP, WOO and TIM as indicated in Table E.3, i.e. the likelihood is the combined result of the individual ratings of these four parameters.
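
For illustration only, and assuming the tables of this subclause are used verbatim, the following minimal sketch combines an accessibility rating and a severity category (obtained via Table E.2) into the EN 50126-1 risk acceptance category of Table E.1.

# Table E.4: likelihood in terms of accessibility -> probability acc. EN 50126-1
ACCESSIBILITY_TO_PROBABILITY = {
    "public access": "frequent",
    "very easy": "probable",
    "easy": "occasional",
    "medium": "rare",
    "hard": "improbable",
    "very hard": "highly improbable",
}

# Table E.1: risk acceptance category per probability (row) and severity (column)
SEVERITIES = ("insignificant", "marginal", "critical", "catastrophic")
RISK_ACCEPTANCE = {
    "frequent":          ("Undesirable", "Intolerable", "Intolerable", "Intolerable"),
    "probable":          ("Tolerable",   "Undesirable", "Intolerable", "Intolerable"),
    "occasional":        ("Tolerable",   "Undesirable", "Undesirable", "Intolerable"),
    "rare":              ("Negligible",  "Tolerable",   "Undesirable", "Undesirable"),
    "improbable":        ("Negligible",  "Negligible",  "Tolerable",   "Undesirable"),
    "highly improbable": ("Negligible",  "Negligible",  "Negligible",  "Tolerable"),
}

def risk_acceptance(accessibility: str, severity: str) -> str:
    """Map an accessibility rating and a severity category to the EN 50126-1
    risk acceptance category (Table E.4 followed by Table E.1)."""
    probability = ACCESSIBILITY_TO_PROBABILITY[accessibility]
    return RISK_ACCEPTANCE[probability][SEVERITIES.index(severity)]

# Hypothetical usage: an easily accessible threat with critical severity
# risk_acceptance("easy", "critical")  # "Undesirable"
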
E.2.4 Risk acceptance

Risk acceptance is based upon risk analysis, definition of mitigations, and final risk assessment.
The objectives of risk analysis / assessment are:
— to identify threats associated with the system,

— to identify the vulnerabilities that allow the threats to materialize,

— to determine the risk associated with the threats and vulnerabilities,

— to identify the countermeasures that have to be implemented in the design in order to reduce the risk
to an acceptable level.

Based upon the initial conceptual system architecture, existing safety and hazard assessments and the functional specification of the SuC, a risk identification process is undertaken to provide outputs consisting of the Target Security Levels (SL-T) of the SuC and a conceptual zonal model which identifies the risk-based system Security Levels (SL) and boundary protection.
E.2.5 Justification

The justification of this methodology is that, through this procedure, a mapping of the security risk analysis to the EN 50126-1 methodology is achieved.
The justification of the result of the security risk analysis is based upon the following three principles:
— Verification

— Validation

— Consideration of security within the Safety Case.

All these three principles are supported by the threat log.

E.3 Example method – system integrator

E.3.1 Introduction

This method is used by a railway system integrator and turn-key supplier as a tool in their solution security
risk assessment, mainly for large scale projects, both metro and main line business. Its structure is based
on ISO/IEC 27005.


E.3.2 Impact Assessment

Table E.5 below provides an example of an impact assessment matrix for a system integrator.
Table E.5 — Impact assessment matrix – Example 2

Category A
Availability: Major interruption of operation affecting a network or a fleet, or loss of service to more than 500.000 people for a long time (see NOTE).
Integrity (Safety): Catastrophic accident, typically affecting a large number of people and leading to multiple fatalities.
Confidentiality: Loss of security related information, e.g. credentials, giving direct access to the system and leading to catastrophic safety, availability or business impacts.
Integrity (Business): Catastrophic business impact possibly leading to bankruptcy or loss of license of operator.

Category B
Availability: Major interruption of operation affecting a network or a fleet, or loss of service to more than 500.000 people for a significant time (see NOTE), or of a line or station or a few vehicles for a long time.
Integrity (Safety): Critical accident, typically affecting a small number of people and leading to a single fatality.
Confidentiality: Loss of security related information; no direct access to the system is possible (physical protection); attacker could perform commands leading to at least critical availability, safety and business impacts.
Integrity (Business): Critical business impact possibly leading to severe impact on revenue or earnings (>10 % on annual basis).

Category C
Availability: Significant interruption of operation affecting a network or fleet or more than 500.000 people for a short time (see NOTE), or of a line or station or a few vehicles for a significant time.
Integrity (Safety): Safety implications, typically leading to injuries requiring hospitalization.
Confidentiality: Loss of security related information; no direct access to the system is possible (physical protection); attacker cannot perform any critical safety-related commands; for example, only read access to diagnostic data is possible; loss of data under data protection law or of commercially sensitive data.
Integrity (Business): Significant business impact possibly leading to substantial impact on revenue or earnings (on annual basis).

Category D
Availability: Significant interruption of operation of a line or station or a few vehicles for a significant time.
Integrity (Safety): Minor safety implications, typically leading to injuries without hospitalization.
Confidentiality: Loss of non-security relevant data; data are not under data protection; attacker can make commercial use of the data by combining them with other information.
Integrity (Business): Marginal business impact.

Category E
Availability: Typically, no influence.
Integrity (Safety): Typically, no safety implications.
Confidentiality: Loss of non-security relevant data; data are not under data protection.
Integrity (Business): Negligible business impact.

NOTE The applicable down times are application specific, e.g. a long time is 1 week for some mainline networks but 1 day for some metro networks, and a significant time is either 1 day or 1 h, respectively.


E.3.3 Likelihood assessment

Likelihood is estimated from scales based on exposure and vulnerability of the asset.
Table E.6 below provides an example of likelihood assessment for a system integrator.
Table E.6 — Likelihood assessment matrix – Example 2

Rating | Exposure (EXP) | Vulnerability (VUL)

Rating 1
Exposure (EXP): Highly restricted logical or physical access for attacker, e.g.
— highly restricted network and physical access, or
— product or components cannot be acquired by attacker or only with high effort
Vulnerability (VUL):
— Successful attack is only possible for a small group of attackers with high hacking skills (high capabilities needed)
— Vulnerability is only exploitable with high effort, and if strong technical difficulties can be solved; non-public information about inner workings of the system is required
— State of the art security measures to counter the threat
— High chance for attacker to be traced and prosecuted

Rating 2
Exposure (EXP): Restricted logical or physical access for attacker, e.g.
— internal network access required, or
— restricted physical access, or
— product or components can be acquired by attacker with medium effort
Vulnerability (VUL):
— Successful attack is feasible for an attacker with average hacking skills (medium capabilities needed)
— Vulnerability is exploitable with medium effort, requiring special technology, domain or tool knowledge
— Some security measures to counter the threat
— Medium chance for attacker to be traced and prosecuted

Rating 3
Exposure (EXP): Easy logical or physical access for attacker, e.g.
— Internet access sufficient, or
— public physical access, or
— attacker has access as part of daily work, operation, or maintenance activities, or
— product or components can be acquired by attacker with low effort
Vulnerability (VUL):
— Successful attack is easy to perform, even for an unskilled attacker (little capabilities needed)
— Vulnerability can be exploited easily with low effort, since no tools are required, or suitable attack tools freely exist
— No or only weak security measures to counter the attack caused by the threat
— Low chance for attacker to be traced and prosecuted

Likelihood index L is calculated from Exposure and Vulnerability by L = EXP + VUL − 1.


E.3.4 Risk acceptance

Table E.7 below provides an example of risk acceptance matrix assessment for a system integrator.
The risk acceptance matrix is built on a 5x5 Risk Matrix.
Table E.7 — Risk acceptance matrix – Example 2

Impact
E D C B A
Likelihood
1 Low Low Low Low Low
2 Low Low Low Medium Medium
3 Low Low Medium Medium High
4 Low Medium Medium High Extreme
5 Low Medium High Extreme Extreme

Only low risks are acceptable per se. All other risks should be reduced either by technical or other
countermeasures and be accepted by the asset owner.
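
For illustration only, and assuming the scales above, a minimal sketch that combines the likelihood index of E.3.3 with the acceptance matrix of Table E.7; the example ratings in the usage comment are hypothetical.

# Table E.7: risk class per likelihood index (row, 1..5) and impact category (column, E..A)
IMPACT_ORDER = ("E", "D", "C", "B", "A")
RISK_MATRIX = {
    1: ("Low", "Low", "Low", "Low", "Low"),
    2: ("Low", "Low", "Low", "Medium", "Medium"),
    3: ("Low", "Low", "Medium", "Medium", "High"),
    4: ("Low", "Medium", "Medium", "High", "Extreme"),
    5: ("Low", "Medium", "High", "Extreme", "Extreme"),
}

def risk_class(exposure: int, vulnerability: int, impact: str) -> str:
    """Combine the exposure and vulnerability ratings (1..3 each, Table E.6)
    into the likelihood index L = EXP + VUL - 1 and look up Table E.7."""
    likelihood = exposure + vulnerability - 1
    return RISK_MATRIX[likelihood][IMPACT_ORDER.index(impact)]

# Hypothetical usage: easy access (EXP = 3), medium exploitability (VUL = 2), impact B
# risk_class(3, 2, "B")  # "High" -> not acceptable per se, countermeasures needed
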
E.3.5 Justification

The impact assessment matrix puts the different types of impact on a common scale. For safety consequences it applies the common safety impact criteria.
Likelihood assessment is only based on exposure and vulnerability (exploitability) of the system towards attacks. Subjective judgments are reduced as far as possible. The combination rule reflects a barrier model (both exposure and vulnerability present a barrier).
Both impact and likelihood are measured on ordinal scales. This means that their combination leads to a semi-ordered metric, so per definition e.g. (2,C) and (3,D) are not directly comparable. Risk evaluation is symmetric and reflects risk isoclines in its diagonals. It starts with the observation that (5,E) and (1,A) should be labelled “Low”, as e.g. the highest impact category A should be combined with the most demanding requirement 1. There are exceptions for three combinations, i.e. (4,D), (3,C) and (2,B), which might also have been labelled “Low” but were regarded as “Medium” in a kind of risk aversion approach.

E.4 Example method – infrastructure manager

E.4.1 Introduction

The method is used by a large-scale infrastructure manager.


E.4.2 Impact assessment

Table E.8 below provides an example of an impact assessment matrix for an infrastructure manager.
Table E.8 — Impact assessment matrix – Example 3

Category 1-Minor
Safety: Minor physical/psychological injuries or damage
OPErational: Impacts on 10 000 people. Perturbation of local economy.
FINancial: Loss < 1 M€
STrateGy: No market loss
Image: Impact local and punctual
REGulatory: No juridical or regulatory impact

Category 2-Moderate
Safety: Major physical/psychological injuries or damage
OPErational: Impacts on 100 000 people. Disruption of national economy / temporary loss of major infrastructure.
FINancial: 1 M€ < Loss < 10 M€
STrateGy: Market loss < 5 % iterative or regional punctual
Image: Impact local
REGulatory: No respect of regulatory or legal obligations with low administrative sanctions

Category 3-Significant
Safety: Significant physical/psychological injuries or damage
OPErational: Impacts on 1 000 000 people. Disruption of national economy. Temporary loss of a critical infrastructure. Definitive loss of a major infrastructure.
FINancial: 10 M€ < Loss < 50 M€
STrateGy: Market loss between 5 and 10 % iterative or national punctual
Image: Impact regional
REGulatory: Conviction and criminal sanction. Important financial penalties.

Category 4-Critical
Safety: Death or critical injuries on several people
OPErational: Impacts on 10 000 000 people. Definitive loss of a critical infrastructure.
FINancial: Loss > 50 M€
STrateGy: Market loss > 10 % iterative
Image: Impact national
REGulatory: Major infraction resulting in criminal conviction; term of imprisonment


E.4.3 Likelihood assessment

Table E.9 below provides an example of a likelihood assessment matrix for an infrastructure manager.
The following four factors are each evaluated on a scale from 1 to 4. These factors are then multiplied to arrive at an overall likelihood score.
Table E.9 — Likelihood assessment matrix – Example 3

Value 4
IT competency: Novice
Motivation: Railway accident / transportation paralysis / critical damages
Easy to discover: Known vulnerabilities, etc.
Exposition: Direct access or public access

Value 3
IT competency: IT knowledge and public information on ICS
Motivation: Major blackmail / national or international notoriety
Easy to discover: Vulnerabilities identified by superficial analysis
Exposition: Enterprise network

Value 2
IT competency: Advanced knowledge on ICS and hacking resources
Motivation: Local blackmail / personal revenge
Easy to discover: Identification of vulnerabilities with an expertise and need of access
Exposition: Internal network with restricted access or which requires privileged information

Value 1
IT competency: Expert in hacking
Motivation: Challenge
Easy to discover: Discovery extremely improbable during a reasonable time
Exposition: Local access

Based on the product of the factors the overall likelihood level is determined by Table E.10 below.
Table E.10 — Likelihood conversion table – Example 3

Conversion limit (product of the four factors) | Change of level
16 | > 1
24 | > 2
64 | > 3
E.4.4 Risk acceptance

Table E.11 below provides an example of risk acceptance matrix for an infrastructure manager.
Risk acceptance is based on the following 4x4 matrix:
Table E.11 — Risk acceptance matrix – Example 3

Likelihood/Impact 1 2 3 4
4 3 3 4 4
3 2 3 3 4
2 2 2 3 3
1 1 2 2 3
Risk Severity Levels


In this example, the risk severity is used to define the level of mitigation needed according to the following
Table E.12.
Table E.12 — Risk Severity / Mitigation matrix – Example 3

Risk severity | Description | Risk mitigation
4 | Very High Risk | Measures required with the highest priority
3 | High Risk | Measures required
2 | Moderate = medium and significant | Measures recommended
1 | Low risk | Measures optional
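
For illustration only, a minimal sketch of this method; the reading of Table E.10 (the likelihood level increases by one each time the product exceeds a conversion limit) and the example factor values are assumptions.

# Table E.11: risk severity per likelihood level (row, 1..4) and impact (column, 1..4)
RISK_SEVERITY = {
    4: (3, 3, 4, 4),
    3: (2, 3, 3, 4),
    2: (2, 2, 3, 3),
    1: (1, 2, 2, 3),
}

# Table E.12: risk mitigation need per risk severity level
MITIGATION = {
    4: "Measures required with the highest priority",
    3: "Measures required",
    2: "Measures recommended",
    1: "Measures optional",
}

def likelihood_level(it_competency: int, motivation: int, easy_to_discover: int, exposition: int) -> int:
    """Multiply the four factors (each rated 1..4, Table E.9) and convert the product
    using the conversion limits of Table E.10 (assumed reading, see lead-in)."""
    product = it_competency * motivation * easy_to_discover * exposition
    return 1 + sum(product > limit for limit in (16, 24, 64))

def mitigation_need(likelihood: int, impact: int) -> str:
    """Look up Table E.11 and translate the resulting risk severity via Table E.12."""
    return MITIGATION[RISK_SEVERITY[likelihood][impact - 1]]

# Hypothetical usage: factors 3, 4, 3, 2 give a product of 72, i.e. likelihood 4; impact 3
# mitigation_need(likelihood_level(3, 4, 3, 2), 3)  # "Measures required with the highest priority"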

E.4.5 Justification

This methodology identifies the 6 main stakes to be considered for a railway infrastructure manager:
Safety, Operational, Financial, Strategy, Image and Regulatory.
Likelihood is calculated as a function of 4 parameters, 2 related to the attacker profile (IT competency, motivation) and 2 related to the SuC itself (ease of discovering a vulnerability, exposition).
A specific addition in this methodology is the link and prioritization made between the severity of a risk and the level of need for risk mitigation (from optional measures to the highest priority).


Annex F
(informative)

Railway architecture and zoning

F.1 Glossary to railway system overview

Table F.1 below describes the subsystems that have been mentioned in Clause 4.
Table F.1 — Railway system glossary

Function, subsystem, Abbr. Description


component or asset

Traffic management system TMS controls the route setting for trains based on
timetables and short-term needs

Communication - comprises subsystems to communicate with railway


personnel and passengers (e.g. GSM-R)

Maintenance management and - comprises the analysis of the system status and
diagnostic planning and logging of maintenance activities

Network management system NMS comprises network configuration and monitoring


Energy management system EMS includes systems used to collect energy
consumption data from trains for analysis, billing and
planning purposes

Facility management comprises configuration and control of civil work


equipment (lighting, heating, air condition, mains)

Passenger information system PIS informs passengers about train departure time,
platforms, etc.
Interlocking IXL ensures the safe setting of routes for trains by controlling signals, points and the track vacancy
Radio block center RBC Controls the movement authorities for the trains in
an ETCS Level 2/3 system
Electrical supply and traction mains Comprises the subsystems for the conversion and
supply of traction power
Tunnels - includes the electronics installed in railway tunnels
to support tunnel specific infrastructure functions
(e.g. ventilation, alarm systems, fire and smoke
detectors, fire extinguisher, etc.)
Safety tunnel earthing system STES The emergency tunnel earthing system of overhead
line is an integrated automatic system that allows
the safe management of the disconnectors and the
earthing of overhead line in the tunnel.
Bridges includes the electronics installed in railway bridges
to support bridge specific infrastructure functions
(e.g. monitoring systems, lift control, etc.)


Diagnostics comprises systems dedicated to collecting data from


on-ground or on-board sources for monitoring,
analysis and maintenance purposes.
Closed-circuit television CCTV is used for video surveillance of assets and people
at risk

Digital signage is a remote-managed distributed system of LCD screens or multimedia PCs that can span many passenger stations for advertising and informative purposes
Public address system PA informs passengers and railway personnel about
actual situation via audio path

Entertainment provides train passengers with movies and other


leisure activities
Automatic train protection ATP protects the train driver from running too fast and
jumping a red light/closed signal

Point machine sets points/switches to left or right position according


to the route setting

Level crossing protects the crossing area of rail and road traffic
Signal informs the train driver about the allowance and
speed of the route he runs
Train detection sensor detects if a given track section is free or occupied by
a train (or coach)
Lighting includes the electronics dedicated to ensure correct
illumination of railway cars both internally and
externally; special case of external lighting are
headlights
Passenger alarm system is a mechanism that passengers can manually activate in case of immediate danger conditions to alert the driver and, if necessary, stop the train
Other safety functions includes safety function not explicitly mentioned in
other blocks of the picture
Fire detection and extinguisher is the system dedicated to detect smoke and fire and
activate extinguishing countermeasures

Traction is the system responsible for train movement


Driver machine interface DMI includes all the tech objects used to manage
communications between the train and the driver
(e.g. screens, buttons, handles, etc.)

Mobile communication gateway MCG is the train-side sub-system providing on-board to


ground services to the on-board end devices

Train control management system TCMS is the train's brain, nerves and sensors whose
purpose is to control, in a safe and efficient way, the
muscles powering the rolling stock.

Braking system is the component dedicated to the braking function


Radio communication includes the equipment used for ground to train


voice communications and the equipment for ground
to train ERTMS communications. Technology can be
WLAN, GSMR or 3/4/5G mobile.
Event recording is the equipment dedicated to register train events
mainly for legal issues in case of accident; can be
used for diagnostics purposes

Train communication network TCN is the set of communicating vehicle and train
backbones

Doors is the sub-system that controls the train doors


Anti-intrusion is the sub-system dedicated to detecting human
intruders
Driver advisory system DAS is the system that provides the driver with real-time advice on how to arrive on time in an efficient way
Heating, ventilation and air-conditioning HVAC is the system that provides crew and passengers with ambient comfort conditions
Toilet is the sub-system that controls the train toilet
Energy metering system is the sub-system that measures the train energy consumption and regeneration and sends them to ground for billing and other purposes
Energy control system Controls the production and transport of traction
power

F.2 Zoning examples

F.2.1 Introduction

This annex contains examples for zoning and segmentation of signalling, fixed installations and rolling stock.
The following terms are used in this annex:
Zone criticality (ZC)
Every zone identified in the initial risk assessment should be classified according to risks.
The zone criticality represents the security demands in a simplified expression to define the allowed
communication between zones.
Zone criticality landside (ZC-L)
The ZC-L defines the criticality of each zone in comparison to all other network zones to define
communication rules on railway operator (infrastructure manager) level for signalling and fixed
installation.
Zone criticality rolling stock (ZC-RS)
The ZC-RS defines the criticality of each zone in comparison to all other network zones to define
communication rules on train operator (railway undertaker) level in the rolling stock environment.


Communication matrix
The communication matrix shows on a high-level the authorized and unauthorized communication. The
communication matrix is the base to define rulesets for security devices to control the data flow between
zones.
Data diode
Data diodes are security devices that generally allow data flow in one direction only, based on the principle of an electronic diode.
F.2.2 Landside (fixed installations and signalling)

F.2.2.1 Zone criticality

Every zone identified in the initial risk assessment should be classified according to the risk’s criticality.
The criticality represents the security demands in a simplified expression to define the allowed
communication between zones.
Further on a communication matrix will be developed based on the following two rules:
— Direct communication between zones with well-known risk (e.g. zones with well-known and fixed
mounted OT devices) and unknown risk (e.g. office zones with laptops, printer, internet connectivity)
should be refused.

— In general, direct communication is only allowed between zones with the same or a subsequent zone
criticality.

The following steps show an approach for a high-level communication concept.


Step 1: Evaluating “groups of criticalities” with similar security requirements
Evaluate groups of available criticality levels with well-known security demands of the target network
concept based on asset types and their corresponding risks.
Table F.2 below shows an example of a typical result:
Table F.2 — Example – Evaluating groups of criticalities for landside-landside communication

Security maturity | Example
highly secure / Safety | safety: interlocking, high voltage
highly secure / critical | SCADA, central ICS
secure | data centre, internal DMZ, ICS/automation
medium | internal network, office and business network
low | gateway area, external DMZ
low | external partner/companies
none / unsecure | Internet


Step 2: Define the criticality of ZC-L zones


The number of zones and criticality levels can be chosen individually by the railway operator or infrastructure manager, but should be identical for the whole infrastructure. In this example 6 plus 1 safety level are defined in Table F.3:
Table F.3 — Example – Zone criticality definition for landside-landside communication

Zone criticality landside (ZC-L) | Security maturity | Example
ZC-L 5s | highly secure / Safety | safety: interlocking, high voltage
ZC-L 5 | highly secure / critical | SCADA, central ICS
ZC-L 4 | secure | data centre, internal DMZ, ICS/automation
ZC-L 3 | medium | internal network, office and business network
ZC-L 2 | low | gateway area, external DMZ
ZC-L 1 | low | external partner/companies
ZC-L 0 | none / unsecure | Internet

Step 3: Set up a communication matrix


The matrix can be chosen depending on the number of zone criticality levels, but should refer to the communication rules in F.2.2.3. The communication matrix is the base for the final zone models and communication flows, as per the example in Table F.4.


Table F.4 — Example – Landside-landside communication matrix basic structure

The basic structure of the matrix uses the zone criticality levels of Table F.3 both as rows (source / from) and as columns (destination / to):

ZC-L 5s | highly secure / Safety | safety: interlocking, high voltage
ZC-L 5 | highly secure / critical | SCADA, central ICS
ZC-L 4 | secure | data centre, internal DMZ, ICS/automation
ZC-L 3 | medium | internal network, office and business network
ZC-L 2 | low | gateway area, external DMZ
ZC-L 1 | low | external partner/companies
ZC-L 0 | untrusted | Internet

Each cell of the matrix defines whether a data flow from the source zone (row) to the destination zone (column) is allowed.

Table F.5 below shows an example of a typical communication matrix for landside:


Table F.5 — Example – Communication matrix - landside to landside

Source zones (rows, “from”) and destination zones (columns, “to”) are the ZC-L zones of Table F.3. Destination columns, from left to right: ZC-L5s, ZC-L5, ZC-L4, ZC-L3, ZC-L2, ZC-L1, ZC-L0.

ZC-L 5s (highly secure / Safety; safety: interlocking, high voltage):   +  R  -  -  -  -  -
ZC-L 5 (highly secure / critical; SCADA, central ICS):                  +  +  R  -  -  -  -
ZC-L 4 (secure; data centre, internal DMZ, ICS/automation):             -  +  +  +  -  -  -
ZC-L 3 (medium; internal network, office and business network):         -  -  +  +  +  +  -
ZC-L 2 (low; gateway area, external DMZ):                               -  -  -  +  +  -  +
ZC-L 1 (low; external partner/companies):                               -  -  -  +  -  +  -
ZC-L 0 (untrusted; Internet):                                           -  -  -  -  +  +  +

The data flow should be controlled depending on the safety and security demands of the zones:

— “+”: data flow is allowed in both directions; the fine tuning of the data flow is controlled by rules and access lists (e.g. of the security devices)

— “R”: data flow is restricted to read-only, enforced by data diodes or similar measures

— “-”: data flow is prohibited


This matrix covers the normal data flow for standard operation usage. Temporary connections for remote
maintenance are also a part of standard operations and are marked with a “+”. The conditions to open
such maintenance connections may be supported by multi factor authentication (e.g. SMS, email or
pressing a button on the local network equipment) and are not part of the ruleset of the corresponding
security device.
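
As an illustration only, and assuming the example matrix of Table F.5, the following minimal sketch shows how such a communication matrix could be encoded and queried, e.g. when reviewing a requested firewall rule; the zone names and helper functions are hypothetical.

# Table F.5 encoded per source zone; each string lists the rule towards the
# destination zones in the order ZC-L5s, ZC-L5, ZC-L4, ZC-L3, ZC-L2, ZC-L1, ZC-L0
ZONES = ("ZC-L5s", "ZC-L5", "ZC-L4", "ZC-L3", "ZC-L2", "ZC-L1", "ZC-L0")
MATRIX = {
    "ZC-L5s": "+R-----",
    "ZC-L5":  "++R----",
    "ZC-L4":  "-+++---",
    "ZC-L3":  "--++++-",
    "ZC-L2":  "---++-+",
    "ZC-L1":  "---+-+-",
    "ZC-L0":  "----+++",
}

def flow_rule(source: str, destination: str) -> str:
    """Return '+', 'R' or '-' for a data flow from the source to the destination zone."""
    return MATRIX[source][ZONES.index(destination)]

def is_allowed(source: str, destination: str) -> bool:
    """A flow is permitted (possibly read-only) unless the matrix prohibits it."""
    return flow_rule(source, destination) != "-"

# Hypothetical usage when reviewing firewall rule requests:
# flow_rule("ZC-L3", "ZC-L2")   # "+"  allowed, fine-tuned by the security device ruleset
# flow_rule("ZC-L5s", "ZC-L5")  # "R"  read-only, e.g. via data diode
# flow_rule("ZC-L0", "ZC-L4")   # "-"  prohibited
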
F.2.2.2 Zoning and segmentation

The communication matrix is used to define a generic railway zone model including the zone criticalities
as in Figure F.1 below:


Figure F.1 — Adopted generic high-level railway zone model (example)


Considering the result of the initial risk assessment (see 4.4) and functional asset groups, the generic zone model can be subdivided into subsystems and zones. Figure F.2 shows an example:

Figure F.2 — Example of a railway system zone model

NOTE The communication between rolling stock and landside is described in F.2.4.

F.2.2.3 Communication rules

— Communication should generally be kept within the subsystem in order not to pass through zones with other system responsibilities or different criticality.

— If border crossing communication between subsystems is necessary, data has to flow via the
corresponding two DMZ.

— Communication into and out of zones should be well defined and be supervised for detecting
unauthorized communication (e.g. by an intrusion detection system).

— Communication between different subsystems should be controlled by a security device (e.g. by a


firewall).

— Communication between zones with the same criticality within a subsystem should be controlled by
a security device.

— Communication between zones with different criticality within a subsystem should be controlled by a
security device.


— All communication into and out of a subsystem should pass the same security device (or device
group if redundant). No backdoors or parallel communication paths (like ISDN modem, etc. for direct
remote maintenance), bypassing the corresponding security device, are allowed.

F.2.3 Rolling stock

F.2.3.1 Zone criticality, zoning and segmentation

It is useful to define a zone model based on zone criticality. The number of zone criticality levels can be adapted by the operator.
Table F.6 below shows an example for a rolling stock zone model:
Table F.6 — Example – Rolling stock zone model

Zone criticality rolling stock (ZC-RS) | Security maturity | Network layer | Example
ZC-RS 5 | Highly Secure / Safety | signalling | ATP systems
ZC-RS 4 | Secure | command and control | TCMS, doors, traction and braking
ZC-RS 3 | Medium | auxiliary | CCTV, diagnostic
ZC-RS 2 | Medium | comfort | Passenger information system
ZC-RS 1 | Low | public interface | Entertainment, WiFi
ZC-RS 0 | Untrusted | external communication channel | Train-to-ground, Train-to-train

In this example, zone criticality levels are aligned with the six-colour scheme by function class (signalling, command and control, auxiliary, comfort, public, train-to-ground) described in Clause 4. As stated before, zone criticality levels can be adapted by the operator. Thus, e.g. comfort (ZC-RS 2) and auxiliary (ZC-RS 3) may be merged into a single level.
The Internet, other company networks and public networks should be considered by default as untrusted.
F.2.3.2 Zone criticality and communication matrix in the rolling stock domain

Table F.7 below shows an example of a typical communication matrix for zone criticalities and
communication rules in the rolling stock domain.


Table F.7 — Example – Communication matrix - rolling stock to rolling stock

Source zones (rows, “from”) and destination zones (columns, “to”) are the ZC-RS zones of Table F.6. Destination columns, from left to right: ZC-RS5 (signalling), ZC-RS4 (command and control), ZC-RS3 (auxiliary), ZC-RS2 (comfort), ZC-RS1 (public interface), ZC-RS0 (external communication channel).

ZC-RS 5 (Highly Secure / Safety; signalling):           +     +     -     -     -     +
ZC-RS 4 (Secure; command and control):                  +     +     +     +     R     +
ZC-RS 3 (Medium; auxiliary):                            -     +     +     +     +/R   +
ZC-RS 2 (Medium; comfort):                              -     +     +     +     +/R   +
ZC-RS 1 (Low; public interface):                        -     -     -     -     +     +
ZC-RS 0 (Untrusted; external communication channel):    +(*)  +(*)  +(*)  +(*)  +     +

Key

“+” data flow allowed through an appropriate security device

“R” data flow restricted to read-only, enforced by data diodes or similar measures

“-” data flow prohibited

(*) data flow generally initiated from the on-board device to the outside

F.2.3.3 Communication rules

The zone model allows defining a communication rules model.


The risk analysis allows the communication rules (especially the “should” rules) and the sets of measures to be correctly adapted within the specific context of a project.
An example of a communication rules model is shown below:
— signalling (ZC-RS5) and command and control (ZC-RS4) can be connected

— connection between command and control (ZC-RS4) and signalling (ZC-RS5) should require security
device/solution (*)

— signalling (ZC-RS5) and others (different than ZC-RS4) cannot be directly connected

— comfort (ZC-RS2) and command and control (ZC-RS4) can be connected

— auxiliary (ZC-RS3) and command and control (ZC-RS4) can be connected

— connection between comfort (ZC-RS2)/auxiliary (ZC-RS3) and command and control (ZC-RS4)
should require security device/solution (*)

— comfort (ZC-RS2) and auxiliary (ZC-RS3) can be connected

— comfort (ZC-RS2) and auxiliary (ZC-RS3) may be gathered in a same level

— connection between comfort (ZC-RS2) and auxiliary (ZC-RS3) may require security device/solution
(*)

— comfort (ZC-RS2) and public (ZC-RS1) can be connected

— auxiliary (ZC-RS3) and public (ZC-RS1) can be connected

— connection between comfort (ZC-RS2)/auxiliary (ZC-RS3) and public (ZC-RS1) should require
security device/solution (*) and a DMZ; except if using a data-diode to ensure unidirectional
communication from (ZC-RS2)/(ZC-RS3) to (ZC-RS1)

— public (ZC-RS1) and command and control (ZC-RS4) cannot be directly connected; except if using
a data-diode to ensure unidirectional communication from (ZC-RS4) to (ZC-RS1)

— public (ZC-RS1) and signalling (ZC-RS5) cannot be directly connected

— each ZC-RS1/5 on board network can be connected to ground through train-to-ground component(s)
(ZC-RS0)

— connection between ZC-RS1/5 and ZC-RS0 should require security device/solution (*)

— signalling (ZC-RS5) should use dedicated train-to-ground component (ZC-RS0)

— train-to-ground component (ZC-RS0) could be mutualised for comfort (ZC-RS2), auxiliary (ZC-RS3)
and command and control (ZC-RS4) networks; using security device/solution (*) to ensure no
possibilities of bouncing between ZC-RSx networks

— for ZC-RS0, the set of security measures (using private APN, secured protocols within public telecom networks, authentication mechanisms, hardening of exposed components, etc.) depends on the components used and the capability of the telecom networks; and should fulfil the security needs of the supported applications

— connection between ZC-RS0 and landside network should require a DMZ at the boundary to landside

(*) security device/solution may be e.g. a security gateway with firewalling function


F.2.4 Communication rules between rolling stock and landside

F.2.4.1 Rolling stock and landside mapping table

F.2.2 and F.2.3 present examples of a zone criticality model for landside and for rolling stock.
These examples could be adapted by each owner for their area of responsibility.
In order to define a readable communication matrix, it is strongly recommended to use a mapping table for each direction of data flow, as in the example of Table F.8 (landside to rolling stock) and the example of Table F.9 (rolling stock to landside), according to the zone criticality model applied.


Table F.8 — Example – Communication matrix - landside to rolling stock

Source zones (rows, landside, “from”) are the ZC-L zones of Table F.3; destination zones (columns, rolling stock, “to”) are the ZC-RS zones of Table F.6. Destination columns, from left to right: ZC-RS5 (signalling), ZC-RS4 (command and control), ZC-RS3 (auxiliary), ZC-RS2 (comfort), ZC-RS1 (public interface), ZC-RS0 (external communication channel).

ZC-L 5s (highly secure / Safety; safety: interlocking, high voltage):   +(**)  -     -     -     -  -
ZC-L 5 (highly secure / critical; SCADA, central ICS):                  -      -     -     -     -  -
ZC-L 4 (secure; data centre, internal DMZ, ICS/automation):             -      -     -     -     -  -
ZC-L 3 (medium; internal network, office and business network):         -      +(*)  +(*)  +(*)  -  -
Technical DMZ on landside:                                              -      -     -     -     -  +
ZC-L 2 (low; gateway area, external DMZ):                               -      -     -     -     -  -
ZC-L 1 (low; external partner/companies):                               -      -     -     -     +  -
ZC-L 0 (untrusted; Internet):                                           -      -     -     -     +  +

Key

“+” data flow allowed through an appropriate security device

“-” data flow prohibited

(*) see F.2.4.3

(**) see F.2.4.4


Table F.9 — Example – Communication matrix - rolling stock to landside

Direction: rolling stock -> landside (source/from: rolling stock zone; destination/to: landside zone)

| Landside zone (ZC-L) | Security maturity | Example (destination/to) | Signalling ZC-RS 5 (Highly Secure / Safety) | Command and control ZC-RS 4 (Secure) | Auxiliary ZC-RS 3 (Medium) | Comfort ZC-RS 2 (Medium) | Public interface ZC-RS 1 (Low) | External communication channel ZC-RS 0 (Unsecure) |
|---|---|---|---|---|---|---|---|---|
| ZC-L 5s | highly secure / safety | Safety: Interlocking, High Voltage | + (**) | - | - | - | - | - |
| ZC-L 5 | highly secure / critical | SCADA, central ICS | - | - | - | - | - | - |
| ZC-L 4 | secure | Data Centre, internal DMZ, ICS/Automation | - | - | - | - | - | - |
| ZC-L 3 | medium | Internal network, office and business network | - | + (*) | + (*) | + (*) | - | - |
| ZC-L 3 | medium | Technical DMZ on landside | - | - | - | - | - | + |
| ZC-L 2 | low | Gateway area, external DMZ | - | - | - | - | - | - |
| ZC-L 1 | low | External partner/companies | - | - | - | - | + | - |
| ZC-L 0 | untrusted | Internet | - | - | - | - | + | + |

Key

“+” data flow allowed through an appropriate security device

“-“ data flow prohibited

(*) see F.2.4.3

(**) see F.2.4.4


NOTE 1 In an optimized system, the landside (fixed installations and signalling) and the rolling stock have the same groups and the same zone criticality.

NOTE 2 The number of ZC levels in this example is freely chosen and can be adapted by railway undertakings and infrastructure managers.
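
As an informative illustration only (the encoding, names and helper below are assumptions, not part of this document), the example matrix of Table F.8 can be captured as a simple data structure, so that requested train-to-ground data flows can be checked automatically against the agreed matrix.

```python
# Informative sketch of the example matrix in Table F.8 (landside -> rolling stock).
# "+" = data flow allowed through an appropriate security device, "-" = prohibited.
# The encoding and helper are assumptions for illustration only; the second
# "ZC-L 3" row of Table F.8 (Technical DMZ on landside) is omitted for brevity.
COLS = ["ZC-RS5", "ZC-RS4", "ZC-RS3", "ZC-RS2", "ZC-RS1", "ZC-RS0"]

MATRIX_LANDSIDE_TO_RS = {
    "ZC-L5s": ["+", "-", "-", "-", "-", "-"],  # interlocking -> signalling (see F.2.4.4)
    "ZC-L5":  ["-", "-", "-", "-", "-", "-"],
    "ZC-L4":  ["-", "-", "-", "-", "-", "-"],
    "ZC-L3":  ["-", "+", "+", "+", "-", "-"],  # office/business IT (see F.2.4.3)
    "ZC-L2":  ["-", "-", "-", "-", "-", "-"],
    "ZC-L1":  ["-", "-", "-", "-", "+", "-"],
    "ZC-L0":  ["-", "-", "-", "-", "+", "+"],  # internet: public interface and comms channel only
}

def flow_allowed(landside_zone: str, rolling_stock_zone: str) -> bool:
    """Check whether a landside -> rolling stock data flow is allowed by the matrix."""
    return MATRIX_LANDSIDE_TO_RS[landside_zone][COLS.index(rolling_stock_zone)] == "+"

if __name__ == "__main__":
    print(flow_allowed("ZC-L3", "ZC-RS4"))  # True  (through appropriate security devices)
    print(flow_allowed("ZC-L0", "ZC-RS5"))  # False (internet to signalling is prohibited)
```

A second structure of the same shape can capture Table F.9 (rolling stock to landside), which makes it straightforward to cross-check the two directions for consistency.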

F.2.4.2 General rules

— Zones that are connected should comply with a mapping table that identifies the allowed data flows between zones

— All data should be checked by a security device in the corresponding subsystem

— The CISO or ISO approves communications which are not defined in standards or specifications

— Exceptions should be identified in the documentation together with the associated risk.

F.2.4.3 Rules for business IT

Connection between the on-board network and the landside business/office IT network (examples: diagnostics, CCTV, etc.):
— Zone criticality level may be different between on-board and landside.

— Often, public telecom networks are used for communication.

— Communication components and train-to-ground communication should be secured according to security needs. The set of security measures depends on the components used and the capability of the telecom networks.

Examples of measures (see also the informative sketch at the end of this subclause):
— Using a private APN (to reduce the exposure of on-board communication devices)

— Using secured protocols (to ensure integrity and confidentiality of train to ground communication
over public telecom networks)

— Using authentication mechanisms (to ensure identities)

— Hardening of exposed components (to reduce the attack surface)

— Communication preferentially initiated by on-board software/component

— etc.

— A DMZ should be required at the boundary to the landside network.

— Data flow should be checked by a security device in each subsystem (on-board and landside).

— The responsible CISO(s) or ISO(s) should approve that the architecture and the measures in place fulfil the security needs.
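
As an informative sketch only (host name, port and certificate paths are placeholder assumptions, not defined by this document), the following Python fragment illustrates several of the measures above for a business IT connection: communication initiated by the on-board component, a secured protocol (TLS) over the public telecom network, and mutual authentication based on certificates.

```python
# Informative sketch: an on-board diagnostic client initiating a mutually
# authenticated TLS connection towards a landside DMZ endpoint. Host name, port
# and certificate paths are placeholders (assumptions), not part of this document.
import socket
import ssl

LANDSIDE_DMZ_HOST = "diagnostics.dmz.example.invalid"   # placeholder
LANDSIDE_DMZ_PORT = 8443                                 # placeholder

def open_train_to_ground_channel() -> ssl.SSLSocket:
    """Open a TLS channel, initiated from on-board, with mutual authentication."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.minimum_version = ssl.TLSVersion.TLSv1_2                  # only accept modern TLS
    context.load_verify_locations("/etc/train/pki/landside-ca.pem")   # trust anchor (placeholder path)
    context.load_cert_chain("/etc/train/pki/obu-cert.pem",            # on-board identity (client certificate)
                            "/etc/train/pki/obu-key.pem")
    raw = socket.create_connection((LANDSIDE_DMZ_HOST, LANDSIDE_DMZ_PORT), timeout=10)
    # With PROTOCOL_TLS_CLIENT, host name checking and certificate verification are
    # enabled by default, so the landside server certificate is verified as well.
    return context.wrap_socket(raw, server_hostname=LANDSIDE_DMZ_HOST)
```

In practice, the trust anchors, the certificate lifecycle and the landside DMZ termination point would be defined by the project PKI and architecture documentation, and validated as part of the risk assessment.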

F.2.4.4 Rules for operational technology (OT)

Connection between the on-board network and the landside OT network (e.g. ERTMS):
— Zone criticality level may be the same between on-board and landside.

— Sometimes, dedicated telecom networks are used for communication.


— Communication components and train-to-ground communication should be secured according to security needs. The set of security measures depends on the components used and the capability of the telecom networks.

— Data flow should be checked by a security device in each subsystem (on-board and landside).

— When a system has to respect a normative specification (e.g. TSI CCS for ERTMS), the components and communications have to fulfil the requirements of that specification; CISO/ISO approval may be optional in this case.

— For the other cases, the responsible CISO(s) or ISO(s) should approve that the architecture and the measures in place fulfil the security needs.


Annex G
(informative)

Cybersecurity deliverables content

G.1 Introduction

This annex provides content for two cybersecurity deliverables from the system integrator to the railway operator. These contents are provided as examples and can be tailored to the project context by agreement between the integrator and the asset owner.
— Cybersecurity management plan

— Cybersecurity case.

Other cybersecurity deliverables will be described in a future version of this document.

G.2 Cybersecurity management plan

It is recommended that the cybersecurity management plan includes the following topics.
Note that, depending on the context, this plan could be split into, or refer to, several documents.
Introduction
Cybersecurity activities management
— Project Organization chart

— Role and responsibilities related to Cybersecurity activities

— Interface with other stakeholders (Engineering, Safety, RAM, V&V, T&C)

— Key Milestones

— Communication and reporting

— Information protection: data classification, access and transfer

— Project team security skills and training needs.

Cybersecurity context (could be a set of references to other documents)


— High level description of the system under consideration

— Cybersecurity objectives

— Applicable cybersecurity regulations and standards

— Operation environment security assumptions, including assumptions on cybersecurity shared services that will be provided by the environment to the SuC

— Maintenance environment security assumptions

— Threat environment


Cybersecurity risk management (could be a set of references to other documents)


— Risk assessment methodology description

— Risk impact table

— Likelihood parameters definition

— Risk level definition and acceptance criteria

— Management of security risks and associated treatment plan

— Cybersecurity risk assessment updates: periodicity and trigger events

Cybersecurity design (could be a set of references to other documents)


— SuC partitioning method

— Allocation of cybersecurity requirements

— Organization of cybersecurity design reviews

Secure development lifecycle definition (could be a set of references to other documents)


Cybersecurity assurance and acceptance (could be a set of references to other documents)
— Specification of verification and test activities to be performed

— Review of V&V and penetration tests results

— Verification of application of cybersecurity process

— Cybersecurity case production

Vulnerabilities and cybersecurity issues management (could be a set of references to other documents)
— Tools and organization

— Scoring criteria

— Cybersecurity event reporting

Third-party risk management (could be a set of references to other documents)


— Applicable process for supplier assessment, selection and monitoring.

G.3 Cybersecurity case

It is recommended that the cybersecurity case includes the following topics:


Introduction (could be a set of references to other documents)
— System under consideration definition (incl. zones and conduits)

— Initial and detailed risk assessments

o Assumptions

o List of threat intelligence sources

o List of threat scenarios


o List of sufficiently mitigated risks (with explanation).

Cybersecurity Requirement Specification (CRS) (could be a set of references to other documents)


— Assumptions

— Cybersecurity needs (including safety-related high-level objectives)

— Cybersecurity requirements

— List of open risks (with explanation).

Cybersecurity management (could be a set of references to other documents)


— Cybersecurity policy

— Cybersecurity plan

— Cybersecurity process

— Vulnerability assessment and management.

Cybersecurity fulfilment (could be a set of references to other documents)


— Implementation of cybersecurity measures - evidence of fulfilment of the CRS

— Evidence of application of cybersecurity process

— Verification and validation results

o Testing of security measures (e.g. V&V, Penetration testing)

o Traceability to cybersecurity requirements

— Related cybersecurity cases (from included components or subsystems, if any).

Security-related application conditions (could be a set of references to other documents)


— Installation

— Maintenance

— Operation.

Conclusion
— Cybersecurity claim

— Residual risks status.


Bibliography

[1] EN IEC 62443-2-4:2019, Security for industrial automation and control systems – Part 2-4:
Security program requirements for IACS service providers

[2] IEC/TR 62443-3-1:2009, Industrial communication networks - Network and system security - Part 3-1: Security technologies for industrial automation and control systems

[3] EN IEC 62443-4-1:2018, Security for industrial automation and control systems - Part 4-1: Secure product development lifecycle requirements

[4] EN IEC 62443-4-2:2019, Security for industrial automation and control systems - Part 4-2: Technical security requirements for IACS components

[5] IEC/TS 62443-1-1:2009, Industrial communication networks - Network and system security -
Part 1-1: Terminology, concepts and models

[6] EN 1627, Pedestrian doorsets, windows, curtain walling, grilles and shutters - Burglar resistance
- Requirements and classification

[7] EN 50129:2018, Railway applications - Communication, signalling and processing systems - Safety related electronic systems for signalling

[8] EN 50159, Railway applications - Communication, signalling and processing systems - Safety-
related communication in transmission systems

[9] IEC Guide 120, Edition 1, Security aspects – Guidelines for their inclusion in standards

[10] ENISA Threat Landscape Yearly report

[11] ISO/IEC 27005, Information technology – Security techniques – Information security risk
management

[12] NIST SP 800-30, Guide for Conducting Risk Assessments

[13] NIST SP 800-160, Systems Security Engineering: Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems

