zEnterprise 114
System Overview

SA22-1087-01
Level 01b
Note:
Before using this information and the product it supports, read the information in Safety on
page xi, Appendix D, Notices, on page 153, and IBM Systems Environmental Notices and User
Guide, Z125-5823.
This edition, SA22-1087-01, applies to the IBM zEnterprise 114 (z114) and replaces SA22-1087-00.
There might be a newer version of this document in a PDF file available on Resource Link. Go to
http://www.ibm.com/servers/resourcelink and click Library on the navigation bar. A newer version is indicated by
a lowercase, alphabetic letter following the form number suffix (for example: 00a, 00b, 01a, 01b).
Copyright IBM Corporation 2011, 2012.
US Government Users Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
Figures . . . . . . . . . . . . . . vii
Tables . . . . . . . . . . . . . . . ix
Safety . . . . . . . . . . . . . . . xi
Safety notices . . . . . . . . . . . . . . xi
World trade safety information . . . . . . . xi
Laser safety information . . . . . . . . . . xi
Laser compliance . . . . . . . . . . . xi
About this publication . . . . . . . . xiii
What is included in this publication . . . . . . xiii
Revisions . . . . . . . . . . . . . . . xiii
Prerequisite publications. . . . . . . . . . xiii
Related publications . . . . . . . . . . . xiii
Ensemble publications . . . . . . . . . xiv
Parallel sysplex publications . . . . . . . xiv
OSA publications . . . . . . . . . . . xiv
Cryptographic publications . . . . . . . . xv
IBM Smart Analytics Optimizer for DB2 for
z/OS, V1.1 publications . . . . . . . . . xv
IBM DB2 Analytics Accelerator for z/OS V2.1
publications . . . . . . . . . . . . . xv
Miscellaneous publications . . . . . . . . xv
Related web sites . . . . . . . . . . . . xv
Additional online information . . . . . . . xvi
Engineering change (EC) level considerations . . . xvi
Accessibility . . . . . . . . . . . . . . xvi
How to send your comments . . . . . . . . xvii
Summary of changes . . . . . . . . xix
Chapter 1. Introduction . . . . . . . . 1
z114 highlights . . . . . . . . . . . . . 2
z114 model . . . . . . . . . . . . . . . 9
Performance . . . . . . . . . . . . . . 9
Resource Link . . . . . . . . . . . . . 10
Fiber optic cabling . . . . . . . . . . . . 10
z/Architecture . . . . . . . . . . . . . 10
Upgrade progression . . . . . . . . . . . 11
Unsupported features/functions . . . . . . . 11
Chapter 2. Hardware characteristics . . 13
System frame configuration . . . . . . . . . 13
Processor drawer . . . . . . . . . . . 14
I/O drawers and PCIe I/O drawers . . . . . 20
Support Element. . . . . . . . . . . . 23
System power supply . . . . . . . . . . 23
Internal Battery Feature (IBF) . . . . . . . 24
Internet Protocol Version 6 . . . . . . . . . 25
Multiple Subchannel Sets (MSS) . . . . . . . 25
IPL from an alternate subchannel set . . . . . . 25
LPAR mode . . . . . . . . . . . . . . 25
Processor units . . . . . . . . . . . . 26
Storage . . . . . . . . . . . . . . . 26
Channels . . . . . . . . . . . . . . 26
LPAR time offset support . . . . . . . . . 27
Server Time Protocol (STP) . . . . . . . . . 27
Hardware Management Console (HMC) . . . . . 28
Top exit cabling . . . . . . . . . . . . . 28
Bolt-down kit. . . . . . . . . . . . . . 29
Power sequence controller . . . . . . . . . 29
IBM zEnterprise BladeCenter Extension (zBX) . . . 29
zBX configuration . . . . . . . . . . . 29
Storage . . . . . . . . . . . . . . . 34
Display networking resources associated with the
IEDN . . . . . . . . . . . . . . . 34
Time coordination for zBX components . . . . 34
Entitlement and management of zBX racks,
BladeCenters, and zBX blades . . . . . . . 34
Ensemble . . . . . . . . . . . . . . . 34
IBM DB2 Analytics Accelerator for z/OS V2.1 . . . 35
Additional features/functions supported . . . . 35
Monitoring and estimating power consumption
and temperature. . . . . . . . . . . . 35
Reducing power consumption . . . . . . . 36
Displaying historical power, temperature, and
utilization data . . . . . . . . . . . . 37
Preplanning and setting up the Storage Area
Network (SAN) environment . . . . . . . 37
Chapter 3. Software support . . . . . 39
Chapter 4. Channel subsystem
structure . . . . . . . . . . . . . . 41
IOCP channel, link, and adapter definitions . . . 42
Coupling link peer channels . . . . . . . . 43
Subchannel connectivity . . . . . . . . . . 44
Guidelines for maximum availability . . . . . . 44
Planning for channel subsystem . . . . . . . 47
PCHID assignments . . . . . . . . . . 47
AID assignments . . . . . . . . . . . 49
PCHID report . . . . . . . . . . . . 50
CHPID Mapping Tool . . . . . . . . . . 50
Multiple Image Facility (MIF) . . . . . . . . 51
Spanned channels . . . . . . . . . . . . 51
Internal coupling and HiperSockets channels . . . 52
IOCP considerations . . . . . . . . . . . 52
LPAR definition . . . . . . . . . . . . 52
Channel path definition . . . . . . . . . 52
I/O device definition . . . . . . . . . . 53
Hardware Configuration Definition (HCD)
considerations . . . . . . . . . . . . . 53
Chapter 5. I/O connectivity . . . . . . 55
FICON and FCP channels . . . . . . . . . 55
FICON Express8S features . . . . . . . . 55
FICON Express8 features . . . . . . . . . 56
FICON Express4 features . . . . . . . . . 57
Channel consolidation using FICON Express8 . . 58
Name server registration . . . . . . . . . 58
High Performance FICON for System z (zHPF) 58
Discover and automatically configure devices
attached to FICON channels . . . . . . . . 58
The MIDAW facility . . . . . . . . . . 58
Multipath Initial Program Load (IPL) . . . . . 58
Purge path extended . . . . . . . . . . 59
Fibre channel analysis . . . . . . . . . . 59
Fibre Channel Protocol (FCP) for SCSI devices. . 59
ESCON channels . . . . . . . . . . . . 63
ESCON converter operation . . . . . . . . 64
I/O operations control. . . . . . . . . . 65
I/O interface protocol . . . . . . . . . . 67
ESCON channel performance . . . . . . . 69
OSA channels. . . . . . . . . . . . . . 72
Supported CHPID types . . . . . . . . . 72
OSA/SF . . . . . . . . . . . . . . 73
OSA-Express4S features . . . . . . . . . 74
OSA-Express3 features. . . . . . . . . . 75
OSA-Express2 features. . . . . . . . . . 76
OSA-Express4S, OSA-Express3 and
OSA-Express2 supported functions . . . . . 77
HiperSockets . . . . . . . . . . . . . . 80
Asynchronous delivery of data . . . . . . . 80
HiperSockets network integration with IEDN . . 80
Broadcast support . . . . . . . . . . . 81
IPv6 support . . . . . . . . . . . . . 81
VLAN support . . . . . . . . . . . . 81
HiperSockets Network Concentrator . . . . . 81
HiperSockets Network Traffic Analyzer . . . . 81
Layer 2 (Link Layer) support . . . . . . . 82
Multiple Write facility . . . . . . . . . . 82
Chapter 6. Sysplex functions . . . . . 83
Parallel Sysplex . . . . . . . . . . . . . 83
Parallel Sysplex coupling link connectivity . . . . 84
ISC-3 links. . . . . . . . . . . . . . 85
InfiniBand (IFB) coupling links . . . . . . . 86
IC links. . . . . . . . . . . . . . . 87
Coupling facility. . . . . . . . . . . . . 87
CFCC considerations . . . . . . . . . . 88
Coupling connection considerations . . . . . 90
I/O configuration considerations . . . . . . 90
Server Time Protocol (STP) . . . . . . . . . 91
STP enhancements . . . . . . . . . . . 92
System-managed CF structure duplexing . . . . 93
GDPS . . . . . . . . . . . . . . . . 94
GDPS/PPRC . . . . . . . . . . . . . 95
GDPS/PPRC HyperSwap Manager . . . . . 95
GDPS/XRC . . . . . . . . . . . . . 95
GDPS/Global Mirror . . . . . . . . . . 96
GDPS/Active-Active . . . . . . . . . . 96
Intelligent Resource Director (IRD) . . . . . . 97
LPAR CPU management (clustering) . . . . . 97
I/O priority queuing (IOPQ) . . . . . . . 97
Dynamic channel path management (DCM) . . 98
Workload manager (WLM) . . . . . . . . 98
EAL5 certification . . . . . . . . . . . . 99
Chapter 7. Cryptography . . . . . . 101
CP Assist for Cryptographic Function (CPACF) . . 101
Protected key CPACF. . . . . . . . . . 102
Enablement and disablement of DEA key and
AES key functions. . . . . . . . . . . 102
Crypto Express3 and Crypto Express3-1P . . . . 102
User-defined extensions . . . . . . . . . . 104
Trusted Key Entry (TKE) . . . . . . . . . 104
Trusted Key Entry (TKE) with Smart Card
Readers . . . . . . . . . . . . . . 105
Wizard for migrating cryptographic
configuration data . . . . . . . . . . . 105
RMF monitoring . . . . . . . . . . . . 105
FIPS certification . . . . . . . . . . . . 105
Remote loading of ATM and POS keys . . . . . 106
Chapter 8. Cabling . . . . . . . . . 107
Fiber Quick Connect (FQC) for ESCON and FICON
LX cabling . . . . . . . . . . . . . . 107
Cabling responsibilities . . . . . . . . . . 108
Cable ordering . . . . . . . . . . . . . 108
Cabling report . . . . . . . . . . . . . 110
Chapter 9. Hardware Management
Console and Support Element . . . . 113
Hardware Management Console Application
(HWMCA) . . . . . . . . . . . . . . 115
Hardware Management Console and Support
Element enhancements for z114 . . . . . . . 115
HMC and Support Element network connection 116
HMC and Support Element features and functions 116
Customization of the Hardware Management
Console or Support Element . . . . . . . 116
Status reporting . . . . . . . . . . . 116
Service Required state . . . . . . . . . 117
Degrade indicator . . . . . . . . . . . 117
Hardware messages . . . . . . . . . . 117
Operating system messages. . . . . . . . 117
Problem analysis and reporting . . . . . . 118
Enablement and disablement of DEA key and
AES key functions . . . . . . . . . . . 118
Virtual RETAIN . . . . . . . . . . . 118
Licensed Internal Code (LIC) . . . . . . . 118
Remote I/O configuration and IOCDS
management. . . . . . . . . . . . . 119
Scheduled operations . . . . . . . . . . 119
Remote Support Facility (RSF) . . . . . . . 120
Automation and API support . . . . . . . 120
CPC activation . . . . . . . . . . . . 120
NTP client/server support on the Hardware
Management Console. . . . . . . . . . 121
z/VM integrated systems management . . . . 121
Installation support for z/VM using the
Hardware Management Console . . . . . . 121
Network traffic analyzer authorization . . . . 121
User authentication . . . . . . . . . . 122
Network protocols. . . . . . . . . . . 122
Customizable console date and time. . . . . 122
System I/O configuration analyzer (SIOA) . . 122
Network analysis tool for Support Element
communications . . . . . . . . . . . 122
Instant messaging facility . . . . . . . . 122
Screen capture function . . . . . . . . . 122
Call-home servers selection. . . . . . . . 123
User interface . . . . . . . . . . . . 123
Password prompt for disruptive actions . . . 124
User authority . . . . . . . . . . . . 124
Controlling user access to the Hardware
Management Console. . . . . . . . . . 124
View only access to selected Hardware
Management Console and Support Element
tasks . . . . . . . . . . . . . . . 124
Removable writable media . . . . . . . . 124
LPAR controls . . . . . . . . . . . . 124
Auditability support . . . . . . . . . . 125
Unified Resource Manager . . . . . . . . . 125
Security considerations . . . . . . . . . . 126
Change management considerations . . . . . . 127
Remote operations and remote access . . . . . 127
Remote manual operations . . . . . . . . 128
Remote automated operations . . . . . . . 129
Chapter 10. Reliability, Availability,
and Serviceability (RAS). . . . . . . 131
Reliability . . . . . . . . . . . . . . 131
Availability . . . . . . . . . . . . . . 131
Asynchronous delivery of data . . . . . . 131
Alternate HMC preload function . . . . . . 131
Server/Application State Protocol (SASP)
support for load balancing . . . . . . . . 131
Access to Unified Resource Management
capabilities using APIs . . . . . . . . . 131
Alternate HMC preload function . . . . . . 132
Redundant zBX configurations . . . . . . 132
RAIM . . . . . . . . . . . . . . . 132
Redundant I/O interconnect . . . . . . . 132
Plan ahead features . . . . . . . . . . 132
Enhanced driver maintenance . . . . . . . 133
Dynamic OSC/PPS card and OSC Passthru card
switchover . . . . . . . . . . . . . 133
Program directed re-IPL . . . . . . . . . 133
Processor unit (PU) sparing . . . . . . . 133
Processor design . . . . . . . . . . . 134
Support Elements . . . . . . . . . . . 134
Hardware Management Console . . . . . . 134
Attaching to IBM service through the Internet 134
Hardware Management Console monitor system
events . . . . . . . . . . . . . . . 134
SAPs . . . . . . . . . . . . . . . 134
Application preservation . . . . . . . . 134
Dynamic coupling facility dispatching . . . . 135
Error Correction Code (ECC) . . . . . . . 135
Dynamic memory sparing . . . . . . . . 135
Memory scrubbing . . . . . . . . . . 135
Fixed HSA . . . . . . . . . . . . . 135
Dynamic changes to group capacity using an
API. . . . . . . . . . . . . . . . 135
Dynamic additions to a channel subsystem and
LPARs. . . . . . . . . . . . . . . 135
LPAR dynamic storage reconfiguration . . . . 135
CICS subsystem storage protect . . . . . . 135
Partial memory restart . . . . . . . . . 135
Dynamic I/O configuration. . . . . . . . 136
ESCON port sparing . . . . . . . . . . 136
FICON cascaded directors . . . . . . . . 136
FCP full-fabric connectivity. . . . . . . . 136
Maintenance/Upgrade for coupling . . . . . 136
Concurrent channel upgrade . . . . . . . 136
Redundant power feeds . . . . . . . . . 136
Redundant power and thermal subsystems . . 137
Dynamic FSP card switchover . . . . . . . 137
Preferred Time Server and Backup Time Server 137
Concurrent hardware maintenance . . . . . 137
Concurrent Licensed Internal Code (LIC) patch 138
Concurrent internal code change . . . . . . 138
Electronic Service Agent (Service Director). . . 138
Internal Battery Feature (IBF) . . . . . . . 138
Redundant coupling links . . . . . . . . 138
Large page support . . . . . . . . . . 139
Customer Initiated Upgrade (CIU) . . . . . 139
Capacity Upgrade on Demand (CUoD) . . . . 139
On/Off Capacity on Demand (On/Off CoD) 139
Capacity Backup (CBU) . . . . . . . . . 139
Capacity for Planned Events (CPE) . . . . . 139
Capacity provisioning . . . . . . . . . 139
System-managed CF structure duplexing (CF
duplexing) . . . . . . . . . . . . . 139
GDPS . . . . . . . . . . . . . . . 140
Concurrent undo CBU . . . . . . . . . 140
Fiber optic cabling. . . . . . . . . . . 140
CHPID Mapping Tool . . . . . . . . . 140
Multipath initial program load . . . . . . 140
Point-to-point SMP network . . . . . . . 141
System-initiated CHPID reconfiguration . . . 141
Link aggregation support . . . . . . . . 141
System power on/off cycle tracking . . . . . 141
Network Traffic Analyzer Trace facility . . . . 141
QDIO diagnostic synchronization. . . . . . 141
FICON purge path extended . . . . . . . 142
FICON Express8S, FICON Express8, and FICON
Express4 pluggable optics for individual
servicing . . . . . . . . . . . . . . 142
CICS subspace group facility . . . . . . . 142
Dynamic channel path management . . . . . 142
Serviceability . . . . . . . . . . . . . 142
Appendix A. zEnterprise 114 Version
2.11.1 purpose and description. . . . 143
Preventative Service Planning (PSP) bucket
considerations . . . . . . . . . . . . . 143
Software corequisites . . . . . . . . . . 143
Engineering change (EC) considerations . . . . 143
Support Element EC N48168 + MCLs . . . . 143
HMC EC N48180 + MCLs . . . . . . . . 143
Miscellaneous lower level ECs included in
Version 2.11.1 . . . . . . . . . . . . 144
Appendix B. Resource Link . . . . . 145
Resource Link functions . . . . . . . . . . 145
Appendix C. Capacity upgrades . . . 147
Permanent upgrades . . . . . . . . . . . 147
Temporary upgrades . . . . . . . . . . . 148
On/Off Capacity on Demand (On/Off CoD) 148
Capacity Backup (CBU) . . . . . . . . . 149
Capacity for Planned Events (CPE) . . . . . 150
Concurrent PU conversions. . . . . . . . . 150
Reserved CP support in LPAR mode . . . . . 150
Nondisruptive upgrades. . . . . . . . . . 151
Processor capacity downgrades . . . . . . . 151
Appendix D. Notices . . . . . . . . 153
Trademarks . . . . . . . . . . . . . . 154
Electronic emission notices . . . . . . . . . 154
Glossary . . . . . . . . . . . . . 159
Index . . . . . . . . . . . . . . . 171
Figures
1. z114 . . . . . . . . . . . . . . . 1
2. z114 frame configuration . . . . . . . . 13
3. z114 processor drawer . . . . . . . . . 14
4. HCA fanout connections . . . . . . . . 20
5. 14, 28, 42, and 56 zBX blade slots (Part 1 of 4) 30
6. 70 and 84 zBX blade slots (Part 2 of 4) . . . 31
7. 98 zBX blade slots (Part 3 of 4) . . . . . . 31
8. 112 zBX blade slots (Part 4 of 4) . . . . . . 32
9. I/O drawer layout . . . . . . . . . . 45
10. PCIe I/O drawer layout . . . . . . . . 46
11. Control Unit (CU) priority on ESCON channels
attached to a 9034 ES connection converter . . 69
12. Coupling link connectivity . . . . . . . 85
13. Cabling section of the PCHID report sample 111
14. Hardware Management Console configuration 114
15. Remote operation example configuration 128
Tables
1. Summary of changes . . . . . . . . . xix
2. z114 model structure . . . . . . . . . . 9
3. PUs per z114 model . . . . . . . . . . 15
4. I/O drawer and PCIe I/O drawer
configurations. . . . . . . . . . . . 21
5. Channels, links, ports, and adapters summary
per system . . . . . . . . . . . . . 21
6. System IBF hold times . . . . . . . . . 24
7. Supported operating systems for z114. . . . 39
8. Channel, port, adapter maximums . . . . . 41
9. Channels, links, and adapters with CHPID
type . . . . . . . . . . . . . . . 42
10. PCHIDs assignments for I/O drawer . . . . 48
11. PCHIDs assignments for PCIe I/O drawer 48
12. AID assignments for HCA fanout cards 49
13. Coupling link options . . . . . . . . . 84
14. IOPQ in a single-system environment . . . . 98
15. Channel card feature codes and associated
connector types and cable types . . . . . 109
16. Software corequisites . . . . . . . . . 143
17. ECs included in Version 2.11.1 . . . . . . 144
Safety
Safety notices
Safety notices may be printed throughout this guide. DANGER notices warn you of conditions or
procedures that can result in death or severe personal injury. CAUTION notices warn you of conditions
or procedures that can cause personal injury that is neither lethal nor extremely hazardous. Attention
notices warn you of conditions or procedures that can cause damage to machines, equipment, or
programs.
World trade safety information
Several countries require the safety information contained in product publications to be presented in their
national languages. If this requirement applies to your country, a safety information booklet is included in the
publications package shipped with the product. The booklet contains the translated safety information
with references to the US English source. Before using a US English publication to install, operate, or
service this IBM product, you must first become familiar with the related safety information in the IBM
Systems Safety Notices, G229-9054. You should also refer to the booklet any time you do not clearly
understand any safety information in the US English publications.
Laser safety information
All System z models can use features, such as FICON, that are fiber optic based and utilize lasers.

About this publication
This publication describes the IBM zEnterprise 114 models. It is intended for executives, data processing
managers, data processing technical staff, consultants, and vendors who wish to exploit z114 advantages.
You should be familiar with the various publications listed in Prerequisite publications and Related
publications. A glossary and an index are provided at the back of this publication.
What is included in this publication
This publication contains the following chapters and appendices:
v Chapter 1, Introduction, on page 1
v Chapter 2, Hardware characteristics, on page 13
v Chapter 3, Software support, on page 39
v Chapter 4, Channel subsystem structure, on page 41
v Chapter 5, I/O connectivity, on page 55
v Chapter 6, Sysplex functions, on page 83
v Chapter 7, Cryptography, on page 101
v Chapter 8, Cabling, on page 107
v Chapter 9, Hardware Management Console and Support Element, on page 113
v Chapter 10, Reliability, Availability, and Serviceability (RAS), on page 131
v Appendix A, zEnterprise 114 Version 2.11.1 purpose and description, on page 143
v Appendix B, Resource Link, on page 145
v Appendix C, Capacity upgrades, on page 147
v Appendix D, Notices, on page 153
Revisions
A technical change to the text is indicated by a vertical line to the left of the change.
Prerequisite publications
Before reading this publication you should be familiar with IBM z/Architecture, IBM S/390, and IBM
Enterprise Systems Architecture/390 (ESA/390) as described in the following publications:
v z/Architecture Principles of Operation, SA22-7832
v Enterprise System Architecture/390 Principles of Operation, SA22-7201
Related publications
Important:
Please ensure that you are using the most recent version of all related documentation.
Other IBM publications that you will find helpful and that you should use along with this publication are
in the following list. You can access these books from Resource Link under the Library section.
v System z Application Programming Interfaces, SB10-7030
v System z Application Programming Interfaces for Java, API-JAVA
v System z Common Information Model (CIM) Management Interface, SB10-7154
v System z Hardware Management Console Web Services API, SC27-2616-00
v zEnterprise System Capacity on Demand User's Guide, SC28-2605
v System z CHPID Mapping Tool User's Guide, GC28-6900
v System z ESCON and FICON Channel-to-Channel Reference, SB10-7034
v System z Hardware Management Console Operations Guide, SC28-6905
v System z Stand-Alone Input/Output Configuration Program (IOCP) User's Guide, SB10-7152
v System z Input/Output Configuration Program User's Guide for ICP IOCP, SB10-7037
v zEnterprise 114 Installation Manual, GC28-6902
v zEnterprise 114 Installation Manual for Physical Planning, GC28-6907
v zEnterprise System Processor Resource/Systems Manager Planning Guide, SB10-7155
v zEnterprise System Support Element Operations Guide, SC28-6906
v System z Small Computer Systems Interface (SCSI) IPL - Machine Loader Messages, SC28-6839
v System z Planning for Fiber Optic Links (ESCON, FICON, Coupling Links, and Open System Adapters),
GA23-0367
v zEnterprise System Service Guide for Trusted Key Entry Workstations, GC28-6901
v zEnterprise 114 Service Guide, GC28-6903
v System z Coupling Links I/O Interface Physical Layer, SA23-0395
v System z Service Guide for Hardware Management Consoles and Support Elements, GC28-6861
v System z Maintenance Information for Fiber Optics (ESCON, FICON, Coupling Links, and Open System
Adapters), SY27-2597
v System z Fibre Channel Connection (FICON) I/O Interface Physical Layer, SA24-7172
v Fiber Optic Cleaning Procedures, SY27-2604
v Set-Program-Parameter and the CPU-Measurement Facilities, SA23-2260
v CPU-Measurement Facility Extended Counters Definition for z10 and z196, SA23-2261
Ensemble publications
The following publications provide overview, planning, performance, and Hardware Management
Console (HMC) and Support Element task information about creating and managing an ensemble.
v zEnterprise System Introduction to Ensembles, GC27-2609
v zEnterprise System Ensemble Planning and Configuring Guide, GC27-2608
v zEnterprise System Ensemble Performance Management Guide, GC27-2607
v zEnterprise System Hardware Management Console Operations Guide for Ensembles, GC27-2615
v zEnterprise System Support Element Operations Guide, SC28-6906
v zEnterprise BladeCenter Extension Installation Manual Model 002, GC27-2610
v zEnterprise BladeCenter Extension Installation Manual Model 002 for Physical Planning, GC27-2611
v z/VM Systems Management Application Programming, SC24-6234
v z/VM Connectivity, SC24-6174
v z/VM CP Planning and Administration, SC24-6178
Parallel sysplex publications
A Parallel Sysplex system consists of two or more z/OS images coupled by coupling links to a common
Coupling Facility and synchronized by a common time source, such as Server Time Protocol (STP) or a
Sysplex Timer. A Parallel Sysplex can be used to present a single image to the user. A Parallel Sysplex can
use the coupling facility to provide data sharing among the systems participating in the Parallel Sysplex.
The following publications provide additional information to help you understand and prepare for a
Parallel Sysplex that uses coupling facility for data sharing purposes.
v z/OS Parallel Sysplex Application Migration, SA22-7662
v z/OS Parallel Sysplex Overview: Introducing Data Sharing and Parallelism in a Sysplex, SA22-7661
v z/OS MVS Setting Up a Sysplex, SA22-7625
OSA publications
The following publications provide additional information for planning and using the OSA-Express
features:
v zEnterprise, System z10, System z9 and zSeries Open Systems Adapter-Express Customer's Guide and
Reference, SA22-7935
v System z10 Open Systems Adapter-Express3 Integrated Console Controller Dual-Port User's Guide, SC23-2266
Cryptographic publications
The following publications provide additional information about the cryptographic function:
v z/OS Integrated Cryptographic Service Facility Trusted Key Entry PCIX Workstation User's Guide, SA23-2211
v z/OS Integrated Cryptographic Service Facility Administrator's Guide, SA22-7521
v z/OS Integrated Cryptographic Service Facility Application Programmer's Guide, SA22-7522
v z/OS Integrated Cryptographic Service Facility Messages, SA22-7523
v z/OS Integrated Cryptographic Service Facility Overview, SA22-7519
v z/OS Integrated Cryptographic Service Facility System Programmer's Guide, SA22-7520
IBM Smart Analytics Optimizer for DB2 for z/OS, V1.1 publications
The following publications provide additional information about the IBM Smart Analytics Optimizer:
v IBM Smart Analytics Optimizer for DB2 for z/OS Quick Start Guide, GH12-6915
v IBM Smart Analytics Optimizer for DB2 for z/OS Installation Guide, SH12-6916
v IBM Smart Analytics Optimizer for DB2 for z/OS Stored Procedures and Messages Reference, SH12-6917
v IBM Smart Analytics Optimizer Studio User's Guide, SH12-6919
v IBM Smart Analytics Optimizer for DB2 for z/OS Getting Started, GH12-6953
IBM DB2 Analytics Accelerator for z/OS V2.1 publications
The following publications provide additional information about the IBM DB2 Analytics Accelerator for
z/OS V2.1:
v IBM DB2 Analytics Accelerator for z/OS, V2.1, Quick Start Guide, GH12-6957
v IBM DB2 Analytics Accelerator for z/OS, V2.1, Installation Guide, SH12-6958
v IBM DB2 Analytics Accelerator for z/OS, V2.1, Stored Procedures Reference, SH12-6959
v IBM DB2 Analytics Accelerator Studio, V2.1, User's Guide, SH12-6960
v IBM DB2 Analytics Accelerator for z/OS, V2.1, Getting Started, GH12-6961
Miscellaneous publications
The following publications provide additional miscellaneous information:
v IBM Enterprise Storage Server Host Systems Attachment Guide, SC26-7446
v IBM Enterprise Storage Server Introduction and Planning Guide, 2105 and Models E10 and E20, GC26-7444
v Server Time Protocol Planning Guide, SG24-7280
v Server Time Protocol Implementation Guide, SG24-7281
v Getting Started with InfiniBand on System z10 and System z9, SG24-7539
Related web sites
The following web sites provide additional z114 information:
Resource Link
http://www.ibm.com/servers/resourcelink
Resource Link is a key element in supporting the z114 product life cycle. Some of the main areas
include:
v Education
v Planning
v Library
v CHPID Mapping Tool
v Customer Initiated Upgrade (CIU)
Supported operating systems information
http://www.ibm.com/systems/z/os/
Parallel Sysplex and coupling facility information
http://www.ibm.com/systems/z/pso/
FICON information
http://www.ibm.com/systems/z/hardware/connectivity
Open Systems Adapter information
http://www.ibm.com/systems/z/hardware/networking/index.html
Linux on System z information
v http://www.ibm.com/systems/z/os/linux
v http://www.ibm.com/developerworks/linux/linux390/
Note: When searching, specify Linux instead of All of dW.
IBM WebSphere DataPower Integration Appliance XI50 information
http://www.ibm.com/software/integration/datapower/xi50
Additional online information
Online information about defining tasks and completing tasks associated with z114 is available on the
Hardware Management Console and the Support Element. This information is available under the
Library category on the Hardware Management Console Welcome screen or the Support Element
Welcome page:
v Coupling Facility Control Code (CFCC) commands
v Coupling Facility Control Code (CFCC) messages
v Hardware Management Console Operations Guide
v Support Element Operations Guide
v Hardware Management Console Operations Guide for Ensembles.
Help is available for panels, panel options, and fields on panels.
Engineering change (EC) level considerations
Future enhancements available for z114 models may be dependent on the EC level of the Central
Processor Complex (CPC) and/or Hardware Management Console. Additionally, some enhancements
may further be dependent on the Microcode Load (MCL) level of the EC on the CPC and/or Hardware
Management Console. The required MCL level will be available to the IBM field representative.
EC levels can be tracked by accessing Resource Link, http://www.ibm.com/servers/resourcelink. Go to Tools
> Machine Information.
Accessibility
This publication is in Adobe Portable Document Format (PDF) and should be compliant with accessibility
standards. If you experience difficulties using this PDF file you can request a web-based format of this
publication. Go to Resource Link, http://www.ibm.com/servers/resourcelink.
v System z Application Assist Processor (zAAP)
A zAAP is a specialty engine that provides a Java execution environment under z/OS, enabling clients to
integrate and run new Java-based web applications alongside core z/OS business applications and
backend database systems.
v Integrated Facility for Linux (IFL)
An IFL is a specialty engine that provides additional processing capacity exclusively for Linux on
System z workloads.
v Internal Coupling Facility (ICF)
An ICF is a specialty engine that provides additional processing capability exclusively for the execution
of the Coupling Facility Control Code (CFCC) in a coupling facility partition.
v Up to 248 GB of Redundant Array of Independent Memory (RAIM)
RAIM technology provides protection at the dynamic random access memory (DRAM), dual inline
memory module (DIMM), and memory channel level.
v Up to 248 Gigabytes available real memory for Model M10. Up to 120 Gigabytes available real
memory for Model M05
v 8 GB (Gigabytes) fixed size Hardware System Area (HSA)
v IBM zEnterprise BladeCenter Extension (zBX) and IBM WebSphere DataPower Integration Appliance
XI50 support
v Geographically Dispersed Parallel Sysplex (GDPS) support
v HiperSockets, which provides high-speed communication between partitions running z/OS, z/VM, z/VSE, and
Linux on System z. HiperSockets requires no physical cabling. A single logical partition can connect up
to 32 HiperSockets. HiperSockets can also be used with the intraensemble data network (IEDN) for
data communications.
v Large page support (1 megabyte pages) provides performance improvement for a select set of
applications, primarily long running memory access intensive applications.
v Reduced impact of planned and unplanned server outages through:
Redundant I/O interconnect
Enhanced driver maintenance
Dynamic Oscillator/Pulse Per Second (OSC/PPS) card switchover and Oscillator (OSC) Passthru
card switchover
Dynamic FSP card switchover
Program directed re-IPL
System-initiated CHPID reconfiguration
Concurrent HCA fanout card hot-plug and rebalance.
v Enhanced driver maintenance allows Licensed Internal Code (LIC) updates to be performed in
support of new features and functions. When properly configured, the z114 is designed to support
activating a selected new LIC level concurrently. Certain LIC updates will not be supported by this
function.
v Redundant I/O interconnect helps maintain critical connections to devices. The z114 is designed so
that access to I/O is not lost in the event of a failure in an HCA fanout card, IFB cable or the
subsequent repair.
v Up to 30 logical partitions (LPARs)
v Server consolidation
The expanded capacity and enhancements to the I/O infrastructure facilitate the consolidation of
multiple servers into one z114 with increased memory and additional I/O, which may allow you to
reduce the number of servers while hosting additional applications.
The z114 provides the ability to define up to two logical channel subsystems (LCSS). Each LCSS is
capable of supporting up to 256 CHPID definitions and 15 LPARs (up to a maximum of 30 LPARs per
system).
v Nonraised floor support
v Top exit cabling
z114 provides the ability to route I/O cables and power cables through the top of the frame. Top exit
cabling is available for a nonraised floor and a raised floor configuration.
v Frame bolt-down kit
A bolt-down kit is available for a raised floor installation (9 to 13 inches, 12 to 22 inches, and 12 to 36
inches).
v High voltage DC universal input option
Ability to operate z114 using high voltage DC power (380-570 volts) in addition to AC power. The
direct high voltage DC design improves data center energy efficiency by removing the need for any
conversion.
v PCIe I/O drawer
The PCIe I/O drawer is a PCIe based infrastructure. The PCIe I/O drawer provides increased port
granularity and improved power efficiency and bandwidth over the I/O drawers.
v ESCON (16 ports) supporting 240 channels
v FICON Express8S, FICON Express8, and FICON Express4
Note: FICON Express4 features can only be carried forward. FICON Express4-2C and FICON Express8
features can be carried forward or ordered on MES using an RPQ for certain machine configurations.
FICON Express8S features:
FICON Express8S 10KM LX (2 channels per feature)
FICON Express8S SX (2 channels per feature)
FICON Express8 features:
FICON Express8 10KM LX (4 channels per feature)
FICON Express8 SX (4 channels per feature)
FICON Express4 features:
FICON Express4 10KM LX (4 channels per feature)
FICON Express4 4KM LX (4 channels per feature)
FICON Express4 SX (4 channels per feature)
FICON Express4-2C 4KM LX (2 channels per feature)
FICON Express4-2C SX (2 channels per feature)
Enhancements:
T10-DIF support for FCP channels for enhanced reliability
High Performance FICON for System z (zHPF) for FICON Express8S, FICON Express8, and FICON
Express4 features (CHPID type FC)
Extension to zHPF multitrack operations removing the 64 kB data transfer limit
Assigning World Wide Port Names (WWPNs) to physical and logical Fibre Channel Protocol (FCP)
ports using the WWPN tool
v OSA-Express4S, OSA-Express3 and OSA-Express2
Note: OSA-Express2 features can only be carried forward. OSA-Express3 10 Gigabit Ethernet,
OSA-Express3 Gigabit Ethernet, and OSA-Express3-2P Gigabit Ethernet features can be carried forward
or ordered on MES using an RPQ for certain machine configurations.
OSA-Express4S features:
OSA-Express4S GbE LX (2 ports per feature)
OSA-Express4S GbE SX (2 ports per feature)
OSA-Express4S 10 GbE LR (1 port per feature)
OSA-Express4S 10 GbE SR (1 port per feature)
OSA-Express3 features:
OSA-Express3 GbE LX (4 ports per feature)
OSA-Express3 GbE SX (4 ports per feature)
OSA-Express3 1000BASE-T Ethernet (4 ports per feature)
OSA-Express3-2P 1000BASE-T Ethernet (2 ports per feature)
OSA-Express3 10 GbE LR (2 ports per feature)
OSA-Express3 10 GbE SR (2 ports per feature)
OSA-Express3-2P GbE SX (2 ports per feature)
OSA-Express2 features:
OSA-Express2 GbE LX (2 ports per feature)
OSA-Express2 GbE SX (2 ports per feature)
OSA-Express2 1000BASE-T Ethernet (2 ports per feature)
Enhancements:
OSA-Express3 1000BASE-T Ethernet (CHPID type OSM) provides connectivity to the intranode
management network (INMN)
OSA-Express4S and OSA-Express3 10 GbE (CHPID type OSX) provide connectivity and access
control to the intraensemble data network (IEDN)
Inbound workload queuing (IWQ) and IWQ for Enterprise Extender (EE) for OSA-Express4S and
OSA-Express3.
v Cryptographic options:
Configurable Crypto Express3 and Crypto Express3-1P features.
CP Assist for Cryptographic Function (CPACF), which delivers cryptographic support on every PU
with data encryption/decryption. CPACF also provides a high performance secure key function that
ensures the privacy of key material used for encryption operations.
CPACF support includes AES for 128-, 192- and 256-bit keys; SHA-1, SHA-224, SHA-256, SHA-384,
and SHA-512 for message digest; PRNG, DES, and TDES
CPACF supports the following Message-Security Assist 4 instructions: Cipher Message with CFB
(KMF), Cipher Message with Counter (KMCTR), Cipher Message with OFB (KMO), and Compute
Intermediate Message Digest (KIMD)
Using the Support Element, you can enable or disable the encrypt DEA key and encrypt AES key
functions of the CPACF.
Elliptic Curve Cryptography (ECC) and RSA public-key cryptography support
User Defined Extension (UDX) support
Remote loading of ATMs and POS keys.
Dynamically add, move, or delete a Crypto Express3 and Crypto Express3-1P feature to or from an
LPAR.
Cryptographic migration wizard on TKE for migrating configuration data from one Cryptographic
coprocessor to another Cryptographic coprocessor.
The tamper-resistant hardware security module, which is contained within the Crypto Express3 and
Crypto Express3-1P, is designed to meet the FIPS 140-2 Level 4 security requirements for hardware
security modules.
v Fiber Quick Connect (FQC), an optional feature, is a fiber harness integrated in the z114 frame for a
quick connect to ESCON and FICON LX channels.
v Simple Network Management Protocol (SNMP) Client Libraries 3.0 support
v Common Information Model (CIM) API support
v Hardware Management Console Web Services (Web Services) API support
v CFCC Level 17 support
v TKE 7.1 Licensed Internal Code (LIC) support
v z/VM-mode partition (LPAR) support to contain processor types (CPs, IFLs, zIIPs, zAAPs, and ICFs)
v Plan ahead memory, an optional feature, allows you to preplan for future memory upgrades. The
memory upgrades can then be made concurrently and nondisruptively.
v Worldwide Port Name (WWPN) tool
The WWPN tool assists you in preplanning and setting up your Storage Area Network (SAN)
environment prior to the installation of your server. Therefore, you can be up and running much faster
after the server is installed. This tool applies to all FICON channels defined as CHPID type FCP (for
communication with SCSI devices). The WWPN tool is located on Resource Link.
v EAL5 certification
z114 is designed for and is currently pursuing the Common Criteria Evaluation Assurance Level 5
(EAL5) for the security of its LPARs that run under the control of the Processor Resource/Systems
Manager (PR/SM).
v Enhanced security using digital signatures
Digitally Signed Firmware (Licensed Internal Code) support provided by the HMC and the SE. This
support provides the following benefits:
Ensures that no malware can be installed on System z products during firmware updates (such as
transmission of MCL files, delivery of code loads, and restoration of critical data)
Designed to comply with FIPS (Federal Information Processing Standard) 140-2 Level 1 for
Cryptographic LIC (Licensed Internal Code) changes.
v Support to control user access to the HMC using a pattern name that defines:
Search criteria used to identify specific user IDs
LDAP server used for authentication
HMC user ID template used to identify logon permissions for the user IDs using this template
List of HMCs that can be accessed.
v Auditability function
HMC/SE tasks are available to generate, view, save, and offload audit reports (Audit & Log
Management task), to set up a schedule for generating, saving, and offloading audit information
(Customize Scheduled Operations task), to receive email notifications for select security log events
(Monitor task), and to remove the predefined password rules to prevent them from being mistakenly
used (Password Profiles task).
You can also manually offload or set up a schedule to automatically offload HMC and Support
Element log files, which can help you satisfy audit requirements.
z114 model
z114 (machine type 2818) is offered in two models. The model naming is representative of the maximum
number of customer configurable processor units (PUs) in the system. PUs are delivered in single engine
increments orderable by feature code.
The following table lists the two models and some of their characteristics, such as the range of PUs
allowed, the memory range of each model, and the number of I/O drawers and PCIe I/O drawers that
can be installed. The table lists the maximum values. These values are affected by the number of fanout
cards ordered and available.
Table 2. z114 model structure

Model   Processor Units (PUs)   Memory         Maximum number of I/O drawers / PCIe I/O drawers
M05     1 to 5                  8 to 120 GB    4 (see note 1) / 2
M10     1 to 10                 16 to 248 GB   3 (see note 1) / 2

Note:
1. An RPQ is required for 3 and 4 I/O drawers.
The CP features offered have varying levels of capacity. The capacity setting is based on the quantity and
type of CP feature. It is identified by a model capacity indicator. The model capacity indicator identifies
the number of active CPs rather than the total physical PUs purchased and identifies the type of capacity.
The model capacity indicators are identified as A0x - Z0x, where A - Z identifies the subcapacity level
and x is the number of active CP features (1 - 5).
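The naming rule lends itself to a short illustration. The following Python sketch is purely illustrative (it is not an IBM tool, and the function name is invented for this example); it decodes an indicator string according to the rule above, and the 130 in the assertion is simply 26 subcapacity levels multiplied by 5 possible CP counts.

# Illustrative sketch only: decode a z114 model capacity indicator of the form
# A0x - Z0x, where the letter A - Z is the subcapacity level and x is the
# number of active CP features (1 - 5).
import re
import string

def decode_capacity_indicator(indicator: str) -> dict:
    match = re.fullmatch(r"([A-Z])0([1-5])", indicator)
    if match is None:
        raise ValueError(f"not a valid z114 model capacity indicator: {indicator!r}")
    return {"subcapacity_level": match.group(1), "active_cps": int(match.group(2))}

# 26 subcapacity levels (A - Z) times 5 possible CP counts = 130 settings.
all_settings = [f"{level}0{cps}" for level in string.ascii_uppercase for cps in range(1, 6)]
assert len(all_settings) == 130

print(decode_capacity_indicator("A01"))  # {'subcapacity_level': 'A', 'active_cps': 1}
print(decode_capacity_indicator("Z05"))  # {'subcapacity_level': 'Z', 'active_cps': 5}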
Performance
With the expanded capacity of the z114 and enhancements to the I/O infrastructure, IBM continues to
facilitate the consolidation of multiple servers into one z114 with a substantial increase in:
v Available memory
v Advanced virtualization technologies
v LPARs
v Speed using InfiniBand
v Available processors in a single footprint
v 3.8 GHz high frequency z114 processor chip.
IBM's Large Systems Performance Reference (LSPR) method provides comprehensive z/Architecture
processor capacity ratios for different configurations of Central Processor Units across a wide variety of
system control program and workload environments. For z114, the z/Architecture processor subcapacity
indicator is defined with an A0x - Z0x notation, where x is the number of installed CPs (from one to five).
There are a total of 26 subcapacity levels, designated by the letters A through Z.
For more information on LSPR, refer to http://www.ibm.com/servers/resourcelink/lib03060.nsf/pages/
lsprindex?OpenDocument.
Resource Link
Resource Link is a key component in getting your z114 server up and running and maintained. Resource
Link provides: customized planning aids, a CHPID Mapping Tool, Customer Initiated Upgrades (CIU),
power estimation tool, and education courses. Refer to Appendix B, Resource Link, on page 145 for
detailed information about Resource Link and all the ways it can assist you with your z114.
Fiber optic cabling
To serve the cabling needs of System z customers, IBM Site and Facilities Services has fiber optic cabling
services available whether the requirements are product-level or enterprise-level. These services consider
the requirements for the protocols and media types supported on z114 (for example, ESCON, FICON,
OSA-Express), whether the focus is the data center, the Storage Area Network (SAN), the Local Area
Network (LAN), or the end-to-end enterprise.
The IBM Site and Facilities Services is designed to deliver convenient, packaged services to help reduce
the complexity of planning, ordering, and installing fiber optic cables. The appropriate fiber cabling is
selected based upon the product requirements and the installed fiber plant.
Refer to Chapter 8, Cabling, on page 107 for additional information.
z/Architecture
The z114, like its predecessors, supports 24-, 31-, and 64-bit addressing, as well as multiple arithmetic
formats. High-performance logical partitioning via Processor Resource/Systems Manager (PR/SM) is
achieved by industry-leading virtualization support provided by z/VM. The z/Architecture also provides
key technology features such as HiperSockets and the Intelligent Resource Director, which result in a high
speed internal network and an intelligent management with dynamic workload prioritization and
physical resource balancing.
IBM's z/Architecture, or a characteristic of a particular implementation, includes:
v New high-frequency z114 processor chip (3.8 GHz operation in system)
v Out-of-order execution of instructions
v Hardware accelerators on the chip for data compression, cryptographic functions and decimal floating
point
v Integrated SMP communications
v Instructions added to z114 chip to improve compiled code efficiency
v Enablement for software/hardware cache optimization
v Support for 1 MB segment frame
v Full hardware support for Hardware Decimal Floating-point Unit (HDFU)
v 64-bit general registers
v 64-bit integer instructions. Most ESA/390 architecture instructions with 32-bit operands have new
64-bit and 32- to 64-bit analogs
v 64-bit addressing is supported for both operands and instructions for both real addressing and virtual
addressing
v 64-bit address generation. z/Architecture provides 64-bit virtual addressing in an address space, and
64-bit real addressing.
v 64-bit control registers. z/Architecture control registers can specify regions and segments, or can force
virtual addresses to be treated as real addresses
v The prefix area is expanded from 4 KB to 8 KB
v Quad-word storage consistency
v The 64-bit I/O architecture allows CCW indirect data addressing to designate data addresses above
2GB for both format-0 and format-1 CCWs
v The 64-bit SIE architecture allows a z/Architecture server to support both ESA/390 (31-bit) and
z/Architecture (64-bit) guests and Zone Relocation is expanded to 64-bit for LPAR and z/VM
v 64-bit operands and general registers are used for all cryptographic instructions
v The implementation of 64-bit z/Architecture can help reduce problems associated with lack of
addressable memory by making the addressing capability virtually unlimited (16 exabytes).
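The 16-exabyte figure quoted above follows directly from the 64-bit address width; as a quick illustrative check (Python, with one exabyte taken as 2**60 bytes):

# Quick arithmetic behind the 16-exabyte figure: a 64-bit address reaches
# 2**64 bytes, and one exabyte here means 2**60 bytes.
addressable_bytes = 2 ** 64
print(addressable_bytes // 2 ** 60)  # 16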
For more detailed information about z/Architecture and a list of the supported instructions and facilities,
see the z/Architecture Principles of Operation. To determine what facilities are present in your configuration,
you can use the STORE FACILITY LIST EXTENDED instruction. Information about how to use this
instruction is described in the z/Architecture Principles of Operation.
Upgrade progression
z9 BC and z10 BC servers can be upgraded to a z114. An upgrade includes all frames, drawers, support
cards, and new I/O features.
A zEnterprise 114 (Model M10) can be upgraded to an air-cooled zEnterprise 196 (Model M15).
Unsupported features/functions
This section lists the features/functions that are not supported on z114 and a recommended alternative, if
applicable.
v FICON Express

Table 3. PUs per z114 model

Model   PUs   CPs     ICFs     IFLs     zAAPs   zIIPs   SAPs Std   SAPs Opt   Spare PUs   Memory (GB)
M05     5     0 - 5   0 - 5    0 - 5    0 - 2   0 - 2   2          0 - 2      0           8 to 120
M10     10    0 - 5   0 - 10   0 - 10   0 - 5   0 - 5   2          0 - 2      2           16 to 248
Notes:
1. Only one active PU (CP, ICF, or IFL) is required for any model. The total number of CPs purchased may not
exceed the total number available for that model.
2. One CP must be installed with or prior to any zIIPs or zAAPs that are installed. You can purchase one zAAP
and/or one zIIP for each CP on the system. This means that for every one CP, you can have one zAAP and one
zIIP.
3. An additional 8 GB is delivered and reserved for HSA.
4. PU selection is completed by identifying the number of features when ordering.
Central Processor (CP): A Central Processor (CP) is a PU that has the z/Architecture and ESA/390
instruction sets. It can run z/VM, z/OS, z/VSE, z/TPF, and Linux on System z operating systems and
the Coupling Facility Control Code (CFCC). z114 processors operate only in LPAR mode; consequently all
CPs are dedicated to a partition or shared between partitions. Reserved CPs can also be defined to a
logical partition, to allow for nondisruptive image upgrades.
All CPs within a configuration are grouped into a CP pool. Any z/VM, z/OS, z/VSE, z/TPF, and Linux
on System z operating systems can run on CPs that were assigned from the CP pool. Within the capacity
of the processor drawer, CPs can be concurrently added to an existing configuration permanently by
using CIU or CUoD, or temporarily by using On/Off CoD, CBU, and CPE.
Internal Coupling Facility (ICF): An ICF provides additional processing capability exclusively for the
execution of the Coupling Facility Control Code (CFCC) in a coupling facility LPAR. Depending on the
model, optional ICFs may be ordered. ICFs can only be used in coupling facility logical partitions.
However, they can be shared or dedicated, because only CFCC runs on these PUs. The use of dedicated
processors is strongly recommended for production coupling facility use. Software Licensing charges are
not affected by the addition of ICFs. For more information, refer to Coupling facility on page 87.
Integrated Facility for Linux (IFL): An IFL feature provides additional processing capacity exclusively
for Linux on System z workloads with no effect on the z114 model designation. An IFL can only be used
in Linux on System z or z/VM LPARs. However, it can be shared or dedicated because only Linux on
System z software runs on these CPs.
IFL is an optional feature for z114. Up to 10 IFL features may be ordered for z114 models, depending
upon the server model and its number of maximum unused PUs.
Software licensing charges are not affected by the addition of IFLs. For more information on software
licensing, contact your IBM representative.
The IFL enables you to
v Add processing capacity dedicated to running Linux on System z on a z114 server.
v Run multiple Linux on System z images independently of the traditional z/Architecture, with
associated savings of IBM z/Architecture.
v Define many virtual Linux on System z images on fewer real z114 resources.
As with any change in the LPAR configuration of a processor, the introduction of additional resources to
manage may have an impact on the capacity of the existing LPARs and workloads running on the server.
The size of the impact is dependent on the quantity of added resources and the type of applications
being introduced. Also, one should carefully evaluate the value of sharing resources (like CHPIDs and
devices) across LPARs to assure the desired balance of performance, security, and isolation has been
achieved.
System z Applications Assist Processor (zAAP): The System z Application Assist Processor is a
specialized processor unit that provides a Java execution environment for a z/OS environment. This
enables clients to integrate and run new Java-based web applications alongside core z/OS business
applications and backend database systems, and can contribute to lowering the overall cost of computing
for running Java technology-based workloads on the platform.
zAAPs are designed to operate asynchronously with the CPs to execute Java programming under control
of the IBM Java Virtual Machine (JVM). This can help reduce the demands and capacity requirements on
CPs.
The IBM JVM processing cycles can be executed on the configured zAAPs with no anticipated
modifications to the Java application. Execution of the JVM processing cycles on a zAAP is a function of
the Software Developer's Kit (SDK) 1.4.1 for zEnterprise, System z10, System z9, zSeries, z/OS, and
Processor Resource/Systems Manager (PR/SM).
Note: The zAAP is a specific example of an assist processor that is known generically as an Integrated
Facility for Applications (IFA). The generic term IFA often appears in panels, messages, and other online
information relating to the zAAP.
z/VM 5.4 or later supports zAAPs for guest exploitation.
System z Integrated Information Processor (zIIP): The IBM System z Integrated Information Processor
(zIIP) is a specialty engine designed to help improve resource optimization, enhancing the role of the
server as the data hub of the enterprise. The z/OS operating system, on its own initiative or acting on the
direction of the program running in SRB mode, controls the distribution of work between the general
purpose processor (CP) and the zIIP. Using a zIIP can help free capacity on the general purpose
processor.
z/VM 5.4 or later supports zIIPs for guest exploitation.
System Assist Processor (SAP): A SAP is a PU that runs the channel subsystem Licensed Internal Code
(LIC) to control I/O operations. One of the SAPs in a configuration is assigned as a Master SAP, and is
used for communication between the processor drawer and the Support Element. All SAPs perform I/O
operations for all logical partitions.
A standard SAP configuration provides a very well balanced system for most environments. However,
there are application environments with very high I/O rates (typically some z/TPF environments), and in
this case additional SAPs can increase the capability of the channel subsystem to perform I/O operations.
Additional SAPs can be added to a configuration by either ordering optional SAPs or assigning some
PUs as SAPs. Orderable SAPs may be preferred since they do not incur software charges, as might
happen if PUs are assigned as SAPs.
z/VM-mode LPARs: z114 allows you to define a z/VM-mode LPAR containing a mix of processor types
including CPs and specialty processors (IFLs, zIIPs, zAAPs, and ICFs). This support increases flexibility
and simplifies systems management by allowing z/VM 5.4 or later to manage guests to operate Linux on
System z on IFLs, operate z/VSE and z/OS on CPs, offload z/OS system software overhead, such as DB2
workloads, on zIIPs, and provide an economical Java execution environment under z/OS on zAAPs, all
in the same VM LPAR.
Memory
Each z114 CPC has its own processor memory resources. CPC processor memory can consist of both
central and expanded storage.
Central storage: Central storage consists of main storage, addressable by programs, and storage not
directly addressable by programs. Nonaddressable storage includes the Hardware System Area (HSA).
Central storage provides:
v Data storage and retrieval for the Processor Units (PUs) and I/O
v Communication with PUs and I/O
v Communication with and control of optional expanded storage
v Error checking and correction.
Part of central storage is allocated as a fixed-sized Hardware System Area (HSA), which is not
addressable by application programs. Factors affecting size are described in Hardware System Area
(HSA) on page 18.
In z/Architecture, storage addressing is 64 bits, allowing for an addressing range up to 16 exabytes.
Consequently, all central storage in a z114 can be used for central storage.
Key-controlled storage protection provides both store and fetch protection. It prevents the unauthorized
reading or changing of information in central storage.
Each 4 KB block of storage is protected by a 7-bit storage key. For processor-initiated store operations,
access key bits 0-3 from the active program status word (PSW) are compared with bits 0-3 from the
storage key associated with the pertinent 4 KB of storage to be accessed. If the keys do not match, the
central processor is notified of a protection violation, the data is not stored, and a program interruption
occurs. PSW key 0 matches any storage key. The same protection is active for fetch operations if bit 4 of
the storage key (the fetch protection bit) is on. Refer to zEnterprise System Processor Resource/Systems
Manager Planning Guide for more information on central storage.
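The store and fetch rules in the preceding paragraph can be condensed into a few lines. The following Python sketch is a conceptual model only, not the hardware implementation: the function name and the packing of the 7-bit storage key into an integer are choices made for this example, and the reference and change bits and other special cases are ignored.

# Conceptual model of the key-controlled protection check described above.
def access_allowed(psw_key: int, storage_key: int, is_store: bool) -> bool:
    if psw_key == 0:                                # PSW key 0 matches any storage key
        return True
    access_key = (storage_key >> 3) & 0xF           # bits 0-3: access-control key
    fetch_protected = bool((storage_key >> 2) & 1)  # bit 4: fetch-protection bit
    if is_store:
        return psw_key == access_key                # stores always require matching keys
    return (not fetch_protected) or psw_key == access_key  # fetches checked only if bit 4 is on

# Example: a 4 KB block whose storage key has access-control key 5, fetch protection off.
key = 5 << 3
assert access_allowed(psw_key=5, storage_key=key, is_store=True)      # matching key: store allowed
assert not access_allowed(psw_key=6, storage_key=key, is_store=True)  # mismatch: protection violation
assert access_allowed(psw_key=6, storage_key=key, is_store=False)     # fetch allowed (no fetch protection)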
Expanded storage: Expanded storage can optionally be defined on zEnterprise. It is controlled by the
control program, which can transfer 4 KB pages between expanded storage and central storage. The
control program can use expanded storage to reduce the paging and swapping load to channel-attached
paging devices in a storage-constrained environment and a heavy-paging environment.
z114 offers a flexible storage configuration which streamlines the planning effort by providing a single
storage pool layout at IML time. The storage is placed into a single pool which can be dynamically
converted to ES and back to CS as needed. Logical partitions are still specified to have CS and optional
ES as before. Activation of logical partitions as well as dynamic storage reconfigurations will cause LPAR
to convert the storage to the type needed.
The control program initiates the movement of data between main storage (the addressable part of central
storage) and expanded storage. No data can be transferred to expanded storage without passing through
main storage. With z114, a dedicated move page engine assists in efficiently transferring data between
main and expanded storage. Refer to zEnterprise System Processor Resource/Systems Manager Planning Guide
for more information on expanded storage.
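To make the data-flow rule concrete, the following short Python sketch (purely conceptual, not control-program code) models 4 KB pages moving between expanded storage and main storage, always staging through main storage:

    # Conceptual model only: 4 KB pages move between main storage (the addressable
    # part of central storage) and expanded storage, always through main storage.
    PAGE_SIZE = 4096

    main_storage = {}       # page_id -> page contents (addressable by programs)
    expanded_storage = {}   # page_id -> page contents (not directly addressable)

    def page_out(page_id):
        # Move a page from main storage to expanded storage.
        expanded_storage[page_id] = main_storage.pop(page_id)

    def page_in(page_id):
        # Move a page from expanded storage back into main storage.
        main_storage[page_id] = expanded_storage.pop(page_id)

    # A page residing only in expanded storage must be brought into main storage
    # before a program can reference it:
    expanded_storage["page A"] = bytes(PAGE_SIZE)
    page_in("page A")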
Memory cards: Up to 10 memory cards (DIMMs) reside within a processor drawer. The physical card
capacity can be 4 GB (FC 1605), 8 GB (FC 1606), or 16 GB (FC 1607). Each feature code includes 10
DIMMs.
Note: The sum of enabled memory on each card is the amount available for use in the system.
The following list contains some general rules for memory.
v Memory cards are Field Replaceable Units (FRUs), separate from the processor drawer.
v Larger capacity cards may be used for repair actions and manufacturing substitution. LICCC will dial
down to ordered size.
v Memory downgrades are not supported.
v Minimum memory orderable is 8 GB. Maximum memory of 248 GB is available on the M10 when 16
GB DIMMs are plugged in the 20 DIMM slots (10 in each processor drawer).
v Memory is only upgradeable in 8 GB increments between the defined minimum and maximum.
v LICCC dialing is used to offer concurrent memory upgrades within the physical memory card
installed.
v The memory LICCC record for the processor drawer is combined with the PU LICCC record for the
processor drawer. Both memory and PU LICCC are shipped on a single CD.
Hardware System Area (HSA): The HSA contains the CPC Licensed Internal Code (LIC) and
configuration dependent control blocks. HSA is not available for program use. The HSA has a fixed size
of 8 GB. Customer storage will no longer be reduced due to HSA size increase on a GA upgrade because
an additional 8 GB is always delivered and reserved for HSA.
Error Checking and Correction (ECC): Data paths between central storage and expanded storage (if
configured), and between central storage and the central processors and channels are checked using
either parity or Error Checking and Correction (ECC). Parity bits are included in each command or data
word. ECC bits are stored with data in central storage. ECC codes apply to data stored in and fetched
from central storage. Memory ECC detects and corrects single bit errors. Also, because of the memory
structure design, errors due to a single memory chip failure are corrected. Unrecoverable errors are
flagged for follow-on action. ECC on z114 is performed on the memory data bus as well as memory
cards.
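For readers unfamiliar with single-bit error correction, the following Python sketch shows the principle using a textbook Hamming(7,4) code. This is a teaching illustration only; it is not the ECC scheme actually used on z114 memory, which also corrects errors due to a single memory chip failure.

    # Teaching example of single-bit error correction with a Hamming(7,4) code.
    def hamming74_encode(d):
        # Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword.
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]          # codeword positions 1..7

    def hamming74_correct(c):
        # Detect and correct a single flipped bit in a 7-bit codeword.
        c = list(c)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + (s2 << 1) + (s3 << 2)         # 0 = no error, else error position
        if syndrome:
            c[syndrome - 1] ^= 1                      # flip the erroneous bit back
        return c

    word = hamming74_encode([1, 0, 1, 1])
    word[5] ^= 1                                      # inject a single-bit error
    assert hamming74_correct(word) == hamming74_encode([1, 0, 1, 1])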
Fanout cards
z114 has one or two processor drawers. Each processor drawer includes four fanout slots. There are six
fanout cards that will plug into the z114 HCA2-O fanout card, HCA2-O LR fanout card, HCA2-C
fanout card, HCA3-O fanout card, HCA3-O LR fanout card, PCIe fanout card.
The HCA2-O, HCA2-O LR, HCA3-O, HCA3-O LR fanout cards are used for coupling using fiber optic
cabling.
The HCA2-O fanout supports a two-port 12x IFB coupling link with a link data rate of 3 GBps (if
attached to a z9) and 6 GBps (if attached to a z10 or zEnterprise), and a maximum distance of 150 meters
(492 feet). The HCA2-O LR fanout supports a two-port 1x IFB coupling link with a link data rate of either
5.0 Gbps or 2.5 Gbps and a maximum unrepeated distance of 10 kilometers (6.2 miles) and a maximum
repeated distance of 100 kilometers (62 miles).
The HCA2-C fanout card supports two ports. Each port uses a 12x InfiniBand copper cable (6 GBps in
each direction) providing a connection to an I/O cage or I/O drawer.
The HCA3-O fanout supports a two-port 12x IFB coupling link with a link data rate of 6 GBps and a
maximum distance of 150 meters (492 feet). The HCA3-O fanout also supports the 12x IFB3 protocol if
four or fewer CHPIDs are defined per port. The 12x IFB3 protocol provides improved service times. An
HCA3-O fanout can communicate with an HCA2-O fanout on z196, z114, or z10.
The HCA3-O LR fanout is designed to support a four-port 1x IFB coupling link with a link data rate of 5.0
Gbps and a maximum unrepeated distance of 10 kilometers (6.2 miles) or a maximum repeated distance
of 100 kilometers (62 miles). With DWDM, the HCA3-O LR fanout supports a four-port 1x IFB coupling
link with a link data rate of either 2.5 or 5 Gbps. An HCA3-O LR fanout can communicate with an
HCA2-O LR fanout on z196, z114, or z10.
The PCIe fanout card provides a PCIe interface and is used to connect to the PCI-IN cards in the PCIe I/O
drawer.
The following is a list of the InfiniBand connections from a z114 to a z114, z196, z10 EC, z10 BC, z9 EC,
or z9 BC (an illustrative summary of these connection rules follows the list):
v HCA3-O fanout card on a z114 can connect to an:
HCA3-O fanout card on a z114 or z196
HCA2-O fanout card on a z114, z196, z10 EC, or z10 BC
v HCA2-O fanout card on a z114 can connect to an:
HCA3-O fanout card on a z114 or z196
HCA2-O fanout card on a z114, z196, z10 EC, or z10 BC
HCA1-O fanout card on a z9 EC or z9 BC
v HCA3-O LR fanout card on a z114 can connect to an:
HCA3-O LR fanout card on a z114 or z196
HCA2-O LR fanout card on a z114, z196, z10 EC, or z10 BC
v HCA2-O LR fanout card on a z114 can connect to an:
HCA3-O LR fanout card on a z114 or z196
HCA2-O LR fanout card on a z114, z196, z10 EC, or z10 BC
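A minimal Python sketch of the connection rules above, using only the card names from the list (the helper function and its name are hypothetical, not part of any IBM tool):

    # Remote fanout cards that each z114 fanout card can connect to, per the list above.
    FANOUT_COMPATIBILITY = {
        "HCA3-O":    {"HCA3-O", "HCA2-O"},
        "HCA2-O":    {"HCA3-O", "HCA2-O", "HCA1-O"},
        "HCA3-O LR": {"HCA3-O LR", "HCA2-O LR"},
        "HCA2-O LR": {"HCA3-O LR", "HCA2-O LR"},
    }

    def can_couple(local_card: str, remote_card: str) -> bool:
        # Return True if the z114 fanout card can couple to the given remote fanout card.
        return remote_card in FANOUT_COMPATIBILITY.get(local_card, set())

    assert can_couple("HCA2-O", "HCA1-O")      # 12x IFB connection to a z9 EC or z9 BC
    assert not can_couple("HCA3-O", "HCA1-O")  # the list gives HCA3-O no HCA1-O connection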
The fanout cards are inserted in a specific sequence from right to left in the processor drawer: first the
HCA2-C fanout cards and PCIe fanout cards used for I/O, then the HCA3-O LR fanout cards and the
HCA2-O LR fanout cards, and last the HCA3-O fanout cards and HCA2-O fanout cards.
Figure 4 on page 20 is a sample configuration showing connections from the fanout cards on the z114
processor drawer to another z114 processor drawer or z196 book, an I/O drawer, or a PCIe I/O drawer.
Oscillator/Pulse Per Second (OSC/PPS) cards and Oscillator (OSC) Passthru cards
On the z114 M05, two Oscillator/Pulse Per Second (OSC/PPS) cards (FPH605) are required. On the z114
M10, two OSC/PPS cards are required in the first processor drawer and two Oscillator (OSC) Passthru
cards (CEC2CD) are required in the second processor drawer. The two OSC/PPS cards serve as a primary
card and a backup card (and the two OSC Passthru cards serve as a primary card and a backup card, if
the second processor drawer is needed). If the primary OSC/PPS card (or OSC Passthru card) fails, the
corresponding backup card detects the failure and continues to provide the clock signal, preventing an
outage due to an oscillator failure.
Each OSC/PPS card also contains one pulse per second (PPS) port. If z114 is using STP and configured in
an STP-only CTN using NTP with PPS as the external time source, a cable connection from the PPS port
on the OSC/PPS card to the PPS output of the NTP server is required.
The OSC Passthru cards do not have the PPS connections. If the second processor drawer is needed, the
oscillator signal is passed through from the first processor drawer to the second processor drawer.
FSP cards
Two flexible service processor (FSP) cards (FC FPH606) are required on z114. The FSP cards provide a
subsystem interface (SSI) for controlling components.
Distributed Converter Assembly (DCA) cards
The Distributed Converter Assembly (DCA) cards are DC-to-DC converter cards in the processor drawer
that convert -350 volts DC to logic voltages. There are two DCA cards in a processor drawer.
I/O drawers and PCIe I/O drawers
z114 provides two types of I/O drawers: the I/O drawer and the PCIe I/O drawer.
Figure 4. HCA fanout connections (figure not reproduced; it illustrates connections from the fanout cards
of a z114 processor drawer: HCA3-O 12x IFB (6 GBps) to a z196 book and a z10 EC book, HCA2-O LR
1x IFB (5 Gbps) to a z10 BC processor drawer, an ISC-3 link (2 Gbps), a PCIe fanout connection (8 GBps)
to the PCI-IN cards in a PCIe I/O drawer, and an HCA2-C connection (6 GBps) to the IFB-MP cards in an
I/O drawer)
The I/O drawers and PCIe I/O drawers allow you to add channels, up to the amount supported by the
drawers and the processor drawers. You can have multiple I/O drawers in your configuration depending
on the z114 model. Table 4 displays the different I/O drawer and PCIe I/O drawer configurations.
Table 4. I/O drawer and PCIe I/O drawer configurations

         Model M05 (1 processor drawer)                  Model M10 (2 processor drawers)
I/O      # of I/O   # of PCIe     PCIe I/O      I/O      # of I/O   # of PCIe     PCIe I/O
slots    drawers    I/O drawers   slots         slots    drawers    I/O drawers   slots
0        0          1             32            0        0          1             32
0        0          2             64            0        0          2             64
1-8      1          0             0             1-8      1          0             0
1-8      1          1             32            1-8      1          1             32
9-16     2          0             0             1-8      1          2             64
9-16     2          1             32            9-16     2          0             0
17-24    3 [1]      0             0             9-16     2          1             32
25-32    4 [1]      0             0             17-24    3 [1]      0             0

Note:
1. An RPQ is required for 3 and 4 I/O drawers.
I/O features
The I/O cards that are supported in z114 are shown in Table 5. There are a total of 8 I/O slots per I/O
drawer and 32 I/O slots per PCIe I/O drawer.
Table 5 shows the I/O layout for the two types of drawers. You can also refer to Chapter 5, I/O
connectivity, on page 55 for more detailed information on the I/O channels and adapters.
Notes:
1. Crypto Express3 and Crypto Express3-1P features use I/O slots. The Crypto Express3 feature has two
PCIe adapters and the Crypto Express3-1P feature has one PCIe adapter. The Crypto Express3 and
Crypto Express3-1P features do not have ports and do not use fiber optic cables. They are not defined
in the IOCDS, and, therefore, do not receive CHPID numbers. However, they are assigned a PCHID.
2. HCA2-O, HCA2-O LR, HCA3-O, and HCA3-O LR are not I/O features. They are fanout cards in the
processor drawer.
Table 5. Channels, links, ports, and adapters summary per system

Feature                                              Maximum          Maximum                      Channels/Links/Adapters   Purchase
                                                     features         connections                  per feature               increment
16-port ESCON (FC 2323) [1, 7]                       16               240 channels                 16 channels [2]           4 channels
FICON Express8S 10KM LX (FC 0409) [1, 8]
FICON Express8S SX (FC 0410) [1, 8]                  64               128 channels                 2 channels                2 channels
FICON Express8 10KM LX (FC 3325) [1, 7]
FICON Express8 SX (FC 3326) [1, 7]                   16               64 channels                  4 channels                4 channels
FICON Express4 10KM LX (FC 3321) [1, 7]
FICON Express4 4KM LX (FC 3324) [1, 7]
FICON Express4 SX (FC 3322) [1, 7]                   16               64 channels                  4 channels                4 channels
FICON Express4-2C 4KM LX (FC 3323) [1, 7]
FICON Express4-2C SX (FC 3318) [1, 7]                16               32 channels                  2 channels                2 channels
OSA-Express4S GbE LX (FC 0404) [8]
OSA-Express4S GbE SX (FC 0405) [8]                   48 [9]           96 ports                     2 ports                   1 feature
OSA-Express4S 10 GbE LR (FC 0406) [8]
OSA-Express4S 10 GbE SR (FC 0407) [8]                48 [9]           48 ports                     1 port                    1 feature
OSA-Express3 GbE LX (FC 3362) [7]
OSA-Express3 GbE SX (FC 3363) [7]                    16               64 ports                     4 ports                   1 feature
OSA-Express3 10 GbE LR (FC 3370) [7]
OSA-Express3 10 GbE SR (FC 3371) [7]                 16               32 ports                     2 ports                   1 feature
OSA-Express3-2P GbE SX (FC 3373) [7]                 16               32 ports                     2 ports                   1 feature
OSA-Express3 1000BASE-T Ethernet (FC 3367) [7]       16               64 ports                     4 ports                   1 feature
OSA-Express3-2P 1000BASE-T Ethernet (FC 3369) [7]    16               32 ports                     2 ports                   2 ports
OSA-Express2 GbE LX (FC 3364) [7]
OSA-Express2 GbE SX (FC 3365) [7]
OSA-Express2 1000BASE-T Ethernet (FC 3366) [7]       16               32 ports                     2 ports                   1 feature
Crypto Express3 (FC 0864) [6]                        8                16 PCIe adapters             2 PCIe adapters           2 features
Crypto Express3-1P (FC 0871) [6]                     8                8 PCIe adapters              1 PCIe adapter            1 feature
ISC-3 [1]                                            12               48 links                     4 links                   1 link
12x IFB (HCA2-O (FC 0163)) [1, 5]                    4 [3] / 8 [4]    8 links [3] / 16 links [4]   2 links                   2 links
12x IFB (HCA3-O (FC 0171)) [1, 5]                    4 [3] / 8 [4]    8 links [3] / 16 links [4]   2 links                   2 links
1x IFB (HCA2-O LR (FC 0168)) [1, 5]                  4 [3] / 8 [4]    8 links [3] / 12 links [4]   2 links                   2 links
1x IFB (HCA3-O LR (FC 0170)) [1, 5]                  4 [3] / 8 [4]    16 links [3] / 32 links [4]  4 links                   4 links

Notes:
1. A minimum of one I/O feature (ESCON or FICON) or one coupling link feature (12x InfiniBand, 1x InfiniBand,
or ISC-3) is required.
2. Each ESCON feature has 16 channels, of which a maximum of 15 may be activated. One is reserved as a spare.
3. Applies to Model M05.
4. Applies to Model M10.
5. IFBs are not included in the maximum feature count for I/O slots, but they are included in the CHPID count.
6. An initial order for Crypto Express3 is four PCIe adapters (two features) and an initial order for Crypto
Express3-1P is two PCIe adapters (two features).
7. This feature can only be used in an I/O drawer.
8. This feature can only be used in a PCIe I/O drawer.
9. For every OSA-Express3 feature in the configuration, the OSA-Express4S maximum number of features is
reduced by two.
IFB-MP and PCI-IN cards
The IFB-MP card can only be used in the I/O drawer. The IFB-MP cards provide the intraconnection from
the I/O drawer to the HCA2-C fanout card in the processor drawer.
The PCI-IN card can only be used in the PCIe I/O drawer. The PCI-IN cards provide the intraconnection
from the PCIe I/O drawer to the PCIe fanout card in the processor drawer.
Distributed Converter Assembly (DCA) cards
The Distributed Converter Assembly (DCA) cards are DC-to-DC converter cards in the I/O drawer and
PCIe I/O drawer that convert -350 volts DC to logic voltages. There are two DCA cards in each I/O
drawer.
PSC24V card
The PSC24V card is a power sequence control (PSC) card used to turn on/off specific control units from
the CPC. The PSC24V card in the I/O drawer provides the physical interface between the cage controller
and the PSC boxes, located outside the I/O drawer in the system frame. Only one PSC24V card is
required on z114. Each card has two jacks that are used to connect to the PSC boxes.
The PSC feature is not supported on the PCIe I/O drawer, so the PSC24V card is not available with the
PCIe I/O drawer.
Note: The PSC24V card is not hot pluggable.
For more information on PSC, refer to Power sequence controller on page 29.
Support Element
The z114 is supplied with two integrated laptop computers that function as the primary and alternate
Support Elements. Positioned over each other in the front of the A frame, the Support Elements
communicate with the CPC and each other through the service network. The Support Element sends
hardware operations and management controls to the Hardware Management Console for the CPC and
allows for independent and parallel operational control of a CPC from the Hardware Management
Console. The second, or alternate, Support Element is designed to function as a backup and to preload
Support Element Licensed Internal Code.
The Support Element contains the following:
v Licensed Internal Code for the CPC.
v Hardware system definitions for the CPC (contained in the reset, image, and load profiles for the CPC
and IOCDs).
v Battery-powered clock used to set the CPC time-of-day (TOD) clock at power-on reset. In STP timing
mode, the CPC TOD clock is initialized to Coordinated Server Time (CST).
v Two 1 Gb SMC Ethernet hubs (FC 0070) to manage the Ethernet connection between the Support
Elements and the Hardware Management Console.
The SMC hubs are offered as part of the initial order or as a Manufacturing Engineering Specification
(MES). They are shipped automatically on every order unless you deselect FC 0070.
v An Ethernet LAN adapter or LAN on board to connect the Support Element to the CPC through the
power service network.
For more detailed information on the Support Element, refer to Chapter 9, Hardware Management
Console and Support Element, on page 113 or to the zEnterprise System Support Element Operations Guide.
System power supply
The system power supply located in the top of the A frame provides the control structure to support
the z114 power requirements for the processor drawers and up to four I/O drawers.
The z114 power subsystem basic components include:
v Bulk Power Assembly (BPA) - provides the prime power conversion and high voltage DC distribution.
v Bulk Power Controller (BPC) - is the main power controller and cage controller for the BPA.
The BPC is the principal control node for the z114 diagnostic/service and power/cooling system. It is
the cage controller for the BPA cage and connects to both ethernet service networks.
v Bulk Power Distribution (BPD) - distributes -350 VDC and RS422 communications to logic cage power
Field Replaceable Units (FRUs)
v Bulk Power Fan (BPF) - is a cooling device
v Bulk Power Regulator (BPR) - is the main front end power supply that converts line voltage (DC and
AC) to regulated -350 VDC
v Bulk Power Enclosure (BPE) - is the metal enclosure that contains the back plane
v Bulk Power Hub (BPH) - is the Ethernet hub for system control and monitoring. BPH contains 24 ports
(8 1-Gigabit Ethernet ports and 16 10/100 Ethernet ports).
v Internal Battery Feature (IBF) - provides battery power to preserve processor data if there is a power
loss.
v Distributed Converter Assemblies (DCAs).
Internal Battery Feature (IBF)
The optional Internal Battery Feature (FC 3212) provides the function of a local uninterruptible power
source. It has continuous self-testing capability for battery backup which has been fully integrated into
the diagnostics, including Remote Service Facility (RSF) support.
The IBF provides battery power to preserve processor data if there is a power loss on both of the AC (or
DC) power supplies.
In the event of input power interruption to the system, the IBF provides sustained system operation for
the times listed in the following table.
Table 6. System IBF hold times

Configuration                            Model M05 (1 processor drawer)   Model M10 (2 processor drawers)
0 PCIe I/O drawers, 0 I/O drawers        40 min                           19 min
0 PCIe I/O drawers, 1 I/O drawer         22 min                           12 min
1 PCIe I/O drawer, 0 I/O drawers         16 min                           9 min
0 PCIe I/O drawers, 2 I/O drawers        16 min                           9 min
1 PCIe I/O drawer, 1 I/O drawer          10 min                           7.5 min
0 PCIe I/O drawers, 3 I/O drawers        10 min                           7.5 min
2 PCIe I/O drawers, 0 I/O drawers        9 min                            6 min
1 PCIe I/O drawer, 2 I/O drawers         9 min                            6 min
0 PCIe I/O drawers, 4 I/O drawers        9 min                            -
2 PCIe I/O drawers, 1 I/O drawer         -                                5 min

Note: The times listed are minimum values because they are calculated for maximum possible plugging for any
given configuration. Actual times might be greater.
If the IBF is ordered, the battery units must be installed in pairs. The maximum number of battery units per system is
two (one per side).
The IBF is fully integrated into the server power control/diagnostic system that provides full battery
charge, and test and repair diagnostics. For more information about the IBF, see zEnterprise 114 Installation
Manual for Physical Planning.
Internet Protocol Version 6
IPv6 is the protocol designed by the Internet Engineering Task Force (IETF) to replace Internet Protocol
Version 4 (IPv4) to satisfy the demand for additional IP addresses. IPv6 expands the IP address space
from 32 bits to 128 bits, enabling a far greater number of unique IP addresses.
IPv6 is available for the Hardware Management Console and Support Element customer network, the
Trusted Key Entry (TKE) workstation network connection to operating system images, OSA-Express4S,
OSA-Express3, OSA-Express2, and HiperSockets.
The HMC and Support Elements are designed to support customer internal and open networks that are
configured to use only IPv6 addresses, only IPv4 addresses, or a combination of the two.
Multiple Subchannel Sets (MSS)
The multiple subchannel sets structure allows increased device connectivity for Parallel Access Volumes
(PAVs). Two subchannel sets per Logical Channel Subsystem (LCSS) are designed to enable a total of
65,280 subchannels in set-0 and the addition of 64K - 1 subchannels in set-1. Multiple subchannel sets is
supported by z/OS V1.12 and Linux on System z. This applies to the ESCON, FICON, and zHPF
protocols.
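As a small illustrative calculation (assuming 64K = 65,536), the subchannel counts quoted above work out as follows; the snippet is not part of any configuration tool:

    # Subchannel counts with two subchannel sets per LCSS.
    set_0 = 65_536 - 256   # 65,280 subchannels in set-0, the figure quoted above
    set_1 = 65_536 - 1     # 64K - 1 = 65,535 additional subchannels in set-1
    print(set_0, set_1, set_0 + set_1)   # 65280 65535 130815 subchannels per LCSS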
IPL from an alternate subchannel set
z114 allows you to IPL a device from subchannel set 1, in addition to subchannel set 0, in supported
operating systems such as z/OS. Devices used early during IPL processing can now be accessed using
subchannel set 1. This is intended to allow Metro Mirror (PPRC) secondary devices, defined with the same
device number and a new device type in an alternate subchannel set, to be used for IPL, IODF, and
stand-alone dump volumes when needed.
LPAR mode
LPAR mode is the mode of operation for the z114. It allows you to:
v Define ESA/390, ESA/390 TPF, coupling facility, z/VM-mode, and Linux-only logical partitions
v Define and use up to the maximum installed storage as central storage in a single logical partition.
v Dynamically reconfigure storage between logical partitions.
You can define and activate up to 30 logical partitions for each CPC.
After you define and activate an ESA/390 or ESA/390 TPF logical partition, you can load a supporting
operating system into that logical partition.
Processor Resource/Systems Manager (PR/SM) enables logical partitioning of the CPC.
Resources for each logical partition include:
v Processor units (CPs, ICFs, IFLs, zIIPs, or zAAPs)
v Storage (central storage and expanded storage)
v Channels.
Processor units
On z114, PUs can be used within a logical partition as Central Processors (CPs), Internal Coupling
Facilities (ICFs), Integrated Facilities for Linux (IFLs), System z Integrated Information Processors (zIIPs), or
System z Application Assist Processors (zAAPs). The initial allocation of CPs, ICFs, IFLs, zIIPs, and
zAAPs to a logical partition is made when the logical partition is activated.
Within a logical partition on z114, they may be used as follows:
v CPs can be dedicated to a single logical partition or shared among multiple logical partitions. The use
of CP resources shared between logical partitions can be limited and modified by operator commands
while the logical partitions are active. CPs that are dedicated to a logical partition are available only to
that logical partition.
v ICFs, IFLs, zIIPs, and zAAPs are available as orderable features on z114 for use in a logical partition.
ICFs are available as a feature for use in a coupling facility (CF) logical partition (refer to Internal
Coupling Facility (ICF) on page 15 for additional information). IFLs are available as a feature for
running Linux on System z. zAAPs are available as a feature for providing special purpose assists that
execute Java programming under control of the IBM Java Virtual Machine (JVM) (refer to System z
Applications Assist Processor (zAAP) on page 16 for additional information).
Storage
Before you can activate logical partitions, you must define central storage and optional expanded storage
to the logical partitions. Refer to Central storage on page 17 and Expanded storage on page 17 for
more information.
All installed storage is initially configured as central storage. This installed storage can be divided up
among logical partitions as workload requirements dictate, including, if desired, allocating all of the
installed storage to a single logical partition as central storage. When a logical partition is activated, the
storage resources are allocated in contiguous blocks.
For z114, logical partition central storage granularity is a minimum of 128 MB and increases as the
amount of storage defined for the logical partition increases. You can dynamically reallocate storage
resources for z/Architecture and ESA/390 architecture logical partitions using Dynamic Storage
Reconfiguration. Dynamic storage reconfiguration allows both central and expanded storage allocated to
a logical partition to be changed while the logical partition is active. It provides the capability to reassign
storage from one logical partition to another without the need to POR the CPC or IPL the recipient
logical partition. For more information, refer to zEnterprise System Processor Resource/Systems Manager
Planning Guide.
Note: You cannot share allocated central storage or expanded storage among multiple logical partitions.
Expanded storage granularity for logical partitions is fixed at 128 MB.
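The following Python sketch illustrates rounding a requested partition size up to a storage granularity. It assumes only the documented 128 MB minimum; the larger granularities used for bigger partitions (see the zEnterprise System Processor Resource/Systems Manager Planning Guide) are not modeled, and the helper name is invented for this example.

    MB = 1
    GB = 1024 * MB

    def round_to_granularity(requested_mb: int, granularity_mb: int = 128) -> int:
        # Round a requested LPAR central storage size (in MB) up to the granularity.
        # 128 MB is the documented minimum granularity; larger partitions use a
        # larger granularity, which this sketch does not model.
        blocks, remainder = divmod(requested_mb, granularity_mb)
        return (blocks + (1 if remainder else 0)) * granularity_mb

    print(round_to_granularity(5 * GB + 1 * MB))   # 5248: rounded up to the next 128 MB multiple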
Channels
You can allocate channels to logical partitions as follows:
v Dedicated channels
Dedicated channels are unshared channels and can only be used by one logical partition. All channel
types supported by the model can be allocated as dedicated channels.
v Reconfigurable channels
Reconfigurable channels are unshared channels that can be moved among logical partitions within an
LCSS but can only belong to one logical partition at a given time. All channel types supported by the
model can be allocated as reconfigurable channels.
v Shared channels
The Multiple Image Facility (MIF) allows channels to be shared among multiple logical partitions in a
Logical Channel Subsystem (LCSS). Shared channels are configured to a logical partition giving the
logical partition a channel image of the shared channel that it can use. Each channel image allows a
logical partition to independently access and control the shared channel as if it were a physical channel
assigned to the logical partition. For more information, refer to Multiple Image Facility (MIF) on
page 51.
You can define the channels, shown in Table 9 on page 42, as shared among multiple logical partitions
within an LCSS so that the shared channels can be accessed by more than one logical partition in an
LCSS at the same time.
On z114 with coupling facility logical partitions, CFP, CBP, and ICP channels may be shared by many
ESA logical partitions and one coupling facility logical partition.
v Spanned channels
Spanned channels are channels that are configured to multiple Logical Channel Subsystems (LCSSs)
and are transparently shared by any or all of the configured LPARs without regard to the LCSS to
which the LPAR is configured.
v Device Sharing
You can share a device among logical partitions by:
Using a separate channel for each logical partition
Using a shared channel
Using a spanned channel.
LPAR time offset support
Logical partition time offset support provides for the optional specification of a fixed time offset
(specified in days, hours, and quarter hours) for each logical partition activation profile. The offset, if
specified, will be applied to the time that a logical partition will receive from the Current Time Server
(CTS) in a Coordinated Timing Network (CTN).
This support can be used to address the customer environment that includes multiple local time zones
with a Current Time Server (CTS) in a CTN.
It is sometimes necessary to run multiple Parallel Sysplexes with different local times and run with the
time set to GMT=LOCAL. This causes the results returned in the store clock (STCK) instruction to reflect
local time. With logical partition time offset support, logical partitions on each z114 CPC in a Parallel
Sysplex that need to do this can specify an identical time offset that will shift time in the logical partition
sysplex members to the desired local time. Remaining logical partitions on the z114 CPCs can continue to
participate in current date production Parallel Sysplexes utilizing the same CTS with the time provided
by the Sysplex Timer(s) or CTS.
This function is supported by all in service releases of z/OS.
For more information on logical partitions, refer to zEnterprise System Processor Resource/Systems Manager
Planning Guide and to the System z Input/Output Configuration Program User's Guide for ICP IOCP.
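As a worked illustration of an offset specified in days, hours, and quarter hours, the following Python sketch (illustrative names only, not an IBM interface) applies such an offset to a Coordinated Server Time value:

    from datetime import datetime, timedelta

    def lpar_time_offset(days: int, hours: int, quarter_hours: int, west: bool = False) -> timedelta:
        # Build an LPAR time offset from the units used in the activation profile.
        offset = timedelta(days=days, hours=hours, minutes=15 * quarter_hours)
        return -offset if west else offset

    # Example: logical partitions whose local time runs 5 hours behind the
    # Coordinated Server Time (CST) received from the Current Time Server.
    cst = datetime(2012, 1, 1, 12, 0, 0)
    local = cst + lpar_time_offset(days=0, hours=5, quarter_hours=0, west=True)
    print(local)   # 2012-01-01 07:00:00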
Server Time Protocol (STP)
Server Time Protocol (STP) (FC 1021) provides the means for multiple zEnterprise, System z10, and
System z9 servers to maintain time synchronization with each other. STP is designed to synchronize
servers configured in a Parallel Sysplex or a basic sysplex (without a coupling facility), as well as servers
that are not in a sysplex.
STP uses a message-based protocol to transmit timekeeping information over externally defined coupling
links between servers. STP distributes time messages in layers (called stratums). The timekeeping
information is needed to determine the Coordinated Server Time (CST) at each server. The coupling links
used to transport STP messages include ISC-3 links configured in peer mode and IFB links. These links
can be the same links already being used in a Parallel Sysplex for coupling facility communications.
For more details about Server Time Protocol, refer to Server Time Protocol (STP) on page 91.
For hardware and software requirements, see the STP website located at http://www.ibm.com/systems/z/
advantages/pso/stp.html.
Hardware Management Console (HMC)
The Hardware Management Console (HMC) is a desktop PC. The HMC performs system management
tasks or performs both system management tasks and ensemble management tasks. The HMC provides a
single point of control and single system image for those CPCs (nodes) defined to it. (A single CPC,
including any optionally attached zBX, is called a node.)
When managing an ensemble, a pair of HMCs is required: the primary HMC and the alternate HMC.
The HMC managing the nodes in an ensemble is referred to as the primary HMC. The primary HMC can
also manage CPCs that are not members of an ensemble. The alternate HMC is used as a backup. If the
primary HMC fails, the alternate HMC will inherit the role of the primary HMC.
An HMC, other than the primary HMC or the alternate HMC, can manage CPCs that are in an ensemble.
However, it cannot perform any ensemble management tasks.
The HMC can manage up to 100 CPCs. However, only eight of these CPCs can be a member of an
ensemble managed by that HMC. The other CPCs can be members of an ensemble managed by other
HMCs. A CPC that is not a member of an ensemble can be managed by up to 32 HMCs. A single node
can be a member of only one ensemble.
The HMCs utilize VLAN and an included PCI Express Ethernet adapter for handling both single and
dual Ethernet configurations. The HMC is supplied with two Ethernet ports.
The physical location of the Hardware Management Console hardware features (standard and/or
optional) is dictated by the specific PC. Some features can be mutually exclusive with other features
depending on the PC model. Each CPC must be connected to at least one Hardware Management
Console on the same network as the Support Elements of the CPC.
For more detailed information on the Hardware Management Console, refer to Chapter 9, Hardware
Management Console and Support Element, on page 113 or to the System z Hardware Management Console
Operations Guide.
Top exit cabling
For z114, you can optionally route all I/O cables (ESCON, FICON, OSA-Express, 12x InfiniBand, 1x
InfiniBand, and ISC-3 cables, as well as 1000BASE-T Ethernet cables) from I/O drawers and PCIe I/O
drawers through the top of the frame. This option (FC 7920) improves the airflow, thereby improving efficiency.
The power cables can also exit (FC 7901) through the top of the frame.
Top exit cabling is available for a nonraised floor and a raised floor configuration. However, for a
nonraised floor configuration, the I/O cables and the power cables must all route through the top or all
route through the bottom. You cannot have a mixture.
Extensions are added to each corner of the frame with this option.
Bolt-down kit
A bolt-down kit is available for a raised floor installation (9 to 13 inches, 12 to 22 inches, and 12 to 36
inches) and a nonraised floor installation. You will need to order only one bolt-down kit.
Power sequence controller
The optional power sequence controller (PSC) is available on the z114. The PSC feature provides the
ability to turn on/off specific control units from the CPC. The PSC feature consists of two PSC boxes, one
PSC24V card, and PSC cables.
You can order one PSC feature on your z114.
IBM zEnterprise BladeCenter Extension (zBX)
zBX, machine type 2458 (model number 002), is a hardware infrastructure that consists of a BladeCenter
chassis attached to a z114 or a z196. zBX can contain optimizers (IBM Smart Analytics Optimizer for DB2
for z/OS, V1.1 (IBM Smart Analytics Optimizer) and IBM WebSphere DataPower Integration Appliance
XI50 for zEnterprise (DataPower XI50z)) and IBM blades (select IBM POWER7 blades and select IBM
System x blades, supporting Linux and Microsoft Windows).
The IBM Smart Analytics Optimizer processes and analyzes CPU intensive DB2 queries. It provides a
single enterprise data source for queries providing fast and predictable query response time. For a quick
step-through of prerequisite, installation, and configuration information about the IBM Smart Analytics
Optimizer, see the IBM Smart Analytics Optimizer for DB2 for z/OS Getting Started.
DataPower XI50z is used to help provide multiple levels of XML optimization, streamline and secure
valuable service-oriented architecture (SOA) applications, and provide drop-in integration for
heterogeneous environments by enabling core Enterprise Service Bus (ESB) functionality, including
routing, bridging, transformation, and event handling.
The IBM POWER7 blades and IBM System x blades enable application integration with System z
transaction processing, messaging, and data serving capabilities.
The IBM POWER7 blades, the IBM Smart Analytics Optimizer, the DataPower XI50z, and the IBM System
x blades, along with the z114 central processors, can be managed as a single logical virtualized system by
the Unified Resource Manager.
zBX configuration
A zBX Model 002 configuration can consist of one to four zBX racks (Rack B, Rack C, Rack D, and Rack
E) depending on the number of zBX blades. Each IBM POWER7 blade, IBM Smart Analytics Optimizer,
and IBM System x blade requires one blade slot. Each DataPower XI50z requires two adjacent blade slots.
See Figure 5 on page 30.
Figure 5. 14, 28, 42, and 56 zBX blade slots (Part 1 of 4) (figure not reproduced; it shows front views of
Rack B and Rack C, with top-of-rack switches, PDUs, and one or two BladeCenters per rack, for
configurations of 14, 28, 42, and 56 zBX blade slots)
Figure 6. 70 and 84 zBX blade slots (Part 2 of 4) (figure not reproduced; it shows front views of Racks B,
C, and D for configurations of 70 and 84 zBX blade slots)
Figure 7. 98 zBX blade slots (Part 3 of 4) (figure not reproduced; it shows front views of Racks B, C, D,
and E for a configuration of 98 zBX blade slots)
Each zBX rack consists of:
v Top-of-rack (TOR) switches: management TOR switches and intraensemble data network TOR
switches (only located in the first rack, Rack B)
v Power distribution units (PDUs) (2 per BladeCenter)
v One or two BladeCenters per rack
v Optional rear door heat exchanger
v Optional acoustic door.
Top-of-rack (TOR) switches
There are two management TOR switches and two intraensemble data network (IEDN) TOR switches in
the first rack. If more than one rack is needed, the additional racks do not require management or
intraensemble data network TOR switches. These switches are located near the top of the rack and are
mounted from the rear of the rack.
The management TOR switches provide a 1000BASE-T Ethernet connection to z114 operating at 1 Gbps.
One management TOR switch connects the zBX to a bulk power hub (BPH) port on the z114. For
redundancy, the other management TOR switch connects to the BPH 1 Gbps port on the other side of the
z114.
The management TOR switches also provide connectivity to the management modules and the switch
modules located on the BladeCenter. The management modules monitor the BladeCenter. The information
gathered is reported to the Support Element on z114 using the connectivity set up over the intranode
management network (INMN). These connections are also configured for redundancy.
The intraensemble data network TOR switches provide connectivity for application data communications
within an ensemble. Data communications for workloads can flow over the IEDN within and between
nodes of an ensemble. This is provided by redundant connections between the OSA-Express3 10 GbE SR,
OSA-Express3 10 GbE LR, OSA-Express4S 10 GbE SR, or OSA-Express4S 10 GbE LR cards in the I/O
drawer or PCIe I/O drawer in the z114 and the intraensemble data network TOR switches in the zBX.
Figure 8. 112 zBX blade slots (Part 4 of 4) (figure not reproduced; it shows front views of Racks B, C, D,
and E for a configuration of 112 zBX blade slots)
The intraensemble data network TOR switches also provide zBX to zBX IEDN communications and
external customer network communications.
Power Distribution Unit (PDU)
The power distribution units (PDUs) provide the connection to the main power source, the power
connection to the intranode management network and intraensemble data network top-of-rack switches,
and the power connection to the BladeCenter. The number of power connections needed is based on the
zBX configuration. A rack contains two PDUs if only one BladeCenter is installed. A rack contains four
PDUs if two BladeCenters are installed.
BladeCenter
The BladeCenter is a type H chassis. It is configured for redundancy to provide the capability to
concurrently repair its components.
An IBM BladeCenter consists of:
Blade slot
The BladeCenter can contain IBM POWER7 blades, IBM System x blades, the IBM Smart
Analytics Optimizer, and the DataPower XI50z. Within a BladeCenter, the IBM Smart Analytics
Optimizer cannot be mixed with any other type of zBX blade. However, the IBM POWER7
blades, the DataPower XI50z, and the IBM System x blades can reside in the same BladeCenter.
Each IBM POWER7 blade, IBM Smart Analytics Optimizer, and IBM System x blade requires one
blade slot. Each DataPower XI50z requires two adjacent blade slots.
Power module and fan pack
The BladeCenter contains up to four hot-swap and redundant power supply modules with
load-balancing and failover capabilities. The power supply modules also contain fan packs.
Switch modules
The switch modules are the interface to the BladeCenter. The BladeCenter provides up to two
high-speed switch module bays and four traditional switch module bays.
v 10 GbE switch modules (FC 0605)
(Bays 7 & 9) These switches are part of the intraensemble data network, which is used for
application data communication to the node. The switches are connected to the intraensemble
data network top-of-rack switches, which are connected to either two OSA-Express3 10 GbE
ports or two OSA-Express4S 10 GbE ports on the z114.
Up to a combination of eight z114s and z196s can connect to a zBX through an intraensemble
data network.
v 1000BASE-T Ethernet switch modules operating at 1 Gbps
(Bays 1 and 2) These switches are part of the intranode management network. They assist in
providing a path for the Support Element to load code on a zBX blade.
v 8 GbE Fibre Channel switch modules (FC 0606)
(Bays 3 & 4) These switches provide each zBX blade with the ability to connect to Fibre
Channel (FC) Disk Storage.
Management modules
The management modules monitor the BladeCenter. The information gathered by the
management modules is reported to the Support Element using the connectivity set up over the
intranode management network.
Blowers
Two hot-swap and redundant blowers are standard. There are additional fan packs on the power
supplies.
Rear door heat exchanger
The heat exchanger rear door (FC 0540) is an optional feature on the zBX. The heat exchanger is intended
to reduce the heat load emitted from the zBX. The rear door is an air to water heat exchanger.
Acoustic door
The acoustic door (FC 0543) is an optional feature on the zBX. The acoustic door is intended to reduce the
noise emitted from the zBX.
Storage
Storage is provided by the customer outside of the zBX racks via the Fibre Channel (FC) Disk Storage.
The required number of SFPs per switch module depends on the number of BladeCenters. There is a
connection from the FC switch modules (Bays 3 and 4) in the BladeCenter to the ports in the FC Disk
Storage.
Display networking resources associated with the IEDN
You can use the Network Monitors Dashboard task to monitor network metrics and to display statistics
for the networking resources associated with the IEDN. You can also view performance of the IEDN
resources to validate the flow of traffic.
Time coordination for zBX components
z114 and z196 provide the capability for the components in zBX to maintain an approximate time
accuracy of 100 milliseconds to an NTP server if they synchronize to the Support Element's NTP server at
least once an hour.
Entitlement and management of zBX racks, BladeCenters, and zBX
blades
For z114, your IBM representative must identify the number of select IBM POWER7 blades and select
IBM System x blades you might use and the number of zBX blades you might need for the IBM Smart
Analytics Optimizer and DataPower XI50z.
The maximum number of select IBM POWER7 blades you can have is 112. The maximum number of
select IBM System x blades you can have is 56. The maximum number of DataPower XI50z blades you
can have is 28. The maximum number of IBM Smart Analytics Optimizers you can have is 56, and they
must be ordered in the following increments: 7, 14, 28, 42, 56. The total zBX capacity cannot exceed 112
zBX blades.
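These limits can be expressed as a small configuration check. The following Python sketch is illustrative only and is not an IBM tool; it also does not model slot usage (each DataPower XI50z occupies two adjacent blade slots):

    # Per-type zBX blade maximums for z114, as stated above (illustrative check only).
    MAX_PER_TYPE = {
        "POWER7": 112,
        "System x": 56,
        "DataPower XI50z": 28,
        "Smart Analytics Optimizer": 56,
    }
    ISAOPT_INCREMENTS = {7, 14, 28, 42, 56}
    TOTAL_LIMIT = 112   # total zBX capacity in zBX blades

    def validate_zbx_blades(config: dict) -> list:
        # Return a list of problems with a proposed zBX blade mix (empty if none).
        problems = []
        for blade_type, count in config.items():
            if count > MAX_PER_TYPE[blade_type]:
                problems.append(f"{blade_type}: {count} exceeds maximum {MAX_PER_TYPE[blade_type]}")
        isaopt = config.get("Smart Analytics Optimizer", 0)
        if isaopt and isaopt not in ISAOPT_INCREMENTS:
            problems.append(f"Smart Analytics Optimizer count {isaopt} is not an allowed increment")
        if sum(config.values()) > TOTAL_LIMIT:
            problems.append(f"total of {sum(config.values())} blades exceeds the {TOTAL_LIMIT}-blade capacity")
        return problems

    print(validate_zbx_blades({"POWER7": 28, "System x": 14,
                               "DataPower XI50z": 7, "Smart Analytics Optimizer": 7}))   # []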
Management of the zBX blades is provided by the HMC and Support Element. You can use the Support
Element to add and remove a zBX blade. You can also add entitlement to a zBX blade, remove
entitlement from a zBX blade, or transfer entitlement from one zBX blade to another zBX blade.
Note: On November 1, 2011, IBM Smart Analytics Optimizer was withdrawn from marketing for new
build and MES zBX. It has been replaced with IBM DB2 Analytics Accelerator for z/OS, a workload
optimized, appliance add-on that attaches to the zEnterprise System.
Ensemble
With zEnterprise, you can create an ensemble. An ensemble is a collection of one to eight zEnterprise
nodes, and each node is a single z114 or z196 with or without an attached zBX. An ensemble delivers a
logically integrated and managed view of the zEnterprise infrastructure resources. A zEnterprise node can
be a member of only one ensemble.
The ensemble is managed by the Unified Resource Manager, which is Licensed Internal Code (LIC) that is
part of the HMC. The Unified Resource Manager performs tasks that provide a single, cohesive
management context applied across all managed objects of the ensemble. See Unified Resource
Manager on page 125 for more information on the Unified Resource Manager.
For an ensemble, you must have two HMCs:
v A primary HMC managing the resources of one ensemble (and managing CPCs that are not part of an
ensemble)
v An alternate HMC, which will become the primary HMC if the HMC currently managing the ensemble
fails.
For more information about ensembles, see the zEnterprise System Introduction to Ensembles.
IBM DB2 Analytics Accelerator for z/OS V2.1
IBM DB2 Analytics Accelerator for z/OS V2.1 is a workload optimized, appliance add-on that logically
plugs into DB2 for z/OS on a z196 or z114, and uses Netezza technology to perform high speed, complex
DB2 queries. This disk-based accelerator speeds the response time for a wide variety of complex queries
that scan large tables. Efficient data filtering by early SQL projections and restrictions is performed using
a Field Programmable Gate Array (FPGA).
Similar to IBM Smart Analytics Optimizer for DB2 for z/OS V1.1, IBM DB2 Analytics Accelerator for
z/OS V2.1:
v Ensures that access to data, in terms of authorization and privileges (security aspects), is controlled by
DB2 and z/OS (Security Server)
v Uses DB2 for z/OS for the crucial data management items, such as logging, backup/recover, enforcing
security policies, and system of record
v Provides no external communication to the IBM Smart Analytics Optimizer for DB2 for z/OS V1.1
beyond DB2 for z/OS
v Is transparent to applications.
Enhancements provided with IBM DB2 Analytics Accelerator for z/OS V2.1 include:
v Extended acceleration to significantly larger number of queries
v Expansion of the size of the data to be accelerated
v Improved concurrent query execution
v Incremental update by partition
v DB2 for z/OS V9 and DB2 for z/OS V10 support.
Communication to a z196 or z114 is provided through an OSA-Express3 10 GbE SR, OSA-Express3 10
GbE LR, OSA-Express4S 10 GbE SR, or OSA-Express4S 10 GbE LR connection.
IBM DB2 Analytics Accelerator for z/OS V2.1 is not integrated into a zBX and is not managed by Unified
Resource Manager. It does not require or exploit zEnterprise ensemble capabilities.
Additional features/functions supported
In addition to the standard and optional features previously listed, the design of the z114 also provides
the following functions:
Monitoring and estimating power consumption and temperature
You can use the Hardware Management Console (HMC) or the Active Energy Manager to monitor the
power consumption and the internal temperature of a CPC.
Using the HMC
You can use the Activity task and the Monitors Dashboard task on the HMC to monitor the following:
v Power consumption of a zCPC, CPC, BladeCenters, and blades
v Internal temperature of a zCPC, CPC, and BladeCenters
v Processor usage of a zCPC, a CPC, BladeCenters, blades, CPs, ICFs, IFLs, zIIPs, zAAPs, SAPs, virtual
servers, and LPARs
v Memory usage of virtual servers and blades
v Shared and non-shared channel usage
v Relative humidity of the air entering the system
v Dewpoint (the air temperature at which water vapor will condense into water)
v Amount of heat removed from the system by forced air.
Note: Some of this data is only available under certain conditions.
The Activity task displays the information in a line-oriented format. The Monitors Dashboard task
displays the information in a dashboard format that uses tables and graphs.
Using the Monitors Dashboard task, you can export the data displayed in the window to a read-only
spreadsheet format (.csv file). For a selected CPC, you can also create histograms that display processor
usage, channel usage, power consumption, and input air temperature data over a specified time interval.
Using the Active Energy Manager
In addition to providing the power consumption and temperature of a specific CPC, Active Energy
Manager also provides the aggregated temperature and power for a group of systems or a complete data
center. Active Energy Manager allows you to display this data in a format that shows trends over a
specified time interval.
Before using Active Energy Manager, you must enable the SNMP or Web Services APIs, and, if using
SNMP, you must define a community name for Active Energy Manager. This action is specified on the
Customize API Settings task on the HMC. Once you have configured the SNMP or Web Services support
on the HMC, you must set up Active Energy Manager so it can communicate to the HMC. You can
perform this setup, within Active Energy Manager, by defining it as an SNMP or Web Services device.
Once the setup is complete, the Active Energy Manager can communicate to the HMC.
Active Energy Manager is a plug-in to IBM Director.
For more information, see the IBM Systems Software Information Center website (http://
publib.boulder.ibm.com/infocenter/eserver/v1r2/index.jsp). Expand IBM Systems Software Information Center
located in the navigation pane on the left, select Product listing, then select IBM Director extension:
Active Energy Manager from the product listing.
Power estimation tool
You can estimate the power consumption of a specific z114 model and its associated configuration using
the Power Estimation tool. The exact power consumption for your machine will vary. The purpose of the
tool is to produce an estimation of the power requirements to aid you in planning for your machine
installation. This tool is available on Resource Link.
Reducing power consumption
zEnterprise provides the capability for you to reduce the energy consumption of a system component
(zBX blade, zBX BladeCenter) or group of components. You can reduce the energy consumption by
enabling power saving mode or limiting the peak power consumption.
To enable power saving mode, use any of the following methods:
v The Set Power Saving task to manually enable the power saving mode
v The Customize Scheduled Operations task to set up a schedule defining when you want to turn on
power saving mode
v SNMP, CIM, or Web Services APIs
v Active Energy Manager (AEM)
v The Customize/Delete Activation Profiles task to enable power saving mode at activation time.
To limit the peak power consumption of zBX blades, use the Set Power Cap task to enter the power cap
value in watts (W).
Displaying historical power, temperature, and utilization data
You can use the Environmental Efficiency Statistics task to display a historical view of power,
temperature, and utilization data of your z114. Reviewing this data over a period of time can help you
track the performance of your system and make appropriate changes to improve your system's
performance. The data displays in both table and graph format.
When using the Environmental Efficiency Statistics task, you identify:
v A start date
v The number of days (from one to seven) of information you want to display. This includes the start
date.
v The type of data you want displayed:
System power consumption (in kW and Btu/hour)
System temperature (in Celsius and Fahrenheit)
Average utilization of all central processors
Average CPU utilization of all blades.
You can also export this data to a read-only spreadsheet format (.csv file).
This data is not saved on your system indefinitely. Therefore, if you want to monitor this data for a period of
time, you can use the export function to save the data to a .csv file.
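If you do export the data, any scripting or spreadsheet tool can post-process the .csv file. The Python sketch below is hypothetical: the file name and column headings are invented for illustration and must be replaced with the headings that actually appear in your export.

    import csv

    # Hypothetical post-processing of an exported Environmental Efficiency .csv file.
    with open("environmental_efficiency.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    power_kw = [float(r["power_kw"]) for r in rows]        # hypothetical column name
    temps_c = [float(r["temperature_c"]) for r in rows]    # hypothetical column name

    print(f"average power: {sum(power_kw) / len(power_kw):.2f} kW")
    print(f"peak temperature: {max(temps_c):.1f} C")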
Preplanning and setting up the Storage Area Network (SAN)
environment
The WWPN tool assists you in preplanning and setting up your Storage Area Network (SAN)
environment before the installation of your z114. Therefore, you can be up and running much faster after
the server is installed.
The WWPN tool assigns WWPNs to virtual and physical FCP ports, which is required to set up your
SAN, and creates a binary configuration that can be imported by your system.
This tool applies to all FICON channels defined as CHPID type FCP (for communication with SCSI
devices). The WWPN tool can be downloaded from Resource Link under the Tools section.
Chapter 3. Software support
This chapter describes the software support for the z114. This information applies to z114 systems
running in LPAR mode. The following table displays a summary of the minimum supported operating
systems levels for the z114 models.
Table 7. Supported operating systems for z114

Operating System                                                              ESA/390 (31-bit)   z/Architecture (64-bit)
z/OS Version 1 Release 11, 12, and 13 [5, 8]                                  No                 Yes
z/OS Version 1 Release 8 [1], 9 [2], and 10 [3] with IBM Lifecycle
Extension for z/OS V1.8, V1.9, and V1.10                                      No                 Yes
Linux on System z [4, 7]: Red Hat RHEL 5, 6 and Novell SUSE SLES 10, 11       No                 Yes
z/VM Version 5 Release 4 [4, 5, 9]
z/VM Version 6 Release 1 [4, 5, 9]
z/VM Version 6 Release 2 [4, 5, 9]                                            No [6]             Yes
z/VSE Version 4 Release 2 and later [5, 10]                                   No                 Yes
z/TPF Version 1 Release 1                                                     No                 Yes
Note:
1. z/OS V1.8 support was withdrawn September 30, 2009. However, with the z/OS Lifecycle Extension for z/OS
V1.8 (5638-A01), z/OS V1.8 supports z114. Talk to your IBM representative for details. No exploitation of z114
functions is available with z/OS V1.8. Certain functions and features of the z114 require later releases of z/OS.
For a complete list of software support, see the 2818DEVICE Preventive Planning (PSP) bucket. For more
information on the IBM Lifecycle Extension for z/OS V1.8, see Software Announcement 209-180 (RFA 53080)
dated June 9, 2009.
2. z/OS V1.9 support was withdrawn September 30, 2010. After that date, the z/OS Lifecycle Extension for z/OS
V1.9 (5646-A01) is required for z114. Talk to your IBM representative for details. No exploitation of z114
functions is available with z/OS V1.9. Certain functions and features of the z114 require later releases of z/OS.
For a complete list of software support, see the 2818DEVICE Preventive Planning (PSP) bucket. For more
information on the IBM Lifecycle Extension for z/OS V1.9, see Software Announcement 210-027 dated May 11,
2010.
3. z/OS V1.10 supports z114; however, z/OS V1.10 support will be withdrawn September 30, 2011. After that
date, the z/OS Lifecycle Extension for z/OS V1.10 (5656-A01) is required for z114. Talk to your IBM
representative for details. Certain functions and features of the z114 require later releases of z/OS. For a
complete list of software support, see the 2818DEVICE Preventive Planning (PSP) bucket. For more information
on the IBM Lifecycle Extension for z/OS V1.10, see Software Announcement 211-002, dated February 15, 2011.
4. Compatibility support for listed releases. Compatibility support allows OS to IPL and operate on z114.
5. With PTFs.
6. z/VM supports 31-bit and 64-bit guests.
7. RHEL is an abbreviation for Red Hat Enterprise Linux. SLES is an abbreviation for SUSE Linux Enterprise
Server.
8. Refer to the z/OS subset of the 2818DEVICE Preventive Service Planning (PSP) bucket prior to installing
zEnterprise.
9. Refer to the z/VM subset of the 2818DEVICE Preventive Service Planning (PSP) bucket prior to installing
zEnterprise or IPLing a z/VM image.
10. Refer to the z/VSE subset of the 2818DEVICE Preventive Service Planning (PSP) bucket prior to installing
zEnterprise.
Any program written for z/Architecture or ESA/390 architecture mode can operate on CPCs operating in
the architecture mode for which the program was written, provided that the program:
v Is not time-dependent.
v Does not depend on the presence of system facilities (such as storage capacity, I/O equipment, or
optional features) when the facilities are not included in the configuration.
v Does not depend on the absence of system facilities when the facilities are included in the
configuration.
v Does not depend on results or functions that are defined as unpredictable or model dependent in the
z/Architecture Principles of Operation or in the Enterprise System Architecture/390 Principles of Operation.
v Does not depend on results or functions that are defined in this publication (or, for logically
partitioned operation, in the zEnterprise System Processor Resource/Systems Manager Planning Guide) as
being differences or deviations from the appropriate Principles of Operation publication.
v Does not depend on the contents of instruction parameter fields B and C on interception of the SIE
instruction.
Any problem-state program written for ESA/390 architecture mode can operate in z/Architecture mode
provided that the program complies with the limitations for operating in ESA/390 mode and is not
dependent on privileged facilities which are unavailable on the system.
Chapter 4. Channel subsystem structure
A channel subsystem (CSS) structure for z114 is designed for 256 channels. With the scalability benefits
provided by z114, it is essential that the channel subsystem (CSS) structure is also scalable and permits
horizontal growth. This is facilitated by allowing more than one logical channel subsystem (LCSS) on a
single z114.
Table 8. Channel, port, adapter maximums

Type                              z114 Maximum
ESCON                             16 cards / 240 channels
FICON Express8S                   64 features / 128 channels
FICON Express8                    16 features / 64 channels
FICON Express4                    16 features / 64 channels
FICON Express4-2C                 16 features / 32 channels
OSA-Express4S GbE [1]             48 features / 96 ports [7]
OSA-Express4S 10 GbE [1]          48 features / 48 ports [7]
OSA-Express3 GbE [1]              16 features / 64 ports
OSA-Express3 10 GbE [1]           16 features / 32 ports
OSA-Express3-2P GbE [1]           16 features / 32 ports
OSA-Express3 1000BASE-T [1]       16 features / 64 ports
OSA-Express3-2P 1000BASE-T [1]    16 features / 32 ports
OSA-Express2 GbE [1]              16 features / 32 ports
OSA-Express2 1000BASE-T [1]       16 features / 32 ports
IC link                           32 links
ISC-3 [2]                         12 mother cards / 48 links [8, 9]
12x IFB (HCA2-O) [2]              4 features / 8 links [3, 8]; 8 features / 16 links [4, 9]
12x IFB (HCA3-O) [2]              4 features / 8 links [3, 8]; 8 features / 16 links [4, 9]
1x IFB (HCA2-O LR) [2]            4 features / 8 links [3, 8]; 6 features / 12 links [4, 9]
1x IFB (HCA3-O LR) [2]            4 features / 16 links [3, 8]; 8 features / 32 links [4, 9]
Crypto Express3 [5, 6]            8 cards / 16 PCIe adapters
Crypto Express3-1P [5, 6]         8 cards / 8 PCIe adapters
Note:
1. Maximum number of PCHIDs for combined OSA-Express4S, OSA-Express3, and OSA-Express2 features is 48.
2. Maximum number of coupling CHPIDs (ISC-3 and IFB) is 128. Each coupling feature cannot exceed its
individual maximum limit (shown in the table).
3. Applies to Model M05.
4. Applies to Model M10.
5. The maximum number of combined Crypto Express3 and Crypto Express3-1P features is eight.
6. The initial order for Crypto Express3 and Crypto Express3-1P is two features.
7. For every OSA-Express3 feature in the configuration, the OSA-Express4S maximum number of features is
reduced by two.
8. z114 M05 supports a maximum of 56 extended distance links (8 1x IFB and 48 ISC-3) with no 12x IFB links*.
9. z114 M10 supports a maximum of 72 extended distance links (24 1x IFB and 48 ISC-3) with no 12x IFB links*.
* Uses all available fanout slots. Allows no other I/O or coupling.
The CSS structure offers the following:
v Two logical channel subsystems (LCSSs)
  - Each LCSS can have up to 256 channels defined
  - Each LCSS can be configured with one to 15 logical partitions (cannot exceed 30 LPARs per system).
v Spanned channels are shared among logical partitions across LCSSs. For more information on spanned
channels, refer to Table 9 and to Spanned channels on page 51.
Note: One operating system image supports up to a maximum of 256 Channel Path Identifiers (CHPIDs).
The I/O Subsystem (IOSS) continues to be viewed as a single Input/Output Configuration Data Set
(IOCDS) across the entire system with up to two LCSSs. Only one Hardware System Area (HSA) is used
for the multiple LCSSs.
A CHPID is a two-digit hexadecimal number that identifies a channel path in the CPC. A Physical
Channel Identifier (PCHID) is a three-digit number that identifies the physical location (drawer, slot, card
port) for a channel path in the CPC. An adapter ID (AID) is a two-digit hexadecimal number that
identifies HCA3-O, HCA3-O LR, HCA2-O or HCA2-O LR fanout cards. CHPIDs are associated with ports
on an adapter and the AID is used in that definition.
The CHPID Mapping Tool can help you map your PCHIDs to the CHPID definitions in your IOCP
source statements. The tool will provide you with a new report with your CHPID assignment in addition
to the PCHID values. The CHPID Mapping Tool is available from Resource Link, http://www.ibm.com/
servers/resourcelink, as a standalone PC-based program. For more information on the CHPID Mapping
Tool, CHPIDs, PCHIDs or AIDs, refer to System z CHPID Mapping Tool User's Guide.
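For illustration only, the following IOCP fragment sketches how a logical CHPID is tied to a physical channel
location. The CHPID number (40), PCHID value (100), and channel type are hypothetical values chosen for this
sketch; IOCP column and continuation conventions are omitted, and this is not a complete or validated IOCDS.

   * Hypothetical sketch: CHPID 40 in LCSS 0 mapped to the channel card port at PCHID 100
   CHPID PATH=(CSS(0),40),SHARED,PCHID=100,TYPE=FC
   * InfiniBand coupling (CIB) CHPIDs specify AID= and PORT= instead of PCHID=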
IOCP channel, link, and adapter definitions
The following table lists the channel and link types as defined in an IOCDS that are used with z114
systems.
Table 9. Channels, links, and adapters with CHPID type
(The last two columns indicate whether the channel path may be defined as shared and as spanned.)
Channels/Links/Adapters                                             CHPID type   Shared   Spanned
ESCON channels:
  Connection Channel (ESCON architecture)                           CNC          yes      no
  Channel-to-Channel (connects to CNC)                              CTC          yes      no
ESCON channels connected to converter:
  Conversion Channel (ESCON to Parallel Block Multiplexer (BL))     CVC          no       no
  Conversion Channel (ESCON to Parallel Byte Multiplexer (BY))      CBY          no       no
FICON channels: native FICON, zHPF, or CTC for attachment to
  FICON channels on System z servers, directors, control units,
  and printers                                                      FC           yes      yes
Fibre Channel Protocol (FCP) for communicating with SCSI devices    FCP          yes      yes
ISC-3 peer mode links (connects to another ISC-3)                   CFP          yes      yes
IC peer links (connects to another IC)                              ICP          yes      yes
IFB peer links (connects to another IFB)                            CIB          yes      yes
HiperSockets                                                        IQD          yes      yes
OSA adapters using QDIO architecture: TCP/IP traffic when Layer 3,
  protocol-independent when Layer 2                                 OSD          yes      yes
OSA adapters using non-QDIO architecture for TCP/IP and/or
  SNA/APPN/HPR traffic                                              OSE          yes      yes
OSA-ICC: OSA 1000BASE-T Ethernet adapters for TN3270E, non-SNA
  DFT, IPL CPCs and LPARs, OS system console operations             OSC          yes      yes
OSA-Express for NCP: NCPs running under IBM Communication
  Controller for Linux (CDLC)                                       OSN          yes      yes
OSA-Express3 10 GbE LR, OSA-Express3 10 GbE SR, OSA-Express4S
  10 GbE LR, OSA-Express4S 10 GbE SR adapters for intraensemble
  data network (IEDN)                                               OSX          yes      yes
OSA-Express3 1000BASE-T Ethernet adapters for intranode
  management network (INMN)                                         OSM          yes      yes
Each of these channel types requires that a CHPID be defined, even if it is an internal channel and no
physical hardware (channel card) exists. Each channel, whether a real channel or a virtual channel (such as
HiperSockets), must be assigned a unique CHPID within the LCSS. You can arbitrarily assign a number
within the X'00' to X'FF' range. Real channels require a PCHID value to be defined. Most of these channel
types can be shared and used concurrently among multiple LPARs within the same LCSS. Refer to
Multiple Image Facility (MIF) on page 51 for more information on shared channels.
AIDs are used for InfiniBand connections.
Coupling link peer channels
You may define an ISC-3 feature as CFP and an IFB link as CIB. Any available/unused CHPID may be
defined as ICP.
You can configure a CFP, ICP, or CIB channel path as:
v An unshared dedicated channel path to a single logical partition.
v An unshared reconfigurable channel path that can be configured to only one logical partition at a time
but which can be dynamically moved to another logical partition by channel path reconfiguration
commands. Reconfigurable support for CFP, CIB, and ICP is limited to two coupling facility logical
partitions in total: one coupling facility logical partition in the initial access list and one other coupling
facility logical partition in the candidate list.
v A shared or spanned channel path that can be concurrently used by the logical partitions to which it is
configured. A peer channel cannot be configured to more than one coupling facility logical partition at
a time, although it can be configured to multiple z/Architecture or ESA/390 logical partitions in
addition to the single coupling facility logical partition.
v Timing-only links. These are coupling links that allow two servers to be synchronized using Server
Time Protocol (STP) messages when a coupling facility does not exist at either end of the coupling link.
Note: CHPID type ICP is not supported for a timing connection.
Each ICP channel path must specify which ICP channel path it is logically connected to.
The z114 models support dynamic I/O configuration for all peer channel path types.
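As an illustrative sketch of the logical connection requirement, a connected pair of internal coupling channel paths
might be defined as shown below, with each ICP CHPID naming its partner. The CHPID values F0 and F1 are
hypothetical, and real definitions may need additional keywords for your configuration; consult the ICP IOCP
publication for the exact syntax.

   * Hypothetical sketch: a logically connected pair of internal coupling (ICP) channel paths
   CHPID PATH=(CSS(0),F0),SHARED,TYPE=ICP,CPATH=(CSS(0),F1)
   CHPID PATH=(CSS(0),F1),SHARED,TYPE=ICP,CPATH=(CSS(0),F0)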
Subchannel connectivity
With two Logical Channel Subsystems come more subchannels. There is a maximum of 65280
subchannels per LCSS for subchannel set 0 and 65535 subchannels per LCSS for subchannel set 1.
Subchannel set 0 allows definitions of any type of device (bases, aliases, secondaries, etc.). Subchannel set
1 is designated for disk alias devices (of both primary and secondary devices) and metro mirror
secondary devices.
z114 allows you to IPL from subchannel set 1 in supported operating systems such as z/OS.
With two Logical Channel Subsystems you can have:
v Up to a maximum of 65280 devices/subchannels per LCSS for subchannel set 0
v Up to a maximum of 65535 devices/subchannels per LCSS for subchannel set 1
v Up to a maximum of 261630 devices for two LCSSs, which is two times the combined maximum
devices/subchannels for subchannel sets 0 and 1 (2 * (65280 + 65535)).
Each LPAR can access all the devices in its assigned LCSS.
This capability relieves the I/O device configuration constraints experienced by large system
configurations.
Guidelines for maximum availability
When configuring devices with multiple paths to the same CPC, select any of the channel paths from any
I/O card shown in Figure 9 on page 45, and Figure 10 on page 46 that:
v Are available on the CPC you are defining
v Are the correct type (FICON, ESCON, etc.) to meet the control unit, coupling facility, or network
attachment requirements
v Satisfy the rules regarding the mixing of channel types to a control unit.
Figure 9. I/O drawer layout (diagram showing the front of the I/O drawer with DCA1 and card slots 02 - 05, and the
rear with DCA2, card slots 07, 08, 10, and 11, and the D109 and D209 IFB-MP cards in slot 09)
Legend:
IFB-MP   InfiniBand Multiplexer
D1       In the I/O drawer, D1 represents the half-high daughter card located in the left side of the slot.
D2       In the I/O drawer, D2 represents the half-high daughter card located in the right side of the slot.
Notes:
The D109 IFB-MP card location controls Domain 0 (card slots 02, 05, 08, and 10).
The D209 IFB-MP card location controls Domain 1 (card slots 03, 04, 07, and 11).
For maximum availability of the device, OSA network, or coupling facility on z114, you should consider
the following guidelines:
v Choose channels plugged in different I/O domains.
With an I/O drawer, an I/O domain contains four channel cards controlled by a single IFB-MP card.
The IFB-MP card provides connection to the CPC. For example, the domain for the IFB-MP card in
D109 controls slots 02, 05, 08, and 10. (See Figure 9 on page 45.) With a PCIe I/O drawer, an I/O
domain contains eight channel cards controlled by a single PCI-IN card. For example, the domain for
PCI-IN card 0 controls slots 01, 02, 03, 04, 06, 07, 08, and 09. (See Figure 10.)
Figure 10. PCIe I/O drawer layout (top view diagram: the front of the drawer holds card slots 01 - 19, PCI-IN cards 0
and 2, and an FSP card; the rear holds card slots 20 - 38, PCI-IN cards 1 and 3, and an FSP card)
Legend:
PCI-IN   PCIe Interconnect
Notes:
PCI-IN card 0 controls Domain 0 (card slots 01, 02, 03, 04, 06, 07, 08, and 09).
PCI-IN card 1 controls Domain 1 (card slots 30, 31, 32, 33, 35, 36, 37, and 38).
PCI-IN card 2 controls Domain 2 (card slots 11, 12, 13, 14, 16, 17, 18, and 19).
PCI-IN card 3 controls Domain 3 (card slots 20, 21, 22, 23, 25, 26, 27, and 28).
Note: This is also recommended for optimum performance of your most heavily-used I/O devices.
When choosing the I/O domains to use, whether from different drawers or the same drawer, consider
using a combination of I/O domains. When you must use IFB links from the processor drawer, try to
use IFB links from different HCA fanout cards. Refer to your PCHID report to determine which IFB
links belong to which HCA fanout cards. If you have multiple paths to the device and multiple
domains available that have the correct channel type, spreading the paths across as many HCAs as
possible is also advisable.
Redundant I/O interconnect is a function that allows one IFB-MP to back up another IFB-MP in an I/O
drawer, or one PCI-IN to back up another PCI-IN in a PCIe I/O drawer, in case of a failure or repair.
For example, in the I/O drawer, the IFB-MP cards in slot 09 back up each other. In the PCIe I/O
drawer, PCI-IN card 0 and PCI-IN card 1 back up each other. Therefore, in the event of a cable or
fanout card failure, the remaining IFB-MP card (or PCI-IN card) controls both domains. Some failures
(for example, a failure of the IFB-MP card or the PCI-IN card itself) may prevent the redundant takeover,
which is why it is advisable to spread your paths over multiple domains.
When configuring Coupling using InfiniBand (CIB) links for the same target CPC or coupling facility,
use InfiniBand links that originate from different processor drawers (for Model M10) and different
HCA cards on these processor drawers. This eliminates the HCA fanout card and the IFB cable as a
single point of failure where all connectivity would be lost.
v If you define multiple paths from the same IFB link, distribute paths across different channel cards.
Also, if you define multiple coupling facility channels to the same coupling facility or to the same ESA
image, distribute paths across different coupling facility channel adapter cards or different coupling
facility daughter cards.
With z114, each SAP handles FICON work on an on-demand basis. That is, as FICON work for any
channel arrives, the next available SAP handles that request. It does not matter whether it is an outbound
request or an inbound interrupt; the next available SAP handles the FICON work.
For the other channel types, the z114 automatically balances installed channel cards across all available
SAPs. The processor attempts to assign an equal number of each channel card type to each available SAP.
While all channels on a given I/O card are always in the same SAP, it is not predictable which I/O cards
will be assigned to which SAPs. However, there are two exceptions. First, HCAs used for coupling are
always given affinity to a SAP on the local drawer. Second, if an OSA channel is defined as OSD, OSM,
or OSX or a FICON channel is defined as FCP, these channels use QDIO architecture and, therefore, do
not actually use any SAP resource during normal operations.
For all channel types, simply follow the preceding recommendations for configuring for RAS, and the
SAPs will handle the workload appropriately.
Planning for channel subsystem
This section contains information to aid in the planning for maximum channel subsystem availability on
z114. It addresses ESCON, FICON, and OSA channels; ISC-3, IFB, and IC links; and HiperSockets. The
information shows the major components of the channel subsystem and suggests ways to configure the
CSS for maximum availability.
The overall process of assigning CHPIDs, PCHIDs, and AIDs begins when you order z114 or an MES to
an existing machine. After placing the order, the configurator prepares a report (PCHID report) detailing
the physical location for each channel in the machine. This report shows PCHID and AID assignments.
PCHID assignments
There are no default CHPIDs assigned. You are responsible for assigning the logical CHPID number to a
physical location, identified by a PCHID number. You can complete this task using either IOCP or
HCD. The CHPID Mapping Tool may be used to help with these assignments. (Refer to CHPID
Mapping Tool on page 50 for more information.)
You will use the data in the CFReport, which you can either obtain from your representative or retrieve
from Resource Link, and the IOCP input for assigning PCHID values using the CHPID Mapping Tool.
Table 10 lists the PCHID assignments for slots in the I/O drawer. Table 11 lists the PCHID assignments
for slots in the PCIe I/O drawer. Only the active ports on an installed card are actually assigned a
PCHID; the remainder are unused.
Except for ESCON sparing, the cards in the I/O drawer and PCIe I/O drawer are assigned a PCHID
starting with the first value in the range for the slot and drawer where the card is located.
For ISC-3 cards, the first daughter is assigned the first two PCHID values of the slot. The second
daughter is assigned the slot value plus 8 for the first port and plus 9 for the second port.
OSA-Express4S GbE LX and OSA-Express4S GbE SX cards have two ports, but only one PCHID is
assigned.
Crypto cards are assigned one PCHID even though they have no ports.
Table 10. PCHID assignments for I/O drawer
        PCHID range
Slot    Drawer 1 (A16B)   Drawer 2 (A09B)   Drawer 3 (A02B)   Drawer 4 (A26B)
2       200 - 20F         180 - 18F         100 - 10F         280 - 28F
3       210 - 21F         190 - 19F         110 - 11F         290 - 29F
4       220 - 22F         1A0 - 1AF         120 - 12F         2A0 - 2AF
5       230 - 23F         1B0 - 1BF         130 - 13F         2B0 - 2BF
7       240 - 24F         1C0 - 1CF         140 - 14F         2C0 - 2CF
8       250 - 25F         1D0 - 1DF         150 - 15F         2D0 - 2DF
10      260 - 26F         1E0 - 1EF         160 - 16F         2E0 - 2EF
11      270 - 27F         1F0 - 1FF         170 - 17F         2F0 - 2FF
Table 11. PCHID assignments for PCIe I/O drawer
Slot    Drawer 1 (A02B)   Drawer 2 (A09B)
1       100-103           180-183
2       104-107           184-187
3       108-10B           188-18B
4       10C-10F           18C-18F
6       110-113           190-193
7       114-117           194-197
8       118-11B           198-19B
9       11C-11F           19C-19F
11      120-123           1A0-1A3
12      124-127           1A4-1A7
13      128-12B           1A8-1AB
14      12C-12F           1AC-1AF
16      130-133           1B0-1B3
17      134-137           1B4-1B7
18      138-13B           1B8-1BB
19      13C-13F           1BC-1BF
20      140-143           1C0-1C3
21      144-147           1C4-1C7
22      148-14B           1C8-1CB
23      14C-14F           1CC-1CF
25      150-153           1D0-1D3
26      154-157           1D4-1D7
27      158-15B           1D8-1DB
28      15C-15F           1DC-1DF
30      160-163           1E0-1E3
31      164-167           1E4-1E7
32      168-16B           1E8-1EB
33      16C-16F           1EC-1EF
35      170-173           1F0-1F3
36      174-177           1F4-1F7
37      178-17B           1F8-1FB
38      17C-17F           1FC-1FF
AID assignments
HCA2-O, HCA2-O LR, HCA3-O, and HCA3-O LR fanout cards used for coupling are identified by
adapter IDs (AIDs) rather than PCHIDs.
CHPID numbers need to be associated with ports on an adapter, and the AID is used for this purpose.
You are responsible for assigning CHPIDs. You can use either IOCP or HCD. The CHPID assignment is
done by associating the CHPID number to an AID and port. You cannot use the CHPID Mapping Tool to
assign AID values.
You cannot change an AID assignment. After an AID is assigned, if an optical fanout card is moved on a
z114, the AID value moves with it.
Table 12 shows the initial AID assignments for the ports on the HCA fanout cards plugged into the
processor drawers.
Table 12. AID assignments for HCA fanout cards
Processor drawer number   Location   Fanout card slots   Possible AIDs
1                         A21        D1, D2, D7, D8      08-0B
2                         A26        D1, D2, D7, D8      00-03
Each fanout slot is allocated one AID number. (Remember that slots D3, D4, D5, and D6 do not have
fanout cards plugged into them; therefore, they are not assigned AIDs.) For example, the allocation for
the processor drawer 1 would be:
Fanout slot AID
D1 08
D2 09
D7 0A
D8 0B
Note: These AID assignments can only be predicted for a new build machine. For an MES to an existing
z114, the VPD contains the AID assigned to each installed HCA2-O, HCA2-O LR, HCA3-O, or HCA3-O
LR. The VPD also contains the AID that is assigned to all other slots in existing drawers. If a new HCA
fanout is added to the drawer, the AID from the VPD should be used.
PCHID report
The PCHID report from the configurator provides details on the placement of all the I/O features in your
order. Your representative will provide you with this report. Using this report and the guidelines listed in
Guidelines for maximum availability on page 44, you can plan the configuration of your I/O.
Note: If you use the CHPID Mapping Tool to aid you in assigning PCHIDs to CHPIDs, the tool will
provide you with a new report with your CHPID assignment in addition to the PCHID values.
Other resources available are the System z Input/Output Configuration Program User's Guide for ICP IOCP
and the CHPID Mapping Tool. These resources are available on Resource Link.
CHPID Mapping Tool
The CHPID Mapping Tool is a Java-based standalone application available from IBM Resource Link, and
it must be downloaded to your personal computer for use. Once downloaded, you can make CHPID
assignments without further internet connections. As part of the CHPID Mapping Tool process, you will
need a CFReport (which you can download from Resource Link or obtain from your representative) and
an IOCP file.
Note: The CHPID Mapping Tool does not assign AID values.
The intent of the CHPID Mapping Tool is to ease installation of new z114 processors and to ease changes
to an already installed z114 processor, whether the change is a slight adjustment to the mapping or part
of an MES action to add or remove channel features on the processor.
z114 does not have default CHPIDs assigned to ports as part of the initial configuration process. It is
your responsibility to perform these assignments by using the HCD/IOCP definitions and optionally the
CHPID Mapping Tool. The result of using the tool is an IOCP deck that will map the defined CHPIDs to
the corresponding PCHIDs for your processor. However, there is no requirement to use the CHPID
Mapping Tool. You can assign PCHIDs to CHPIDs directly in IOCP decks or through HCD, but it is
much easier to use the tool to do the channel mapping and the tool can help make PCHID to CHPID
assignments for availability.
For more information on the CHPID Mapping Tool, refer to any of the following:
v System z CHPID Mapping Tool User's Guide
v CHPID Mapping Tool on Resource Link.
Multiple Image Facility (MIF)
The Multiple Image Facility (MIF) allows channel sharing among multiple LPARs and optionally shares
any associated I/O devices configured to these shared channels. MIF also provides a way to limit the
logical partitions that can access a reconfigurable channel, spanned channel, or a shared channel to
enhance security.
With multiple LCSSs, the CSS provides an independent set of I/O controls for each logical channel
subsystem called a CSS image. Each logical partition is configured to a separate CSS image in order to
allow the I/O activity associated with each logical partition to be processed independently as if each
logical partition had a separate CSS. For example, each CSS image provides a separate channel image and
associated channel path controls for each shared channel and separate subchannel images for each shared
device that is configured to a shared channel.
With MIF, you can configure channels as follows:
v ESCON (TYPE=CNC, TYPE=CTC), FICON (TYPE=FC or TYPE=FCP), ISC-3 peer (TYPE=CFP), IC
peer (TYPE=ICP), IFB peer (TYPE=CIB), HiperSockets (TYPE=IQD), and OSA (TYPE=OSC,
TYPE=OSD, TYPE=OSE, TYPE=OSN, TYPE=OSX, or TYPE=OSM).
You can configure a channel path as:
  - An unshared dedicated channel path to a single LPAR.
  - An unshared reconfigurable channel path that can be configured to only one logical partition at a
    time but can be moved to another logical partition within the same LCSS.
  - A shared channel path that can be concurrently used by the ESA/390 images or CF logical partitions
    within the same LCSS to which it is configured.
With MIF and multiple channel subsystems, shared and spanned channel paths can provide extensive
control unit and I/O device sharing. MIF allows all, some, or none of the control units attached to
channels to be shared by multiple logical partitions and multiple CSSs. Sharing can be limited by the
access and candidate list controls at the CHPID level and then can be further limited by controls at the
I/O device level.
For example, if a control unit allows attachment to multiple channels (as is possible with a 3990 control
unit), then it can be shared by multiple logical partitions using one or more common shared channels or
unique unshared channel paths.
Spanned channels
With multiple LCSSs, transparent sharing of internal (ICs and HiperSockets) and external (FICON, ISC-3,
IFB, OSA) channels across LCSSs is introduced, extending Multiple Image Facility (MIF). MIF allows
sharing of channel resources across LPARs. ICs, HiperSockets, FICON, ISC-3s, IFBs, and OSA features can
all be configured as MIF spanning channels.
Spanned channels are channels that are configured to multiple LCSSs and transparently shared by any or
all of the configured LPARs, without regard to the Logical Channel Subsystem to which the partition is
configured. For information on the channel CHPID types and spanning capabilities, refer
to Table 9 on page 42.
You can configure the following as a spanned channel:
v FICON (TYPE=FC or TYPE=FCP), ISC-3 peer (TYPE=CFP), IC peer (TYPE=ICP), IFB peer
(TYPE=CIB), HiperSockets (TYPE=IQD), and OSA (TYPE=OSC, TYPE=OSD, TYPE=OSE,
TYPE=OSN, TYPE=OSX, or TYPE=OSM)
They can be shared by LPARs in different logical channel subsystems.
Internal coupling and HiperSockets channels
Internal coupling (IC) channels and HiperSockets are virtual attachments and, as such, require no real
hardware. However, they do require CHPID numbers and they do need to be defined in the IOCDS. The
CHPID type for IC channels is ICP; the CHPID type for HiperSockets is IQD.
v It is suggested that you define a minimum number of ICP CHPIDs for Internal Coupling. For most
customers, IBM suggests defining just one ICP for each coupling facility (CF) LPAR in your
configuration. For instance, if your z114 configuration has several ESA LPARs and one CF LP, you
would define one pair of connected ICP CHPIDs shared by all the LPARs in your configuration. If
your configuration has several ESA LPARs and two CF logical partitions, you still would define one
connected pair of ICP CHPIDs, but one ICP should be defined as shared by the ESA images and one of
the CF LPARs, while the other ICP is defined as shared by the ESA LPARs and the other CF LPAR.
Both of these examples best exploit the peer capabilities of these coupling channels by using the
sending and receiving buffers of both channels. If your ESA images and CF images are in different
CSSs and you want to exploit the optimal use of ICP then your ICP CHPIDs must be defined as
spanned.
v Each IQD CHPID represents one internal LAN. If you have no requirement to separate LAN traffic
between your applications, only one IQD CHPID needs to be defined in the configuration. If the
partitions sharing the LAN are in different LCSSs your IQD CHPID must be defined as spanned.
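As a hedged sketch of the spanned definition described above (the CHPID value F5 is hypothetical, and IOCP
column and continuation conventions are omitted), a single internal LAN shared by partitions in both LCSSs could
be defined with one spanned IQD CHPID:

   * Hypothetical sketch: one internal LAN (IQD) spanned across LCSS 0 and LCSS 1
   CHPID PATH=(CSS(0,1),F5),SHARED,TYPE=IQD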
IOCP considerations
ICP IOCP supports z114 and multiple LCSSs. Refer to System z Input/Output Configuration Program User's
Guide for ICP IOCP for more information.
IOCP allows you to define controls for multiple channel subsystems. This includes changes to the way
you define LPARs, channel paths, and I/O devices.
LPAR definition
Use the RESOURCE statement to define LCSSs and the logical partitions in each LCSS. You can also
assign a MIF image ID to each LPAR. If you do not specify a MIF image ID using the RESOURCE
statement, ICP IOCP assigns them. Any LPARs not defined will be reserved and available to be
configured later using dynamic I/O.
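For illustration only, the following sketch shows a RESOURCE statement defining two LCSSs; the LCSS layout,
LPAR names, and MIF image IDs are hypothetical and chosen only to show the statement form.

   * Hypothetical sketch: LP01 and LP02 in LCSS 0, LP03 in LCSS 1, with explicit MIF image IDs
   RESOURCE PARTITION=((CSS(0),(LP01,1),(LP02,2)),(CSS(1),(LP03,1)))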
Channel path definition
You can define shared channel paths in addition to dedicated and reconfigurable channel paths. The
CHPID statement has an additional SHARED keyword to accomplish this. You can also define spanned
channel paths using the PATH keyword. You can define:
v All channel paths as dedicated or reconfigurable.
v Only CNC, CTC, FC, FCP, CFP, ICP, IQD, CIB, OSC, OSD, OSE, OSN, OSX, and OSM channel paths as
shared.
v Only FC, FCP, CFP, ICP, IQD, CIB, OSC, OSD, OSE, OSN, OSX, and OSM channel paths as spanned.
ICP IOCP provides access controls for spanned, shared or reconfigurable channel paths. Parameters on
the PART | PARTITION or NOTPART keyword on the CHPID statement allow you to specify an access
list and a candidate list for spanned, shared and reconfigurable channel paths.
The access list parameter specifies the logical partition or logical partitions that will have the channel
path configured online at logical partition activation following the initial power-on reset of an LPAR
IOCDS. For exceptions, refer to zEnterprise System Processor Resource/Systems Manager Planning Guide.
The candidate list parameter specifies the LPARs that can configure the channel path online. It also
provides security control by limiting the logical partitions that can access shared or reconfigurable
channel paths.
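The following sketch (hypothetical CHPID, PCHID, and LPAR names; not a complete or validated IOCDS)
illustrates how the access and candidate lists appear on a shared CHPID statement:

   * Hypothetical sketch: LP01 and LP02 are in the access list; LP03 is an additional
   * candidate that can configure the channel path online later
   CHPID PATH=(CSS(0),41),SHARED,PARTITION=((LP01,LP02),(LP03)),PCHID=104,TYPE=FC
   * A spanned path would instead list both channel subsystems on PATH, for example PATH=(CSS(0,1),41)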
Note: PR/SM LPAR manages the channel path configuration across POR. Refer to zEnterprise System
Processor Resource/Systems Manager Planning Guide.
I/O device definition
You can specify either the optional PART | PARTITION keyword or the optional NOTPART keyword on
the IODEVICE statement to limit device access by logical partitions for devices assigned to shared
ESCON, FICON, or OSA channels, or HiperSockets. (The IODEVICE candidate list is not supported for
shared CFP, CIB, or ICP CHPIDs.)
By limiting access to a subset of logical partitions, you can:
v Provide partitioning at the device level.
v Provide security at the device level.
v Better manage the establishment of logical paths.
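As a sketch only, the following fragment limits device access on a shared FICON channel such as the CHPID 41
sketched earlier. The control unit number, device numbers, and LPAR names are hypothetical, and operands that a
real configuration may require (for example, LINK or UNITADD) are omitted.

   * Hypothetical sketch: devices 0200-020F are accessible only to LP01 and LP02
   CNTLUNIT CUNUMBR=0200,PATH=((CSS(0),41)),UNIT=2107
   IODEVICE ADDRESS=(0200,16),CUNUMBR=0200,UNIT=3390B,PARTITION=((CSS(0),LP01,LP02))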
Hardware Configuration Definition (HCD) considerations
HCD provides the capability to make both dynamic hardware and software I/O configuration changes. It
also provides:
v An online, interactive way to manage the I/O configuration that is more usable than IOCP.
v The capability to define the I/O configuration for dynamic or nondynamic I/O configuration purposes.
HCD allows you to define LPAR controls for defining LPARs, channel paths, and I/O devices. The
following HCD panels (or corresponding HCM dialogs) support these controls.
Add Partition
Allows explicit definition of LPARs and associated LPAR numbers.
Define Access List
Allows definition of initial access list for channel path access control of shared and reconfigurable
channel paths.
Define Candidate List (for channel paths)
Allows definition of candidate list for channel path access control of shared and reconfigurable
channel paths.
Define Candidate List (for devices)
Allows definition of candidate list for device access control for devices assigned to shared
channels.
Add Processor
Allows you to determine the capabilities of a CPC.
Add Channel Path
Operation mode field allows definition of a channel path as dedicated, reconfigurable, or shared.
Define Device / Processor
Additional field to specify candidate list.
Chapter 5. I/O connectivity
This chapter discusses the channels associated with the z114 I/O connectivity. You can also refer to I/O
features on page 21 for a summary of the I/O channel characteristics.
FICON and FCP channels
The FICON Express channel uses the industry-standard Fibre Channel standard as a base. FICON is an
upper layer protocol that maps the channel architecture onto the general transport vehicle used
throughout the industry for other upper layer protocols, such as SCSI, IPI, and IP. This transport vehicle
includes the physical definition and the transmission and signalling protocols, which are the same for all
of the other upper layer protocols.
The FICON Express8S, FICON Express8, and FICON Express4 features conform to the Fibre Connection
(FICON) architecture, the High Performance FICON on System z (zHPF) architecture, and the Fibre
Channel Protocol (FCP) architecture, providing connectivity between any combination of servers,
directors, switches, and devices (control units, disks, tapes, printers) in a Storage Area Network (SAN).
There are two CHPID types that can be specified using IOCP or HCD, and each channel is defined with
one of them:
v CHPID type FC: native FICON, High Performance FICON for System z (zHPF), and
channel-to-channel (CTC)
v CHPID type FCP: Fibre Channel Protocol (FCP) for communication with SCSI devices
FICON builds upon the strengths of ESCON. The FICON implementation enables full duplex data
transfer, so data travels in both directions simultaneously, rather than the half duplex data transfer of
ESCON. Furthermore, concurrent I/Os can occur on a single FICON channel, a fundamental difference
between FICON and ESCON. The data rate drop is minimal with FICON even at distances up to 10 km.
Native FICON supports up to 64 concurrent I/O operations. ESCON supports one I/O operation at a
time.
In conjunction with the Fibre Channel protocol (FCP), N_Port ID Virtualization (NPIV) is supported,
which allows the sharing of a single physical FCP channel among operating system images.
FICON Express8S features
There are two FICON Express8S features for z114: FICON Express8S 10KM LX and FICON Express8S
SX. These features can only be used in a PCIe I/O drawer.
Each FICON Express8S feature has two channels per feature. Each of the two independent channels
supports a link data rate of 2 Gbps (gigabits per second), 4 Gbps, or 8 Gbps, depending upon the
capability of the attached switch or device. The link speed is autonegotiated point-to-point. A link data
rate of 1 Gbps is not supported. Each channel utilizes small form factor pluggable optics (SFPs) with LC
duplex connectors. The optics allow each channel to be individually repaired without affecting the other
channels.
Each FICON Express8S feature supports cascading (the connection of two FICON Directors in succession)
to minimize the number of cross-site connections and help reduce implementation costs for disaster
recovery applications, GDPS, and remote copy.
The FICON Express8S features:
v Provide increased bandwidth and granularity for the SAN
v Support 8 GBps PCIe interface to the PCIe I/O drawer
v Provide increased performance with zHPF and FCP protocols
v Provide increased port granularity with two channels/ports per feature
v Include a hardware data router for increased performance for zHPF.
The FICON Express8S features for z114 include:
v FICON Express8S 10KM LX (FC 0409)
FICON Express8S 10KM LX utilizes a long wavelength (LX) laser as the optical transceiver and supports
use of a 9/125 micrometer single mode fiber optic cable terminated with an LC duplex connector.
FICON Express8S 10KM LX supports distances up to 10 km (6.2 miles).
FICON Express8S 10KM LX (CHPID type FC or FCP) can be defined as a spanned channel and can be
shared among LPARs within and across LCSS.
The sending and receiving transceiver must be the same type, LX.
v FICON Express8S SX (FC 0410)
FICON Express8S SX utilizes a short wavelength (SX) laser as the optical transceiver and supports use
of a 50/125 micrometer multimode fiber optic cable or a 62.5/125 micrometer multimode fiber optic
cable terminated with an LC duplex connector.
Note: You cannot mix 50 and 62.5 micron multimode fiber optic cabling in the same link.
For details about the unrepeated distances for FICON Express8S SX, refer to System z Planning for Fiber
Optic Links (ESCON, FICON, Coupling Links, and Open System Adapters).
FICON Express8S SX (CHPID type FC or FCP) can be defined as a spanned channel and can be shared
among LPARs within and across LCSS.
The sending and receiving transceiver must be the same type, SX.
FICON Express8 features
There are two FICON Express8 features for z114: FICON Express8 10KM LX and FICON Express8 SX.
These features can only be used in an I/O drawer.
FICON Express8 features can be carried forward or ordered on MES using RPQ 8P2534. During an MES,
if slots are available in an I/O drawer and no slots are available in a PCIe I/O drawer, RPQ 8P2534 is
used to order these features.
Each FICON Express8 feature has four channels per feature. Each of the four independent channels
supports a link data rate of 2 Gbps (gigabits per second), 4 Gbps, or 8 Gbps, autonegotiated depending
upon the capability of the attached switch or device. A link data rate of 1 Gbps is not supported. Each
channel utilizes small
form factor pluggable optics (SFPs) with LC duplex connectors. The optics allow each channel to be
individually repaired without affecting the other channels.
Each FICON Express8 feature supports cascading (the connection of two FICON Directors in succession)
to minimize the number of cross-site connections and help reduce implementation costs for disaster
recovery applications, GDPS, and remote copy.
The FICON Express8 features for z114 include:
v FICON Express8 10KM LX (FC 3325)
All the channels on a single FICON Express8 10KM LX feature are the same type, 10KM LX. FICON
Express8 10KM LX utilizes a long wavelength (LX) laser as the optical transceiver and supports use of
a 9/125 micrometer single mode fiber optic cable terminated with an LC duplex connector.
FICON Express8 10KM LX supports distances up to 10 km (6.2 miles).
FICON Express8 10KM LX (CHPID type FC or FCP) can be defined as a spanned channel and can be
shared among LPARs within and across LCSS.
v FICON Express8 SX (FC 3326)
All the channels on a single FICON Express8 SX feature are the same type, SX. FICON Express8 SX
utilizes a short wavelength (SX) laser as the optical transceiver and supports use of a 50/125
micrometer multimode fiber optic cable or a 62.5/125 micrometer multimode fiber optic cable
terminated with an LC duplex connector.
Note: You cannot mix 50 and 62.5 micron multimode fiber optic cabling in the same link.
For details about the unrepeated distances for FICON Express8 SX, refer to System z Planning for Fiber
Optic Links (ESCON, FICON, Coupling Links, and Open System Adapters).
FICON Express4 features
FICON Express4 features can only be carried forward. FICON Express4-2C features can be carried
forward or ordered on MES using RPQ 8P2534. During an MES, if slots are available in an I/O drawer
and no slots are available in a PCIe I/O drawer, RPQ 8P2534 is used to order these features.
FICON Express4 features can only be used in an I/O drawer.
The FICON Express4 features for the z114 include:
v FICON Express4 10KM LX (FC 3321)
FICON Express4 10KM LX has four channels per feature. It is designed to support unrepeated
distances up to 10 km (6.2 miles) over single mode fiber optic cabling.
v FICON Express4 4KM LX (FC 3324)
FICON Express4 4KM LX has four channels per feature. It is designed to support unrepeated distances
up to 4 km (2.5 miles) over single mode fiber optic cabling.
v FICON Express4 SX (FC 3322)
FICON Express4 SX has four channels per feature. It is designed to carry traffic over multimode fiber
optic cabling.
v FICON Express4-2C 4KM LX (FC 3323)
FICON Express4-2C 4KM LX has two channels per feature. It is designed to support unrepeated
distances up to 4 km (2.5 miles) over single mode fiber optic cabling.
v FICON Express4-2C SX (FC 3318)
FICON Express4-2C SX has two channels per feature. It is designed to carry traffic over multimode
fiber optic cabling.
All channels on a single FICON Express4 feature are of the same type: 4KM LX, 10KM LX, or SX.
FICON Express4 supports a 4 Gbps link data rate with auto-negotiation to 1, 2, or 4 Gbps for synergy
with existing switches, directors, and storage devices. An entry level 4KM LX feature, supporting two
channels per feature, is offered for data centers with limited requirements for single mode fiber optic
cabling connectivity.
Note: You need to ensure that the tactical as well as the strategic requirements for your data center,
Storage Area Network (SAN), and Network Attached Storage (NAS) infrastructures are taken into
consideration as you employ 2 Gbps and beyond link data rates.
Mode Conditioning Patch (MCP) cables are only supported at the 1 Gbps link data rate.
Channel consolidation using FICON Express8S
You can consolidate your FICON Express4 and FICON Express8 channels onto fewer FICON Express8S
channels while maintaining and enhancing performance. You can also migrate ESCON channels to
FICON Express8 channels. Contact your IBM representative for assistance.
Name server registration
Registration information is provided on the name server for both FICON and FCP, which enhances
problem determination, analysis, and manageability of the storage area network (SAN).
High Performance FICON for System z (zHPF)
High Performance FICON for System z (zHPF) is an extension to the FICON architecture and is designed
to improve the performance of small block and large block data transfers. zHPF supports multitrack
operations and the transfer of greater than 64 kB of data in a single operation, resulting in higher
throughputs with lower response times.
zHPF applies to all FICON Express8S, FICON Express8, and FICON Express4 features (CHPID type FC)
on z114.
Discover and automatically configure devices attached to FICON
channels
z114 provides a function, z/OS discovery and autoconfiguration (zDAC), that discovers and
automatically configures control units and devices that are accessible to z114, but not currently
configured.
This function performs a number of I/O configuration definition tasks for new and changed control units
and devices attached to FICON channels. The proposed configuration incorporates the current contents of
the I/O definition file (IODF) with additions for newly installed and changed control units and devices.
This function is designed to help simplify I/O configuration on z114 running z/OS and reduce
complexity and setup time.
This function is supported by z/OS V1.12 with PTFs and applies to all FICON features (CHPID type FC)
on z114.
The MIDAW facility
The Modified Indirect Data Address Word (MIDAW) facility is designed to improve FICON performance.
The MIDAW facility:
v Improves FICON performance for extended format data sets. Non-extended data sets can also benefit
from MIDAW.
v Reduces FICON channel and control unit overhead.
Multipath Initial Program Load (IPL)
If I/O errors occur during the IPL, z/OS on z114 allows the system to attempt an IPL on alternate paths,
if the paths are available. The system will attempt the IPL on an alternate path until all paths have been
attempted or until the IPL is successful.
This function is applicable for all FICON features with CHPID type FC.
Purge path extended
The purge path extended function provides enhanced capability for FICON problem determination. The
FICON purge path error-recovery function is extended to transfer error-related data and statistics
between the channel and entry switch, and from the control unit and its entry switch to the host
operating system.
FICON purge path extended is supported by z/OS. FICON purge path extended applies to the FICON
features when configured as a native FICON channel.
Fibre channel analysis
You can use the Fibre Channel Analyzer task on the HMC to identify fiber optic cabling issues in your
Storage Area Network (SAN) fabric without contacting IBM service personnel. All FICON channel error
information is forwarded to the HMC where it is analyzed to help detect and report the trends and
thresholds for all FICON channels on z114. This report shows an aggregate view of the data and can span
multiple systems.
This applies to FICON channels exclusively (CHPID type FC).
Fibre Channel Protocol (FCP) for SCSI devices
Fibre Channel (FC) is a computer communications protocol that attempts to combine the benefits of both
channel and network technologies. Fibre Channel made the biggest impact in the storage arena,
specifically using Small Computer System Interface (SCSI) as an upper layer protocol.
Fibre Channel is broken down into five layers: FC-0, FC-1, FC-2, FC-3, and FC-4. The layers define the
following functions:
v FC-0 defines the physical characteristics
v FC-1 defines the character encoding and link maintenance
v FC-2 defines the frame format, flow control, classes of service
v FC-3 defines the common services
FICON and FCP implement these layers (FC-0 through FC-3) unchanged.
v FC-4 defines the upper layer protocol mapping which includes SCSI as well as Fibre Channel - Single
Byte-2 (FC-SB-2), which is FICON.
The Fibre Channel Protocol (FCP) capability, which supports attachment of Small Computer System
Interface (SCSI) devices, is based on the Fibre Channel (FC) standards defined by INCITS and published
as ANSI standards. SCSI devices in Linux on System z environments, as well as SCSI devices defined to
z/VM and z/VSE, are based on the Fibre Channel standards. FCP is an upper layer Fibre Channel
mapping of SCSI on a common stack of Fibre Channel physical and logical communication layers. HIPPI,
IPI, IP, and FICON (FC-SB-2) are other examples of upper layer protocols.
SCSI is an industry-standard protocol that is supported by a wide range of controllers and devices that
complement the System z9, System z10, and zEnterprise storage attachment capability through FICON
and ESCON channels. FCP channels on System z9, System z10, and zEnterprise are provided to enable
operating systems on System z9, System z10, and zEnterprise to access industry-standard SCSI storage
controllers and devices.
FCP is the base for open industry-standard Fibre Channel networks or Storage Area Networks (SANs).
Fibre Channel networks consist of servers and storage controllers and devices as end nodes,
interconnected by Fibre Channel switches, directors, and hubs. While switches and directors are used to
build Fibre Channel networks or fabrics, Fibre Channel loops can be constructed using Fibre Channel
hubs. In addition, different types of bridges and routers may be used to connect devices with different
interfaces (like parallel SCSI). All of these interconnects may be combined in the same network.
For information about the configurations supported by the FCP channel, refer to Configurations.
An FCP channel is defined in the IOCP as channel type FCP and is available on FICON features.
FCP channels provide full-fabric support, which means that multiple directors or switches can be placed
between the server and the FCP/SCSI device, thereby allowing many hops through a storage network for
I/O connectivity.
In addition, for FCP channels, a high integrity fabric solution is not required but is recommended. If an
FCP Interswitch Link (ISL) is moved, data could potentially be sent to the wrong destination without
notification.
The FICON Express4, FICON Express8, and FICON Express8S features, when defined as CHPID type
FCP in the IOCP, support storage controllers and devices with an FCP interface in z/VM, z/VSE, and
Linux on System z environments.
Each port on a single FICON card can be configured individually and can be a different CHPID type.
FCP channels support T10-DIF
System z FCP has implemented support of the American National Standards Institute's (ANSI) T10 Data
Integrity Field (T10-DIF) standard. With this support, data integrity protection fields are generated by the
operating system and propagated through the storage area network (SAN). System z helps to provide
added end-to-end data protection between the operating system and the storage device.
An extension to the standard, Data Integrity Extensions (DIX), provides checksum protection from the
application layer through the host bus adapter (HBA), where cyclical redundancy checking (CRC)
protection is implemented.
T10-DIF support by the FICON Express8S and FICON Express8 features, when defined as CHPID type
FCP, is exclusive to z114 and z196. Exploitation of the T10-DIF standard requires support by the control unit.
Configurations
Storage controllers and devices with an FCP interface can be directly attached to zEnterprise
(point-to-point connection), or by using Fibre Channel switches or directors. A storage controller or device
with an appropriate FCP interface may be attached to each port of a FICON feature, or of a Fibre
Channel switch or director.
In addition, the following devices and controllers can be attached to each port on a Fibre Channel switch
or director:
v FC-AL controllers or devices, and FC-AL hubs
If the switch or director supports the Fibre Channel Arbitrated Loop (FC-AL) protocol, devices
implementing this protocol may be attached to that port and accessed from System z9, System z10, or
zEnterprise. Devices typically implementing the FC-AL protocol are tape units and libraries, and
low-end disk controllers.
If the switch or director does not support the FC-AL protocol, you can also install a FC-AL bridge
between the switch or director and the FC-AL controller or device.
If more than one FC-AL controller or device is to be attached to an FC-AL switch port, it is
convenient to use a Fibre Channel hub, to which multiple devices with an FC-AL interface can be
directly attached.
v Fibre-Channel-to-SCSI bridges
Fibre-Channel-to-SCSI bridges can be used to attach storage controllers and devices implementing the
electrical, parallel SCSI interface. Different types of Fibre-Channel-to-SCSI bridges may support
different variants of the parallel SCSI interface, such as Low Voltage Differential (LVD), High Voltage
Differential (HVD), Single Ended, wide (16-bit) versus narrow (8-bit) interfaces, and different link
speeds.
Each FCP channel (CHPID) can support up to 480 subchannels, where each subchannel represents a
communication path between software and the FCP channel. Refer to Channel and device sharing for
more information.
Host operating systems sharing access to an FCP channel can establish in total up to 2048 concurrent
connections to up to 510 different remote Fibre Channel ports associated with Fibre Channel controllers.
The total number of concurrent connections to end devices, identified by logical unit numbers (LUNs),
must not exceed 4096.
I/O devices
The FCP channel implements the FCP standard as defined by the INCITS Fibre Channel Protocol for SCSI
(FCP), and Fibre Channel Protocol for SCSI, Second Version (FCP-2), as well as the relevant protocols for
the SCSI-2 and SCSI-3 protocol suites. Theoretically, each device conforming to these interfaces should
work when attached to an FCP channel as previously defined. However, experience tells us that there are
small deviations in the implementations of these protocols. Therefore, it is advisable to do appropriate
conformance and interoperability testing to verify that a particular storage controller or device can be
attached to an FCP channel in a particular configuration (i.e. attached via a particular type of Fibre
Channel switch, director, hub, or Fibre-Channel-to-SCSI bridge).
Also, for certain types of FCP and SCSI controllers and devices, specific drivers in the operating system
may be required in order to exploit all the capabilities of the controller or device, or to cope with unique
characteristics or deficiencies of the device.
Information about switches and directors qualified for IBM System z FICON and FCP channels is located
on Resource Link (http://www.ibm.com/servers/resourcelink) on the Library page under Hardware products
for servers.
Addressing
FCP channels use the Queued Direct Input/Output (QDIO) architecture for communication with the
operating system. IOCP is only used to define the QDIO data devices. The QDIO architecture for FCP
channels, derived from the QDIO architecture that had been defined for communications via an OSA
card, defines data devices that represent QDIO queue pairs, consisting of a request queue and a response
queue. Each queue pair represents a communication path between an operating system and the FCP
channel. It allows an operating system to send FCP requests to the FCP channel via the request queue.
The FCP channel uses the response queue to pass completion indications and unsolicited status
indications to the operating system.
IOCP is not used to define the actual Fibre Channel storage controllers and devices, nor the Fibre
Channel interconnect units such as switches, directors, or bridges. IOCP is only used to define the QDIO
data devices. The Fibre Channel devices (end nodes) in a Fibre Channel network are addressed using
World Wide Names (WWNs), Fibre Channel Identifiers (IDs), and Logical Unit Numbers (LUNs). These
addresses are configured on an operating system level, and passed to the FCP channel together with the
corresponding Fibre Channel I/O or service request via a logical QDIO device (queue).
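For illustration only (hypothetical CHPID, PCHID, and device numbers; not a complete or validated IOCDS), the
following sketch shows an FCP CHPID and the QDIO data devices that IOCP defines for it; the Fibre Channel
storage controllers and devices themselves are not defined in IOCP.

   * Hypothetical sketch: an FCP channel and 32 QDIO data devices for operating system use
   CHPID PATH=(CSS(0),50),SHARED,PCHID=108,TYPE=FCP
   CNTLUNIT CUNUMBR=5000,PATH=((CSS(0),50)),UNIT=FCP
   IODEVICE ADDRESS=(5000,32),CUNUMBR=5000,UNIT=FCP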
Channel and device sharing
An FCP channel can be shared between multiple operating systems, running in a logical partition or as a
guest operating system under z/VM. Under z/VM, multiple z/VM, CMS, Linux on System z, and z/VSE
guests are able to share SCSI channels and devices using z/VM Fixed Block Architecture (FBA)
emulation. To access the FCP channel, each operating system needs one FCP device on an FCP channel.
Each FCP channel can support up to 480 QDIO queue pairs. This allows each FCP channel to be shared
among 480 operating system instances.
Channel and device sharing using NPIV: N_Port ID Virtualization (NPIV) allows the sharing of a
single physical FCP channel and attached devices, logical units, among operating system images, whether
in logical partitions or as z/VM guests in virtual machines. This is achieved by assigning a unique
WWPN to each subchannel that is defined for an FCP Channel using IOCP.
Each operating system instance using such a subchannel and its associated QDIO queues therefore also
uses its own WWPN. When the operating system image starts using its subchannel, the FCP channel
performs a login to the Fibre Channel fabric and acquires a unique Fibre Channel ID, also called N_Port
ID. This ID is used in all further Fibre Channel communication that is done on behalf of this operating
system image.
Access controls based on the assigned WWPN can be applied in the SAN environment, using standard
mechanisms such as zoning in FC switches and Logical Unit Number (LUN) masking in the storage
controllers. You can configure the SAN prior to the installation of a new machine using the WWPN tool
available on Resource Link.
NPIV exploitation requires a Fibre Channel director or switch that supports the NPIV standard. If such a
director or switch is installed, NPIV mode can be enabled for the FCP channel that attaches to this Fibre
Channel switch or director through the Support Element. This enablement can be done on a logical
partition basis, that is, per FCP channel image.
NPIV is not supported in a point-to-point topology.
Channel and device sharing without NPIV: Without NPIV support, multiple operating system images
can still concurrently access the same remote Fibre Channel port through a single FCP channel. However,
Fibre Channel devices or logical units, identified by their LUNs, cannot be shared among multiple
operating system images through the same FCP channel.
Positioning
FCP and SCSI are industry-standard protocols, which have been implemented by many vendors in a
large number of different types of storage controllers and devices. These controllers and devices have
been widely accepted in the market place and proven to be adequate to meet the requirements regarding
reliability, availability, and serviceability (RAS) in many environments.
However, it must be noted that there are some advanced, unique RAS characteristics of zEnterprise
storage attachments based on ESCON and FICON attachments, using channel programs (and the
Extended Count Key Data (ECKD) protocol in the case of disk control units), that may not be readily
available in such an FCP or SCSI based world. Therefore, whenever there are very stringent requirements
regarding isolation, reliability, availability, and serviceability, a conscious decision must be made whether
FCP attached storage controllers and devices or FICON or ESCON attached control units should be used.
Customers requiring the more robust RAS characteristics should choose FICON or ESCON channels.
SCSI Initial Program Load (IPL)
This function allows you to IPL an operating system from an FCP-attached disk, to execute either in a
logical partition or as a guest operating system under z/VM. In particular, SCSI IPL can directly IPL a
z114 operating system that has previously been installed on a SCSI disk. Thus, there is no need for a
classical channel (ESCON or FICON) attached device, such as an ECKD disk control unit, in order to
install and IPL a z114 operating system. The IPL device is identified by its Storage Area Network (SAN)
address, consisting of the WWPN of the disk controller and the Logical Unit Number (LUN) of the IPL
device.
You can also IPL a standalone-dump program from an FCP channel attached SCSI disk. The
standalone-dump program can also store the generated dump data on such a disk.
SCSI IPL in z/VM allows Linux on System z, z/VSE, and other guest operating systems that support
SCSI IPL to be IPLed from FCP-attached SCSI disk, when z/VM is running on a z114. Therefore, z/VM,
z/VSE, and Linux on System z guests may be started and run completely from FCP channel attached
disk in your hardware configuration.
z/VM provides the capability to install z/VM from a DVD to an Enterprise Storage Server (ESS) SCSI disk emulated as a Fixed Block Architecture (FBA) disk, as well as from a DVD to a 3390 disk. Thus, z/VM and its Linux on System z guests may be started and run completely
from FCP disks on your hardware configuration. Refer to z/VM subset of the 2818DEVICE Preventive
Service Planning (PSP) bucket for any service required for z/VM support for SCSI IPL.
z/VM supports SCSI-attached disks to be used for installation, IPL, and operations such as storing
dumps, and other functions, while continuing to provide support for ESCON-attached or FICON-attached
disk or tape.
z/VM SCSI support allows a Linux on System z server farm and z/VSE to be deployed on z/VM in a
configuration that includes only SCSI disks.
z/VM provides the capability to dump Linux on System z guests to FCP-attached SCSI disks. Benefits
include:
v More guest virtual memory can be dumped because SCSI disks can be larger than ECKD disks
v Avoids the need to convert a VMDUMP into a Linux tool format
v Allows the same SCSI dump mechanisms to be used when running Linux for System z in an LPAR
and in a z/VM virtual machine.
For Linux on System z support for SCSI IPL, refer to this website: http://www.ibm.com/developerworks/linux/
linux390/.
z/VSE supports FCP-attached SCSI disks for installation and IPL. For z/VSE SCSI support, refer to the
appropriate z/VSE publications (for example, z/VSE Administration).
For additional information on:
v How to use SCSI IPL for a logical partition, refer to the zEnterprise System Support Element Operations
Guide or to the System z Hardware Management Console Operations Guide
v Messages that can show up on the operating systems console on the SE or Hardware Management
Console, refer to System z Small Computer Systems (SCSI) IPL - Machine Loader Messages
v How to use SCSI IPL for a z/VM guest, refer to http://www.vm.ibm.com/pubs for appropriate z/VM
publications
v How to prepare a Linux on System z IPL disk or a Linux on System z dump disk, refer to
http://www.ibm.com/developerworks/linux/linux390/ for appropriate Linux on System z publications.
ESCON channels
The ESCON channel provides a 17 MBps link data rate between host and control units for I/O devices.
ESCON supports half-duplex data transfers over 62.5 micron multimode fiber optic cabling.
ESCON can only be used in an I/O drawer.
The ESCON channel provides a light-emitting diode (LED) light source for fiber optic cables. It can
extend up to 3 kilometers (1.86 US miles), a range that can be further extended to 6 or 9 kilometers (km)
by retransmission through one or two ESCON directors.
With the availability of two LCSSs, you can define a maximum of 240 ESCON channels on your z114 up
to a maximum of 16 features per system. The maximum number of configurable channels is 256 per LCSS
and per operating system image. The high density ESCON feature has 16 ports, 15 of which can be
activated for your use. One port is always reserved as a spare, in the event of a failure of one of the other
ports. When four ports are ordered, two 16-port ESCON features are installed and two ports are activated
on each feature. After the first pair, ESCON features are installed in increments of one. ESCON channels
continue to be ordered in increments of four channels.
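As a worked example of the port arithmetic described above, the following Python sketch relates ordered channel counts to installed 16-port features, of which 15 ports are usable per feature. The ordering rules are simplified for illustration and should not be treated as the configurator's exact algorithm.

    import math

    PORTS_PER_FEATURE = 16     # physical ports per high density ESCON feature
    USABLE_PER_FEATURE = 15    # one port on each feature is reserved as a spare
    MAX_FEATURES = 16

    def features_needed(ordered_channels):
        """Rough minimum number of 16-port features for an order (channels come in multiples of 4)."""
        assert ordered_channels % 4 == 0, "ESCON channels are ordered in increments of four"
        return max(2, math.ceil(ordered_channels / USABLE_PER_FEATURE))  # first order installs two features

    print(features_needed(4))                   # 2 features, with two ports activated on each
    print(MAX_FEATURES * USABLE_PER_FEATURE)    # 240, the maximum ESCON channels on a z114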
ESCON supports these operating system environments: z/OS, z/VM, z/VSE, z/TPF, and Linux on
System z.
ESCON channels affect the performance of the channel subsystem. Maximizing channel subsystem
performance is an important consideration in configuring I/O devices to a z114 general purpose model
CPC. Channel subsystem performance depends on the factors described in this chapter.
For an explanation of basic ESCON channel concepts, refer to Introducing Enterprise Systems Connection.
For detailed information about synchronous and nonsynchronous I/O operation, refer to Storage
Subsystem Library Introduction to Nonsynchronous Direct Access Storage Subsystems.
ESCON converter operation
You can configure ESCON converter channels (attached to a parallel converter - for example, the IBM
9034 or the Optica 34600 FXBT) for block and byte multiplexer mode of operation. The data mode of
operation is determined by the multiplexer mode (byte or block). This is selected for specific channels
when either the CPC or an LPAR is initialized.
As many as eight channel paths are available to attach to any I/O device. During any I/O operation, one
of the available channel paths is selected. Channel path selection is a hardware function rather than a
function of the system control program.
At the start of an I/O operation, a central processor signals the channel subsystem that an I/O operation
is needed. An I/O request is posted to a queue; meanwhile, instruction execution in the central processor
continues.
Channel multiplexing modes
The data mode of operation is determined by the multiplexer mode (block or byte). This is selected for
specific channels when either the CPC or a logical partition is initialized.
Block Multiplexer Mode of Operation: In block multiplexer mode of operation, a device stays
connected to a channel continuously during the transfer of a full block of data.
Block multiplexer mode of operation allows a control unit to present channel end and to disconnect
from a channel at the completion of a specified operation. Device End is presented at a later point.
During the interval between channel end and device end another device attached to the same
channel can be started or can complete an operation that is ready. However, if the second device does
connect to the same channel during this interval, the first device may find the channel busy when it tries
to reconnect, and then the first device must wait for service.
ESCON can be configured for block multiplexer mode of operation. In block multiplexer mode of
operation, ESCON channels configured as CVC channel paths can operate in either interlock (high-speed
transfer) mode or in data-streaming mode. They can also be attached to control units that operate in
high-speed transfer or in data-streaming mode. Data rates can be as high as 4.5 MBps for ESCON CVC
channel paths.
Byte multiplexer mode of operation: Byte interleave mode of operation allows the execution of multiple
I/O operations concurrently. Byte multiplexer mode permits several relatively slow-speed I/O devices to
operate at the same time. Each addressed device requesting service is selected for transfer of a byte or a
group of bytes to or from main storage. Bytes from multiple devices are interleaved on the channel and
routed to or from the desired locations in main storage.
The load that a byte multiplexer channel can sustain is variable. It is governed by I/O device
performance factors such as the data transfer rate, device buffers, number of bytes per data burst on the
channel, channel program requirements, synchronized mechanical motion, and priority sequence position
on the I/O interface.
ESCON converter channels (defined as CBY) can be configured for byte multiplexer mode of operation.
In byte multiplexer mode of operation, ESCON channels configured as CBY channel paths can operate in
either byte multiplexer mode or in burst mode. CBY channels require a 9034 ESCON converter. Byte
multiplexer mode permits several relatively slow-speed I/O devices to operate at the same time.
Refer to the 2818IO subset id in the 2818DEVICE upgrade ID of the preventive service planning (PSP)
bucket for prerequisite 9034 EC level information.
Byte multiplexer mode and burst mode: A byte multiplexer channel can be monopolized by one I/O
device (burst mode) or shared by many I/O devices (byte multiplexer mode). The number of bytes
transferred at a time in byte multiplexer mode can be one (single byte transfers) or more than one
(multibyte transfers). Most control units that operate in byte multiplexer mode can also operate in burst
mode. A manually set switch at the control unit determines whether the control unit operates in burst
mode or byte multiplexer mode.
Some devices offer a choice of how many bytes are transferred during a single data transfer sequence in
byte multiplexer mode.
Because most of the time spent in a data-transfer control sequence is for control, increasing the burst size
(the number of bytes transferred per sequence) results in a relatively small increase in the total channel
busy time for the sequence. Also, increasing the burst size reduces the number of data transfer sequences
required. The net effect is a significant improvement in channel efficiency and a higher allowable data
rate.
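To make the effect of burst size concrete, here is a small Python sketch that compares channel busy time for the same total amount of data moved with small and large bursts. The overhead and rate values are assumptions chosen for illustration, not measured figures.

    # Illustrative only: per-sequence control overhead dominates at small burst sizes.
    CONTROL_OVERHEAD_US = 30.0   # assumed control time per data-transfer sequence, microseconds
    BYTES_PER_US = 4.5           # assumed transfer rate (roughly 4.5 MBps, i.e. 4.5 bytes per microsecond)

    def channel_busy_time_us(total_bytes, burst_size):
        sequences = total_bytes / burst_size
        return sequences * (CONTROL_OVERHEAD_US + burst_size / BYTES_PER_US)

    for burst in (16, 64, 256):
        print(burst, round(channel_busy_time_us(4096, burst), 1))
    # Larger bursts need fewer sequences, so total channel busy time drops sharply.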
Burst mode, although most effective in the use of channel resources, can cause another device on the byte
multiplexer channel to exceed its critical time. From the perspective of the control unit, burst mode
occurs when the time contributed by the control unit in a transfer sequence is more than 32
microseconds. (Refer to the Enterprise Systems Architecture/390 System/360 and System/370 I/O Interface Channel to Control Unit OEMI.)
If the device configuration guidelines are followed for byte multiplexer channels on a general purpose
model CPC, deferred accesses are minimized and data transfer sequences exceeding 32 microseconds are
acceptable when large burst sizes are specified.
Most class-2 and class-3 devices that can operate in burst mode should be attached to block multiplexer
channels for better performance.
I/O operations control
ESA/390 and z/Architecture I/O operations are performed by executing a channel program that consists
of one or more chained Channel Command Words (CCWs). Each CCW contains a command and other
information that is used by both the channel and control unit in executing the I/O operation.
Channel commands are segmented into six basic categories with many variations based on control unit
type. A channel program is initiated and controlled by executing one or more of the ESA/390 and
z/Architecture I/O instructions described below. I/O interruptions may result during the execution of a
channel program to notify the CP of progress or completion.
Channel commands
The six basic channel commands are:
Write Initiates the transfer of data from main storage to an I/O device.
Read Initiates the transfer of data from an I/O device to main storage.
Read Backward
Initiates the transfer of data from an I/O device to main storage, storing data bytes in reverse
order.
Control
Specifies operations such as set tape density, rewind tape, advance paper in a printer, or sound an
audible alarm.
Sense Requests information from a control unit. The information contains unusual conditions detected
during the last I/O operation and detailed device status.
Transfer in Channel (TIC)
Specifies the location in main storage where the next CCW in the channel program is to be
fetched. The TIC command provides branching between CCWs in noncontiguous storage areas. A
TIC command cannot specify a CCW containing another TIC command.
ESA/390 and z/Architecture mode I/O instructions
In ESA/390 mode or z/Architecture mode, any CP can initiate I/O operations with any I/O device and
can handle I/O interruptions from any I/O device. Each I/O device is assigned a unique device number,
and is associated with one subchannel.
The CPs communicate with devices by specifying the appropriate subchannel. The subchannel uses the
assigned device address to communicate with the device over one or more channel paths. The device
number provides a path-independent means to refer to a device for use in operator messages or at the
time of IPL.
For descriptions of these instructions, refer to the Enterprise System Architecture/390 Principles of Operation
or z/Architecture Principles of Operation manual.
The I/O instructions for operation in ESA/390 mode or z/Architecture mode are:
v Start Subchannel (SSCH)
v Test Subchannel (TSCH)
v Clear Subchannel (CSCH)
v Halt Subchannel (HSCH)
v Resume Subchannel (RSCH)
v Store Subchannel (STSCH)
v Modify Subchannel (MSCH)
v Test Pending Interruption (TPI)
v Reset Channel Path (RCHP)
v Set Channel Monitor (SCHM)
v Store Channel Report Word (STCRW)
v Cancel Subchannel (XSCH)
v Set Address Limit (SAL)
v Store Channel Path Status (STCPS).
The SSCH instruction specifies an operation request block, which designates the channel program.
Chaining operations
Following the transfer of information over a channel designated by a Channel Command Word (CCW),
an operation initiated by the Start Subchannel (SSCH) instruction can be continued by fetching a new
CCW. Fetching a new CCW immediately following the completion of the previous CCW is called
chaining. Chaining is described in more detail in the Enterprise System Architecture/390 Principles of
Operation or z/Architecture Principles of Operation.
CCWs located in contiguous areas of central storage (successive doubleword locations) can be chained.
Chains of CCWs located in noncontiguous storage areas can be coupled for chaining purposes by using a
Transfer in Channel command. All CCWs in a chain refer to the I/O device specified in the original
instruction.
The type of chaining (data or command) is specified by chain-data and chain-command flag bits in the
CCW.
Data chaining
When the data transfer specified by the current CCW is finished, data chaining causes the
operation to continue by fetching a new CCW and using the storage area defined by the new
CCW. Execution of the operation at the I/O device is not affected.
Command chaining
Each time a new CCW is fetched during command chaining, a new I/O operation is specified.
The new operation is initiated when the device end signal for the current operation is received,
unless suspension is specified in the new CCW. When command chaining takes place, the
completion of the current operation does not cause an I/O interruption.
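The following Python sketch models a chain of CCWs with chain-data and chain-command flags, purely to illustrate the structure just described. The field layout, command names, and flag handling are simplified and hypothetical; this is not the architected CCW format.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class CCW:
        command: str                      # e.g. "READ", "WRITE", "TIC", "CONTROL"
        data_address: int = 0             # storage area used by this CCW
        chain_data: bool = False          # CD flag: continue the same operation with a new storage area
        chain_command: bool = False       # CC flag: start a new operation when device end is received
        next: Optional["CCW"] = None      # next CCW in the channel program

    def walk_chain(first: CCW) -> List[str]:
        """Follow a channel program, noting whether each step is data chaining or command chaining."""
        steps, ccw = [], first
        while ccw is not None:
            kind = "data-chain" if ccw.chain_data else "command-chain" if ccw.chain_command else "last"
            steps.append(f"{ccw.command} ({kind})")
            ccw = ccw.next if (ccw.chain_data or ccw.chain_command) else None
        return steps

    prog = CCW("WRITE", 0x1000, chain_data=True,
               next=CCW("WRITE", 0x2000, chain_command=True,
                        next=CCW("READ", 0x3000)))
    print(walk_chain(prog))   # ['WRITE (data-chain)', 'WRITE (command-chain)', 'READ (last)']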
I/O interruptions
I/O interruptions report the completion of I/O operations to the CPs, error and time-out conditions, and
progress.
Ending status information about the operation is available to the control program at the end of the I/O
operation. When an I/O operation is completed, an I/O interruption request is sent to a central processor.
When the request is honored, an I/O interruption occurs and places the central processor under control
of the I/O new program status word (PSW). Until an I/O interruption condition is honored, it is called a
pending I/O interruption.
Errors detected by the channel subsystem are reported to the CPs as I/O interruptions or machine-check
interruptions. I/O interruptions report the following hardware-related conditions:
v Interface Control Check (IFCC) - For example, interface tag errors and time-outs.
v Channel Control Check (CCC) - For example, parity, decode, or control errors.
v Channel Data Check (CDC) - For example, a parity error detected in central storage.
Machine-check interruptions include the following:
v Unrecoverable errors (retry is unsuccessful).
v Persistent errors (retry can be attempted, but the error threshold is exceeded).
v Serious channel element errors that require immediate reporting or cannot be reported as an IFCC or
CCC with an I/O interruption.
Resets
An I/O system reset is issued to all channels, and the channels signal a system reset to all attached I/O
devices.
An I/O system reset:
v Stops all subchannel operations.
v Resets interruptions and status in all subchannels.
An I/O system reset occurs as part of:
v Channel subsystem power-on reset.
v Initial program load.
v System reset.
A channel issues a selective reset to a specific I/O device in response to an IFCC, CCC, or as part of
execution of the clear subchannel instruction. The status of the specific device is reset.
I/O interface protocol
The I/O interface protocol is determined by the interface sequencing operations selected for specific
control units and their associated devices that are attached to the channel.
Channel-to-Channel connection
The Channel-to-Channel (CTC) function simulates an I/O device that can be used by one system control
program to communicate with another system control program. It provides the data path and
synchronization for data transfer between two channels. When the CTC option is used to connect two
channels that are associated with different systems, a loosely coupled multiprocessing system is
established. The CTC connection, as viewed by either of the channels it connects, has the appearance of
an unshared I/O device.
The CTC is selected and responds in the same manner as any I/O device. It differs from other I/O
devices in that it uses commands to open a path between the two channels it connects, and then
synchronizes the operations performed between the two channels.
ESCON CTC support: The parallel I/O CTC architecture defines two operating modes for CTC
communication: basic mode and extended mode. ESCON CTC support for both of these modes is
available.
ESCON channels (using link-level and device-level protocols): You can achieve ESCON
channel-to-channel connections between CPCs with ESCON or FICON Express channels if one of the
ESCON channels is defined to operate in channel-to-channel (CTC) mode.
ESCON channels that operate in CTC mode (extended mode or basic mode) can be defined as shared
ESCON channels. For more information, refer to Multiple Image Facility (MIF) on page 51.
For detailed information about the ESCON channel-to-channel adapter, refer to Enterprise Systems
Architecture/390 ESCON Channel-to-Channel Adapter.
Channel time-out functions
The optional time-out function described here applies only to ESCON channels that attach to a 9034
ESCON converter channel.
Each channel path has I/O interface time-out functions that time the control unit delays in completing
the following I/O interface sequences:
v A 6-second time-out for all selection and status presentation sequences. A time-out occurs if the
sequence is not complete within 6 seconds.
v A 30-second time-out for data transfer. A time-out occurs if a byte of data is not transferred within 30
seconds.
If a time-out occurs, the channel terminates the I/O request to the control unit and generates an IFCC
interruption.
The time-out function detects malfunctions in control units and I/O devices that can cause the channel
path to be unusable to other control units and I/O devices. The time-out function is specified as active or
inactive for a device by IOCP when the IOCDS is created.
Control unit (CU) priority on an I/O interface
CU priority on an I/O interface applies only to ESCON channels attached to a 9034 ES connection
converter channel.
CU priority on the I/O interface of a channel depends on the order in which the CUs were attached. If the
CUs are connected to the select out line, the first CU has the highest priority. If the CUs are attached to
the select in line, the priority sequence is reversed. CUs attached to the select out line have priority
over CUs attached to the select in line.
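A minimal Python sketch of the priority rule just described follows. The control unit names are hypothetical, and in practice the priority is determined by cabling, not by software.

    def cu_priority_order(select_out_cus, select_in_cus):
        """CUs cabled on the select out line rank first, in attachment order;
        CUs cabled on the select in line follow, in reverse attachment order."""
        return list(select_out_cus) + list(reversed(select_in_cus))

    # Example: CUs attached in the order A, B on select out and C, D on select in.
    print(cu_priority_order(["A", "B"], ["C", "D"]))   # ['A', 'B', 'D', 'C']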
Dynamic reconnection
The channel subsystem permits dynamic reconnection of I/O devices that have the dynamic-reconnection
feature installed and that are set up to operate in a multipath mode, such as the IBM 3390 Direct Access
Storage Model A14 or A22. Dynamic reconnection allows the device to reconnect and continue a chain of
I/O operations using the first available channel path (one of as many as eight possible channel paths
defined in an IOCP parameter). The selected path is not necessarily the one used initially in the I/O
operation.
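A short Python sketch of first-available path selection during reconnection follows. The CHPID values and availability states are invented for illustration.

    def reconnect_path(defined_paths, busy_paths):
        """Return the first available channel path (up to eight defined in IOCP), or None if all are busy."""
        for chpid in defined_paths:
            if chpid not in busy_paths:
                return chpid
        return None

    paths = ["10", "11", "20", "21"]            # CHPIDs defined for the device (example values)
    print(reconnect_path(paths, {"10", "11"}))  # '20' - not necessarily the path used initially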
ESCON channel performance
Channel subsystem performance can be examined by observing two measurements:
v Response time (the amount of time taken to complete an I/O operation).
v Throughput (the number of I/O operations an I/O subsystem can complete in a given amount of
time).
Channel subsystem response time and throughput can be divided into four major components:
Figure 11. Control Unit (CU) priority on ESCON channels attached to a 9034 ES connection converter
v Queuing and setup time
The time taken for a channel path, control unit, and device to become available.
The time taken for a channel to send the I/O operation commands to the control unit.
v Control unit and device time
The time required by the control unit and device to prepare for the transfer of data for the I/O
operation. For example, a non-cached DASD control unit may have to wait for the DASD's seek and
latency times before being ready to accept or send data.
v Data transfer time
The time it takes to transfer the data for the I/O operation.
v Completion time
The time it takes for the channel and control unit to post the status of and end the I/O operation.
Factors affecting ESCON channel performance
Factors that affect the various components of performance include:
v Synchronous or nonsynchronous type of operation
v Data transfer rate
v Attached device characteristics
v Channel subsystem workload characteristics.
Synchronous and nonsynchronous I/O operation: For detailed information about concepts described in
this section, refer to Storage Subsystem Library Introduction to Nonsynchronous Direct Access Storage
Subsystems.
Synchronous operation
Most DASD devices in a parallel environment transfer data synchronously. Synchronous
operation requires that the channel, control unit, and device be active at the same time.
All work involved in ending an operation and advancing to the next operation must be
completed before the DASD head reaches the next record (commonly referred to as the
inter-record gap). If this does not occur, a rotational positional sensing/sensor (RPS) miss or an
overrun is generated and the operation must wait for one DASD revolution before continuing.
Nonsynchronous operation
Nonsynchronous operation removes the requirements of synchronous operation. During
nonsynchronous operation, the channel, control unit, and device do not have to be active at the
same time to perform an I/O operation; thereby:
v Increasing DASD storage potential (by reducing inter-record gap).
v Allowing the channel and control units to be separated by longer distances.
v Eliminating command overruns.
v Reducing response time (by reducing RPS misses).
v Permitting the channel to perform other operations during the time it would normally wait for
the device (this increases the throughput of the system).
Extended count key data (ECKD) channel programs are required to gain the benefits of
nonsynchronous I/O operations. Count key data (CKD) channel programs are supported, but
without the benefit of nonsynchronous operation. CKD channel-program performance could be
degraded relative to ECKD channel programs in a nonsynchronous environment.
Data transfer rate: One of the factors that affects channel performance is the data transfer rate. The I/O
subsystem data rate is the data transfer rate between processor storage and the device during an I/O
operation.
The I/O subsystem data rate is made up of three components:
v Channel data rate
The rate that the channel transfers data between the transmission link and processor storage during an
I/O operation. For ESCON channels, the link speed is 20 MBps and the channel data rate is 17 MBps at zero distance. The data rate decreases as the distance between the channel and the control unit increases.
v Control unit data rate
The rate that the control unit transfers data between the control unit and the transmission link during
an I/O operation.
v Device data rate
The rate of data transfer between the control unit and the device. This rate depends on the control unit
and device you use.
The I/O subsystem data rate is the lowest of the channel data rate, the control unit data rate, and the
device data rate. In cases where the data comes from the control unit or is stored in the control unit and is not transferred directly to or from the device (for example, a cache read), the I/O subsystem data rate is the lower of the two: the channel data rate or the control unit data rate.
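The rule above reduces to taking the minimum of the participating rates, as this small Python sketch shows. The rates, in MBps, are illustrative values only.

    def io_subsystem_data_rate(channel_mbps, cu_mbps, device_mbps=None):
        """Effective data rate: the lowest of the rates involved in the transfer.
        For a cache read the device is not involved, so pass device_mbps=None."""
        rates = [channel_mbps, cu_mbps] + ([device_mbps] if device_mbps is not None else [])
        return min(rates)

    print(io_subsystem_data_rate(17, 40, 4.5))   # 4.5 - the device is the bottleneck
    print(io_subsystem_data_rate(17, 40))        # 17  - cache read: the channel limits the transfer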
The I/O subsystem data rate affects only the data transfer portion of the response time for an I/O operation. As the I/O subsystem data rate increases, response time and throughput both improve (response time decreases and throughput increases).
I/O device characteristics: The characteristics of devices attached to a channel subsystem can have a
substantial effect on performance. Device characteristics such as caches, buffers, and data transfer rates all
affect response time and throughput.
Channel subsystem workload characteristics: The performance of a specific I/O configuration varies
based on the workload characteristics of that configuration. Two significant factors that determine
workload characteristics and affect response time and throughput are channel program characteristics and
cache-hit rates.
Channel program characteristics
Channel program characteristics affect channel subsystem performance. ESCON channel
subsystems using link-level and device-level protocols perform nonsynchronous data transfers,
and should use extended count key data (ECKD) channel programs.
Count key data (CKD) channel programs run in an ESCON environment, but may increase
response times and reduce throughput due to lost DASD rotations.
Channel programs that contain indirect data address words (IDAWs), Transfer in Channel
commands (TICs), and chained data commands, or that have poorly-aligned data boundaries,
cause longer storage-response times and increase channel subsystem response times.
Chained data commands increase response time due to an additional interlocked exchange
between the channel and control unit. Refer to ESCON performance characteristics on page 72
for more information.
The amount of data to be transferred per I/O operation affects throughput. As the amount of
data transferred per I/O operation increases (the ratio of data transferred to overhead improves),
throughput improves.
Cache-hit rates
For control units which implement caches, cache-hit rates affect the channel subsystem
performance. As the cache-hit rate increases, response time and throughput improve. The
cache-hit rate is the percentage of times when data needed for a read operation is in the control
unit's cache. For example, a cache-hit rate of 70% means that the required data is in the cache for
7 out of 10 read operations.
The cache-hit rate is significant because data is transferred out of the cache at the control unit's
maximum data transfer rate, while data from the device is transferred at lower device speeds.
This means that the higher the cache-hit rate, the better the response time and the better the
throughput.
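As a worked example of why the cache-hit rate matters, this Python sketch computes an effective read transfer rate from assumed cache and device rates. The blending is a simplification: real response time also includes seek, latency, and protocol overheads.

    def effective_read_rate(hit_rate, cache_mbps, device_mbps):
        """Blend of the control unit's cache rate and the slower device rate, weighted by the cache-hit rate."""
        return hit_rate * cache_mbps + (1.0 - hit_rate) * device_mbps

    for hit in (0.5, 0.7, 0.9):
        print(hit, effective_read_rate(hit, cache_mbps=17.0, device_mbps=4.5))
    # A higher cache-hit rate moves the effective rate toward the control unit's maximum rate.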
ESCON performance characteristics
With ESCON channels you need to consider the distance between the channel and the control unit since
this affects the setup and completion times of an I/O operation. As the distance between the channel and
the control unit increases, the response time increases and the throughput decreases. Channel and control
unit utilization also increases as distance between the channel and control unit increases.
The speed of data transfer through fiber optic cable is subject to propagation delay. Propagation delay time is determined by two factors: the speed of light through the optical fiber (which is fixed) and the length of the fiber optic link. Propagation delay time increases as the distance between elements in a fiber optic environment increases.
Interlocked exchange affects response time. Interlocked exchange requires that the channel (or control
unit) wait for a response from the control unit (or channel) before proceeding with the next step of an
I/O operation. As distance increases, the interlocked-exchange response time increases because of longer
propagation delay times.
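A rough Python estimate of propagation delay and its effect on interlocked exchanges follows. The 5 microseconds-per-kilometer figure is an approximation for light in optical fiber and the exchange count is an assumption, not values from this publication.

    PROPAGATION_US_PER_KM = 5.0   # approximate one-way delay of light in optical fiber

    def interlocked_exchange_delay_us(distance_km, exchanges):
        """Each interlocked exchange waits for a response, so it pays a round trip per exchange."""
        round_trip_us = 2 * distance_km * PROPAGATION_US_PER_KM
        return exchanges * round_trip_us

    for km in (1, 3, 9):
        print(km, "km:", interlocked_exchange_delay_us(km, exchanges=4), "us added per I/O")
    # Longer links add delay to every interlocked exchange, raising response time and utilization.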
The throughput and response time for a shared ESCON channel are comparable to that of an unshared
ESCON channel with comparable workload.
OSA channels
OSA channels include all OSA-Express2, OSA-Express3, and OSA-Express4S features.
z114 supports a maximum number of 48 features and 96 ports for the combined OSA-Express4S,
OSA-Express3, and OSA-Express2 features.
Note: Unless noted differently, throughout this section, the term OSA features refers to all the
OSA-Express4S, OSA-Express3 and OSA-Express2 features.
Supported CHPID types
OSA channels support the following modes of operation:
v CHPID type OSD
OSA-Express4S, OSA-Express3, or OSA-Express2 feature is running in QDIO mode.
QDIO mode is the preferred architecture on z114 for high-speed communication, helping to reduce
host interruptions and improve response time.
TCP/IP traffic when Layer 3
Protocol-independent when Layer 2
v CHPID type OSE
OSA-Express3 or OSA-Express2 feature is running in non-QDIO mode
SNA/APPN/HPR and/or TCP/IP passthru (LCS)
OSA-Express2 1000BASE-T Ethernet or OSA-Express3 1000BASE-T Ethernet is required.
v CHPID type OSC
OSA-Integrated Console Controller (OSA-ICC)
TN3270E, non-SNA DFT to IPL CPCs and LPARs
Operating system console operations
OSA-Express2 1000BASE-T Ethernet or OSA-Express3 1000BASE-T Ethernet is required.
v CHPID type OSN
OSA-Express for Network Control Program (NCP)
Supports channel data link control (CDLC) protocol. This provides connectivity between System z
operating systems and IBM Communication Controller for Linux (CCL).
CCL allows you to keep data and applications on the mainframe operating systems while moving
NCP function to Linux on System z. CCL on System z helps to improve network availability by
replacing token-ring networks and ESCON channels with an Ethernet network and integrated LAN
adapters on zEnterprise, OSA-Express3 GbE or 1000BASE-T Ethernet features, or OSA-Express2 GbE
or 1000BASE-T Ethernet features.
Requires configuration to be done on a port-by-port basis
Used exclusively for internal communication, LPAR-to-LPAR
CHPID type OSN is not supported on the OSA-Express4S GbE features.
v CHPID type OSX
Provides connectivity and access control to the intraensemble data network (IEDN) from zEnterprise
to zBX
Supported for OSA-Express3 10 GbE SR, OSA-Express3 10 GbE LR, OSA-Express4S 10 GbE SR, and
OSA-Express4S 10 GbE LR only.
v CHPID type OSM
Provides connectivity to the intranode management network (INMN) from zEnterprise to Unified
Resource Manager functions
Supported for OSA-Express3 1000BASE-T Ethernet only. Each z114 in an ensemble must have a pair
of OSA-Express3 1000BASE-T Ethernet connections to the Bulk Power Hub (BPH) operating at 1
Gbps.
For more detailed information on these CHPID types and operating modes, refer to zEnterprise, System
z10, System z9 and zSeries Open Systems Adapter-Express Customer's Guide and Reference.
OSA/SF
The Open Systems Adapter Support Facility (OSA/SF) is a host-based tool to support and manage the
OSA features operating in QDIO (CHPID type OSD), non-QDIO mode (CHPID type OSE), or for
OSA-Express for NCP (CHPID type OSN). The OSA/SF is used primarily to manage all OSA ports,
configure all OSA non-QDIO ports, and configure local MACs.
One OSA/SF application can communicate with all OSA features in a hardware complex. OSA/SF
communicates with an OSA feature through a device predefined on the OSA feature. The device type is
OSAD.
OSA/SF is not required to set up the OSA features in QDIO mode (CHPID type OSD). However, it can
be used to set up MAC addresses and set adapter speed. For channels (CHPID type OSN), OSA/SF does
not provide any configuration management capabilities but provides capabilities only for operations
management.
OSA/SF includes a Java-based Graphical User Interface (GUI) in support of the client application. The
Java GUI is independent of any operating system/server (transparent to operating system), and is
expected to operate wherever the Java 1.4 runtimes are available.
Interoperability testing has been performed for Windows 2000, Windows XP, and Linux on System z.
Use of the GUI is optional; a REXX command interface is also included with OSA/SF. OSA/SF has been,
and continues to be, integrated in z/OS, z/VM, and z/VSE and runs as a host application. For OSA/SF,
Java GUI communication is supported via TCP/IP only.
The Layer 3 OSA Address Table (OAT) displays all IP addresses registered to an OSA port.
OSA/SF has the capability of supporting virtual Medium Access Control (MAC) and Virtual Local Area
Network (VLAN) identifications (IDs) associated with OSA-Express2, OSA-Express3, and OSA-Express4S
features configured as a Layer 2 interface.
These OSA/SF enhancements are applicable to CHPID type OSD, OSE, and OSN.
For more detailed information on OSA/SF, refer to zEnterprise, System z10, System z9 and zSeries Open
Systems Adapter-Express Customer's Guide and Reference.
OSA-Express4S features
The OSA-Express4S features are PCIe based cards used only in the PCIe I/O drawers.
Similar to OSA-Express3 features, OSA-Express4S features are designed for use in high-speed enterprise
backbones, for local area network connectivity between campuses, to connect server farms to z114, and to
consolidate file servers onto z114. The workload can be Internet protocol (IP) based or non-IP based. All
OSA-Express4S features are hot-pluggable. Each port can be defined as a spanned channel and can be
shared among LPARs within and across LCSS.
OSA-Express4S provides the following enhancements compared to OSA-Express3:
v Port granularity for increased flexibility allowing you to purchase the right number of ports to help
satisfy your application requirements and to better optimize for redundancy.
v 8 GBps PCIe interface to the PCIe I/O drawer
v Reduction in CPU utilization by moving the checksum function for LPAR-to-LPAR traffic from the
PCIe adapter to the OSA-Express4S hardware.
The OSA-Express4S features include:
v OSA-Express4S Gigabit Ethernet (GbE) LX (FC 0404)
OSA-Express4S GbE LX has one CHPID per feature and two ports associated with a CHPID. Supports
CHPID type: OSD
OSA-Express4S GbE LX uses a 9 micron single mode fiber optic cable with an LC duplex connector
and a link data rate of 1 Gbps. It is designed to support unrepeated distances of up to 5 km (3.1 miles).
The sending and receiving transceiver must be the same type, LX.
v OSA-Express4S Gigabit Ethernet (GbE) SX (FC 0405)
OSA-Express4S GbE SX has one CHPID per feature and two ports associated with a CHPID. Supports
CHPID type: OSD
OSA-Express4S GbE SX uses a 50 or 62.5 micron multimode fiber optic cable with an LC duplex
connector and a link data rate of 1 Gbps. The supported unrepeated distances vary:
With 50 micron fiber at 500 MHz-km: 550 meters (1804 feet)
With 62.5 micron fiber at 200 MHz-km: 273 meters (902 feet)
With 62.5 micron fiber at 160 MHz-km: 220 meters (772 feet)
The sending and receiving transceiver must be the same type, SX.
v OSA-Express4S 10 Gigabit Ethernet (GbE) Long Reach LR (FC 0406)
OSA-Express4S 10 GbE LR has one CHPID per feature and one port associated with a CHPID.
Supports CHPID types: OSD and OSX
OSA-Express4S 10 GbE LR uses a 9 micron single mode fiber optic cable with an LC duplex connector
and a link data rate of 10 Gbps. It is designed to support unrepeated distances of up to 10 km (6.2
miles).
The sending and receiving transceiver must be the same type, LR.
v OSA-Express4S 10 Gigabit Ethernet (GbE) Short Reach SR (FC 0407)
OSA-Express4S 10 GbE SR has one CHPID per feature and one port associated with a CHPID.
Supports CHPID types: OSD and OSX
OSA-Express4S 10 GbE SR uses a 50 or 62.5 micron multimode fiber optic cable with an LC duplex
connector and a link data rate of 10 Gbps. The supported unrepeated distances vary:
With 50 micron fiber at 2000 MHz-km: 300 meters (984 feet)
With 50 micron fiber at 500 MHz-km: 82 meters (269 feet)
With 62.5 micron fiber at 200 MHz-km: 33 meters (108 feet)
The sending and receiving transceiver must be the same type, SR.
OSA-Express3 features
All OSA-Express3 features are hot-pluggable.
OSA-Express3 10 Gigabit Ethernet, OSA-Express3 Gigabit Ethernet, and OSA-Express3-2P Gigabit
Ethernet features can be carried forward or ordered on MES using RPQ 8P2534. During an MES, if slots
are available in an I/O drawer and no slots are available in a PCIe I/O drawer, RPQ 8P2534 is used to
order these features.
OSA-Express3 features can only be used in an I/O drawer.
The OSA-Express3 features include:
v OSA-Express3 Gigabit Ethernet (GbE) LX (FC 3362)
OSA-Express3 GbE LX has two CHPIDs per feature and two ports associated with a CHPID. Supports
CHPID types: OSD and OSN
The OSA-Express3 GbE LX uses a 9 micron single mode fiber optic cable with an LC duplex connector
and a link data rate of 1000 Mbps (1 Gbps). However, OSA-Express3 GbE LX also accommodates the
reuse of existing multimode fiber (50 or 62.5 micron) when used with a pair of mode conditioning
patch (MCP) cables. It is designed to support unrepeated distances of up to 5 km (3.1 miles). If using
MCP cables, the supported unrepeated distance is 550 meters (1804 feet).
v OSA-Express3 Gigabit Ethernet (GbE) SX (FC 3363)
OSA-Express3 GbE SX has two CHPIDs per feature and two ports associated with a CHPID. Supports
CHPID types: OSD and OSN
The OSA-Express3 GbE SX uses a 50 or 62.5 micron multimode fiber optic cable with an LC duplex
connector and a link data rate of 1000 Mbps (1 Gbps). The supported unrepeated distances vary:
With 50 micron fiber at 500 MHz-km: 550 meters (1804 feet)
With 62.5 micron fiber at 200 MHz-km: 273 meters (902 feet)
With 62.5 micron fiber at 160 MHz-km: 220 meters (772 feet)
v OSA-Express3 1000BASE-T Ethernet (FC 3367)
OSA-Express3 1000BASE-T Ethernet has two CHPIDs per feature and two ports associated with a
CHPID. Supports CHPID types: OSD, OSE, OSC, OSN, and OSM
The OSA-Express3 1000BASE-T Ethernet uses an EIA/TIA Category 5 or Category 6 Unshielded Twisted Pair (UTP) cable with an RJ-45 connector and a maximum length of 100 meters (328 feet). It supports a link data rate of 10, 100, or 1000 Mbps; half duplex and full duplex operation modes; and autonegotiation to other speeds.
v OSA-Express3-2P 1000BASE-T Ethernet (FC 3369)
OSA-Express3-2P 1000BASE-T Ethernet has one CHPID per feature and two ports associated with a
CHPID. Supports CHPID types: OSD, OSE, OSC, OSN, and OSM
The OSA-Express3-2P 1000BASE-T Ethernet uses an EIA/TIA Category 5 or Category 6 Unshielded Twisted Pair (UTP) cable with an RJ-45 connector and a maximum length of 100 meters (328 feet). It supports a link data rate of 10, 100, or 1000 Mbps; half duplex and full duplex operation modes; and autonegotiation to other speeds.
v OSA-Express3 10 Gigabit Ethernet (GbE) Long Reach (LR) (FC 3370)
OSA-Express3 10 GbE LR has two CHPIDs per feature and one port associated with a CHPID.
Supports CHPID types: OSD and OSX
OSA-Express3 10 GbE LR uses a 9 micron single mode fiber optic cable with an LC duplex connector
and a link data rate of 10 Gbps. It is designed to support unrepeated distances of up to 10 km (6.2
miles).
OSA-Express3 10 GbE LR does not support autonegotiation to any other speed. It supports 64B/66B
coding.
v OSA-Express3 10 Gigabit Ethernet (GbE) Short Reach (SR) (FC 3371)
OSA-Express3 10 GbE SR has two CHPIDs per feature and one port associated with a CHPID.
Supports CHPID types: OSD and OSX
OSA-Express3 10 Gigabit Ethernet SR uses a 50 or 62.5 micron multimode fiber optic cable with an LC
duplex connector and a link data rate of 10 Gbps. The supported unrepeated distances vary:
With 50 micron fiber at 2000 MHz-km: 300 meters (984 feet)
With 50 micron fiber at 500 MHz-km: 82 meters (269 feet)
With 62.5 micron fiber at 200 MHz-km: 33 meters (108 feet)
v OSA-Express3-2P Gigabit Ethernet (GbE) SX (FC 3373)
OSA-Express3-2P GbE SX has one CHPID per feature and two ports associated with a CHPID.
Supports CHPID types: OSD and OSN
The OSA-Express3-2P GbE SX uses a 50 or 62.5 micron multimode fiber optic cable with an LC duplex
connector and a link data rate of 1000 Mbps (1 Gbps). The supported unrepeated distances vary:
With 50 micron fiber at 500 MHz-km: 550 meters (1804 feet)
With 62.5 micron fiber at 200 MHz-km: 273 meters (902 feet)
With 62.5 micron fiber at 160 MHz-km: 220 meters (772 feet)
All OSA-Express3 features support full duplex operation and standard frames (1492 bytes) and jumbo
frames (8992 bytes).
OSA-Express2 features
All OSA-Express2 features are hot-pluggable.
OSA-Express2 features can only be carried forward to z114.
OSA-Express2 features can only be used in an I/O drawer.
OSA-Express2 features include:
v OSA-Express2 Gigabit Ethernet (GbE) LX (FC 3364)
OSA-Express2 GbE LX has two CHPIDs per feature and one port associated with a CHPID. Supports
CHPID types: OSD and OSN
The OSA-Express2 GbE LX uses a 9 micron single mode fiber optic cable with an LC duplex connector
and a link data rate of 1000 Mbps (1 Gbps). However, OSA-Express2 GbE LX also accommodates the
reuse of existing multimode fiber (50 or 62.5 micron) when used with a pair of mode conditioning
patch (MCP) cables. It is designed to support unrepeated distances of up to 5 km (3.1 miles). If using
MCP cables, the supported unrepeated distance is 550 meters (1804 feet).
v OSA-Express2 Gigabit Ethernet (GbE) SX (FC 3365)
OSA-Express2 GbE SX has two CHPIDs per feature and one port associated with a CHPID. Supports
CHPID types: OSD and OSN
The OSA-Express2 GbE SX uses a 50 or 62.5 micron multimode fiber optic cable with an LC duplex
connector and a link data rate of 1000 Mbps (1 Gbps). The supported unrepeated distances vary:
With 50 micron fiber at 500 MHz-km: 550 meters (1804 feet)
With 62.5 micron fiber at 200 MHz-km: 273 meters (902 feet)
With 62.5 micron fiber at 160 MHz-km: 220 meters (772 feet)
v OSA-Express2 1000BASE-T Ethernet (FC 3366)
OSA-Express2 1000BASE-T Ethernet has two CHPIDs per feature and one port associated with a
CHPID. Supports CHPID types: OSD, OSE, OSC, and OSN
The OSA-Express2 1000BASE-T Ethernet uses an EIA/TIA Category 5 Unshielded Twisted Pair (UTP) cable with an RJ-45 connector and a maximum length of 100 meters (328 feet). It supports a link data rate of 10, 100, or 1000 Mbps over a copper infrastructure; half duplex and full duplex operation modes; and autonegotiation to other speeds.
All OSA-Express2 features support full duplex operation and standard frames (1492 bytes) and jumbo
frames (8992 bytes).
OSA-Express4S, OSA-Express3 and OSA-Express2 supported
functions
Note: Throughout this section, the term OSA refers to OSA-Express4S, OSA-Express3, and OSA-Express2.
Query and display your OSA-Express4S and OSA-Express3 configuration
OSA-Express4S and OSA-Express3 provide the capability for z/OS to directly query and display your
current OSA-Express4S and OSA-Express3 configuration information using the TCP/IP command,
Display OSAINFO. This command allows the operator to monitor and verify your current
OSA-Express4S and OSA-Express3 configuration, which helps to improve the overall management,
serviceability, and usability of OSA-Express4S and OSA-Express3.
This function is supported by z/OS and applies to OSA-Express4S (CHPID types OSD and OSX) and
OSA-Express3 (CHPID types OSD, OSX, and OSM).
Optimized latency mode
Optimized latency mode helps to improve the performance of z/OS workloads by minimizing response
times for inbound and outbound data when servicing remote clients.
Optimized latency mode applies to OSA-Express4S and OSA-Express3 (CHPID type OSD (QDIO) and
CHPID type OSX).
Inbound workload queuing
To improve performance for business critical interactive workloads and reduce contention for resources created by diverse workloads, OSA-Express4S and OSA-Express3 provide an inbound workload queuing function.
The inbound workload queuing (IWQ) function creates multiple input queues and allows OSA-Express4S
and OSA-Express3 to differentiate workloads off the wire and assign work to a specific input queue
(per device) to z/OS. With each input queue representing a unique type of workload and each workload
having unique service and processing requirements, the inbound workload queuing function allows z/OS
to preassign the appropriate processing resources for each input queue. As a result, multiple concurrent
z/OS processing threads can process each unique input queue (workload) avoiding traditional resource
contention. In a heavily mixed workload environment, this function reduces the conventional z/OS
processing required to identify and separate unique workloads, which results in improved overall system
performance and scalability.
The types of z/OS workloads that are identified and assigned to unique input queues are:
v z/OS sysplex distributor traffic: network traffic, which is associated with a distributed virtual internet protocol address (VIPA), is assigned a unique input queue. This allows the sysplex distributor traffic to be immediately distributed to the target host.
v z/OS bulk data traffic: network traffic, which is dynamically associated with a streaming (bulk data) TCP connection, is assigned to a unique input queue. This allows the bulk data processing to be assigned the appropriate resources and isolated from critical interactive workloads.
v z/OS Enterprise Extender traffic: network traffic, which is associated with SNA high performance routing, is assigned a unique input queue. This improves the device and stack processing and avoids injecting latency in the SNA workloads.
The z/OS sysplex distributor traffic and z/OS bulk data traffic workloads are supported by z/OS V1.12
and z/VM 5.4 or later for guest exploitation. The z/OS Enterprise Extender traffic workload is supported
by z/OS V1.13 and z/VM 5.4 or later for guest exploitation.
Inbound workload queuing applies to OSA-Express4S and OSA-Express3 (CHPID types OSD and OSX).
Dynamic LAN idle
The OSA LAN idle timer value defines how long OSA will hold packets before presenting the packets to
the host. The LAN idle function now allows the host OS to dynamically update the existing LAN idle
timer values (defined within the QIB) while the specific QDIO data device is in the QDIO active state.
This function applies to OSA-Express4S and OSA-Express3 (CHPID types OSD and OSX) and
OSA-Express2 (CHPID type OSD).
Dynamic LAN idle timer function is exploited by z/OS V1.8 with PTFs or later and z/VM 5.4 or later for
guest exploitation.
OSA-Express Network Traffic Analyzer
The OSA-Express Network Traffic Analyzer is a diagnostic tool used to copy frames as they enter or
leave an OSA adapter for an attached host. This facility is controlled and formatted by the z/OS
Communications Server, but the data is collected in the OSA at the network port. Because the data is
collected at the Ethernet frame level, you can trace the MAC headers for packets. You can also trace ARP
packets, SNA packets, and packets being sent to and from other users sharing the OSA.
This function applies to OSA-Express4S and OSA-Express3 (CHPID types OSD and OSX) and
OSA-Express2 (CHPID type OSD).
To enable the OSA-Express Network Traffic Analyzer, you must be running with a minimum of z/OS
V1.8 with PTFs or later.
Queued Direct I/O Diagnostic Synchronization (QDIOSYNC)
Queued Direct I/O Diagnostic Synchronization provides the ability to coordinate and simultaneously
capture software (z/OS) and hardware (OSA) traces. This function allows the host operating system to
signal the OSA feature to stop traces and allows the operator to capture both the hardware and software
traces at the same time. You can specify an optional filter that alters what type of diagnostic data is
collected by the OSA adapter. This filtering reduces the overall amount of diagnostic data collected and
therefore decreases the likelihood that pertinent data is lost.
This function applies to OSA-Express4S and OSA-Express3 (CHPID types OSD and OSX) and
OSA-Express2 (CHPID type OSD).
To use the Queued Direct I/O Diagnostic Synchronization facility, you must be running with a minimum
of z/OS V1.8 with PTFs or later.
Dynamic link aggregation for the z/VM environment
This function dedicates an OSA port to the z/VM 5.4 or later operating system for link aggregation under
z/VM Virtual Switch-controlled link aggregation. Link aggregation (trunking) is designed to allow you to
combine multiple physical OSA ports of the same type into a single logical link. You can have up to eight
OSA ports in one virtual switch. This increases bandwidth and permits nondisruptive failover in the
event that a port becomes unavailable. This function also supports dynamic add/remove of OSA ports
and full-duplex mode (send and receive).
This function applies to OSA-Express4S and OSA-Express3 (CHPID types OSD and OSX) in Layer 2
mode in QDIO mode and OSA-Express2 (CHPID type OSD) in Layer 2 mode in QDIO mode.
Multiple Image Facility (MIF) and spanned channels
OSA features support the Multiple Image Facility (MIF) for sharing channels across LPARs. They can be defined as a spanned channel to be shared among LPARs within and across LCSSs.
This function applies to OSA-Express4S and OSA-Express3 (CHPID types OSD and OSX) and
OSA-Express2 (all CHPID types).
QDIO data connection isolation
QDIO data connection isolation provides protection for workloads (servers and clients) hosted in a virtual
environment from intrusion or exposure of data and processes from other workloads.
This function applies to OSA-Express4S and OSA-Express3 (CHPID types OSD and OSX) and
OSA-Express2 (CHPID type OSD).
Layer 2 (Link Layer) support
OSA features can support two transport modes when using CHPID type OSD (QDIO): Layer 2 (Link
Layer) and Layer 3 (Network or IP Layer). Layer 2 support can help facilitate server consolidation and
will allow applications that do not use IP protocols to run on z114.
This function applies to OSA-Express4S and OSA-Express3 (CHPID types OSD and OSX) and
OSA-Express2 (CHPID type OSD).
640 TCP/IP stacks
Increasing the TCP/IP stacks allows you to host more Linux on System z images. OSA supports 640
TCP/IP stacks or connections per dedicated CHPID, or 640 total stacks across multiple LPARs using a
shared or spanned CHPID when priority specification is disabled.
This function applies to OSA-Express4S and OSA-Express3 (CHPID types OSD and OSX) and
OSA-Express2 (CHPID type OSD).
Large send
Large send improves performance by offloading TCP packet processing from the host TCP/IP stack to the OSA feature. Offloading allows the host to send IP datagrams up to 60K in size. The IP datagram is controlled
by the host TCP/IP stack. Sending larger data blocks reduces host processor utilization while increasing
network efficiencies.
Large send function of IPv4 packets is available for all in-service releases of z/OS, Linux on System z,
and z/VM for guest exploitation. This function applies to OSA-Express4S and OSA-Express3 (CHPID
types OSD and OSX) and OSA-Express2 (CHPID type OSD).
Large send function for IPv6 packets is supported on z/OS. It is not supported for LPAR-to-LPAR
packets. Large send support for IPv6 packets applies to the OSA-Express4S features (CHPID types OSD
and OSX).
Concurrent LIC update
Allows you to apply LIC updates without requiring a configuration off/on, thereby minimizing the
disruption of network traffic during the update.
This function applies to OSA-Express4S (CHPID types OSD and OSX), OSA-Express3 (CHPID types OSD,
OSX, OSM, and OSN), and OSA-Express2 (CHPID types OSD and OSN).
Layer 3 virtual MAC
The z/OS Layer 3 Virtual MAC (VMAC) function simplifies the network infrastructure and facilitates IP
load balancing when multiple TCP/IP instances are sharing the same OSA port or Media Access Control
(MAC) address. With Layer 3 VMAC support, each TCP/IP instance has its own unique "virtual" MAC
address instead of sharing the same universal or "burned in" OSA MAC address. Defining a Layer 3
VMAC provides a way for the device to determine which stack, if any, should receive a packet, including
those received for IP addresses that are not registered by any TCP/IP stack. With Layer 3 VMAC in a
routed network, OSA appears as a dedicated device to the particular TCP/IP stack, which helps solve
many port-sharing issues.
This function applies to OSA-Express4S and OSA-Express3 (CHPID types OSD and OSX) and
OSA-Express2 (CHPID type OSD).
Layer 3 Virtual MAC function is supported by z/OS V1.8 with PTFs or later and z/VM V5.4 or later for
guest exploitation.
Jumbo frames
When operating at 1 Gbps (fiber or copper) and 10 Gbps (fiber), use of jumbo frames (8992 bytes) is supported.
This function applies to OSA-Express4S and OSA-Express3 (CHPID types OSD and OSX) and
OSA-Express2 (CHPID type OSD).
HiperSockets
HiperSockets "network within the box" functionality allows high speed any-to-any connectivity among OS images within the z114 server without requiring any physical cabling. This "network within the box" concept minimizes network latency and maximizes bandwidth capabilities between z/VM, Linux on
System z, z/VSE, and z/OS images (or combinations of these) to enable optimized business and ERP
solutions within a single server. These images can be first level (i.e. directly under LPAR), or second level
images (i.e. under z/VM). Up to 32 separate internal LANs can be configured within a server thereby
allowing OS images to be grouped according to the function they provide. These groupings are
independent of sysplex affiliation.
Separate HiperSockets LANs are mainly required if some logical partitions need to be isolated from other
logical partitions. Each LAN is configured as a CHPID type IQD.
In addition, the number of communication queues is 4096, and each queue can have three subchannels. If you want the internal LANs shared between partitions in different LCSSs, then the channel must be
spanned. For more information on spanned channels, refer to Spanned channels on page 51.
Asynchronous delivery of data
The HiperSockets completion queue function allows both synchronous and asynchronous transfer of data
between logical partitions. With the asynchronous support, during high-volume situations, data can be
temporarily held until the receiver has buffers available in its inbound queue. This provides end-to-end
performance improvement for LPAR to LPAR communications.
The HiperSockets completion queue function is available for HiperSockets on Linux on System z. Refer to
http://www.ibm.com/developerworks/linux/linux390/ for more information on Linux on System z support.
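A simplified Python sketch of the synchronous-then-asynchronous behavior described above follows. The class names, buffer counts, and calls are invented for illustration; this is not the HiperSockets programming interface.

    from collections import deque

    class Receiver:
        def __init__(self, inbound_buffers):
            self.free_buffers = inbound_buffers     # buffers available in the receiver's inbound queue

        def try_deliver(self, packet):
            if self.free_buffers > 0:
                self.free_buffers -= 1
                return True                         # synchronous delivery succeeded
            return False

    def send(packet, receiver, completion_queue):
        """Deliver synchronously if the receiver has buffers; otherwise hold the data for later delivery."""
        if receiver.try_deliver(packet):
            return "delivered"
        completion_queue.append(packet)             # held until the receiver replenishes its buffers
        return "queued"

    cq = deque()
    rx = Receiver(inbound_buffers=1)
    print(send("pkt1", rx, cq))   # delivered
    print(send("pkt2", rx, cq))   # queued - receiver temporarily out of inbound buffers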
HiperSockets network integration with IEDN
z114 supports the integration of HiperSockets network with the existing intraensemble data network
(IEDN). This extends the reach of the HiperSockets network outside the CPC to the entire ensemble,
appearing as a single Layer 2 network. Because HiperSockets and IEDN are both internal System z networks,
the combination allows System z virtual servers to use the optimal path for communications.
The CHPID type for HiperSockets is IQD. However, the term IEDN IQD CHPID (or IQDX) is used to refer to the IQD CHPID with the functional support for the IEDN.
Each CPC can have only one IQD CHPID (IEDN IQD CHPID) defined to enable HiperSockets
communication to the virtual server.
This support is available for HiperSockets on z/OS.
Broadcast support
Internet Protocol Version 4 (IPv4) broadcast packets are supported over HiperSockets internal LANs. TCP/IP applications that support IPv4 broadcast, such as OMPROUTE when running Routing Information Protocol Version 1 (RIPv1), can send and receive broadcast packets over HiperSockets interfaces.
IPv4 broadcast support is available for HiperSockets on z/OS, z/VM V5.4 or later, and Linux
on System z. Refer to http://www.ibm.com/developerworks/linux/linux390/ for more information on Linux on
System z support.
IPv6 support
HiperSockets supports Internet Protocol Version 6 (IPv6). IPv6 expands the IP address space from 32 bits
to 128 bits to enable a greater number of unique IP addresses in support of the proliferation of devices,
such as cell phones and PDAs, now connecting to the Internet.
IPv4 and IPv6 support is available for HiperSockets on z/OS, z/VM, z/VSE and Linux on System z. IPv6
on z/VSE requires z/VSE 4.2 with PTFs.
VLAN support
Virtual Local Area Networks (VLANs), defined by IEEE standard 802.1q, are supported by HiperSockets in a
Linux on System z environment. VLANs increase bandwidth and reduce overhead by allowing networks to be
organized by traffic patterns rather than physical location, for more efficient traffic flow. This allows
traffic to flow on a VLAN connection over HiperSockets and between
HiperSockets and OSA.
HiperSockets Network Concentrator
HiperSockets Network Concentrator simplifies network addressing between HiperSockets and OSA
allowing seamless integration of HiperSockets-connected operating systems into external networks,
without requiring intervening network routing overhead, thus helping to increase performance and
simplify configuration.
HiperSockets Network Concentrator is implemented between HiperSockets, OSA, and Linux on System z.
The Network Concentrator provides support for unicast, broadcast, and multicast. For more information,
refer to http://www.ibm.com/developerworks/linux/linux390/.
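As a rough conceptual sketch (the real function is implemented inside Linux on System z and is not shown here), the Python example below models the kind of unicast, broadcast, and multicast forwarding decision a concentrator makes between a HiperSockets segment and an OSA-attached external LAN; the addresses and forwarding table are invented.

```python
# Conceptual forwarding between a HiperSockets segment and an OSA-attached LAN (illustration only).
HIPERSOCKETS, OSA = "hipersockets", "osa"

# Invented table: which segment each known IP address lives on.
location = {
    "10.0.0.1": HIPERSOCKETS,
    "10.0.0.2": HIPERSOCKETS,
    "10.0.1.9": OSA,
}

def is_multicast(ip):
    return 224 <= int(ip.split(".")[0]) <= 239

def forward(dst_ip, arrived_on):
    """Return the segments a frame should be forwarded to."""
    if dst_ip == "255.255.255.255" or is_multicast(dst_ip):
        return [seg for seg in (HIPERSOCKETS, OSA) if seg != arrived_on]
    target = location.get(dst_ip)
    if target is None or target == arrived_on:
        return []                  # unknown or local destination: nothing to forward
    return [target]

print(forward("10.0.1.9", HIPERSOCKETS))    # ['osa']
print(forward("255.255.255.255", OSA))      # ['hipersockets']
```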
HiperSockets Network Traffic Analyzer
The HiperSockets Network Traffic Analyzer Trace facility is used to diagnose problems in a HiperSockets
network. As data flows over an IQD channel, the HiperSockets Network Traffic Analyzer captures and
analyzes each packet. The captured data can be displayed immediately or written to a file.
The captured data includes packets being sent to and from other users sharing the HiperSockets channel,
such as logical partitions with z/OS, Linux on System z, z/VSE, or z/VM and z/VM guests.
To use this function, the level of authorization for the HiperSockets network traffic analyzer must be
selected. This authorization determines the scope of the tracing. Then a HiperSockets tracing device must
be activated on your system. This is performed by the operating system of the owning partition.
Setting the authorization level is performed on the Support Element using the Network Traffic Analyzer
Authorization task. The levels of authorization are as follows:
v No traffic on any IQD channel for the selected server can be traced
v No traffic on the selected IQD channel can be traced
v All traffic on the selected IQD channel can be traced. (This traces all traffic flowing between all the
logical partitions using this IQD CHPID.)
v Customized traffic flow between selected logical partitions can be traced.
From the Customize a HiperSockets NTA Logical Partition Authorization List window, select the logical
partition that will be authorized to set up, trace, and capture the HiperSockets network traffic. Then
select all eligible partitions to be traced. Only the traffic flowing between the selected eligible partition
or partitions will be traced.
The Support Element issues security logs to create an audit trail of the HiperSockets network traffic
analyzer tracing activity.
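The Python sketch below models the four authorization levels listed above as a simple permission check; it is an illustration only, not the Support Element implementation, and the level and partition names are invented.

```python
# Conceptual model of HiperSockets Network Traffic Analyzer authorization (illustration only).
DISALLOW_ALL, DISALLOW_CHANNEL, ALLOW_CHANNEL, CUSTOMIZED = range(4)

def trace_permitted(level, eligible_partitions, src_lp, dst_lp):
    """Decide whether traffic between two logical partitions may be traced."""
    if level == DISALLOW_ALL:
        return False                           # no IQD channel on this server may be traced
    if level == DISALLOW_CHANNEL:
        return False                           # the selected IQD channel may not be traced
    if level == ALLOW_CHANNEL:
        return True                            # all traffic on the selected IQD channel
    # CUSTOMIZED: only traffic flowing between the selected eligible partitions is traced.
    return src_lp in eligible_partitions and dst_lp in eligible_partitions

eligible = {"LP01", "LP02"}
print(trace_permitted(CUSTOMIZED, eligible, "LP01", "LP02"))  # True
print(trace_permitted(CUSTOMIZED, eligible, "LP01", "LP03"))  # False
```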
Layer 2 (Link Layer) support
HiperSockets supports two transport modes on the z114: Layer 2 (Link Layer) and Layer 3 (Network and
IP Layer). HiperSockets in Layer 2 mode can be used by Internet Protocol (IP) Version 4 or Version 6 and
non-IP protocols (such as AppleTalk, DECnet, IPX, NetBIOS, or SNA).
Each HiperSockets device has its own Layer 2 MAC address and allows the use of applications that
depend on a Layer 2 address such as DHCP servers and firewalls. LAN administrators can configure and
maintain the mainframe environment in the same fashion as they do in other environments. This eases
server consolidation and simplifies network configuration.
The HiperSockets device performs automatic MAC address generation to create uniqueness within and
across logical partitions and servers. MAC addresses can be locally administered, and the use of Group
MAC addresses for multicast and broadcasts to all other Layer 2 devices on the same HiperSockets
network is supported. Datagrams are only delivered between HiperSockets devices using the same
transport mode (Layer 2 with Layer 2 and Layer 3 with Layer 3).
A HiperSockets Layer 2 device may filter inbound datagrams by VLAN identification, the Ethernet
destination MAC address, or both. This reduces the amount of inbound traffic, leading to lower CPU
utilization by the operating system.
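As a conceptual illustration of the inbound filtering just described (not the actual device logic), the following Python sketch filters frames by VLAN identification and destination MAC address; the MAC addresses and VLAN numbers are invented.

```python
# Conceptual inbound filter for a HiperSockets Layer 2 device (illustration only).
def accept_frame(frame, device_macs, allowed_vlans):
    """Filter by VLAN identification, destination MAC address, or both."""
    if allowed_vlans is not None and frame.get("vlan") not in allowed_vlans:
        return False                            # frame is on a VLAN this device does not use
    dst = frame["dst_mac"].lower()
    if dst == "ff:ff:ff:ff:ff:ff":
        return True                             # broadcast to all Layer 2 devices on the LAN
    if int(dst.split(":")[0], 16) & 0x01:
        return True                             # group (multicast) MAC address
    return dst in device_macs                   # unicast: must match one of the device's MACs

macs = {"02:00:00:00:00:01"}                    # locally administered MAC address
print(accept_frame({"dst_mac": "02:00:00:00:00:01", "vlan": 10}, macs, {10}))  # True
print(accept_frame({"dst_mac": "02:00:00:00:00:09", "vlan": 10}, macs, {10}))  # False
print(accept_frame({"dst_mac": "ff:ff:ff:ff:ff:ff", "vlan": 20}, macs, {10}))  # False (VLAN filtered)
```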
As with Layer 3 functions, HiperSockets Layer 2 devices can be configured as primary or secondary
connectors or multicast routers enabling high performance and highly available Link Layer switches
between the HiperSockets network and an external Ethernet.
HiperSockets Layer 2 is supported by Linux on System z and by z/VM guest exploitation.
For hardware and software requirements, refer to the z/OS, z/VM, and z/VSE subsets of the 2818DEVICE
Preventive Service Planning (PSP) bucket prior to installing z114.
Multiple Write facility
HiperSockets allows the streaming of bulk data over a HiperSockets link between LPARs. The receiving
LPAR can process a much larger amount of data per I/O interrupt. This function is transparent to the
operating system in the receiving LPAR. HiperSockets Multiple Write facility, with fewer I/O interrupts,
is designed to reduce CPU utilization of the sending and receiving LPAR.
HiperSockets Multiple Write facility is supported in the z/OS environment.
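The following Python sketch only illustrates the arithmetic behind this benefit: moving the same amount of bulk data with more writes per I/O interrupt lowers the interrupt count on the receiving LPAR. The transfer and write sizes are invented for the example.

```python
import math

def interrupts_needed(total_bytes, bytes_per_write, writes_per_interrupt=1):
    """Rough count of receiver-side I/O interrupts for a bulk transfer (illustration only)."""
    writes = math.ceil(total_bytes / bytes_per_write)
    return math.ceil(writes / writes_per_interrupt)

bulk = 64 * 1024 * 1024                                              # 64 MB of streamed data
print(interrupts_needed(bulk, 32 * 1024))                            # one write per interrupt: 2048
print(interrupts_needed(bulk, 32 * 1024, writes_per_interrupt=16))   # batched writes: 128
```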
Chapter 6. Sysplex functions
This chapter describes the following z114 sysplex functions:
v Parallel Sysplex
v Coupling facility on page 87
v System-managed CF structure duplexing on page 93
v GDPS on page 94
v Intelligent Resource Director (IRD) on page 97.
Parallel Sysplex
IBM Parallel Sysplex makes use of a broad range of hardware and software products to process, in
parallel, a transaction processing workload across multiple z/OS images with direct read/write access to
shared data.
The Parallel Sysplex allows you to manage a transaction processing workload, balanced across multiple
z/OS images running on multiple Central Processor Complexes (CPCs), as a single data management
system. It also offers workload availability and workload growth advantages.
The Parallel Sysplex enhances the capability to continue workload processing across scheduled and
unscheduled outages of individual CPCs participating in a Parallel Sysplex using a coupling facility by
making it possible to dynamically reapportion the workload across the remaining active Parallel Sysplex
participants. Additionally, you can dynamically add processing capacity (CPCs or LPs) during peak
processing without disrupting ongoing workload processing.
z114 CPC support for the Parallel Sysplex consists of having the capability to do any or all of the
following:
v Configure IC links and define them as CHPID type ICP (peer link - connects to another IC)
v Install ISC-3 links and define them as CHPID type CFP (peer link - connects to another ISC-3)
v Install 12x IFB links (connects zEnterprise to zEnterprise, System z10 or System z9 and connects System
z10 to System z10 or System z9) and define them as CHPID type CIB
v Install 1x IFB links (connects zEnterprise to zEnterprise or System z10 and connects System z10 to
System z10) and define them as CHPID type CIB
v Define, as an LPAR, a portion or all of the CPC hardware resources (CPs, ICFs, storage, and coupling
connections) for use as a coupling facility that connects to z/OS or another CF
v Connect to a coupling facility for data sharing or resource sharing
v Define an Internal Coupling Facility (ICF).
z114 supports a maximum of 128 coupling CHPIDs per server for all link types (IFBs, ICs, and active
ISC-3s).
The z114 models provide the following support for the Parallel Sysplex:
v The z114's Parallel Sysplex support consists of supporting coupling facilities on z114, supporting
attachment to remote coupling facilities via various types of coupling links, supporting Server Time
Protocol (STP) for purposes of time synchronization, and supporting various ancillary CPC functions
used by Parallel Sysplex support.
v Internal coupling links can be used to connect either z/OS images to coupling facilities (CFs) or CF
images to other CF images within a z114. IC links have the advantage of providing CF communication
at memory speed and do not require physical links.
These various interconnect formats provide the connectivity for data sharing between a coupling facility
and the CPCs or logical partitions directly attached to it.
Parallel Sysplex coupling link connectivity
z114 supports IC, ISC-3 and IFB for passing information back and forth over high speed links in a
Parallel Sysplex environment. These technologies are all members of the family of coupling connectivity
options available on z114. With Server Time Protocol (STP), coupling links can also be used to exchange
timing information. Refer to Server Time Protocol (STP) on page 91 for more information about Server
Time Protocol. Refer to Table 13 for a summary of the coupling link options.
Table 13. Coupling link options

Link type                        Maximum links, M05 (note 1)    Maximum links, M10 (note 2)
                                 (4 fanouts available)          (8 fanouts available)
1x IFB (HCA3-O LR)               16*                            32*
12x IFB and 12x IFB3 (HCA3-O)    8*                             16*
1x IFB (HCA2-O LR) (note 3)      8*                             12
12x IFB (HCA2-O)                 8*                             16*
ISC-3                            48                             48
IC                               32                             32

Notes for Table 13:
1. z114 M05 supports a maximum of 56 extended distance links (8 1x IFB and 48 ISC-3) with no 12x IFB links*.
2. z114 M10 supports a maximum of 72 extended distance links (24 1x IFB and 48 ISC-3) with no 12x IFB links*.
3. Carried forward only.
* Uses all available fanout slots. Allows no other I/O or coupling.
Notes:
1. ISC-3 and IFB links require a point-to-point connection (direct channel attach between a CPC and a
coupling facility).
2. ISC-3 and IFB links can be redundantly configured (two or more ISC-3 or IFB links from each CPC) to
enhance availability and avoid extended recovery time.
3. z114 is designed to coexist in the same Parallel Sysplex environment with (n-2) server families. This
allows a z114 to coexist with the z10 and z9 servers. Connectivity to z990 or z890 is not supported.
Refer to Figure 12 on page 85 for an illustration of these coupling links.
When coupling within a z114 server, the IC channel can be shared among several LPARs and one
coupling facility partition.
ISC-3 links
The ISC-3 feature, with a link data rate of 2 Gbps, is a member of the family of coupling link options
available on z114. The ISC-3 feature is used by coupled systems to pass information back and forth over
high speed links in a Parallel Sysplex environment. When STP is enabled, ISC-3 links can be used to
transmit STP timekeeping information to other z114s as well as z196, z10 EC, z10 BC, z9 EC, and z9 BC
servers. ISC-3 links can also be defined as Timing-only links.
ISC-3 links support a maximum unrepeated fiber distance of 10 kilometers (6.2 miles) and a maximum
repeated distance of 100 kilometers (62 miles) when attached to a qualified Dense Wavelength Division
Multiplexer (DWDM). The list of qualified DWDM vendors is available on Resource Link
(http://www.ibm.com/servers/resourcelink), located under Hardware products for servers on the Library
page. RPQ 8P2197 is required for a maximum unrepeated fiber distance of 20 km. RPQ 8P2340 is
required for repeated fiber distances in excess of 100 kilometers.
The z114 ISC-3 feature is compatible with ISC-3 features on z196, z10 EC, z10 BC, z9 EC, and z9 BC
servers. ISC-3 (CHPID type CFP) can be defined as a spanned channel and can be shared among LPARs
within and across LCSSs. z114 supports 48 ISC-3 links in peer mode (12 features, four links per feature).
The ISC-3 feature is composed of:
v One Mother card (ISC-M), FC 0217
v Two Daughter cards (ISC-D), FC 0218.
Figure 12. Coupling link connectivity. The figure illustrates 12x IFB (150 meters), 1x IFB (10/100 km), and
ISC-3 coupling connectivity between z114 and z114, z196, z10 EC, z10 BC, z9 EC, and z9 BC servers using
the HCA3-O, HCA2-O, HCA3-O LR, HCA2-O LR, and HCA1-O fanouts, and shows that z800, z900, z890,
and z990 (ICB-4) are not supported.
Each daughter card has two ports or links, for a total of four links per feature. Each link is activated by
using the Licensed Internal Code Configuration Control (LICCC) and can only be ordered in increments
of one. The ISC-D is not orderable. Extra ISC-M cards can be ordered in increments of one, up to a
maximum of 12 or the number of ISC-D cards, whichever is less. When the quantity of ISC links (FC
0219) is selected, the appropriate number of ISC-M and ISC-D cards is selected by the configuration tool.
Each port operates at 2 Gbps.
Each port utilizes a Long Wavelength (LX) laser as the optical transceiver, and supports use of a
9/125-micrometer single mode fiber optic cable terminated with an industry standard small form factor
LC duplex connector. The ISC-3 feature accommodates reuse (at reduced distances) of 50/125-micrometer
multimode fiber optic cables when the link data rate does not exceed 1 Gbps. A pair of mode
conditioning patch cables are required, one for each end of the link.
InfiniBand (IFB) coupling links
There are two types of InfiniBand coupling links supported by z114, each supporting a point-to-point
topology:
v 12x InfiniBand coupling links
v 1x InfiniBand coupling links
The 12x IFB coupling links are used to connect a zEnterprise to either a zEnterprise or a System z10 with
a link data rate of 6 GBps. The 12x IFB coupling links are also used to connect a zEnterprise to a System
z9 or to connect a System z10 to either a System z10 or a System z9 with a link data rate of 3 GBps. The
12x IFB coupling links support a maximum link distance of 150 meters (492 feet); three meters are
reserved for intraserver connection.
The 12x IFB coupling links initialize at 3 GBps and auto-negotiate to a higher speed (6 GBps) if both ends
of the link support the higher speed. For example, when a zEnterprise is connected to a System z9, the
link auto-negotiates to the highest common data rate, 3 GBps. When a zEnterprise is connected to a
zEnterprise, the link auto-negotiates to the highest common data rate, 6 GBps.
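A minimal Python sketch of this negotiation rule is shown below; it simply models "start at 3 GBps, move to 6 GBps only if both ends support it" and is not part of any IBM code.

```python
# Conceptual model of 12x IFB speed negotiation (illustration only).
INITIAL_RATE_GBPS, HIGH_RATE_GBPS = 3, 6

def negotiated_rate(end_a_supports_6gbps, end_b_supports_6gbps):
    """Links initialize at 3 GBps and step up only if both ends support the higher speed."""
    if end_a_supports_6gbps and end_b_supports_6gbps:
        return HIGH_RATE_GBPS
    return INITIAL_RATE_GBPS

# A zEnterprise connected to a System z9 stays at the highest common rate, 3 GBps;
# a zEnterprise connected to another zEnterprise negotiates up to 6 GBps.
print(negotiated_rate(True, False))   # 3
print(negotiated_rate(True, True))    # 6
```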
The 12x IFB coupling links host channel adapter (HCA) fanout cards are as follows:
v HCA3-O fanout card on the zEnterprise
v HCA2-O fanout card on the zEnterprise or System z10
v HCA1-O fanout card on the System z9
12x IFB coupling links support use of a 50 micron OM3 multimode fiber optic cable with MPO
connectors. The HCA3-O, HCA2-O, and HCA1-O fanout cards contain two ports. Each port has an
optical transmitter and receiver module.
A 12x IFB coupling link using the 12x IFB3 protocol is used to connect a zEnterprise to a zEnterprise
when using HCA3-O fanout cards and if four or fewer CHPIDs are defined per HCA3-O port. If more
than four CHPIDs are defined per HCA3-O port, the 12x IFB protocol is used. The 12x IFB3 protocol
improves service times.
The 1x IFB coupling links are used to connect a zEnterprise to either a zEnterprise or a System z10 or to
connect a System z10 to a System z10 with a link data rate of 5 Gbps. (When attached to a qualified
Dense Wavelength Division Multiplexer (DWDM), the link data rate is 2.5 or 5 Gbps.) The list of qualified
DWDM vendors is available on Resource Link (http://www.ibm.com/servers/resourcelink), located under
Hardware products for servers on the Library page. The 1x IFB coupling links support a maximum
unrepeated distance of 10 kilometers (6.2 miles) and a maximum repeated distance of 100 kilometers (62
miles) when attached to a qualified DWDM. RPQ 8P2340 is required for unrepeated fiber distances in
excess of 10 kilometers or repeated fiber distances in excess of 100 kilometers.
The 1x IFB coupling links host channel adapter (HCA) fanout cards are as follows:
v HCA3-O LR fanout card on the zEnterprise
v HCA2-O LR fanout card on the zEnterprise or System z10
1x IFB coupling links support use of 9 micron single mode fiber optic cables with LC duplex connectors.
The HCA3-O LR fanout card supports four ports, and the HCA2-O LR fanout card supports two ports.
Note: The InfiniBand link data rates do not represent the performance of the link. The actual
performance is dependent upon many factors including latency through the adapters, cable lengths, and
the type of workload.
When STP is enabled, IFB links can be used to transmit STP timekeeping information to other z114
systems, as well as z196, z10 EC, and z10 BC servers. IFB links can also be defined as Timing-only links.
The CHPID type assigned to InfiniBand is CIB. Up to 16 CHPIDs of type CIB can be defined to an HCA3-O,
HCA2-O, or HCA2-O LR fanout card, distributed across its two ports as needed. Up to 16 CHPIDs of type
CIB can be defined to an HCA3-O LR fanout card, distributed across its four ports as needed. The ability to
define up to 16 CHPIDs allows physical coupling links to be shared by multiple sysplexes. For example,
one CHPID can be directed to one Coupling Facility, and another CHPID directed to another Coupling
Facility on the same target server, using the same port. Note that if more than four CHPIDs are defined
per HCA3-O port, the 12x IFB3 protocol will not be used and the improved service times will not be realized.
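The following Python sketch, an illustration only, applies the two rules above to a hypothetical fanout definition: at most 16 CIB CHPIDs per fanout card, and the 12x IFB3 protocol only when four or fewer CHPIDs are defined on an HCA3-O port. The port counts and CHPID distributions are invented inputs.

```python
# Conceptual check of CIB CHPID distribution on an IFB fanout card (illustration only).
PORTS = {"HCA3-O": 2, "HCA2-O": 2, "HCA2-O LR": 2, "HCA3-O LR": 4}
MAX_CIB_CHPIDS_PER_FANOUT = 16

def check_fanout(card, chpids_per_port):
    """chpids_per_port lists, per port, how many CIB CHPIDs are defined on that port."""
    if len(chpids_per_port) != PORTS[card]:
        raise ValueError(f"{card} has {PORTS[card]} ports")
    if sum(chpids_per_port) > MAX_CIB_CHPIDS_PER_FANOUT:
        raise ValueError(f"more than {MAX_CIB_CHPIDS_PER_FANOUT} CIB CHPIDs on one {card} fanout")
    if card == "HCA3-O":
        for port, count in enumerate(chpids_per_port, start=1):
            protocol = "12x IFB3" if count <= 4 else "12x IFB"
            print(f"{card} port {port}: {count} CHPIDs, {protocol} protocol")

check_fanout("HCA3-O", [4, 4])   # both ports keep the faster 12x IFB3 protocol
check_fanout("HCA3-O", [6, 2])   # port 1 falls back to the 12x IFB protocol
```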
1x IFB links (both HCA2-O LR and HCA3-O LR fanout cards) support up to 32 subchannels per CHPID.
This provides improved link utilization and coupling throughput at increased distances between the
coupling facility (CF) and the operating system, or between CFs, without having to increase the number
of CHPIDs per link for 1x IFB or add ISC-3 links.
IC links
Internal coupling (IC) links are used for internal communication between coupling facilities defined in
LPARs and z/OS images on the same server. IC link implementation is totally logical requiring no link
hardware. However, a pair of CHPID numbers must be defined in the IOCDS for each IC connection. IC
channels cannot be used for coupling connections to images in external systems.
IC links have a CHPID type of ICP (Internal Coupling Peer). The rules that apply to the CHPID type
ICP are the same as those that apply to CHPID type CFP (ISC-3 peer links), with the exception that the
following functions are not supported:
v Service On/Off
v Reset I/O Interface
v Reset Error Thresholds
v Swap Channel Path
v Channel Diagnostic Monitor
v Repair/Verify (R/V)
v Configuration Manager Vital Product Data (VPD).
IC channels have improved coupling performance over ISC-3 and IFB links. IC links also improve the
reliability while reducing coupling cost. Up to 32 IC links can be defined on z114. However, it is unusual
to require more than one link (two CHPIDs type ICP).
Refer to Internal coupling and HiperSockets channels on page 52 for recommendations on CHPID
usage.
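As a conceptual illustration (not IOCP or HCD syntax), the Python sketch below captures the bookkeeping rules described above for IC links: each IC connection consumes a pair of CHPIDs of type ICP, and up to 32 IC links can be defined; the CHPID numbers are invented.

```python
# Conceptual bookkeeping for internal coupling (IC) links (illustration only).
MAX_IC_LINKS = 32   # up to 32 IC links can be defined on z114

def define_ic_links(chpid_pairs):
    """chpid_pairs: list of (z/OS-side CHPID, CF-side CHPID) tuples, both of type ICP."""
    if len(chpid_pairs) > MAX_IC_LINKS:
        raise ValueError(f"at most {MAX_IC_LINKS} IC links can be defined")
    used = [chpid for pair in chpid_pairs for chpid in pair]
    if len(used) != len(set(used)):
        raise ValueError("each ICP CHPID number can belong to only one IC connection")
    return {f"IC{n:02d}": pair for n, pair in enumerate(chpid_pairs)}

# One IC link (two CHPIDs of type ICP) is usually sufficient.
print(define_ic_links([("E0", "E1")]))
```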
Coupling facility
The coupling facility provides shared storage and shared storage management functions for the Parallel
Sysplex (for example, high speed caching, list processing, and locking functions). Applications running on
z/OS images in the Parallel Sysplex define the shared structures used in the coupling facility.
PR/SM LPAR allows you to define the coupling facility, which is a special logical partition that runs
Coupling Facility Control Code (CFCC). Coupling Facility Control Code is Licensed Internal Code (LIC);
it is not an operating system. z114 supports a 64-bit CFCC.
When the CFCC is loaded by using the LPAR coupling facility logical partition activation, the
z/Architecture CFCC is always loaded. However, when CFCC is loaded into a coupling facility guest of
z/VM, the ESA architecture or z/Architecture CFCC version is loaded based on how that guest is
running.
At LPAR activation, CFCC automatically loads into the coupling facility LPAR from the Support Element
hard disk. No initial program load (IPL) of an operating system is necessary or supported in the coupling
facility LPAR.
CFCC runs in the coupling facility logical partition with minimal operator intervention. Operator activity
is confined to the Operating System Messages task. PR/SM LPAR limits the hardware operator controls
usually available for LPARs to avoid unnecessary operator activity. For more information, refer to
zEnterprise System Processor Resource/Systems Manager Planning Guide.
A coupling facility link provides the connectivity required for data sharing between the coupling facility
and the CPCs directly attached to it. Coupling facility links are point-to-point connections that require a
unique link definition at each end of the link.
CFCC considerations
CFCC code can be delivered as a new release level or as a service level upgrade within a particular
release level. Typically, a new release level is delivered as part of an overall system level driver upgrade
and requires a reactivate of the CFCC partition in order to utilize the new code. Service level upgrades
are delivered as LIC and are generally concurrent to apply.
Note: On rare occasions, we may be required to deliver a disruptive service level upgrade.
To support migration from one CFCC level to the next, you can run different levels of the coupling
facility code concurrently in different coupling facility LPARs on the same CPC or on different CPCs.
Refer to CFCC LIC considerations for a description of how a CFCC release or a service level can be
applied.
When migrating CF levels, the lock, list, and cache structure sizes may increase to support new functions.
This adjustment can have an impact when the system allocates structures or copies structures from one
coupling facility to another at different CFCC levels.
For any CFCC level upgrade, you should always run the CFSIZER tool which takes into account the
amount of space needed for the current CFCC levels. The CFSIZER tool is available at
http://www.ibm.com/systems/support/z/cfsizer/.
CFCC LIC considerations
CFCC LIC can be marked as Concurrent or Disruptive to activate.
CFCC Concurrent LIC maintenance and upgrades can be performed concurrently while the z/OS images
connected to it continue to process work and without requiring a POR or a deactivate of the LPAR image
of the server on which the coupling facility is located. When applying concurrent CFCC LIC, the code is
immediately activated on all of the coupling facility images that are defined on the CPC.
CFCC Disruptive LIC maintenance and new release level upgrades must be applied disruptively. Once
the code is installed, the LPAR images on which the coupling facility resides must be
deactivated/reactivated requiring z/OS images that are connected to this coupling facility to deallocate
CF structures.
The alternative to deallocating CF structures in the CF image being patched would be to move the
structures on the coupling facility to a backup coupling facility in the Parallel Sysplex, recycle the
coupling facility LPAR image, and move the structures back again once the new code has been activated.
This process significantly improves overall sysplex availability when disruptive CFCC LIC must be applied.
To support migration of new release or service levels that are marked as disruptive, you have the option
to selectively activate the new LIC to one or more coupling facility images running on z114, while still
running with the previous level active on other coupling facility images. For example, if you have a
coupling facility image that supports a test Parallel Sysplex and a different coupling facility image that
supports a production Parallel Sysplex on the same z114, you can install the new LIC to the z114, but
may only choose to deactivate/activate the test coupling facility image to utilize and test the new CFCC
code. Once you are confident with the new code, you can then selectively deactivate/activate all of the
other coupling facility images on the same CPC.
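The Python sketch below is only an outline of the sequence described above for applying disruptive CFCC LIC while keeping structures available; it is not an operational procedure, and the CF image names and helper functions are invented stand-ins.

```python
# Outline of the sequence described above for a disruptive CFCC upgrade (illustration only).
def roll_disruptive_cfcc_upgrade(cf_images, backup_cf, move_structures, recycle_image):
    for cf in cf_images:
        move_structures(src=cf, dst=backup_cf)    # move structures to a backup coupling facility
        recycle_image(cf)                         # deactivate/reactivate the CF LPAR for the new LIC
        move_structures(src=backup_cf, dst=cf)    # move the structures back once the code is active

# Hypothetical stand-ins for the real operational steps (invented names):
log = []
roll_disruptive_cfcc_upgrade(
    cf_images=["CFTEST"],                         # start with a test coupling facility image only
    backup_cf="CFBACKUP",
    move_structures=lambda src, dst: log.append(f"rebuild structures {src} -> {dst}"),
    recycle_image=lambda cf: log.append(f"deactivate/activate {cf}"),
)
print("\n".join(log))
```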
CFCC Level 17
CFCC Level 17 provides the following:
v Increases the number of structures that can be allocated in a CF image from 1023 to 2047. This function
permits more discrete data sharing groups to operate concurrently and satisfies the need for
environments that require a large number of structures to be defined.
v Supports the ability to capture nondisruptive CFCC diagnostic dumps.
v Supports more connectors to CF list and lock structures.
v Supports CF cache write-around to improve performance. DB2 can use a conditional write command
during batch update/insert processing to decide which entries should be written to the GBP caches
and which entries should be written around the cache to disk.
v Supports CF cache register attachment validation for error detection.
v Supports CF large structure testing
CFCC Level 17 includes the support introduced in previous CFCC levels.
CFCC Level 16
CFCC Level 16 provides the following enhancements:
v Coupling Facility duplexing protocol enhancements provide faster service time when running
System-managed CF structure duplexing by allowing one of the duplexing protocol exchanges to
complete asynchronously. More benefits are seen as the distance between the CFs becomes larger, such
as in a multisite Parallel Sysplex.
v CF subsidiary list notification enhancements provided to avoid false scheduling overhead for Shared
Message Queue CF exploiters.
CFCC Level 16 includes the support introduced in previous supported CFCC levels.
CFCC Level 15
CFCC Level 15 provides the following:
v Increase in the allowable tasks in the coupling facility from 48 to 112.
v RMF measurement improvements.
CFCC Level 15 includes the support introduced in previous CFCC levels.
CFCC Level 14
CFCC Level 14 provides dispatcher and internal serialization mechanisms enhancements to improve the
management of coupled workloads from all environments under certain circumstances.
CFCC Level 14 includes the support introduced in previous CFCC levels.
CFCC Level 13
CFCC level 13 provides Parallel Sysplex availability and performance enhancements. It provides changes
that affect different software environments that run within a Parallel Sysplex. For example, DB2 data
sharing is expected to see a performance improvement, especially for cast-out processing against very
large DB2 group buffer pool structures.
CFCC Level 13 includes the support introduced in previous CFCC levels.
CFCC Level 12
CFCC level 12 provides support for the following functions:
v 64-bit addressing
The 64-bit addressing supports larger structure sizes and eliminates the 2 GB control store line in the
coupling facility. With this support, the distinction between 'control store' and 'non-control store' (data
storage) in the coupling facility is eliminated, and large central storage can be used for all coupling
facility control and data objects.
v 48 internal tasks
Up to 48 internal tasks for improved multiprocessing of coupling facility requests.
v System-managed CF structure duplexing (CF duplexing)
CF duplexing is designed to provide a hardware assisted, easy-to-exploit mechanism for duplexing CF
structure data. This provides a robust recovery mechanism for failures such as loss of single structure
or CF, or loss of connectivity to a single CF, through rapid failover to the other structure instance of the
duplex pair. Refer to System-managed CF structure duplexing on page 93 for more information.
CFCC Level 12 includes the support introduced in previous CFCC levels.
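As a convenience, the following Python sketch summarizes the CFCC level characteristics listed in the preceding sections; the structure limits follow the Level 17 description above (1023 before Level 17, 2047 at Level 17), and the summary strings are paraphrases, not official feature lists.

```python
# Summary of CFCC level characteristics described above (paraphrased, illustration only).
CFCC_LEVELS = {
    17: "2047 structures per CF image, nondisruptive diagnostic dumps, cache write-around",
    16: "faster system-managed CF structure duplexing, subsidiary list notification enhancements",
    15: "up to 112 CF tasks, RMF measurement improvements",
    14: "dispatcher and internal serialization enhancements",
    13: "cast-out processing performance improvements for large DB2 group buffer pools",
    12: "64-bit addressing, 48 internal tasks, system-managed CF structure duplexing",
}

def max_structures(level):
    # Level 17 raises the per-image structure limit from 1023 to 2047 (see the Level 17 text above).
    return 2047 if level >= 17 else 1023

for level in sorted(CFCC_LEVELS, reverse=True):
    print(f"CFCC Level {level} ({max_structures(level)} structures): {CFCC_LEVELS[level]}")
```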
CFCC Level 11
CFCC Level 11 provides support for System-managed CF structure duplexing (CF duplexing).
CFCC Level 11 includes the support introduced in previous CFCC levels.
Server Time Protocol (STP)
STP supports the following types of Coordinated Timing Network (CTN) configurations:
v Mixed CTN - In a Mixed CTN, servers synchronized to a Sysplex Timer in an External Time Reference
(ETR) timing network coexist with servers synchronized using the STP message based protocol. In a
Mixed CTN, the Sysplex Timer provides the timekeeping for the network. Each server must be
configured with the same CTN ID. The Sysplex Timer console is used for time related functions, and
the HMC is used to initialize or modify the CTN ID and monitor the CTN status.
v STP-only CTN - In an STP-only CTN, the Sysplex Timer does not provide time synchronization for
any of the servers in the timing network. Each server must be configured with the same CTN ID. The
HMC provides the user interface for all time related functions, such as time initialization, time
adjustment, and offset adjustment. The HMC or Support Element must also be used to initialize or
modify the CTN ID and network configuration.
z114 is designed to coexist in the same CTN with (n-2) server families. This allows a z114 to participate in
the same CTN with z10 and z9 servers, but not with z990 or z890 servers.
In an STP-only CTN, you can:
v Initialize the time manually or use an external time source to keep the Coordinated Server Time (CST)
synchronized to the time source provided by the external time source (ETS).
v Configure access to an ETS so that the CST can be steered to an external time source. The ETS options
are:
– Dial-out time service (provides accuracy of 100 milliseconds to ETS)
– NTP server (provides accuracy of 100 milliseconds to ETS)
– NTP server with pulse per second (PPS) (provides accuracy of 10 microseconds to PPS)
v Initialize the time zone offset, daylight saving time offset, and leap second offset.
v Schedule changes to offsets listed above. STP can automatically schedule daylight saving time based on
the selected time zone.
v Adjust time by up to +/- 60 seconds.
As previously stated, STP can be used to provide time synchronization for servers that are not in a
sysplex. For a server that is not part of a Parallel Sysplex, but required to be in the same Coordinated
Timing Network (CTN), additional coupling links must be configured in order for the server to be
configured in the CTN. These coupling links, called Timing-only links, are coupling links that allow two
servers to be synchronized using STP messages when a coupling facility does not exist at either end of
the coupling link. Use HCD to define Timing-only links and generate an STP control unit.
The benefits of STP include:
v Allowing clock synchronization without requiring the Sysplex Timer and dedicated timer links. This
reduces costs by eliminating Sysplex Timer maintenance costs, power costs, space requirements, and
fiber optic infrastructure requirements.
v Supporting a multisite timing network of up to 200 km over fiber optic cabling, thus allowing a
sysplex to span these distances. This overcomes the limitation of timer-to-timer links being supported
only up to 40 km.
v Potentially reducing the cross-site connectivity required for a multisite Parallel Sysplex. Dedicated links
are no longer required to transport timing information because STP and coupling facility messages may
be transmitted over the same links.
STP enhancements
z196 and z114 have introduced an STP recovery enhancement to improve the availability of the STP-only
CTN. The new generation of host channel adapters (HCA3-O or HCA3-O LR), introduced for coupling,
has been designed to send a reliable, unambiguous going away signal to indicate that the server on
which the HCA3-O or HCA3-O LR is running is about to enter a failed (check stopped) state. When the
going away signal sent by the Current Time Server (CTS) in an STP-only Coordinated Timing Network
(CTN) is received by the Backup Time Server (BTS), the BTS can safely take over as the CTS without
relying on the previous recovery methods of Offline Signal (OLS) in a two-server CTN or the Arbiter in a
CTN with three or more servers.
This enhancement is exclusive to z114 and z196 and is available only if you have an HCA3-O or HCA3-O
LR on the Current Time Server (CTS) communicating with an HCA3-O or HCA3-O LR on the Backup
Time Server (BTS). The STP recovery design that has been available is still available for the cases when a
going away signal is not received or for other failures besides a server failure.
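The Python sketch below models the takeover decision described above as a simple rule: act immediately on a going away signal, otherwise fall back to the Offline Signal or the Arbiter. It is a conceptual illustration only, not STP code, and the parameter names are invented.

```python
# Conceptual model of the Backup Time Server (BTS) takeover decision (illustration only).
def bts_should_take_over(going_away_received, cts_failed, servers_in_ctn,
                         ols_received=False, arbiter_confirms=False):
    """Return True if the BTS can safely assume the Current Time Server (CTS) role."""
    if going_away_received:
        return True                  # reliable signal that the CTS is entering a failed state
    if not cts_failed:
        return False                 # nothing to recover from
    if servers_in_ctn == 2:
        return ols_received          # two-server CTN: rely on the Offline Signal (OLS)
    return arbiter_confirms          # three or more servers: rely on the Arbiter

print(bts_should_take_over(going_away_received=True, cts_failed=True, servers_in_ctn=2))   # True
print(bts_should_take_over(going_away_received=False, cts_failed=True, servers_in_ctn=3,
                           arbiter_confirms=True))                                         # True
```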
Some of the other notable STP enhancements released since STP became generally available are:
v Improved availability when an Internal Battery Feature (IBF) is installed. If an Internal Battery Feature
(IBF) is installed on your z114, STP can receive notification that power has failed and that the IBF is
engaged. When STP receives this notification from a server that has the role of PTS/CTS, STP can
automatically reassign the role of the Current Time Server (CTS) to the Backup Time Server (BTS), thus
automating the recovery action and improving availability.
v Save the STP configuration and time information across Power on Resets (POR) or power outages for a
single or dual server STP-only CTN. This means you do not need to reinitialize the time or reassign the
PTS/CTS role for a single server STP-only CTN or the Preferred Time Server (PTS), Backup Time
Server (BTS), or Current Time Server (CTS) roles for a dual server STP-only CTN across Power on
Resets (POR) or power outage events.
v Supporting the configuration of different NTP servers for the Preferred Time Server (PTS) and the
Backup Time Server (BTS), which improves the availability of NTP servers used as an external time
source.
v An Application Programming Interface (API) on the HMC to automate the assignment of the Preferred
Time Server (PTS), Backup Time Server (BTS), and Arbiter.
System-managed CF structure duplexing
A set of architectural extensions to the Parallel Sysplex is provided for the support of system-managed
coupling facility structure duplexing (CF duplexing) of coupling facility structures for high availability.
All three structure types (cache structures, list structures, and locking structures) can be duplexed using
this architecture.
Support for these extensions on z114 is concurrent with the entire System z family of servers. It also
requires the appropriate level for the exploiter support of CF duplexing. CF duplexing is designed to:
v Provide the necessary base for highly available coupling facility structure data through the redundancy
of duplexing
v Enhance Parallel Sysplex ease of use by reducing the complexity of CF structure recovery
v Enable some installations to eliminate the requirement for standalone CFs in their Parallel Sysplex
configuration.
For those CF structures that support use of CF duplexing, customers have the ability to dynamically
enable (selectively by structure) or disable the use of CF duplexing.
The most visible change for CF duplexing is the requirement to connect coupling facilities to each other
with coupling links. The required connectivity is bidirectional with a peer channel attached to each
coupling facility for each remote CF connection. A single peer channel provides both the sender and
receiver capabilities; therefore, only one physical link is required between each pair of coupling facilities.
If redundancy is included for availability, then two peer mode links are required. However, this
connectivity requirement does not necessarily imply any requirement for additional physical links. Peer
mode channels can be shared between ICF partitions and local z/OS partitions, so existing links between
servers can provide the connectivity between both:
v z/OS partitions and coupling facility images
v Coupling facility images.
One of the benefits of CF duplexing is to hide coupling facility failures and structure failures and make
total loss of coupling facility connectivity incidents transparent to the exploiters of the coupling facility.
This is handled by:
v Shielding the active connectors to the structure from the observed failure condition so that they do not
perform unnecessary recovery actions.
v Switching over to the structure instance that did not experience the failure.
v Reestablishing a new duplex copy of the structure at a specified time. This could be as quickly as when
the coupling facility becomes available again, on a third coupling facility in the Parallel Sysplex, or
when it is convenient for the customer.
System messages are generated as the structure falls back to simplex mode for monitoring and
automation purposes. Until a new duplexed structure is established, the structure will operate in a
simplex mode and may be recovered through whatever mechanism was provided for structure recovery
prior to the advent of CF duplexing.
As the two instances of a system-managed duplex structure get update requests, they must coordinate
execution of the two commands to ensure that the updates are made consistently to both structures. Most
read operations do not need to be duplexed.
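As a conceptual illustration of duplexed structure access (not the CF duplexing protocol itself), the following Python sketch updates both structure instances on a write, reads from one instance, and falls back to simplex operation on a failure; the class and key names are invented.

```python
# Conceptual model of duplexed structure access (illustration only).
class DuplexedStructure:
    def __init__(self):
        self.primary, self.secondary = {}, {}

    def write(self, key, value):
        # Updates are coordinated so that both structure instances stay consistent.
        self.primary[key] = value
        self.secondary[key] = value
        return "both instances updated"

    def read(self, key):
        # Most reads are satisfied from one instance and need not be duplexed.
        return self.primary.get(key)

    def failover(self):
        # On loss of one instance, switch to the surviving copy and run simplex
        # until a new duplex copy is reestablished.
        self.primary = self.secondary
        return "running simplex on surviving instance"

s = DuplexedStructure()
s.write("LOCK1", "held by LP01")
print(s.read("LOCK1"))
print(s.failover())
print(s.read("LOCK1"))
```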
z/OS operator commands display the status of the links for problem determination. In addition, the
Resource Management Facility (RMF) provides the performance management aspects about the CF-CF
connectivity and the duplexed structures. Together, these enable the installation to manage and monitor
the coupling facility configuration and new structure instances resulting from CF duplexing.
For more information on CF duplexing, you can refer to the technical white paper, System-Managed CF
Structure Duplexing at the Parallel Sysplex web site, http://www.ibm.com/systems/z/pso/.
GDPS
In business, two important objectives for survival are systems that are designed to provide continuous
availability and near transparent disaster recovery (DR). Systems that are designed to deliver continuous
availability combine the characteristics of high availability and near continuous operations to deliver high
levels of service targeted at 24 x 7.
To attain high levels of continuous availability (CA) and near transparent disaster recovery (DR), the
solution should be based on geographical clusters and data mirroring. These technologies are the
backbone of the GDPS solution. GDPS offers the following solutions based on the underlying mirroring
technology:
v GDPS/PPRC, based on IBM System Storage Metro Mirror (PPRC)
v GDPS/PPRC HyperSwap Manager, based on IBM System Storage Metro Mirror (PPRC)
v GDPS/XRC, based on IBM System Storage z/OS Global Mirror (XRC)
v GDPS/Global Mirror, based on IBM System Storage Global Mirror.
GDPS is independent of the transaction manager (e.g., CICS TS, IMS, or WebSphere) or database manager
(e.g., DB2, IMS, and VSAM) being used, and is enabled by means of key IBM technologies and
architectures.
For more details on GDPS, refer to the GDPS website located at http://www.ibm.com/systems/z/advantages/
gdps/index.html.
GDPS/PPRC
GDPS/PPRC is a near CA or DR solution across two sites separated by metropolitan distances. The
solution is based on the Metro Mirror (also known as PPRC) synchronous disk mirroring technology. It is
designed to manage and protect IT services by handling planned and unplanned exception conditions,
and maintain data integrity across multiple volumes and storage subsystems. By managing both planned
and unplanned exception conditions, GDPS/PPRC can help to maximize application availability and
provide business continuity.
GDPS/PPRC includes automation to manage remote copy pairs, automation to invoke CBU, and
automation to restart applications on the recovery site.
GDPS/PPRC can deliver the following capabilities:
v Near continuous availability or disaster recovery solution across two sites separated by metropolitan
distances (distance between sites limited to 200 fiber km). Optionally, end-to-end management of
applications is provided by Distributed Cluster Management (DCM) with Tivoli System Automation
Application Manager or by Distributed Cluster Management (DCM) with Veritas Cluster Server (VCS).
v Recovery Time Objective (RTO) less than one hour
v Recovery Point Objective (RPO) of zero
GDPS/PPRC HyperSwap Manager
GDPS/PPRC HyperSwap Manager provides an entry-level, near continuous availability solution based on
the same Metro Mirror (PPRC) and HyperSwap technology as GDPS/PPRC.
I.
IBF. Internal Battery Feature
IBM blade. A customer-acquired, customer-installed
select blade to be managed by IBM zEnterprise Unified
Resource Manager. One example of an IBM blade is a
POWER7 blade.
IBM DB2 Analytics Accelerator for z/OS. A
workload-optimized, LAN-attached appliance based on
Netezza technology.
IBM Smart Analytics Optimizer for DB2 for z/OS.
An optimizer that processes certain types of data
warehouse queries for DB2 for z/OS.
IBM System z Application Assist Processor (zAAP).
A specialized processor that provides a Java execution
environment, which enables Java-based web
applications to be integrated with core z/OS business
applications and backend database systems.
IBM System z Integrated Information Processor
(zIIP). A specialized processor that provides
computing capacity for selected data and transaction
processing workloads and for selected network
encryption workloads.
IBM WebSphere DataPower Integration Appliance
XI50 for zEnterprise (DataPower XI50z). A
purpose-built appliance that simplifies, helps secure,
and optimizes XML and Web services processing.
IBM zEnterprise BladeCenter Extension (zBX). A
heterogeneous hardware infrastructure that consists of
a BladeCenter chassis attached to an IBM zEnterprise
196 (z196) or IBM zEnterprise 114 (z114). A BladeCenter
chassis can contain IBM blades or optimizers.
IBM zEnterprise BladeCenter Extension (zBX) blade.
Generic name for all blade types supported in an IBM
zEnterprise BladeCenter Extension (zBX). This term
includes IBM blades and optimizers.
IBM zEnterprise Unified Resource Manager. Licensed
Internal Code (LIC), also known as firmware, that is
part of the Hardware Management Console. The
Unified Resource Manager provides energy monitoring
and management, goal-oriented policy management,
increased security, virtual networking, and data
management for the physical and logical resources of a
given ensemble.
IC. Internal Coupling link
ICB. Integrated Cluster Bus link
ICF. Internal Coupling Facility
ICSF. Integrated Cryptographic Service Facility
IEDN. See intraensemble data network (IEDN).
IEDN TOR switch. See intraensemble data network
(IEDN) TOR switch.
IFB. InfiniBand
IFB-MP (InfiniBand Multiplexer) card. The IFB-MP
card can only be used in the I/O cage or I/O drawer.
The IFB-MP cards provide the intraconnection from the
I/O cage or I/O drawer to the HCA2-C fanout card in
a book or processor drawer.
IFCC. Interface control check
IFL. Integrated Facility for Linux
IML. Initial machine load
IMS. Information Management System
initial machine load (IML). A procedure that prepares
a device for use.
initial program load (IPL). The initialization
procedure that causes an operating system to
commence operation.
The process by which a configuration image is loaded
into storage at the beginning of a work day or after a
system malfunction.
The process of loading system programs and preparing
a system to run jobs.
initialization. The operations required for setting a
device to a starting state, before the use of a data
medium, or before implementation of a process.
Preparation of a system, device, or program for
operation.
To set counters, switches, addresses, latches, or storage
contents to zero or to other starting values at the
beginning of, or at the prescribed points in, a computer
program or process.
INMN. See intranode management network (INMN).
input/output (I/O). Pertaining to a device whose parts
can perform an input process and an output process at
the same time.
Pertaining to a functional unit or channel involved in
an input process, output process, or both, concurrently
or not, and to the data involved in such a process.
input/output configuration. The collection of channel
paths, control units, and I/O devices that attach to the
processor complex.
input/output configuration data set (IOCDS). The
data set that contains an I/O configuration definition
built by the I/O configuration program (IOCP).
input/output configuration program (IOCP). A
program that defines to a system all the available I/O
devices and the channel paths.
Integrated Facility for Applications (IFA). A general
purpose assist processor for running specific types of
applications.
interrupt. A suspension of a process, such as
execution of a computer program caused by an external
event, and performed in such a way that the process
can be resumed.
intraensemble data network (IEDN). A private
high-speed network for application data
communications within an ensemble. Data
communications for workloads can flow over the IEDN
within and between nodes of an ensemble. The Unified
Resource Manager configures, provisions, and manages
all of the physical and logical resources of the IEDN.
intraensemble data network (IEDN) TOR switch. A
top-of-rack switch that provides connectivity to the
intraensemble data network (IEDN), supporting
application data within an ensemble.
intranode management network (INMN). A private
service network that the Unified Resource Manager
uses to manage the resources within a single
zEnterprise node. The INMN connects the Support
Element to the IBM zEnterprise 196 (z196) or IBM
zEnterprise 114 (z114) and to any attached IBM
zEnterprise BladeCenter extension (zBX).
I/O. Input/output
IOCDS. I/O configuration data set
IOCP. I/O configuration program
IPL. Initial program load
IPv6. Internet Protocol Version 6
ISC. InterSystem Channel
K.
KB. Kilobyte
kilobyte (KB). A unit of measure for storage size.
Loosely, one thousand bytes.
km. Kilometer
L.
LAN. Local area network
laser. A device that produces optical radiation using a
population inversion to provide light amplification by
stimulated emission of radiation and (generally) an
optical resonant cavity to provide positive feedback.
Laser radiation can be highly coherent temporally, or
spatially, or both.
LCSS. Logical channel subsystem
LED. Light-emitting diode
LIC. Licensed Internal Code
Licensed Internal Code (LIC). Software provided for
use on specific IBM machines and licensed to
customers under the terms of IBM's Customer
Agreement.
light-emitting diode (LED). A semiconductor chip
that gives off visible or infrared light when activated.
local area network (LAN). A computer network
located on a user's premises within a limited
geographical area. Communication within a local area
network is not subject to external regulations; however,
communication across the LAN boundary can be
subject to some form of regulation.
logical address. The address found in the instruction
address portion of the program status word (PSW). If
translation is off, the logical address is the real address.
If translation is on, the logical address is the virtual
address.
logical control unit. A group of contiguous words in
the hardware system area that provides all of the
information necessary to control I/O operations
through a group of paths that are defined in the
IOCDS. Logical control units represent to the channel
subsystem a set of control units that attach common
I/O devices.
logical partition (LPAR). A subset of the processor
hardware that is defined to support the operation of a
system control program (SCP).
logical processor. In LPAR mode, central processor
resources defined to operate in an LPAR like a physical
central processor.
logical unit (LU). In SNA, a port to the network
through which an end user accesses the SNA network
and the functions provided by system services control
points (SSCPs). An LU can support at least two
sessions - one with an SSCP and one with another LU
and may be capable of supporting many sessions with
other LUs.
logically partitioned (LPAR) mode. A central
processor complex (CPC) power-on reset mode that
enables use of the PR/SM feature and allows an
operator to allocate CPC hardware resources (including
central processors, central storage, expanded storage,
and channel paths) among LPARs.
LU. Logical unit
M.
MAC. Message Authentication Code
main storage. Program-addressable storage from
which instructions and other data can be loaded
directly into registers for subsequent processing.
maintenance change level (MCL). A change to correct
a single licensed internal code design defect. Higher
quality than a patch, and intended for broad
distribution. Considered functionally equivalent to a
software PTF.
Manage suite (Manage). The first suite of
functionality associated with the IBM zEnterprise
Unified Resource Manager. The Manage suite includes
core operational controls, installation, and configuration
management, and energy monitoring.
management TOR switch. A top-of-rack switch that
provides a private network connection between an IBM
zEnterprise 196 (z196) or IBM zEnterprise 114 (z114)
Support Element and an IBM zEnterprise BladeCenter
Extension (zBX).
Mb. Megabit
MB. Megabyte
MBA. Memory bus adapter
MCL. Maintenance Change Level
megabit (Mb). A unit of measure for storage size. One
megabit equals 1,000,000 bits.
megabyte (MB). A unit of measure for storage size.
One megabyte equals 1,048,576 bytes. Loosely, one
million bytes.
menu bar. The area at the top of the primary window
that contains keywords that give users access to actions
available in that window. After users select a choice in
the action bar, a pulldown menu appears from the
action bar.
MIDAW. Modified Data Indirect Address Word
MIF. Multiple Image Facility
modem. A device that converts digital data from a
computer to an analog signal that can be transmitted
on a telecommunication line, and converts the analog
signal received to data for the computer.
Multiple Image Facility (MIF). A facility that allows
channels to be shared among PR/SM LPARs in an
ESCON or FICON environment.
multichip module (MCM). The fundamental
processor building block for System z. Each System z
book is comprised of a glass ceramic multichip
module of processor units (PUs) and memory cards,
including multilevel cache memory.
multiplexer channel. A channel designed to operate
with a number of I/O devices simultaneously. Several
I/O devices can transfer records at the same time by
interleaving items of data.
MVS. Multiple Virtual Storage.
T.
target processor. The processor that controls execution
during a program restart, instruction trace, standalone
dump, or IPL, and whose ID is identified by
highlighting on the status line.
TCP/IP. Transmission Control Protocol/Internet
Protocol
TDES. Triple Data Encryption Standard
time-of-day (TOD) clock. A system hardware feature
that is incremented once every microsecond, and
provides a consistent measure of elapsed time suitable
for indicating date and time. The TOD clock runs
regardless of whether the processor is in a running,
wait, or stopped state.
timing-only links. Coupling links that allow two
servers to be synchronized using STP messages when a
coupling facility does not exist at either end of the
coupling link.
TKE. Trusted Key Entry
TOD. Time of day
token. A sequence of bits passed from one device to
another on the token-ring network that signifies
permission to transmit over the network. It consists of
a starting delimiter, an access control field, and an end
delimiter. The access control field contains a bit that
indicates to a receiving device that the token is ready to
accept information. If a device has data to send along
the network, it appends the data to the token. When
data is appended, the token then becomes a frame.
token-ring network. A ring network that allows
unidirectional data transmission between data stations,
by a token passing procedure, such that the transmitted
data return to the transmitting station.
A network that uses ring topology, in which tokens are
passed in a circuit from node to node. A node that is
ready to send can capture the token and insert data for
transmission.
Note: The IBM token-ring network is a baseband LAN
with a star-wired ring topology that passes tokens from
network adapter to network adapter.
top-of-rack (TOR) switch. A network switch that is
located in the first rack of an IBM zEnterprise
BladeCenter Extension (zBX).
TOR switch. See intraensemble data network (IEDN)
TOR switch and management TOR switch.
TPF. Transaction processing facility
transaction. A unit of processing consisting of one or
more application programs, affecting one or more
objects, that is initiated by a single request.
transaction processing. In batch or remote batch
processing, the processing of a job or job step. In
interactive processing, an exchange between a terminal
and another device that does a particular action; for
example, the entry of a customer's deposit and the
updating of the customer's balance.
U.
Unified Resource Manager. See IBM zEnterprise
Unified Resource Manager.
user interface. Hardware, software, or both that
allows a user to interact with and perform operations
on a system, program, or device.
V.
VLAN. Virtual Local Area Network
VSE. Virtual Storage Extended
W.
workload. A collection of virtual servers that perform
a customer-defined collective purpose. A workload
generally can be viewed as a multi-tiered application.
Each workload is associated with a set of policies that
define performance goals.
workstation. A terminal or microcomputer, usually
one that is connected to a mainframe or network, at
which a user can perform applications.
Z.
z/OS discovery and autoconfiguration (zDAC). z/OS
function for FICON channels designed to detect a new
disk or tape device and propose configuration changes
for the I/O definition file (IODF). This applies to all
FICON channels supported on the server that are
configured as CHPID type FC.
zAAP. See IBM System z Application Assist Processor.
zBX. See IBM zEnterprise BladeCenter Extension
(zBX).
zBX blade. See IBM zEnterprise BladeCenter
Extension (zBX) blade.
zCPC. The physical collection of main storage, central
processors, timers, and channels within a zEnterprise
mainframe. Although this collection of hardware
resources is part of the larger zEnterprise central
processor complex, you can apply energy management
policies to the zCPC that are different from those that
you apply to any attached IBM zEnterprise BladeCenter
Extension (zBX) or blades.
See also central processor complex.
zIIP. See IBM System z Integrated Information
Processor.
z10 BC. IBM System z10 Business Class
z10 EC. IBM System z10 Enterprise Class
z114. IBM zEnterprise 114
z196. IBM zEnterprise 196
z800. IBM eServer zSeries 800
z890. IBM eServer zSeries 890
z900. IBM eServer zSeries 900
z990. IBM eServer zSeries 990
z9 BC. IBM System z9 Business Class
z9 EC. IBM System z9 Enterprise Class
Index
Special characters
(CSS) channel subsystem
planning 47
Numerics
3390 69
64-bit addressing 90
A
A frame 13
acoustic door, zBX 34
activation 120
adapters
definitions 42
maximums 41
addressing
FCP 61
network concentrator 81
AID (adapter ID)
assignments 49
description 42
API (Application Programming
Interfaces) 120
Application Programming Interfaces
(API) 120
architecture
ESA/390 10
z/Architecture 10
assignments
AID 47
CHPID 47
PCHID 47
asynchronous delivery of data 80, 131
ATM 106
availability guidelines 44
B
Backup Time Server 137
blade slot 33
blade, zBX 33
BladeCenter 33
blade 33
blowers 33
management modules 33
switch modules 33
block multiplexer mode 64
blowers, zBX 33
BPA (Bulk Power Assembly) 23
BPC 23
BPE (Bulk Power Enclosure) 23
BPF (Bulk Power Fan) 23
BPH (Bulk Power Hub) 23
BPI (Bulk Power Interface) 23
BPR (Bulk Power Regulator) 23
Bulk Power Assembly (BPA) 23
Bulk Power Enclosure (BPE) 23
Bulk Power Fan (BPF) 23
Bulk Power Hub (BPH) 23
Bulk Power Interface (BPI) 23
Bulk Power Regulator (BPR) 23
burst mode 65
byte multiplexer mode 64, 65
C
cable ordering 108
cabling
fiber optic 10
report 110
responsibilities 108
cache-hit rates 71
Cancel Subchannel (XSCH)
instruction 66
Capacity Backup (CBU) 139, 149
capacity upgrades 147
cards
DCA 20, 23
fanout 18
ISC-D 85
ISC-M 85
memory 18
oscillator 20
PSC24V 23
cascaded directors 136
CBU (Capacity Backup) 139
CCC (Channel Control Check) 67
CCW (Channel Command Word) 66
CDC (Channel Data Check) 67
Central Processor (CP) 15, 26
central storage 17
certification
EAL5 99
FIPS 105
CF (coupling facility) 87
CF duplexing 90, 93
CFCC (Coupling Facility Control
Code) 88
48 internal tasks 90
64-bit addressing 90
CF duplexing 90
considerations 88
LIC considerations 88
chaining operations 66
Channel Command Word (CCW) 66
channel commands 65
Channel Control Check (CCC) 67
Channel Data Check (CDC) 67
channel hardware
FCP 60
channel path definition 52
channel program characteristics 71
channel sharing
FCP 61
channel subsystem (CSS) 41, 42
planning 47
workload characteristics 71
channel time-out 68
channel-to-channel connection 68
channels 26
dedicated 26
ESCON 63
FICON 55
HiperSockets 52
internal coupling (IC) 52
InterSystem Coupling-3 (ISC-3) 85
IOCP definitions 42
maximums 41
peer 43
reconfigurable 26
shared 26
spanned 26, 51
CHPID
assignments 47
types 42
CHPID Mapping Tool 50
CIU (Customer Initiated Upgrade)
application 147
classes
device 65
Clear Subchannel (CSCH) instruction 66
clustering 97
command
chaining 67
compatibility
programming 39
concurrent channel upgrade 136
concurrent hardware maintenance 137
concurrent undo CBU 140
configurations
system 13
connection
CTC 67
connectivity
subchannel 44
control check operation 67
control unit priority 68
converter operation
ESCON 64
coupling connection 90
coupling facility (CF) 87
duplexing 93
Coupling Facility Control Code
(CFCC) 88
48 internal tasks 90
64-bit addressing 90
CF duplexing 90
considerations 88
LIC considerations 88
coupling link
peer channels 43
CP (Central Processor) 15, 26
CP Assist for Cryptographic Function
(CPACF) 101
CPACF (CP Assist for Cryptographic
Function) 101
critical time 65
cryptography 101
CSS (channel subsystem) 42
CSS (channel subsystem) (continued)
workload characteristics 71
CTC connection 68
CUoD (Capacity Upgrade on Demand)
Reserved CP support 150
Customer Initiated Upgrade (CIU)
application 147
D
data
chaining 67
streaming protocol 64
data check operation 67
data rates
channel 71
control unit 71
device 71
I/O subsystem 71
data transfer rate 70
DataPower XI50z 29, 34
DCA (Distributed Converter Assembly)
cards 14, 20, 23
dedicated channels 26
degrade indicator 117
device
class 65
I/O 61, 64
performance 64
sharing 27
device sharing 26
FCP 61
Distributed Converter Assembly (DCA)
cards 14, 20, 23
drawer
I/O 21
PCIe I/O 21
positions 14
dynamic channel path management 98
dynamic I/O configuration 136
dynamic link aggregation 78
dynamic reconnection 69
dynamic storage reconfiguration 26
E
ECC (Error Checking and
Correction) 18, 135
ending status 67
entitlement, zBX 34
Error Checking and Correction
(ECC) 18, 135
error handling 67
ESA/390 mode I/O instructions 66
ESCON
channel commands 65
channel multiplexing modes 64
channels 63
converter operation 64
I/O operations control 65
performance characteristics 72
Ethernet switch 23
expanded storage 17
External Time Reference (ETR) 137
F
fan pack, zBX 33
fanout cards
HCA 18
PCIe 18
FCP (Fibre Channel Protocol)
addressing 61
channel hardware 60
channel sharing 61
device sharing 61
for SCSI 59
positioning 62
features
I/O 21
fiber optic cabling 10
Fiber Quick Connect (FQC) 107
Fibre Channel Protocol (FCP)
for SCSI 59
FICON
cascaded directors 136
channels 55
FICON Express4 57
FICON Express8 56
FICON Express8S 55
FQC (Fiber Quick Connect) 107
frame, A 13
frames, system 13
FSP card 14
G
GDPS 94, 140
H
Halt Subchannel (HSCH) instruction 66
Hardware Configuration Definition
(HCD) 53, 119
Hardware Management Console
(HMC) 28, 113
availability 134
capabilities 114
features 116
wiring options 116
Hardware Management Console
Application (HWMCA) 115
hardware messages 117
Hardware System Area (HSA) 18
HCA (host channel adapter) fanout
cards 18
HCD (Hardware Configuration
Definition) 53, 119
highlights 2
HiperSockets
CHPID 52
completion queue function 80, 131
I/O connectivity 80
network integration with IEDN 80
network traffic analyzer (NTA) 81,
141
host channel adapter (HCA) fanout
cards 18
HSA (Hardware System Area) 18
HWMCA (Hardware Management
Console Application) 115
I
I/O
device 61, 64
device definition 53
features 21
interface mode 64
interface protocol 67
interruptions
CCC 67
CDC 67
IFCC 67
operations control 65
PCHID 48
performance 64
system reset 67
I/O device characteristics 71
I/O drawer 21
I/O priority queuing (IOPQ) 97
I/O Subsystem (IOSS) 42
IBF (Internal Battery Feature) 24
IBM DB2 Analytics Accelerator for z/OS
V2.1 35
IBM Resource Link 145
IBM Smart Analytics Optimizer 29, 34
IBM System x blade 29, 34
IBM zEnterprise BladeCenter
Extension 29
ICF (Internal Coupling Facility) 2, 15, 26
IEDN (intraensemble data network) 32
IFCC (Interface Control Check) 67
IFL (Integrated Facility for Linux) 2, 26
INMN (intranode management
network) 32
Integrated Facility for Linux (IFL) 26
Intelligent Resource Director (IRD) 97
Interface Control Check (IFCC) 67
interface mode
data streaming 64
interlocked 64
interlocked protocol 64
Internal Battery Feature (IBF) 24
internal coupling (IC)
channels 52
links 87
Internal Coupling Facility (ICF) 15, 26
interruptions
control check 67
data check 67
I/O 67
machine check 67
InterSystem Coupling-3 (ISC-3)
channels 85
intraensemble data network (IEDN) 32
intraensemble data network TOR
switches 32
intranode management network
(INMN) 32
IOCP
channel definitions 42
considerations 52
IOSS (I/O Subsystem) 42
iQDIO (internal Queued Direct
Input/Output) 80, 82
IRD (Intelligent Resource Director) 97
ISC-D 85
ISC-M 85
K
key-controlled storage protection 17
L
Large Systems Performance Reference for
IBM System z 146
Layer 2 (Link Layer) 79
Layer 3 virtual MAC 79
LCSS (logical channel subsystem) 42
Licensed Internal Code (LIC) 118
links
InfiniBand (IFB) 86
internal coupling (IC) 87
ISC-3 85
Linux on System z
VLAN support 81
Linux on System z supported levels 39
logical channel subsystem (LCSS) 42
logical partition (LPAR)
increased 6
logically partitioned operating mode 25
LPAR (logical partition)
clustering 97
definition 52
LPAR mode 25
LPAR time offset 27
M
machine information 145
machine-check interruptions 67
management modules, zBX 33
management TOR switches 32
maximums
channel, ports, adapters 41
memory
central storage 17
expanded storage 17
rules 18
memory cards
characteristics 18
memory scrubbing 135
MIDAW (Modified Indirect Data Address
Word) facility 58
MIF (Multiple Image Facility) 51
modes
burst 65
byte multiplexer 65
channel multiplexing 64
LPAR 25
operation 66
Modified Indirect Data Address Word
(MIDAW) facility 58
Modify Subchannel (MSCH)
instruction 66
MSS (multiple subchannel sets) 25
multipath IPL 58
Multiple Image Facility (MIF) 51
multiple subchannel sets (MSS) 25
N
network concentrator 81
network traffic analyzer (NTA)
HiperSockets 81, 141
OSA-Express 78, 141
nonsynchronous operation 70
O
On/Off CoD (On/Off Capacity on
Demand) 148
operating system messages 117
operation
block multiplexer 64
ESA/390 66
nonsynchronous 70
retry 67
synchronous 70
OSA LAN idle timer 78
OSA-Express network traffic
analyzer 78, 141
OSA-Express2 76
OSA-Express3 75
OSA-Express4S 74
OSA/SF (OSA/Support Facility) 73
OSA/Support Facility (OSA/SF) 73
Oscillator (OSC) Passthru cards 20
oscillator cards 14, 20
Oscillator/Pulse Per Second (OSC/PPS)
cards 20
P
Parallel Sysplex 83
coupling link connectivity 84
partial memory restart 135
PCHID
assignments 47
I/O drawer 48
report, sample 50
PCIe fanout cards 18
PCIe I/O drawer 21
peer channels 43
performance
device 64
ESCON channel 69
ESCON characteristics 72
I/O 64
system 9
permanent upgrades 147
ports
maximums 41
POS 106
positions
drawers 14
power consumption
reducing 36
Power Distribution Unit (PDU), zBX 33
power estimation tool 145
power modules, zBX 33
power sequence controller (PSC) 29
power supply 23
POWER7 blade 29, 34
PR/SM (Processor Resource/Systems
Manager) 25, 97
PR/SM LPAR
CPU management (Clustering) 97
time offset 27
Preferred Time Server 137
priority queuing 97
problem analysis and reporting 118
processor drawer 14
Processor Resource/Systems Manager
(PR/SM) 25
processor unit (PU) 15, 26
sparing 133
programming
compatibility 39
support 39
PSC (power sequence controller) 29
PSC24V card 23
PU (processor unit) 15, 26
purge path extended 59
Q
Queued Direct I/O Diagnostic
Synchronization (QDIOSYNC) 78
R
rack, zBX 32
RAS (Reliability, Availability,
Serviceability)
availability 131
reliability 131
serviceability 142
rear door heat exchanger, zBX 34
reconfigurable channels 26
reconnection, dynamic 69
reducing power consumption 36
remote automated operations 129
remote key loading 106
remote operations
manual 128
overview 127
using a Hardware Management
Console 128
using a web browser 129
Remote Support Facility (RSF) 120
reports
cabling 110
PCHID 50
Reset Channel Path (RCHP)
instruction 66
reset, system 67
Resource Link 10
Resume Subchannel (RSCH)
instruction 66
retry
operations 67
RMF monitoring 105
RSF (Remote Support Facility) 120
S
sample
PCHID report 50
SAP (System Assist Processor) 16
SASP support for load balancing 115,
131
scheduled operations 119
SCM (single chip module) 14
SCSI (Small Computer Systems
Interface) 59
security 126
CPACF 101
cryptographic accelerator 102
cryptographic coprocessor 102
Server Time Protocol (STP)
description 27, 91
service required state 117
Set Address Limit (SAL) instruction 66
Set Channel Monitor (SCHM)
instruction 66
shared channels 26
SIE (start interpretive execution)
instruction 40
single chip module (SCM) 14
Small Computer Systems Interface
(SCSI) 59
software support 39
spanned channels 26, 51
Start Subchannel (SSCH) instruction 66
status reporting 116
storage
central 17, 26
expanded 17, 26
z/Architecture 17
Store Channel Path Status (STCPS)
instruction 66
Store Channel Report Word (STCRW)
instruction 66
Store Subchannel (STSCH)
instruction 66
STP (Server Time Protocol)
description 27, 91
subchannel
connectivity 44
support
broadcast 81
operating systems 39
Support Element 23, 113
features 116
wiring options 116
zBX management 34
switch modules, zBX 33
synchronous operation 70
sysplex functions
parallel 83
system
configurations 13
I/O reset 67
System Assist Processor (SAP) 16
system power supply 23
System x blade 29, 34
system-managed coupling facility
structure duplexing (CF duplexing) 93
T
Test Pending Interruption (TPI)
instruction 66
Test Subchannel (TSCH) instruction 66
time-out functions
channel 68
TKE (Trusted Key Entry) 104
tools
CHPID mapping 50
top-of-rack (TOR) switches
intraensemble data network TOR
switches 32
management TOR switches 32
Trusted Key Entry (TKE) 104
U
Unified Resource Manager 125
unsupported features 11
upgrade progression 11
upgrades
nondisruptive 151
permanent 147
V
virtual RETAIN 118
W
workload manager 98
WWPN tool 145
Z
z/Architecture 10
z/OS supported levels 39
z/TPF supported levels 39
z/VM supported levels 39
z/VSE supported levels 39
zAAP 16
zAAP (System z Application Assist
Processor) 2
zBX 29
entitlement 34
management 34
rack 32
zBX rack
acoustic door 34
BladeCenter 33
intraensemble data network TOR
switches 32
management TOR switches 32
Power Distribution Unit (PDU) 33
rear door heat exchanger 34
zIIP 16
zIIP (System z Integrated Information
Processor) 2
Printed in USA
SA22-1087-01