Front cover
DB2 10 for z/OS
Technical Overview
Paolo Bruni
Rafael Garcia
Sabine Kaschta
Josef Klitsch
Ravi Kumar
Andrei Lurie
Michael Parbs
Rajesh Ramachandran
Explore the new system and
application functions
Obtain information about expected
performance improvements
Decide how and when to
migrate to DB2 10
International Technical Support Organization
DB2 10 for z/OS Technical Overview
December 2010
SG24-7892-00
Copyright International Business Machines Corporation 2010. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
First Edition (December 2010)
This edition applies to IBM DB2 Version 10.1 for z/OS (program number 5605-DB2).
Note: Before using this information and the product it supports, read the information in Notices on
page xxvii.
Note: This book is based on a pre-GA version of a program and may not apply when the program becomes
generally available. We recommend that you consult the program documentation or follow-on versions of
this IBM Redbooks publication for more current information.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxviii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxix
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxix
Acknowledgments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxxi
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxxiii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxxiii
Stay connected to IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxxiii
Summary of changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxxv
December 2010, First Edition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxxv
March 2011, First Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxxv
December 2011, Second Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxxvi
August 2013, Third Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxxvi
Chapter 1. DB2 10 for z/OS at a glance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Executive summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Benefits of DB2 10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Subsystem enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Application functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 Operation and performance enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Part 1. Subsystem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Chapter 2. Synergy with System z. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.1 Synergy with System z in general . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 Synergy with IBM System z and z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2.1 DBM1 64-bit memory usage and virtual storage constraint relief . . . . . . . . . . . . . . 4
2.2.2 ECSA virtual storage constraint relief . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2.3 Increase of 64-bit memory efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2.4 Improved CPU cache performance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.5 HiperDispatch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.6 XML virtual storage constraint relief . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.7 XML fragment validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.8 Improved DB2 startup times and DSMAX with z/OS V1R12 . . . . . . . . . . . . . . . . . 9
2.2.9 CPU measurement facility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Synergy with IBM zEnterprise System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.4 Synergy with storage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4.1 Extended address volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4.2 More data set types supported on EAV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4.3 Dynamic volume expansion feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.4 SMS data set separation by volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.5 High Performance FICON. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.6 IBM System Storage DS8800 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.7 DFSMS support for solid-state drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.4.8 Easy Tier technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4.9 Data set recovery of moved and deleted data sets. . . . . . . . . . . . . . . . . . . . . . . . 22
2.4.10 Synergy with FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.4.11 DB2 catalog and directory now SMS managed . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5 z/OS Security Server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.5.1 RACF password phrases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.5.2 RACF identity propagation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.6 Synergy with TCP/IP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.6.1 z/OS V1R10 and IPv6 support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.6.2 z/OS UNIX System Services named pipe support in FTP . . . . . . . . . . . . . . . . . . 27
2.6.3 IPSec encryption. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.6.4 SSL encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.7 WLM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.7.1 DSN_WLM_APPLENV stored procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.7.2 Classification of DRDA workloads using DB2 client information. . . . . . . . . . . . . . 32
2.7.3 WLM blocked workload support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.7.4 Extend number of WLM reporting classes to 2,048 . . . . . . . . . . . . . . . . . . . . . . . 38
2.7.5 Support for enhanced WLM routing algorithms. . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.8 Using RMF for zIIP reporting and monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.8.1 DRDA workloads. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.8.2 Batch workloads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.9 Warehousing on System z . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.9.1 Cognos on System z. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.10 Data encryption. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.10.1 IBM TotalStorage for encryption on disk and tape . . . . . . . . . . . . . . . . . . . . . . . 43
2.11 IBM WebSphere DataPower . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.12 Additional zIIP and zAAP eligibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.12.1 DB2 10 parallelism enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.12.2 DB2 10 RUNSTATS utility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.12.3 DB2 10 buffer pool prefetch and deferred write activities . . . . . . . . . . . . . . . . . . 46
2.12.4 z/OS sort utility (DFSORT) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.12.5 DRDA workloads. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.12.6 zAAP on zIIP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.12.7 z/OS XML system services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Chapter 3. Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.1 Virtual storage relief . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.1.1 Support for full 64-bit run time. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.1.2 64-bit support for the z/OS ODBC driver. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.2 Reduction in latch contention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.3 Reduction in catalog contention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.4 Increased number of packages in SPT01 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.5 The WORKFILE database enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.5.1 Support for spanned work file records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.5.2 In-memory work file enhancements for performance . . . . . . . . . . . . . . . . . . . . . . 57
3.5.3 The CREATE TABLESPACE statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.5.4 Installation changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.6 Elimination of UTSERIAL for DB2 utilities. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.7 Support for Extended Address Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.8 Shutdown and restart times, and DSMAX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.9 Compression of SMF records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Chapter 4. Availability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.1 Online schema enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.1.1 UTS with DB2 9: Background information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.1.2 UTS with DB2 10: The ALTER options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.1.3 Pending definition changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.1.4 Online schema changes in detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.1.5 Materialization of pending definition changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.1.6 Impact on immediate options and DROP PENDING CHANGES . . . . . . . . . . . . . 89
4.1.7 UTS ALTER for MEMBER CLUSTER. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4.1.8 Utilities support for online schema changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.2 Autonomic checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.3 Dynamically adding an active log data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4.4 Preemptible backout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
4.5 Support for rotating partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
4.6 Compress on insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
4.6.1 DSN1COMP considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
4.6.2 Checking whether the data in a table space is compressed. . . . . . . . . . . . . . . . 106
4.6.3 Data is not compressed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.7 Long-running reader warning message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.8 Online REORG enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.9 Increased availability for CHECK utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Chapter 5. Data sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
5.1 Subgroup attach name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
5.2 Delete data sharing member . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
5.3 Buffer pool scan avoidance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
5.4 Universal table space support for MEMBER CLUSTER. . . . . . . . . . . . . . . . . . . . . . . 114
5.5 Restart light handles DDF indoubt units of recovery. . . . . . . . . . . . . . . . . . . . . . . . . . 117
5.6 Auto rebuild coupling facility lock structure on long IRLM waits during restart . . . . . . 118
5.7 Log record sequence number spin avoidance for inserts to the same page. . . . . . . . 118
5.8 IFCID 359 for index split . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
5.9 Avoid cross invalidations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
5.10 Recent DB2 9 enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
5.10.1 Random group attach DSNZPARM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
5.10.2 Automatic GRECP recovery functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
5.10.3 The -ACCESS DATABASE command enhancements . . . . . . . . . . . . . . . . . . . 121
5.10.4 Reduction in forced log writes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Part 2. Application functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Chapter 6. SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.1 Enhanced support for SQL scalar functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
6.1.1 SQL scalar functions syntax changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
6.1.2 Examples of SQL scalar functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
6.1.3 SQL scalar function versioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
6.1.4 Deploying non-inline SQL scalar functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
6.1.5 ALTER actions for the SQL scalar functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
6.2 Support for SQL table functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
6.2.1 SQL table functions syntax changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
6.2.2 Examples of CREATE and ALTER SQL table functions. . . . . . . . . . . . . . . . . . . 147
6.3 Enhanced support for native SQL procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
6.4 Extended support for implicit casting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
6.4.1 Examples of implicit casting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
6.4.2 Rules for result data types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
6.4.3 Function resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
6.5 Support for datetime constants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
6.6 Variable timestamp precision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
6.6.1 String representation of timestamp values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
6.6.2 Timestamp assignment and comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
6.6.3 Scalar function changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
6.6.4 Application programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
6.7 Support for TIMESTAMP WITH TIME ZONE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
6.7.1 Examples of TIMESTAMP WITH TIME ZONE . . . . . . . . . . . . . . . . . . . . . . . . . . 167
6.7.2 String representation of TIMESTAMP WITH TIME ZONE values. . . . . . . . . . . . 169
6.7.3 Determination of the implicit time zone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
6.7.4 TIMESTAMP WITH TIME ZONE assignment and comparison . . . . . . . . . . . . . 171
6.7.5 Rules for result data type with TIMESTAMP WITH TIME ZONE operands . . . . 173
6.7.6 CURRENT TIMESTAMP WITH TIME ZONE special register. . . . . . . . . . . . . . . 174
6.7.7 Scalar function changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
6.7.8 Statements changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
6.7.9 Application programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
6.8 Support for OLAP aggregation specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Chapter 7. Application enablement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
7.1 Support for temporal tables and versioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
7.1.1 System period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
7.1.2 Application period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
7.2 Access plan stability and instance-based statement hints . . . . . . . . . . . . . . . . . . . . . 220
7.2.1 Access path repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
7.2.2 BIND QUERY and FREE QUERY DB2 commands . . . . . . . . . . . . . . . . . . . . . . 222
7.2.3 Access plan stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
7.2.4 DB2 9 access plan stability support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
7.2.5 DB2 10 access plan stability support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
7.2.6 Instance-based statement hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
7.3 Addition of extended indicator variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
7.4 New Universal Language Interface program (DSNULI) . . . . . . . . . . . . . . . . . . . . . . . 235
7.5 Access to currently committed data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
7.6 EXPLAIN MODE special register to explain dynamic SQL . . . . . . . . . . . . . . . . . . . . . 240
7.7 Save LASTUSED information for packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Chapter 8. XML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
8.1 DB2 9 XML additional functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
8.1.1 The XMLTABLE function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
8.1.2 The XMLCAST specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
8.1.3 XML index for XML joins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
8.1.4 Index enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
8.1.5 XML index use by queries with XMLTABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
8.1.6 XPath scan improvement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
8.1.7 XPath functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
8.2 XML type modifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
8.3 XML schema validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
8.3.1 Enhancements to XML schema validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
8.3.2 Determining whether an XML document has been validated . . . . . . . . . . . . . . . 268
8.4 XML consistency checking with the CHECK DATA utility . . . . . . . . . . . . . . . . . . . . . . 269
8.5 Support for multiple versions of XML documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
8.5.1 XML versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
8.5.2 Storage structure for XML data with versions . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
8.6 Support for updating part of an XML document . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
8.6.1 Updates to entire XML documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
8.6.2 Partial updates of XML documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
8.7 Support for binary XML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
8.8 Support for XML date and time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
8.8.1 Enhancements for XML date and time support. . . . . . . . . . . . . . . . . . . . . . . . . . 290
8.8.2 XML date and time support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
8.9 XML in native SQL stored procedures and UDFs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
8.9.1 Enhancements to native SQL stored procedures . . . . . . . . . . . . . . . . . . . . . . . . 301
8.9.2 Enhancements to user defined SQL scalar and table functions . . . . . . . . . . . . . 302
8.9.3 Decompose to multiple tables with a native SQL procedure. . . . . . . . . . . . . . . . 305
8.10 Support for DEFINE NO for LOBs and XML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
8.10.1 IMPDSDEF subsystem parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
8.10.2 Usage reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
8.11 LOB and XML data streaming. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Chapter 9. Connectivity and administration routines . . . . . . . . . . . . . . . . . . . . . . . . . 309
9.1 DDF availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
9.1.1 Online communications database changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
9.1.2 Online DDF location alias name changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
9.1.3 Domain name is now optional . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
9.1.4 Acceptable period for honoring cancel requests. . . . . . . . . . . . . . . . . . . . . . . . . 315
9.1.5 Sysplex balancing using SYSIBM.IPLIST. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
9.1.6 Message-based correlation with remote IPv6 clients . . . . . . . . . . . . . . . . . . . . . 316
9.2 Monitoring and controlling connections and threads at the server . . . . . . . . . . . . . . . 317
9.2.1 Create the tables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
9.2.2 Insert a row in DSN_PROFILE_TABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
9.2.3 Insert a row in DSN_PROFILE_ATTRIBUTES. . . . . . . . . . . . . . . . . . . . . . . . . . 319
9.2.4 Activate profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
9.2.5 Deactivate profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
9.2.6 Activating a subset of profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
9.3 JDBC Type 2 driver performance enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
9.3.1 Limited block fetch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
9.3.2 Conditions for limited block fetch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
9.4 High performance DBAT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
9.4.1 High performance DBAT to reduce CPU usage . . . . . . . . . . . . . . . . . . . . . . . . . 324
9.4.2 Dynamic switching to RELEASE(COMMIT) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
9.4.3 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
9.5 Use of RELEASE(DEALLOCATE) in Java applications . . . . . . . . . . . . . . . . . . . . . . . 325
9.6 Support for 64-bit ODBC driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
9.7 DRDA support of Unicode encoding for system code pages . . . . . . . . . . . . . . . . . . . 328
9.8 Return to client result sets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
9.9 DB2-supplied stored procedures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
9.9.1 Administrative task scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
9.9.2 Administration enablement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
9.9.3 DB2 statistics routines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
Part 3. Operation and performance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Chapter 10. Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
10.1 Policy-based audit capability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
10.1.1 Audit policies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
10.1.2 Authorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
10.1.3 Creating audit reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
10.1.4 Policy-based SQL statement auditing for tables . . . . . . . . . . . . . . . . . . . . . . . . 349
10.1.5 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
10.2 More granular system authorities and privileges. . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
10.2.1 Separation of duties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
10.2.2 Least privilege. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
10.2.3 Grant and revoke system privilege changes. . . . . . . . . . . . . . . . . . . . . . . . . . . 356
10.2.4 Catalog changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
10.2.5 SECADM authority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
10.2.6 System DBADM authority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
10.2.7 DATAACCESS authority. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
10.2.8 ACCESSCTRL authority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
10.2.9 Authorities for SQL tuning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
10.2.10 The CREATE_SECURE_OBJECT system privilege . . . . . . . . . . . . . . . . . . . 376
10.2.11 Using RACF profiles to manage DB2 10 authorities. . . . . . . . . . . . . . . . . . . . 377
10.2.12 Separating SECADM authority from SYSADM and SYSCTRL authority . . . . 377
10.2.13 Minimize need for SYSADM authorities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
10.3 System-defined routines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
10.3.1 Installing DB2-supplied system-defined routines . . . . . . . . . . . . . . . . . . . . . . . 381
10.3.2 Define your own system-defined routines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
10.3.3 Mark user-provided SQL table function as system defined. . . . . . . . . . . . . . . . 381
10.4 The REVOKE dependent privilege clause . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
10.4.1 Revoke statement syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
10.4.2 Revoke dependent privileges system default . . . . . . . . . . . . . . . . . . . . . . . . . . 383
10.5 Support for row and column access control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
10.5.1 Authorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
10.5.2 New terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
10.5.3 Object types for row and column based policy definition . . . . . . . . . . . . . . . . . 386
10.5.4 SQL DDL for managing new access controls and objects . . . . . . . . . . . . . . . . 387
10.5.5 Built-in functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
10.5.6 Catalog tables and row and column access control . . . . . . . . . . . . . . . . . . . . . 389
10.5.7 Sample customer table used in examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
10.5.8 Row access control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
10.5.9 Row permissions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
10.5.10 Column access control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
10.5.11 Column masks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
10.5.12 Row permissions and column masks in interaction . . . . . . . . . . . . . . . . . . . . 398
10.5.13 Application design considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
10.5.14 Operational considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
10.5.15 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
10.5.16 Catalog changes for access control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
10.5.17 Added and changed IFCIDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
10.6 Support for z/OS security features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
10.6.1 z/OS Security Server password phrase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
10.6.2 z/OS Security Server identity propagation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Chapter 11. Utilities. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
11.1 Support FlashCopy enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
11.1.1 Getting started with FlashCopy image copies with the COPY utility. . . . . . . . . 426
11.1.2 FlashCopy image copies and utilities other than COPY . . . . . . . . . . . . . . . . . . 437
11.1.3 Requirements and restrictions for using FlashCopy image copies . . . . . . . . . . 444
11.2 Autonomic statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
11.2.1 Using RUNSTATS profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
11.2.2 Updating RUNSTATS profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
11.2.3 Deleting RUNSTATS profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
11.2.4 Combining autonomic and manual statistics maintenance . . . . . . . . . . . . . . . . 448
11.3 RECOVER with BACKOUT YES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
11.4 Online REORG enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
11.4.1 Improvements to online REORG for base table spaces with LOBs . . . . . . . . . 452
11.4.2 REORG SHRLEVEL options for LOB table spaces . . . . . . . . . . . . . . . . . . . . . 458
11.4.3 Improved usability of REORG of disjoint partition ranges. . . . . . . . . . . . . . . . . 458
11.4.4 Cancelling blocking claimers with REORG FORCE . . . . . . . . . . . . . . . . . . . . . 460
11.5 Increased availability for CHECK utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
11.6 IBM DB2 Sort for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
11.7 UTSERIAL elimination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
11.8 REPORT utility output improvement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
Chapter 12. Installation and migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
12.1 Planning for migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
12.1.1 DB2 10 pre-migration health check job. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
12.1.2 Fallback SPE and maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
12.1.3 Partitioned data set extended support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
12.1.4 Convert bootstrap data set to expanded format . . . . . . . . . . . . . . . . . . . . . . . . 477
12.1.5 Plans and packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
12.2 Some release incompatibilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
12.2.1 CHAR and VARCHAR formatting for decimal data. . . . . . . . . . . . . . . . . . . . . . 479
12.2.2 Fall back restriction for native SQL procedures . . . . . . . . . . . . . . . . . . . . . . . . 480
12.2.3 Fall back restriction for index on expression. . . . . . . . . . . . . . . . . . . . . . . . . . . 480
12.3 DB2 10 product packaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
12.3.1 Removed features and functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
12.3.2 Deprecated features and functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
12.3.3 Base engine and features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
12.4 Command changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
12.5 Catalog changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
12.5.1 SMS-managed DB2 catalog and directory data sets . . . . . . . . . . . . . . . . . . . . 490
12.5.2 CLOB and BLOB columns added to the catalog. . . . . . . . . . . . . . . . . . . . . . . . 490
12.5.3 Reduced catalog contention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
12.5.4 Converting catalog and directory table spaces to partition-by-growth . . . . . . . 492
12.5.5 Added catalog objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
12.6 Implications of DB2 catalog restructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
12.7 DSNZPARM change summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
12.8 EXPLAIN tables in DB2 10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
12.8.1 EXPLAIN table changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
12.8.2 Tools to help you convert to new format and Unicode . . . . . . . . . . . . . . . . . . . 509
12.8.3 Converting the EXPLAIN table to new format and Unicode . . . . . . . . . . . . . . . 514
12.9 SMS-managed DB2 catalog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
12.10 Skip level migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
12.11 Fallback. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
12.11.1 Implication to catalog image copy job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
12.11.2 Frozen objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
12.12 Improvements to DB2 installation and samples . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
12.12.1 Installation pop-up panel DSNTIPSV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
12.12.2 Job DSNTIJXZ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
12.12.3 Installation verification procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
12.12.4 Sample for XML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
12.13 Simplified installation and configuration of DB2-supplied routines . . . . . . . . . . . . . 523
12.13.1 Deploying the DB2-supplied routines when installing DB2 10 for z/OS . . . . . 528
12.13.2 Validating deployment of DB2-supplied routines . . . . . . . . . . . . . . . . . . . . . . 529
12.14 Eliminating DDF private protocol. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
12.15 Precompiler NEWFUN option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
Chapter 13. Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
13.1 Performance expectations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
13.2 Improved optimization techniques. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
13.2.1 Safe query optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
13.2.2 RID pool enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
13.2.3 Range-list index scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
13.2.4 IN-LIST enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
13.2.5 Aggressive merge for views and table expressions . . . . . . . . . . . . . . . . . . . . . 542
13.2.6 Improvements to predicate processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
13.2.7 Sort enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
13.3 Dynamic prefetch enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
13.3.1 Index scans using list prefetch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
13.3.2 Row level sequential detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
13.3.3 Progressive prefetch quantity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
13.4 DDF enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
13.4.1 The RELEASE(DEALLOCATE) BIND option . . . . . . . . . . . . . . . . . . . . . . . . . . 548
13.4.2 Miscellaneous DDF performance improvements . . . . . . . . . . . . . . . . . . . . . . . 551
13.5 Dynamic statement cache enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
13.6 INSERT performance improvement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
13.6.1 I/O parallelism for index updates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
13.6.2 Sequential inserts into the middle of a clustering index . . . . . . . . . . . . . . . . . . 559
13.7 Referential integrity checking improvement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
13.8 Buffer pool enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
13.8.1 Buffer storage allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
13.8.2 In-memory table spaces and indexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
13.8.3 Reduce latch contention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
13.9 Work file enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
13.10 Support for z/OS enqueue management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
13.11 LOB enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
13.11.1 LOAD and UNLOAD with spanned records . . . . . . . . . . . . . . . . . . . . . . . . . . 565
13.11.2 File reference variable enhancement for 0 length LOBs. . . . . . . . . . . . . . . . . 567
13.11.3 Streaming LOBs and XML between DDF and DBM1 . . . . . . . . . . . . . . . . . . . 568
13.11.4 Inline LOBs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
13.12 Utility BSAM enhancements for extended format data sets . . . . . . . . . . . . . . . . . . 571
13.13 Performance enhancements for local Java and ODBC applications. . . . . . . . . . . . 572
13.14 Logging enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
13.14.1 Long term page fix log buffers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
13.14.2 LOG I/O enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
13.14.3 Log latch contention reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
13.15 Hash access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
13.15.1 The hashing definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
13.15.2 Using hash access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
13.15.3 Monitoring the performance of hash access tables. . . . . . . . . . . . . . . . . . . . . 584
13.15.4 New SQLCODEs to support hash access. . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
13.16 Additional non-key columns in a unique index . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
13.17 DB2 support for solid state drive. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
13.18 Extended support for the SQL procedural language. . . . . . . . . . . . . . . . . . . . . . . . 591
13.19 Preemptable backout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
13.20 Eliminate mass delete locks for universal table spaces . . . . . . . . . . . . . . . . . . . . . 592
13.21 Parallelism enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
13.21.1 Remove some restrictions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
13.21.2 Improve the effectiveness of parallelism. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
13.21.3 Straw model for workload distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
13.21.4 Sort merge join improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
13.22 Online performance buffers in 64-bit common . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
13.23 Enhanced instrumentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
13.23.1 One minute statistics trace interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
13.23.2 IFCID 359 for index leaf page split . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
13.23.3 Separate DB2 latch and transaction lock in Accounting class 8 . . . . . . . . . . . 599
13.23.4 Storage statistics for DIST address space . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
13.23.5 Accounting: zIIP SECP values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
13.23.6 Package accounting information with rollup . . . . . . . . . . . . . . . . . . . . . . . . . . 601
13.23.7 DRDA remote location statistics detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
13.24 Enhanced monitoring support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
13.24.1 Unique statement identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
13.24.2 New monitor class 29 for statement detail level monitoring . . . . . . . . . . . . . . 605
13.24.3 System level monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605
Part 4. Appendixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
Appendix A. Information about IFCID changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
A.1 IFCID 002: Dynamic statement cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
A.2 IFCID 002 - Currently committed data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
A.3 IFCID 013 and IFCID 014. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
A.4 IFCID 106 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
A.5 IFCID 225 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
A.6 IFCID 267 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
A.7 IFCID 316 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
A.8 IFCID 357 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
A.9 IFCID 358 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
A.10 IFCID 359 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
A.11 IFCID 360 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
A.12 IFCID 363 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
Appendix B. Summary of relevant maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
B.1 DB2 APARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
B.2 z/OS APARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
B.3 OMEGAMON PE APARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
IBM Redbooks publication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
How to get Redbooks publications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
Figures
2-1 DB2 9 and DB2 10 VSCR in the DBM1 address space . . . . . . . . . . . . . . . . . . . . . . . . . 5
2-2 Address translation for 4 KB pages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2-3 Address translation for 1 MB pages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2-4 Memory and CPU cache latencies for z10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2-5 IBM zEnterprise System components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2-6 IBM zEnterprise System: Capacity and scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2-7 zEnterprise CPU reduction with DB2 9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2-8 EAV breaking the limit. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2-9 Overview DS8000 EAV support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2-10 EAV support for ICF catalogs and VVDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2-11 zHPF performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2-12 zHPF link protocol comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2-13 DS8000 versus DS8300 for DB2 sequential scan . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2-14 Log writes throughput comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2-15 FLASHCOPY NO COPY utility accounting report. . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2-16 FLASHCOPY YES COPY utility accounting report . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2-17 UNIX System Services file system information for a UNIX System Services named pipe. . . 27
2-18 How FTP access to UNIX named pipes works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2-19 z/OS IPSec overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2-20 IPSEC zIIP eligibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2-21 z/OS V1R12 AT-TLS in-memory encrypt / decrypt improvement . . . . . . . . . . . . . . . . 31
2-22 DSN_WLM_APPLENV output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2-23 WLM classification DRDA work based on program name . . . . . . . . . . . . . . . . . . . . . 33
2-24 SDSF enclave display of DRDA request. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2-25 SDSF enclave information for client program ZSYNERGY. . . . . . . . . . . . . . . . . . . . . 34
2-26 DB2 display thread command showing DB2 client information. . . . . . . . . . . . . . . . . . 35
2-27 Blocked workload RMF CPU activity report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2-28 Blocked workload RMF workload activity report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2-29 RMF workload activity report for RMF report class ZSYNERGY . . . . . . . . . . . . . . . . 39
2-30 WLM classification for batch job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2-31 SDSF display active batch WLM classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2-32 RMF workload activity report for the RUNSTATS report class . . . . . . . . . . . . . . . . . . 41
2-33 DB2 RUNSTATS utility accounting report. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2-34 WebSphere DataPower DRDA capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2-35 Specialty engines applicability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2-36 Comparison of zAAP and zIIP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2-37 OMEGAMON PE statistics report DBM1 zIIP usage . . . . . . . . . . . . . . . . . . . . . . . . 47
2-38 WLM report class assignment for the MSTR address space . . . . . . . . . . . . . . . . . . . 48
2-39 SDSF display active panel to verify MSTR report class assignment . . . . . . . . . . . . . 48
2-40 RMF workload activity report zIIP usage for buffer pool prefetches . . . . . . . . . . . . . . 48
2-41 z/OS XML system services zIIP and zAAP processing flow. . . . . . . . . . . . . . . . . . . . 50
3-1 Updated installation panel DSNTIP9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4-1 Possible table space type conversions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4-2 Possible online schema changes for table spaces and indexes. . . . . . . . . . . . . . . . . . 71
4-3 ALTER TABLESPACE ... BUFFERPOOL statement and resulting SQL code . . . . . . . 75
4-4 SYSIBM.SYSPENDINGDDL after ALTER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4-5 -647 SQL code after ALTER TABLESPACE ... BUFFERPOOL . . . . . . . . . . . . . . . . . . 77
xiv DB2 10 for z/OS Technical Overview
4-6 ALTER TABLESPACE ... SEGSIZE decreased . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4-7 ALTER TABLE ADD ORGANIZE BY HASH. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4-8 ALTER TABLE organization-clause . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4-9 Partition-by-growth to hash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4-10 REORGed HASH SPACE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4-11 DROP ORGANIZATION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4-12 REORG TABLESPACE messages for materialization of pending changes . . . . . . . . 86
4-13 SQLCODE +610 message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4-14 One wrapped row showing pending change in SYSPENDINGDDL . . . . . . . . . . . . . . 90
4-15 SQLCODE for normally immediate change due to pending change. . . . . . . . . . . . . . 90
4-16 Recover after alter before materialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4-17 Recover to current after materialization of pending definition changes . . . . . . . . . . . 93
4-18 Effects of REPAIR SET NOAREORPEND . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4-19 -DIS LOG OUTPUT for CHECKTYPE=BOTH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4-20 -SET LOG syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4-21 ROTATE 3 TO LAST sample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4-22 SELECT to identify right physical partition for ROTATE. . . . . . . . . . . . . . . . . . . . . . 102
4-23 ROTATE 4 TO LAST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4-24 -DIS DB after two ROTATE table space executions. . . . . . . . . . . . . . . . . . . . . . . . . 102
4-25 LIMITKEY, PARTITION, and LOGICAL_PART after second rotate . . . . . . . . . . . . . 103
4-26 Error message you get when trying to rotate the last partition . . . . . . . . . . . . . . . . . 103
4-27 Message DSNU241I compression dictionary build. . . . . . . . . . . . . . . . . . . . . . . . . . 104
4-28 Compression dictionary pages spread over table space . . . . . . . . . . . . . . . . . . . . . 105
5-1 MEMBER CLUSTER with RECOVER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
5-2 ACCESS DATABASE command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
6-1 CREATE FUNCTION (SQL scalar) syntax diagram: Main . . . . . . . . . . . . . . . . . . . . . 128
6-2 CREATE FUNCTION (SQL scalar) syntax diagram: parameter-declaration . . . . . . . 129
6-3 CREATE FUNCTION (SQL scalar) syntax diagram: SQL-routine-body. . . . . . . . . . . 129
6-4 ALTER FUNCTION (SQL scalar) syntax diagram: Main. . . . . . . . . . . . . . . . . . . . . . . 133
6-5 CREATE FUNCTION (SQL table) syntax diagram: Main . . . . . . . . . . . . . . . . . . . . . . 144
6-6 CREATE FUNCTION (SQL table): Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
6-7 CREATE FUNCTION (SQL table) syntax: SQL-routine-body. . . . . . . . . . . . . . . . . . . 145
6-8 RETURN SQL control statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
6-9 ALTER FUNCTION (SQL table): Main . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
6-10 Enhanced assignment (SET) statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
6-11 Stored length of the timestamp based on its precision . . . . . . . . . . . . . . . . . . . . . . . 157
6-12 Described length of the timestamp based on its precision . . . . . . . . . . . . . . . . . . . . 157
6-13 TIMESTAMP_STRUCT used for SQL_C_TYPE_TIMESTAMP . . . . . . . . . . . . . . . . 165
6-14 TIMESTAMP data types supported by DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
6-15 Stored length of the TIMESTAMP WITH TIME ZONE based on its precision . . . . . 168
6-16 Described (external) length of the timestamp based on its precision . . . . . . . . . . . . 169
6-17 Syntax for time zone specific expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
6-18 TIMESTAMP_TZ syntax diagram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6-19 Syntax diagram for SET SESSION TIME ZONE . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
6-20 Scalar function processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
6-21 Aggregate function processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
6-22 OLAP specification processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
6-23 Window-partition-clause syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
6-24 Window-ordering-clause syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
6-25 Window aggregation-group-clause . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
6-26 Aggregation specification syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
7-1 Access path repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
7-2 BIND QUERY syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
7-3 FREE QUERY syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
7-4 REBIND (TRIGGER) PACKAGE...PLANMGMT option . . . . . . . . . . . . . . . . . . . . . . . 226
7-5 REBIND (TRIGGER) PACKAGE...PLANMGMT(BASIC) . . . . . . . . . . . . . . . . . . . . . . 226
7-6 REBIND (TRIGGER) PACKAGE...PLANMGMT(EXTENDED) . . . . . . . . . . . . . . . . . . 227
7-7 REBIND (TRIGGER) PACKAGE...SWITCH (1 of 2). . . . . . . . . . . . . . . . . . . . . . . . . . 228
7-8 REBIND (TRIGGER) PACKAGE...SWITCH (2 of 2). . . . . . . . . . . . . . . . . . . . . . . . . . 228
7-9 FREE PACKAGE command with PLANMGMTSCOPE option . . . . . . . . . . . . . . . . . . 229
7-10 Application program or stored procedure linked with DSNULI . . . . . . . . . . . . . . . . . 236
7-11 BIND option CONCURRENTACCESSRESOLUTION . . . . . . . . . . . . . . . . . . . . . . . 238
7-12 USE CURRENTLY COMMITTED clause . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
7-13 CREATE PROCEDURE option list option. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
7-14 SET CURRENT EXPLAIN MODE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
8-1 The XMLTABLE function syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
8-2 Avoid re-evaluation of XMLEXISTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
8-3 XML schemas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
8-4 XML schemas in XML Schema Repository. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
8-5 Schema determination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
8-6 CHECK DATA syntax: New keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
8-7 The CHECK DATA utility: SHRLEVEL REFERENCE considerations . . . . . . . . . . . . 272
8-8 CHECK DATA: SHRLEVEL CHANGE considerations (1 of 2) . . . . . . . . . . . . . . . . . . 273
8-9 CHECK DATA: SHRLEVEL CHANGE considerations (2 of 2) . . . . . . . . . . . . . . . . . . 273
8-10 Multiversioning scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
8-11 Multiversioning for XML data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
8-12 XML locking scheme (with DB2 9 APAR PK55966) . . . . . . . . . . . . . . . . . . . . . . . . . 279
8-13 XML locking scheme with multiversioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
8-14 Insert expression syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
8-15 Insert expression examples (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
8-16 Insert expression examples (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
8-17 Insert expression examples (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
8-18 Replace expression syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
8-19 Replace expression examples (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
8-20 Replace expression examples (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
8-21 Delete expression syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
8-22 Delete expression example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
8-23 Binary XML is not the same as FOR BIT DATA . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
8-24 Binary XML in the UNLOAD and LOAD utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
8-25 XML date and time support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
8-26 XML date and time related comparison operators . . . . . . . . . . . . . . . . . . . . . . . . . . 293
8-27 XML date and time comparison with SQL date. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
8-28 Arithmetic operations on XML duration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
8-29 Arithmetic operations on XML duration, date, and time . . . . . . . . . . . . . . . . . . . . . . 295
8-30 Date and time related XPath functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
8-31 Date and time related XPath functions examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
8-32 Time zone adjustment functions on date and time (1 of 2) . . . . . . . . . . . . . . . . . . . . 298
8-33 Time zone adjustment functions on date and time (2 of 2) . . . . . . . . . . . . . . . . . . . . 299
8-34 XML index improvement for date and time stamp . . . . . . . . . . . . . . . . . . . . . . . . . . 300
8-35 Native SQL stored procedure using XML parameter . . . . . . . . . . . . . . . . . . . . . . . . 301
8-36 Native SQL stored procedure using XML variable . . . . . . . . . . . . . . . . . . . . . . . . . 302
8-37 Decompose to multiple tables with a native SQL procedure. . . . . . . . . . . . . . . . . . . 305
8-38 Decompose to multiple tables with a native SQL procedure (DB2 9) . . . . . . . . . . . . 306
9-1 -DISPLAY LOCATION output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
9-2 -MODIFY DDF ALIAS syntax diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
9-3 DISPLAY DDF command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
9-4 DISPLAY DDF DETAIL command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
9-5 Extended correlation token in messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
9-6 Contents of DSN_PROFILE_TABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
9-7 Contents of DSN_PROFILE_ATTRIBUTES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
9-8 JDBC type 2 with DB2 9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
9-9 JDBC type 2 with DB2 10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
9-10 Administrative task scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
10-1 Start audit trace parameter AUDTPLCY. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
10-2 Use of AUDTPLCY in start, stop trace commands . . . . . . . . . . . . . . . . . . . . . . . . . . 345
10-3 Start AUDSYSADMIN audit policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
10-4 Start multiple audit trace policies with one start trace command . . . . . . . . . . . . . . . 345
10-5 Display audit policy AUDSYSADMIN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
10-6 Stop audit policy AUDSYSADMIN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
10-7 Start AUDTPLCY reason code 00E70022 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
10-8 Start AUDTPLCY reason code 00E70021 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
10-9 Start AUDTPLCY reason code 00E70024 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
10-10 OMEGAMON PE IFCID 362 RECTRACE report . . . . . . . . . . . . . . . . . . . . . . . . . . 347
10-11 OMEGAMON PE IFCID 361 record trace. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
10-12 OMEGAMON PE V1R5 JCL sample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
10-13 OMEGAMON PE record trace for static SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
10-14 System authorities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
10-15 SQL syntax grant system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
10-16 SQL syntax revoke system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
10-17 User DB2R53 is connected to RACF group DB0BSECA . . . . . . . . . . . . . . . . . . . 362
10-18 OMEGAMON PE category SECMAINT audit report . . . . . . . . . . . . . . . . . . . . . . . . 363
10-19 OMEGAMON PE auth type SYSDBADM audit report . . . . . . . . . . . . . . . . . . . . . . 367
10-20 OMEGAMON PE auth type DATAACCESS audit report . . . . . . . . . . . . . . . . . . . . 369
10-21 OMEGAMON PE auth type ACCESSCTRL audit report . . . . . . . . . . . . . . . . . . . . 371
10-22 SQLADM authority and EXPLAIN privilege . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
10-23 EXPLAIN privilege failure with CURRENT EXPLAIN MODE special register. . . . . 373
10-24 SQLADM authority to use CURRENT EXPLAIN MODE special register . . . . . . . . 375
10-25 OMEGAMON PE auth type SQLADM audit report . . . . . . . . . . . . . . . . . . . . . . . . . 376
10-26 SECADM and ACCESSCTRL not separated from SYSADM. . . . . . . . . . . . . . . . . 378
10-27 SECADM and ACCESSCTRL separated from SYSADM. . . . . . . . . . . . . . . . . . . . 379
10-28 SQLADM execution failure for non-system-defined routine . . . . . . . . . . . . . . . . . 382
10-29 SQLADM run system-defined user provided UDF . . . . . . . . . . . . . . . . . . . . . . . . . 382
10-30 Revoke syntax diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
10-31 Row permission enforcement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
10-32 Column mask enforcement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
10-33 Alter table row access control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
10-34 Alter table column access control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
10-35 CREATE PERMISSION SQL DDL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
10-36 ALTER PERMISSION DDL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
10-37 CREATE MASK SQL DDL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
10-38 ALTER MASK DDL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
10-39 Catalog tables and access control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
10-40 Sample data customer table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
10-41 Default row permission stored in SYSIBM.SYSCONTROLS . . . . . . . . . . . . . . . . . 391
10-42 Query row access control with default predicate 1=0 . . . . . . . . . . . . . . . . . . . . . . . 392
10-43 Impact of default predicate on query result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
10-44 SYSIBM.SYSDEPENDENCIES RA01_CUSTOMER . . . . . . . . . . . . . . . . . . . . . . . 394
10-45 Query permission RA01_CUSTOMER without query predicate . . . . . . . . . . . . . . . 394
10-46 Query permission RA01_CUSTOMER and with query predicate . . . . . . . . . . . . . . 395
10-47 SYSIBM.SYSDEPENDENCIES rows for INCOME_BRANCH column mask . . . . . 397
10-48 Query with column access control with column mask applied . . . . . . . . . . . . . . . . 398
10-49 Permission and mask policies activated for query sample . . . . . . . . . . . . . . . . . . . 399
10-50 Customer table query for SQLID DB0B#B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
10-51 Customer table query for SQLID DB0B#B salary > 80000 . . . . . . . . . . . . . . . . . . . 399
10-52 Customer table query for SQLID DB0B#C selecting rows of branch C . . . . . . . . . 400
10-53 Alter trigger syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
10-54 RA02_CUSTOMERS EXPLAIN output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
10-55 RA02_CUSTOMERS DSN_PREDICAT_TABLE query output . . . . . . . . . . . . . . . . 404
10-56 RA02_CUSTOMERS DSN_STRUCT_TABLE query output . . . . . . . . . . . . . . . . . 404
10-57 OMEGAMON PE formatted audit report for audit category EXECUTE . . . . . . . . . 405
10-58 Query SYSIBM.SYSPACKDEP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
10-59 Query DSN_STATEMENT_CACHE_TABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
10-60 DRDA Environment for password phrase scenario. . . . . . . . . . . . . . . . . . . . . . . . . 412
10-61 SYSIBM.USERNAMES row DSNLEUSR encrypted. . . . . . . . . . . . . . . . . . . . . . . . 414
10-62 SPUFI and outbound translation with password phrase . . . . . . . . . . . . . . . . . . . . . 414
10-63 OMEGAMON PE RECTRACE report on DRDA outbound translation . . . . . . . . . . 414
10-64 TestJDBC output. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
10-65 DISPLAY THREAD taken during COBOL SQL connect scenario . . . . . . . . . . . . . 416
10-66 z/OS identity propagation TrustedContextDB2zOS flow. . . . . . . . . . . . . . . . . . . . . 417
10-67 RACMAP LISTMAP command output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
10-68 getDB2TrustedPooledConnection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
10-69 getDB2Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
10-70 Display thread output right after trusted context pool connection . . . . . . . . . . . . . . 421
10-71 Display thread output right after trusted context switch . . . . . . . . . . . . . . . . . . . . . 422
10-72 Audit report trusted context SET CURRENT SQLID. . . . . . . . . . . . . . . . . . . . . . . . 422
10-73 Audit report establish trusted context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
10-74 Audit report distributed identity: RACF authorization ID mapping. . . . . . . . . . . . . . 423
10-75 Audit report trusted context switching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
11-1 FCCOPY keyword on COPY syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
11-2 FLASHCOPY_COPY=YES versus FLASHCOPY_COPY=NO in DSNZPARM . . . . 429
11-3 Create FlashCopy image copy and sequential copy depending on ZPARM setting. 430
11-4 FlashCopy image copy of partitioned table space plus sequential copy. . . . . . . . . . 432
11-5 FLASHCOPY CONSISTENT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
11-6 REBUILD INDEX without FLASHCOPY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
11-7 REBUILD INDEX with FLASHCOPY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
11-8 REORG INDEX without FLASHCOPY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
11-9 Profile definition for RUNSTATS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
11-10 RECOVER BACKOUT YES: Base situation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
11-11 RECOVER TABLESPACE without BACKOUT YES. . . . . . . . . . . . . . . . . . . . . . . . 449
11-12 RECOVER TABLESPACE ... BACKOUT YES. . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
11-13 Recover multiple table spaces to the same UR with BACKOUT YES . . . . . . . . . . 451
11-14 Enhanced REORG TABLESPACE syntax for PART keyword . . . . . . . . . . . . . . . . 459
11-15 -DIS DB command output after a couple of ROTATE commands . . . . . . . . . . . . . 460
11-16 LISTDEF syntax change for multiple partition ranges. . . . . . . . . . . . . . . . . . . . . . . 460
11-17 FORCE keyword on REORG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
11-18 Possible effect of option FORCE ALL on REORG . . . . . . . . . . . . . . . . . . . . . . . . . 462
11-19 CHECK DATA with inconsistencies found and CHECK_SETCHKP=YES . . . . . . . 465
11-20 CHECK DATA with inconsistencies found and CHECK_SETCHKP=NO. . . . . . . . 466
12-1 DB2 version summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
12-2 Migrating from DB2 9 to DB2 10 and fallback paths . . . . . . . . . . . . . . . . . . . . . . . . . 475
12-3 Migrating from DB2 V8 to DB2 10 and fallback paths. . . . . . . . . . . . . . . . . . . . . . . . 476
12-4 DB2 10 optional features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
12-5 DB2 catalog evolution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
12-6 DB2 directory table changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
12-7 Catalog tables for security and auditing in DB2 10 . . . . . . . . . . . . . . . . . . . . . . . . . . 496
12-8 Catalog tables for pending object changes in DB2 10 . . . . . . . . . . . . . . . . . . . . . . . 497
12-9 Catalog tables for BIND QUERY support in DB2 10. . . . . . . . . . . . . . . . . . . . . . . . . 498
12-10 Installation pop-up panel DSNTIPSV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
13-1 DB2 10 performance objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
13-2 Prefetch window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
13-3 MODIFY DDF command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
13-4 DB2 trace records IFCID 58 and 59 for OPEN and unbundled DRDA block prefetch 552
13-5 DB2 trace records IFCID 58 and 59 for OPEN and bundled DRDA block prefetch . 552
13-6 Statistics report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
13-7 Attributes clause of the PREPARE statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
13-8 Summary of main insert performance improvements . . . . . . . . . . . . . . . . . . . . . . . . 557
13-9 ALTER BUFFERPOOL command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
13-10 Streaming LOBs and XML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
13-11 Hash space structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
13-12 Hash access and partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
13-13 ORGANIZE BY HASH clause . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
13-14 ORGANIZE BY HASH Partition clause. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
13-15 SQL PL multiple assignment statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
13-16 Key range partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
13-17 Dynamic record based partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
13-18 Workload balancing straw model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
13-19 Accounting suspend times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
13-20 Statistics, DIST storage above 2 GB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
13-21 Statistics, DIST storage below 2 GB. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
13-22 System level monitoring - Invalid profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
13-23 System level monitoring - Attributes table. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
Examples
2-1 DSN_WLM_APPLENV stored procedure call . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2-2 SET_CLIENT_INFO stored procedure invocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2-3 JCL RMF workload activity report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3-1 Queries with parallelism enabled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3-2 DSNTWFG and description of its parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4-1 Table space and index space as partition-by-growth . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4-2 REORG TABLESPACE with AUTOESTSPACE YES. . . . . . . . . . . . . . . . . . . . . . . . . . 83
4-3 REORG utility message DSNUGHSH. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4-4 REORG TABLESPACE with AUTOESTSPACE NO . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4-5 Data set sizes after REORG. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4-6 REPORT RECOVERY job output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
4-7 Define clusters for the new active log data sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4-8 -DIS LOG command output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4-9 Message for LRDRTHLD threshold exceeded . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
6-1 Inline SQL scalar function KM2MILES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
6-2 Non-inline function KM2MILES_SLOW using option not previously available . . . . . . 136
6-3 Inline SQL scalar function TRYXML using XML data type . . . . . . . . . . . . . . . . . . . . . 136
6-4 Non-inline SQL scalar function TRYSFS using scalar fullselect . . . . . . . . . . . . . . . . . 137
6-5 Non-inline SQL scalar function TRYTBL with transition table parameter . . . . . . . . . . 137
6-6 Special register behavior within the SQL scalar function body . . . . . . . . . . . . . . . . . . 138
6-7 SQL scalar function versioning example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
6-8 Deployment of a SQL scalar function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
6-9 Initial definition of the function for the running example . . . . . . . . . . . . . . . . . . . . . . . 141
6-10 ALTER FUNCTION (SQL scalar) ALTER option-list. . . . . . . . . . . . . . . . . . . . . . . . . 141
6-11 ALTER FUNCTION (SQL scalar) ADD VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . 142
6-12 ALTER FUNCTION (SQL scalar) ACTIVATE VERSION . . . . . . . . . . . . . . . . . . . . . 142
6-13 ALTER FUNCTION (SQL scalar) REPLACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
6-14 ALTER FUNCTION (SQL scalar) REGENERATE . . . . . . . . . . . . . . . . . . . . . . . . . . 142
6-15 ALTER FUNCTION (SQL scalar) DROP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
6-16 SQL table function definition and invocation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
6-17 ALTER FUNCTION (SQL table) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
6-18 Distinct type in SQL procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
6-19 Implicit cast from numeric to string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
6-20 Implicit cast from string to numeric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
6-21 Rounding with implicit cast from string to decimal . . . . . . . . . . . . . . . . . . . . . . . . . . 152
6-22 Implicit cast with function resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
6-23 Examples of datetime constants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
6-24 Difference between datetime constant and character string constant . . . . . . . . . . . 156
6-25 Examples of timestamp precision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
6-26 Timestamp assignment example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
6-27 Timestamp comparison example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
6-28 Example of timestamp precision of the result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
6-29 CURRENT TIMESTAMP reference examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
6-30 EXTRACT scalar function example for timestamp . . . . . . . . . . . . . . . . . . . . . . . . . . 161
6-31 SECOND scalar function example for timestamp . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
6-32 LENGTH scalar function example for timestamp . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
6-33 MICROSECOND scalar function example for timestamp . . . . . . . . . . . . . . . . . . . . . 162
6-34 TIMESTAMP scalar function example for timestamp . . . . . . . . . . . . . . . . . . . . . . . . 162
6-35 TIMESTAMP declaration examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
6-36 Actual and reserved lengths of a TIMESTAMP WITH TIME ZONE . . . . . . . . . . . . . 169
6-37 Examples of string representation of a TIMESTAMP WITH TIME ZONE. . . . . . . . . 169
6-38 Promotion of timestamp data type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
6-39 Example of TIMESTAMP WITH TIME ZONE casts . . . . . . . . . . . . . . . . . . . . . . . . . 171
6-40 Timestamp with time zone comparison examples . . . . . . . . . . . . . . . . . . . . . . . . . . 172
6-41 Result data type when operation involves TIMESTAMP WITH TIME ZONE . . . . . . 173
6-42 CURRENT TIMESTAMP WITH TIME ZONE examples . . . . . . . . . . . . . . . . . . . . . . 174
6-43 CURRENT TIMESTAMP: Current and implicit time zone. . . . . . . . . . . . . . . . . . . . . 175
6-44 SESSION TIME ZONE example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6-45 Time zone specific expression examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
6-46 Timestamp with time zone arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
6-47 TIMESTAMP_TZ invocation examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
6-48 Invocations of EXTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
6-49 TRYOLAP table definition and its data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
6-50 GROUP BY versus PARTITION BY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
6-51 ORDER BY ordering in partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
6-52 Aggregation using ROWS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
6-53 Sparse input data effect on physical aggregation grouping . . . . . . . . . . . . . . . . . . . 190
6-54 Duplicate rows effect on physical aggregation grouping: Query. . . . . . . . . . . . . . . . 191
6-55 Duplicate rows effect on physical aggregation grouping: Result 1 . . . . . . . . . . . . . . 191
6-56 Duplicate rows effect on physical aggregation grouping: Result 2 . . . . . . . . . . . . . . 191
6-57 Aggregation using RANGE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
6-58 Sparse input data effect on logical aggregation grouping. . . . . . . . . . . . . . . . . . . . . 193
6-59 Duplicate rows effect on logical aggregation grouping . . . . . . . . . . . . . . . . . . . . . . . 193
6-60 OLAP-specification aggregation group boundary example 1 . . . . . . . . . . . . . . . . . . 196
6-61 OLAP-specification aggregation group boundary example 2 . . . . . . . . . . . . . . . . . . 196
6-62 OLAP-specification aggregation group boundary example 3 . . . . . . . . . . . . . . . . . . 197
6-63 OLAP-specification aggregation group boundary example 4 . . . . . . . . . . . . . . . . . . 197
6-64 OLAP-specification aggregation group boundary example 5 . . . . . . . . . . . . . . . . . . 198
6-65 OLAP-specification aggregation group boundary example 6 . . . . . . . . . . . . . . . . . . 198
6-66 OLAP-specification aggregation group boundary example 7 . . . . . . . . . . . . . . . . . . 199
6-67 OLAP-specification aggregation group boundary example 8 . . . . . . . . . . . . . . . . . . 199
6-68 OLAP-specification aggregation group boundary example 9 . . . . . . . . . . . . . . . . . . 200
6-69 OLAP-specification aggregation group boundary example 10 . . . . . . . . . . . . . . . . . 200
6-70 OLAP-specification aggregation group boundary example 11 . . . . . . . . . . . . . . . . . 200
6-71 OLAP-specification aggregation group boundary example 12 . . . . . . . . . . . . . . . . . 201
7-1 SYSTEM_TIME definition for a table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
7-2 Creating a system-period temporal table and enabling data versioning . . . . . . . . . . . 206
7-3 Example of SYSIBM.SYSTABLES data versioning information . . . . . . . . . . . . . . . . 207
7-4 Enabling system-period data versioning for an existing table. . . . . . . . . . . . . . . . . . . 207
7-5 System-period data versioning with generated columns. . . . . . . . . . . . . . . . . . . . . . . 210
7-6 DML with system-period data versioning: INSERT. . . . . . . . . . . . . . . . . . . . . . . . . . . 211
7-7 DML with system-period data versioning: Update. . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
7-8 DML with system-period data versioning: Delete . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
7-9 Example of FOR SYSTEM_TIME AS OF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
7-10 Example of FOR SYSTEM_TIME FROM... TO. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
7-11 Example of FOR SYSTEM_TIME BETWEEN... AND. . . . . . . . . . . . . . . . . . . . . . . . 215
7-12 BUSINESS_TIME period definition for a table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
7-13 Examples of BUSINESS_TIME WITHOUT OVERLAPS clause. . . . . . . . . . . . . . . . 216
7-14 Example of FOR BUSINESS_TIME clause . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
7-15 Example of FOR PORTION OF BUSINESS_TIME semantics: Case 1 . . . . . . . . . . 218
7-16 Example of FOR PORTION OF BUSINESS_TIME semantics: Case 2 . . . . . . . . . . 218
7-17 Example of FOR PORTION OF BUSINESS_TIME semantics: Case 3 . . . . . . . . . . 219
7-18 Example of FOR PORTION OF BUSINESS_TIME semantics: Case 4 . . . . . . . . . . 219
7-19 FREE QUERY Example 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
7-20 FREE QUERY Example 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
7-21 FREE QUERY Example 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
7-22 Extended indicator variable example: Table declaration . . . . . . . . . . . . . . . . . . . . . 233
7-23 Extended indicator variable example: C application for INSERT . . . . . . . . . . . . . . . 233
7-24 Extended indicator variable example: C application for UPDATE. . . . . . . . . . . . . . . 234
7-25 Link-editing DSNULI to your application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
8-1 Iteration over results using the XMLTABLE function . . . . . . . . . . . . . . . . . . . . . . . . . 245
8-2 Sorting values from an XML document . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
8-3 Stored XML documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
8-4 Inserting values returned from XMLTABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
8-5 Returning one row for each occurrence of phone item as a string . . . . . . . . . . . . . . . 247
8-6 Returning one row for each occurrence of phone item as an XML document . . . . . . 248
8-7 Specifying a default value for a column in the result table . . . . . . . . . . . . . . . . . . . . . 249
8-8 Specifying an ordinality column in a result table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
8-9 The XMLCAST specification: Implicit casting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
8-10 The XMLCAST specification: Explicit casting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
8-11 XML index usage by join predicates example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
8-12 XML index usage by join predicates example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
8-13 Query example using function fn:exists() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
8-14 Query example using function fn:not() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
8-15 Query example using function fn:upper-case() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
8-16 Query example using function fn:starts-with(). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
8-17 Query example using function fn:substring() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
8-18 Query example using XMLTABLE (original query) . . . . . . . . . . . . . . . . . . . . . . . . . . 255
8-19 Query example using XMLTABLE (transformed query) . . . . . . . . . . . . . . . . . . . . . . 255
8-20 Query example using XMLTABLE with WHERE clause (original query) . . . . . . . . . 256
8-21 Query example using XMLTABLE with WHERE clause (First transformation). . . . . 256
8-22 Query example using XMLTABLE with WHERE clause (Second transformation) . . 256
8-23 Specify XML type modifier for XML column at create time . . . . . . . . . . . . . . . . . . . . 260
8-24 Table definition without XML type modifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
8-25 Specify XML type modifier for XML column at alter time . . . . . . . . . . . . . . . . . . . . . 260
8-26 Add an XML schema to the XML type modifier. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
8-27 Reset XML type modifier for XML column at alter time. . . . . . . . . . . . . . . . . . . . . . . 261
8-28 Identify an XML schema by target namespace and schema location. . . . . . . . . . . . 261
8-29 Identify an XML schema by target namespace. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
8-30 No namespace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
8-31 Specifying global element name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
8-32 Schema selection for validation from an XML type modifier: Example 1 . . . . . . . . . 265
8-33 Schema selection for validation from an XML type modifier: Example 2 . . . . . . . . . 265
8-34 Schema selection for validation from an XML type modifier: Example 3 . . . . . . . . . 265
8-35 Schema selection for validation from an XML type modifier: Example 4 . . . . . . . . . 266
8-36 Schema selection for validation for DSN_XMLVALIDATE: Example 1. . . . . . . . . . . 267
8-37 Schema selection for validation for DSN_XMLVALIDATE: Example 2. . . . . . . . . . . 267
8-38 Schema selection for validation for DSN_XMLVALIDATE: Example 3. . . . . . . . . . . 267
8-39 Schema selection for validation for DSN_XMLVALIDATE: Example 4. . . . . . . . . . . 268
8-40 Search for documents not validated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
8-41 Retrieve target namespaces and XML schema names used for validation . . . . . . . 269
8-42 CHECK DATA example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
8-43 Self referencing UPDATE of an XML column . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
8-44 XML data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
8-45 JDBC application example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
8-46 COBOL example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
8-47 Row filtering using XMLEXISTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
8-48 Application using JDBC 4.0 (fetch XML as SQLXML type). . . . . . . . . . . . . . . . . . . . 288
8-49 Application using JDBC 4.0 (insert and update XML using SQLXML) . . . . . . . . . . . 289
8-50 Samples of the time zone adjustment functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
8-51 User-defined SQL scalar function using XML variable . . . . . . . . . . . . . . . . . . . . . . . 302
8-52 User-defined SQL table function using XML variable . . . . . . . . . . . . . . . . . . . . . . . . 304
9-1 REBIND job example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
10-1 Audit all SQL for multiple tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
10-2 Audit policy for category SYSADMIN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
10-3 Mark audit policy for automatic start during DB2 startup . . . . . . . . . . . . . . . . . . . . . 343
10-4 AUD dynamic SQL for auditing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
10-5 AUD static SQL for auditing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
10-6 Create SQL statement auditing policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
10-7 Start SQL statement auditing policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
10-8 Display SQL statement audit policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
10-9 Stop SQL statement audit policy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
10-10 OMEGAMON PE record trace for dynamic SQL. . . . . . . . . . . . . . . . . . . . . . . . . . . 354
10-11 SECADM to revoke privileges granted by others . . . . . . . . . . . . . . . . . . . . . . . . . . 363
10-12 SECMAINT audit policy - grant - revoke auditing . . . . . . . . . . . . . . . . . . . . . . . . . . 363
10-13 SECMAINT audit policy - grant DATAACCESS SQL . . . . . . . . . . . . . . . . . . . . . . . 363
10-14 Grant or revoke system DBADM privilege . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
10-15 Grant system DBADM WITHOUT DATAACCESS WITHOUT ACCESSCTRL. . . . 365
10-16 DBADMIN audit policy - system DBADM auditing . . . . . . . . . . . . . . . . . . . . . . . . . 367
10-17 DBADMIN audit policy - create table by system DBADM. . . . . . . . . . . . . . . . . . . . 367
10-18 Grant or revoke the DATAACCESS privilege . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
10-19 DBADMIN audit policy: DATAACCESS auditing. . . . . . . . . . . . . . . . . . . . . . . . . . . 369
10-20 DBADMIN audit policy - query a table by DATAACCESS authority . . . . . . . . . . . . 369
10-21 Grant and revoke the ACCESSCTRL privilege. . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
10-22 SECADM to revoke privileges granted by others . . . . . . . . . . . . . . . . . . . . . . . . . . 370
10-23 DBADMIN audit policy: ACCESSCTRL auditing. . . . . . . . . . . . . . . . . . . . . . . . . . . 371
10-24 DBADMIN audit policy: Grant privilege by ACCESSCTRL authority . . . . . . . . . . . 371
10-25 Grant, revoke the explain privilege . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
10-26 EXPLAIN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
10-27 Grant, revoke SQLADM privilege . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
10-28 DBADMIN audit policy: SQLADM auditing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
10-29 DBADMIN audit policy: Query table by SQLADM authority . . . . . . . . . . . . . . . . . . 376
10-30 SQLADM marking UDF as system defined through dummy alter. . . . . . . . . . . . . . 382
10-31 Activate row access control on customer table. . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
10-32 Row permission object RA01_CUSTOMER SQL DDL. . . . . . . . . . . . . . . . . . . . . . 393
10-33 Activate column access control on customer table . . . . . . . . . . . . . . . . . . . . . . . . . 395
10-34 Deactivate row access control on customer table. . . . . . . . . . . . . . . . . . . . . . . . . . 396
10-35 Column mask INCOME_BRANCH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
10-36 RA02_CUSTOMERS row permission referencing SYSIBM.SYSDUMMY1 . . . . . . 402
10-37 RA02_CUSTOMERS explain a query accessing the customer table . . . . . . . . . . . 402
10-38 Change the password phrase using DB2 CLP . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
10-39 Configure DB2 communications database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
10-40 DSNLEUSR stored procedure invocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
10-41 Shell script for invoking the TestJDBC application . . . . . . . . . . . . . . . . . . . . . . . . . 415
10-42 COBOL logic for connecting to a remote location. . . . . . . . . . . . . . . . . . . . . . . . . . 415
10-43 Activate RACF class IDIDMAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
10-44 RACMAP MAP command to add identity mapping. . . . . . . . . . . . . . . . . . . . . . . . . 418
10-45 Refresh IDIDMAP class profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
10-46 Trusted context definition for identity propagation . . . . . . . . . . . . . . . . . . . . . . . . . 419
11-1 Sample COPY FLASHCOPY YES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
11-2 Sample COPY FLASHCOPY YES job output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
11-3 Invalid name error message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
11-4 COPYTOCOPY messages for FlashCopy image copy input . . . . . . . . . . . . . . . . . . 431
11-5 Messages from RECOVER utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
11-6 First part of FLASHCOPY CONSISTENT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
11-7 Messages for FlashCopy image copy consistency processing. . . . . . . . . . . . . . . . . 436
11-8 Error message you get when no SYSCOPY DD specified . . . . . . . . . . . . . . . . . . . . 439
11-9 DSNU552I trying to recover TS in ICOPY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
11-10 REBUILD INDEX FLASHCOPY YES output extract. . . . . . . . . . . . . . . . . . . . . . . . 442
11-11 RACF error message when trying to involve FlashCopy in recover . . . . . . . . . . . . 445
11-12 RUNSTATS PROFILE utility execution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
11-13 RECOVER TABLESPACE ... BACKOUT YES job output . . . . . . . . . . . . . . . . . . . 450
11-14 Error message preventing BACKOUT YES recovery . . . . . . . . . . . . . . . . . . . . . . . 452
11-15 REORG ... AUX YES job output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
11-16 REORG ... AUX YES and TEMPLATE job output . . . . . . . . . . . . . . . . . . . . . . . . . 456
11-17 Unsuccessful attempt to work with FORCE READERS . . . . . . . . . . . . . . . . . . . . . 463
11-18 -DIS DB ... LOCKS for database object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
11-19 -DIS THREAD(*) output for agent 543 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
11-20 REPORT RECOVERY output with SLBs available. . . . . . . . . . . . . . . . . . . . . . . . . 469
11-21 REPORT RECOVERY utility output new section . . . . . . . . . . . . . . . . . . . . . . . . . . 470
12-1 Message if fallback SPE is not applied . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
12-2 Messages to indicate whether BSDS was converted . . . . . . . . . . . . . . . . . . . . . . . . 477
12-3 Expanded format BSDS required to start DB2 10. . . . . . . . . . . . . . . . . . . . . . . . . . . 477
12-4 Output of successful run of DSNJCNVB conversion program . . . . . . . . . . . . . . . . . 478
12-5 Output of DSNJCNVB BSDS conversion program on an already converted BSDS. 478
12-6 CHAR casting in DB2 9 NFM and DB2 10 CM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
12-7 Sample call to DSNTXTA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
12-8 Warning message DSNT092I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
12-9 Sample call to DSNTXTB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
12-10 Output of job DSNTIJXA program DSNTXTA to convert all schemas . . . . . . . . . . 514
12-11 Failure if no SMS definitions for DB2 catalog and directory are defined. . . . . . . . . 517
12-12 Output of CATMAINT job DSNTIJTC DB2 V8 NFM to DB2 10 CM . . . . . . . . . . . . 517
12-13 Execution of DSNTRIN with PREVIEW option in INSTALL mode . . . . . . . . . . . . . 521
12-14 Sample output from DSNTIJRW. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
12-15 Program DSNTWLMB needs to be APF-authorized to activate WLM changes . . . 528
13-1 OMEGAMON PE statistics report sample on DBATs . . . . . . . . . . . . . . . . . . . . . . . . 550
13-2 Unloading in spanned format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
13-3 Query to identify indexes for possible INCLUDE . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
13-4 Possible INCLUDE candidates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
A-1 Changes to IFCID 002 - Dynamic statement cache . . . . . . . . . . . . . . . . . . . . . . . . . . 616
A-2 IFCID 002 - Currently committed data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
A-3 IFCID 013 and IFCID 014. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
A-4 Changes to IFCID 106 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
A-5 Changes to IFCID 225 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
A-6 Changes to IFCID 267 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
A-7 Changes to IFCID 316 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
A-8 New IFCID 357. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
A-9 New IFCID 358. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
A-10 New IFCID 359 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
A-11 New IFCID 363 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
Tables
2-1 Eligibility for zHPF of DB2 I/O types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3-1 EAV enablement and EAS eligibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4-1 SYSIBM.SYSPENDINGDDL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4-2 Maximum DSSIZE depending on table space attributes . . . . . . . . . . . . . . . . . . . . . . . 75
4-3 Number of partitions by DSSIZE and page size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
4-4 New SYSIBM.SYSCOPY information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4-5 SYSIBM.SYSPENDINGOBJECTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5-1 DB2 9 GRECP/LPL maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
6-1 CREATE and ALTER FUNCTION options resulting in rebind when changed . . . . . . 143
6-2 Implicit cast target data types and length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
6-3 Datetime constant formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
6-4 Formats for string representation of dates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
6-5 Formats for string representation of times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
6-6 Declarations generated by DCLGEN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
6-7 Equivalent SQL and Assembly data types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
6-8 Equivalent SQL and C data types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
6-9 Equivalent SQL and COBOL data types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
6-10 Equivalent SQL and PL/I data types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
6-11 Declarations generated by DCLGEN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
6-12 Equivalent SQL and Assembly data types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
6-13 Equivalent SQL and C data types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
6-14 Equivalent SQL and COBOL data types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
6-15 Equivalent SQL and PL/I data types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
6-16 Equivalent SQL and Java data types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
6-17 Supported aggregation group boundary combinations . . . . . . . . . . . . . . . . . . . . . . . 195
7-1 Values for extended indicator variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
7-2 Catalog changes for CONCURRENTACCESSRESOLUTION. . . . . . . . . . . . . . . . . . 239
8-1 Contents of CUSTADDR table after insert of a result table generated by XMLTABLE . 247
8-2 Result table of a query using XMLTABLE to retrieve multiple occurrences of an item . 248
8-3 Result table of a query using XMLTABLE to retrieve multiple occurrences of an item . 248
8-4 Result table from a query in which XMLTABLE has a default value for an item. . . . . . 249
8-5 Result table from a query in which XMLTABLE has an ordinality column . . . . . . . . . . 249
8-6 CHECK DATA invocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
8-7 Data in table T1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
8-8 Return table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
9-1 Categories for system group monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
10-1 Audit categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
10-2 The SYSIBM.SYSAUDITPOLICIES catalog table . . . . . . . . . . . . . . . . . . . . . . . . . . 340
10-3 New and changed IFCIDs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
10-4 Catalog table changes for SYSIBM.SYSUSERAUTH. . . . . . . . . . . . . . . . . . . . . . . . 359
10-5 Privileges held by SECADM authority. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
10-6 DSNZPARMs for SECADM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
10-7 SECADM authority DSNZPARMs settings of our sample scenario . . . . . . . . . . . . . 362
10-8 System DBADM privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
10-9 Additional privileges required by system DBADM. . . . . . . . . . . . . . . . . . . . . . . . . . . 366
10-10 DATAACCESS implicitly held grantable privileges . . . . . . . . . . . . . . . . . . . . . . . . . 368
10-11 ACCESSCTRL implicitly held grantable privileges . . . . . . . . . . . . . . . . . . . . . . . . . 370
10-12 Privileges of EXPLAIN privilege . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
10-13 DB2 RACF profiles for DB2 10 new authorities . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
10-14 Default behavior for REVOKE_DEPENDING_PRIVILEGES . . . . . . . . . . . . . . . . . 384
10-15 DSN_FUNCTION_TABLE change for row and column access control . . . . . . . . . 401
10-16 DSN_PREDICAT_TABLE changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
10-17 DSN_STRUCT_TABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
10-18 Access control changes for SYSIBM.SYSCOLUMNS . . . . . . . . . . . . . . . . . . . . . . 407
10-19 Access control new table SYSIBM.SYSCONTROLS . . . . . . . . . . . . . . . . . . . . . . . 407
10-20 Access control changes for SYSIBM.SYSDEPENDENCIES . . . . . . . . . . . . . . . . . 409
10-21 Access control changes to SYSIBM.SYSOBJROLEDEP. . . . . . . . . . . . . . . . . . . . 409
10-22 Access control changes to SYSIBM.SYSROUTINES. . . . . . . . . . . . . . . . . . . . . . . 409
10-23 Access control changes to SYSIBM.SYSTABLES . . . . . . . . . . . . . . . . . . . . . . . . . 410
10-24 Access control changes to SYSIBM.SYSTRIGGERS. . . . . . . . . . . . . . . . . . . . . . . 410
10-25 Access control changes to SYSIBM.SYSUSERAUTH . . . . . . . . . . . . . . . . . . . . . . 410
10-26 Changed IFCIDs for row and column access control . . . . . . . . . . . . . . . . . . . . . . . 411
10-27 SYSIBM.USERNAMES change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
11-1 DSNZPARM, FLASHCOPY keyword, resulting copy . . . . . . . . . . . . . . . . . . . . . . . . 430
11-2 REORG TABLESPACE ... AUX option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
11-3 REORG TABLESPACE ... AUX option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
11-4 Available SHRLEVELs for LOB REORG depending on DB2 versions . . . . . . . . . . . 458
11-5 DBET state by function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
12-1 System software requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
12-2 Catalog and directory tables and their partition-by-growth table spaces . . . . . . . . . 493
12-3 Catalog tables and table spaces in DB2 10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
12-4 Removed system parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
12-5 Deprecated system parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
12-6 System parameters that are deprecated and have new default values . . . . . . . . . . 502
12-7 System parameters with new maximums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
12-8 System parameters with new default settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
12-9 DB2 10 new system parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
12-10 DPSEGSZ and type of table space created . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
12-11 DSN_PTASK_TABLE column name corrections . . . . . . . . . . . . . . . . . . . . . . . . . . 516
12-12 New or revised installation or migration verification jobs . . . . . . . . . . . . . . . . . . . . 522
12-13 Revised sample job DSNTEJ2H: Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
12-14 DB2-supplied stored procedures and WLM environment setup . . . . . . . . . . . . . . . 523
12-15 DB2 default WLM environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
13-1 PLAN_TABLE extract for range list access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
13-2 PLAN_TABLE extract for IN-list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
13-3 Column LITERAL_REPL values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
13-4 Catalog table changes for inline LOBs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
13-5 Catalog table changes for hash access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
13-6 Catalog table changes for INCLUDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
13-7 Column DRIVETYPE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
13-8 DSN_PGROUP_TABLE changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
13-9 Sort merge join PLAN_TABLE changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
B-1 DB2 10 current function-related APARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
B-2 z/OS DB2-related APARs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
B-3 OMEGAMON PE GA and DB2 10 related APARs . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX
BladeCenter
CICS
Cognos
DataPower
DB2 Connect
DB2 Universal Database
DB2
developerWorks
Distributed Relational Database Architecture
DRDA
DS8000
Enterprise Storage Server
Enterprise Workload Manager
FICON
FlashCopy
GDPS
Geographically Dispersed Parallel Sysplex
IBM
IMS
Language Environment
MQSeries
MVS
Net.Data
OMEGAMON
OmniFind
Optim
OS/390
Parallel Sysplex
POWER6+
POWER7
PowerVM
PR/SM
pureXML
QMF
Query Management Facility
RACF
Redbooks
Redbooks (logo)
Resource Measurement Facility
RMF
S/390
Service Request Manager
Solid
System Storage
System z10
System z9
System z
Tivoli
TotalStorage
VTAM
WebSphere
z/Architecture
z/OS
z/VM
z10
z9
zSeries
The following terms are trademarks of other companies:
Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation and/or
its affiliates.
SAP, and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other
countries.
Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other
countries, or both.
Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other
countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Preface
IBM DB2 Version 10.1 for z/OS (DB2 10 for z/OS or just DB2 10 throughout this book) is
the fourteenth release of DB2 for MVS. It brings improved performance and synergy with
the System z hardware and more opportunities to drive business value in the following
areas:
Cost savings and compliance through optimized innovations
DB2 10 delivers value in this area by achieving up to 10% CPU savings for traditional
workloads and up to 20% CPU savings for nontraditional workloads, depending on the
environments. Synergy with other IBM System z platform components reduces CPU
use by taking advantage of the latest processor improvements and z/OS
enhancements.
Streamline security and regulatory compliance through the separation of roles between
security and data administrators, column level security access, and added auditing
capabilities.
Business insight innovations
Productivity improvements are provided by new functions available for pureXML, data
warehousing, and traditional online transaction processing (OLTP) applications
Enhanced support for key business partners that allows you to get more from your data
in critical business disciplines such as ERP
Bitemporal support for applications that need to correlate the validity of data with time.
Business resiliency innovations
Database on demand capabilities to ensure that information design can be changed
dynamically, often without database outages
DB2 operations and utility improvements enhancing performance, usability, and
availability by exploiting disk storage technology.
The DB2 10 environment is available either for brand new installations of DB2, or for
migrations from DB2 9 for z/OS or from DB2 UDB for z/OS Version 8 subsystems.
This IBM Redbooks publication introduces the enhancements made available with DB2 10
for z/OS. The contents help you understand the new functions and performance
enhancements, start planning for exploiting the key new capabilities, and justify the
investment in installing DB2 10 or migrating to it, either from DB2 9 or by skipping a release.
The team who wrote this book
This book was produced by a team of specialists from around the world working at the Silicon
Valley Lab, San Jose.
Paolo Bruni is a DB2 Information Management Project Leader at the International Technical
Support Organization based in the Silicon Valley Lab. He authored several Redbooks
publications about DB2 for z/OS and related tools and conducts workshops and seminars
worldwide. During Paolo's many years with IBM, in development and in the field, his work has
been mostly related to database systems.
Rafael Garcia has been in the IT business for 28 years and has held various positions. He
was a COBOL and CICS Developer, a DOS/VSE Systems Programmer, an Application
Development Manager, and a DB2 Applications DBA for one of the top 10 banks in the U.S.
Rafael has 18 years of experience working with DB2 for z/OS. For the last 13 years, he has
been a field DB2 Technical Specialist working for the IBM Silicon Valley Laboratory and
supporting DB2 for z/OS customers across various industries, including migrations to data
sharing, release migration support, and assistance in resolution of customer critical issues.
He has an Associate's Degree in Arts and an Associate's Degree in Science in Business Data
Processing from Miami-Dade Community College. He has co-authored several IBM
Redbooks publications, including DB2 for z/OS Application Programming Topics, SG24-6300,
DB2 UDB for z/OS Version 8: Everything You Ever Wanted to Know, ... and More,
SG24-6079, DB2 UDB for z/OS Version 8 Technical Preview, SG24-6871, and DB2 9 for z/OS
Technical Overview, SG24-7330.
Sabine Kaschta is a DB2 Specialist working for the IBM Software Group in Germany.
Currently, she primarily works as a Segment Skills Planner for the worldwide curriculum for
DB2 for z/OS training. She also works on course development and teaches several DB2 for
z/OS classes worldwide. Sabine has 17 years of experience working with DB2. Before joining
IBM in 1998, she worked for a third-party vendor providing second-level support for DB2
utilities. She is experienced in DB2 system programming and client/server implementations in
the insurance industry in Germany. She co-authored several IBM Redbooks publications,
including DB2 UDB for OS/390 and Continuous Availability, SG24-5486, Cross-Platform DB2
Distributed Stored Procedures: Building and Debugging, SG24-5485, IBM TotalStorage
Migration Guide for the SAP User, SG24-6400, DB2 UDB for z/OS Version 8: Everything You
Ever Wanted to Know, ... and More, SG24-6079, DB2 9 for z/OS Technical Overview,
SG24-7330, DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond, SG24-7604,
and DB2 9 for z/OS: Deploying SOA Solutions, SG24-7663.
Josef Klitsch is a Senior IT Specialist for z/OS Problem Determination tools with IBM
Software Group, Switzerland. After he joined IBM in 2001, he provided DB2 consultancy and
technical support to Swiss DB2 for z/OS customers and worked as a DB2 subject matter
expert for IBM China and as a DB2 for z/OS technical resource for IBM France in Montpellier.
Prior to his IBM employment, Josef worked, for more than 15 years, for several European
customers as an Application Developer, Database Administrator, and DB2 systems
programmer with a focus on DB2 for z/OS and its interfaces. His preferred area of expertise in
DB2 for z/OS is stored procedures programming and administration. He co-authored the IBM
Redbooks publication DB2 9 for z/OS: Deploying SOA Solutions, SG24-7663.
Ravi Kumar is a Senior Instructor and Specialist for DB2 with IBM Software Group, Australia.
He has about 25 years of experience in DB2. He was on assignment at the International
Technical Support Organization, San Jose Center, as a Data Management Specialist from
1994 to 1997. He has coauthored many IBM Redbooks publications, including DB2 UDB for
z/OS Version 8: Everything You Ever Wanted to Know, ... and More, SG24-6079 and DB2 9
for z/OS Technical Overview, SG24-7330. He is currently on virtual assignment as a Course
Developer in the Education Planning and Development team, Information Management, IBM
Software Group, U.S.
Andrei Lurie is a Senior Software Engineer in IBM Silicon Valley Lab, San Jose. He has 10
years of experience in developing DB2 for z/OS and has participated in the development of
many major line items for DB2 since DB2 for z/OS Version 8. Andrei's main areas of expertise
are in SQL and application development. Currently, he is a technical lead for Structure
Generator and Runtime components of the DB2 RDS area. He holds a master's degree in
Computer Science from the San Jose State University, where he enjoys teaching DB2 for
z/OS topics as a guest lecturer from time to time.
Michael Parbs is a Senior DB2 Specialist with IBM Global Technology Services A/NZ, from
Canberra, Australia. He has over 20 years of experience with DB2, primarily on the z/OS
platform. Before joining IBM, he worked in the public sector in Australia as a DB2 DBA and
DB2 Systems Programmer. Since joining IBM, Michael has worked as a subject matter expert
with a number of DB2 customers in both Australia and China. Michael's main areas of
expertise are data sharing and performance and tuning, but his skills include database
administration and distributed processing. Michael is an IBM Certified IT Specialist in Data
Management and has coauthored several IBM Redbooks publications, including DB2 for
MVS/ESA Version 4 Data Sharing Implementation, SG24-4791, DB2 UDB Server for OS/390
and z/OS Version 7 Presentation Guide, SG24-6121, DB2 UDB for z/OS Version 8:
Everything You Ever Wanted to Know, ... and More, SG24-6079, and DB2 UDB for z/OS
Version 8 Performance Topics, SG24-6465.
Rajesh Ramachandran is a Senior Software Engineer in IBM System z e-Business Services.
He currently works in the Design Center in Poughkeepsie as an IT Architect and DB2
assignee. He has 14 years of experience in application development on various platforms,
which include z/OS, UNIX, and Linux using COBOL, Java, CICS, and Forte.
Acknowledgments
The authors thank Dave Beulke for his contribution in written content. David is an
internationally recognized DB2 consultant, author, and instructor. He is known for his
extensive expertise in database performance, data warehouses, and Internet applications. He
is currently a member of the IBM DB2 Gold Consultant program, an IBM Information
Champion, past president of the International DB2 Users Group (IDUG), coauthor of the IBM
DB2 V8 and V7 z/OS Administration and the Business Intelligence Certification exams,
columnist for the IBM Data Management Magazine, former instructor for The Data
Warehouse Institute (TDWI), and former editor of the IDUG Solutions Journal. With over 22
years of experience working with mainframe, UNIX, and Windows environments, he redesigns
and tunes systems, databases, and applications, dramatically reducing CPU demand and
saving his clients from CPU upgrades and millions in processing charges.
Thanks to the following people for their contributions to this project:
Rich Conway
Bob Haimowitz
Emma Jacobs
IBM International Technical Support Organization
Saghi Amirsoleymani
Jeff Berger
Chuck Bonner
Mengchu Cai
Gayathiri Chandran
Julie Chen
Chris Crone
Jason Cu
Marko Dimitrijevic
Margaret Dong
Cathy Drummond
Thomas Eng
Bill Franklin
Craig Friske
Shivram Ganduri
James Guo
Keith Howell
Akiko Hoshikawa
David Kuang
Laura Kunioka-Weis
Jeff Josten
Andrew Lai
Dave Levish
Kim Lyle
Bob Lyle
John Lyle
Tom Majithia
Bruce McAlister
Pat Malone
Roger Miller
Chris Munson
Ka-Chun Ng
Betsy Pehrson
Jim Pickel
Terry Purcell
Haakon Robers
Mike Shadduck
Michael Schenker
Jack Shedden
Akira Shibamija
Derek Tempongko
John Tobler
Lingyun Wang
Peter Wansch
Jay Yothers
David Zhang
Guogen Zhang
Binghui Zhong
IBM Silicon Valley Lab
Rick Butler
BMO Toronto
Heidi Arnold
Boris Charpiot
Norbert Heck
Norbert Jenninger
Joachim Pioch
Rosemarie Roehm-Frenzel
IBM Boeblingen Lab
Stephane Winter
IBM Montpellier
Chris Meyer
IBM Durham, NC USA
Christian Heimlich
IBM Frankfurt, Germany
Now you can become a published author, too!
Here's an opportunity to spotlight your skills, grow your career, and become a published
author, all at the same time! Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks publications
Find us on Facebook:
http://www.facebook.com/IBMRedbooks
Follow us on Twitter:
http://twitter.com/ibmredbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
Summary of changes
This section describes the technical changes made in this edition of the book and in previous
editions. This edition might also include minor corrections and editorial changes that are not
identified.
Summary of Changes
for SG24-7892-00
for DB2 10 for z/OS Technical Overview
as created or updated on August 22, 2013.
December 2010, First Edition
The revisions of this First Edition, first published on December 30, 2010, reflect the changes
and additions described below.
March 2011, First Update
This revision reflects the addition, deletion, or modification of new and changed information
described below.
Changed information
Corrections in 3.9, "Compression of SMF records" on page 64.
Removed a wrong sentence in 4.1.1, "UTS with DB2 9: Background information" on
page 68.
Removed the previous section 4.1.9, "Convert a partition-by-growth table space to a hash
table space", which only cross-referenced 13.15, "Hash access" on page 575.
Corrections in 7.5, "Access to currently committed data" on page 238.
Updated section 10.2, "More granular system authorities and privileges" on page 354.
Corrections in text in 11.4.1, "Improvements to online REORG for base table spaces with
LOBs" on page 452.
Corrections in 13.2.1, "Safe query optimization" on page 537.
Corrections in 13.2.2, "RID pool enhancements" on page 537.
Corrections in query in 13.2.5, "Aggressive merge for views and table expressions" on
page 542.
Updated section 13.7, "Referential integrity checking improvement" on page 560.
Updated text of disclaimer in Example 13-3 on page 588.
Replaced Figure 13-17 on page 595.
Removed a section on SMF compression in 13.23, "Enhanced instrumentation" on
page 598, because it is already described in 3.9, "Compression of SMF records" on
page 64.
Updated the referenced bibliography in "Related publications" on page 641.
New information
Added an Attention box in 2.12.3, "DB2 10 buffer pool prefetch and deferred write
activities".
Added text in 3.9, "Compression of SMF records" on page 64.
Added text in 4.1.2, "UTS with DB2 10: The ALTER options" on page 69.
Added section 5.2, "Delete data sharing member" on page 112.
Added PTFs to 9.1, "DDF availability" on page 310.
Added section 9.8, "Return to client result sets" on page 328.
Added section "Secure Audit trace" on page 344.
Added text in section 11.1, "Support FlashCopy enhancements" on page 426.
Added text in 11.4.3, "Improved usability of REORG of disjoint partition ranges" on
page 458.
Added section 12.2, "Some release incompatibilities" on page 479.
Added Example 12-6 on page 479.
Added text in Table 12-8 on page 503.
Added text in Table 12-9 on page 505.
Added the whole Appendix B, "Summary of relevant maintenance" on page 627.
December 2011, Second Update
This revision reflects the addition, deletion, or modification of new and changed information
described below.
Changed information
Correction in 3.1.1, "Support for full 64-bit run time" on page 52.
Updated APARs in Appendix B, "Summary of relevant maintenance" on page 627.
New information
Added section A.11, "IFCID 360" on page 623.
Added APARs in Appendix B, "Summary of relevant maintenance" on page 627.
Added section B.3, "OMEGAMON PE APARs" on page 635.
August 2013, Third Update
This revision reflects the addition, deletion, or modification of new and changed information
described below.
Changed information
Correction in 13.5, "Dynamic statement cache enhancements" on page 553 for JCC driver
literal support enabling.
Updated APARs in Appendix B, "Summary of relevant maintenance" on page 627.
New information
Added APARs in Appendix B, "Summary of relevant maintenance" on page 627.
Chapter 1. DB2 10 for z/OS at a glance
Today's organizations need to lower operating costs by reducing CPU cycle times while
building and maintaining a strong foundation for service-oriented architecture (SOA) and XML
initiatives. Database administrators (DBAs) need improved database performance,
scalability, and availability, while also reducing the effort of memory management, so that
managing growth is a simpler process. DB2 10 for z/OS provides innovations in the following key areas:
Improved operational efficiencies
Unsurpassed resiliency for business-critical information
Rapid application and warehouse deployment for business growth
Enhanced query and reporting facilities
In this chapter, we provide a general overview of DB2 10 for z/OS. We discuss the following
topics:
Executive summary
Benefits of DB2 10
Subsystem enhancements
Application functions
Operation and performance enhancements
1.1 Executive summary
DB2 10 for z/OS is a tremendous step forward in database technology because of its
improvements in performance, scalability, availability, security, and application integration. Its
technology provides a clear competitive advantage through its continued improvements in
SQL, XML, and integrated business intelligence capabilities.
DB2 10 takes advantage of the recent advances in chip technology, storage devices, and
memory capacities through its extensive exploitation of System z 64-bit architecture to reduce
CPU requirements, to improve performance, and potentially to reduce the total cost of
ownership. The DB2 10 CPU reduction features provide enhanced capabilities so that
companies can analyze every factor to improve the bottom line.
Initial testing of DB2 10 shows that its many enhancements optimize the runtime environment
by reducing CPU consumption by 5-10% with rebind for traditional workloads, with an
additional possible 10% benefit using new functions, and up to 20% for new workloads. In
addition, DB2 10 handles five to 10 times more concurrent users, up to 20,000 users,
providing improvements in capacity, scalability, and overall performance for any type of large
scale application.
Capacity, scalability, and performance continue to be DB2 for z/OS strengths. Through
extensive exploitation of the System z 64-bit architecture, DB2 enhancements such as
shorter optimized processing, using solid-state disk, in-memory work file enhancements,
index insert parallelism improvements, and better SQL/XML access paths, provide many
reductions in CPU costs and performance improvements without requiring any application
changes.
DB2 10 features also enhance continuous business processing, database availability, and
overall ease of use. DB2 is continuously available due to more options with online database
changes, more concurrent utilities, and easier administration processes. DB2 10 database
systems are always available. Transactions are processed even as changes are made to the
database. These new database change capabilities combined with more options for parallel
utility execution provide a smaller concurrent window for processing and administration tasks
to streamline overall system operations.
Security, regulatory compliance, and audit capability improvements are also included in
DB2 10. Enhanced security extends the role-based model introduced in DB2 9. DB2 10
provides granular authorities that separate data access from the administration of the
application, database, and system. DB2 10 provides administration flexibility for specific
security role settings, preventing data access exposure to unauthorized applications or
administrators. This role-based security model, combined with the label-based row and
column access control and masking or encryption of sensitive information, enhances the
ultimate secure database environment for your business. All of these features provide tighter
controls, allow more security flexibility, and provide tremendous regulatory and audit
compliance capabilities.
Application integration and portability benefit from the DB2 10 SQL and XML enhancements.
DB2 10 SQL improvements further embrace the DB2 family with more enhancements for
porting other DBMS vendor products into the DB2 10 for z/OS environment. Additional
enhancements with timestamps with time zones, Java timestamp compatibility, and
timestamp 12-digit pico-second precision granularity provide unique business transaction
timestamps for every transaction in a table. These enhancements help international
companies understand all their businesses throughout all the global events of the day.
Data versioning with current and history tables is also available with DB2 10. Data versioning
provides flexibility to migrate current operational data into a historical data table based on set
time periods. Data versioning enhances the performance for the application operations and
maintains historical data for audit purposes and for better integration into regulatory
compliance architecture.
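As an illustration of the kind of definition involved (the table, column, and history table names here are hypothetical, not taken from this book), a system-period versioned table pairs a current table with a history table:

```sql
-- Sketch of a current table with a SYSTEM_TIME period; the period
-- columns are maintained by DB2, never set by the application
CREATE TABLE POLICY
  (POLICY_ID  INTEGER NOT NULL,
   COVERAGE   INTEGER,
   SYS_START  TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
   SYS_END    TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
   TRANS_ID   TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
   PERIOD SYSTEM_TIME (SYS_START, SYS_END));

CREATE TABLE POLICY_HIST LIKE POLICY;

-- After versioning is enabled, DB2 moves old row versions to the
-- history table automatically on UPDATE and DELETE
ALTER TABLE POLICY ADD VERSIONING USE HISTORY TABLE POLICY_HIST;

-- Query the data as of a point in time
SELECT COVERAGE
  FROM POLICY FOR SYSTEM_TIME AS OF TIMESTAMP '2010-06-01-00.00.00'
  WHERE POLICY_ID = 1;
```

This sketch shows only the general shape of the feature; the complete syntax and restrictions are described in the application programming chapters of this book.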
In addition, data warehouse and business intelligence capabilities are now built in directly to
the DB2 10 with the new temporal table capabilities and the SQL capabilities for calculating
moving sums and moving averages. These capabilities enhance the bottom line by integrating
business intelligence capabilities into front-line operational applications and significantly
reduce programming effort.
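The moving aggregates mentioned here are expressed through OLAP specifications in SQL. As a sketch against a hypothetical TRANSACTIONS table (all names are illustrative only):

```sql
-- Three-row moving average and a running sum per account,
-- ordered by transaction date
SELECT ACCT_ID, TX_DATE, AMOUNT,
       AVG(AMOUNT) OVER (PARTITION BY ACCT_ID ORDER BY TX_DATE
                         ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
         AS MOVING_AVG_3,
       SUM(AMOUNT) OVER (PARTITION BY ACCT_ID ORDER BY TX_DATE
                         ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
         AS RUNNING_SUM
  FROM TRANSACTIONS;
```

Because DB2 computes the window frame, the application needs no procedural logic to accumulate these values.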
DB2 10 XML improvements help application flexibility, usability, and performance. DB2 10
provides a number of important XML performance and management improvements through
the following capabilities:
Replace, delete, or insert XML document nodes
Manage multiple XML schema version documents
Use the binary pre-tokenized XML format
Additional enhancements within utilities, date time data types, and XML parameter support
within SQL functions and procedures provide application flexibility to use XML within any
application architecture solution.
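For example, node-level updates are expressed through the XMLMODIFY built-in function. The table, column, and XPath in this sketch are hypothetical:

```sql
-- Replace a single node value inside a stored XML document
-- instead of replacing the entire document
UPDATE PRODUCTS
   SET DESCRIPTION = XMLMODIFY(
         'replace value of node /product/price with 35.00')
 WHERE PRODUCT_ID = 100;
```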
DB2 10 for z/OS delivers performance, scalability, availability, and security and also improves
application integration and regulatory compliance. DB2 10 remains the technology leader and
best choice database for business systems that seek a competitive advantage.
1.2 Benefits of DB2 10
DB2 10 can reduce the total DB2 CPU demand by 5% to 20% when you take advantage of all
the enhancements. Many CPU reductions are built in directly to DB2, requiring no application
changes. Some enhancements are implemented through normal DB2 activities through
rebinding, restructuring database definitions, improving applications, and utility processing.
The CPU demand reduction features have the potential to provide significant total cost of
ownership savings based on the application mix and transaction types.
Improvements in optimization reduce costs by processing SQL automatically with more
efficient data access paths. Improvements through a range-list index scan access method,
IN-list list prefetch, more parallelism for select and index insert processing, better work file
usage, better record identifier (RID) pool overflow management, improved sequential
detection, faster log I/O, access path certainty evaluation for static SQL, and improved
distributed data facility (DDF) transaction flow all provide more efficiency without changes to
applications. These enhancements can reduce total CPU enterprise costs because of
improved efficiency in DB2 10 for z/OS.
Other enhancements require new database definitions or application programming
techniques to reduce overall costs. These enhancements, such as the hash access space
and access method, the ability to include additional columns in a unique index, consolidating
SQL by using the Attributes feature, and automatic data statistics, can reduce CPU costs
through improved access paths.
DB2 10 includes numerous performance enhancements for Large Objects (LOBs) that save
disk space for small LOBs and that provide dramatically better performance for LOB retrieval,
inserts, load, and import/export using DB2 utilities. DB2 10 can also more effectively REORG
partitions that contain LOBs.
Improved operations and availability with DB2 10 can also reduce costs. DB2 10 can
eliminate system downtime costs through the following capabilities:
Improved memory management
Ability to handle five to 10 times more users
Ability to skip locked rows
Improvements in the DB2 catalog
More online schema change capabilities
More online utilities
DB2 10 keeps the application available, even while more users, applications, database
changes, and utilities are executing in the system.
More automated processes help install, configure, and simplify the installation of the DB2 10
system. Pre-migration installation steps capture existing settings and provide appropriate
settings for the DB2 10 supplied routines and programs. These procedures can help reduce
the installation, configuration, and testing time.
The ability to migrate directly from DB2 V8 also allows you to skip the implementation of
DB2 9. Although using this method, you cannot take advantage of all the DB2 9
enhancements until you migrate to DB2 10. By skipping DB2 9, you do not have to install,
test, and implement DB2 9 as an interim step to DB2 10. You can then use all the DB2 9 and
DB2 10 enhancements after going to DB2 10 NFM. Many improvements are available in
DB2 10 CM, regardless of the release from which you started the migration.
1.3 Subsystem enhancements
As with all previous versions, DB2 10 for z/OS takes advantage of the latest improvements in
the platform. DB2 10 increases the synergy with System z hardware and software to provide
better performance, more resilience, and better function for overall improved value.
There are several new functions that are generally related to the synergy of DB2 subsystems
with the z/OS platform that can help improve scalability and availability. We describe those
functions briefly in this section. Many enhancements are related to the removal of the 2 GB
constraint in virtual storage for the DBM1 address space by allocating most of DB2 virtual
storage areas above the 2 GB bar.
Verification of real storage and ECSA/ESQA requirements still applies.
Work file enhancements
Work files are required for sorting and materialization. DB2 10 reduces work file usage for
very small sorts and adds support for large sorts and sparse index builds. These
enhancements might improve performance and reduce the number of merge passes that are
required for sort.
In the WORKFILE database, DB2 10 also supports the definition of partition-by-growth table
spaces to support in-memory tables for improved performance. This capability provides a
dedicated definition type for in-memory data and relieves the extra administration work of
defining a single table space within dedicated buffer pools.
In addition, DB2 10 expands the record length size for work files to 65,529 bytes for handling
larger record size answer sets like sorts and joins.
Buffer pool enhancements
System z has improved memory allocations, and DB2 10 takes advantage of this capability
through its buffer pool management enhancements. In previous versions, DB2 allocated the
defined size of the buffer pool at startup for use by all the associated table and index objects.
In DB2 10, the system allocates the memory for the buffer pools as the data is brought into
the buffer pool. This method keeps the buffer pool size at the minimum that is needed by
applications.
In addition, DB2 10 reduces its latch contention and provides the ability to define larger buffer
pools from current megabytes to gigabytes of memory for critical active objects. This
capability can improve system I/O rates and reduce contention.
In data sharing environments, when an object changes its state to or from inter-system
read/write interest, DB2 10 avoids scanning the buffer pool that contains that object, thereby
saving CPU time when using large buffer pools.
System z10 1 MB page size handles larger buffer pools better. Because buffer pool memory
allocations can quickly become very large, DB2 10 can use the z10 1 MB page size and keep
overall system paging down to a minimum.
Specialty engines
DB2 10 continues the trend initiated with DB2 V8 and provides additional zIIP workload
eligibility in the following areas:
DB2 10 parallelism enhancements
More queries qualify for query parallelism, which in turn introduces additional zIIP eligibility.
DB2 10 RUNSTATS utility
Portions of the RUNSTATS utility are eligible to be redirected to run on a zIIP processor.
The degree of zIIP eligibility depends entirely upon which statistics you gather.
DB2 10 buffer pool prefetch activities
Buffer pool prefetch, including dynamic prefetch, list prefetch, and sequential prefetch
activities, is 100% zIIP eligible in DB2 10. Because buffer pool prefetch activities are
asynchronous services that are not accounted to the DB2 client, they are reported in the
DB2 statistics report instead.
Deferred write activity
DB2 groups together the modified buffers that are accumulated in the buffer pool by
updates, sorts them by data set, and writes them out asynchronously. This I/O activity,
executed by DB2 for the applications, is now a candidate for zIIP.
Extended 64-bit runtime environment exploited
DB2 10 improves scalability with exploitation of the 64-bit System z environment. Exploiting
the 64-bit environment and moving 80-90% of the DB2 memory that is now below the bar
(working storage, EDMPOOL, and even some ECSA) above the 2 GB bar eliminates the main
memory constraints within the DB2 DBM1 address space for most systems.
Increase in the number of users, up to 20,000
By addressing the memory constraint in the overall system, virtual memory is no longer
critical, allowing five to 10 times more concurrent threads in a single DB2 10 member. This
increase in threads removes one key reason for additional DB2 data sharing members and
allows some consolidation of LPARs and members previously built for handling more users.
Improved data set open and close management
DB2 10 takes advantage of a new z/OS 1.12 interface to enable data sets to be opened more
quickly and to maintain more open data sets.
Improved online schema changes
Some of the best features within DB2 10 are the online schema change features. Attributes
on any DB2 table space, index, or table component are available with the ALTER statement.
Within DB2 10, the attributes list is enhanced to handle the most common activities in an
online ALTER and then REORG process. Being able to change almost any component of the
common database attributes online provides administration flexibility and application
availability for changes to your database systems.
You can use the following attributes of universal table spaces with the ALTER statement in
DB2 10:
- Data set size
- Table or index page size
- Segment size
- Page size (buffer pool)
You can also migrate old style table space definitions to the new partition-by-growth or
partition-by-range universal table spaces.
DB2 10 online schema enhancements improve adoption of the new attributes by recording
the ALTER as a pending change and then executing the changes later through an online
REORG process. The ALTER and REORG processes leave the database tables available for
application activity, while the attribute changes are applied into the system. This method of
enhancing the database eliminates downtime and provides relief for mission-critical, very
large database availability issues.
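As a sketch of this flow, a pending change might look like the following, where the table space name DB1.TS1 and the buffer pool choice are illustrative, not from the source:

```sql
-- The ALTER is recorded as a pending definition change; the table space is
-- placed in the advisory AREOR state and stays available to applications.
ALTER TABLESPACE DB1.TS1 BUFFERPOOL BP16K0;

-- The pending change is materialized later by an online REORG, for example:
-- REORG TABLESPACE DB1.TS1 SHRLEVEL CHANGE
```

Until the REORG runs, applications continue to read and write the table space with its original attributes.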
The new ALTER and REORG method of applying changes introduces a database advisory
state of AREOR to signify that attribute changes are pending. This database AREOR state,
along with the DB2 catalog tables and the DROP PENDING CHANGES option of the ALTER
statement, provides full functionality for managing ALTER and REORG processes.
As DB2 databases continue to grow in size and transaction volume, the amount of
administration time that is required for database changes continues to be a challenge. A
number of DB2 utility improvements further minimize downtime during normal operations
by using the FlashCopy capability of the storage systems.
With DB2 10, DBAs can create or rebuild a non-unique index against tables with LOBs
without any application impact or locking downtime. This online schema change
enhancement can help with newly installed applications where another index can be defined
quickly and can improve SQL access or resolve a performance issue. This enhancement
alone can help improve performance for any installed application.
Universal range-partitioned table space
DB2 10 also continues to enhance the universal range-partitioned table space. This table
space is the updated version of the classic range-partitioned table space with additional
segment size specification, universal settings, and capabilities. This universal
range-partitioned table space is the migration target for the classic range-partitioned table
space, which is deprecated and will be supported for only a few more DB2 releases.
Database administrators are encouraged to use the universal range-partitioned table space
instead of the classic range-partition definitions to take advantage of the utility capabilities,
availability, and performance features.
Chapter 1. DB2 10 for z/OS at a glance 7
DB2 catalog enhancements
The DB2 catalog and directory are restructured removing special structures and links.
Through this restructuring, the DB2 catalog now uses the new universal partition-by-growth
table spaces, reordered row format, and one table per table space, thus expanding the
number of DB2 catalog table spaces. These universal table spaces are defined with DSSIZE
64 GB, MAXPARTITIONS 1, row-level locking, and some CLOB and BLOB data types to handle
repeating long strings. These common table space definitions allow you to manage the catalog
tables in a similar manner to the application databases, with online reorganizations and check utilities.
In addition to these enhancements, the UTSERIAL lock that caused lock contention with older
versions of DB2 utility processing is eliminated. This improvement, along with a reduction in
log latch contention through compare-and-swap logic, the new option for readers to avoid
waiting for inserters, and improvements in system thread latching serialization, can help
reduce many types of DB2 thread processing contention. All of these enhancements help
with concurrency of DDL, BIND, utility, and overall processing within application database
systems and especially with processes that reference the DB2 catalog.
Another enhancement provides the ability to add a new log into the system inventory while
the subsystem is active. The newly added log is available immediately without recycling DB2,
which can help recovery procedures and application performance.
MEMBER CLUSTER option
DB2 tries to maintain the table space clustering sequence when data is inserted into the
table. For data-sharing systems with heavy INSERT processing, maintaining the clustering
sequence can cause locking contention within the table space across the different
data-sharing members. This contention costs extra CPU cycles to negotiate deadlocks and
extends response times to maintain the clustering sequence.
DB2 10 partition-by-range and partition-by-growth table spaces now support the MEMBER
CLUSTER parameter, which allows DB2 to allocate space for inserts without regard to the
clustering sequence of the table space. This method relieves the contention, but you need to monitor the
clustering sequence of the table space to avoid additional I/O for other high performance
applications.
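A minimal sketch of defining a partition-by-growth table space with this option follows; the database and table space names are hypothetical:

```sql
CREATE TABLESPACE TSORDERS IN DBSALES
  MAXPARTITIONS 10   -- partition-by-growth universal table space
  SEGSIZE 32
  MEMBER CLUSTER;    -- let each member insert without regard to clustering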
1.4 Application functions
The standard and customizable capabilities that are now available in DB2 10 provide
advanced business analytics and fertile ground for customization. Built-in pureXML, LOB, and
open SQL scalar and table function interfaces provide many functions and capabilities that
can be extended to any type of custom functionality for business requirements.
Temporal tables and their ability to archive historical data automatically provide active data
management to help improve performance, audit, and compliance capabilities. Coupled with
SQL that can use business time or system time parameters to qualify the answer,
DB2 10 provides unique, industry-leading advanced business analytical capabilities
that are easy to implement without significant coding effort.
DB2 for z/OS continues to provide functions that can help with the deployment of a data
warehouse on System z.
DB2 9 for z/OS provides support for XML data type through the pureXML capabilities and
hybrid database engine. DB2 10 for z/OS expands such XML support, including more
functionality and performance improvements.
Greater timestamp precision for Java applications
DB2 10 enhances the TIMESTAMP data type with greater precision. The new TIME ZONE
sensitive capability provides more compatibility and functionality for all types of applications.
The TIMESTAMP precision enhancement supports up to 12 digits of fractional seconds
(picoseconds), with the default matching the Java default of 6 digits of fractional-second
precision. The 6-digit default also helps Java functionality and DB2 family compatibility,
along with SQL Server compatibility. The enhanced CURRENT TIMESTAMP special register
lets applications specify the desired fractional precision for application requirements.
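As a small sketch, with an illustrative table name, a column can be declared with picosecond precision and populated from the precision-aware special register:

```sql
CREATE TABLE EVENT_LOG
  (EVENT_ID  INTEGER       NOT NULL,
   LOGGED_AT TIMESTAMP(12) NOT NULL);  -- 12 fractional digits (picoseconds)

INSERT INTO EVENT_LOG
  VALUES (1, CURRENT TIMESTAMP(12));   -- request 12-digit precision
```

Omitting the precision gives the default of 6 fractional digits, matching earlier releases and the Java default.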
Support for TIMESTAMP WITH TIME ZONE
The TIMESTAMP WITH TIME ZONE data type incorporates the TIMESTAMP 12-digit
fractional seconds capabilities and uses the industry-standard Coordinated Universal
Time (UTC), which replaces Greenwich Mean Time (GMT). This support provides applications with
additional time zone capabilities to compare business divisions along the exact timeline
throughout the world, which is vital for global financial, retail, and banking systems.
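A brief sketch of the data type, using a hypothetical table, follows; values stored from different local time zones compare on the same UTC timeline:

```sql
CREATE TABLE TRADE
  (TRADE_ID INTEGER NOT NULL,
   EXEC_TS  TIMESTAMP WITH TIME ZONE NOT NULL);

INSERT INTO TRADE
  VALUES (1, CURRENT TIMESTAMP WITH TIME ZONE);

-- Ordering and comparison are done in UTC regardless of the
-- time zone in which each value was entered.
SELECT TRADE_ID FROM TRADE ORDER BY EXEC_TS;
```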
Extended indicator variable
DB2 10 introduces extended indicator variables, which provide a way to specify that no
value is provided for a column in an INSERT, UPDATE, or MERGE statement. As the name
implies, extended indicator variables extend the functionality of indicator variables for
providing values within an SQL statement.
For example, a value of -5 within an enabled extended indicator variable specifies the
DEFAULT value. If the extended indicator variables are not enabled on the SQL package,
then the -5 specifies a NULL value. If extended indicator variables are enabled and a
variable is given a value of -7, the variable is UNASSIGNED: it is ignored and treated as
though it did not exist within the SQL statement.
These extended indicator variables are typically for Java applications and are quite useful for
dynamic statements and variable SQL statement coding where the number of possible host
variable parameters is unknown until the transaction logic is completed and the SQL is to be
executed.
This capability addresses the application issue that previously required multiple SQL
statements coded to match the values that were available for the SQL statements. Now, these
multiple SQL statements can be consolidated. When a column value is not known, the host
variable for that column can be flagged as UNASSIGNED. This feature is especially
important for applications whose dynamic statements would otherwise clog the system's
dynamic statement cache with many copies of essentially the same SQL statement.
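As a sketch of the idea, the following single statement, bound with the EXTENDEDINDICATOR(YES) bind option, replaces several variants; the table, columns, and host variable names are illustrative:

```sql
-- Indicator settings are chosen by the host program at run time:
--    0 = use the host variable value
--   -5 = assign the column its DEFAULT value
--   -7 = UNASSIGNED: treat the column as if it were not in the statement
UPDATE CUSTOMER
   SET NAME  = :HV-NAME  :IND-NAME,    -- IND-NAME  =  0, value applied
       PHONE = :HV-PHONE :IND-PHONE    -- IND-PHONE = -7, column untouched
 WHERE CUST_ID = :HV-ID;
```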
Extended support for implicit casting
Implicit casting is the automatic conversion of different types of data to be compatible. DB2
enhances its implicit casting by handling numeric data types that can be cast implicitly to
character or graphical string data types. It also supports converting the data in the other
direction, from character or graphical string data types to numeric data types.
In previous releases of DB2, you had to do this casting manually, which could be a
labor-intensive application process. Now, numeric data, character, and graphical string data
can be handled, compared, and assigned implicitly. This makes DB2 for z/OS more
compatible and enhances the portability of SQL from other database vendors' systems.
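A short sketch, assuming hypothetical tables where ACCOUNT_NO is a character column and ORDER_QTY is numeric:

```sql
-- The numeric literal is cast implicitly to match the VARCHAR column
SELECT BALANCE
  FROM ACCOUNT
 WHERE ACCOUNT_NO = 12345;

-- The reverse direction also works: a string compared with a numeric column
SELECT ORDER_ID
  FROM ORDERS
 WHERE ORDER_QTY = '10';
```

In earlier releases, each of these comparisons would require an explicit CAST or a matching host variable type.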
Enhanced scalar function support
DB2 10 enhances its compatibility with other database vendors with improvements in SQL
scalar and table functions. These built-in functions are used throughout application SQL,
making quick work of OLAP functions and calculations, such as SUM, AVG, SIN, COS, and
other functions. As in previous releases, these inline functions with their SQL statements
return a single value.
Non-inline SQL scalar functions that contain logic provide additional application functionality
and flexibility. This flexibility helps DB2 family compatibility and acceptance of data migrations
from other database vendors. DB2 also supports multiple versions and source code
management of these functions based on their parameter list, routine options, and function
body. These functions can be altered or replaced with different versions that are distributed to
multiple servers to assist in testing and overall performance. Function version fallback to a
previous version occurs instantly without a rebind or recompile when a function version is
dropped.
DB2 10 introduces support for SQL user-defined table functions that help ease application
migrations from other database vendors. DB2 table functions are flexible because they return
a single table result and accept many different types of parameters, such as
LOBs, distinct types, and transition tables.
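As a sketch of an SQL user-defined table function, using the familiar sample EMP table (the function name and column list are illustrative):

```sql
CREATE FUNCTION DEPT_EMPLOYEES (P_DEPT CHAR(3))
  RETURNS TABLE (EMPNO    CHAR(6),
                 LASTNAME VARCHAR(15),
                 SALARY   DECIMAL(9,2))
  LANGUAGE SQL
  READS SQL DATA
  RETURN
    SELECT EMPNO, LASTNAME, SALARY
      FROM EMP
     WHERE WORKDEPT = P_DEPT;

-- Invoked in the FROM clause like a table:
SELECT T.LASTNAME, T.SALARY
  FROM TABLE(DEPT_EMPLOYEES('D11')) AS T;
```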
SQL procedural language enhancements
DB2 9 provides native support for the SQL procedural language, eliminating the requirement
to generate a C program from the SQL procedure that then executes as an external stored
procedure. DB2 9 SQL procedures can be executed natively within the DB2 engine for better
runtime execution and stored in the DB2 catalog for better management and version control.
Running the native code within the DB2 engine also helps in debugging, deploying, and
managing SQL procedural versions throughout multiple servers. Storing the SQL procedures
improves the overall change control of this application code so that you can manage it as you
do other application developer modules.
The SQL procedural language enhancements include SQL Table functions, nested
compound SQL statements within a procedure, and the RETURN statement that can return
the result set of a SELECT SQL statement. These stored procedures and other SQL
procedural language enhancements allow all types of processing.
The DB2 10 SQL procedural language enhancements provide needed compatibility with
other database vendors. The procedure language is enhanced to accept many data types
and XML as parameters and to provide limited use of scrollable cursors. The DB2 10
concurrency improvements and SQL procedural language compatibility provide the
opportunity to migrate other relational database management solutions to the z/OS
environment for a better cost-of-ownership experience, while providing the unique
performance, availability, and scalability capabilities that only DB2 for z/OS and System z
offer.
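As a sketch of a native SQL procedure using a nested compound statement, with hypothetical names and logic:

```sql
CREATE PROCEDURE ADJUST_SALARY (IN P_DEPT CHAR(3),
                                IN P_PCT  DECIMAL(5,2))
  LANGUAGE SQL
BEGIN
  DECLARE V_ROWS INTEGER DEFAULT 0;
  -- Nested compound statement with its own scope,
  -- one of the DB2 10 SQL PL enhancements
  BEGIN
    UPDATE EMP
       SET SALARY = SALARY * (1 + P_PCT / 100)
     WHERE WORKDEPT = P_DEPT;
    GET DIAGNOSTICS V_ROWS = ROW_COUNT;
  END;
END
```

The procedure body runs natively in the DB2 engine; no external C program is generated.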
Support for OLAP: Moving sums, averages, and aggregates
The OLAP capabilities of moving sums, averages, and aggregates are now built directly into
DB2. Improvements within SQL, intermediate work file results, and scalar or table functions
provide performance for these OLAP activities. Moving sums, averages, and aggregates are
common OLAP functions within any data warehousing application. These moving sums,
averages and aggregates are typical standard calculations that are accomplished using
different groups of time-period or location-based data for product sales, store location, or
other common criteria.
Having these OLAP capabilities built directly into DB2 provides an industry-standard SQL
process, repeatable applications, SQL function or table functions, and robust performance
through better optimization processes.
These OLAP capabilities are further enhanced through scalar, custom table functions or the
new temporal tables to establish the window of data for the moving sum, average, or
aggregate to calculate its answer set. By using a partition, time frame, or common table SQL
expression, the standard OLAP functions can provide the standard calculations for complex
or simple data warehouse requirements. Also, given the improvements within SQL, these
moving sums, averages, and aggregates can be included in expressions, select lists, or
ORDER BY statements, satisfying any application requirements.
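A sketch of a moving average and a running total over a hypothetical daily sales table:

```sql
SELECT STORE_ID,
       SALE_DATE,
       AMOUNT,
       AVG(AMOUNT) OVER (PARTITION BY STORE_ID
                         ORDER BY SALE_DATE
                         ROWS BETWEEN 2 PRECEDING
                                  AND CURRENT ROW) AS MOVING_AVG_3,
       SUM(AMOUNT) OVER (PARTITION BY STORE_ID
                         ORDER BY SALE_DATE
                         ROWS UNBOUNDED PRECEDING)  AS RUNNING_TOTAL
  FROM DAILY_SALES;
```

The PARTITION BY clause establishes the window per store, and the ROWS clause defines the moving window of rows over which the aggregate is computed.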
Bi-temporal queries and their business advantages
DB2 10 provides temporal data functionality, often referred to as time travel queries, through
the BUSINESS_TIME and SYSTEM_TIME table period definitions. These period definitions
are used for temporal table definitions to provide system-maintained, period-maintained, or
bi-temporal (both system and period maintained) data stores. These temporal data tables are
maintained automatically, and when the designated time period criterion is met, the data is
archived to an associated history table.
The PERIOD SYSTEM_TIME or PERIOD BUSINESS_TIME definition over two columns
defines the temporal period for the data within the table:
- The SYSTEM_TIME period relates to the time the data was put into the system.
- The BUSINESS_TIME period relates to the business transaction or business-relevant time
period of the data.
These definitions control the criteria for which data exists in the table and when it is migrated
to the associated history table. By using both definitions, PERIOD SYSTEM_TIME and
PERIOD BUSINESS_TIME, a table has bi-temporal criteria that control the data that exists in
the table.
With the BUSINESS_TIME WITHOUT OVERLAPS definition parameter, the temporal tables
can make all transaction time stamps unique using the TIMESTAMP picosecond precision
(precision 12) enhancements to provide unique transaction time stamps throughout the
temporal table. This capability provides an advantage for robust global systems that might
have issues with the uniqueness of business timestamp transactions.
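A sketch of a bi-temporal table, with hypothetical names and columns, followed by a time travel query:

```sql
CREATE TABLE POLICY
  (POLICY_ID INTEGER NOT NULL,
   COVERAGE  INTEGER,
   BUS_START DATE NOT NULL,
   BUS_END   DATE NOT NULL,
   SYS_START TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
   SYS_END   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
   TRANS_ID  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
   PERIOD BUSINESS_TIME (BUS_START, BUS_END),
   PERIOD SYSTEM_TIME   (SYS_START, SYS_END),
   PRIMARY KEY (POLICY_ID, BUSINESS_TIME WITHOUT OVERLAPS));

CREATE TABLE POLICY_HIST LIKE POLICY;
ALTER TABLE POLICY ADD VERSIONING USE HISTORY TABLE POLICY_HIST;

-- Time travel: what coverage was in effect for this policy on June 1?
SELECT COVERAGE
  FROM POLICY FOR BUSINESS_TIME AS OF '2010-06-01'
 WHERE POLICY_ID = 1414;
```

DB2 maintains the SYSTEM_TIME columns automatically and moves superseded row versions to the history table; the BUSINESS_TIME columns are supplied by the application.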
Access to currently committed data
With five to 10 times more concurrent threads within a single DB2 member, DB2 10 focuses a
significant amount of enhancements on application concurrency. DB2 10 provides an
individual package option for managing concurrency within applications. This enhancement
provides a DB2 package level BIND parameter to let you choose the way applications handle
data concurrency situations. DB2 10 now provides the following parameters and settings:
- DB2 10 introduces the CONCURRENTACCESSRESOLUTION parameter with the
USECURRENTLYCOMMITTED and WAITFOROUTCOME settings. This parameter
overrides the DB2 subsystem parameters EVALUNC and SKIPUNCI and helps
the application package quickly perform the desired concurrency action.
- The USECURRENTLYCOMMITTED setting instructs the system to ignore rows that are in
the process of being inserted and use only currently committed rows. This clause is
contingent on the package BIND isolation level being either Cursor Stability or
Read Stability.
- The WAITFOROUTCOME setting specifies that applicable scans must wait for a commit
or rollback operation to complete when data is in the process of being updated or deleted.
Rows that are in the process of being inserted are not skipped.
These settings give the application the flexibility, when handling highly concurrent web
transactions, either to wait for in-flight data or to use only currently committed data. They
help provide the desired package-level concurrency and also mimic some other database
vendors' application concurrency settings.
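A sketch of requesting currently committed access at the package level (the collection and package names are illustrative):

```
BIND PACKAGE(COLLA) MEMBER(PGM1) -
     ISOLATION(CS) -
     CONCURRENTACCESSRESOLUTION(USECURRENTLYCOMMITTED)
```

Because the option is a bind parameter, it can be applied per package without changing the application source.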
Plan stability, package preservation
Plan stability, introduced in DB2 9, offers a way to handle testing of a new version of a DB2
application package. It preserves multiple package copies and allows you to switch back to a
previous copy of the bound package. If the access path of the current package is not as
efficient, plan stability allows the administrator to switch back to a previous version of the
package with a simple REBIND SWITCH command.
DB2 10 for z/OS delivers a new framework that is geared towards the support of the existing
and future features that pertain to query access path. The access path repository holds query
text, query access paths, and optimization options. The repository supports versioning by
maintaining copies of access paths and associated options for each query.
pureXML enhancements
Within DB2 10, XML can be used almost anywhere within SQL variables, scalar functions,
SQL table functions, and SQL procedures. DB2 10 pureXML incorporates many
enhancements that improve overall XML performance, that provide easier XML schema
management, and that embrace DB2 family compatibility.
These enhancements start with XML schema validation that is now built in to DB2 10. The
XML schema no longer needs to be specified, because DB2 handles XML schema validation
more easily through a built-in function that determines the XML schemas from the instance
documents. DB2 uses the schema registration timestamp and schema location hints to match
the XML document to the correct schema version. This function allows multiple schema
versions to coexist and validation of new or older XML documents against their appropriate
XML schema versions.
Additional functionality enhancements provide the capability to manipulate any part of an
XML document. By using SQL statements with XQuery updating expressions, any single or
multiple XML document nodes can be inserted, updated, or deleted or can have their data
values updated. This function provides tremendous XML document capabilities, overall
performance, and flexibility for any application process.
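As a sketch, a single node inside a stored document can be updated with the XMLMODIFY built-in function in an UPDATE statement; the table, column, and node names are illustrative:

```sql
-- Change one node in place instead of replacing the whole document
UPDATE PURCHASEORDERS
   SET PO_XML = XMLMODIFY(
         'replace value of node /purchaseOrder/status with "shipped"')
 WHERE PO_ID = 1001;
```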
An additional XML type modifier is also available with DB2 10. The XML type modifier adds
constraints on XML column data and enforces and validates the XML column data against the
schema definition information. This XML type modifier can be ALTERed onto older XML
schemas so that their XML column types can be validated. This process helps to ensure that
the XML schema documents stored elements have only the desired XML content.
DB2 10 adds XML date and time types. These data types are supported within XML indexes,
and the timestamp is expanded to handle more precision for finer data management. DB2 10
also includes XML time and date arithmetic comparison functions to further support
application processing.
DB2 10 improves XML performance with support for binary XML objects. Binary support is
better for server and application interaction. It uses pre-tokenized format and length
definitions that can improve overall performance and provide additional ease of use for Java
applications.
Binary XML also has additional flexibility features, such as String IDs, for text that represents
some or all occurrences of the same text with an integer identifier. This capability can help
reduce the size of XML, thus improving application performance.
DB2 10 provides a new LOB_INLINE_LENGTH installation parameter that sets the default
number of bytes for storing inline LOBs. Having a minimized LOB length or a predefined
standard length provides better I/O capabilities and buffering and optimizes the usage of the
inline LOB space.
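Inline length can also be specified per column at table definition time; a sketch with hypothetical names:

```sql
CREATE TABLE PRODUCT
  (PROD_ID INTEGER NOT NULL,
   DESCR   CLOB(1M) INLINE LENGTH 2000);  -- first 2000 bytes kept in base row
```

Values that fit within the inline length are read and written with the base row, avoiding separate I/O to the auxiliary LOB table space.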
Administrators now have the option of delaying the definition of the LOB or XML data sets
and their indexes, which can save storage space: the objects are defined in the DB2 catalog,
and applications can SELECT and FETCH against them, but the underlying data sets are
allocated only when the first insert is issued. This approach saves storage until data is
actually stored in the database, which is especially useful for vendor packages.
The administrator also can use the CHECK DATA utility to check the consistency between the
XML schema, its document data, and its XML index data.
1.5 Operation and performance enhancements
Technical innovations in operational compliance help with security and enhanced auditing
capabilities.
DB2 Utilities Suite for z/OS delivers full support for enhancements in DB2 10 for z/OS as well
as integration with the storage systems functions, such as FlashCopy, exploitation of new sort
options (DB2 Sort), and backup and restore.
Performance is enhanced with DB2 10 for z/OS, offering reduced regression and
opportunities for reduced CPU time. For relational database customers, database
performance is paramount. DB2 10 for z/OS can reduce CPU demand by 5-10% immediately
with no application changes. CPU demand can be further reduced by up to 20% when using all
the DB2 10 enhancements in new-function mode (NFM). By pushing the performance limits,
IBM and DB2 10 continue to lead the database industry with state-of-the-art technology and
the most efficient database processing available.
Security and regulatory compliance
DB2 10 also includes security, regulatory compliance, and audit capability improvements.
DB2 10 enhances the DB2 9 role-based security with additional administrative and other
finer-grained authorities and privileges. This authority granularity helps separate
administration from data access, granting only the minimum appropriate authority.
The SECADM authorization level provides the authority to manage access to the tables while
being prohibited from creating, dropping, or altering the tables. The enhanced DBADM
authority provides an option to have administration capabilities without data access or without
access control. These authority profiles provide better separation of duties while limiting or
eliminating blanket authority over all aspects of a table and its data.
In addition, DB2 10 embraces audit and regulatory compliance through a new audit policy
that provides a set of criteria for auditing for the possible abuse and overlapping of authorities
within a system. This feature helps management, administrators, and the business
community understand, configure, and audit security policies and data access quickly for any
role or user. Many audit policies can be developed to quickly verify, audit, and document
security compliance across the environment's critical data resources and application users.
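As a sketch, an audit policy is defined by inserting a row into the SYSIBM.SYSAUDITPOLICIES catalog table and activating it with a trace command; the policy name, schema, and category values here are illustrative:

```sql
-- Audit all access to the PAYROLL.EMP table
INSERT INTO SYSIBM.SYSAUDITPOLICIES
       (AUDITPOLICYNAME, OBJECTSCHEMA, OBJECTNAME, OBJECTTYPE, EXECUTE)
VALUES ('EMPAUDIT', 'PAYROLL', 'EMP', 'T', 'A');

-- Activate the policy with the trace command:
-- -START TRACE(AUDIT) AUDTPLCY(EMPAUDIT)
```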
Support for row and column access control
DB2 10 also enhances security through its row and column access control. This access
control lets administrators enable security on a particular column or particular row in the
database. Such a security method complements the privilege model. After you have enforced
row or column level access control for a table, any SQL DML statement that attempts to
access that table is subject to the row and column access control rules that you defined for
that table. During table access, DB2 transparently applies these rules to every user.
This capability allows you to define fine-grained security against any data. The role-based
security model, combined with row and column access control and with masking or encryption
of sensitive information, creates a highly secure database environment for your
business. These features provide tighter controls, allow more security flexibility, and provide
tremendous regulatory and audit compliance capabilities.
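A sketch of the two constructs, with hypothetical table, column, and RACF group names (VERIFY_GROUP_FOR_USER is the built-in function that checks group membership):

```sql
-- Row permission: only members of the TELLER group can access customer rows
CREATE PERMISSION BRANCH_ROWS ON CUSTOMER
   FOR ROWS WHERE VERIFY_GROUP_FOR_USER(SESSION_USER, 'TELLER') = 1
   ENFORCED FOR ALL ACCESS
   ENABLE;

-- Column mask: only the PAYROLL group sees the full SSN value
CREATE MASK SSN_MASK ON CUSTOMER
   FOR COLUMN SSN RETURN
     CASE WHEN VERIFY_GROUP_FOR_USER(SESSION_USER, 'PAYROLL') = 1
          THEN SSN
          ELSE 'XXX-XX-' CONCAT SUBSTR(SSN, 8, 4)
     END
   ENABLE;

-- Rules take effect when access control is activated on the table
ALTER TABLE CUSTOMER ACTIVATE ROW ACCESS CONTROL;
ALTER TABLE CUSTOMER ACTIVATE COLUMN ACCESS CONTROL;
```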
Real time statistics stored procedures
Because good optimizer access paths depend on accurate statistics, up-to-the-moment statistics are
vital. DB2 10 includes a set of stored procedures to monitor and collect table and index
statistics. These procedures monitor the current statistics, determine whether statistics need
to be collected, and then perform the collection autonomically to ensure good access path
optimization.
These procedures especially help volatile environments and can dynamically improve access
path optimization by getting index filtering statistics for SQL WHERE clause predicates to
make the best access path decisions. By gathering statistics for you, these DB2 10 stored
procedures take the burden off the administrators for large and especially dynamically
created objects to help ensure overall application performance.
Skip level migration
DB2 10 for z/OS provides the capability to migrate not only from DB2 9 NFM but also directly
from DB2 V8 NFM. The process of migrating directly from DB2 V8 NFM to DB2 10 CM is
called skip level migration. This capability affords you greater flexibility as to how and when
you can migrate to DB2 10. The actual skip level migration process itself is rather simple and
allows you to jump directly from DB2 V8 NFM to DB2 10 CM without having any intermediate
steps; however, the planning phase and learning curve must take into account new functions
and restrictions for both releases.
Performance is pervasive throughout the product
IBM DB2 10 for z/OS with its CPU reduction for applications and many new-function mode
enhancements that are ready for immediate use delivers the best performance improvements
since DB2 Version 2.1. The DB2 10 emphasis on performance is pervasive throughout the
features and enhancements. Availability, scalability, security, compliance, application
integration, XML, and SQL all contain performance improvements. These enhancements
provide an improved operational environment, making administration easier and reducing the
total cost of ownership for the business.
Performance is the major emphasis of DB2 10. Many enhancements directly reduce
CPU time and suspension time for applications. By migrating to the new version and
deploying and binding your applications within the environment, your applications can take
advantage of many of the performance enhancements without changing the application. DB2
10 uses the full 64-bit architecture, optimizes new access paths, and provides
performance improvements that are independent of access path choice.
Rebinding your applications is now even easier. Concerns about regression are easily
addressed with plan stability, introduced in DB2 9 and further enhanced in DB2 10. DB2 10
plan stability with fallback, and package level DSNZPARM settings provide administrators the
ability to rebind and reduce performance risk.
Optimizer access path range-list index scan
DB2 10 offers a new application access path called range-list index scan. This access path
improves SQL processing against an index when multiple WHERE clauses can all reference
the same index. For example, when the SQL has multiple OR conditions that reference the
same index, the optimizer now recognizes it and scans the index only once, maintaining the
index order, instead of performing multiple index accesses that require a final sort to reinstate order. This
process cuts down the number of RID list entries for the processing, which can improve I/O
and CPU performance.
You can use this access path when all the OR predicates in the SQL WHERE clause
reference the same table and every OR predicate has at least one matching predicate that is
mapped to the same index. This type of SQL WHERE clause with multiple OR predicates is
typical for many types of applications, especially searching, scrolling, or pagination
application processes.
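A sketch of a pagination-style predicate that qualifies, assuming a hypothetical index on (LASTNAME, FIRSTNAME) of the sample EMP table:

```sql
-- Both OR branches reference the same table and map to the same index,
-- so DB2 10 can scan the index once with a range-list index scan
SELECT EMPNO, LASTNAME, FIRSTNAME
  FROM EMP
 WHERE (LASTNAME = 'SMITH' AND FIRSTNAME > 'JOHN')
    OR (LASTNAME > 'SMITH')
 ORDER BY LASTNAME, FIRSTNAME;
```

Because the index order is preserved, no final sort is needed to satisfy the ORDER BY.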
Optimizer uses more parallelism
DB2 10 also improves several existing access paths through parallelism. These specifically
designed enhancements eliminate some previous DB2 restrictions, increase the amount of
work redirected to the zIIP processors, and distribute work more evenly across the parallel
tasks. These enhancements provide additional reasons to enable parallelism within your
environment.
Parallelism improves your application performance and DB2 10 can now take full advantage
of parallelism with the following types of SQL queries:
- Multi-row fetch
- Full outer joins
- Common table expression (CTE) references
- Table expression materialization
- A table function
- A created global temporary table (CGTT)
- A work file resulting from view materialization
These new DB2 10 CP parallelism enhancements are active when the SQL Explain
PLAN_TABLE PARALLELISM_MODE column contains a C.
The new parallelism enhancements can also be active during the following specialized SQL
situations:
- When the optimizer chooses index reverse scan for a table
- When a SQL subquery is transformed into a join
- When DB2 chooses to do a multiple column hybrid join with sort composite
- When the leading table is sort output and the join between the leading table and the
second table is a multiple column hybrid join
Additional DB2 10 optimization and access improvements also help many aspects of
application performance. In DB2 10, index lookaside and sequential detection help improve
referencing parent keys within referential integrity structures during INSERT processing. This
process is more efficient for checking the referential integrity dependent data and reduces the
overall CPU required for the insert activity.
List prefetch improves index access
List prefetch is used more within DB2 10 to access index leaf and non-leaf pages. In previous
versions of DB2, when the index became disorganized and had large gaps between non-leaf
pages, accessing index entries through sequential reading of non-leaf pages became
degraded by huge numbers of synchronous I/Os. DB2 10 improvements use non-leaf page
information to perform a list prefetch of the leaf pages. This function eliminates most of the
synchronous I/Os and the I/O waits that are associated with the large gaps in the non-leaf
pages during the sequential read. This list prefetch processing especially helps long running
queries that are dependent on non-leaf page access and also helps all the index-related
utilities, such as partition-level REORG, REORG INDEX, CHECK INDEX, and RUNSTATS.
Improved sequential detection
When DB2 uses an index that has a high cluster ratio and has to then fetch the rows that are
qualified by the index scan, it typically uses dynamic prefetch on the data. Dynamic prefetch
has a sequential detection algorithm whose objective is to avoid synchronous I/Os for
clustered pages. DB2 10 makes this algorithm more robust, resulting in fewer synchronous
I/Os as the cluster ratio degrades below 100%.
Improved log I/O
DB2 10 reduces the number of log I/Os and provides better I/O overlap when using dual
logging.
Optimizer no longer likely to change access path because of RID Pool
overflows
DB2 10 also improves the handling of SQL statements that reference a large amount of data
through list processing. This list processing uses a large number of record IDs (RIDs) and
sometimes overflows the RID pool. In previous versions of DB2, the RID pool overflow caused
DB2 to change the SQL access method to a table space scan. Now, when the RID pool
resources are exhausted, the RID list is written to work file resources and processing
continues. This improvement helps avoid the table space scan with its associated elapsed
time, locking impact and performance overhead.
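The overflow behavior can be pictured with a small model. This is an illustrative sketch, not DB2 code: the class, memory budget, and file handling are assumptions used to show the idea of spilling an overflowing RID list to a work file and continuing, rather than abandoning list access.

```python
import struct, tempfile

class RIDList:
    """Collect record IDs within a memory budget; on overflow, spill
    everything to a temporary 'work file' and keep collecting."""
    def __init__(self, pool_limit):
        self.pool_limit = pool_limit
        self.in_memory = []
        self.spill = None          # the "work file", created on overflow

    def add(self, rid):
        if self.spill is None and len(self.in_memory) >= self.pool_limit:
            self.spill = tempfile.TemporaryFile()
            for r in self.in_memory:               # move pool to work file
                self.spill.write(struct.pack("<Q", r))
            self.in_memory.clear()
        if self.spill is not None:
            self.spill.write(struct.pack("<Q", rid))
        else:
            self.in_memory.append(rid)

    def rids(self):
        if self.spill is None:
            return list(self.in_memory)
        self.spill.seek(0)
        data = self.spill.read()
        return [struct.unpack_from("<Q", data, o)[0]
                for o in range(0, len(data), 8)]

lst = RIDList(pool_limit=4)
for r in range(10):                # exceeds the pool, triggers the spill
    lst.add(r)
recovered = lst.rids()             # all 10 RIDs survive the overflow
```

The point of the model is the contrast with the pre-DB2 10 behavior: instead of discarding the list and falling back to a table space scan, the list survives the pool overflow intact.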
Optimizer does more during stage 1 SQL evaluation
The DB2 10 optimizer can now evaluate scalar functions and other stage 2 predicates during
the first (stage 1) evaluation of the SQL access path. For indexed columns, this means the
optimizer can apply these stage 2 predicates as non-matching screening predicates to
potentially reduce the number of data rows that need to be accessed. This process eliminates
or reduces the amount of data evaluated in stage 2, thus improving query elapsed time and
overall query performance.
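As an illustration of the kind of predicate this enhancement targets, consider a hypothetical ORDERS table with an index on (ACCT_ID, ORDER_TS); the table, index, and predicates here are assumptions, not from the source:

```sql
SELECT ORDER_NO, ORDER_TS
  FROM ORDERS
 WHERE ACCT_ID = ?            -- stage 1 matching index predicate
   AND DATE(ORDER_TS) = ?     -- scalar function on a column: stage 2;
                              -- DB2 10 can apply it as a non-matching
                              -- screening predicate on the index
```

Because ORDER_TS is in the index, the stage 2 predicate can screen out index entries before any data page is touched.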
Optimizer determines which access is more certain
In previous versions of DB2, the optimizer evaluated the SQL WHERE predicates, the
available table indexes, and various statistics to determine the most efficient access path.
With enhancements in DB2 10, the optimizer analyzes additional filter factor variables,
evaluating the SQL range predicates, the non-uniform distribution of the data, and the usage
of parameter markers, host variables, or literals where values are unknown.
When choosing between two different indexes, there are situations where these unknown
filter factor variables make the cost estimates for the two access paths close. By analyzing
which of the filter factor variables are known, DB2 determines which of the index access
paths has a higher degree of certainty. DB2 can choose an index access with a slightly
higher estimated cost that provides a more certain runtime performance. This analysis is
especially important for providing consistent, reliable performance across the different
types of programming languages and the application diversity of parameter markers,
literals, and host variables.
Dynamic statement cache ATTRIBUTES improvements
One of the most important DB2 10 system improvements is the ability to combine some
variations of SQL within the dynamic statement cache. Using the new ATTRIBUTES clause
within the PREPARE SQL statement, DB2 10 can recognize that the SQL is the same except
for the WHERE clause literal values. This process helps DB2 to recognize that these
statements already exist within the cache and to reuse the cache resources that were
generated previously for the SQL statement, which can help to avoid additional DB2 catalog
activity, such as object verification and access path creation for another SQL statement. This
process also helps free more cache space for other SQL statements for reuse, which can
improve performance and transaction response time.
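In SQL terms, the new clause looks like the following sketch; the statement name and host variable are illustrative. With CONCENTRATE STATEMENTS WITH LITERALS, DB2 replaces the literal values when matching and storing the statement in the cache, so statements that differ only in their literals can share one cached copy:

```sql
PREPARE STMT1
  ATTRIBUTES 'CONCENTRATE STATEMENTS WITH LITERALS'
  FROM :SQLTEXT
```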
Improved DDF transaction flow
Application performance and network transaction traffic are optimized when SELECT
statements are coded using the FETCH FIRST 1 ROW ONLY clause. DB2 now recognizes
this clause on a SELECT statement and combines the API calls and messages,
reducing the number of messages flowing across the network through the system.
The change also improves the performance of the JDBC
and CLI APIs. After the query data is retrieved, the FETCH FIRST 1 ROW ONLY clause
causes the APIs' default action for DB2 to close the resources. DB2 closes the resources
regardless of whether a CURSOR WITH HOLD was declared and notifies the API driver to
cancel any additional FETCH or CLOSE statement requests. This process reduces the
number and size of transaction network messages transmitted, improves DB2 performance,
and minimizes locking contention.
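A singleton SELECT coded this way lets DDF combine the open, fetch, and close flows into a single network exchange; the table and columns below are illustrative:

```sql
SELECT LAST_NAME, BALANCE
  FROM CUSTOMER
 WHERE CUST_NO = ?
 FETCH FIRST 1 ROW ONLY
```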
Hash space and access method
DB2 10 also introduces an access type called hash access, which is supported by a hash
space. DB2 uses an internal hash algorithm with the hash space to reference the location of
the data rows. In some cases, this direct hash access reduces data access to a single I/O,
decreasing the CPU workload and speeding up application response time. Queries that use
full key equal predicates, such as customer number or product number lookups, are good
candidates for hash access. You can create additional indexes to support other range, list, or
keyed access types.
The definition of the hash space requires a column or columns to serve as the direct hash
access keys. Each table with hash access has an associated hash space or hash space
partitions. Hash access trades some additional storage space for a reduced access CPU
workload.
There are tradeoffs for using hash access. Parallelism is not available, and traditional
clustering is not allowed for the hash data. Nevertheless, hash access can be beneficial for
database designs where unique keys are already using equal predicates on product or
customer IDs, object IDs, XML document IDs, and other direct key retrievals.
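A hash-organized table might be defined as in the following sketch; the table, columns, and hash space size are illustrative assumptions:

```sql
CREATE TABLE CUSTOMER
      (CUST_NO   INTEGER      NOT NULL,
       LAST_NAME VARCHAR(40)  NOT NULL,
       BALANCE   DECIMAL(11,2))
  ORGANIZE BY HASH UNIQUE (CUST_NO)
  HASH SPACE 64 M;

-- A full-key equal predicate is a hash access candidate:
SELECT LAST_NAME
  FROM CUSTOMER
 WHERE CUST_NO = ?;
```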
Smarter insert page candidate selection algorithm
DB2 10 modifies the algorithm to choose a page to insert a new row when a key does not
match any existing key in the cluster index. Instead of choosing the page of the next highest
key, DB2 10 chooses the page of the next lowest key. When inserting a series of rows with a
sequentially increasing key at the same insertion point of the cluster index, the new algorithm
leads DB2 to pick the same page where free space was previously found. This algorithm
tends to reduce the number of getpages needed to locate free space while at the same time
maintaining good clustering.
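The page choice can be sketched with a toy model. This illustrates only the described rule, not DB2's actual space search: each page is represented by its highest key, and the function picks the candidate page for a key that matches no existing key.

```python
import bisect

def target_page(page_high_keys, key, db2_10=True):
    """page_high_keys: sorted list of the highest key on each page.
    Return the index of the candidate page for inserting `key`."""
    i = bisect.bisect_left(page_high_keys, key)
    if db2_10:
        return max(i - 1, 0)                # page of the next LOWEST key
    return min(i, len(page_high_keys) - 1)  # page of the next HIGHEST key

pages = [100, 200, 300]    # highest key on each of three pages
new_choice = target_page(pages, 201, db2_10=True)    # page holding key 200
old_choice = target_page(pages, 201, db2_10=False)   # page holding key 300
```

For a run of sequentially increasing keys inserted after key 200, the new rule keeps returning the page where the previous row (and its free space) was found, instead of repeatedly probing the next page.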
INCLUDE non-unique columns within a unique index
One of the schema changes that your applications can use right away is the ability to include
more columns in unique indexes. This DB2 10 feature allows you to include non-unique
columns in the definition of a unique index. Before this enhancement, you needed
one index for the unique constraint and another index for the non-unique columns. Using the
new INCLUDE clause on CREATE INDEX or ALTER INDEX, a unique index can carry
additional columns that do not participate in the uniqueness constraint. This capability
eliminates the extra I/O spent maintaining the second index, along with the additional
storage needed for multiple indexes with similar columns, and improves performance for all
access to the table.
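The following sketch shows one index replacing the former pair; the index, table, and column names are illustrative:

```sql
-- Before DB2 10, two indexes were typically needed:
--   CREATE UNIQUE INDEX XEMP1 ON EMP (EMPNO);
--   CREATE INDEX XEMP2 ON EMP (EMPNO, LASTNAME, WORKDEPT);

-- DB2 10: one index enforces uniqueness on EMPNO and carries the
-- extra columns for index-only access:
CREATE UNIQUE INDEX XEMP1
    ON EMP (EMPNO)
    INCLUDE (LASTNAME, WORKDEPT);
```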
Parallel inserts into multiple indexes
DB2 10 also improves insert performance by using more parallelism. When an INSERT
statement modifies a table with multiple indexes, DB2 10 prefetches the multiple indexes in
parallel. By initiating parallel I/Os for the multiple indexes, the process does not wait on the
synchronous I/Os, reducing the overall insert processing time. This method shortens the
window of possible contention within the system and improves the performance of all
applications.
Utilities enhancements
Utilities support for RECFM=VBS and a more widespread and integrated use of FlashCopy
technology offer better performance and more availability.
Data compression
A new DSNZPARM provides the ability to compress DB2 SMF trace data. This function helps
minimize the cost of disk space that is needed to save valuable accounting information.
You no longer need to run REORG to create a compression dictionary before compression
can be applied to a table space; insert processing can build the dictionary dynamically.
Part 1 Subsystem
In this part, we describe functions generally related to the DB2 subsystem and the z/OS
platform.
This part includes the following chapters:
Chapter 2, "Synergy with System z" on page 3
Chapter 3, "Scalability" on page 51
Chapter 4, "Availability" on page 67
Chapter 5, "Data sharing" on page 109
Chapter 2. Synergy with System z
As with all previous versions, DB2 10 for z/OS takes advantage of the latest improvements in
the platform. DB2 10 increases the synergy with System z hardware and software to provide
better performance, more resilience, and better function for an overall improved value.
DB2 benefits from large real memory support, faster processors, and better hardware
compression. DB2 uses the enhanced features of the storage servers, such as the IBM
System Storage DS8000. FlashCopy is used by DB2 utilities, allowing higher levels of
availability and ease of operations. DB2 makes unique use of the z/Architecture instruction
set for improvements in reliability, performance, and availability. DB2 continues to deliver
synergy with hardware data compression, Fibre Channel connection (FICON) channels,
disk storage, advanced networking function, and Workload Manager (WLM).
In this chapter, we discuss the following topics:
Synergy with System z in general
Synergy with IBM System z and z/OS
Synergy with storage
z/OS Security Server
Synergy with TCP/IP
WLM
Using RMF for zIIP reporting and monitoring
Warehousing on System z
Data encryption
IBM WebSphere DataPower
Additional zIIP and zAAP eligibility
2.1 Synergy with System z in general
DB2 for z/OS is designed to take advantage of the System z platform to provide capabilities
that are unmatched in other database software products. The DB2 development team works
closely with the System z hardware and software teams to take advantage of existing
System z enhancements and to drive many of the enhancements available on the System z
platform.
In this chapter, we describe the efforts that have been made in DB2 10 to more fully exploit
synergy potential between the System z hardware and software to remove constraints for
growth, to improve reliability and availability, to continue to improve total cost of ownership,
and to improve performance across the board. We furthermore outline features and functions
of the IBM zEnterprise platform and z/OS V1R12 that are expected to benefit DB2 for z/OS.
2.2 Synergy with IBM System z and z/OS
In this section, we discuss interfaces that are passively or actively used by DB2 10 to fully
exploit the synergy potential between the System z hardware and the z/OS operating system
software.
2.2.1 DBM1 64-bit memory usage and virtual storage constraint relief
For many years, virtual storage has been the most common constraint for large customers.
Prior to DB2 10, the amount of available virtual storage below the bar potentially limited the
number of concurrent threads for a single data sharing member or DB2 subsystem. DB2 8
provided the foundation for virtual storage constraint relief (VSCR) below the bar and moved
a large number of DB2 control blocks and work areas (buffer pools, castout buffers,
compression dictionaries, RID pool, internal trace tables, part of the EDM pool, and so forth)
above the bar. DB2 9 provided additional relief of about 10-15%. Although this level of VSCR
helped many customers to support a growing number of DB2 threads, other customers had to
expand their environments horizontally to support workload growth in DB2, by activating data
sharing or by adding further data sharing members, which added complexity and further
administration to existing system management processes and procedures.
DB2 10 for z/OS provides a dramatic reduction of virtual private storage below the bar,
moving 50-90% of the current storage above the bar by exploiting 64-bit virtual storage. This
change allows as much as 10 times more concurrent active tasks in DB2, up to 20,000 per
member. Customers need to perform much less detailed virtual storage monitoring. Some
customers can have fewer DB2 members and can reduce the number of logical partitions
(LPARs). The net results for DB2 customers are cost reductions, simplified management,
and easier growth.
Figure 2-1 illustrates the virtual storage constraint relief that is provided by DB2 10.
Figure 2-1 DB2 9 and DB2 10 VSCR in the DBM1 address space
For more information about VSCR in DB2 10, refer to 3.1, Virtual storage relief on page 52.
2.2.2 ECSA virtual storage constraint relief
Because of the DBM1 VSCR delivered in DB2 10 (see 2.2.1, DBM1 64-bit memory usage
and virtual storage constraint relief on page 4), it becomes more realistic to think about data
sharing member or LPAR consolidation. DB2 9 and earlier versions of DB2 use 31-bit
extended common service area (ECSA) to share data across address spaces. If several data
sharing members are consolidated to run in one member or DB2 subsystem, the total amount
of ECSA that is needed can cause virtual storage constraints to the 31-bit ECSA.
To provide virtual storage constraint relief (VSCR) in that situation, DB2 10 more often uses
64-bit common storage instead of 31-bit ECSA. For example, in DB2 10 the instrumentation
facility component (IFC) uses 64-bit common storage. When you start a monitor trace in DB2
10, the online performance (OP) buffers are allocated in 64-bit common storage to support a
maximum OP buffer size of 64 MB (increased from 16 MB in DB2 9).
Other DB2 blocks were also moved from ECSA to 64-bit common. Installations that use a lot
of stored procedures (especially nested stored procedures) might see a reduction of several
megabytes of ECSA.
2.2.3 Increase of 64-bit memory efficiency
Every access to an address space memory frame must go through a virtual-to-real mapping
contained in page table entries (PTEs). If the mapped address is cached on the translation
look-aside buffer (TLB), which is a hardware cache on the processor, the mapping process is
efficient. If there is a TLB cache miss, processing must stop and wait while main memory is
searched for the entry, which is then placed on the TLB.
Figure 2-2 illustrates the address translation for 4 KB pages.
Figure 2-2 Address translation for 4 KB pages
The address space control element (ASCE) contains a region table origin, a segment table
origin, or a real space token, and describes one of the following:
A virtual space that is described by translation tables
A real space, for which virtual addresses are translated to the identical real addresses
with no translation tables
For a virtual space, the effective virtual address can be the result of up to five levels of
dynamic address translation table searches:
Region first table (indexed by RFX)
Region second table (indexed by RSX)
Region third table (indexed by RTX)
Segment table (indexed by SX)
Page table (indexed by PX)
Translation can start at the region first, region second, region third, or segment table.
The starting point of the translation is designated in the address space control element.
Recently, application memory sizes have increased dramatically due to support for 64-bit
addressing in both physical and virtual memory. However, translation look-aside buffer (TLB)
sizes have remained relatively small due to low access time requirements and hardware
space limitations. The z10 hardware can cache 512 TLB entries per processor. The TLB
coverage in today's applications therefore represents a much smaller fraction of an
application's working set size, leading to a larger number of TLB misses. An application can
suffer a significant performance penalty resulting from an increased number of TLB misses
and the increased cost of each TLB miss.
Workloads that benefit from large pages typically have the following characteristics:
A large DBM1 address space (for example, large buffer pools)
High user concurrency
A high GETPAGE rate
In addition to these characteristics, to see the benefits of large pages, the workload should be
well-tuned and mainly CPU-bound (not I/O bound); otherwise, potentially bigger performance
issues can overshadow any benefits of making virtual memory management more efficient.
z/OS V1R10 introduced 1 MB real memory frames. DB2 10 uses 1 MB real memory frames
for buffer pools that are defined with the PGFIX(YES) attribute. Using 1 MB memory frames
for PGFIX(YES) buffer pools is a natural match because 1 MB frames are not paged out by
z/OS.
Using 1 MB memory frames is intended to reduce the burden of virtual to real mapping
memory management in z/OS by reducing the number of TLB entries that are required for
DB2 buffer pool pages. The use of 1 MB pages in DB2 for z/OS increases the TLB coverage
without proportionally enlarging the TLB size. This results in better performance by
decreasing the number of TLB misses that DB2 might incur.
To illustrate the difference between using 4 KB and 1 MB memory page frames, let us
assume that we have defined a 5 GB address space for DBM1, where the buffer pool is
allocated. With 4 KB page frames, there are 1.3 million mapping entries. If z/OS uses 1 MB
pages, there are only 5,120 entries.
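The arithmetic behind those figures is straightforward and can be checked quickly:

```python
GB, MB, KB = 1024**3, 1024**2, 1024

dbm1 = 5 * GB                  # example DBM1 address space size
entries_4k = dbm1 // (4 * KB)  # page table entries with 4 KB frames
entries_1m = dbm1 // MB        # entries with 1 MB frames

print(entries_4k)  # 1310720 (about 1.3 million)
print(entries_1m)  # 5120
```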
Figure 2-3 illustrates the address translation for 1 MB pages.
Figure 2-3 Address translation for 1 MB pages
For more information about how DB2 10 uses 1 MB page frames for PGFIX(YES) buffer
pools, refer to 13.8.1, Buffer storage allocation on page 561.
2.2.4 Improved CPU cache performance
In the past, System z hardware and software have remained relatively independent of each
other. The use of faster processors and large 64-bit memory require a closer cooperation
between hardware and software. With the System z10 architecture, DB2 for z/OS operates in
an environment that supports more and faster processors with larger amounts of real and
virtual storage.
In such a fast processing environment, CPU cache misses can occur and can cause an
increased memory and CPU cache latency for moving critical data structures from real
storage into the level 2 (L2) CPU cache. To reduce memory and cache latency, DB2 10
extensively uses the prefetch data hardware instruction available in z10 and z196 to prefetch
critical data structures ahead of time from real storage to the L2 CPU cache.
Some of DB2's most frequently referenced internal structures are also rearranged for cache
alignment to improve performance.
Figure 2-4 illustrates, for a z10, the different CPU caches, their memory and cache latencies
(expressed in machine cycles), and the data flow between the CPU caches and real memory.
Processors access instructions or data that reside in the L1 cache. Instructions or data are
loaded as needed from the L2 cache into the L1.5 cache and then into the L1 cache. There
are separate L1 caches for instructions and for data.
Figure 2-4 Memory and CPU cache latencies for z10
2.2.5 HiperDispatch
z/OS workload management and dispatching are enhanced to take advantage of the IBM
System z10 hardware design. The System z10 processor supports HiperDispatch, a new
mode of dispatching that increases system capacity by up to 10%. The
amount of improvement varies according to the system configuration and workload.
HiperDispatch is a combination of hardware features, z/OS dispatching, and the z/OS
Workload Manager that increases system capacity by increasing the probability of cache hits
when executing z/OS instructions. Each CPU has its own level 1 (L1) and level 1.5 (L1.5)
cache. This level of cache is the best place to find data, because L1 and L1.5 cache requires
the fewest machine cycles to access the data. CPUs are grouped at the hardware level in
books.
All CPUs in the same book share a common level 2 (L2) cache, which is the next best place to
access data. A CPU can also access the L2 cache of other books, but this access requires
more machine cycles. The difference in machine cycles required to access a piece of data
found in the L1 cache versus the same book L2 cache is relatively small. However, there is a
significant difference in the number of machine cycles to access a piece of data in the same
book L2 cache versus a separate book L2 cache. To optimize for same book L2 cache hits, a
unit of work must run on a subset of the available processors in the same book.
Figure 2-4 lists the approximate cache and memory latencies on a z10:
L1 cache: 1 machine cycle
L1.5 cache: 4 machine cycles
L2 cache: variable, tens of machine cycles
Real memory: approximately 600 machine cycles
You can take advantage of CPU cache optimization by activating HiperDispatch. When
activated in z/OS, HiperDispatch provides the following capabilities:
Consistently dispatches work on the same physical CPU to optimize for L1 and L1.5 CPU
cache access.
Dispatches work on a set of physical processors in the same book to optimize same-book
L2 cache efficiency when work is suspended and resumed.
For more information about how to activate and use HiperDispatch, refer to z/OS Version 1
Release 11 Implementation, SG24-7729.
2.2.6 XML virtual storage constraint relief
In z/OS V1R11, the XML code page dependent tables that are used for z/OS XML System
Services processing are moved above the bar, which frees up the common storage below
the bar that these tables previously used.
2.2.7 XML fragment validation
In z/OS V1R12, XML System Services validating parsing performance is anticipated to
improve. A revolutionary XML fragment validation capability enables you to validate only a
portion of a document. For example, by revalidating only the single fragment being updated,
DB2 10 for z/OS pureXML can avoid the costly revalidation of the entire document, which
without this function can take many times longer.
2.2.8 Improved DB2 startup times and DSMAX with z/OS V1R12
DB2 startup can be time consuming, especially when DB2 for z/OS has to go through
lengthy data set allocation processing to open thousands of data sets during start. z/OS
V1R12 delivers functions that allow for significant data set allocation elapsed time reductions.
These improvements can result in significant DB2 start time reductions.
DB2 10 exploits the new z/OS V1R12 allocation functions to improve the performance of
allocation, deallocation, open, and close of the data sets in DB2 page sets. These functions
improve performance when opening a large number of data sets concurrently, especially for
DB2 users with a high value of DSMAX. DB2 performance tests with a DSMAX of 100,000
have also observed significant reductions in elapsed time.
DB2 APARs PM00068, PM17542, and PM18557 enable the improvements for DB2 V8 and
DB2 9.
2.2.9 CPU measurement facility
Gathering hardware performance data on customer systems with z/OS Hardware
Instrumentation Services (HIS) facilitates the improvement of z/OS-based software during
problem determination. The CPU measurement facility (CPU MF, introduced with System
z10 GA2) has the following characteristics:
It is supported by z/OS Hardware Instrumentation Services (HIS)
It records its data in SMF type 113 records
It is always active in z/OS V1R12 and must be separately activated in z/OS V1R10 and
z/OS V1R11
It provides information about the CPU cache
2.3 Synergy with IBM zEnterprise System
Today, many clients deploy their multitier workloads on heterogeneous infrastructures. For
example, their mission-critical workloads and data serving need the availability, resiliency,
security, and scalability strength of the mainframe. Meanwhile, other workloads, such as
those handling intensive computations or low-cost, non-mission-critical transactions, might be
better suited to run on UNIX or x86 architectures. Creating and managing these multiple and
heterogeneous workloads, implemented on many physically discrete servers, can lead to
inefficient and ineffective solutions.
The zEnterprise System provides an architecture that consists of heterogeneous virtualized
processors that work together as one infrastructure. The system introduces a revolution in the
end-to-end management of heterogeneous systems and offers expanded and evolved
traditional System z capabilities.
The zEnterprise System consists of the following components:
IBM zEnterprise 196 (z196)
IBM zEnterprise BladeCenter Extension (zBX)
IBM zEnterprise Unified Resource Manager (Unified Resource Manager)
Figure 2-5 summarizes these components.
Figure 2-5 IBM zEnterprise System components
The z196 is the industry's fastest and most scalable enterprise server, capable of delivering
50 BIPS (billions of instructions per second) and ideally suited for large-scale data and
transaction serving and mission-critical enterprise applications. The zBX provides logical
device integration between System z and distributed resources, hosting IBM blades and
workload-specific optimizer accelerators that deliver a lower cost per transaction. The
Unified Resource Manager unifies these resources into an ensemble over a private
zEnterprise network, extending System z qualities of service across the infrastructure, and
installs, monitors, manages, optimizes, diagnoses, and services its components.
The existing capacities for zEnterprise are improved and additional capacities added using
heterogeneous technology and Unified Resource Manager. See Figure 2-6.
Figure 2-6 IBM zEnterprise System: Capacity and scalability
Here is a closer look at each component and its capabilities:
High-end mission-critical platform: z196
The z196 is the industry's fastest and most scalable enterprise server, and it improves
upon the capabilities of its predecessor, the System z10 Enterprise Class (System z10
EC). The z196 not only has the capabilities and scalability of physical resources (for
example, processor, memory, I/O, and so on), but also offers better reliability, availability,
and serviceability (RAS). It is the ideal platform for mission-critical enterprise workloads.
Traditional workloads and data serving: z/OS
z/OS offers extremely high scalability and performance for applications and data serving,
and high availability and cross-system scalability enabled by Parallel Sysplex and
GDPS solutions. z/OS provides a highly optimized environment for application
integration and data management, with an additional performance benefit if both the
application and the data are hosted on z/OS. It provides the ideal environment for both
traditional application workloads and leading-edge technologies, as well as large scalable
data serving, especially for mission-critical workloads.
Mission-critical scale-out workload: z/VM and Linux
The z196 offers software virtualization through z/VM. The extreme virtualization
capabilities provided by z/VM enable the high virtualization of thousands of distributed
servers on Linux on System z. Linux on System z is an ideal platform for mission-critical
scale-out workloads such as web applications, BI applications, and more.
Special purpose optimizer blades
zEnterprise provides an architecture that allows you to attach IBM special purpose
optimizer blades. The first of this kind is IBM Smart Analytics Optimizer for DB2 for z/OS
V1.1, which can accelerate certain data warehouse queries for DB2 for z/OS running on
the z196, thereby reducing operational costs and improving the performance of BI
processes.
zEnterprise Unified Resource Manager
The zEnterprise Unified Resource Manager is Licensed Internal Code (LIC), also known
as firmware, that is part of the Hardware Management Console (HMC). Unified Resource
Manager is a key component of zEnterprise. It provides integrated management across all
elements of the system. Unified Resource Manager can improve your ability to integrate,
monitor, and dynamically manage heterogeneous server resources as a single, logical
virtualized environment, while contributing to cost reduction, risk management, and
service improvement.
zEnterprise ensemble
A zEnterprise ensemble is a collection of up to eight nodes, each composed of a z196 and
optionally a zBX. The physical resources of servers are managed as a single virtualized
pool by the Unified Resource Manager using the Hardware Management Console.
Private networks
Two new internal, secure networks are introduced for the zEnterprise ensemble. These
networks are the intraensemble data network (IEDN) and the intranode management
network (INMN). Existing external networks are supported as well. An IEDN is used for
application data communications. An INMN is used for platform management within a
zEnterprise. These networks have enough bandwidth for their purposes (10 Gbps for
IEDN and 1 Gbps for INMN).
Hypervisors
The IBM POWER7 blade provides PowerVM for IBM POWER7. PowerVM offers
industry-leading virtualization capabilities for AIX. This hypervisor is managed, along
with the hypervisors of z196 (PR/SM and z/VM), by a single point of control using
Unified Resource Manager.
From the DB2 subsystem point of view, we expect synergy with DB2 10: faster CPUs, more
CPUs, and more memory mean better DB2 performance and scalability.
Combined with DB2 10 improvements in buffer pool management, virtual storage
constraint relief, and latch contention reduction, DB2 applications can observe significant
cost reductions and scalability improvements on zEnterprise.
Compression hardware improvements are expected to improve DB2 data compression
performance.
The 192 MB L4 cache is expected to benefit DB2 workloads.
TLB changes are expected to improve DB2 10 performance for 1 MB page sizes.
The hybrid architecture opens opportunities for DB2 query performance acceleration.
IBM Smart Analytics Optimizer for DB2 for z/OS V1.1 requires z196 and DB2 9 for z/OS.
Preliminary measurements currently available with DB2 9 show the following results (see
Figure 2-7):
DB2 OLTP workloads observe a 1.3 to 1.6 times DB2 CPU reduction compared to System
z10 processors.
Higher DB2 CPU reduction can be achieved as the number of processors per LPAR
increases.
With DB2 10 and zEnterprise, the CPU reduction can be up to 1.8 times compared to DB2 9
and System z10 with a large number of processors per LPAR.
Figure 2-7 zEnterprise CPU reduction with DB2 9
2.4 Synergy with storage
Many DB2 for z/OS database systems have grown rapidly to accommodate terabytes of
business critical information. Fast growing data volumes and increasing I/O rates can
introduce new challenges that make it difficult to comply with existing service level
agreements (SLA). For example, recovery times are expected to stay within SLA boundaries
or must not cause outages of business critical applications.
While I/O rates are increasing, existing applications have to perform according to SLA
expectations. To support existing SLA requirements in an environment of rapidly increasing
data volumes and I/O rates, DB2 for z/OS uses features in the Data Facility Storage
Management Subsystem (DFSMS) that help to benefit from performance improvements in
DFSMS software and hardware interfaces.
2.4.1 Extended address volumes
The track addressing architecture of traditional volumes allows for relatively small gigabyte
(GB) capacity volumes, which put pressure on the 4-digit device number limit. The largest
traditional 3390 model volume (3390-54) has a capacity of 65,520 cylinders or approximately
54 GB. In a Parallel Sysplex you can define up to 65280 I/O devices such as volumes, tapes,
cartridges, printers, terminals, network interfaces, and other devices. Due to rapidly growing
data, more of these relatively small GB capacity volumes are required, making it more likely to
hit the 4-digit device number limit.
To address this issue, z/OS V1R10 introduces extended address volumes (EAV) to provide
larger volume capacity in z/OS to enable you to consolidate a large number of small GB
capacity volumes onto a significantly smaller number of large EAV (see Figure 2-8).
14 DB2 10 for z/OS Technical Overview
Figure 2-8 EAV breaking the limit
EAVs implement an architecture that provides an architectural capacity of hundreds of
terabytes of data for a single volume. For example, the maximum architectural capacity of an
EAV is 268,434,453 cylinders. However, the current EAV implementation is limited to a
capacity of 223 GB, or 262,668 cylinders, for a single volume (see Figure 2-9 on page 14).
Figure 2-9 Overview DS8000 EAV support
[Figure 2-8 and Figure 2-9 content: An EAV is a volume with more than 65,520 cylinders,
supported in z/OS V1R10 and higher, with its size limited to 223 GB (262,668 cylinders).
On the DS8000 (Release 4 Licensed Internal Code), an EAV is configured as a 3390 Model A,
a device that can be configured with 1 to 268,434,453 cylinders, theoretically up to
225 TB. For comparison, the traditional models are the 3390-3 (3 GB, 3,339 cylinders) and
the 3390-9 (9 GB at 10,017 cylinders, 27 GB at 32,760 cylinders, or 54 GB at 65,520
cylinders).]
When you consolidate large numbers of data sets that reside on multiple volumes onto bigger
EAVs, you also increase the I/O density of those EAVs. The data sets that previously
resided on multiple small-capacity volumes are now accessed concurrently through one
single large EA volume, which can increase disk I/O response times due to disk queuing. To
address this issue, you need to configure the EAV to use parallel access volumes (PAVs). The PAV
capability represents a significant performance improvement by the storage unit over
traditional I/O processing. With PAVs, your system can access a single volume from a single
host with multiple concurrent requests.
When you PAV-enable a logical volume, you statically assign a fixed number of PAVs to that
volume. If a single volume becomes overused by the workload, the number of PAVs that
are assigned to that volume might still not be sufficient to handle the I/O density of the
workload that is flooding the volume. To address this situation, the DS8000 series of disk
subsystems offers enhancements to PAV with support for HyperPAV, which enables
applications to achieve equal or better performance than PAVs alone, while also using the
same or fewer operating system resources.
For more information about PAV and HyperPAV technology, refer to DB2 9 for z/OS and
Storage Management, SG24-7823.
For additional information about EAV, refer to 3.7, Support for Extended Address Volumes
on page 62.
2.4.2 More data set types supported on EAV
Extended addressing space (EAS) eligible data sets are those that can be allocated in the
extended addressing space of an EA volume. The EAS, sometimes referred to as
cylinder-managed space, is the area of the volume above the first 65,520 cylinders.
EAS-eligible data sets can reside in track-managed or cylinder-managed space and can be
SMS-managed or non-SMS managed.
EAS-eligible data set types include VSAM, sequential, direct and partitioned data sets. As
illustrated in Figure 2-10, VSAM files became EAS eligible in z/OS V1R10, extended format
sequential data sets were added in z/OS V1R11, basic and large format sequential, BDAM,
PDS and PDSE, VSAM volume data sets (VVDS) and basic catalog structure (BCS) data sets
have become EAS eligible in z/OS V1R12.
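To make the discussion concrete, the following IDCAMS sketch allocates a VSAM linear data set that is eligible for extents in the EAS of an EA volume. All names are invented for illustration, and EATTR release-level support differs by data set type, so verify the details against the DFSMS documentation for your release:

```
//DEFLDS   EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
    /* Hypothetical names; EATTR(OPT) permits extended     */
    /* attribute DSCBs (format 8/9) and extents in the EAS */
    DEFINE CLUSTER (NAME(DB2P.SAMPLE.LDS)  -
           LINEAR                          -
           CYLINDERS(100 100)              -
           VOLUMES(EAV001)                 -
           EATTR(OPT))
/*
```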
With the new support provided in z/OS V1R12, you can define and use integrated catalog
facility (ICF) basic catalog structures (BCS) with extended addressability, allowing
catalogs to grow larger than 4 GB. The maximum size of an ICF catalog is limited by the
size of the volume on which it resides. For example, if the ICF catalog resides on a
223 GB volume, the catalog cannot grow beyond that volume size.
Using this feature, you can store a DB2-related ICF catalog in an EAS of an EA volume, which
simplifies ICF catalog administration and additionally contributes to high availability because
the ICF catalog can grow bigger and is unlikely to run out of space.
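As a sketch of how such a catalog might be defined, the following IDCAMS statement creates an ICF catalog whose extended addressability comes from a hypothetical SMS data class. The catalog, volume, and class names are placeholders, not a tested procedure:

```
//DEFUCAT  EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
    /* DCEXTEA is an assumed data class defined in SMS */
    /* with extended addressability enabled            */
    DEFINE USERCATALOG (NAME(UCAT.DB2PROD)  -
           ICFCATALOG                       -
           VOLUME(EAV001)                   -
           CYLINDERS(500 100)               -
           DATACLASS(DCEXTEA))
/*
```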
Figure 2-10 EAV support for ICF catalogs and VVDS
2.4.3 Dynamic volume expansion feature
The IBM System Storage DS8000 series supports dynamic volume expansion, which
increases the capacity of existing system volumes, while the volume remains connected to a
host system. This capability simplifies data growth by allowing volume expansion without
taking volumes offline. Using dynamic volume expansion significantly reduces the complexity
of migrating to larger volumes (beyond 65,520 cylinders).
Dynamic volume expansion is performed by the DS8000 Storage Manager and can be
requested through its web interface. You can increase 3390 volumes in size, for example,
from a 3390 model 3 to a model 9, or from a model 9 to a model A (EA volume). z/OS V1R11
introduces an interface that can be used to request dynamic volume expansion of a
3390 volume on a DS8000 from within the system.
For more information about the Dynamic Volume Expansion feature, refer to DB2 9 for z/OS
and Storage Management, SG24-7823.
2.4.4 SMS data set separation by volume
When allocating new data sets or extending existing data sets to new volumes, SMS volume
selection frequently calls the System Resources Manager (SRM) to select the best volumes.
Unfortunately, SRM might repeatedly select the same set of volumes, those that currently
have the lowest I/O delay. Poor performance or a single point of failure (SPOF) can occur
when a set of functionally related critical data sets is allocated onto the same volumes.
In z/OS V1R11, DFSMS provides a function to separate critical data sets, such as DB2
partitions, active logs, and bootstrap data sets, onto different volumes to prevent DASD
hot spots, to reduce I/O contention, and to increase data availability by putting critical
data sets behind separate controllers.
This function expands the scope of the data set separation function, previously available
only at the physical control unit (PCU) level, to the volume level. To use it, you define
volume separation groups in the data set separation profile. During data set allocation,
SMS attempts to separate data sets that are specified in the same separation group onto
different extent pools and volumes.
This function provides a facility for the installation to separate functionally related
critical data sets onto different extent pools and volumes for better performance and to
avoid SPOFs.
For more information about SMS data set separation by volume, refer to DB2 9 for z/OS and
Storage Management, SG24-7823.
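To illustrate the idea, a data set separation profile might contain groups along the following lines, one per set of critical data sets that must not share volumes. The data set names are invented, and the exact profile keywords should be checked in z/OS DFSMSdfp Storage Administration for your release:

```
SEPARATIONGROUP(VOLUME)
DSNLIST(DB2P.LOGCOPY1.DS01,DB2P.LOGCOPY2.DS01)

SEPARATIONGROUP(VOLUME)
DSNLIST(DB2P.BSDS01,DB2P.BSDS02)
```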
2.4.5 High Performance FICON
Higher CPU capacity requires greater I/O bandwidth and efficiency. High Performance FICON
(zHPF) enhances the z/Architecture and the FICON interface architecture to provide I/O
optimizations for OLTP workloads. zHPF is a data transfer protocol that is optionally employed
for accessing data from an IBM DS8000 storage subsystem. Data accessed by the following
methods can benefit from the improved transfer technique:
DB2
IMS Fast Path
Partitioned data set extended (PDSE)
Virtual Storage Access Method (VSAM)
zSeries File System (zFS)
Hierarchical file system (HFS)
Common VTOC access facility (CVAF)
ICF Catalog
Extended format sequential access method (SAM).
Performance tests with zHPF have shown performance improvements as illustrated in
Figure 2-11.
Figure 2-11 zHPF performance
IBM introduced High Performance FICON, known as zHPF, with the z10 processor. zHPF
introduces a more efficient I/O protocol that significantly reduces the back and forth
communication between devices and channels. zHPF has been further enhanced by the new
channel technology provided with the IBM z196. The FICON Express8 channels on a z196
support up to 52,000 zHPF 4 KB IOPS or up to 20,000 FICON 4 KB IOPS. When comparing
to the FICON Express4 channels, this is an improvement of about 70% for zHPF protocols
and 40% for FICON protocols. The maximum zHPF 4 KB IOPS measured on a FICON
Express8 channel is 2.6 times the maximum FICON protocol capability.
Not all media manager I/Os can use zHPF. I/Os that access discontiguous pages are
ineligible and format-writes are ineligible. On the z10, I/Os that read or write more than 64 KB
are ineligible, but this restriction is removed on the zEnterprise system.
The protocol simplification introduced by zHPF is illustrated in Figure 2-12. The example
shows a 4 KB read FICON channel program in which three information units (IUs) are sent
from the channel to the control unit and three IUs from the control unit to the channel.
zHPF cuts the total number of IUs in half, using only one IU from the channel to the
control unit and two IUs from the control unit to the channel.
Figure 2-12 zHPF link protocol comparison
For more information about zHPF, refer to IBM zEnterprise 196 I/O Performance Version 1,
which is available from:
ftp://public.dhe.ibm.com/common/ssi/ecm/en/zsw03169usen/ZSW03169USEN.PDF
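Whether zHPF is actually used is controlled at the z/OS level. As a minimal sketch, zHPF can be enabled through the IECIOSxx member of SYS1.PARMLIB (assuming the channels, switches, and control units in the I/O path all support the protocol):

```
ZHPF=YES
```

The same setting can be changed dynamically with the SETIOS ZHPF=YES console command and displayed with D IOS,ZHPF.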
DB2 for z/OS prefetch I/O
A set of measurements focused on prefetch operations performed by DB2 for z/OS. DB2
supports three types of prefetch: sequential prefetch, dynamic prefetch, and list
prefetch. In most cases, these prefetch types were not eligible for zHPF prior to the
z196 because they read more than 64 KB, but the z196 makes both sequential prefetch and
dynamic prefetch zHPF-eligible.
Starting with DB2 9, sequential prefetch can read up to 256 KB per I/O for SQL and 512 KB
per I/O for utilities, depending on how many buffers are available for prefetch. In
contrast, dynamic prefetch continues to read 128 KB per I/O but, unlike sequential
prefetch, can perform two parallel prefetch I/Os. Thus, dynamic prefetch achieves better
I/O performance than sequential prefetch.
The prefetch measurements using 4 KB page sizes show that zHPF increases the throughput
both when reading from disk and when reading from cache. The best case is DB2's dynamic
prefetch when the disks are not involved. Remember that dynamic prefetch can generate two
concurrent streams on the link, even though DB2 is reading only one data set.
[Figure 2-12 content, link protocol comparison for a 4 KB read: with FICON, the channel
and control unit exchange OPEN EXCHANGE with PREFIX CMD & DATA, READ COMMAND, CMR, 4 KB
of DATA, STATUS, and CLOSE EXCHANGE. With zHPF, the channel sends OPEN EXCHANGE with a
Transport Command IU, and the control unit returns 4 KB of data and a Transport Response
IU before CLOSE EXCHANGE. zHPF provides a much simpler link protocol than FICON.]
DB2 logging and deferred writes
If the number of 4 KB log buffers is less than or equal to 16, these writes are eligible for zHPF
on a z10. If the number of buffers is greater than 16, the z10 cannot use zHPF, but the z196
can. Measurements show that DB2 10 can achieve a slightly higher log throughput with zHPF
compared to FICON.
DB2 deferred writes write up to 128 KB for SQL (the exceptions are 256 KB writes for LOBs
and 512 KB writes for format-write in utilities), but each write I/O is limited to a range
of 180 pages. Thus, whether the update write I/Os that are generated by deferred writes
are zHPF-eligible depends on the density of the modified pages. The z196 enhancement
enables larger update writes (over 64 KB) to use zHPF. This enhancement applies to
sequential inserts, which are associated with growing table spaces. However, if a table
space is growing, it has to be preformatted, and the preformat I/Os, which are not
eligible for zHPF, are usually more of a bottleneck than the sequential deferred write
I/Os.
As stated previously, zHPF can help increase DB2 log throughput by about 3%. Although 3%
might not seem like a large number, the cumulative effect of making more DB2 I/Os eligible
for zHPF is to lower the overall utilization of the channels and host adapters, and these
lower utilizations enable more dramatic performance effects.
DB2 utilities
DB2 utilities use a variety of different types of I/Os. The z196 helps with sequential reads. The
sequential reads, from a table space or index, are now zHPF-eligible. Format writes and list
prefetch I/Os are not eligible. The sequential reads from DSORG=PS data sets are
zHPF-eligible if the data set has extended format.
Table 2-1 summarizes all of the different types of I/Os that DB2 uses. It shows which I/Os are
eligible for zHPF on the z10 and on the z196.
Table 2-1 Eligibility for zHPF of DB2 I/O types
Type of DB2 I/O z10 z196
Random single 4 KB read/write YES YES
Sequential prefetch and dynamic prefetch NO YES
DB2 work files, reads and update writes YES YES
List prefetch NO NO
Log writes <=64 K YES YES
Log writes > 64 K NO YES
Log reads (with DB2 9 or 10) NO YES
Sequential update writes NO YES
Update writes with a discontinuity NO NO
Random update writes without a discontinuity YES YES
Format and preformat NO NO
Utility table space scans (sequential prefetch) NO YES
Sequential reads from DSORG=PS, EF data sets NO YES
Sequential reads from DSORG=PS, non-EF data sets NO NO
Sequential writes to DSORG=PS NO NO
2.4.6 IBM System Storage DS8800
The DS8800, the high-performance flagship model of the DS8000 series, features IBM
POWER6+ processor technology. Compared with the previous DS8000 system (DS8300), the new
processor, along with 8 Gbps host adapters and 8 Gbps device adapters, helps the DS8800
improve sequential read and write performance. The DS8800 is the first IBM
enterprise-class control unit to use 2.5-inch serial-attached SCSI (SAS) drives in place
of 3.5-inch Fibre Channel drives, which reduces the footprint in terms of space and
energy consumption. A single frame of the DS8300 holds up to 144 drives; the DS8800 holds
up to 240 drives.
The performance can improve in the following ways:
Whereas the speeds of the host adapters and disk adapters in the DS8300 are 4 Gbps,
the speeds of the host adapters and device adapters of the DS8800 are 8 Gbps. Together
with FICON Express 8, the DS8800 enables 8 Gbps speeds to be achieved from the host
across the channel links and all the way to the disks.
A single frame of the DS8800 has twice as many device adapters as the DS8300 (which is
important to support the extra drives) and can, therefore, achieve higher throughput for
parallel sequential I/O streams.
Measurements were done of a DB2 table scan using 4 KB pages on a DS8300 and a DS8800
attached to a z196 processor, which enables DB2 prefetch I/Os to become zHPF eligible. The
buffer pool is defined large enough to enable DB2 to read 256 KB per I/O. See Figure 2-13.
Figure 2-13 DS8800 versus DS8300 for DB2 sequential scan
The results show that using FICON, the DS8800 throughput is 44% higher than the DS8300.
Using zHPF, the DS8800 throughput is 55% higher than the DS8300. And, if we compare the
DS8800 with zHPF to the DS8300 with FICON, the throughput increases 66%.
Table scans are not the only type of DB2 workload that can benefit from faster sequential I/O.
Dynamic prefetch, sequential inserts, and DB2 utilities, all become much faster. Synchronous
DB2 I/O might also benefit from 8 Gbps host adapters, but not to the degree that sequential
I/O does.
Another recent set of measurements was performed on DB2 logging. As the DB2 log buffer
queues build up, the size of the DB2 log I/Os becomes large, which requires fast channels.
FICON Express8, in combination with the fast host adapters in the DS8800, helps boost the
throughput.
Figure 2-14 shows that the DS8800 improves the maximum throughput by about 70% when
the log buffer queues become large. These measurements were done on a z196 which
enables the log I/Os exceeding 64 KB to be zHPF eligible, providing an additional boost in
performance. Figure 2-14 also shows that DB2 10 achieves higher throughput than DB2 9 for
the reason described in 13.14, Logging enhancements on page 574.
Figure 2-14 Log writes throughput comparisons
For details about the storage systems, see:
http://www.ibm.com/systems/storage/news/center/disk/enterprise/
2.4.7 DFSMS support for solid-state drives
Solid-state drives are high I/O operations per second (IOPS) enterprise-class storage
devices that are targeted at business-critical production applications, which can benefit from a
high level of fast-access storage to speed up random I/O operations. In addition to better
IOPS performance, solid-state drives offer a number of potential benefits over
electromechanical hard disk drives (HDDs), including better reliability, lower power
consumption, less heat generation, and lower acoustical noise.
In z/OS V1R11, DFSMS allows SMS policies to help direct new data set allocations to
solid-state drive volumes. Tooling also helps to identify existing data that might perform better
when placed on solid-state drive volumes. A new device performance capability table is
available for volumes that are backed by solid-state drives. Selection preference toward
solid-state drive or non-solid-state drive volumes can be achieved easily by setting the Direct
Millisecond Response Time and Direct Bias values for the storage class.
For more information about solid-state drives, refer to the following resources:
z/OS Hot Topics Newsletter, Issue 20, March 2009, Stop spinning your storage
wheels, which is available from:
http://publibz.boulder.ibm.com/epubs/pdf/e0z2n191.pdf
DB2 9 for z/OS and Storage Management, SG24-7823
Ready to Access DB2 for z/OS Data on Solid-State Drives, REDP-4537
Washington Systems Center Flash WP101466, IBM System Storage DS8000 with SSDs:
An In-Depth Look at SSD Performance in the DS8000
2.4.8 Easy Tier technology
The IBM DS8700 disk storage system now includes a technology invented by IBM Research
that can make managing data in tiers easier and more economical. The IBM System Storage
Easy Tier feature uses ongoing performance monitoring to move only the most active data to
faster solid-state drives (SSD), which can eliminate the need for manual storage tier
policies and help reduce costs.
By automatically placing the client's most critical data on SSDs, Easy Tier provides quick
access to data so that it can be analyzed for insight as needed, providing competitive
advantage. In that respect, the Easy Tier technology potentially offers an alternative to
the DFSMS policy-based SSD volume data set placement that we described in 2.4.7, DFSMS
support for solid-state drives on page 21.
For more information about the Easy Tier technology, refer to DB2 9 for z/OS and Storage
Management, SG24-7823.
2.4.9 Data set recovery of moved and deleted data sets
With versions of DFSMShsm prior to V1R11, fast replication recovery (FRRECOV) of
individual data sets was not applicable to data sets that were deleted or moved to a
different volume after the volume FlashCopy backup was taken.
Starting with DFSMShsm V1R11, you can configure the SMS copy pool to use the CAPTURE
CATALOG option. With this option enabled, DFSMShsm collects additional information during
FlashCopy volume backup that enables the FRRECOV process to restore a data set on its
original source volume, even in situations in which the data set to restore was deleted or
moved to a different volume.
Considerations when using DB2 9
In DB2 9, you can generally recover an individual table space from a DB2 system level
backup (SLB). In a z/OS V1R11 environment, this capability also supports the recovery of
deleted or moved DB2 data sets. However, DB2 9 does not allow you to recover a table space
from an SLB in the following situations:
The table space was moved by DB2 9 (for example, by a DB2 utility) after the SLB was
taken.
A RECOVER, REORG, REBUILD INDEX utility or a utility with the NOT LOGGED option
was run against the table space after the SLB was taken.
Enhancements with DB2 10
In DB2 10, these DB2 9 RECOVER considerations are removed. For example, if a REORG
utility runs after an SLB is created, DB2 restores the data set to its original name and
subsequently renames it to its current name to deal with the I and J instances of the data set
name. DB2 10 can handle that situation even if multiple REORG utilities have been executed
since the SLB was created.
2.4.10 Synergy with FlashCopy
The use of FlashCopy for data copy operations can reduce elapsed and CPU times spent in
z/OS. A z/OS FlashCopy client, such as the DFSMSdss ADRDSSU fast replication function,
waits only until the FlashCopy operation has logically completed, which normally takes a
much shorter time than the actual physical FlashCopy operation spends inside the DS8000
disk subsystem. No CPU is charged to z/OS clients for the DS8000 physical data copy
operation.
DB2 for z/OS and FlashCopy use
FlashCopy use in DB2 for z/OS began in Version 8 with the use of disk volume FlashCopy
for the DB2 BACKUP SYSTEM and RESTORE SYSTEM utilities, which helped reduce the z/OS
elapsed time required to back up and restore the entire DB2 system. For example,
backing up a few terabytes of data consisting of active logs, bootstrap data sets,
catalog, directory, and user table and index spaces can logically complete in z/OS in
just a few seconds, without application unavailability.
In DB2 9, the BACKUP SYSTEM and RESTORE SYSTEM utilities were enhanced to support
functions that are available with DFSMShsm V1R8. For example, in DB2 9, you can keep
several system backup versions on DASD or on tape, you can use the BACKUP SYSTEM utility
to perform incremental track-level FlashCopy operations, and you can use a backup created
by the BACKUP SYSTEM utility to recover individual table spaces or index spaces. Also in
DB2 9, the CHECK INDEX SHRLEVEL CHANGE utility performs consistency checks on a table
space and index space shadow that the utility creates using the DFSMSdss ADRDSSU fast
replication function, which implicitly uses FlashCopy.
For more information about FlashCopy usage in DB2 for z/OS, refer to DB2 9 for z/OS and
Storage Management, SG24-7823.
FlashCopy use by DB2 COPY utility
In DB2 10, the COPY utility is enhanced to provide an option to use the DFSMSdss fast
replication function for taking full image copies by the COPY utility or the inline COPY function
of the REORG and LOAD utilities. The DFSMSdss fast replication function invokes
FlashCopy to perform the physical data copy operation, which in turn offloads the physical
data copy operation to the DS8000 disk subsystem. As a result, no data pages need to be
read into the table space buffer pool, which by itself reduces CPU usage that is normally
caused by buffer pool getpage processing. For more information about FlashCopy use by the
DB2 10 COPY utility, refer to 11.1, Support FlashCopy enhancements on page 426.
To illustrate the performance benefit that the new FlashCopy use can provide, we created
and populated a sample table with 100,000 rows in a table space. We defined the table
space with MAXROWS 1 to force DB2 to allocate 100,000 data pages with one row per page. We
then performed two COPY utility executions, one with and one without using FlashCopy, to
compare COPY utility performance.
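A scenario along these lines can be set up with statements similar to the following sketch. The database, table space, and copy data set handling (for example, the COPYDDN template or SYSCOPY DD for the FLASHCOPY NO run) are illustrative placeholders:

```
-- SQL: force one row per page so the COPY touches many pages
CREATE TABLESPACE TSFC IN DBFC MAXROWS 1;

-- Utility control statements for the two measured runs
COPY TABLESPACE DBFC.TSFC FULL YES SHRLEVEL REFERENCE FLASHCOPY NO
COPY TABLESPACE DBFC.TSFC FULL YES SHRLEVEL REFERENCE FLASHCOPY YES
```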
Figure 2-15 shows the accounting report highlights of the utility execution using the
FLASHCOPY NO COPY utility option. In the buffer pool activity section, DB2 reads all data
pages into the local buffer pool for image copy processing.
Figure 2-15 FLASHCOPY NO COPY utility accounting report
Figure 2-16 shows the accounting report highlights of the utility execution using the
FLASHCOPY YES COPY utility option. In the buffer pool activity section, DB2 does not read
the data pages that are to be processed into the local buffer pool. Instead, DB2 invokes
the DFSMSdss ADRDSSU fast replication function, which in this particular situation results
in a 97% z/OS CPU time reduction and a 94% elapsed time reduction compared to the COPY
utility execution illustrated in Figure 2-15.
Figure 2-16 FLASHCOPY YES COPY utility accounting report
2.4.11 DB2 catalog and directory now SMS managed
In DB2 10, the catalog and directory table and index spaces must be SMS managed; they are
defined as extended attribute (EA) data sets and can reside on EAV.
If you plan to use the DB2 BACKUP SYSTEM utility for backing up and recovering your DB2
subsystem data, SMS management of these data sets is also a prerequisite. Thus, it makes
sense to have your overall DB2 backup strategy in mind when you plan the installation of,
or the migration to, DB2 10 for z/OS, particularly if your catalog and directory VSAM data
sets are not yet SMS managed. Careful planning makes it possible to get your catalog and
directory SMS configuration correct both for DB2 10 and for your backup and recovery
strategy.
Figure 2-15 (COPY utility with FLASHCOPY NO):

HIGHLIGHTS
--------------------
PARALLELISM: UTILITY

TIMES/EVENTS  APPL(CL.1)  DB2 (CL.2)    BPOOL ACTIVITY        TOTAL
ELAPSED TIME   11.381112    0.047218    GETPAGES             100299
CP CPU TIME     0.981997    0.472171    BUFFER UPDATES           58
  AGENT         0.022562    0.006620    SEQ. PREFETCH REQS     1564
  PAR.TASKS     0.959435    0.465551    PAGES READ ASYNCHR.  100093

Figure 2-16 (COPY utility with FLASHCOPY YES):

HIGHLIGHTS
--------------------
PARALLELISM: NO

TIMES/EVENTS  APPL(CL.1)  DB2 (CL.2)    BPOOL ACTIVITY        TOTAL
ELAPSED TIME    0.731253    0.034548    GETPAGES                103
CP CPU TIME     0.020734    0.004625    BUFFER UPDATES           25
  AGENT         0.020734    0.004625    SEQ. PREFETCH REQS        0
  PAR.TASKS     0.000000    0.000000    PAGES READ ASYNCHR.       0
CPU and elapsed time savings: The CPU and elapsed time savings that you can achieve
using the COPY utility FlashCopy exploitation can vary, depending on I/O configuration and
table space size.
With the DB2 catalog and directory data set SMS managed, DB2 passively exploits SMS
features that are only available to SMS managed data sets. Some of the key features are:
Data set attributes, performance characteristics and management rules of data sets are
transparently defined in SMS using data classes (DATACLAS), storage classes
(STORCLAS) and management classes (MGMTCLAS).
The assignment of DATACLAS, STORCLAS, MGMTCLAS can be externally provided (for
example in the IDCAMS DEFINE CLUSTER command) and enforced through SMS
policies, also known as automatic class selection (ACS) routines.
DASD volumes are grouped into SMS storage groups (STORGRP). During data set
creation, the STORGRP assignment is transparently enforced by the SMS policy through
the STORGRP ACS routine. Within a selected STORGRP, SMS places the data set that is
to be created onto one of the volumes within that SMS STORGRP. To prevent a SMS
STORGRP from becoming full, you can provide overflow STORGRPs, and you can define
utilization thresholds that are used by SMS to send alert messages to the console in case
the STORGRP utilization threshold is exceeded. Monitoring overflow STORGRPs and
automating SMS STORGRP utilization messages provide strong interfaces for ensuring
DASD capacity availability. You can also use the information provided by IDCAMS
DCOLLECT for regular STORGRP monitoring and capacity planning.
Data set attributes are transparently assigned through SMS data classes (DATACLAS)
during data set creation. New SMS data set attributes that are required to support better
I/O performance and data set availability can be activated transparently by changing the
DATACLAS online in SMS. For example, you can change the DATACLAS volume count online, and
the change becomes active immediately for all data sets using that DATACLAS. There is no
need to run an IDCAMS ALTER command to add volumes to the data set, as you would have to
do if the data sets were non-SMS managed.
The use of SMS storage classes (STORCLAS) allows you to assign performance attributes
at the data set level, because an SMS STORCLAS is assigned during data set creation. With
non-SMS managed data sets, storage-related performance attributes are normally available
only at the volume level, affecting all data sets residing on that volume. For example,
you can support a minimum disk response time for a particular data set by assigning a
particular SMS STORCLAS that supports that performance requirement.
Some data set attributes (for example, the volume count) can be adjusted simply by
changing the SMS data class. For example, instead of using the ALTER command to add
volumes to an existing DB2 VSAM LDS data set, you can simply increase the volume count in
the data class definition that the data set uses.
We discuss SMS managed DB2 catalog and directory data sets in 12.5.1, SMS-managed
DB2 catalog and directory data sets on page 490.
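To give a flavor of how such an SMS policy is expressed, the following storage class ACS routine sketch assigns a hypothetical STORCLAS to data sets whose names match an assumed DB2 catalog and directory high-level qualifier; the qualifier and class name are invented for the example:

```
PROC STORCLAS
  FILTLIST DB2CATQ INCLUDE(DSNC10.**)   /* assumed DB2 catalog HLQ  */
  SELECT
    WHEN (&DSN = &DB2CATQ)              /* catalog/directory data   */
      SET &STORCLAS = 'SCDB2CAT'        /* hypothetical class name  */
    OTHERWISE
      SET &STORCLAS = ''
  END
END
```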
2.5 z/OS Security Server
DB2 10 introduces security enhancements to support z/OS Security Server (RACF)
features that were introduced in z/OS V1R10 and z/OS V1R11. These features, now
supported in DB2 10, are RACF password phrases and RACF identity propagation.
2.5.1 RACF password phrases
In z/OS V1R10, you can set up a RACF user to use a password phrase instead of a textual
password. A textual password can be up to eight characters in length and in most cases
must not contain special characters other than numbers and letters; for example, you
cannot use a blank in a textual password. Textual passwords are considered weak passwords,
because they are easier to guess or to crack. With a password phrase, you can use a
memorable phrase to authenticate with the database. You can use uppercase or lowercase
letters, special characters, or even spaces. Because of its length and complexity (up to
100 characters in z/OS), a password phrase is far harder to guess or crack, yet easier to
remember.
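For example, a security administrator could assign a password phrase to an existing RACF user with a command along these lines. The user ID and phrase are invented, and RACF enforces minimum phrase-length rules that depend on whether the new-password-phrase exit is active:

```
ALTUSER JOHNDOE PHRASE('A memorable sentence with spaces is fine!')
```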
For more information about how DB2 10 supports the use of password phrases, refer to
10.6.1, z/OS Security Server password phrase on page 411.
2.5.2 RACF identity propagation
In today's business world, a user authenticates to the enterprise network infrastructure using
an enterprise identity user ID (for example, the user ID that you use when you log in to an
enterprise network from your workstation). After successful authentication, users want to be
able to use the same enterprise identity user ID to authenticate to application systems and
database servers that belong to the enterprise infrastructure, which in our case includes a
DB2 for z/OS database server.
Due to z/OS Security Server (RACF) specific restrictions or limitations, an enterprise identity
user ID often cannot be used to authenticate to a z/OS server. For example, a RACF user ID
can be up to eight characters in length and might not support all the characters that you can
use in an enterprise identity user ID. Additionally, for the enterprise identity user ID to work,
you have to define one RACF user for each enterprise identity user, which causes additional
administration effort.
This case is where RACF Enterprise Identity Mapping (EIM) is most useful. With EIM support
activated in RACF, you can authenticate to DB2 for z/OS using your enterprise identity user
ID. DB2 for z/OS then uses a RACF service to map that enterprise identity user ID to an
existing RACF user ID that is used by DB2 for further authentication and security validation.
In z/OS Security Server V1R11, the EIM function is described as z/OS identity propagation.
z/OS identity propagation is fully integrated into the RACF database and has no external
dependency to an LDAP server infrastructure.
DB2 10 for z/OS supports z/OS identity propagation for Distributed Relational Database
Architecture (DRDA) clients that submit SQL to DB2 through trusted context database
connections. You can audit SQL activities performed by enterprise identity users by using the
DB2 10 policy based auditing capability.
For more information about how DB2 10 supports z/OS identity propagation and how to audit
DB2 activities performed by enterprise identity users, refer to 10.6.2, z/OS Security Server
identity propagation on page 416.
2.6 Synergy with TCP/IP
The DB2 for z/OS server uses TCP/IP functions in many places. As a consequence,
performance improvements in TCP/IP can help to improve DB2 for z/OS performance in
cases where the TCP/IP improvement provided affects a TCP/IP interface or infrastructure
that is also used by DB2.
Synergy with TCP/IP affects DB2 for z/OS as a server. In addition, DB2 client processes can
benefit from improvements in TCP/IP. For example, you can take advantage of the FTP
named pipe support introduced by z/OS V1R11 to load data into a DB2 table while the pipe is
still open for write and while an FTP client simultaneously delivers data through the UNIX
System Services named pipe.
2.6.1 z/OS V1R10 and IPv6 support
Industry sources highlight the issue of IPv4 running out of addresses. For more information,
see:
http://www.potaroo.net/tools/ipv4/index.html
To address this issue, support for IPv6 addresses was added in DB2 9 for z/OS. DB2 10 for
z/OS runs on z/OS V1R10 or later releases. z/OS V1R10 is IPv6 certified and, therefore, is
officially enabled to support DB2 10 for z/OS workloads that use IPv6 format IP
addresses.
For details about the z/OS V1R10 IPv6 certification, refer to Special Interoperability Test
Certification of the IBM z/OS Version 1.10 Operating System for IBM Mainframe Computer
Systems for Internet Protocol Version 6 Capability, US government, Defense Information
Systems Agency, Joint Interoperability Test Command, which is available from:
http://jitc.fhu.disa.mil/adv_ip/register/certs/ibmzosv110_dec08.pdf
2.6.2 z/OS UNIX System Services named pipe support in FTP
Named pipes are a feature that is provided by z/OS UNIX System Services.
UNIX System Services named pipes are similar to non-persistent message queues, allowing
unrelated application processes to communicate with each other. Named pipes provide a
method of fast interprocess communication with first-in, first-out (FIFO) ordering so that
applications can establish a one-way flow of data. A named pipe has to be explicitly deleted
by the application when it is no longer needed. After you create a named pipe using the
mkfifo command, the named pipe is visible in the UNIX System Services file system as
shown in Figure 2-17 (size 0, file type = p).
Figure 2-17 UNIX System Services file system information for a UNIX System Services named pipe
Unlike a regular file, you can open a named pipe for write and read at the same time, allowing
for overlapping processing by the pipe writer and the pipe reader. The contents of a named
pipe reside in virtual storage. UNIX System Services does not write any data to the file
system. If you read from a named pipe that is empty, the pipe reader is blocked (put on wait)
until the pipe writer writes into the named pipe. UNIX System Services named pipes are read
and written strictly in FIFO order. You cannot position a file marker to read a named pipe other
than in FIFO order. UNIX System Services removes data from a named pipe upon read. You
cannot go back and reread data from a named pipe. All writes to a named pipe are appended
to whatever is currently in the named pipe. You cannot replace the contents of a named pipe
by writing to it.
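The semantics described above can be observed with a few shell commands. This is an illustrative sketch (the scratch directory path is arbitrary) that runs in any POSIX shell, including z/OS UNIX System Services:

```shell
#!/bin/sh
# Create a named pipe in a scratch directory.
pipedir=$(mktemp -d)
mkfifo "$pipedir/npipe"

# The pipe appears in the file system with file type 'p' and size 0:
# the data itself lives in virtual storage, never on disk.
ls -l "$pipedir/npipe"

# A reader is blocked until a writer delivers data, so start the
# writer in the background before opening the pipe for read.
printf 'first\nsecond\n' > "$pipedir/npipe" &

# Reads drain the pipe strictly in FIFO order; once read, the data is
# removed from the pipe and cannot be reread or repositioned.
{ read line1; read line2; } < "$pipedir/npipe"
wait

# The named pipe must be deleted explicitly when no longer needed.
rm -r "$pipedir"
```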
FTP support for UNIX System Services named pipes
When you use FTP to transfer data into a regular data set to load that data set into a DB2
table, you cannot start the DB2 load utility until the FTP data transfer of that data set is
complete. For huge data volumes, the FTP operation can take many hours, delaying the
start of the DB2 LOAD utility by that amount of time.
Figure 2-17 listing:
mkfifo npipe; ls -l npipe
prw-r--r-- 1 DFS DFSGRP 0 Sep 24 16:11 npipe
In z/OS V1R11, you can use FTP to send your data directly into a UNIX System Services
named pipe while the DB2 LOAD utility simultaneously processes the same named pipe for
read. If the named pipe is empty, the LOAD utility is put on wait until more data arrives from
the named pipe writer. Figure 2-18 illustrates the data flow of this process.
Figure 2-18 How FTP access to UNIX named pipes works
Using FTP with UNIX System Services named pipes in the context of DB2 utilities (for
example, the DB2 LOAD and UNLOAD utilities) is just one of the use cases for which UNIX
System Services named pipes can be useful. You can use UNIX System Services named
pipes wherever doing so is useful for solving a business problem.
For example, you can use the FTP named pipe support, or write your own UNIX System
Services named pipe writer application, to continuously and asynchronously collect
information from multiple sources on z/OS and ship it for further processing to an
application that implements the UNIX System Services named pipe reader on the other
end. One such pipe reader application is the DB2 LOAD utility. However, you can also use
named pipes from your own application programs. Using named pipes from a program is
easy, because you can access named pipes as you do normal UNIX files.
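The overlapped writer/reader flow of Figure 2-18 can be mimicked with ordinary shell processes. In this sketch the background "writer" stands in for the FTP server and the foreground "reader" stands in for the LOAD utility; all names are illustrative, and neither FTP nor DB2 is actually involved:

```shell
#!/bin/sh
# Create a named pipe to connect the two processes.
workdir=$(mktemp -d)
mkfifo "$workdir/loadpipe"

# Writer (playing the FTP server): delivers records into the pipe
# one at a time, as an incoming file transfer would.
( for i in 1 2 3; do echo "record $i"; done > "$workdir/loadpipe" ) &

# Reader (playing the LOAD utility): starts immediately and processes
# records as they arrive, blocking whenever the pipe is empty. It does
# not wait for the whole "transfer" to finish before starting.
count=0
while read rec; do
  count=$((count + 1))
done < "$workdir/loadpipe"

wait
rm -r "$workdir"
```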
For more information about FTP access to UNIX System Services named pipes, refer to IBM
z/OS V1R11 Communications Server TCP/IP Implementation Volume 2: Standard
Applications, SG24-7799.
2.6.3 IPSec encryption
You can use z/OS Communications Server IP Security (IPSec) to provide end-to-end
encryption that is configured entirely in TCP/IP and is transparent to the DB2 for z/OS client
and server. Using IPSec in a DB2 for z/OS server environment is usually recommended
when all traffic needs to
be encrypted. If only a few of your applications need to access secure data, then use AT-TLS
and DB2 SSL for data encryption. Figure 2-19 illustrates the general z/OS IPSec architecture.
Figure 2-19 z/OS IPSec overview
To make IPSec solutions on z/OS more attractive, the z/OS Communications Server is
designed to allow IPSec processing to take advantage of IBM System z Integrated
Information Processors (zIIPs). The zIIP IPSecurity design allows the Communications
Server to interact with z/OS Workload Manager to have a portion of its enclave Service
Request Block (SRB) work directed to zIIPs. Beginning with z/OS Communications Server
V1R8, most of the processing related to security routines (encryption and authentication
algorithms) runs in enclave SRBs that can be dispatched to run on available zIIP
processors.
In IPSec, the following work is eligible to run on zIIP processors:
Inbound traffic is dispatched to run in enclave SRBs and, therefore, is 100% zIIP eligible.
For outbound traffic, zIIP eligibility depends on the size of the messages being sent. Each
message sent starts out on the application TCB. If the message is short, then all of the
data is sent in a single operation under that TCB. However, if the message size requires
segmentation, then all subsequent segments will be processed in an enclave SRB which
is eligible to be dispatched on a zIIP processor.
Figure 2-20 summarizes the zIIP eligibility of the IPSec workload.
Figure 2-20 IPSEC zIIP eligibility
2.6.4 SSL encryption
DB2 for z/OS provides secure SSL-encrypted communication for remote inbound and
outbound connections by exploiting TCP/IP Application Transparent Transport Layer
Security (AT-TLS), which is based on z/OS System SSL.
In z/OS V1R12, AT-TLS is improved to consume up to 30% less CPU compared to z/OS
V1R11 and earlier releases of z/OS. The improvement is achieved by eliminating one
complete data copy from the overall instruction path. Figure 2-21 explains the z/OS V1R12
AT-TLS change and the results that were observed during our testing.
Figure 2-20 text:
What IPSec workload is eligible for zIIP? The zIIP assisted IPSec function is designed to
move most of the IPSec processing from the general purpose processors to the zIIPs.
z/OS CS TCP/IP recognizes IPSec packets and routes a portion of them to an
independent enclave SRB; this workload is eligible for the zIIP.
Inbound operation (not initiated by z/OS): All inbound IPSec processing is dispatched to
enclave SRBs and is eligible for zIIP. All subsequent outbound IPSec responses from
z/OS are dispatched to enclave SRBs, which means that all encryption/decryption of
message integrity and IPSec header processing is sent to the zIIP.
Outbound operation (initiated by z/OS): An operation that starts on a TCB is not zIIP
eligible, but any inbound response or acknowledgement is SRB-based and therefore zIIP
eligible, and all subsequent outbound IPSec responses from z/OS are also zIIP eligible.
Figure 2-21 z/OS V1R12 AT-TLS in-memory encrypt / decrypt improvement
2.7 WLM
Since DB2 9 for z/OS became generally available, several WLM features and functions have
been introduced or shown to be useful in an operational DB2 environment. In this section, we
introduce these WLM functions and features and provide information about interfaces that
help in WLM administration and in DB2-related workload monitoring in WLM.
2.7.1 DSN_WLM_APPLENV stored procedure
DB2 for z/OS external stored procedures require WLM application environments (APPLENV)
to be defined and activated in z/OS Workload Manager (WLM). To define a WLM
APPLENV, you normally have to use the WLM-provided ISPF panel-driven application and
enter the WLM APPLENV definition manually. This manual process is difficult to integrate
into existing system administration processes and procedures.
To address this issue and to provide an interface for batch scripting WLM APPLENV
definitions that you need for executing your external procedures and functions, DB2 10
introduces the DSN_WLM_APPLENV stored procedure. You can call the
DSN_WLM_APPLENV stored procedure from a DB2 client, such as the DB2 command line
processor, to define, install, and activate a new WLM application environment in WLM.
Figure 2-21 text:
z/OS V1R12 AT-TLS in-memory encrypt/decrypt eliminates one complete data copy
between the z/OS Communications Server and System SSL. Prototype measurements
show up to a 41% reduction in networking CPU per transaction and a transaction rate
increase of up to 62%. All data was collected in a controlled environment; your actual
results will vary.
The DSN_WLM_APPLENV call statement shown in Example 2-1 adds and activates WLM
APPLENV DSNWLM_SAMPLE. At run time, WLM starts the APPLENV DSNWLM_SAMPLE
using the DSNWLMS JCL procedure.
Example 2-1 DSN_WLM_APPLENV stored procedure call
CALL SYSPROC.DSN_WLM_APPLENV('ADD_ACTIVATE',
'ACTIVE',
'WLMNAME(DSNWLM_SAMPLE)
DESCRIPTION(DB2 SAMPLE WLM ENVIRONMENT)
PROCNAME(DSNWLMS)
STARTPARM(DB2SSN=&IWMSSNM,APPLENV=''DSNWLM_SAMPLE'')
WLMOPT(WLM_MANAGED)', ?, ?)
Upon successful completion, the stored procedure returns the result shown in Figure 2-22.
Figure 2-22 DSN_WLM_APPLENV output
For more information about the DSN_WLM_APPLENV stored procedure, refer to DB2 10 for
z/OS Application Programming and SQL Guide, SC19-2969.
2.7.2 Classification of DRDA workloads using DB2 client information
JDBC, ODBC (DB2 CLI), and RRSAF DB2 clients can pass the following DB2 client
information to DB2 for z/OS:
Client user ID
Client workstation name
Client application or transaction name
Client accounting string
When set by the client application, DB2 for z/OS stores the client information in the DB2
accounting trace records, which enables you to analyze application performance or to profile
applications based on this client information.
This function addresses the issue of applications connecting to DB2 using the same plan
name, package collection ID, or authorization ID. For example, a JDBC application connecting
to DB2 for z/OS through DRDA always uses the DISTSERV plan in DB2 and often uses the
Figure 2-22 output:
RETURN_CODE: 0
MESSAGE: DSNT023I DSNTWLMS ADD WLM APPLICATION ENVIRONMENT DSNWLM_SAMPLE
SUCCESSFUL
APPLICATION ENVIRONMENT NAME : DSNWLM_SAMPLE
DESCRIPTION : DB2 SAMPLE WLM ENVIRONMENT
SUBSYSTEM TYPE : DB2
PROCEDURE NAME : DSNWLMS
START PARAMETERS : DB2SSN=&IWMSSNM,APPLENV='DSNWLM_SAMPLE'
STARTING OF SERVER ADDRESS SPACES FOR A SUBSYSTEM INSTANCE:
(x) MANAGED BY WLM
( ) LIMITED TO A SINGLE ADDRESS SPACE PER SYSTEM
( ) LIMITED TO A SINGLE ADDRESS SPACE PER SYSPLEX
DSNT023I DSNTWLMS ACTIVATE WLM POLICY WLMPOLY1 SUCCESSFUL
NULLID package collection ID and the standard JDBC package names. The DB2 accounting
trace records in such cases are not useful because they do not allow you to trace back to
the Java application. With DB2 client accounting information, for example, each Java
application can set a unique client program name, allowing you to create accounting reports
based on these client program names.
DB2 provides a variety of interfaces for setting client information. For example, you can set
the DB2 client information data source properties in WebSphere Application Server or in the
JDBC or DB2 CLI properties file, or you can invoke the JDBC-provided Java classes, the
DB2 CLI APIs, or the RRSAF APIs to set client information.
DB2 9 introduced the WLM_SET_CLIENT_INFO stored procedure through APAR PK74330.
When this procedure is called, it uses the procedure input parameters to invoke the RRSAF
SET_CLIENT_ID API to set the client information in DB2. For example, COGNOS can be
configured to invoke the WLM_SET_CLIENT_INFO procedure to set individual client
information for particular business intelligence (BI) workloads.
The WLM_SET_CLIENT_INFO stored procedure can also be used by local DB2 applications,
which can be useful because it keeps the complexity of handling the RRSAF SET_CLIENT_ID
API away from your application logic.
In z/OS WLM, you can use the DB2 client information for service classification and RMF
report class assignment, which takes resource accounting to a different level. It allows you to
classify DRDA workloads based on client information set by the client application and enables
you to use RMF for client information based reporting, application profiling, monitoring, and
capacity planning.
The example shown in Figure 2-23 assigns the WLM service class DDFONL and the RMF
report class ZSYNERGY when an SQL workload runs in DB2 subsystem DB0B with the
DB2 client program name set to ZSYNERGY.
Figure 2-23 WLM classification DRDA work based on program name
Subsystem-Type Xref Notes Options Help
--------------------------------------------------------------------------
Modify Rules for the Subsystem Type Row 1 to 8 of 35
Command ===> ___________________________________________ Scroll ===> PAGE
Subsystem Type . : DDF Fold qualifier names? Y (Y or N)
Description . . . DDF Work Requests
Action codes: A=After C=Copy M=Move I=Insert rule
B=Before D=Delete row R=Repeat IS=Insert Sub-rule
More ===>
--------Qualifier-------- -------Class--------
Action Type Name Start Service Report
DEFAULTS: DDFBAT ________
____ 1 SI DB0B ___ DDFDEF ________
____ 2 PC ZSYNERGY ___ DDFONL ZSYNERGY
We then use the DB2 command line processor under z/OS UNIX System Services to run
the SQL shown in Example 2-2, which invokes the WLM_SET_CLIENT_INFO stored
procedure and runs a simple SQL query.
Example 2-2 SET_CLIENT_INFO stored procedure invocation
update command options using c off;
connect to DB0B;
call SYSPROC.WLM_SET_CLIENT_INFO
('DataJoe','JoeWrkst','ZSYNERGY','ZSYNERGY ACCT STRING');
commit; -- commit activates the new client info
call sysproc.dsnwzp(?);
select count(*) from FCTEST;
TERMINATE;
While the query was running, we used the SDSF ENCLAVE display to verify the WLM
settings that were assigned. Figure 2-24 shows the SQL query listed in Example 2-2, which
ran in WLM service class DDFONL with an RMF report class of ZSYNERGY.
Figure 2-24 SDSF enclave display of DRDA request
From the SDSF ENCLAVE display, we then displayed detailed enclave information. As shown
in Figure 2-25, the SDSF process name field shows the client program name that we set using
the WLM_SET_CLIENT_INFO stored procedure invocation in Example 2-2.
Figure 2-25 SDSF enclave information for client program ZSYNERGY
Figure 2-24 listing:
Display Filter View Print Options Help
----------------------------------------------------------------
SDSF ENCLAVE DISPLAY SC63 ALL LINE 1-9
COMMAND INPUT ===> SC
NP NAME SSType Status SrvClass Per PGN RptClass
400000006D DDF ACTIVE DDFONL 1 ZSYNERGY
Figure 2-25 listing:
Enclave 400000006D on System SC63
Subsystem type DDF Plan name DISTSERV
Subsystem name DB0B Package name SYSSTAT
Priority Connection type SERVER
Userid DB2R5 Collection name NULLID
Transaction name Correlation db2jcc_appli
Transaction class Procedure name
Netid Function name DB2_DRDA
Logical unit name Performance group
Subsys collection Scheduling env
Process name ZSYNERGY
Reset NO
We then used the DB2 display thread command to display the entire DB2 client information
used at query execution time. The command output shown in Figure 2-26 shows all client
information that we previously set using the WLM_SET_CLIENT_INFO stored procedure.
Figure 2-26 DB2 display thread command showing DB2 client information
2.7.3 WLM blocked workload support
If the CPU utilization of your system is at 100%, workloads with low importance (low
dispatching priority) might not get dispatched anymore. Such low importance workloads are
CPU starved and are blocked from being dispatched due to their low dispatching priority.
They are also known as blocked workloads. Blocked workloads can impact application
availability if the
low priority work (for example, a batch job) holds locks on DB2 resources that are required by
high priority workloads (for example, CICS, IMS or WebSphere Application Server
transactions).
To address this issue, z/OS V1R9 delivered a blocked workload support function, which was
retrofitted into z/OS V1R7 and V1R8 through APAR OA17735.
The WLM blocked workload support temporarily promotes a blocked workload to a higher
dispatch priority provided the corresponding z/OS work unit (TCB or SRB) is ready-to-run but
does not get CPU service because of its low dispatch priority. WLM blocked workload support
does not consider swapped out address spaces for promotion.
For details about WLM blocked workload support, refer to WSC FLASH z/OS Availability:
Blocked Workload Support, which is available at:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10609
IEAOPTxx function
The blocked workload support provides installation control over the amount of CPU blocked
workloads can receive. The following IEAOPTxx parameters are introduced with this support:
BLWLINTHD
Parameter BLWLINTHD specifies the threshold time interval in seconds for which a
blocked address space or enclave must wait before being considered by WLM for
promotion. You can specify a value ranging from 5 to 65535 seconds. The BLWLINTHD
default value is 20 seconds.
BLWLTRPCT
Parameter BLWLTRPCT specifies in units of 0.1% how much of the CPU capacity is to be
used to promote blocked workloads. This parameter does not influence the amount of
CPU service that a single blocked address space or enclave is given. Instead, this
parameter influences how many different address spaces or enclaves can be promoted at
the same point in time. If the value specified with this parameter is not large enough,
Figure 2-26 output:
-dis thd(*)
DSNV401I -DB0B DISPLAY THREAD REPORT FOLLOWS -
DSNV402I -DB0B ACTIVE THREADS -
NAME ST A REQ ID AUTHID PLAN ASID TOKEN
SERVER SW * 24 db2jcc_appli DB2R5 DISTSERV 00A0 151
V437-WORKSTATION=JoeWrkst, USERID=DataJoe,
APPLICATION NAME=ZSYNERGY
V429 CALLING PROCEDURE=SYSPROC.DSNWZP,
PROC= , ASID=0000, WLM_ENV=DSNWLM_NUMTCB1
blocked workloads might need to wait longer than the time interval defined by
BLWLINTHD. You can specify a value ranging from 0 up to 200 (up to 200 x 0.1% = 20%).
If you specify 0, you disable blocked workload support. The BLWLTRPCT default value is 5
(5 x 0.1% = 0.5%).
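Putting the two parameters together, an IEAOPTxx parmlib member might contain entries such as the following. This is an illustrative sketch that simply restates the default values; check your installation's parmlib conventions before coding it:

```
BLWLINTHD=20   /* promote a blocked unit after 20 seconds of waiting      */
BLWLTRPCT=5    /* spend up to 0.5% (5 x 0.1%) of CPU capacity on promotes */
```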
For details about the IEAOPTxx parameters for WLM blocked workload support, refer to MVS
Initialization and Tuning Reference:
http://publibz.boulder.ibm.com/epubs/pdf/iea2e2a1.pdf
Promotion for chronic contentions
z/OS V1R10 introduces changes to the WLM IWMCNTN service that allow resource
managers, such as DB2, to tell WLM that a long lasting contention situation exists. WLM in
turn manages the dispatching priority of the resource holder according to the goals of the
most important waiter. DB2 10 uses the IWMCNTN WLM service to resolve chronic
contention situations that potentially lead to deadlock or timeout situations due to DB2 lock
holders that are blocked for long periods caused by circumstances such as lack of CPU.
IEAOPTxx parameter MaxPromoteTime
z/OS V1R10 introduces the IEAOPTxx z/OS parmlib parameter MaxPromoteTime to limit
the time that a resource holder (an address space or an enclave) that is causing a chronic
resource contention is allowed to run promoted. You specify the time in units of 10
seconds. When MaxPromoteTime is exceeded, the promotion is cancelled. During
promotion, the resource holder runs with the highest priority of all resource waiters to
guarantee the needed importance. Furthermore, the resource holder's address space
(including the address space that is associated with an enclave) is not swapped out. The
default value for the MaxPromoteTime parameter is 6, which results in a maximum
promoted run time of 60 seconds (6 x 10 seconds).
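For example, to cut the maximum promotion time from the default 60 seconds to 30 seconds, the IEAOPTxx member could carry an entry like the following (an illustrative sketch; verify the exact keyword spelling against your z/OS level's parmlib documentation):

```
MaxPromoteTime=3   /* 3 x 10 seconds = 30 seconds maximum promotion */
```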
SDSF support for chronic resource contention promoted workloads
You can configure SDSF to provide a PROMOTED column in the DISPLAY ACTIVE (DA) and
ENCLAVE (ENC) panels to show whether an address space or an enclave is promoted due
to a chronic resource contention.
SMF type 30 record
The processor accounting section of the SMF type 30 record, field SMF30CRP, contains the
CPU time that a resource holder consumed while being promoted due to a chronic resource
contention.
For details about WLM and SRM resource contention and the WLM IWMCNTN service, refer
to z/OS Version 1 Release 10 Implementation, SG24-7605.
RMF support for blocked workloads
In support of blocked workloads, RMF has extended the SMF record types 70-1 (CPU
activity) and 72-3 (workload activity) to contain information about blocked workloads and
accordingly provides extensions to the CPU Activity and Workload Activity reports.
RMF CPU activity report
The RMF postprocessor CPU activity report provides a new section with information about
blocked workloads. A sample of the new blocked workload section is shown in Figure 2-27.
Figure 2-27 Blocked workload RMF CPU activity report
PROMOTE RATE
DEFINED: Number of blocked work units which can be promoted in their dispatching
priority per second. This value is derived from IEAOPTxx parameter BLWLTRPCT.
USED (%): The utilization of the defined promote rate during the reporting interval.
This value is calculated per RMF cycle and averaged for the whole RMF interval. It
demonstrates how many trickles were actually given away (in percent of the allowed
maximum) for the RMF interval.
WAITERS FOR PROMOTE
Average: Number of TCBs/SRBs and enclaves found blocked during the interval and
not promoted according to IEAOPTxx parameter BLWLINTHD.
PEAK: The maximum number of TCBs/SRBs and enclaves found blocked and not
promoted during the interval according to the IEAOPTxx parameter BLWLINTHD. The
AVG value might be quite low although there were considerable peaks of blocked
workload. Thus, the peak value is listed as well.
As long as WAITERS FOR PROMOTE is greater than 0, the system has work being blocked
longer than the BLWLINTHD setting. In such a case, it might be advisable to increase
BLWLTRPCT. If there are still problems with blocked work holding resources for too long even
though there are no waiters in the RMF data, then decreasing the BLWLINTHD setting might
be advisable.
RMF workload activity report
The RMF postprocessor workload activity report provides the CPU time that transactions of a
certain service or report class were running at a promoted dispatching priority. Figure 2-28
provides an example of a workload activity report promoted section.
Figure 2-28 Blocked workload RMF workload activity report
Figure 2-27 listing:
BLOCKED WORKLOAD ANALYSIS
OPT PARAMETERS: BLWLTRPCT (%) 20.0 PROMOTE RATE: DEFINED 173 WAITERS FOR PROMOTE: AVG 0.153
BLWLINTHD 5 USED (%) 13 PEAK 2
Figure 2-28 listing:
SERVICE CLASS=STCHIGH RESOURCE GROUP=*NONE
CRITICAL =NONE
DESCRIPTION =High priority for STC workloads
--------------------------------------------- SERVICE CLASS(ES)
--PROMOTED--
BLK 1.489
ENQ 0.046
CRM 5.593
LCK 0.000
The workload activity promoted section shows the CPU time in seconds that transactions in
this group were running at a promoted dispatching priority, separated by the reason for the
promotion. RMF currently reports the following promotion reasons:
BLK: CPU time in seconds consumed while the dispatching priority of work with low
importance was temporarily raised to help blocked workloads
ENQ: CPU time in seconds consumed while the dispatching priority was temporarily
raised by enqueue management because the work held a resource that other work
needed
CRM: CPU time in seconds consumed while the dispatching priority was temporarily
raised by chronic resource contention management because the work held a resource that
other work needed
LCK: In HiperDispatch mode, the CPU time in seconds consumed while the dispatching
priority was temporarily raised to shorten the lock hold time of a local suspend lock held by
the work unit
2.7.4 Extend number of WLM reporting classes to 2,048
With systems and environments becoming larger, more reporting classes are needed in
WLM. In z/OS V1R11, the number of report classes is increased from 999 to 2,048. This
increase is expected to allow more fine-grained reporting of your DB2 workloads.
2.7.5 Support for enhanced WLM routing algorithms
z/OS V1R11 Communications Server enhances the server-specific WLM workload
recommendations that are used by the sysplex distributor to balance workload when
DISTMETHOD SERVERWLM is configured on the VIPADISTRIBUTE statement. These
enhancements enable WLM to do the following tasks:
Direct more workload targeted for zIIP or zAAP specialty processors to systems that have
these more affordable processors available, thereby reducing the overall cost of running
those workloads. For this function, a minimum of an IBM zIIP or IBM zAAP processor is
required.
Consider the different importance levels of displaceable capacity when determining
server-specific recommendations.
WLM can use the configuration options for server-specific recommendations only when all
systems in the sysplex are at z/OS V1R11 or later.
For more information about the support for enhanced WLM routing algorithms, refer to z/OS
V1R11.0 Communications Server New Function Summary z/OS V1R10.0-V1R11.0,
GC31-8771.
2.8 Using RMF for zIIP reporting and monitoring
DB2 for z/OS no longer provides information about zIIP eligible time that was processed on a
central processor (CP) due to unavailability of sufficient zIIP resources. As a consequence,
the information provided by DB2 for z/OS can no longer be used to determine whether further
zIIP capacity might be required for TCO optimization. To monitor zIIP eligible time that was
dispatched to a CP, you therefore need to use the RMF report options. In this section, we
outline how to set up WLM and use RMF to monitor zIIP usage for DRDA and z/OS batch
workloads using RMF report class definitions.
2.8.1 DRDA workloads
In 2.7.2, Classification of DRDA workloads using DB2 client information on page 32, we
defined a service classification for DB2 subsystem DB0B for SQL workloads that have their
DB2 client program name set to a value of ZSYNERGY and subsequently ran an SQL
workload under the ZSYNERGY client program name. Upon SQL completion, we extracted
the SMF RMF records and created an RMF workload activity report for RMF report class
ZSYNERGY. The report for that report class is shown in Figure 2-29.
Figure 2-29 RMF workload activity report for RMF report class ZSYNERGY
2.8.2 Batch workloads
You can use RMF reports for batch workload monitoring and zIIP capacity planning. As a
prerequisite for RMF reporting, you need to perform the following tasks:
Define a WLM service classification rule for the batch job or the group of jobs that you
want to monitor. For easier reporting, assign RMF report classes in your WLM
classification.
Configure SMF and RMF to gather the RMF SMF records.
Upon successful job completion, create an RMF postprocessor workload activity report.
Using RMF for zIIP reporting and monitoring: If you want to use RMF for zIIP reporting
and monitoring, you need to collect the SMF RMF records as described in Effective
zSeries Performance Monitoring Using Resource Measurement Facility, SG24-6645.
Figure 2-29 listing:
REPORT BY: POLICY=WLMPOL REPORT CLASS=ZSYNERGY
-TRANSACTIONS- TRANS-TIME HHH.MM.SS.TTT --DASD I/O-- ---SERVICE--- SERVICE TIME ---APPL %---
AVG 0.00 ACTUAL 2.24.813 SSCHRT 1.7 IOC 0 CPU 0.442 CP 0.01
MPL 0.00 EXECUTION 2.24.812 RESP 1.3 CPU 12551 SRB 0.000 AAPCP 0.00
ENDED 2 QUEUED 1 CONN 1.2 MSO 0 RCT 0.000 IIPCP 0.00
END/S 0.00 R/S AFFIN 0 DISC 0.0 SRB 0 IIT 0.000
#SWAPS 0 INELIGIBLE 0 Q+PEND 0.1 TOT 12551 HST 0.000 AAP 0.00
EXCTD 0 CONVERSION 0 IOSQ 0.0 /SEC 3 AAP 0.000 IIP 0.00
AVG ENC 0.00 STD DEV 1.21.329 IIP 0.178
REM ENC 0.00 ABSRPTN 1303
MS ENC 0.00 TRX SERV 1303
40 DB2 10 for z/OS Technical Overview
WLM classification
In the WLM service classification shown in Figure 2-30, we assign the WLM service class
BATCHMED to any job that runs in class A with a job name starting with RS. For ease of RMF
reporting, we assign a report class of RUNSTATS.
Figure 2-30 WLM classification for batch job
When we ran job RSFCTEST in class A we used the SDSF display active panel to confirm the
service class and report class assignment (see Figure 2-31 for details).
Figure 2-31 SDSF display active batch WLM classification
RMF workload activity report
Upon successful job completion we created an RMF workload activity report for the
RUNSTATS report class using the JCL shown in Example 2-3.
Example 2-3 JCL RMF workload activity report
//RMFPP EXEC PGM=ERBRMFPP
//MFPINPUT DD DISP=SHR,DSN=DB2R5.RS.RMF
//MFPMSGDS DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SYSIN DD *
SYSRPTS(WLMGL(RCPER(RUNSTATS)))
RTOD(0220,0225)
DINTV(0100)
SUMMARY(INT,TOT)
NODELTA
NOEXITS
MAXPLEN(225)
SYSOUT(A)

Content of Figure 2-30 (WLM classification for batch job):
Subsystem-Type Xref Notes Options Help
-------------------------------------------------------------------------
Modify Rules for the Subsystem Type Row 1 to 8 of
Command ===> ___________________________________________ Scroll ===> PAGE
Subsystem Type . : JES Fold qualifier names? Y (Y or N)
Description . . . Batch Jobs
Action codes: A=After C=Copy M=Move I=Insert rule
B=Before D=Delete row R=Repeat IS=Insert Sub-rule
More ===>
--------Qualifier-------- -------Class--------
Action Type Name Start Service Report
DEFAULTS: BATCHLOW BATCHDEF
____ 1 TC A ___ BATCHLOW LOWJES2
____ 2 TN RS* ___ BATCHMED RUNSTATS

Content of Figure 2-31 (SDSF display active batch WLM classification):
Display Filter View Print Options Help
-------------------------------------------------------------------------------
SDSF DA SC63 SC63 PAG 0 CPU/L/Z 4/ 3/ 0 LINE 1-1 (1)
COMMAND INPUT ===> SCROLL ===> CSR
NP JOBNAME U% CPUCrit StorCrit RptClass MemLimit Tran-Act Tran-Res Spin
RSFCTEST 12 NO NO RUNSTATS 16383PB 0:00:01 0:00:01 NO
Figure 2-32 shows the RMF workload activity report for the RUNSTATS report class.
Figure 2-32 RMF workload activity report for the RUNSTATS report class
The RMF report shown in Figure 2-32 provides information about zIIP utilization in the
SERVICE TIME and the APPL report blocks. In our example we ran a RUNSTATS utility to
collect standard statistics for a table space (all tables and indexes of that table space). In
DB2 10, you can expect RUNSTATS to redirect some of its processing to run on zIIP
processors.
The RMF SERVICE TIME report block shows a total CPU time of 0.359 seconds (which
includes CP and zIIP time) and a total IIP (zIIP) time of 0.329 seconds, which indicates a
RUNSTATS zIIP redirect ratio of about 91% (0.329 * 100 / 0.359). If you want to verify whether
any zIIP-eligible time was processed on a CP, review the IIPCP (zIIP-eligible work processed
on CP, in percent) information that is provided in the RMF APPL report block.
In our RUNSTATS utility example, the value for IIPCP is zero, indicating that there was no zIIP
eligible work processed on a CP. Therefore, we come to the conclusion that there were
sufficient zIIP resources available at the time of RUNSTATS utility execution.
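The redirect-ratio arithmetic from the report fields can be sketched as follows. This is an illustrative helper only, not an IBM tool; the function names and parameters are our own.

```python
# Illustrative sketch (not an IBM tool): derive the zIIP redirect ratio and a
# simple capacity check from the RMF SERVICE TIME and APPL % report fields.

def ziip_redirect_ratio(total_cpu_seconds: float, iip_seconds: float) -> float:
    """Percentage of the total CPU time that ran on zIIP engines."""
    if total_cpu_seconds == 0:
        return 0.0
    return iip_seconds * 100.0 / total_cpu_seconds

def ziip_capacity_sufficient(iipcp_appl_pct: float) -> bool:
    """IIPCP > 0 means zIIP-eligible work overflowed onto general CPs,
    which suggests that additional zIIP capacity might be needed."""
    return iipcp_appl_pct == 0.0

# Values from the RUNSTATS report class shown in Figure 2-32:
print(f"zIIP redirect ratio: {ziip_redirect_ratio(0.359, 0.329):.1f}%")  # 91.6%
print("zIIP capacity sufficient:", ziip_capacity_sufficient(0.00))       # True
```

The same check applies to any report class: a non-zero IIPCP value in the APPL % block is the signal that zIIP-eligible work spilled onto general processors.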
REPORT BY: POLICY=WLMPOL REPORT CLASS=RUNSTATS PERIOD=1
-TRANSACTIONS- TRANS-TIME HHH.MM.SS.TTT --DASD I/O-- ---SERVICE--- SERVICE TIME ---APPL %---
AVG 0.01 ACTUAL 5.016 SSCHRT 0.2 IOC 63 CPU 0.359 CP 0.01
MPL 0.01 EXECUTION 4.892 RESP 0.4 CPU 10182 SRB 0.000 AAPCP 0.00
ENDED 1 QUEUED 123 CONN 0.3 MSO 1173 RCT 0.000 IIPCP 0.00
END/S 0.00 R/S AFFIN 0 DISC 0.0 SRB 3 IIT 0.002
#SWAPS 1 INELIGIBLE 0 Q+PEND 0.1 TOT 11421 HST 0.000 AAP 0.00
EXCTD 0 CONVERSION 0 IOSQ 0.0 /SEC 22 AAP 0.000 IIP 0.06
AVG ENC 0.00 STD DEV 0 IIP 0.329
REM ENC 0.00 ABSRPTN 2347
MS ENC 0.00 TRX SERV 2347
GOAL: EXECUTION VELOCITY 20.0% VELOCITY MIGRATION: I/O MGMT 100% INIT MGMT 100%
RESPONSE TIME EX PERF AVG --EXEC USING%-------
SYSTEM VEL% INDX ADRSP CPU AAP IIP I/O TOT
SC63 --N/A-- 100 0.2 0.0 0.0 0.0 2.7 16 0.0
EXEC DELAYS % ----------- -USING%- --- DELAY % --- %
CRY CNT UNK IDL CRY CNT QUI
0.0 0.0 81 0.0 0.0 0.0 0.0
To reconfirm that the RUNSTATS utility zIIP eligible time matches the zIIP eligible time shown
in the RMF report, we created a DB2 accounting report for the interval in question. The
special engine time (SE CPU TIME) of the accounting report shown in Figure 2-33 matches
the zIIP eligible time of the RMF workload activity report shown in Figure 2-32.
Figure 2-33 DB2 RUNSTATS utility accounting report

MAINPACK : DSNUTIL CORRNMBR: 'BLANK' LUW INS: C6A4809D233B
PRIMAUTH : DB2R5 CONNTYPE: UTILITY LUW SEQ: 57
ORIGAUTH : DB2R5 CONNECT : UTILITY
TIMES/EVENTS APPL(CL.1) DB2 (CL.2) IFI (CL.5) CLASS 3 SUSPENSIONS ELAPSED TIME
------------ ---------- ---------- ---------- -------------------- ------------
ELAPSED TIME 4.840189 4.778304 N/P LOCK/LATCH(DB2+IRLM) 0.000000
NONNESTED 4.840189 4.778304 N/A IRLM LOCK+LATCH 0.000000
STORED PROC 0.000000 0.000000 N/A DB2 LATCH 0.000000
UDF 0.000000 0.000000 N/A SYNCHRON. I/O 0.057179
TRIGGER 0.000000 0.000000 N/A DATABASE I/O 0.007112
LOG WRITE I/O 0.050067
CP CPU TIME 0.021537 0.008329 N/P OTHER READ I/O 4.053438
AGENT 0.021537 0.008329 N/A OTHER WRTE I/O 0.000000
NONNESTED 0.021537 0.008329 N/P SER.TASK SWTCH 0.021068
STORED PRC 0.000000 0.000000 N/A UPDATE COMMIT 0.011318
UDF 0.000000 0.000000 N/A OPEN/CLOSE 0.000000
TRIGGER 0.000000 0.000000 N/A SYSLGRNG REC 0.009750
PAR.TASKS 0.000000 0.000000 N/A EXT/DEL/DEF 0.000000
OTHER SERVICE 0.000000
SECP CPU 0.000000 N/A N/A ARC.LOG(QUIES) 0.000000
LOG READ 0.000000
SE CPU TIME 0.329314 0.329314 N/A DRAIN LOCK 0.000000

The RMF zIIP support is explained in detail in DB2 9 for z/OS Technical Overview,
SG24-7330.

2.9 Warehousing on System z
For performance and security reasons, many z/OS customers, especially those who have
most of their business-critical information stored in DB2 for z/OS, want to run their BI
processes as close as possible to the place where their data is stored. Doing so provides
tight integration with existing resources on the z/OS platform and can help you deliver
high-throughput BI processes that comply with the requirements of a real-time operational
data store.

2.9.1 Cognos on System z
Cognos for Linux on System z gives you the opportunity to host a Cognos server
environment on the System z platform. Cognos for System z can run on another kind of
specialty engine, the Integrated Facility for Linux (IFL). Like zIIPs and zAAPs, an IFL is a
specialty engine that you can take advantage of to run Linux on System z applications next
to the z/OS LPAR that hosts your business-critical information stored in DB2 for z/OS. The
close proximity of the Cognos for Linux on System z server to DB2 for z/OS (both reside in
the same physical box) provides fast access to your data and allows you to apply BI
processes as efficiently and as close to your DB2 for z/OS data as possible.
zIIP eligibility
Cognos connects to DB2 z/OS as a DRDA application requestor. Therefore, SQL DML
requests can benefit from DRDA zIIP redirect. If the SQL submitted by Cognos qualifies for
parallel query processing, the additional CPU required for parallel processing is eligible to be
redirected onto a zIIP processor and provides additional TCO savings.
Synergy with System z
Using System z for your Cognos for Linux on System z environment, you benefit from the
System z hardware infrastructure that is already in place to support high or even continuous
availability for your existing z/OS applications. Since Cognos on System z version 8.4, you
can store all Cognos data, including the content store back-end data, in DB2 for z/OS; prior
to version 8.4, this was not possible. Using DB2 for z/OS for Cognos, you can take
advantage of existing DB2 backup and recovery procedures that are already in place to
support the high availability requirements of the z/OS platform.
2.10 Data encryption
z/OS offers a large breadth of cryptographic capabilities. These capabilities are highly
available and scalable and can take advantage of System z technologies, such as
Parallel Sysplex and Geographically Dispersed Parallel Sysplex (GDPS). Key management
is simpler on z/OS because a central keystore is easier to maintain than many distributed
keystores using multiple Hardware Security Modules. In addition, encryption keys
are highly secure. For secure key processing, the key never appears within, or leaves, the
System z server in the clear.
z/OS has a great depth of encryption technologies, with support for or support planned for the
following encryption standards:
- Advanced Encryption Standard (AES)
- Data Encryption Standard (DES) and Triple DES
- Secure Hashing Algorithms (SHA)
- Public Key Infrastructure (PKI)
- Elliptic Curve Cryptography (ECC)
- Galois/Counter Mode encryption for AES (GCM)
- Elliptic Curve Diffie-Hellman key derivation (ECDH)
- Elliptic Curve Digital Signature Algorithm (ECDSA)
- Keyed-Hash Message Authentication Code (HMAC)
- RSA algorithms
2.10.1 IBM TotalStorage for encryption on disk and tape
DS8000 and DS8700 features, along with z/OS and IBM Tivoli Key Lifecycle Manager
functions, allow DS8000 and DS8700 storage controllers to encrypt sensitive data in place.
Drive-level encryption is designed to have no performance impact, to be transparent to the
application and to the server, and to help minimize the costs associated with encrypting data
at rest with a solution that is simple to deploy and simple to maintain.
Two tape encryption options are also available:
- IBM System Storage tape drives (TS1120 and TS1130) with integrated encryption and
compression are intended to satisfy the needs of high-volume data archival and
backup/restore processes. Handling encryption and compression in the tape drive
offloads processing from System z servers to the tape subsystems, freeing up cycles for
your mission-critical workloads.
- The Encryption Facility for z/OS can help protect valuable assets from exposure by
encrypting data on tape prior to exchanging it with trusted business partners. The
Encryption Facility for z/OS V1.2 with OpenPGP support provides a host-based security
solution designed to help businesses protect data from loss and inadvertent or deliberate
compromise.
2.11 IBM WebSphere DataPower
IBM WebSphere DataPower is a hardware solution that is well known for its security
features and its high throughput in data transformations (for example, XML processing).
WebSphere DataPower can access existing data through WebSphere MQ and IMS Connect.
You can configure it to function as a multiprotocol gateway, because it supports protocols
such as SOAP, REST, HTTP, XML, FTP, MAILTO, SMTP, and SSH.
With the ODBC option enabled, WebSphere DataPower can access DB2 for z/OS through
DRDA. Through DRDA access, WebSphere DataPower can host IBM Data Web Services
(implemented as XSL stylesheets that query DB2 and return the DB2 response as an XML
document to the Web Services consumer).
Figure 2-34 illustrates the DataPower DB2 access capabilities.
Figure 2-34 WebSphere DataPower DRDA capabilities
If you use WebSphere DataPower to access existing data, you need to set up your DataPower
appliances to cater for workload balancing and failover capability to avoid a single point of
failure (SPOF). Starting with WebSphere DataPower firmware level 3.8.1, you can configure
a Sysplex Distributor Target Control Service on your WebSphere DataPower appliance. This
service establishes control connections with the z/OS Sysplex Distributor that allow it to
intelligently distribute traffic across multiple DataPower appliances. By letting z/OS Sysplex
Distributor manage your WebSphere DataPower appliances, you implicitly reuse the
availability features that are already in place for your Parallel Sysplex infrastructure.
(Content of Figure 2-34) A Web service client sends SOAP/HTTP, XML/HTTP, XML/MQ, or
HTTP POST/GET requests to the WebSphere DataPower XI50, which transforms the request
and the response and accesses relational data and stored procedures on the data server
through DRDA over TCP/IP, using SQL DML statements (SELECT, INSERT, UPDATE,
DELETE) and CALL.
2.12 Additional zIIP and zAAP eligibility
DB2 for z/OS began using zIIP specialty processors in Version 8 and continued to improve
total cost of ownership (TCO) by further using zIIP engines in DB2 9. DB2 10 continues this
trend and provides additional zIIP workload eligibility, as we describe in this section.
Figure 2-35 summarizes the availability of the specialty engines and their applicability to DB2
workloads.
Figure 2-35 Specialty engines applicability
For further information about zIIP eligibility as of DB2 9 for z/OS, refer to DB2 9 for z/OS
Technical Overview, SG24-7330.
(Content of Figure 2-35) Mainframe innovation, specialty engines:
- Internal Coupling Facility (ICF), 1997
- Integrated Facility for Linux (IFL), 2000
- System z Application Assist Processor (zAAP), 2004
- IBM System z Integrated Information Processor (zIIP), 2006
Eligible for zIIP: DB2 remote access, BI/DW, utilities build index and sort processing, XML
parsing, RUNSTATS, buffer pool prefetch, deferred write; z/OS XML System Services;
HiperSockets for large messages; IPSec encryption; z/OS Global Mirror (XRC); IBM GBS
Scalable Architecture for Financial Reporting; z/OS CIM Server; ISVs.
Eligible for zAAP: Java execution environment; z/OS XML System Services.
Figure 2-36 provides a quick comparison of characteristics of zAAP and zIIP.
Figure 2-36 Comparison of zAAP and zIIP
2.12.1 DB2 10 parallelism enhancements
When you use DB2 for z/OS to run parallel queries, portions of such parallel SQL requests
are zIIP eligible and can be directed to run on a zIIP speciality processor.
In DB2 10, more queries qualify for query parallelism, which in turn introduces additional zIIP
eligibility. For further details on DB2 10 additional query parallelism, refer to 13.21, Parallelism
enhancements on page 593.
2.12.2 DB2 10 RUNSTATS utility
In DB2 10, portions of the RUNSTATS utility are eligible to be redirected to run on a zIIP
processor. The degree of zIIP eligibility depends upon the statistics that you gather. If you run
RUNSTATS with no additional parameters, the zIIP eligibility can be up to 99.9%. However, if
you require more complex statistics (for example, frequency statistics), the degree of zIIP
eligibility is less.
Depending on the characteristics and complexity of your SQL workload, you might need to
collect additional statistics to support the application performance that you have to deliver to
comply with your service level agreements (SLA). Therefore, you might find a varying degree
of zIIP eligibility when you execute your RUNSTATS utility workload.
2.12.3 DB2 10 buffer pool prefetch and deferred write activities
Buffer pool prefetch, which includes dynamic prefetch, list prefetch, and sequential prefetch
activities, is 100% zIIP eligible in DB2 10. DB2 10 zIIP eligible buffer pool prefetch activities
are asynchronously initiated by the database manager address space (DBM1) and are
(Content of Figure 2-36) System z Application Assist Processor (zAAP), originally the
zSeries Application Assist Processor; available on IBM zEnterprise 196 (z196), IBM
System z10, and zSeries servers; intended to help implement new application technologies
on System z, such as Java and XML. Exploiters include:
- Java via the IBM SDK (IBM Java Virtual Machine (JVM)), such as portions of WebSphere
Application Server, IMS, DB2, CICS, and Java batch
- CIM client applications
- z/OS XML System Services, such as portions of DB2 9 (new-function mode) and later,
Enterprise COBOL V4.1 and later, Enterprise PL/I V3.8 and later, IBM XML Toolkit for
z/OS V1.9 and later, and CICS TS V4.1
A new zAAP-on-zIIP capability is also available.
Note: If you specify SINGLE, you must also specify LOGLOAD or CHKTIME.
Chapter 4. Availability 99
relief with additional active log space until you can correct the original archiving problem that
is causing DB2 to hang.
Figure 4-20 shows the syntax for the -SET LOG NEWLOG keyword. With the new command,
the procedure to add a new active log to the log inventory and to have DB2 switch to this new
log is as follows:
1. Define the VSAM clusters for the new active log data sets.
Installation job DSNTIJIN contains the sample JCL and IDCAMS DEFINE CLUSTER
statements. In our environment, we used the job listed in Example 4-7.
Example 4-7 Define clusters for the new active log data sets
//DSNTIC PROC
//* *******************************************************************
//* DIRECTORY/CATALOG AMS INVOCATION INSTREAM JCL PROCEDURE
//* *******************************************************************
//DSNTIC EXEC PGM=IDCAMS,COND=(2,LT)
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//DSNTIC PEND
//*
//DSNTDBL EXEC DSNTIC
//SYSIN DD *
DEFINE CLUSTER -
( NAME (DB0BL.LOGCOPY1.DS04) -
KILOBYTES(34560) -
LINEAR ) -
DATA -
( NAME (DB0BL.LOGCOPY1.DS04.DATA) -
) CATALOG(UCAT.DB0BLOGS)
DEFINE CLUSTER -
( NAME (DB0BL.LOGCOPY2.DS04) -
KILOBYTES(34560) -
LINEAR ) -
DATA -
( NAME (DB0BL.LOGCOPY2.DS04.DATA) -
) CATALOG(UCAT.DB0BLOGS)
2. Run the stand-alone utility DSN1LOGF to format the new active log data sets. This step is
not strictly necessary, but it is recommended.
3. Execute the -SET LOG command. Make sure that you execute it for each log data set that
you want to add, and remember that you must execute it once per log copy. We used:
-db0b set log newlog(db0bl.logcopy1.ds04) copy(1)
-db0b set log newlog(db0bl.logcopy2.ds04) copy(2)
Example 4-8 shows the -DIS LOG command output.
Example 4-8 -DIS LOG command output
DSNJ370I -DB0B DSNJC00A LOG DISPLAY 736
CURRENT COPY1 LOG = DB0BL.LOGCOPY1.DS02 IS 20% FULL
CURRENT COPY2 LOG = DB0BL.LOGCOPY2.DS02 IS 20% FULL
H/W RBA = 000004CEF6C8
H/O RBA = 00000464FFFF
FULL LOGS TO OFFLOAD = 0 OF 8
OFFLOAD TASK IS (AVAILABLE)
DSNJ371I -DB0B DB2 RESTARTED 19:52:26 JUL 27, 2010 737
RESTART RBA 0000045C5000
CHECKPOINT FREQUENCY 5 MINUTES OR 10000000 LOGRECORDS
LAST SYSTEM CHECKPOINT TAKEN 15:56:49 JUL 29, 2010
DSN9022I -DB0B DSNJC001 '-DIS LOG' NORMAL COMPLETION
As a result of the -SET LOG command, the information about the existence of the new
active log data set or sets is now stored permanently in the BSDS as though you used
DSNJU003.
4. Restart the offload. If DB2 hangs while still trying to archive the data sets that existed
before you added the new active log data sets, you can let DB2 find the new active log
data sets by issuing the following command:
-ARCHIVE LOG CANCEL OFFLOAD
This command cancels any offloading currently in progress and restarts the offload
process, beginning with the oldest active log data set that has not been offloaded and
proceeding through all active log data sets that need offloading. Any suspended offload
operations are restarted.
4.4 Preemptible backout
When DB2 9 executes a ROLLBACK or ABORT, either due to an explicit request or abnormal
termination of a thread, a service request block (SRB) is scheduled to do the backout
processing. By default, SRBs that are not part of an enclave are non-preemptible. They can
be interrupted, but they are dispatched immediately after the interrupt is serviced.
On a system with a single processor, it might seem that the system is hung or stuck in a loop
or responding slowly because the backout operation can take a long time. Furthermore, if you
cancel or restart DB2, the backout operation resumes eventually, again making it look as
though the system was hung. Generally, if there are n threads being backed out on an n-way
system, it will look like the system is hung.
With DB2 10, the SRB that performs the backout operation is scheduled to an enclave so that
it can be preempted by higher priority work. The n-thread n-way scenario can cause CPU
starvation for lower priority work, but the system is still responsive to operator commands and
higher priority work.
4.5 Support for rotating partitions
Rotating partitions helps you remove the data in existing partitions.
DB2 V8 and DB2 9 provide the ALTER TABLE ROTATE option. This option allows you to
specify ROTATE PARTITION FIRST TO LAST ENDING AT. As a result of this statement, DB2
wipes out the information in the first logical partition and makes it the new last logical
partition. The partition range for the new last logical partition must be higher than that of the
old last logical partition if the limit keys are ascending, and must be lower if the limit keys are
descending.
DB2 10 provides more flexibility. You can choose any partition (not just the first) and rotate it
to the logical end of the table space. The same rules apply for the limit keys that you specify in
the ENDING AT part of the statement.
The left side of Figure 4-21 shows the original partitioned table space with five partitions. The
partitioning key is a date column, and the limit keys are used in an ascending order. DB2 V8
introduced the concept of logical and physical partition numbers. The physical partition
number is stored in the PARTITION column of SYSIBM.SYSTABLEPART, and the logical
partition number is in the LOGICAL_PART column of the same catalog table. When you
create a partitioned table space and have not run any ROTATE statements, both numbers
are the same, as shown in Figure 4-21. The numbers change when you use ROTATE.
Figure 4-21 ROTATE 3 TO LAST sample
In our example, you want to replace all the data that is currently in Partition 3 (logical and
physical numbers still the same) with new data. Using the ALTER statement shown in
Figure 4-21, you identify Partition 3 as the partition that is being rotated; as a result, it is
emptied and goes to the logical end of the table space. The result of this ALTER is that the
now empty partition becomes logical Partition 5 while remaining physical Partition 3. Note
that the ENDING AT date of June 2009 is higher than the ending date for the old logical
Partition 5, which was May 2009.
Let us go one step further and assume that now you want to remove all the information for
April 2009. April 2009 data is stored in logical Partition 3 and physical Partition 4.
If you do not have Figure 4-21 in front of you, especially after a few rotates, you have to find
the partition number that you must specify on the ALTER statement by querying the catalog
table SYSIBM.SYSTABLEPART. Figure 4-22 shows a sample result.
Note: You must specify the physical partition number on your ALTER TABLE ... ROTATE
statement.
(Content of Figure 4-21) The original table space has five partitions with identical logical and
physical numbers and ascending date limit keys: Partition 1 = 2009 Jan, Partition 2 =
2009 Feb, Partition 3 = 2009 Mar, Partition 4 = 2009 Apr, Partition 5 = 2009 May. After the
statement
ALTER TABLE ROTTB1 ROTATE PARTITION 3 TO LAST ENDING AT ('2009-06-30') RESET;
physical Partition 3 is emptied and becomes the new logical Partition 5 with limit key
2009 Jun, while physical Partitions 4 and 5 become logical Partitions 3 and 4.
Figure 4-22 SELECT to identify right physical partition for ROTATE
To initiate the rotation, you specify the following ALTER statement:
ALTER TABLE ROTTB1 ROTATE PARTITION 4 TO LAST ENDING AT ('2009-07-31') RESET;
Figure 4-23 shows the result of this second ROTATE. The April 2009 data is no longer
available in the table space. The new ending limit key is July 2009.
Figure 4-23 ROTATE 4 TO LAST
DB2 must know which physical partition is logically in which position. This logical partition
number is used, for example, in the output of a -DIS DB(...) SPACE(...) command. If you look
at Figure 4-24, at first sight the order of the partitions that are displayed seems to be
completely unsorted. However, if you compare Figure 4-24 with Figure 4-23, you can see
that the -DIS DB command output lists the table space partitions following the sort order of
the logical partitions.
Figure 4-24 -DIS DB after two ROTATE table space executions
SELECT LIMITKEY, PARTITION, LOGICAL_PART,A.*
FROM SYSIBM.SYSTABLEPART A WHERE DBNAME = 'ROTATE'
ORDER BY LIMITKEY ;
---------+---------+---------+---------
LIMITKEY PARTITION LOGICAL_PART
---------+---------+---------+---------
2009-01-31 1 1
2009-02-28 2 2
2009-04-30 4 3
2009-05-31 5 4
2009-06-30 3 5
NAME TYPE PART STATUS
-------- ---- ----- -------
TS1 TS 0001 RW
-THRU 0002
TS1 TS 0005 RW
TS1 TS 0003 RW
-THRU 0004
TS1 TS RW
(Content of Figure 4-23) After the statement
ALTER TABLE ROTTB1 ROTATE PARTITION 4 TO LAST ENDING AT ('2009-07-31') RESET;
physical Partition 4 is emptied and becomes the new logical Partition 5 with limit key
2009 Jul. The logical order is now: Partition 1 = 2009 Jan, Partition 2 = 2009 Feb, physical
Partition 5 = 2009 May, physical Partition 3 = 2009 Jun, physical Partition 4 = 2009 Jul.
Remember that we wiped out the data for March 2009 and April 2009. Figure 4-25 shows the
relationship between limit keys and physical and logical partitions after the second rotate. If
data with a date of March 2009 or April 2009 were inserted into the table now, it would all go
into physical Partition 5. This behavior is not necessarily bad, but you want to keep it in mind
if you plan to use the ROTATE function in DB2 10 NFM.
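The effect of successive rotations on the PARTITION and LOGICAL_PART mapping can be sketched with a small model. This is a simplified illustration, not DB2 internals; the function and data layout are our own assumptions.

```python
# Simplified model (not DB2 internals) of ALTER TABLE ... ROTATE PARTITION n
# TO LAST: it tracks physical partition numbers against the logical order,
# mirroring the PARTITION and LOGICAL_PART columns of SYSIBM.SYSTABLEPART.

def rotate(parts, physical, new_limitkey):
    """parts: list of (physical_partno, limitkey) in logical order.
    The rotated physical partition is emptied and becomes the new last
    logical partition, carrying the new limit key."""
    remaining = [(p, key) for (p, key) in parts if p != physical]
    remaining.append((physical, new_limitkey))
    return remaining

# Original table space (Figure 4-21): logical order equals physical order.
parts = [(1, "2009-01-31"), (2, "2009-02-28"), (3, "2009-03-31"),
         (4, "2009-04-30"), (5, "2009-05-31")]

parts = rotate(parts, physical=3, new_limitkey="2009-06-30")  # first ROTATE
parts = rotate(parts, physical=4, new_limitkey="2009-07-31")  # second ROTATE

# Prints the LIMITKEY, PARTITION, LOGICAL_PART rows of Figure 4-25:
for logical, (physical, limitkey) in enumerate(parts, start=1):
    print(limitkey, physical, logical)
```

Running the model reproduces the catalog rows shown in Figure 4-25: the March and April partitions are gone, and physical Partitions 3 and 4 now sit at logical positions 4 and 5.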
Figure 4-25 LIMITKEY, PARTITION, and LOGICAL_PART after second rotate

LIMITKEY PARTITION LOGICAL_PART
---------+---------+---------+--------
2009-01-31 1 1
2009-02-28 2 2
2009-05-31 5 3
2009-06-30 3 4
2009-07-31 4 5

DB2 does not allow you to wipe out the last logical partition. You get the error message
shown in Figure 4-26. The message does not state clearly that the reason for the failure of
the ALTER TABLE statement is that you tried to replace the last logical partition with the
same physical partition with which it is currently associated. However, this is the reason for
this negative SQL code.

Figure 4-26 Error message that you get when trying to rotate the last partition

DSNT408I SQLCODE = -20183, ERROR: THE PARTITIONED, ADD PARTITION, ADD
PARTITIONING KEY, ALTER PARTITION, ROTATE PARTITION, OR PARTITION BY
RANGE CLAUSE SPECIFIED ON CREATE OR ALTER FOR DB2R8.ROTTB1 IS NOT
VALID

4.6 Compress on insert
Starting with DB2 V3, data compression has been widely used by DB2 for z/OS customers.
To manage compression, DB2 needs a dictionary for every partition and, up to DB2 V7,
compression dictionaries were allocated below the bar. Therefore, virtual storage constraints
limited the use of compression for large partitioned table spaces. With DB2 V8, the
compression dictionaries are allocated above the bar and, therefore, do not cause storage
constraints. As a consequence, you might want to turn on compression for additional table
spaces.
Prior to DB2 10, if you turn on compression for a table space using the ALTER TABLESPACE
statement, DB2 needs to build the compression dictionary. Compression dictionaries are built
as part of REORG TABLESPACE or LOAD utility runs. You might not be able to run LOAD or
REORG when you decide to turn on compression for a given table space.
With DB2 10 NFM, you can turn on compression with ALTER at any time, and the
compression dictionary is built when you execute the following statements:
- INSERT statements
- MERGE statements
- LOAD SHRLEVEL CHANGE
Additionally, when you LOAD XML data, a dictionary can be built specifically for the XML table
space so that the XML data is compressed in the following circumstances:
- The table space or partition is defined with COMPRESS YES
- The table space or partition has no compression dictionary built yet
- The amount of data in the table space is large enough to build the compression dictionary
The threshold is about 1.2 MB, which DB2 determines by reading the RTS statistics in
memory. When the threshold is reached, DB2 builds the dictionary asynchronously and
issues the message shown in Figure 4-27.
Figure 4-27 Message DSNU241I compression dictionary build
The DSNU241I message is issued in this case because the table space is partitioned.
DSNU231I is written out for a non-partitioned table space. These messages are issued on
the console only for compress on insert. The LOAD and REORG utilities have the same
messages, but they are written in the utility output, not on the console.
After the dictionary is built, DB2 inserts the data in compressed format. The first (at least)
1.2 MB of data in the table space remains uncompressed; the data added after the dictionary
is built is compressed.
Data rows that you insert while the dictionary is still under construction are inserted
uncompressed. When building the dictionary asynchronously, DB2 reads the existing data
with isolation level uncommitted read.
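The trigger conditions described above can be sketched as follows. This is a simplified illustration of the decision, not DB2 internals; the function name and threshold handling are our own assumptions.

```python
# Simplified sketch (not DB2 internals) of the conditions under which DB2 10
# starts the asynchronous on-the-fly dictionary build during INSERT, MERGE,
# or LOAD SHRLEVEL CHANGE processing.

THRESHOLD_BYTES = 1_310_720  # about 1.2 MB, checked against in-memory RTS data

def should_build_dictionary(compress_yes: bool,
                            dictionary_exists: bool,
                            datasize_bytes: int) -> bool:
    return (compress_yes
            and not dictionary_exists
            and datasize_bytes >= THRESHOLD_BYTES)

# Below the threshold, rows stay uncompressed and no build is triggered:
print(should_build_dictionary(True, False, 500_000))    # False
# Once enough data exists, the asynchronous dictionary build starts:
print(should_build_dictionary(True, False, 2_000_000))  # True
```

Note that once a dictionary exists, or if the table space is defined with COMPRESS NO, no on-the-fly build occurs regardless of the data volume.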
If the compression dictionary is built using LOAD REPLACE or REORG TABLESPACE, the
dictionary pages follow the system pages (header and space map). This is not the case
when the compression dictionary is built on the fly. Because there must be at least 1.2 MB
worth of data in the table space before the new compression functionality kicks in, the
dictionary pages can get any page numbers. Furthermore, the dictionary pages might not be
contiguous, as illustrated in Figure 4-28.
The left side of this figure shows the dictionary pages (DI) in the table space. When the table
space is image copied with option SYSTEMPAGES YES (see note 3), the dictionary pages
are replicated after the header and space map pages.
Message shown in Figure 4-27:
DSNU241I -DB0B DSNUZLCR - DICTIONARY WITH 4096 755
ENTRIES HAS BEEN SUCCESSFULLY BUILT FROM 598 ROWS FOR TABLE SPACE
SABI6.TS6, PARTITION 1
Inserting a dictionary: If the table space is defined as COMPRESS YES and you insert
rows, then a dictionary is built. Use COMPRESS NO if you want to insert, but not build, a
dictionary. If you ALTER to COMPRESS YES and use LOAD REPLACE or REORG, then
you can have the utilities build the dictionary.
(3) The SYSTEMPAGES YES option does not apply to FlashCopy image copies.
Figure 4-28 Compression dictionary pages spread over table space
If you use image copies as input for the UNLOAD utility and you plan on using this automatic
compression without REORG or LOAD, you must run the COPY utility with option
SYSTEMPAGES YES. This option requests the collection of all system pages during utility
execution and places a copy of them directly behind the first space map page. The original
dictionary pages remain at their original location. So, the SYSTEMPAGES YES option on
COPY TABLESPACE produces duplicates of those system pages.
If you specify SYSTEMPAGES NO, DB2 copies the table space as is. When you attempt to
unload from this image copy, the UNLOAD utility recognizes that the rows in the table space
are compressed but cannot decompress them, because it looks for dictionary pages only at
the beginning of the table space, prior to the first data page.
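The scan behavior described above can be sketched with a toy page model. This is purely illustrative; the page-type codes follow the H/SM/DI/D notation of Figure 4-28, and the function is our own assumption.

```python
# Toy model (not DB2 internals) of why UNLOAD from an image copy taken with
# SYSTEMPAGES NO cannot decompress rows: the utility looks for dictionary
# pages only between the space map page and the first data page.

def find_dictionary(pages):
    """pages: list of page-type codes in image-copy order:
    'H' header, 'SM' space map, 'DI' dictionary, 'D' data."""
    for page in pages:
        if page in ("H", "SM"):
            continue          # skip the leading system pages
        if page == "DI":
            return True       # dictionary found before any data page
        if page == "D":
            return False      # first data page reached: stop looking
    return False

# SYSTEMPAGES YES placed dictionary copies right after the space map page:
copy_systempages_yes = ["H", "SM", "DI", "DI", "D", "DI", "D"]
# SYSTEMPAGES NO copied pages as is, dictionary pages scattered among data:
copy_systempages_no = ["H", "SM", "D", "DI", "D", "DI"]

print(find_dictionary(copy_systempages_yes))  # True: rows can be expanded
print(find_dictionary(copy_systempages_no))   # False: DSNU1232I, rows ignored
```

In the second case the dictionary pages are present in the copy, but because they sit after the first data page, the sequential scan never reaches them.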
If you run REORG later, depending on your choice regarding the KEEPDICTIONARY utility
control keyword, REORG either moves the dictionary pages behind the space map page and
before the first data page or creates a new dictionary, which is then located at this point.
4.6.1 DSN1COMP considerations
Generally, you can use the DSN1COMP utility, which is available since DB2 V3, to estimate
the space savings that can be achieved by DB2 data compression in table spaces and
indexes.
If you run the DSN1COMP utility without any special option, the utility calculates the
estimated space savings based on the algorithms that are used for building the compression
dictionary during LOAD. If you use the LOAD utility to build a new compression dictionary, the
compression ratio is likely to be slightly less effective than one built during REORG. If you
specify the REORG keyword on DSN1COMP, the utility calculates the compression ratio
based on what the REORG utility would accomplish. There is no specific keyword for the
compress on INSERT method. For table spaces considerably larger than 1.2 MB, compress
on INSERT reaches compression ratios similar to those of the LOAD utility. As a
consequence, if you do not specify REORG for your estimate, the calculated savings are
about the same as for compress on INSERT.
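As a sketch, the following job estimates the compression savings that REORG would achieve for a table space. The load library and VSAM data set names are placeholders that you must replace with your own values:

```jcl
//ESTIMATE EXEC PGM=DSN1COMP,PARM='REORG'
//STEPLIB  DD DISP=SHR,DSN=DB0B.SDSNLOAD
//SYSPRINT DD SYSOUT=*
//* SYSUT1 points to the underlying VSAM linear data set of the
//* table space (catalog-name.DSNDBD.dbname.tsname.I0001.A001)
//SYSUT1   DD DISP=SHR,DSN=DB0BD.DSNDBD.SABI6.TS6.I0001.A001
```

Omitting PARM='REORG' gives the LOAD-based estimate instead, which, as noted above, approximates what compress on INSERT would achieve.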
[Figure 4-28 content: With COPY TABLESPACE SYSTEMPAGES NO, the dictionary pages
in the image copy remain at their original, not necessarily contiguous locations, and
UNLOAD DATA FROMCOPY fails with message DSNU1232I - COMPRESSED ROW IS IGNORED BECAUSE
THE DICTIONARY IS NOT AVAILABLE FOR TABLE table-name. With COPY TABLESPACE
SYSTEMPAGES YES, DB2 also copies the dictionary pages directly after the space map
page, so UNLOAD FROMCOPY can expand the rows.]
106 DB2 10 for z/OS Technical Overview
4.6.2 Checking whether the data in a table space is compressed
Let us assume that you have turned on compression for your table space and now you want
to know whether COMPRESS on INSERT actually worked and if the data is now compressed.
The easiest way to verify whether the table space is compressed is to check the SYSLOG for
the following message:
DSNU241I -DB0B DSNUZLCR - DICTIONARY WITH 4096 755
ENTRIES HAS BEEN SUCCESSFULLY BUILT FROM 598 ROWS FOR TABLE SPACE
SABI6.TS6, PARTITION 1
If you can find this message, you know that the dictionary was built for a given partition of a
table space. For a non-partitioned table space, the message is DSNU231I.
If you cannot find this message in the SYSLOG, check whether the data volume is large
enough that the threshold of 1.2 MB of data is passed and compression can theoretically
begin. You can, for example, use the following SQL query to check the real-time statistics
(RTS) tables:
SELECT DATASIZE
FROM SYSIBM.SYSTABLESPACESTATS
WHERE DBNAME='yourdb'
AND NAME='yourts';
Remember that the DATASIZE column contains the amount of data stored in your table space
in bytes; that is, you need at least 1,310,720 bytes. However, when the current value is
reflected in the catalog depends on the interval specified for externalizing RTS statistics
(the STATSINT DSNZPARM default is 30 minutes).
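If you do not want to wait for the next STATSINT interval, DB2 10 also lets you force externalization of the in-memory RTS values with the new -ACCESS DATABASE command. A sketch, with the database and table space names as placeholders:

```
-ACCESS DATABASE(yourdb) SPACENAM(yourts) MODE(STATS)
```

After the command completes, the query against SYSIBM.SYSTABLESPACESTATS shown above reflects the current in-memory values.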
4.6.3 Data is not compressed
In some cases, it might appear that you turned on compression and that the data volume is
larger than 1.2 MB, but compress on INSERT did not take place. This situation can occur if
DB2 cannot build a usable compression dictionary because of the data contents. In this case,
you do not get an error message. An error message, such as DSNU235I or DSNU245I, is
displayed on the console only if there are issues such as out-of-space conditions.
4.7 Long-running reader warning message
Although long-running readers cause fewer performance issues than long-running writers,
quite often long-running readers also cause problems. One common problem is the SWITCH
phase of online reorg, because long-running readers do not allow REORG to break in and
gain exclusive control.
DB2 V8 introduced DSNZPARM LRDRTHLD, which provides the opportunity to proactively
identify long-running readers. You can specify the number of minutes that a reader is allowed
to execute without a commit before an IFCID 313 record is cut. The problem with this solution
is that no DB2 message is produced when the specified number of minutes is exceeded. You
have to collect and format a record trace to get a report about the existence of long-running
readers.
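For example, assuming LRDRTHLD is set to a nonzero value, the IFCID 313 records could be collected through a user-defined trace class; the class number and destination here are arbitrary choices:

```
-START TRACE(PERFM) CLASS(32) IFCID(313) DEST(SMF)
```

The collected SMF records then have to be formatted with a reporting tool to identify the long-running readers.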
In addition, the default value for DSNZPARM LRDRTHLD is zero in DB2 versions 8 and 9. As
a consequence, many DB2 users have not activated this functionality.
DB2 10 addresses these issues. First, the new default for DSNZPARM LRDRTHLD is 10
minutes. This new default is also applied during release migration, so if you do not want to
turn on this functionality, you must manually overwrite it with zero. Second, besides the
IFCID 313 record, which is still cut when your applications exceed the threshold, DB2
generates the new message DSNB260I shown in Example 4-9. The message is not only
issued each time the threshold is reached; each new message also tells you how long in total
the reader has already been holding resources. In Example 4-9, the first message displays
after 10 minutes, and the second message displays after 20 minutes.
Example 4-9 Message for LRDRTHLD threshold exceeded
DSNB260I -DB0B DSNB1PCK WARNING - A READER HAS BEEN 424
RUNNING FOR 10 MINUTES
CORRELATION NAME=DB2R8
CONNECTION ID=TSO
LUWID=USIBMSC.SCPDB0B.C6615CBED23B=160
PLAN NAME=DSNESPCS
AUTHID=DB2R8
END USER ID=*
TRANSACTION NAME=*
WORKSTATION NAME=*
DSNB260I -DB0B DSNB1PCK WARNING - A READER HAS BEEN 430
RUNNING FOR 20 MINUTES
CORRELATION NAME=DB2R8
CONNECTION ID=TSO
LUWID=USIBMSC.SCPDB0B.C6615CBED23B=160
PLAN NAME=DSNESPCS
AUTHID=DB2R8
END USER ID=*
TRANSACTION NAME=*
WORKSTATION NAME=*
Depending on how well your applications behave on your various DB2 systems, the current
default setting of 10 minutes can generate a large number of messages. After migrating to
DB2 10 conversion mode (CM), check the MSTR address space and, if necessary, change
the value to a higher number to prevent the system spool from being flooded by these
messages.
Long-running reader warning messages are available starting with DB2 10 CM.
4.8 Online REORG enhancements
DB2 10 improves the usability and performance of online reorganization in several ways.
This release of DB2 for z/OS supports the reorganization of disjoint partition ranges of a
partitioned table space and improves SWITCH phase performance and diagnostics. It also
removes restrictions that are related to the online reorganization of base table spaces that
use LOB columns.
In DB2 10 new-function mode, the syntax for the REORG TABLESPACE statement is
changed. For partitioned table spaces, the PART specification is extended to allow for multiple
parts or part ranges, and the SHRLEVEL REFERENCE and CHANGE specifications are
extended to add the new keyword AUX YES/NO. This new keyword allows for the
reorganization of the LOB and XML table spaces that are associated with the base table. It is
also possible to specify SHRLEVEL CHANGE for REORG of a LOB.
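A sketch of the extended syntax, with hypothetical object names, reorganizing two disjoint partition ranges together with the associated LOB and XML table spaces:

```
REORG TABLESPACE MyDB.MySpace
  PART(1:3,8:10)
  SHRLEVEL CHANGE
  AUX YES
```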
This is just a brief overview of how online REORG has been improved. Refer to 11.4, Online
REORG enhancements on page 452 for more information about these enhancements.
4.9 Increased availability for CHECK utilities
DB2 10 provides increased availability and consistency for the CHECK DATA utility and the
CHECK LOB utility.
In previous releases of DB2 for z/OS, the CHECK DATA and CHECK LOB utilities set
restrictive states on the table space after completion. With DB2 10, if these utilities find
inconsistencies and if the object was not previously restricted, a restrictive state is not set
automatically on the table space. The new CHECK_SETCHKP subsystem parameter
specifies whether the CHECK DATA and CHECK LOB utilities are to place inconsistent
objects in CHECK-pending status.
This is just a brief overview of how CHECK is improved. Refer to 11.5, Increased availability
for CHECK utilities on page 464 for details about this enhancement.
Chapter 5. Data sharing
In this chapter, we describe the improvements to data sharing that are introduced with
DB2 10. The main focus of these functions is on availability and performance.
This chapter includes the following topics:
Subgroup attach name
You can connect to only a subset of DB2 members, in much the same way as you can
connect to a subset of DB2 members through distributed data facility (DDF) using Location
Alias names.
Delete data sharing member
You can more easily consolidate the number of data sharing members in a group.
Buffer pool scan avoidance
DB2 now eliminates buffer pool scans when page sets change group buffer pool
dependency, removing some transaction delays.
Universal table space support for MEMBER CLUSTER
Universal table spaces now support MEMBER CLUSTER for those heavy INSERT
workloads.
Restart light handles DDF indoubt units of recovery
DB2 restart light is now able to resolve indoubt DDF units of recovery.
Auto rebuild coupling facility lock structure on long IRLM waits during restart
DB2 can recover from some types of restart delays caused by locking delays.
Log record sequence number spin avoidance for inserts to the same page
DB2 10 enhances the log record sequence number (LRSN) spin avoidance introduced in
DB2 9 to further reduce CPU time for heavy INSERT multi-row applications.
IFCID 359 for index split
Tracing is now provided to allow you to monitor where and when index leaf page splits
occur. You can then develop strategies to reduce this expensive data sharing activity.
Avoid cross invalidations
DB2 10 avoids excessive cross invalidations when a page set is converted from group
buffer pool dependent to non-group buffer pool dependent.
Recent DB2 9 enhancements
A number of data sharing enhancements are introduced through service to DB2 9 for
z/OS.
Several of these topics are also important for non-data sharing environments.
5.1 Subgroup attach name
With DB2 9, you can connect to a data sharing group by specifying either the DB2 subsystem
name or the group attach name in CICS, TSO Attach, CAF, RRSAF, JDBC, ODBC, and DB2
utilities connection requests to DB2 for z/OS.
DB2 10 enhances this function through support for subgroup attach. You can connect to only
a subset of DB2 members, in much the same way as you can connect to a subset of DB2
members through DDF. You can organize and control the DB2 members that are active and
the workload that can be performed on each member.
Consider an environment where you have a two-member data sharing group that runs on two
separate LPARs and that supports an OLTP workload. To add two new warehousing
members, both LPAR1 and LPAR2 need to have an OLTP member and a warehousing
member. You also need a way for the warehousing applications to connect to the warehousing
members while the OLTP applications continue to connect to the OLTP members. For DDF
connections, this setup was possible through location alias name support. However, for local
z/OS applications, this setup was not possible prior to the subgroup attach name that is
introduced in DB2 10.
A group attach name, if defined, acts as a generic name for all the members of a data sharing
group. You can use a subgroup attach name to specify a subset of members within a group
attachment. Group and subgroup attachments can be specified in CICS, TSO, CAF, RRSAF,
JDBC, ODBC, and DB2 utilities connections to find active DB2 subsystems.
Group attach names and subgroup attach names are defined in the IEFSSNxx parmlib
member as follows:
SUBSYS SUBNAME(ssname) INITRTN(DSN3INI)
INITPARM('DSN3EPX,cmd-prefix<,scope<,GRP<,SGRP>>>')
Where GRP is the group attach name and SGRP is the subgroup attach name. You can also
dynamically add a new DB2 subsystem to a group and subgroup attach with the SETSSI
command.
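For example, a new member could be added to a group and subgroup attach without an IPL with a SETSSI command such as the following sketch, where the member name DB5A and the subgroup name SBG2 are hypothetical:

```
SETSSI ADD,SUBNAME=DB5A,INITRTN=DSN3INI,INITPARM='DSN3EPX,-DB5A,S,DB0A,SBG2'
```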
Subgroup attach names follow the same naming rules as group attach names: they must be
1-4 characters in length and can consist of the letters A-Z, the numbers 0-9, and the symbols
$, #, and @. The names should not be the same as an existing DB2 member name (this is
not enforced).
The following example shows a data sharing group definition that uses subgroup attach:
DB1A,DSN3INI,'DSN3EPX,-DB1A,S,DB0A'
DB2A,DSN3INI,'DSN3EPX,-DB2A,S,DB0A,SBG1'
DB3A,DSN3INI,'DSN3EPX,-DB3A,S,DB0A,SBG1'
DB4A,DSN3INI,'DSN3EPX,-DB4A,S,DB0A,SBG2'
In this example, you can connect to DB1A by specifying DB1A, you can connect to DB2A or
DB3A (whichever is active on the LPAR) by specifying SBG1, you can connect to DB4A by
specifying SBG2, and you can connect to DB1A, DB2A, DB3A or DB4A (whichever is active
on the LPAR) by specifying DB0A.
Subgroup attach names are subject to certain rules and conditions, which are clarified by new
messages added to DB2:
The subgroup attach name can belong only to one group attach definition.
The subgroup attach name must have a group attach name.
The subgroup attach name cannot have the same name as the group attach name.
A DB2 member can belong only to at most one subgroup attach.
A DB2 member does not need to belong to a subgroup attach.
The -DISPLAY GROUP DETAIL command is enhanced to display all defined members of all
subgroup attach names for group attach names that are defined to the given member.
Dynamic reconfiguration of subgroup attach names is not supported, and an IPL is required
to clear subgroup attach data (as is the case with group attach). Subgroup attach names are
also subject to the same rules concerning RANDOM ATTACH as group attach names. See
5.10.1, Random group attach DSNZPARM on page 120 for a discussion about RANDOM
ATTACH.
Using the -SET SYSPARM command to change RANDOM ATTACH NO/YES changes the
member information for both group attach and subgroup attach. For example, a member that
is defined to group DSNG displays only the subgroups that are defined to DSNG.
The DSNTIPK installation panel is changed to include a SUBGRP ATTACH field that, if
specified, identifies the name of a new or existing subgroup attach name within the group
attach name that is to be used for this DB2 subsystem.
When you submit a job on a z/OS system, for example, DB2 treats the name that you
specified on the DB2 connection request as a subsystem name, a group attach name, or a
subgroup attach name. Briefly, DB2 first assumes that the name is a subsystem name and
then uses the following process:
DB2 attaches to that subsystem if it is found and active.
DB2 assumes that the attachment name is either a group attach name or a subgroup
attach name, and performs group attach processing, if either of the following conditions
is met:
The name was not found in the subsystem interface list.
The name was found, but the subsystem was not active, it has a group attach name,
and the NOGROUP option was not specified on the connection.
If the name was found, but the subsystem was not active and either it lacks a group attach
name or the NOGROUP option was specified on the connection, the connection is
unsuccessful.
For group attach processing, DB2 assumes that the name on the DB2 connection request is a
group attach name or subgroup attach name. DB2 creates a list of DB2 subsystems that are
defined to this z/OS. DB2 then searches the list either randomly or sequentially in the
order in which each subsystem was registered with the MVS Subsystem Interface (SSI),
depending on the RANDOM ATTACH parameter. Because group attach names and
subgroup attach names must be unique, there is no hierarchy, and both types of attachment
names are processed together.
Subgroup attach works upon fallback to DB2 Version 8 or Version 9 if the DB2 10 early code
is used to define a DB2 subsystem either at IPL or with the z/OS SETSSI command.
Subgroup attach is not available if DB2 Version 8 or Version 9 early code was used to define
a DB2 subsystem. There are no coexistence considerations.
The following messages are included:
DSN3118I csect-name DSN3UR00 - GROUP ATTACH NAME grpn WAS ALREADY DEFINED AS A
SUBGROUP ATTACH
DSN3119I csect-name DSN3UR00 - SUBGROUP ATTACH sgrpn DOES NOT BELONG TO GROUP
ATTACH grpn
DSN3120I csect-name DSN3UR00 - grpn CANNOT BE DEFINED AS BOTH THE GROUP ATTACH
NAME AND THE SUBGROUP ATTACH NAME
DSN3121I csect-name DSN3UR00 - SUBGROUP ATTACH sgrpn WAS ALREADY DEFINED AS A
GROUP ATTACH
DSNT563I AN ENTRY IN field-name-1 REQUIRES AN ENTRY IN field-name-2
DSNT563I THE ENTRY IN field-name-1 CANNOT BE THE SAME AS THE ENTRY IN
field-name-2
5.2 Delete data sharing member
Prior to DB2 10, a dormant data sharing member could not be deleted. This has traditionally
not been an issue, because there is little or no effort in maintaining dormant members.
However, DB2 10 brings the potential for consolidating the data sharing group to fewer
members, and being able to delete a member becomes important because you can
potentially run the same workload with fewer members. The maintenance stream has
provided this function after general availability with APARs PM31003, PM31004, PM31006,
PM31007, PM31009, PM42528, and PM51945.
There are several situations in which a DELETE MEMBER function can be of help:
Consolidation to smaller numbers of members, as System z processors have continued to
become more and more powerful, coupled with the fact that DB2 10 provides the potential
to run the same workload with fewer DB2 members.
DB2 members have been added by mistake.
You need to maintain an old copy of the BSDS for a member that is no longer needed.
(However, there is the option to reply QUIESCED to the DB2 restart messages for old
members on group restart.)
Possible complications in disaster recovery scenarios: if you do not have the BSDS for the
dormant members at the DR site, you need to reply to the WTOR message on DB2
restart.
Cloning production systems to create test systems, which might not need all members in
the test system. This, of course, depends on how you clone your production system.
DB2 10 provides a DELETE MEMBER offline function that you can use to delete dormant
DB2 members from a data sharing group.
The Change Log Inventory utility, DSNJU003, has been enhanced to delete quiesced
members. This is an offline implementation: the entire data sharing group must be brought
down so that the member record of the quiesced member to be deleted can be updated in
each member's BSDS to remove the member details.
Two new operators are introduced to the DSNJU003 utility: DELMBR, to deactivate or
destroy a member, and RSTMBR, to restore a deactivated member to the quiesced state.
DELMBR
DELMBR operates in two phases: a DEACTIV phase and a DESTROY phase.
DELMBR DEACTIV
Must be run first. It marks as deactivated the member record of the cleanly quiesced
member to be deleted in the BSDS of the member on which the utility is run. You still need
to keep the deactivated member's BSDS and log data sets in case you want to reactivate
the member.
The result of successfully running this command is that the member record in the BSDS is
marked as deactivated. The member is ignored during restart, and any attempt to restart
the deactivated member fails with an abend.
The space containing the deactivated member record in the BSDS remains in use, as
does the associated record in the SCA. Also, the member name and ID cannot be reused
while the member is in this state.
A deactivated member can be restored to the quiesced state via the RSTMBR operator.
DELMBR DESTROY
Can subsequently be run to reclaim the space in the BSDS so that it can be reused by
future new member records. After this command has been run successfully, the
deactivated member cannot be restored. A destroyed member's BSDS and logs can be
deleted. You must first deactivate a dormant DB2 member before destroying it.
RSTMBR
RSTMBR restores a previously deactivated, but not destroyed member, to the quiesced
state.
To deactivate a member, the DSNJU003 utility must be run against the BSDS data sets of all
DB2 members, including the member to be deleted and any other inactive members. So, all
DB2 members need to be shut down.
You need to run DSNJU003 against the member to be deleted to define the member as
deactivated so that it cannot be started in error. You also need to run DSNJU003 against the
deactivated member if you later want to restore it.
A member that is to be deactivated must first be cleanly quiesced, with no open SYSLGRNX
records and none of the following:
URs (inflight, in-commit, indoubt, in-abort, or postponed abort)
Retained locks
Active utilities
Castout failures
Resync required
The overall flow that you need to follow to delete a dormant DB2 member is:
1. The initial deletion.
Run DSNJU003 with DELMBR DEACTIV against the BSDS data sets of all members in
the data sharing group, including the dormant member. This step requires all members to
be brought down (only the member being deleted needs to be quiesced).
2. Restart the group.
The deleted member is marked as such and cannot be restarted.
3. Destroy the deleted member.
At some point, when the deleted member's logs are no longer needed, the member can be
destroyed, and all record of it is gone.
Note: It is your responsibility to check for the above conditions when you want to deactivate
a member, because they are not checked by DB2. Potential data integrity problems may
occur if you deactivate and destroy a DB2 member that still has outstanding work.
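As a sketch, the deactivation step might look like the following job, run against each member's BSDS pair. The load library, BSDS data set names, and the member ID of the member being deleted are placeholders, and the MEMBERID keyword form is an assumption based on the enabling APARs; verify the exact control statement syntax in the current DSNJU003 documentation:

```jcl
//DEACTIV  EXEC PGM=DSNJU003
//STEPLIB  DD DISP=SHR,DSN=DB0B.SDSNLOAD
//SYSUT1   DD DISP=SHR,DSN=DB1A.BSDS01
//SYSUT2   DD DISP=SHR,DSN=DB1A.BSDS02
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELMBR DEACTIV,MEMBERID=3
/*
```

A later DELMBR DESTROY statement with the same member ID, and an RSTMBR statement to restore a deactivated member, follow the same job pattern.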
During DB2 restart, a deactivated member is recognized and the DSNR020I WTOR is not
issued. However, DB2 restart issues a new informational message:
DSNR032I DEACTIVATED MEMBER xx WAS ENCOUNTERED AND IS BEING IGNORED.
IT WILL NOT BE STARTED
This message is issued for each deactivated member to indicate that the member's logs are
disregarded just as is done today for quiesced members.
Attempts to restart the deactivated member are terminated with a new message:
DSNR033I THIS MEMBER BEING STARTED IS MARKED AS DEACTIVATED AND CANNOT BE
STARTED. PROCESSING IS TERMINATED
and abend 04E with new reason code 00D10130 is issued.
5.3 Buffer pool scan avoidance
Prior to DB2 10, DB2 members needed to execute complete scans of the buffer pool during
certain transitions in inter-DB2 Read/Write (RW) interest for a page set/partition (p/p).
Examples include (these are not the only cases):
When a p/p transitions from P-lock S state (inter-DB2 Read Only (RO)) to P-lock IS state
(another member has declared RW interest for this p/p) then each member downgrading
the P-lock on the p/p from S to IS must scan the buffer pool which this p/p is using to
ensure that all the locally cached pages for this p/p are properly registered to the GBP for
cross invalidation (XI).
The same is true for SIX to IX lock transition (this member used to be the sole updater for
this p/p, but now another member had declared RW interest).
Usually the buffer pool scan is fast. However, as buffer pool sizes continue to grow larger and
larger, the buffer pool scan similarly takes longer and longer to execute and can sometimes
cause noticeable transaction delays.
DB2 10 eliminates all of these sequential buffer pool scans. This elimination in turn eliminates
the prospects of variable and unpredictable transaction delays for those transactions that
cause the changes in the inter-DB2 RW interest levels, especially in the case of larger buffer
pools sizes.
5.4 Universal table space support for MEMBER CLUSTER
DB2 typically tries to place data into a table using INSERT, based on the clustering sequence
as defined by the implicit clustering index (the first index created) or the explicit clustering
index. This can cause hot spots in data page ranges and high update activity in the
corresponding space map page or pages. These updates to space map page or pages and
data page or pages must be serialized among all members in a data sharing environment,
which can adversely affect INSERT/UPDATE performance in data sharing.
DB2 Version 5 introduced the MEMBER CLUSTER option of CREATE TABLESPACE
statement of a partitioned table space to address this bottleneck. MEMBER CLUSTER
causes DB2 to manage space for inserts on a member-by-member basis instead of by using
one centralized space map. The main idea of a MEMBER CLUSTER page set is that each
member has exclusive use of a set of data pages and their associated space map page.
(Each space map covers 199 data pages.) Each member would INSERT into a different set of
data pages and not share data pages, thereby eliminating contention on pages and reducing
time spent searching for pages.
MEMBER CLUSTER is used successfully to reduce contention in heavy INSERT
applications, such as inserting to journal tables.
DB2 9 introduced universal table spaces (UTS). They combine the enhanced space
search benefits of segmented table spaces with the size flexibility of partitioned table
spaces. You can have two types of UTS (which is the strategic table space for DB2):
Partition-by-growth
Partition-by-range
Several functions are supported only through UTS. However, DB2 9 does not support
MEMBER CLUSTER in a UTS, so you have to choose between the benefits of MEMBER
CLUSTER (reduced contention in a heavy INSERT environment) and the flexibility of UTS
(better space management and new functions).
DB2 10 in new-function mode removes this restriction. MEMBER CLUSTER is supported by
both partition-by-growth and range-partitioned UTS.
Each space map of a MEMBER CLUSTER UTS contains 10 segments. So, the number of
data pages that are allocated to each DB2 member varies, depending on SEGSIZE. A large
SEGSIZE value causes more data pages to be covered by the space map page, which in turn
causes the table space to grow in a multiple member data sharing environment. If the
SEGSIZE is small, then more space map pages are required. Therefore, we suggest using
SEGSIZE of 32 for MEMBER CLUSTER universal table spaces instead of the default 4.
As with prior versions of DB2, a MEMBER CLUSTER table space becomes unclustered
quickly. You need to REORG the table space to bring the data back into clustering sequence.
To create a MEMBER CLUSTER partition-by-range UTS:
CREATE TABLESPACE MySpace IN MyDB
MEMBER CLUSTER
NUMPARTS 3;
To create a MEMBER CLUSTER partition-by-growth UTS:
CREATE TABLESPACE MySpace IN MyDB
MEMBER CLUSTER
MAXPARTITIONS 10;
You can also implement MEMBER CLUSTER using an ALTER, without having to drop and
recreate the table space:
ALTER TABLESPACE MyDB.MySpace .......... MEMBER CLUSTER YES/NO
The need to use MEMBER CLUSTER can change over time, because the data access profile
can change from heavy INSERT to more query-oriented. Therefore, DB2 10 also provides
the capability to both enable and disable MEMBER CLUSTER for a universal table space.
The ALTER statement updates the catalog table SYSIBM.SYSPENDINGDDL to indicate that
there is a pending change and places the table space in advisory REORG-pending state
(AREOR). DB2 also issues the new SQLCODE +610 to indicate this fact. You are then
advised to schedule the REORG utility on the entire table space to materialize the pending
ALTER. (The REORG also restores the data clustering.) A LOAD at the table space level
also materializes the pending ALTER and honors the data clustering.
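As a sketch, you can check for the pending change with a query such as the following; the database and table space names are placeholders, and the column selection assumes the DB2 10 SYSPENDINGDDL layout, which you should verify against your catalog:

```sql
SELECT DBNAME, TSNAME, OPTION_KEYWORD, OPTION_VALUE
  FROM SYSIBM.SYSPENDINGDDL
 WHERE DBNAME = 'MYDB'
   AND TSNAME = 'MYSPACE';
```

After confirming the pending entry, running REORG TABLESPACE on the entire table space with SHRLEVEL REFERENCE or CHANGE materializes the change and removes the row from SYSPENDINGDDL.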
The MEMBER_CLUSTER column in the SYSIBM.SYSTABLESPACE catalog table is set for
MEMBER CLUSTER table spaces in DB2 10 NFM after the pending ALTER is resolved by
LOAD or REORG. Existing MEMBER CLUSTER table spaces that were created prior to
DB2 10 NFM have the TYPE column in the SYSIBM.SYSTABLESPACE catalog table set.
During the DB2 10 enabling-new-function mode (ENFM) process, these values are moved to
the MEMBER_CLUSTER column.
You can use the ALTER TABLESPACE statement to migrate a single table MEMBER
CLUSTER table space to a MEMBER CLUSTER universal table space.
If you use the same SQL syntax for a classic partitioned table space in DB2 10 NFM, DB2
creates a partition-by-range UTS. However, you can use SEGSIZE 0 with the MEMBER
CLUSTER clause and the NUMPARTS clause of the CREATE TABLESPACE statement to
create a classic partitioned table space with the MEMBER CLUSTER structure, which can be
useful if you use the DSN1COPY utility to copy one classic partitioned table space to another.
To create a MEMBER CLUSTER classic partitioned table space:
CREATE TABLESPACE MySpace in MyDB
MEMBER CLUSTER SEGSIZE 0
NUMPARTS 3;
Figure 5-1 shows the implications for RECOVER when you make structure changes, such as
ALTER MEMBER CLUSTER. When the ALTER MEMBER CLUSTER change is materialized
by a REORG, DB2 prohibits recovery to any point in time prior to that REORG. However, as
long as the REORG has not yet been run, you can still recover to any point in time, including
one after the ALTER statement was run. This behavior applies to any deferred ALTER
introduced in DB2 10, with the following exceptions:
ALTER TABLE ADD HASH and ALTER TABLE HASH SPACE
ALTER the length of an inline LOB column
These exceptions allow recovery to a point in time that is prior to the materialization.
Note also that any image copy taken prior to when the change is materialized (for example,
image copy A in Figure 5-1) cannot be used to recover to any point in time after the REORG
was run, including to the current point. DB2 cannot restore a full image copy taken prior to the
REORG and then apply logs across the REORG; this has always been the case.
Figure 5-1 MEMBER CLUSTER with RECOVER
5.5 Restart light handles DDF indoubt units of recovery
When a DB2 9 data sharing member is started in light mode (restart light), retained locks
cannot be freed for an indoubt unit of recovery if the coordinator of the unit of recovery was a
remote system. This situation occurs because DDF automatic indoubt thread resolution
(resync) functionality is required to determine the unit of recovery decision but DDF is
unavailable in light mode. In fact, the ssnmDIST address space is not even started in light
mode.
DB2 10 implements DDF restart light support to automatically resolve indoubt units of
recovery if the coordinator of the unit of recovery is a remote system.
Note that most DB2 commands are restricted in restart light mode. The only commands
available are Start DB2 or Stop DB2, Display Thread (Indoubt), and Recover (Indoubt). With
the implementation of DDF restart light support for remote indoubt units of recovery, the
START and STOP DDF commands are also available in restart light mode.
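For reference, a member is started in light mode by specifying the LIGHT(YES) option on the START DB2 command; the command prefix shown here is a placeholder:

```
-DB1A START DB2 LIGHT(YES)
```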
DB2 for z/OS in light mode is concerned only with indoubt units of recovery where DB2 for
z/OS is a participant relative to a local or remote coordinator that owns the decision. DB2 for
z/OS can also be a coordinator relative to local or remote participants, and decisions can be
provided to remote participants as long as DB2 and DDF remain started while resolving its
own participant related indoubt units of recovery. When all indoubt units of recovery are
resolved, DB2 terminates automatically even if DB2 still has coordinator responsibility with
respect to local or remote indoubt participants.
XA recover processing by a remote XA Transaction Manager requires that the XA Transaction
Manager first connect to the group's SQL port to retrieve a list of transactions (XIDs) that are
indoubt. Because the SQL port is unavailable on the light member, another member of the
group must be available and accessible through a group DVIPA. After this other member is
contacted, it returns the member DVIPA and resync port for any indoubt work (XIDs) owned
by the light member. The XA Transaction Manager can then resolve work at the light member
because its resync port is available. Thus, another DB2 member must be active for the
remote XA Transaction Manager to resolve any indoubt units of recovery on the member that
is started in light mode.
5.6 Auto rebuild coupling facility lock structure on long IRLM
waits during restart
In the past, users initiated a group restart when DB2 appeared to have stalled on restart. In
one case that was investigated, a stall occurred during DB2 restart after an internal resource
lock manager (IRLM) failure. Rather than initiating only an IRLM lock structure rebuild, both a
lock structure and a shared communications area (SCA) rebuild were started. The SCA
rebuild stalled, and the user then shut down all members and performed a group restart.
Instead, only the lock structure rebuild should have been started. The action that was taken
was disruptive, because all DB2 members must be brought down. A group restart should be
initiated only as a last resort.
In many cases, normal restart stalls are due to some incompatibility on a specific resource
that can be resolved with a simple lock structure rebuild. DB2 10 provides a function to
recover from these stalls automatically on restart. The IRLM now initiates an automatic
rebuild of the lock structure into the same coupling facility (CF) if it detects long waits from the
CF.
DB2 has a restart monitor that periodically checks the progress of DB2 restart. If the restart
monitor detects that restart has been suspended for 4 to 6 minutes on an IRLM request
(usually a lock request), DB2 notifies IRLM, and IRLM initiates the lock structure rebuild in an
attempt to allow DB2 restart to proceed.
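As a conceptual illustration of the restart monitor behavior described above, the following Java sketch models the threshold check. All names (RestartMonitor, LockManager, SUSPEND_THRESHOLD_MS) are invented for this sketch; they are not DB2 or IRLM interfaces.

```java
// Conceptual sketch only: models the DB2 10 restart-monitor behavior
// described above. Not DB2 or IRLM code.
public class RestartMonitor {
    // DB2 notifies IRLM when restart has been suspended on an IRLM request
    // for roughly 4 to 6 minutes; model the lower bound as the threshold.
    static final long SUSPEND_THRESHOLD_MS = 4 * 60 * 1000L;

    interface LockManager {
        void rebuildLockStructure(); // IRLM rebuilds into the same CF
    }

    private final LockManager irlm;
    private boolean rebuildRequested = false;

    RestartMonitor(LockManager irlm) { this.irlm = irlm; }

    // Called periodically; suspendedMs is how long restart has been
    // suspended on the current IRLM (usually lock) request.
    void checkRestartProgress(long suspendedMs) {
        if (suspendedMs >= SUSPEND_THRESHOLD_MS && !rebuildRequested) {
            rebuildRequested = true;     // notify IRLM only once
            irlm.rebuildLockStructure(); // attempt to let restart proceed
        }
    }

    boolean rebuildWasRequested() { return rebuildRequested; }
}
```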
A message is presented to the console to record this event:
DXR184I irlmx REBUILDING LOCK STRUCTURE AT REQUEST OF DBMS IRLM QUERY
The event is also recorded by a new reason code in the IFCID 267 record, as shown in
Example A-6 on page 621.
5.7 Log record sequence number spin avoidance for inserts to
the same page
DB2 9 in NFM provided a function called LRSN spin avoidance that allows for duplicate log
record sequence number (LRSN) values for consecutive log records on a given member.
Consecutive log records that are written for updates to different pages (for example, a data
page and an index page, which is a common scenario) can share the same LRSN value.
However, in DB2 9, consecutive log records for the same index or data page must still be
unique.
The DB2 member does not need to spin, consuming CPU under the log latch, to wait for the
next LRSN increment. This function can avoid significant CPU overhead and log latch
contention (LC19) in data sharing environments with heavy logging. See DB2 9 for z/OS
Performance Topics, SG24-7473, for details. This spin might still be an issue with multi-row
INSERT.
DB2 10 further extends LRSN spin avoidance. In DB2 10 NFM, consecutive log records for
inserts to the same data page can now have the same LRSN value. If consecutive log records
are for the same data page, then DB2 no longer spins waiting for the LRSN to increment. The
log apply process of recovery has been enhanced to accommodate these duplicate LRSN
values.
Duplicate LRSN values for consecutive log records for the same data page are allowed
only for INSERT type log records. DELETE and UPDATE log records still require unique
LRSN values.
This enhancement helps applications, such as those that use multi-row INSERT where the
tables that are inserted into have no indexes or only a few indexes, by further reducing CPU
utilization.
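The LRSN spin rules described in this section can be sketched as a simple decision function. This is an illustration only, with invented names; it is not DB2 logging code.

```java
// Conceptual sketch of LRSN spin avoidance. Invented names; not DB2 code.
public class LrsnSpin {
    enum Op { INSERT, UPDATE, DELETE }

    // DB2 9 NFM: consecutive log records for different pages may share an
    // LRSN; same-page records still force a spin.
    // DB2 10 NFM: additionally, INSERTs to the same data page may share an
    // LRSN; UPDATE and DELETE on the same page still need a unique LRSN.
    static boolean mustSpin(int prevPage, int nextPage, Op nextOp,
                            boolean db2Version10) {
        if (prevPage != nextPage) {
            return false; // different pages: duplicate LRSN allowed (DB2 9)
        }
        if (db2Version10 && nextOp == Op.INSERT) {
            return false; // DB2 10: same-page INSERTs may share an LRSN
        }
        return true;      // same page, UPDATE/DELETE: unique LRSN required
    }
}
```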
5.8 IFCID 359 for index split
Index leaf page splits can be expensive in data sharing and can impact application
performance. The DB2 member that is performing the index page split needs to execute two
synchronous writes to the log during the split. This process can elongate transaction
response time in some cases. The index tree P-lock can also become a source for delays if
lots of index splitting is occurring between members. So, many data sharing users care about
monitoring index page splits.
Monitoring index leaf page splits might be less important in non-data sharing environments,
but there are still benefits. To achieve optimal performance of index access, you need to keep
the index organized efficiently. However, running frequent REORGs can increase system
maintenance costs and at times be disruptive to your application workloads. Monitoring index
leaf page splits can help you decide when and how often to reorganize your key indexes.
DB2 10 provides tracing through the new IFCID 359 to help you identify index leaf
page split activity. The new IFCID 359 records when index page splits occur and on what
indexes. Example A-10 on page 622 shows the format of IFCID 359. A value of X'80' in field
QW0359FL indicates that the index was GBP-dependent during the index leaf page split.
IFCID 359 record: The IFCID 359 record is not included with any trace class. You need to
start it explicitly. A record is written at the end of every leaf page split that lists the index
and the page that was split.
5.9 Avoid cross invalidations
DB2 10 avoids excessive cross invalidations when a page set is converted from group
buffer pool dependent to non-group buffer pool dependent.
When a page set is converted to non-group buffer pool dependent (which is typically driven
by pseudo-close or physical close processing when the close activity results in the removal of
inter-DB2 R/W interest), DB2 first takes a serialization lock on the page set and then performs
castout processing for all the relevant pages in the group buffer pool. When castout is
complete, DB2 then issues a delete name request to the CF.
The delete name request instructs the CF to free the pages in the CF occupied by the page
set and to deregister the pages from the group buffer pool directory. Deregistering the pages
from the directory also causes cross invalidation requests to be percolated to all the DB2
members that also have cached pages for this page set.
DB2 10 instructs the delete name request to delete only the data entries from the group buffer
pool for the nominated page set or sets. Directory entries are left untouched, which in turn
avoids the cross invalidation requests that are percolated to the members. In time, the directory
entries are reused. Pages for the nominated page set can also reside in members'
buffer pools, but these pages should have already been made invalid if a more recent version
of the page resides in the group buffer pool. So, there is no need to send cross invalidation
requests to all the DB2 members.
This enhancement avoids situations such as IRLM lock timeouts for online transactions due to
a long-running CF delete name process in DB2. Consider the situation where the DB2 member
driving the delete name requests is in an LPAR that is a long distance away from the
CF that contains the primary GBP. There can be many thousands of directory entries to be
deleted, and for every entry deleted, the CF must send cross invalidation requests through
XCF to the individual DB2 members that also contain pages for the deleted page set. This is
the CF architecture. If the XCF requests are delayed, for example because the signals had to
travel a long distance, the delete name request might be able to handle only a few pages
at a time. So, to delete all of the entries for the page set, the delete name request might need
to be retried many times, requiring several tens of seconds. During this time, the DB2
member holds locks on the page set (for data integrity reasons), and this serialization can
cause IRLM timeouts for some transactions.
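The effect of the DB2 10 change can be illustrated with a small model of cross invalidation signal counts. The model and its names are invented for this sketch; it is not CF or DB2 code.

```java
// Conceptual sketch of the DB2 10 delete-name change. Invented model;
// not CF or DB2 code.
import java.util.List;

public class DeleteName {
    // Returns the number of cross-invalidation (XI) signals sent to other
    // members for one delete name request. Deregistering a directory entry
    // signals every member that has the page cached locally; deleting only
    // the data entry (the DB2 10 behavior) signals no one, because the
    // directory entries are left untouched for later reuse.
    static int crossInvalidations(List<Integer> cachingMembersPerPage,
                                  boolean deleteDirectoryEntries) {
        if (!deleteDirectoryEntries) {
            return 0; // DB2 10: data entries only
        }
        int signals = 0;
        for (int members : cachingMembersPerPage) {
            signals += members; // one XI signal per caching member, per page
        }
        return signals;
    }
}
```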
5.10 Recent DB2 9 enhancements
DB2 9 introduced a number of data sharing enhancements for z/OS through the service
stream. We list them in this section for completeness.
5.10.1 Random group attach DSNZPARM
DB2 9 changed the way DB2 subsystems are selected during group attach
processing. DB2 subsystems are chosen at random, rather than using the DB2 V8 behavior,
where group attachment requests are always attempted in the order of DB2 subsystem
initialization. Some customers relied on the DB2 V8 behavior to isolate batch processing from
transaction processing.
APAR PK79327 introduces the DSNZPARM parameter, RANDOMATT=YES/NO, to restore
the DB2 V8 group attachment behavior after migrating to DB2 9 and to provide additional
flexibility.
You can specify RANDOMATT=NO to exclude a DB2 member from being selected at random.
To satisfy a group attachment request, DB2 first randomly searches for an active DB2
subsystem, excluding those specifying RANDOMATT=NO. If all such subsystems are
inactive, DB2 searches for any active DB2 subsystem within the group, in the order that is
defined by the z/OS subsystem initialization, regardless of the setting of RANDOMATT.
To achieve non-randomized group attachments for all members, as in DB2 Version 8, specify
RANDOMATT=NO for all members of the data sharing group.
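The selection order described above can be sketched as follows. The Member class and the method names are invented for illustration; they are not DB2 interfaces.

```java
// Conceptual sketch of group-attach member selection with RANDOMATT.
// Invented names; not DB2 code.
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class GroupAttach {
    static class Member {
        final String name;
        final boolean active;
        final boolean randomAtt; // RANDOMATT=YES: eligible for random pick
        Member(String name, boolean active, boolean randomAtt) {
            this.name = name; this.active = active; this.randomAtt = randomAtt;
        }
    }

    // membersInInitOrder: members in z/OS subsystem initialization order.
    static Member select(List<Member> membersInInitOrder, Random rnd) {
        // Pass 1: random search among active members with RANDOMATT=YES.
        List<Member> candidates = new ArrayList<>();
        for (Member m : membersInInitOrder) {
            if (m.active && m.randomAtt) candidates.add(m);
        }
        if (!candidates.isEmpty()) {
            return candidates.get(rnd.nextInt(candidates.size()));
        }
        // Pass 2: any active member, in initialization order,
        // regardless of the RANDOMATT setting.
        for (Member m : membersInInitOrder) {
            if (m.active) return m;
        }
        return null; // no active member in the group
    }
}
```

Specifying RANDOMATT=NO for every member makes pass 1 always empty, which restores the DB2 V8 initialization-order behavior.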
5.10.2 Automatic GRECP recovery functions
DB2 9 tries to handle objects that are in GRECP state during restart. However, automatic
GRECP recovery can be delayed by lock timeouts or a deadlock with another member.
The automatic GRECP recovery process first acquires drains on all of the objects being
recovered, then performs physical opens. During the physical open, the down-level detection
code attempts to retrieve the current level-ID from SYSLGRNX. This is done using an
execution unit switch to an unrelated task, so if the main task has already drained
SYSLGRNX, the other task ends up timing out on a drain lock while trying to acquire a read
claim.
Additionally, if another member is restarting, it might be suspended on a drain lock for DBD01
if the automatic GRECP recovery process has drained DBD01, which might ultimately cause
recovery to fail for that object.
The down-level detection logic is modified with APAR PK80320 to bypass objects on restart
that are in GRECP state. Automatic GRECP processing has also been modified to not be
suspended or timed out if another member is in restart processing.
Table 5-1 lists other recommended maintenance for GRECP/LPL that should be applied to
your DB2 9 systems.
Table 5-1 DB2 9 GRECP/LPL maintenance
APAR      Brief description
PK80320   Bypass objects that are in GRECP state on restart
PM02631   Utilities in data sharing when the target object exceeds 200 partitions
PM03795   Delay during startup when several DB2 members start concurrently if any deadlocks occur
PM05255   Add DBET diagnostic data for data sharing
PM06324   Problems during GRECP/LPL recovery, or the Recovery utility, if there are two CLR log records with the same LRSN value against the same page
PM06760   During restart, AUTO-GRECP recovery processing takes too long
PM06933   The DBET GRECP derived flags are not correctly set with the GRECP data at the partition level
PM06972   AUTO-GRECP recovery failed on SYSLGRNX or takes a long time to recover SYSLGRNX
PM07357   Open-ended log range for GRECP/LPL recovery
PM11441   Long-running backout recovery during LPL/GRECP recovery
PM11446   DSNI021I incorrectly issued when GRECP/LPL was abnormally terminated
5.10.3 The -ACCESS DATABASE command enhancements
DB2 9 introduced the -ACCESS DATABASE command to force a table space, index space, or
partition to be physically opened or to remove the GBP-dependent status for a table space,
index space, or partition.
The command is enhanced with APAR PK80925 to support the use of wildcards and subset
ranges in the DATABASE and SPACENAM keywords. This enhancement is a major usability
and manageability benefit, especially for large shops who need to manage thousands of
objects.
Figure 5-2 shows the syntax of the -ACCESS DATABASE command with the wild carding that
is now supported.
Figure 5-2 ACCESS DATABASE command
5.10.4 Reduction in forced log writes
DB2 performs forced log writes after inserting into a newly formatted or allocated page on a
GBP dependent segmented or UTS. The forced log writes lead to a performance degradation
for insert operations and inconsistent log write I/O times (both elapsed time and events) for
the same job running on two different days.
APARs PK83735 and PK94122 changed DB2 9 so that the forced log writes are no longer
done when inserting into a newly formatted or allocated page for a GBP dependent
segmented or universal table space.
The syntax shown in Figure 5-2 supports the following patterns:
ACCESS DATABASE(database-name | * | dbname1:dbname2 | dbname* | *dbname | *dbname* | *dbstring1*dbstring2*)
  SPACENAM(space-name | * | spacename1:spacename2 | spacename* | *spacename | *spacename* | *spacestring1*spacestring2*)
  PART(integer | integer1:integer2)
  MODE(OPEN | NGBPDEP)
The data for an XML column is stored in a corresponding XML table.
DB2 creates the XML table space and table in the same database as the table that
contains the XML column (the base table). The XML table space is in the Unicode UTF-8
encoding scheme.
If the base table contains XML columns that support XML versions, each XML table
contains two more columns than an XML table for an XML column that does not support
XML versions. Those columns are named START_TS and END_TS, and they have the
BINARY(8) data type. The START_TS column contains the RBA or LRSN of the logical
creation of an XML record. The END_TS column contains the RBA or LRSN of the logical
deletion of an XML record. The START_TS and END_TS columns identify the rows in the
XML table that make up a version of an XML document.
The START_TS column represents the time when that row is created, and the END_TS
column represents the time when the row is deleted or updated. The END_TS column
initially contains X'FFFFFFFFFFFFFFFF'. To avoid compression causing update overflow,
columns up to the END_TS column are not compressed in the reordered row format.
A document ID column in the base table named DB2_GENERATED_DOCID_FOR_XML
with data type BIGINT
We refer to this as the DOCID column. The DOCID column holds a unique document
identifier for the XML columns in a row. One DOCID column is used for all XML columns.
The DOCID column has the GENERATED ALWAYS attribute. Therefore, a value in this
column normally cannot be NULL. However, it can be null for rows that existed before the
table was altered to add an XML column.
If the base table space supports XML versions, the XML indicator column is
eight bytes longer than the XML indicator column in a base table space that does not
support XML versions. That is, 14 bytes instead of six bytes.
An index on the DOCID column
This index is known as a document ID (or DOCID) index.
An index on each XML table that DB2 uses to maintain document order, and map logical
node IDs to physical record IDs
This index is known as a NODEID index. The NODEID index is an extended,
non-partitioning index.
If the base table space supports XML versions, the index key for the NODEID index
contains two more columns than the index key for a node ID index for a base table space
that does not support XML versions. These columns are START_TS and END_TS.
Figure 8-11 gives a general idea of how DB2 handles multiversioning for XML data.
Each row in the XML auxiliary table is associated with two temporal attributes (start and end)
that represent the period during which the row exists. Start represents the time when the row
is created; end represents the time when the row is deleted or expired.
For example, suppose that an XML document is stored as two rows in the XML auxiliary table
at time t0 and the second row is modified at t2. DB2 sets that row expired at t2 and creates a
new row, representing the modified version, with create time t2. The first row is not changed
during this process.
A row in the XML auxiliary table is never deleted until the garbage collector cleans it up.
When an XML document is deleted at time t2, all the records for that document are marked
expired at t2. When an XML document is updated as a whole, all the records for that
document are marked expired at t2, and the new document is inserted into the XML auxiliary
table with start time set to t2.
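The start/end timestamp scheme can be sketched with a small model of record visibility. Names are invented for illustration, and Long.MAX_VALUE stands in for the X'FFFFFFFFFFFFFFFF' end value; this is not DB2 code.

```java
// Conceptual sketch of XML multiversioning with start/end timestamps.
// Invented names; not DB2 code.
public class XmlVersioning {
    static final long INFINITY = Long.MAX_VALUE; // stands in for X'FF...FF'

    static class Record {
        long start;
        long end = INFINITY; // current versions have an "infinite" end
        Record(long start) { this.start = start; }
    }

    // A record belongs to the document version visible at time t
    // if it was created at or before t and had not yet expired at t.
    static boolean visibleAt(Record r, long t) {
        return r.start <= t && t < r.end;
    }

    // Modifying a record at time t expires the old row and creates a new
    // row with start time t; the old row is kept for readers until the
    // garbage collector removes it.
    static Record modify(Record old, long t) {
        old.end = t;
        return new Record(t);
    }
}
```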
When a part of an XML document is modified, only the existing record or records to be
modified expire, and a new version of those records is created.
Figure 8-11 Multiversioning for XML data
This storage structure is possible only in new-function mode and for universal table spaces.
Storage structure for multiversioning is a prerequisite for several other XML enhancements,
such as:
Access of currently committed data
As Of queries for time-oriented data
XML update with XMLMODIFY
Removal of restrictions on SELECT FROM UPDATE/DELETE for XML
Maintenance of multiple versions of an XML document
Readers that do not need to lock XML data
Sub-document versioning
In DB2 9, APAR PK55966 (PTF UK33456) eliminated the page locks for readers on the XML
table space. Figure 8-12 shows the XML locking scheme in DB2 9, with the readers' locks
struck out.
Figure 8-12 XML locking scheme (with DB2 9 APAR PK55966)
Figure 8-13 shows the kinds of locks that are removed (shown struck out) because of the
introduction of multiversioning support in DB2 10.
Figure 8-13 XML Locking scheme with multiversioning
The two figures can be read as the following tables. In both, the base table page/row locks
are business as usual.
Figure 8-12 content (DB2 9 with APAR PK55966); the XML table space page locks for the
SELECT rows, shown struck out in the figure, are eliminated:
SELECT RR, SELECT RS: base s page/row lock; XML s lock, release at commit; XML table space s page lock, release at commit
SELECT UR, SELECT CS CURRENT DATA NO, SELECT CS CURRENT DATA YES with multirow fetch and dynamic scrolling: base s page/row lock on rowset, release on next fetch; XML s lock, release on next fetch; XML table space s page lock, release on next fetch
SELECT CS CURRENT DATA YES, workfile: base s page/row lock, release on next row fetch; XML s lock, release at close cursor; XML table space s page lock, release at close cursor
SELECT CS CURRENT DATA YES, no workfile: base s page/row lock, release on next row fetch; XML s lock, release at next row fetch; XML table space s page lock, release at next row fetch
SELECT UR, SELECT CS CURRENT DATA NO: no base lock; XML s lock, release at next row fetch; XML table space s page lock, release at next row fetch
UPDATE/DELETE: base u->x, s->x, x stays x; XML x lock, release at commit; XML table space page latch with P-lock (changed from page locking by APAR PK68265)
INSERT: base x page/row lock; XML x lock, release at commit; XML table space page latch with P-lock (changed from page locking by APAR PK68265)
Figure 8-13 content (DB2 10 with multiversioning); the XML locks for the SELECT rows,
shown struck out in the figure, are also removed, and no XML table space page locks remain
for readers:
SELECT RR, SELECT RS: base s page/row lock; XML s lock, release at commit
SELECT UR, SELECT CS CURRENT DATA NO, SELECT CS CURRENT DATA YES with multirow fetch and dynamic scrolling: base s page/row lock on rowset, release on next fetch; XML s lock, release on next fetch
SELECT CS CURRENT DATA YES, workfile: base s page/row lock, release on next row fetch; XML s lock, release at close cursor
SELECT CS CURRENT DATA YES, no workfile: base s page/row lock, release on next row fetch; XML s lock, release at next row fetch
SELECT UR, SELECT CS CURRENT DATA NO: no base lock; conditional XML s lock, release at next row fetch
UPDATE/DELETE: base u->x, s->x, x stays x; XML x lock, release at commit; XML table space page latch (and optional P-lock)
INSERT: base x page/row lock; XML x lock, release at commit; XML table space page latch (and optional P-lock)
With support for multiversioning, you can use the following statement to delete all rows with
PID like '%101%':
SELECT *
FROM OLD TABLE (DELETE FROM PRODUCT
WHERE PID LIKE '%101%')
This statement returns the deleted rows to the SELECT.
In DB2 9, SELECT FROM UPDATE is possible only with the FINAL TABLE keyword and not
with the OLD TABLE keyword, because old versions of the XML columns are not kept.
8.6 Support for updating part of an XML document
In DB2 9, if you make modifications to part of an XML document stored in a DB2 table, the
application cannot specify the required modification to the XML document. Applications that
require parts of XML documents to be modified need to break apart the XML document into
modifiable pieces, make the modification to a piece of the XML document, and then construct
the pieces back into a single XML document. DB2 10 includes support for updating part of an
XML document. In this section, we describe this enhancement.
8.6.1 Updates to entire XML documents
To update an entire XML document in an XML column, you use the SQL UPDATE statement.
You include a WHERE clause when you want to update specific rows. XML data in an
application can be in textual XML format or binary XML format. Binary XML format is valid
only for JDBC, SQLJ, and ODBC applications.
You can use XML column values to specify which rows are updated. To find values within
XML documents, you need to use XPath expressions. One way of specifying XPath
expressions is the XMLEXISTS predicate, which allows you to specify an XPath expression
and to determine if the expression results in an empty sequence. When XMLEXISTS is
specified in the WHERE clause, rows are updated if the XPath expression returns a
non-empty sequence.
The input to the XML column must be a well-formed XML document, as defined in the XML
1.0 specification. The application data type can be XML (XML AS BLOB, XML AS CLOB, or
XML AS DBCLOB), character, or binary. When you update data in an XML column, it must be
converted to its XML hierarchical format. DB2 performs this operation implicitly when XML
data from a host variable directly updates an XML column. Alternatively, you can invoke the
XMLPARSE function explicitly when you perform the update operation to convert the data to
the XML hierarchical format.
The following examples demonstrate how to update XML data in XML columns. The
examples use the MYCUSTOMER table, which has a SMALLINT CID column and an XML
INFO column. The examples assume that MYCUSTOMER already contains a row with a
customer ID value of 1004.
The XML data that is used to replace existing column data is in the c7.xml file, and looks as
shown in Example 8-44.
Example 8-44 XML data
<customerinfo xmlns="http://posample.org" Cid="1004">
<name>Christine Haas</name>
<addr country="Canada">
<street>12 Topgrove</street>
<city>Toronto</city>
<prov-state>Ontario</prov-state>
<pcode-zip>N9Y-8G9</pcode-zip>
</addr>
<phone type="work">905-555-5238</phone> <phone type="home">416-555-2934</phone>
</customerinfo>
Example 8-45 shows a JDBC sample application. In a JDBC application, read XML data from
the c7.xml file as binary data, and use it to update the data in an XML column.
Example 8-45 JDBC application example
PreparedStatement updateStmt = null;
String sqls = null;
int cid = 1004;
sqls = "UPDATE MYCUSTOMER SET INFO=? WHERE CID=?";
updateStmt = conn.prepareStatement(sqls);
updateStmt.setInt(2, cid);
File file = new File("c7.xml");
updateStmt.setBinaryStream(1, new FileInputStream(file), (int)file.length());
updateStmt.executeUpdate();
Example 8-46 shows a COBOL sample application.
Example 8-46 COBOL example
In an embedded COBOL application, update data in an XML column from an XML AS CLOB
host variable:
EXEC SQL BEGIN DECLARE SECTION END-EXEC.
01 CID PIC S9(4) USAGE COMP VALUE 1004.
01 XMLBUF USAGE IS SQL TYPE IS XML AS CLOB(5K).
01 CLOBBUF USAGE IS SQL TYPE IS CLOB(5K).
EXEC SQL END DECLARE SECTION END-EXEC.
* Read data from file C7.xml into host variables XMLBUF and CLOBBUF
* as XML
EXEC SQL UPDATE MYCUSTOMER SET INFO = :XMLBUF WHERE CID = :CID END-EXEC.
* as CLOB
EXEC SQL UPDATE MYCUSTOMER SET INFO = XMLPARSE(:CLOBBUF) WHERE CID = :CID
END-EXEC.
In these examples, the value of the Cid attribute within the <customerinfo> element is also
stored in the CID relational column. Thus, the WHERE clause in the UPDATE statements
uses the relational column CID to specify the rows to update. When the values that
determine which rows are chosen for update exist only within the XML documents themselves,
you can use the XMLEXISTS predicate. For example, the UPDATE statement in the previous
embedded COBOL application example can be changed to use XMLEXISTS, as shown in
Example 8-47.
Example 8-47 Row filtering using XMLEXISTS
EXEC SQL UPDATE MYCUSTOMER SET INFO = :XMLBUF
WHERE XMLEXISTS ('declare default element namespace "http://posample.org";
/customerinfo[@Cid = $c]'
passing INFO, cast(:CID as integer) as "c") END-EXEC.
8.6.2 Partial updates of XML documents
To update part of an XML document in an XML column, you can use the SQL UPDATE
statement with the XMLMODIFY built-in scalar function.
The XMLMODIFY function specifies a basic updating expression that you can use to insert
nodes, delete nodes, replace nodes, or replace the values of a node in XML documents that
are stored in XML columns.
The types of basic updating expressions are as follows:
Insert expression
Inserts copies of one or more nodes into a designated position in a node sequence.
Replace expression
Replaces an existing node with a new sequence of zero or more nodes, or replaces a
node's value and preserves the node's identity.
Delete expression
Deletes zero or more nodes from a node sequence.
Insert expression syntax
Figure 8-14 shows insert expression syntax.
Figure 8-14 Insert expression syntax
Figure 8-15 shows examples of the usage of insert expression with the XMLMODIFY built-in
function. The sample XML document is stored in the XML column INFO in table
PERSONINFO. The keywords that begin an insert expression can be either insert nodes or
insert node, regardless of the number of nodes to be inserted. The source-expression and
the target-expression are XPath expressions that are not updating expressions.
Note: Before you use XMLMODIFY to update part of an XML document, the column that
contains the XML document must support XML versions. You can create the base table
that contains XML columns in a universal table space.
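The insert positions (as first into, as last into, before, after) operate on a node's list of children. The following sketch models their effect on a simple list of child element names; it is an illustration only, not how DB2 evaluates XPath or XMLMODIFY.

```java
// Conceptual sketch of XMLMODIFY insert positions on a child list.
// Illustration only; not DB2 code.
import java.util.ArrayList;
import java.util.List;

public class InsertPosition {
    static List<String> insert(List<String> children, String node,
                               String position, String target) {
        List<String> out = new ArrayList<>(children);
        switch (position) {
            case "as first into": out.add(0, node); break;
            // Plain "into" behaves like "as last into": DB2 always
            // inserts as the last child.
            case "as last into":  out.add(node); break;
            case "before":        out.add(out.indexOf(target), node); break;
            case "after":         out.add(out.indexOf(target) + 1, node); break;
            default: throw new IllegalArgumentException(position);
        }
        return out;
    }
}
```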
Figure 8-15 Insert expression examples (1 of 3)
Figure 8-16 shows examples of the usage of insert expression with the XMLMODIFY built-in
function.
Figure 8-16 Insert expression examples (2 of 3)
Insert expression (1 of 3)
Sample XML document:
<person xmlns="http://sample">
<firstName>Joe</firstName>
<lastName>Smith</lastName>
<nickName>Joey</nickName>
</person>
The insert operation "insert node $ins as last into person" (or "insert node $ins into person";
DB2 always inserts as last child) produces this resulting XML document:
<person xmlns="http://sample">
<firstName>Joe</firstName>
<lastName>Smith</lastName>
<nickName>Joey</nickName>
<ename xmlns="http://sample">Joe.Smith@de.ibm.com</ename>
</person>
Sample insert statement:
update personinfo set info = XMLMODIFY('
declare default element namespace "http://sample";
insert node $ins
after person/nickName',
XMLPARSE('<ename xmlns="http://sample">Joe.Smith@de.ibm.com</ename>') as "ins");
Insert expression (2 of 3)
Sample XML document:
<person xmlns="http://sample">
<firstName>Joe</firstName>
<lastName>Smith</lastName>
<nickName>Joey</nickName>
</person>
The insert operation "insert node $ins after person/nickName" produces:
<person xmlns="http://sample">
<firstName>Joe</firstName>
<lastName>Smith</lastName>
<nickName>Joey</nickName>
<ename xmlns="http://sample">Joe.Smith@de.ibm.com</ename>
</person>
The insert operation "insert node $ins before person/nickName" produces:
<person xmlns="http://sample">
<firstName>Joe</firstName>
<lastName>Smith</lastName>
<ename xmlns="http://sample">Joe.Smith@de.ibm.com</ename>
<nickName>Joey</nickName>
</person>
The insert operation "insert node $ins as first into person" produces:
<person xmlns="http://sample">
<ename xmlns="http://sample">Joe.Smith@de.ibm.com</ename>
<firstName>Joe</firstName>
<lastName>Smith</lastName>
<nickName>Joey</nickName>
</person>
Figure 8-17 shows an example of the usage of insert expression with the XMLMODIFY
built-in function.
Figure 8-17 Insert expression examples (3 of 3)
Replace expression syntax
Figure 8-18 shows the replace expression syntax.
Figure 8-18 Replace expression syntax
Figure 8-19 shows an example of replacing the value of a node with the XMLMODIFY built-in
function.
Insert expression (3 of 3)
Sample XML document:
<person xmlns="http://sample">
<firstName>Joe</firstName>
<lastName>Smith</lastName>
<nickName>Joey</nickName>
</person>
Insertion of a sequence of elements is also possible, using the same keywords as introduced
before:
update personinfo set info = XMLMODIFY('
declare default element namespace "http://sample";
insert node $ins
before person/nickName',
XMLPARSE('
<contactinfo xmlns="http://sample">
<ename>Joe.Smith@de.ibm.com</ename>
<phoneno>001-408-226-7676</phoneno>
</contactinfo>') as "ins");
Resulting XML document:
<person xmlns="http://sample">
<firstName>Joe</firstName>
<lastName>Smith</lastName>
<contactinfo xmlns="http://sample">
<ename>Joe.Smith@de.ibm.com</ename>
<phoneno>001-408-226-7676</phoneno>
</contactinfo>
<nickName>Joey</nickName>
</person>
Figure 8-19 Replace expression examples (1 of 2)
Figure 8-20 shows an example of replacing an existing node by new nodes with the
XMLMODIFY built-in function.
Figure 8-20 Replace expression examples (2 of 2)
Replace a node's value, preserving the node's identity.
Sample XML document:
<person xmlns="http://sample">
<firstName>Joe</firstName>
<lastName>Smith</lastName>
<contactinfo xmlns="http://sample">
<ename>Joe.Smith@de.ibm.com</ename>
<phoneno>001-408-226-7676</phoneno>
</contactinfo>
<nickName>Joey</nickName>
</person>
Sample replace statement:
update personinfo set info = XMLMODIFY('
declare default element namespace "http://sample";
replace value of node person/contactinfo/ename
with "Joey@us.ibm.com"')
Resulting XML document:
<person xmlns="http://sample">
<firstName>Joe</firstName>
<lastName>Smith</lastName>
<contactinfo>
<ename>Joey@us.ibm.com</ename>
<phoneno>001-408-226-7676</phoneno>
</contactinfo>
<nickName>Joey</nickName>
</person>
Replace an existing node with a new sequence of zero or more nodes.
Sample XML document:
<person xmlns="http://sample">
<firstName>Joe</firstName>
<lastName>Smith</lastName>
<contactinfo>
<ename>Joey@us.ibm.com</ename>
<phoneno>001-408-226-7676</phoneno>
</contactinfo>
<nickName>Joey</nickName>
</person>
Sample replace statement:
update personinfo set info = XMLMODIFY('
declare default element namespace "http://sample";
replace node person/contactinfo/ename
with $x',
XMLPARSE('
<enames xmlns="http://sample">
<email>Joey@us.ibm.com</email>
<vmaddr>JOEY@IBMUS</vmaddr>
</enames>') as "x");
Resulting XML document:
<person xmlns="http://sample">
<firstName>Joe</firstName>
<lastName>Smith</lastName>
<contactinfo>
<enames xmlns="http://sample">
<email>Joey@us.ibm.com</email>
<vmaddr>JOEY@IBMUS</vmaddr>
</enames>
<phoneno>001-408-226-7676</phoneno>
</contactinfo>
<nickName>Joey</nickName>
</person>
Delete expression syntax
Figure 8-21 shows the delete expression syntax.
Figure 8-21 Delete expression syntax
Figure 8-22 shows an example of the usage of delete expression with the XMLMODIFY
built-in function. The keywords that begin a delete expression can be either delete nodes or
delete node, regardless of the number of nodes to be deleted.
Figure 8-22 Delete expression example
To conclude this discussion, note the following information:
XMLMODIFY can be used only on the right side of UPDATE SET.
Only one update at a time is allowed for a document, with concurrency controlled by the
base table (row-level or page-level locking, and other functions) and a document-level
lock to prevent UR readers.
Only changed rows in the XML table are updated.
If there is a schema type modifier on the updated XML column, partial validation occurs.
Global constraints are not validated.
8.7 Support for binary XML
The binary XML format is formally called Extensible Dynamic Binary XML DB2 Client/Server
Binary XML Format. Binary XML format is an external representation of an XML value that
can be used to exchange XML data between a client and a data server. The binary
representation provides efficient XML parsing, which can result in performance improvements
for XML data exchange, because the XML data can be encoded more efficiently.
Sample XML document:
<person xmlns="http://sample">
<firstName>Joe</firstName>
<lastName>Smith</lastName>
<contactinfo xmlns="http://sample">
<enames>
<email>Joey@us.ibm.com</email>
<vmaddr>JOEY@IBMUS</vmaddr>
</enames>
</contactinfo>
<nickName>Joey</nickName>
</person>
Sample delete statement:
update personinfo set info = XMLMODIFY('
declare default element namespace "http://sample";
delete node person/contactinfo');
Resulting XML document:
<person xmlns="http://sample">
<firstName>Joe</firstName>
<lastName>Smith</lastName>
<nickName>Joey</nickName>
</person>
Chapter 8. XML 287
You can find the binary XML specification at:
http://www.ibm.com/support/docview.wss?uid=swg27019354
Storage and retrieval of binary XML data requires version 4.9 or later of the IBM Data Server
Driver for JDBC and SQLJ. If you are using binary XML data in SQLJ applications, you also
need version 4.9 or later of the sqlj4.zip package.
DB2 10 supports the use of binary XML format between applications and the server.
Binary does not mean FOR BIT DATA. Figure 8-23 summarizes the reasons why XML binary
format is not the same as FOR BIT DATA.
Figure 8-23 Binary XML is not the same as FOR BIT DATA
Binary XML is more efficient for various reasons, such as:
- Binary XML uses string IDs for names only (element names, attribute names, namespace prefixes, and URIs), not arbitrary strings. A string ID can represent some or all occurrences of the same text with an integer identifier.
- Binary XML uses a pre-tokenized format, and all values are prefixed with their length. There is no need to examine every byte or to search for the end of element names or values.
- Binary XML is on average about 17% to 46% smaller in size, requiring less virtual storage.
- Binary XML saves DB2 9% to 30% CPU time and 8% to 50% elapsed time during insert.
Retrieval of data as binary XML is supported for remote access using DRDA and for use in
JDBC, SQLJ, and ODBC applications. Binary XML is also supported by the UNLOAD and
LOAD utilities.
Binary XML does not influence how XML data is stored in DB2. No additional parsing is
needed, and no z/OS XML System Services are invoked. DB2 DRDA zIIP redirect is not
affected by the use of binary XML.
Textual XML sample:
<person xmlns="http://sample">
<firstName>Joe</firstName>
<lastName>Smith</lastName>
<nickName>Joey</nickName>
</person>

The figure contrasts this textual form with the DB2 XML stored format (shown as a hex dump) and the binary XML format. Direct INSERT of binary XML is not possible; the user must convert it to a string.
The binary representation provides more efficient XML parsing. An XML value can be
transformed into a binary XML value that represents the XML document using the following
methods:
In a JDBC or SQLJ application, you can transform an XML value by retrieving the XML
column value into a java.sql.SQLXML object and then retrieving the data from the
java.sql.SQLXML object as a binary data type, such as InputStream. You cannot retrieve
binary XML data directly from the SQLXML type; the binary format is used implicitly.
JDBC 4.0 or later provides support for the java.sql.SQLXML data type.
In an ODBC application, you can transform an XML value by binding the XML column to
an application variable with the SQL_C_BINARYXML data type and retrieving the XML
value into that application variable.
You can also transform an XML value by running the UNLOAD utility and using one of the
following field specifications for the XML output:
CHAR BLOBF template-name BINARYXML
VARCHAR BLOBF template-name BINARYXML
XML BINARYXML
Similarly, you can transform a binary value that represents an XML document to an XML
value in the following methods:
In a JDBC or SQLJ application, you can transform a binary value by assigning the input
value to a java.sql.SQLXML object and then inserting the data from the java.sql.SQLXML
object into the XML column.
ODBC provides a pass-through capability for binary data. Binary XML has to be generated
by the application or retrieved from DB2 for inserting back.
You can also transform a binary value by running the LOAD utility and using one of the
following field specifications for the XML input:
CHAR BLOBF BINARYXML
VARCHAR BLOBF BINARYXML
XML BINARYXML
Example 8-48 shows a JDBC example of using the SQLXML type. The example shows how to
get a DOM tree, a SAX source, or a string from the SQLXML type. An SQLXML object can be
read only once; thus, the statements are mutually exclusive. Use one of them.
Example 8-48 Application using JDBC 4.0 (fetch XML as SQLXML type)
String sql = "SELECT xml_col from T1";
PreparedStatement pstmt = con.prepareStatement(sql);
ResultSet resultSet = pstmt.executeQuery();
// get the result XML as SQLXML
SQLXML sqlxml = resultSet.getSQLXML(1); // first (and only) column of the result
// get a DOMSource from SQLXML object
DOMSource domSource = sqlxml.getSource(DOMSource.class);
Document document = (Document) domSource.getNode();
//or: get a SAXSource from SQLXML object
SAXSource saxSource = sqlxml.getSource(SAXSource.class);
XMLReader xmlReader = saxSource.getXMLReader();
xmlReader.setContentHandler(myHandler);
xmlReader.parse(saxSource.getInputSource());
//or: get binaryStream or string from SQLXML object
InputStream binaryStream = sqlxml.getBinaryStream(); //Or:
String xmlString = sqlxml.getString();
Example 8-49 shows a JDBC example of using SQLXML type. The example shows how to
insert and update XML using SQLXML.
Example 8-49 Application using JDBC 4.0 (insert and update XML using SQLXML)
String sql = "insert into T1 values(?)";
PreparedStatement pstmt = con.prepareStatement(sql);
SQLXML sqlxml = con.createSQLXML();
// create a SQLXML object from the DOM object
DOMResult domResult = sqlxml.setResult(DOMResult.class);
domResult.setNode(myNode);
// or: create a SQLXML object from a string
sqlxml.setString(xmlString);
// set that XML document as the input to parameter marker 1
pstmt.setSQLXML(1, sqlxml);
pstmt.executeUpdate();
sqlxml.free();
Figure 8-24 shows an example of using binary XML in the UNLOAD and LOAD utilities.
Figure 8-24 Binary XML in the UNLOAD and LOAD utilities
UNLOAD DATA
FROM TABLE ADMF006.PERSONINFO
(INFO XML BINARYXML)

The SYSPUNCH and SYSREC data sets contain the generated LOAD statement and the binary XML records. With DIS UTF8, the SYSREC record begins:
....X.......X.person.}..X.firstName.~..U.JoezX.lastN ..

LOAD DATA INDDN SYSREC .. ..
INTO TABLE "ADMF006"."PERSONINFO"
WHEN(00001:00002) = X'0003'
( "DSN_NULL_IND_00001" POSITION(00003) CHAR(1)
, "INFO" POSITION(*)
XML PRESERVE WHITESPACE BINARYXML
NULLIF(DSN_NULL_IND_00001)=X'FF' )
8.8 Support for XML date and time
In DB2 9, the xs:date, xs:time, and xs:dateTime data types support date and time data.
However, they are not natively supported in XPath. The XMLCAST and XMLTABLE functions
can be invoked on values of these data types. You can cast the XML xs:date, xs:time, and
xs:dateTime data types to the SQL DATE, TIME, and TIMESTAMP data types, but you
cannot cast the SQL DATE, TIME, and TIMESTAMP data types to the XML xs:date, xs:time,
and xs:dateTime data types. There is no support for XML indexes on these data types.
When developing applications that deal with date or time in XML, keep in mind the following
considerations:
- XML applications can model date and time data types only as character strings, which is imprecise when world times (time zones) are involved.
- You cannot do date or time arithmetic on XML data in DB2. Thus, an application developer needs to write application code to perform these functions.
The lack of support for XML indexes with a date or time stamp data type negatively affects the
performance of applications that search on a date or time in XML documents.
DB2 10 includes support for XML date and time data types. In this section, we discuss these
enhancements.
8.8.1 Enhancements for XML date and time support
DB2 10 supports date and time in XML data types and functions. It provides a time zone
feature and arithmetic and comparison operators on date and time data types.
In addition to the xs:string, xs:boolean, xs:decimal, xs:integer, xs:untypedAtomic data
types, DB2 10 adds the following data types that are related to date and time operations:
xs:dateTime
xs:date
xs:time
xs:duration
xs:yearMonthDuration
xs:dayTimeDuration
DB2 10 supports the following XML functions and operators that deal with date and time:
Comparison operators on duration, date, and time values
Component extraction functions on duration, date, and time
Arithmetic operators on durations
Time zone adjustment function on date and time values
Arithmetic operators on duration, date, and time
Context functions
DB2 10 supports XMLCAST casts from SQL date and time types to XML date and time
types. DB2 10 also extends XML index creation and exploitation to date and time data
types.
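To illustrate, the new constructors can be exercised directly in an XMLQUERY expression. The following is a minimal sketch, modeled on the SYSIBM.SYSDUMMY1 examples used elsewhere in this chapter:

```sql
-- Construct an xs:date value and add a one-month duration to it
SELECT XMLQUERY('xs:date("2010-12-01") + xs:yearMonthDuration("P1M")')
FROM SYSIBM.SYSDUMMY1;
-- Returns the xs:date value 2011-01-01
```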
8.8.2 XML date and time support
Figure 8-25 shows the lexical representation for date and time support.
Figure 8-25 XML date and time support
The descriptions for the abbreviations for xs:dateTime, xs:date, and xs:time are as follows:
yyyy A four-digit numeral that represents the year. The value cannot begin
with a negative sign (-) or a plus sign (+). The number 0001 is the
lexical representation of the year 1 of the Common Era (also known as
1 AD). The value cannot be 0000.
- The dash (-) is a separator between parts of the date portion.
mm A two-digit numeral that represents the month.
dd A two-digit numeral that represents the day.
T A separator to indicate that the time of day follows.
hh A two-digit numeral (with leading zeros as required) that represents
the hours. The value must be between 00 and 24; the value 24 is
permitted only when the minutes and seconds are zero.
: The colon (:) is a separator between parts of the time portion.
mm A two-digit numeral that represents the minute.
ss A two-digit numeral that represents the whole seconds.
.ssssssssssss Optional. If present, a 1-to-12 digit numeral that represents the
fractional seconds.
zzzzzz Optional. If present, represents the time zone. If a time zone is not
specified, an implicit time zone of Coordinated Universal Time (UTC),
also called Greenwich Mean Time, is used.
The lexical form for the time zone indicator is a string that includes one of the following forms:
A positive (+) or negative (-) sign that is followed by hh:mm, where the following
abbreviations are used:
hh A two-digit numeral (with leading zeros as required) that represents
the hours. The value must be between -14 and +14, inclusive.
mm A two-digit numeral that represents the minutes. The value of the
minutes property must be zero when the hours property is equal to
14.
DB2 10 adds native support for the following types:
xs:dateTime yyyy-mm-ddThh:mm:ss(.ssssssssssss)?(zzzzzz)?
xs:date yyyy-mm-dd(zzzzzz)?
xs:time hh:mm:ss(.ssssssssssss)?(zzzzzz)?
xs:duration PnYnMnDTnHnMnS
xs:yearMonthDuration PnYnM
xs:dayTimeDuration PnDTnHnMnS
The fractional seconds part is optional. The time zone (+/-zz:zz) is used if present; otherwise UTC is assumed.
+ The plus sign (+) indicates that the specified time instant is in a
time zone that is ahead of the UTC time by hh hours and mm
minutes.
- The dash (-) indicates that the specified time instant is in a time
zone that is behind UTC time by hh hours and mm minutes.
The literal Z represents the time in UTC (Z represents Zulu time, which is equivalent to
UTC). Specifying Z for the time zone is equivalent to specifying +00:00 or -00:00.
Here are a few examples:
xs:dateTime
The following form indicates noon on 10 October 2009, Eastern Standard Time in the
United States:
2009-10-10T12:00:00-05:00
This time is expressed in UTC as 2009-10-10T17:00:00Z.
xs:date
The following form indicates 10 October 2009, Eastern Standard Time in the United
States:
2009-10-10-05:00
This date is expressed in UTC as 2009-10-10T05:00:00Z
xs:time
The following form, which includes an optional time zone indicator, represents 1:20 p.m.
Eastern Standard Time, which is five hours behind UTC:
13:20:00-05:00
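The equivalence between a local-time form and its UTC form can be verified with a dateTime comparison. This sketch follows the pattern of the other XMLQUERY examples in this chapter:

```sql
SELECT XMLQUERY('xs:dateTime("2009-10-10T12:00:00-05:00")
              = xs:dateTime("2009-10-10T17:00:00Z")')
FROM SYSIBM.SYSDUMMY1;
-- Returns true: both forms denote the same instant
```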
The following list describes the abbreviations for duration, yearMonthDuration, and
dayTimeDuration:
P The duration designator.
nY n is an unsigned integer that represents the number of years.
nM n is an unsigned integer that represents the number of months.
nD n is an unsigned integer that represents the number of days.
T The date and time separator.
nH n is an unsigned integer that represents the number of hours.
nM n is an unsigned integer that represents the number of minutes.
nS n is an unsigned decimal that represents the number of seconds. If a
decimal point appears, it must be followed by one to 12 digits that
represent fractional seconds.
Here are a few examples:
xs:duration
The following form indicates a duration of 1 year, 2 months, 3 days, 10 hours, and 30
minutes:
P1Y2M3DT10H30M
The following form indicates a duration of negative 120 days:
-P120D
xs:yearMonthDuration
The following form indicates a duration of 1 year and 2 months:
P1Y2M
The following form indicates a duration of negative 13 months:
-P13M
xs:dayTimeDuration
The following form indicates a duration of 3 days, 10 hours, and 30 minutes:
P3DT10H30M
The following form indicates a duration of negative 120 days:
-P120D
Figure 8-26 summarizes the comparison operators on XML duration, date, and time values
that can be used in XMLQUERY.
For example,
SELECT XMLQUERY('xs:duration("P1Y") = xs:duration("P12M")')
FROM SYSIBM.SYSDUMMY1;
Result: <?xml version="1.0" encoding="IBM037"?>true
SELECT XMLQUERY('xs:duration("P1Y") = xs:duration("P365D")')
FROM SYSIBM.SYSDUMMY1;
Result: <?xml version="1.0" encoding="IBM037"?>false
Figure 8-26 XML date and time related comparison operators
Here are a few examples for comparison of durations:
(xs:duration("P1Y") = xs:duration("P12M")) returns true.
(xs:duration("PT24H") = xs:duration("P1D")) returns true.
(xs:duration("P1Y") = xs:duration("P365D")) returns false.
(xs:yearMonthDuration("P0Y") = xs:dayTimeDuration("P0D")) returns true.
(xs:yearMonthDuration("P1Y") = xs:dayTimeDuration("P365D")) returns false.
(xs:yearMonthDuration("P2Y") = xs:yearMonthDuration("P24M")) returns true.
(xs:dayTimeDuration("P10D") = xs:dayTimeDuration("PT240H")) returns true.
(xs:duration("P2Y0M0DT0H0M0S") = xs:yearMonthDuration("P24M")) returns true.
(xs:duration("P0Y0M10D") = xs:dayTimeDuration("PT240H")) returns true.
XPath allows comparison of XML duration, date, and time values
The result is xs:boolean (true/false)
Possible comparisons:
xs:yearMonthDuration < (or >) xs:yearMonthDuration
xs:dayTimeDuration < (or >) xs:dayTimeDuration
duration-equal: compare any durations to see whether they are equal
xs:dateTime < (or > or =) xs:dateTime
xs:date < (or > or =) xs:date
xs:time < (or > or =) xs:time
Figure 8-27 shows examples for XML date and time comparison with SQL date.
Figure 8-27 XML date and time comparison with SQL date
Figure 8-28 shows how the durations can be used in arithmetic operations.
For example:
SELECT XMLQUERY ('xs:yearMonthDuration("P2Y11M") div 1.5')
FROM SYSIBM.SYSDUMMY1;
Result: <?xml version="1.0" encoding="IBM037"?>P1Y11M
Figure 8-28 Arithmetic operations on XML duration
Sample XML document 1:
<person xmlns="http://sample">
<firstName>Joe</firstName>
<lastName>Smith</lastName>
<dob>1941-10-13</dob>
</person>

Sample XML document 2:
<person xmlns="http://sample">
<firstName>Jane</firstName>
<lastName>Smith</lastName>
<dob>1949-08-10</dob>
</person>

Sample select statement:
SELECT XMLQUERY(
'declare default element namespace "http://sample";
/person[dob = xs:date("1941-10-13")]'
PASSING INFO)
FROM PERSONINFO;

The first document qualifies and is returned:
<?xml version="1.0" encoding="IBM037"?>
<person xmlns="http://sample">
<firstName>Joe</firstName>
<lastName>Smith</lastName>
<dob>1941-10-13</dob>
</person>

For the non-qualifying second document, NULL is returned:
<?xml version="1.0" encoding="IBM037"?>
Durations may be used in arithmetic operations
The 10 allowed operations are:
A + B (returns xs:yearMonthDuration)
A - B (returns xs:yearMonthDuration)
A * C (returns xs:yearMonthDuration)
A div C (returns xs:yearMonthDuration)
A div B (returns xs:decimal)
D + E (returns xs:dayTimeDuration)
D - E (returns xs:dayTimeDuration)
D * F (returns xs:dayTimeDuration)
D div F (returns xs:dayTimeDuration)
D div E (returns xs:decimal)
A and B are type xs:yearMonthDuration, and C is numeric
D and E are type xs:dayTimeDuration, and F is numeric
Example:
xs:yearMonthDuration("P2Y11M") div 1.5
Result: 35/1.5 = 23.33333 => P1Y11M
Here are a few examples for arithmetic operations on XML durations:
(xs:yearMonthDuration("P2Y11M") + xs:yearMonthDuration("P3Y3M")) returns an
xs:yearMonthDuration value corresponding to 6 years and 2 months: P6Y2M.
(xs:yearMonthDuration("P2Y11M") - xs:yearMonthDuration("P3Y3M")) returns an
xs:yearMonthDuration value corresponding to negative 4 months: -P4M.
(xs:yearMonthDuration("P2Y11M") * 2.3) returns an
xs:yearMonthDuration value corresponding to 6 years and 9 months: P6Y9M.
(xs:yearMonthDuration("P2Y11M") div 1.5) returns an
xs:yearMonthDuration value corresponding to 1 year and 11 months: P1Y11M.
(xs:yearMonthDuration("P3Y4M") div xs:yearMonthDuration("P1Y4M")) returns 2.5.
(xs:dayTimeDuration("P2DT12H5M") + xs:dayTimeDuration("P5DT12H")) returns an
xs:dayTimeDuration value corresponding to 8 days and 5 minutes: P8DT5M.
(xs:dayTimeDuration("P2DT12H") - xs:dayTimeDuration("P1DT10H30M")) returns an
xs:dayTimeDuration value corresponding to 1 day, 1 hour, and 30 minutes: P1DT1H30M.
(xs:dayTimeDuration("PT2H10M") * 2.1) returns an
xs:dayTimeDuration value corresponding to 4 hours and 33 minutes: PT4H33M.
(xs:dayTimeDuration("P1DT2H30M10.5S") div 1.5) returns an xs:dayTimeDuration value
corresponding to 17 hours, 40 minutes, and 7 seconds: PT17H40M7S.
(xs:dayTimeDuration("P2DT53M11S") div xs:dayTimeDuration("P1DT10H")) returns
1.4378349.
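Any of these expressions can be evaluated as-is inside XMLQUERY. For example, the duration division above:

```sql
SELECT XMLQUERY('xs:yearMonthDuration("P3Y4M")
             div xs:yearMonthDuration("P1Y4M")')
FROM SYSIBM.SYSDUMMY1;
-- Returns the xs:decimal value 2.5 (40 months / 16 months)
```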
Figure 8-29 discusses arithmetic operations on XML duration, date, and time.
Figure 8-29 Arithmetic operations on XML duration, date, and time
Here are a few examples for arithmetic operations on XML duration, date, and time:
(xs:dateTime("2000-10-30T06:12:00") - xs:dateTime("1999-11-28T09:00:00Z"))
returns an xs:dayTimeDuration value of P336DT21H12M.
(xs:date("2000-10-30") - xs:date("1999-11-28")) returns an xs:dayTimeDuration
value of P337D.
(xs:time("24:00:00") - xs:time("23:59:59")) returns an xs:dayTimeDuration value
corresponding to -PT23H59M59S.
Date and time AND duration may be used in arithmetic operations
The 13 allowed operations are:
A - B (returns xs:dayTimeDuration) A and B are type xs:date
A - B (returns xs:dayTimeDuration) A and B are type xs:time
A - B (returns xs:dayTimeDuration) A and B are type xs:dateTime
A + B (returns xs:dateTime) A is type xs:yearMonthDuration and B is type xs:dateTime
A + B (returns xs:dateTime) A is type xs:dayTimeDuration and B is type xs:dateTime
A - B (returns xs:dateTime) A is type xs:yearMonthDuration and B is type xs:dateTime
A - B (returns xs:dateTime) A is type xs:dayTimeDuration and B is type xs:dateTime
A + B (returns xs:date) A is type xs:yearMonthDuration and B is type xs:date
A + B (returns xs:date) A is type xs:dayTimeDuration and B is type xs:date
A - B (returns xs:date) A is type xs:yearMonthDuration and B is type xs:date
A - B (returns xs:date) A is type xs:dayTimeDuration and B is type xs:date
A + B (returns xs:time) A is type xs:dayTimeDuration and B is type xs:time
A - B (returns xs:time) A is type xs:dayTimeDuration and B is type xs:time
(xs:dateTime("2000-10-30T11:12:00") + xs:yearMonthDuration("P1Y2M")) returns an
xs:dateTime value corresponding to the lexical representation 2001-12-30T11:12:00.
(xs:dateTime("2000-10-30T11:12:00") + xs:dayTimeDuration("P3DT1H15M")) returns
an xs:dateTime value corresponding to the lexical representation 2000-11-02T12:27:00.
(xs:dateTime("2000-10-30T11:12:00") - xs:yearMonthDuration("P1Y2M")) returns an
xs:dateTime value corresponding to the lexical representation 1999-08-30T11:12:00.
(xs:dateTime("2000-10-30T11:12:00") - xs:dayTimeDuration("P3DT1H15M")) returns
an xs:dateTime value corresponding to the lexical representation 2000-10-27T09:57:00.
(xs:date("2000-10-30") + xs:yearMonthDuration("P1Y2M")) returns an xs:date
corresponding to 30 December 2001: 2001-12-30.
(xs:date("2004-10-30Z") + xs:dayTimeDuration("P2DT2H30M0S")) returns the xs:date
1 November 2004: 2004-11-01Z. The starting instant of the first argument is the xs:dateTime
value {2004, 10, 30, 0, 0, 0, PT0S}. Adding the second argument to this gives the
xs:dateTime value {2004, 11, 1, 2, 30, 0, PT0S}. The time components are then discarded.
(xs:date("2000-10-30") - xs:yearMonthDuration("P1Y2M")) returns the
xs:date 30 August 1999: 1999-08-30.
(xs:date("2000-10-30") - xs:dayTimeDuration("P3DT1H15M")) returns the
xs:date 26 October 2000: 2000-10-26.
(xs:time("11:12:00") + xs:dayTimeDuration("P3DT1H15M")) returns an
xs:time value corresponding to the lexical representation 12:27:00.
(xs:time("11:12:00") - xs:dayTimeDuration("P3DT1H15M")) returns an
xs:time value corresponding to the lexical representation 09:57:00.
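These date and time expressions can also be run directly; for example:

```sql
SELECT XMLQUERY('xs:date("2000-10-30") + xs:yearMonthDuration("P1Y2M")')
FROM SYSIBM.SYSDUMMY1;
-- Returns the xs:date value 2001-12-30
```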
Figure 8-30 lists the date and time related XPath functions.
Figure 8-30 Date and time related XPath functions
The fn:current-date, fn:current-dateTime, and fn:current-time functions are context functions.
The fn:current-date function returns the current date in the local time zone. The returned
value is an xs:date value that is the current date. The time zone component of the returned
value is the local time zone.
24 new functions based on date, time, and duration
fn:years-from-duration, fn:months-from-duration, fn:days-from-duration,
fn:hours-from-duration, fn:minutes-from-duration, fn:seconds-from-duration
fn:year-from-dateTime, fn:month-from-dateTime, fn:day-from-dateTime,
fn:hours-from-dateTime, fn:minutes-from-dateTime,
fn:seconds-from-dateTime, fn:timezone-from-dateTime
fn:year-from-date, fn:month-from-date, fn:day-from-date,
fn:timezone-from-date
fn:hours-from-time, fn:minutes-from-time, fn:seconds-from-time,
fn:timezone-from-time
fn:current-date, fn:current-dateTime, fn:current-time
For example, the following function returns the current date:
fn:current-date()
If this function were invoked on 02 September 2010, in Pacific Standard Time, the returned
value would be 2010-09-02-08:00.
The fn:current-dateTime function returns the current date and time in the local time zone.
The following function returns an xs:dateTime value that is the current date and time. The
time zone component of the returned value is the local time zone. The precision for fractions
of seconds is 12.
fn:current-dateTime()
If this function were invoked on 02 September 2010 at 6:25 in Toronto (time zone -PT5H), the
returned value would be 2010-09-02T06:25:30.384724902312-05:00.
The fn:current-time function returns the current time in the local time zone.
The following function returns an xs:time value that is the current time. The time zone
component of the returned value is the local time zone. The precision for fractions of seconds
is 12.
The following function returns the current time:
fn:current-time()
If this function were invoked at 6:31 Pacific Standard Time (-08:00), the returned value would
be 06:31:35.519003948231-08:00.
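The context functions can be tried with simple queries; the returned values depend on the current date, time, and local time zone:

```sql
SELECT XMLQUERY('fn:current-date()') FROM SYSIBM.SYSDUMMY1;
SELECT XMLQUERY('fn:current-dateTime()') FROM SYSIBM.SYSDUMMY1;
SELECT XMLQUERY('fn:current-time()') FROM SYSIBM.SYSDUMMY1;
```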
Figure 8-31 shows examples of date and time related XPath functions.
Figure 8-31 Date and time related XPath functions examples
For example:
SELECT XMLQUERY('fn:month-from-dateTime(
xs:dateTime("2010-02-22T13:20:00-05:00"))')
FROM SYSIBM.SYSDUMMY1 ;
Result: <?xml version="1.0" encoding="IBM037"?>2
fn:years-from-duration (as xs:integer) (result may be negative)
fn:years-from-duration(xs:yearMonthDuration("P20Y15M"))
result: 21
fn:month-from-dateTime (as xs:integer)
fn:month-from-dateTime(xs:dateTime("2010-02-22T13:20:00-05:00"))
result: 2
fn:timezone-from-dateTime (as xs:dayTimeDuration)
fn:timezone-from-dateTime(xs:dateTime("2010-02-22T13:20:00-05:00"))
result: an xs:dayTimeDuration with value -PT5H
fn:seconds-from-time (as xs:decimal)
fn:seconds-from-time(xs:time("13:20:10.5"))
result: 10.5
Figure 8-32 discusses time zone adjustment functions on date and time.
Figure 8-32 Time zone adjustment functions on date and time (1 of 2)
The function adjust-dateTime-to-timezone adjusts an xs:dateTime value to a specific time
zone or to no time zone at all.
The function adjust-date-to-timezone adjusts an xs:date value to a specific time zone or to
no time zone at all.
The function adjust-time-to-timezone adjusts an xs:time value to a specific time zone or to
no time zone at all.
Time zones are durations with hour and minute properties
2010-02-22T12:00:00-05:00 (February 22, 2010, noon EST)
same as
2010-02-22T09:00:00-08:00 (February 22, 2010, 9 a.m. PST)
same as
2010-02-22T17:00:00Z (February 22, 2010, 5 p.m. UTC)
If a date or time does not contain a time zone, UTC is implied
Time zones adjustable to your needs:
fn:adjust-dateTime-to-timezone (returns xs:dateTime)
fn:adjust-date-to-timezone (returns xs:date)
fn:adjust-time-to-timezone (returns xs:time)
Figure 8-33 shows some examples.
Figure 8-33 Time zone adjustment functions on date and time (2 of 2)
The time zone adjustment functions take a dayTimeDuration for the time zone as the second
argument. If it is an empty sequence, it strips the time zone. See Example 8-50.
Example 8-50 Samples of the time zone adjustment functions
SELECT XMLQUERY('
fn:adjust-dateTime-to-timezone(xs:dateTime("2010-02-22T12:00:00-05:00"),
xs:dayTimeDuration("-PT3H"))')
FROM SYSIBM.SYSDUMMY1;
Result: <?xml version="1.0" encoding="IBM037"?>2010-02-22T14:00:00-03:00
SELECT XMLQUERY('
fn:adjust-dateTime-to-timezone(xs:dateTime("2010-02-22T09:00:00-08:00"),
xs:dayTimeDuration("-PT3H"))')
FROM SYSIBM.SYSDUMMY1;
Result: <?xml version="1.0" encoding="IBM037"?>2010-02-22T14:00:00-03:00
SELECT XMLQUERY('
fn:adjust-dateTime-to-timezone(xs:dateTime("2010-02-22T12:00:00-05:00"), ())')
FROM SYSIBM.SYSDUMMY1;
Result: <?xml version="1.0" encoding="IBM037"?>2010-02-22T12:00:00
2010-02-22T12:00:00-05:00 (February 22, 2010, noon EST)
2010-02-22T09:00:00-08:00 (February 22, 2010, 9 a.m. PST)
2010-02-22T12:00:00Z (February 22, 2010, noon UTC)
fn:adjust-dateTime-to-timezone(xs:dateTime("2010-02-22T12:00:00-05:00"),
xs:dayTimeDuration("-PT3H"))
fn:adjust-dateTime-to-timezone(xs:dateTime("2010-02-22T12:00:00-05:00"),())
Note that the third dateTime value is NOT equivalent to the first two.
Figure 8-34 summarizes XML index improvements for date and timestamp.
Figure 8-34 XML index improvement for date and time stamp
Here are CREATE INDEX examples:
Create an index for the date representation of the shipDate element in XML documents in
the PORDER column of the PURCHASEORDER table.
CREATE INDEX PO_XMLIDX1 ON PURCHASEORDER (PORDER)
GENERATE KEY USING XMLPATTERN '//items/shipDate'
AS SQL DATE
The following query includes a range predicate on a timestamp type. It retrieves
documents from the PORDER column of the PURCHASEORDER table for items that
have been shipped.
SELECT PORDER FROM PURCHASEORDER T1
WHERE XMLEXISTS('$po//item[shipDate < $timestamp]'
PASSING PORDER AS "po", CURRENT TIMESTAMP AS "timestamp")
To be compatible with this query, the XML index needs to include the shipDate nodes
among the indexed nodes, and needs to store values in the index as a TIMESTAMP type.
The precision of the TIMESTAMP column must be implicitly or explicitly set to 12.
The query can use the following XML index:
CREATE INDEX PORDERINDEX ON PURCHASEORDER (PORDER)
GENERATE KEY USING XMLPATTERN '//item/shipDate' AS SQL TIMESTAMP(12)
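Similarly, a query with a date-typed range predicate can exploit the PO_XMLIDX1 index shown above. This is a sketch; the cutoff date is only an illustration:

```sql
SELECT PORDER FROM PURCHASEORDER
WHERE XMLEXISTS('$po//items/shipDate[. < xs:date("2010-01-01")]'
                PASSING PORDER AS "po");
```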
8.9 XML in native SQL stored procedures and UDFs
DB2 9 does not support the native XML data type in native SQL stored procedures and
user-defined functions. Applications cannot pass XML values between statements in native
SQL procedures. To bypass this limitation, DB2 9 applications must use string or LOB SQL
variables, which involves redundant data type conversion and expensive XML serialization
and parsing. The lack of support for XML arguments in stored procedures and user-defined
functions limits the user's ability to use XML features.
DB2 family compatibility and application portability are affected.
DB2 10 introduces the following enhancements to remove the functional limitations and
provide DB2 family compatibility and application portability:
Use the XML data type for IN, OUT, and INOUT parameters
Use the XML data type for SQL variables inside the procedure and function logic
DB2 10 enhancements are only for native SQL procedures and for SQL user-defined
functions (both scalar and table functions).
XMLPATTERN can now contain TIMESTAMP and DATE
Used to be only VARCHAR and DECFLOAT
Rows containing an invalid xs:date or xs:dateTime value are ignored and not
inserted into the index
Values are normalized to UTC before being stored in the index
8.9.1 Enhancements to native SQL stored procedures
In the CREATE PROCEDURE statement for a native SQL procedure, XML can now be
specified as the data type of a parameter.
Figure 8-35 shows an example of coding an XML parameter in a native SQL stored
procedure.
Figure 8-35 Native SQL stored procedure using XML parameter
The CREATE PROCEDURE syntax is extended to allow the XML data type to be specified for
parameters. The input parameter PARM1 passes an XML document to be inserted into table
TAB1, which has one XML column defined. You can verify whether the XML document is
inserted properly by querying table TAB1 and retrieving the XML column using Data Studio
or SPUFI.
CREATE PROCEDURE PROC5 (IN PARM1 XML)
LANGUAGE SQL
BEGIN
INSERT INTO TAB1 VALUES(PARM1);
END %

Execution, for example, using Data Studio, passing:
<?xml version="1.0" encoding="IBM037"?><address><city>Würzburg</city><country>germany</country></address>

Then query TAB1 using Data Studio or SPUFI.
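A possible invocation sketch: because DB2 10 allows assignment from string to XML, the document might be passed as a string literal (the sample document here is an assumption; depending on the client, you may need to pass the value through XMLPARSE instead):

```sql
CALL PROC5('<address><city>Würzburg</city><country>germany</country></address>');
-- Verify the insert (TAB1 has a single XML column):
SELECT * FROM TAB1;
```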
Figure 8-36 shows an example of coding an XML variable in a native SQL stored procedure.
Figure 8-36 Native a SQL stored procedure using XML variable
The DECLARE VARIABLE statement defines a variable's data type in SQL PL. The syntax of
the DECLARE VARIABLE statement has been extended to allow XML to be specified as a
data type. In the example in Figure 8-36, the SQL variable VAR1 is declared as an XML
variable to hold the XML document. Notice that the XML document is passed to the stored
procedure using the input parameter PARM1, which is not an XML data type. DB2 10 allows
assignment from string to XML.
Optionally, the XML document can be parsed using the XMLPARSE function before it is stored
in VAR1 and then inserted into table TAB1. As before, you can verify whether the XML document
is properly inserted by querying table TAB1 and retrieving the XML column using either
Data Studio or SPUFI.
8.9.2 Enhancements to user defined SQL scalar and table functions
In the CREATE FUNCTION statement for a scalar user defined function, XML can now be
specified as the data type of a parameter. The DECLARE VARIABLE statement can specify
XML data type.
Example 8-51 shows an example of coding an XML parameter in a user-defined SQL scalar
function.
Example 8-51 User-defined SQL scalar function using XML variable
CREATE FUNCTION CANOrder (BOOKORDER XML, USTOCANRATE DOUBLE)
RETURNS XML
DETERMINISTIC
NO EXTERNAL ACTION
CONTAINS SQL
BEGIN ATOMIC
DECLARE USPrice DECIMAL (15,2);
DECLARE CANPrice DECIMAL(15,2);
CREATE PROCEDURE PROC(IN PARM1 VARCHAR(1000))
RESULT SETS 1
LANGUAGE SQL
BEGIN
DECLARE VAR1 XML;
SET VAR1 = XMLPARSE(DOCUMENT PARM1);
INSERT INTO TAB1 VALUES(VAR1);
END %

Execution, for example, using Data Studio, passing:
<?xml version="1.0" encoding="IBM037"?><address><city>Würzburg</city><country>germany</country></address>

Then query TAB1 using Data Studio or SPUFI.
DECLARE OrderInCAN XML;
SET USPrice = XMLCAST(XMLQUERY('/bookorder/USprice' PASSING BOOKORDER) AS
DECIMAL(15,2));
SET CANPrice = XMLCAST(XMLQUERY('/bookorder/CANprice' PASSING BOOKORDER) AS
DECIMAL(15,2));
IF CANPrice is NULL or CANPrice <=0 THEN
IF USPrice > 0 THEN
SET CANPrice = USPrice * USTOCANRATE;
ELSE
SET CANPrice = 0;
END IF;
SET OrderInCAN =
XMLDOCUMENT(
XMLELEMENT(NAME "bookorder",
XMLQUERY('/bookorder/bookname' PASSING BOOKORDER),
XMLELEMENT(NAME "CANprice", CANPrice))
);
RETURN OrderInCAN;
END %
The SQL function performs the following operations:
1. Declares an XML variable named OrderInCAN, which holds the order with prices in
Canadian dollars that is returned to the caller.
2. Retrieves the U.S. price from the input document, which is in the BOOKORDER
parameter.
3. Looks for a Canadian price in the input document. If the document contains no Canadian
prices, the XMLCAST function on the XMLQUERY function returns NULL.
4. Builds the output document, whose top-level element is bookorder, by concatenating the
bookname element from the original order with a CANprice element, which contains the
calculated price in Canadian dollars.
Suppose that an input document looks as follows:
<bookorder>
<bookname>TINTIN</bookname>
<USprice>100.00</USprice>
</bookorder>
If the exchange rate is 0.9808 Canadian dollars for one U.S. dollar, the output document looks
as follows:
<bookorder><bookname>TINTIN</bookname><CANprice>98.08</CANprice></bookorder>
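Because CANOrder is a scalar function, it can be invoked wherever an XML expression is
allowed. For example, assuming a hypothetical BOOKORDERS table with an XML column
named ORDERDOC, the following query returns the converted order for every row:

SELECT CANOrder(ORDERDOC, 0.9808)
  FROM BOOKORDERS;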
Example 8-52 demonstrates the use of XML parameters in a SQL table function. This
function takes three parameters as input:
An XML document that contains order information
A maximum price for the order
The title of the book that is ordered
The function returns a table that contains an XML column with receipts that are generated
from all of the input parameters, and a BIGINT column that contains the order IDs that are
retrieved from the input parameter that contains the order information document.
304 DB2 10 for z/OS Technical Overview
Example 8-52   User-defined SQL table function using XML variable

CREATE FUNCTION ORDERTABLE
  (ORDERDOC XML, PRICE DECIMAL(15,2), BOOKTITLE VARCHAR(50))
 RETURNS TABLE (RECEIPT XML, ORDERID BIGINT)
 LANGUAGE SQL
 SPECIFIC ORDERTABLE
 NOT DETERMINISTIC
 READS SQL DATA
 RETURN
   SELECT X.RECEIPT, X.ID
   FROM XMLTABLE(XMLNAMESPACES(DEFAULT 'http://posample.org'),
          '/orderdoc/bookorder[USprice < $A and bookname = $B]'
          PASSING ORDERDOC, PRICE AS A, BOOKTITLE AS B
          COLUMNS
            ID BIGINT PATH '../@OrderID',
            RECEIPT XML PATH '.')
        AS X;
The SQL table function uses the XMLTABLE function to generate the result table for the table
that is returned by the function. The XMLTABLE function generates a row for each
ORDERDOC input document in which the title matches the book title in the BOOKTITLE input
parameter, and the price is less than the value in the PRICE input parameter. The columns of
the returned table are the matching bookorder element of the ORDERDOC input document
(the receipt) and the OrderID attribute from the ORDERDOC input document.
Suppose that the input parameters have the values PRICE: 200, BOOKTITLE: TINTIN,
ORDERDOC:
<orderdoc xmlns="http://posample.org" OrderID="5001">
<name>Jim Noodle</name>
<addr country="Canada">
<street>25 EastCreek</street>
<city>Markham</city>
<prov-state>Ontario</prov-state>
<pcode-zip>N9C-3T6</pcode-zip>
</addr>
<phone type="work">905-555-7258</phone>
<bookorder>
<bookname>TINTIN</bookname>
<USprice>100.00</USprice>
</bookorder>
</orderdoc>
Table 8-8 shows the returned table.
Table 8-8   Return table

ID     RECEIPT
5001   <?xml version="1.0" encoding="IBM037"?><bookorder>
       <bookname>TINTIN</bookname><USprice>100.00</USprice></bookorder>
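A table function such as this one is referenced in the FROM clause with the TABLE()
construct. As a sketch, assuming a hypothetical ORDERS table with an ORDERKEY column
and an XML column named ORDERXML, the function can be applied to every row as follows:

SELECT O.ORDERKEY, F.ORDERID, F.RECEIPT
  FROM ORDERS O,
       TABLE(ORDERTABLE(O.ORDERXML, 200.00, 'TINTIN')) AS F;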
Chapter 8. XML 305
8.9.3 Decompose to multiple tables with a native SQL procedure
Figure 8-37 illustrates how a SQL PL stored procedure can be used to decompose an XML
document into multiple tables.
Figure 8-37 Decompose to multiple tables with a native SQL procedure
With XML SQL variables, you can parse the XML document once and use it multiple times for
decomposition. If the XML parameter type is used and the caller is a JDBC application, the
XML can be parsed into binary XML in the client.
INSERT INTO TAB1 VALUES(
'<COLLECTION>
 <ADDRESS>
  <CITY>WÜRZBURG</CITY>
  <COUNTRY>GERMANY</COUNTRY>
 </ADDRESS>
 <ADDRESS>
  <CITY>SANJOSE</CITY>
  <COUNTRY>USA</COUNTRY>
 </ADDRESS>
</COLLECTION>')

CREATE PROCEDURE DECOMP1(IN DOC XML)
 LANGUAGE SQL
BEGIN
 INSERT INTO CITIES SELECT *
   FROM XMLTABLE('/COLLECTION/ADDRESS' PASSING DOC
     COLUMNS CITYNAME VARCHAR(20) PATH 'CITY') AS X;
 INSERT INTO COUNTRIES SELECT *
   FROM XMLTABLE('/COLLECTION/ADDRESS' PASSING DOC
     COLUMNS COUNTRYNAME VARCHAR(20) PATH 'COUNTRY') AS X;
END
Figure 8-38 shows how the XML decomposition to multiple tables can be done in DB2 9.

Figure 8-38   Decompose to multiple tables with a native SQL procedure (DB2 9)

INSERT INTO TAB1 VALUES(
'<COLLECTION>
 <ADDRESS>
  <CITY>WÜRZBURG</CITY>
  <COUNTRY>GERMANY</COUNTRY>
 </ADDRESS>
 <ADDRESS>
  <CITY>SANJOSE</CITY>
  <COUNTRY>USA</COUNTRY>
 </ADDRESS>
</COLLECTION>')

CREATE PROCEDURE DECOMP1(IN DOC VARCHAR(1000))
 LANGUAGE SQL
BEGIN
 INSERT INTO CITIES SELECT *
   FROM XMLTABLE('/COLLECTION/ADDRESS'
     PASSING XMLPARSE(DOCUMENT DOC)
     COLUMNS CITYNAME VARCHAR(20) PATH 'CITY') AS X;
 INSERT INTO COUNTRIES SELECT *
   FROM XMLTABLE('/COLLECTION/ADDRESS'
     PASSING XMLPARSE(DOCUMENT DOC)
     COLUMNS COUNTRYNAME VARCHAR(20) PATH 'COUNTRY') AS X;
END

Because you cannot specify an XML argument, the example shows the use of VARCHAR.
This results in the XML document being parsed multiple times. Thus, if the XML document
has to be decomposed into 10 tables, the document is parsed 10 times.

8.10 Support for DEFINE NO for LOBs and XML

Many DB2 installations use software packages and install a suite of applications that create
table spaces explicitly or implicitly with the DEFINE NO option. When the DEFINE NO option
is used, the data sets backing the table spaces are not created until a table in that table space
is actually used. Thus, installations that use only a subset of modules from a full suite of
applications do not experience the overhead of unnecessary data set creation. However, in
DB2 9 the DEFINE NO option has no effect when LOB and XML table spaces are created.
Therefore, installations that install a subset of modules from a suite of applications still
experience sub-optimal application installation time due to the overhead of creating LOB
and XML table spaces and index spaces that are not even used by the subset of modules of
interest to the installations. Further, because unused objects are not called out as such, time
and resources are often spent on managing and backing up such unused LOB and XML table
spaces and index spaces.

The IMPDSDEF subsystem parameter specifies whether DB2 is to define the underlying data
set for an implicitly created table space that resides in an implicit database. This DSNZPARM
corresponds to the field DEFINE DATA SETS on installation panel DSNTIP7 (INSTALL
DB2 - SIZES PANEL 2). Acceptable values are YES or NO. The default value of YES means
that the data set is defined when the table space is implicitly created. The value of DEFINE
DATA SETS applies only to implicitly created base table spaces. It is not used for implicitly
created LOB or XML table spaces.

DB2 10 introduces a solution to allow installations to use the DEFINE NO option in the
CREATE TABLESPACE and CREATE INDEX statements, or the system parameter
IMPDSDEF, to defer the creation of underlying VSAM data sets for LOB and XML table
spaces and their dependent index spaces when creating tables with LOB or XML columns.
Additionally, optimization has been added to inherit the DEFINE NO attribute of the base table
space for dependent objects that are implicitly created or explicitly created without the
DEFINE option specified.
8.10.1 IMPDSDEF subsystem parameter
The IMPDSDEF subsystem parameter specifies whether DB2 defines the underlying data set
for certain implicitly created table spaces and indexes. This parameter applies to:
Base table spaces
Index spaces of indexes that are implicitly created on base tables
Implicitly created LOB or XML table spaces
Auxiliary indexes
NODEID indexes
DOCID indexes
Acceptable values are YES or NO. The default is YES.

YES   Underlying data sets are defined when the table space or index space is created.
NO    Underlying data sets are defined when data is first inserted into the table or index.
8.10.2 Usage reference
To invoke this function, specify DEFINE NO in CREATE TABLESPACE and CREATE INDEX
statements for explicitly created LOB table spaces, auxiliary indexes, and XML value indexes.
Alternatively, set the IMPDSDEF subsystem parameter to NO for implicitly created LOB or
XML table spaces and their dependent indexes. When the base table space has undefined
data sets through DEFINE NO or IMPDSDEF=NO, DB2 inherits the undefined state of the
base table space to dependent objects for the following cases:
For explicitly created dependent objects, if DEFINE YES or DEFINE NO is specified, the
DEFINE attribute is honored by DB2.
For explicitly created dependent objects (auxiliary index, XML index, base table index), if
the DEFINE attribute is not specified, DB2 inherits the DEFINE attribute from the base
table space. The exception is an explicitly created LOB table space without DEFINE
specified: because there is no correlation with the base table space until the auxiliary table is
created, DEFINE attribute inheritance does not occur.
For implicitly created dependent objects (base table indexes, and all LOB or XML objects),
the DEFINE NO attribute of the base table space is inherited by these dependent objects.
Otherwise, the IMPDSDEF subsystem parameter setting is honored for these implicitly
created objects.
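As an illustrative sketch, an explicitly created LOB table space, its auxiliary table, and the
auxiliary index can be created with deferred data sets as follows (the database, table space,
table, column, and index names are hypothetical):

CREATE LOB TABLESPACE MYLOBTS
  IN MYDB
  DEFINE NO;

CREATE AUXILIARY TABLE MYAUXTAB
  IN MYDB.MYLOBTS
  STORES MYTAB COLUMN MYLOBCOL;

CREATE UNIQUE INDEX MYAUXIX
  ON MYAUXTAB
  DEFINE NO;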
DBAs and application package providers should consider using DEFINE NO or the
IMPDSDEF=NO subsystem parameter setting if DDL performance is critical or if DASD
resources are limited. These DEFINE NO options provide relief with respect to z/OS limits on
data set allocation by deferring the VSAM DEFINE and OPEN until the first insert into the
object.
A value of -1 in the SPACE column of the SYSIBM.SYSTABLEPART catalog table indicates
that a LOB or XML table space is undefined. A non-negative value indicates that the
underlying VSAM data sets for the table space are allocated.
Similarly, a value of -1 in the SPACE column of the SYSIBM.SYSINDEXPART catalog table
indicates that an index on a LOB or XML table space is undefined. A non-negative value
indicates that the underlying VSAM data sets for the index space are allocated.
DB2 sets the value of the SPACE column to 0 after it creates an underlying VSAM data set
upon the first insert or LOAD.
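A catalog query such as the following (a sketch; substitute your own database name)
identifies table spaces in a database whose underlying data sets are still undefined:

SELECT DBNAME, TSNAME, PARTITION, SPACE
  FROM SYSIBM.SYSTABLEPART
  WHERE SPACE = -1
    AND DBNAME = 'MYDB';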
The inheritance rules also apply when LOB or XML columns are added to an existing table
with an ALTER statement and when new partitions are explicitly added to a
partition-by-growth or partition-by-range table space. However, when a partition-by-growth
table space grows and implicitly creates a new LOB table space for the new base partition,
DB2 uses the IMPDSDEF subsystem parameter to set the DEFINE attribute of the new
implicitly created LOB table space.
8.11 LOB and XML data streaming
In DB2 9, the processing of LOBs in a distributed environment with the Java Universal Driver on
the client side has been optimized for the retrieval of larger amounts of data. This dynamic
data format is available only for the Java Common Connectivity (JCC) T4 driver (Type 4
Connectivity). The call-level interface (CLI) of DB2 for Linux, UNIX, and Windows also has
this client-side optimization. Many applications effectively use locators to retrieve LOB data
regardless of the size of the data that is being retrieved. This mechanism incurs a separate
network flow to get the length of the data to be returned, so that the requester can determine
the proper offset and length for SUBSTR operations on the data to avoid any unnecessary
blank padding of the value. For small LOB and XML data, returning the value directly instead
of using a locator is more efficient.
For these reasons, LOB and XML data retrieval in DB2 9 has been enhanced so that it is
more effective for small and medium size objects. It is also still efficient in its use of locators to
retrieve large amounts of data. This function is known as progressive streaming.
DB2 10 for z/OS offers additional performance improvements by extending the support for
LOB and XML streaming, avoiding LOB and XML materialization between the distributed data
facility (DDF) and DBM1 address space.
For more information, see also 13.11.3, Streaming LOBs and XML between DDF and DBM1
on page 568.
Copyright IBM Corp. 2010. All rights reserved. 309
Chapter 9. Connectivity and administration
routines
Mainframe systems are adept at accommodating software that has been around for decades,
such as IMS, DB2, and CICS. However, they also host web applications that are built in Java
and can accommodate the latest business requirements.
DB2 9 and DB2 10 improve and generalize universal drivers for accessing data on any local
or remote server without the need of a gateway. In addition, DB2 10 for z/OS provides a
number of enhancements to improve the availability of distributed applications, including
online CDB changes and online changes to location alias names in addition to connectivity
improvements.
In this chapter, we discuss the following topics:
DDF availability
Monitoring and controlling connections and threads at the server
JDBC Type 2 driver performance enhancements
High performance DBAT
Use of RELEASE(DEALLOCATE) in Java applications
Support for 64-bit ODBC driver
DRDA support of Unicode encoding for system code pages
DB2-supplied stored procedures
9.1 DDF availability
With DB2 9, making almost any configuration change to the distributed data facility (DDF) can
be disruptive. DB2 10 introduces changes that help improve DDF availability by eliminating
requirements for recycling DDF and that provide better correlation with remote systems. Most
of the changes involve the ability to perform configuration changes without recycling DDF.
These enhancements provide higher DDF availability by providing support to modify the
communication database and to modify distributed location aliases and associated IP
addresses without disrupting existing connections. These functions are a late addition and
require APARs PM26781 (PTF UK63818) and PM26480 (PTF UK63820).
In this section, we discuss the following topics:
Online communications database changes
Online DDF location alias name changes
Domain name is now optional
Acceptable period for honoring cancel requests
Sysplex balancing using SYSIBM.IPLIST
Message-based correlation with remote IPv6 clients
9.1.1 Online communications database changes
The communications database (CDB) is a set of updatable DB2 catalog tables that contain
DDF information, such as a remote server's IP address, DRDA port, and other such
information. A DDF requester uses this information to establish outbound connections to a
server.
When a CDB table, specifically LOCATIONS, IPNAMES, or IPLIST, is updated, the DDF uses
the updated values only when it is recycled or when it has not yet connected to that server:
If DDF has not yet started communicating to a particular location, IPNAMES and
LOCATIONS take effect when DDF attempts to communicate with that location.
If DDF has already started communication, changes to IPNAMES and LOCATIONS take
effect the next time DDF is started.
In DB2 10, updates to IPNAMES, LOCATIONS, and IPLIST take effect for a new remote
connection that is requested by a new or existing application. Updates do not affect existing
connections.
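For example, assuming a requester whose CDB contains a row in SYSIBM.IPNAMES for the
link name REMSYS (the link name and address here are hypothetical), the updated address
is picked up by the next new remote connection without recycling DDF:

UPDATE SYSIBM.IPNAMES
  SET IPADDR = '9.30.88.93'
  WHERE LINKNAME = 'REMSYS';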
Changes to other CDB tables are not impacted by this enhancement:
Changes to LUMODES take effect the next time DDF is started or on the initial session to
a given LUMODE combination.
Changes to LUNAMES and LULIST take effect as follows:
If DDF has not yet tried to communicate with a particular remote location, rows added
to LUNAMES take effect when DDF attempts to communicate with that location.
If DDF has already attempted communication with a particular location, rows added to
LUNAMES take effect the next time DDF is started.
Changes to USERNAMES and MODESELECT take effect at the next thread access.
Chapter 9. Connectivity and administration routines 311
The output of the -DISPLAY LOCATION and -DISPLAY LOCATION DETAIL commands is
enhanced to display these pending changes. See Figure 9-1 for an example of the command
output.
Figure 9-1   -DISPLAY LOCATION output

@dis loc det
DSNL200I @ DISPLAY LOCATION REPORT FOLLOWS-
LOCATION PRDID T ATT CONNS
::FFFF:9.30.88.92..447 DSN10015 R 2
TRS 1
DISPLAY LOCATION REPORT COMPLETE

9.1.2 Online DDF location alias name changes

Group access uses a DB2 location name that represents all the members of a data sharing
group. In contrast, member-specific access uses a location alias name that represents only a
subset of members of a data sharing group. Remote requesters can use location alias names
to establish connections with one or more specific members and have connections balanced
across the subset based on capacity.

DB2 V8 introduced DDF location alias names. See DB2 UDB for z/OS Version 8: Everything
You Ever Wanted to Know, ... and More, SG24-6079 for details. However, DB2 members
cannot be added to or removed from the subset without stopping DB2, because the location
alias names are specified in the bootstrap data set (BSDS), which cannot be modified while
DB2 is running.

DB2 10 provides the ability to add, remove, and modify DDF location alias names without
stopping either DB2 or DDF, so that DB2 remote traffic is not disrupted. A new DB2
command, -MODIFY DDF ALIAS, is introduced to manage DDF location alias names
specified in the DDF communication record of the DB2 BSDS. See Figure 9-2. This command
can be issued only when DB2 is running, but it can be issued regardless of whether DDF is
started or stopped.

For details, see DB2 10 for z/OS Command Reference, SC19-2972.

Figure 9-2   -MODIFY DDF ALIAS syntax diagram
The START keyword instructs DB2 to start accepting connection requests to the specified
alias if DDF is running. If DDF is not running, then the alias is marked eligible for starting, so
that it starts automatically when DDF starts next time.
Similarly, the STOP keyword instructs DB2 to stop accepting new connection requests to the
specified alias. Existing database access threads that are processing connections to the specified
alias remain unaffected. An alias that is stopped is not started automatically when DDF starts.
CANCEL also instructs DB2 to stop accepting new connection requests to the specified alias.
However, existing database access threads that are processing connections to the specified
alias are cancelled. An alias that is cancelled is not started automatically when DDF starts.
The attributes of an existing alias can be modified only when the alias is stopped. The
modified alias attributes take effect when the alias is started. By default, the aliases created
through DSNJU003 are started, and those created by the -MODIFY DDF ALIAS command
are stopped. DSNJU004 does not print the alias status information. Use the -DISPLAY DDF
command report to find the status of an alias.
You can also specify a member-specific IPv4 or IPv6 address for each alias using the same
command but not using the DSNJU003 utility.
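Based on the keywords described in this section, a typical sequence might look as follows;
the alias name, port, and IP address are hypothetical, and you should verify the exact syntax
in DB2 10 for z/OS Command Reference, SC19-2972:

-MODIFY DDF ALIAS(ALS1) ADD
-MODIFY DDF ALIAS(ALS1) PORT(5004)
-MODIFY DDF ALIAS(ALS1) IPV4(9.30.88.100)
-MODIFY DDF ALIAS(ALS1) START

The alias is created stopped, its attributes are set while it is stopped, and it begins accepting
connection requests only after the START keyword is issued.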
You use location alias names to make an initial connection to one of the members that is
represented by the alias, using member-specific access. The member that received the
connection request works in conjunction with Workload Manager (WLM) to return a list of
members that are currently active and able to perform work. The list includes member IP
addresses and ports and includes a weight for each active member that indicates the
member's current capacity. Requester systems use this information to connect to the member
or members with the most capacity that are also associated with the location alias. However,
with DB2 9, the IP address provided in this list is fixed regardless of how the member might
be accessed.
Specifying an IP address for a location alias name gives you the ability to control the IP
address that is returned in the weighted server list when you connect to a location alias.
However, this support is provided only with the INADDR_ANY approach. Now, when the
request to WLM is for a location alias rather than a location name, the returned list includes a
weight for each active member and, for each such member, the alias-specific IP address and
port if one is defined, or otherwise the member-specific IP address.
In a data sharing environment, you can configure a DB2 member to accept connections either
on any IP address active on the TCP/IP stack, including the IP address specified in the DB2
BSDS (INADDR_ANY approach), or to accept connections only on a specific IP address
listed in the TCP/IP PORT statement (BINDSPECIFIC approach). DB2 9 introduced the
INADDR_ANY approach and provided an advantage in a data sharing environment over the
DB2 V8 function. In DB2 9, if you specify the IP address in the BSDS, you do not have to
define a domain name to TCP/IP. In DB2 V8, the domain name must be defined to the TCP/IP
host so that a DB2 subsystem can accept connections from remote locations. In addition,
DB2 9 does not allow SSL to work with the TCP/IP BIND specific statements. See DB2 9 for
z/OS: Distributed Functions, SG24-6952-01 and DB2 9 for z/OS: Configuring SSL for Secure
Client-Server Communications, REDP-4630 for details.
You can use the -DISPLAY DDF command to display the status of each alias and the
-DISPLAY DDF ALIAS command to display the associated alias IP addresses, if any.
Figure 9-3 shows the output from the -DISPLAY DDF command.
Figure 9-3 DISPLAY DDF command
Message DSNL088I lists the DDF location alias names defined. The STATUS column can have
the following values:
STARTD Alias is started.
STOPD Alias is stopped.
CANCLD Alias is cancelled.
STARTG Alias is starting.
STOPG Alias is stopping.
CANCLG Alias is canceling.
STATIC Alias is static, meaning a DSNJU003-defined alias.
DSNL080I @ DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
DSNL081I STATUS=STARTD
DSNL082I LOCATION LUNAME GENERICLU
DSNL083I LOC1 LU1 -NONE
DSNL084I TCPPORT=446 SECPORT=0 RESPORT=5001 IPNAME=-NONE
DSNL085I IPADDR=::1.3.23.80
DSNL086I SQL DOMAIN=hello.abc.com
DSNL087I ALIAS PORT SECPORT STATUS
DSNL088I ALS1 5004 CANCLD
DSNL105I CURRENT DDF OPTIONS ARE:
DSNL106I PKGREL = COMMIT
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
Figure 9-4 shows the output from the -DISPLAY DDF DETAIL command.
Figure 9-4 DISPLAY DDF DETAIL command
Message DSNL089I lists the IP addresses that are associated with a DDF location alias
name. When an IP address is associated with a location alias name, DB2 tries to activate this
IP address as a dynamic VIPA, provided that the IP address is specified in the TCP/IP
VIPARANGE statement. If the activation fails, no error message is printed, and the IP address
is still returned to the client, which must then be able to use it to route future requests.
@DISPLAY DDF DETAIL
DSNL080I @ DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
DSNL081I STATUS=STARTD
DSNL082I LOCATION LUNAME GENERICLU
DSNL083I LOC1 LU1 -NONE
DSNL084I TCPPORT=446 SECPORT=0 RESPORT=5001 IPNAME=-NONE
DSNL085I IPADDR=::1.3.23.80
DSNL086I SQL DOMAIN=hello.abc.com
DSNL086I RESYNC DOMAIN=hello.abc.com
DSNL087I ALIAS PORT SECPORT STATUS
DSNL088I ALS1 5004 CANCLD
DSNL088I ALS2 5005 STARTD
DSNL089I MEMBER IPADDR=::1.3.23.81
DSNL090I DT=I CONDBAT= 64 MDBAT= 64
DSNL092I ADBAT= 0 QUEDBAT= 0 INADBAT= 0 CONQUED= 0
DSNL093I DSCDBAT= 0 INACONN= 0
DSNL100I LOCATION SERVER LIST:
DSNL101I WT IPADDR IPADDR
DSNL102I 64 ::1.3.23.81
DSNL105I CURRENT DDF OPTIONS ARE:
DSNL106I PKGREL = COMMIT
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
@DISPLAY DDF ALIAS(ALS1) DETAIL
DSNL080I @ DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
DSNL087I ALIAS PORT SECPORT STATUS
DSNL088I ALS1 5004 CANCLD
DSNL089I MEMBER IPADDR= ::1.2.3.4
DSNL089I MEMBER IPADDR= 2001::2:3:4
DSNL096I ADBAT= 0 CONQUED= 0 TCONS= 0
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
@DISPLAY DDF ALIAS(ALS2) DETAIL
DSNL080I @ DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
DSNL087I ALIAS PORT SECPORT STATUS
DSNL088I ALS2 5005 STARTD
DSNL089I MEMBER IPADDR= ::5.6.7.8
DSNL089I MEMBER IPADDR= 2005::6:7:8
DSNL096I ADBAT= 0 CONQUED= 0 TCONS= 0
DSNL100I LOCATION SERVER LIST:
DSNL101I WT IPADDR IPADDR
DSNL102I 64 ::5.6.7.8 2005::6:7:8
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
The following messages are also introduced or enhanced:
DSNL300I csect MODIFY DDF REPORT FOLLOWS:
DSNL302I ALIAS alias1 IS SET TO START
DSNL301I csect MODIFY DDF REPORT COMPLETE
DSNL303I csect MODIFY DDF command failed with REASON = reason-code
DSNL304I csect Alias alias_name already exists in the BSDS
DSNL305I csect Alias alias_name does not exist in the BSDS
DSNL306I csect Maximum limit for Aliases reached
DSNL307I csect Alias parameter alias_parm set to parm_value is invalid
DSNL308I csect Alias IP address ip_address does not exist
DSNL309I csect Member-specific IP addresses not defined in the BSDS
DSNL310I csect Alias port port is duplicate
DSNL311I csect Alias port port does not exist
DSNL312I ALIAS alias_name cannot be started
DSNL313I ALIAS alias_name cannot be stopped or cancelled
DSNL314I Alias alias_name cannot be modified while started
9.1.3 Domain name is now optional
With DB2 9, the domain name that is associated with the DDF IP address has to be resolved
prior to DDF completing its processing. This requirement of resolving the DB2 domain name
is historical. IBM Data Server drivers and other clients, including z/OS requesters, now use
the DB2 server's IP address as their first choice to establish connections to a DB2 server for
two-phase commit resynchronization. The DB2 server also provides its domain name, in case
the clients want to use it as a backup to the IP address. However, this domain name provides
no real benefit: if a connection made with an IP address fails, either because the target DB2
is down or because of firewall issues, a connection made with a domain name that resolves
to the same IP address also fails. So, there is practically no benefit in providing a DDF
domain name.
DB2 10 tolerates the absence of a domain name when an IP address is defined in the BSDS,
because the IP address is not going to change. It is now no longer necessary to configure the
domain name if you specify the IP address in the BSDS; however, a domain name is still
required if you specify the IP address in the TCP/IP PORT statement. The change is made for
both data sharing and non-data sharing environments and is retrofitted to DB2 9.
When using Dynamic VIPA, the IP addresses that are assigned to DB2 by the TCP/IP profile,
must still be registered with the DNS. However, IP addresses that are registered in the BSDS
no longer need to be registered in the DNS.
When a domain name is unavailable, a DSNL523I message that contains an IP address is
issued, in lieu of a DSNL519I message that contains a domain name, to indicate that DDF is
initialized to process TCP/IP requests.
9.1.4 Acceptable period for honoring cancel requests
SQL queries that originate from remote client system applications can sometimes run for
hours, consuming database and CPU resources, even after the database access thread
(DBAT) has been canceled. This situation typically occurs when a remote client terminates its
connection to the DB2 server. The impact is that users can be charged for CPU costs of work
whose results never reach the application, and the only way to terminate the DBAT that is
consuming the resources is to terminate the entire DB2 subsystem.
Prior versions of DB2 already try to detect these types of situations. However, in an effort to
eliminate DB2 outages, DB2 10 detects more of these CPU intensive DBATs.
9.1.5 Sysplex balancing using SYSIBM.IPLIST
You can target remote connections to only a subset of DB2 members of a data sharing group
in either of the following ways:
DB2 for z/OS as an application requester using member-specific routing by coding entries
in the catalog table SYSIBM.IPLIST
Routing DRDA requests to a subset of members of a data sharing group, from any DRDA
TCP/IP application requestor, such as DB2 Connect, based on DB2 location alias
names, by defining location alias names with valid port numbers in the BSDS
Both techniques are described in detail in DB2 UDB for z/OS Version 8: Everything You Ever
Wanted to Know, ... and More, SG24-6079.
When you use member subsetting (location alias names with specific port numbers that are
defined in the BSDS), DB2 for z/OS already maintains a weighted server list of the DB2 data
sharing group for each alias name that is used as a subset and returns the appropriate list
to the client. However, when using the SYSIBM.IPLIST technique, DB2 for z/OS as a
requester does not maintain this list internally with the current weights that are returned by
the server, which leads to improper workload balancing.
DB2 10 as a DRDA requester using SYSIBM.IPLIST to connect to a subset of DB2 members
is enhanced to also use the real time weight information that is returned by the server to
balance connections.
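As a sketch of the SYSIBM.IPLIST technique, a requester can be restricted to two members of
a group by inserting one row per member IP address, keyed by the same link name that is
used in SYSIBM.IPNAMES (the link name, addresses, and column list here are hypothetical;
verify them against the CDB table descriptions):

INSERT INTO SYSIBM.IPLIST (LINKNAME, IPADDR)
  VALUES ('REMSYS', '9.30.88.101');
INSERT INTO SYSIBM.IPLIST (LINKNAME, IPADDR)
  VALUES ('REMSYS', '9.30.88.102');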
9.1.6 Message-based correlation with remote IPv6 clients
Historically, correlation of work between DB2 for z/OS and any associated remote partners
has been through the logical unit of work identifier (LUWID). This concept was introduced with
the initial SNA distributed support and continued to be used when TCP/IP support was
introduced, because an IPv4 (32-bit) address could still be represented successfully (8 or
4-byte character form) in the LUWID.
With the introduction of IPv6 support in DB2 9, an IPv6 (128-bit) address can no longer be
represented in an LUWID, and the concept of an extended correlation token was introduced.
This extended correlation token represented the entire IPv6 address, but DB2 9 provides this
extended correlation token only in Display Thread command reports and trace records.
DB2 10 provides the extended correlation token in more messages, making it much easier to
correlate message-related failures to the remote client application that is involved in the
failure. See Figure 9-5.
Figure 9-5 Extended correlation token in messages
The DDF DSNL027I and DSNL030I messages and lock-related messages DSNT318I,
DSNT375I, DSNT376I, DSNT377I, and DSNT378I are changed to also contain
THREAD-INFO information. The THREAD-INFO information is enhanced in these messages
to contain the extended correlation token.
Because all the components of the THREAD-INFO information are delimited by a colon (:),
the extended correlation token is enclosed in less-than (<) and greater-than (>) characters
to help distinguish the THREAD-INFO components from the extended correlation token
components.
For details about the THREAD-INFO information provided, see message DSNT318I in DB2
10 for z/OS Messages, GC19-2979.
9.2 Monitoring and controlling connections and threads at the
server
In typical customer environments, DB2 on z/OS is accessed remotely from many different
applications. These applications typically have different characteristics. Some applications
are more important than others, and some applications have more users than others. These
differences present the following challenges:
DB2 9 provides only subsystem-level controls for the number of connections and threads,
the CONDBAT and MAXDBAT parameters; connections cannot be controlled at an
application level.
Rogue applications might flood a DB2 9 subsystem, consuming all connections and
affecting important business applications.
DB2 9 also includes profiles to identify a query or set of queries. Profile tables contain
information about how DB2 executes or monitors a group of statements. Profiles are specified
in SYSIBM.DSN_PROFILE_TABLE. In DB2 9, it is possible to identify SQL statements by
authorization ID and IP address or to specify combinations of plan name, collection ID, and
package name. You can then monitor all statements that are identified in this manner.
SQL statements identified by a profile are executed based on keywords and attributes
specified by customers in SYSIBM.DSN_PROFILE_ATTRIBUTES. These tables are used by
tools such as the Optimization Service Center as described in IBM DB2 9 for z/OS: New Tools
for Query Optimization, SG24-7421.
DSNL027I @ SERVER DISTRIBUTED AGENT WITH
LUWID=G91702F8.CD1B.101018185602=16
THREAD-INFO=ADMF001:mask:admf001:db2bp:*:*:*:<9.23.2.248.52507.101018185602>
RECEIVED ABEND=04E
FOR REASON=00D3001A
DSNL028I @ G91702F8.CD1B.101018185602=16
ACCESSING DATA FOR
LOCATION ::FFFF:9.23.2.248
IPADDR ::FFFF:9.23.2.248
318 DB2 10 for z/OS Technical Overview
DB2 10 for z/OS enhances the profile table monitoring facility to support filtering and
threshold monitoring for system-related activities, such as the number of connections, the
number of threads, and the period of time that a thread can stay idle. This enhancement
allows you to enforce thresholds (limits) that were previously available only at the system
level through DSNZPARM, such as CONDBAT, MAXDBAT, and IDTHTOIN, at a more granular
level.
This enhancement allows you to control connections using the following categories:
IP Address (IPADDR)
Product Identifier (PRDID)
Role and Authorization Identifier (ROLE, AUTHID)
Collection ID and Package Name (COLLID, PKGNAME)
The filtering categories are mutually exclusive.
This enhancement also provides the option to define the type of action to take after these
thresholds are reached. You can display a warning message or an exception message when
the connection, thread, and idle thread timeout controls are exceeded. If the user chooses to
display a warning message, a DSNT771I or DSNT772I message is issued, depending on
DIAGLEVEL, and processing continues. In the case of exception processing, a message is
displayed on the console, and the action taken (that is, queuing, suspension, or rejection)
depends on the type of attribute (connection, thread, or idle thread timeout) that is defined.
This monitoring capability is started and available only when you issue the START PROFILE
command. After you issue a START PROFILE command, any rows with a Y in the
PROFILE_ENABLED column in SYSIBM.DSN_PROFILE_TABLE are now in effect.
Monitoring can be stopped by issuing the STOP PROFILE command. Monitoring for
individual profiles can be stopped by updating the PROFILE_ENABLED column in
SYSIBM.DSN_PROFILE_TABLE to N and issuing the START PROFILE command again.
9.2.1 Create the tables
Monitoring using profiles requires the following tables:
SYSIBM.DSN_PROFILE_TABLE
SYSIBM.DSN_PROFILE_HISTORY
SYSIBM.DSN_PROFILE_ATTRIBUTES
SYSIBM.DSN_PROFILE_ATTRIBUTES_HISTORY
These tables must be created if they do not exist. The HISTORY tables have the same
columns as the PROFILE and ATTRIBUTES tables, except that they have a STATUS column
instead of a REMARKS column. The HISTORY tables track the state of rows in the
corresponding tables, including why a row was rejected.
9.2.2 Insert a row in DSN_PROFILE_TABLE
DSN_PROFILE_TABLE has one row per monitoring or execution profile. Rows are inserted
by users using tools such as SQL Processor Using File Input (SPUFI). A row can apply either
to statement monitoring or to system level activity monitoring, but not both. The monitoring
can be done based on options such as IP address, authid, plan name, package name, and
collection ID. DB2 10 adds role and product ID as additional options to monitor system
activities (connections and threads). To monitor connections or threads, insert a row into
DSN_PROFILE_TABLE with the appropriate criteria.
Note: This support is available only for connections coming to DB2 on z/OS over DRDA.
Valid filtering criteria for monitoring system activities can be organized into categories as
shown in Table 9-1.
Table 9-1 Categories for system group monitoring
Figure 9-6 is an example of rows in the DSN_PROFILE_TABLE.
Figure 9-6 Contents of DSN_PROFILE_TABLE
PROFILEID 20 row defines a profile that matches all the threads that are run by user Tom.
PROFILEID 21 row defines a profile that matches all the threads that are using client driver
level JCC03570.
PROFILEID 22 row defines a profile that matches all the threads and connections that are
connecting from IP Address 129.42.16.152.
Multiple profile rows can apply to an individual execution or process, in which case the more
restrictive profile is applied.
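Rows such as those described for Figure 9-6 could be created with SQL along the following lines. This is only a sketch: the PROFILEID values are illustrative, and you should verify the column names against the DDL of SYSIBM.DSN_PROFILE_TABLE on your system (the IP address filter described as IPADDR in Table 9-1 is held in the LOCATION column of the table).

```sql
-- Illustrative profile rows mirroring Figure 9-6 (values are hypothetical)
INSERT INTO SYSIBM.DSN_PROFILE_TABLE
       (PROFILEID, AUTHID, PROFILE_ENABLED)
VALUES (20, 'TOM', 'Y');            -- all threads run by user Tom

INSERT INTO SYSIBM.DSN_PROFILE_TABLE
       (PROFILEID, PRDID, PROFILE_ENABLED)
VALUES (21, 'JCC03570', 'Y');       -- all threads using driver level JCC03570

INSERT INTO SYSIBM.DSN_PROFILE_TABLE
       (PROFILEID, LOCATION, PROFILE_ENABLED)
VALUES (22, '129.42.16.152', 'Y');  -- all connections from this IP address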
9.2.3 Insert a row in DSN_PROFILE_ATTRIBUTES
DSN_PROFILE_ATTRIBUTES relates profile table rows to keywords that specify monitoring
or execution and attributes that define how to direct or define the monitoring or execution:
1. Insert a value in the PROFILEID column to identify which profile uses this type of
monitoring function by entering the matching value from the PROFILEID column of
SYSIBM.DSN_PROFILE_TABLE.
Filtering category                       Columns to specify
IP address                               Specify only the IPADDR column
Client product identifier                Specify only the PRDID column
Role or authorization identifier         Specify one or both of the following columns:
                                         ROLE, AUTHID
Collection identifier or package name    Specify one or both of the following columns:
                                         COLLID, PKGNAME
2. Specify the monitoring function by inserting values into the remaining columns of the
DSN_PROFILE_ATTRIBUTES table. DB2 10 for z/OS introduces the following keywords
to use for monitoring connections and threads:
MONITOR THREADS
The profile monitors the total number of concurrent active threads on the DB2
subsystem. The monitoring function is subject to the filtering on IPADDR, PRDID,
ROLE or AUTHID, and COLLID or PKGNAME, which are defined in
SYSIBM.DSN_PROFILE_TABLE.
MONITOR CONNECTIONS
The profile monitors the total number of remote connections from the remote
requesters using TCP/IP, which includes the current active connections and the
inactive connections. The monitoring is subject to the filtering on the IPADDR column
only in SYSIBM.DSN_PROFILE_TABLE for remote connections. Active connections
are those connections that are currently associated with an active database access
thread or that are queued and waiting to be serviced. Inactive connections are those
connections that are currently not waiting and that are not associated with a database
access thread.
MONITOR IDLE THREADS
The profile monitors the approximate time (in seconds) that an active server thread is
allowed to remain idle.
3. Insert values in the ATTRIBUTE1 column of DSN_PROFILE_ATTRIBUTES to specify how
DB2 responds when a threshold is met. The values can be WARNING or EXCEPTION
along with levels of diagnostic information, DIAGLEVEL1 or DIAGLEVEL2.
4. Insert values in the ATTRIBUTE2 column of DSN_PROFILE_ATTRIBUTES to specify the
threshold that the monitor uses.
If MONITOR THREADS is specified as the keyword, then the value in ATTRIBUTE2
column cannot exceed the value specified for MAXDBAT.
If MONITOR CONNECTIONS is specified as the keyword, then the value in the
ATTRIBUTE2 column cannot exceed the value specified in CONDBAT.
If MONITOR IDLE THREADS is specified as the keyword, then the value specified in the
ATTRIBUTE2 column is applied independently of the value specified for IDTHTOIN.
Figure 9-7 is an example of rows in DSN_PROFILE_ATTRIBUTES.
Figure 9-7 Contents of DSN_PROFILE_ATTRIBUTES
The first row indicates that DB2 monitors the number of threads that satisfy the scope that is
defined by PROFILEID 20 in SYSIBM.DSN_PROFILE_TABLE, which is all threads using
authid TOM. When the number of these threads in the DB2 system exceeds 10, the threshold
defined in the ATTRIBUTE2 column, a DSNT772I message is written to the system console.
Because EXCEPTION is defined in the ATTRIBUTE1 column, DB2 queues or suspends new
connection requests, up to another 10 (the same threshold defined in ATTRIBUTE2). When
the total number of threads that are queued or suspended reaches 10, DB2 starts to fail
connection requests with SQLCODE -30041. In other words, there can be up to 10 active
threads for this profile plus up to 10 queued connections for this profile.
The second row monitors idle threads for PROFILEID 21, that is, any thread using client driver
level JCC03570. When the thread stays idle for more than 3 minutes, DB2 issues a
DSNT771I console message and lets the thread continue to stay idle, because the thread is
monitored by PROFILEID 21 with a warning action defined.
The third row monitors connections for PROFILEID 22, that is, any connection coming from
IP address 129.42.16.152. The profile attribute table definition shows that when more than 45
concurrent connections come to DB2 from this IP address, a DSNT772I message is sent to
the console, and the connection fails.
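The attribute rows discussed above could be created with SQL such as the following sketch. The keyword and ATTRIBUTE1 values follow the text; the thresholds are the illustrative 10 threads, 180 seconds (3 minutes), and 45 connections.

```sql
-- Illustrative DSN_PROFILE_ATTRIBUTES rows matching the discussion of Figure 9-7
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2)
VALUES (20, 'MONITOR THREADS', 'EXCEPTION_DIAGLEVEL2', 10);

INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2)
VALUES (21, 'MONITOR IDLE THREADS', 'WARNING_DIAGLEVEL1', 180);

INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2)
VALUES (22, 'MONITOR CONNECTIONS', 'EXCEPTION_DIAGLEVEL2', 45);
```

Note that MONITOR IDLE THREADS thresholds are specified in seconds, so 3 minutes is written as 180.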
9.2.4 Activate profiles
After the profiles are defined, you can activate them dynamically using the DB2 START
PROFILE command.
9.2.5 Deactivate profiles
You can deactivate profiles dynamically using the DB2 STOP PROFILE command.
9.2.6 Activating a subset of profiles
To activate only a subset of profiles, delete the rows for the unwanted profiles from
SYSIBM.DSN_PROFILE_TABLE, or change their PROFILE_ENABLED column value to N.
Then, reload the profile table by issuing the DB2 START PROFILE command; only the
remaining enabled profiles take effect.
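As a sketch, disabling a single profile and reloading the profile table might look like the following (the PROFILEID is illustrative):

```sql
-- Disable one profile; the others remain eligible
UPDATE SYSIBM.DSN_PROFILE_TABLE
   SET PROFILE_ENABLED = 'N'
 WHERE PROFILEID = 21;
-- Then reload the profiles with the DB2 command:
-- -START PROFILE
```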
9.3 JDBC Type 2 driver performance enhancements
The DB2 database system provides driver support for client applications and applets that are
written in Java using JDBC and for embedded SQL for Java (SQLJ). JDBC is an application
programming interface (API) that Java applications use to access relational databases. DB2
support for JDBC lets you write Java applications that access local DB2 data or remote
relational data on a server that supports DRDA. SQLJ provides support for embedded static
SQL in Java applications. In general, Java applications use JDBC for dynamic SQL and SQLJ
for static SQL. However, because SQLJ can inter-operate with JDBC, an application program
can use JDBC and SQLJ within the same unit of work.
The DB2 product includes support for the following types of JDBC driver architecture:
Type 2
This implementation of the JDBC driver is written partly in Java and partly in native
code. The drivers use a native client library specific to the data source to which they
connect. The native component allows the type 2 JDBC driver to connect to DB2 and
issue SQL directly using cross-memory services.
Type 4
This implementation is written entirely in Java and implements the network protocol
for a specific data source.
The client connects directly to the DB2 on z/OS over a network through the Distributed
Data Facility (DDF) using DRDA protocol.
DB2 for z/OS supports a single driver that combines type 2 and type 4 JDBC
implementations. When running Java applications on z/OS that connect to DB2 on z/OS on
the same LPAR, use type 2 connectivity. For applications that connect to DB2 on z/OS over
the network, use type 4 connectivity. The recommendation to use type 2 connectivity is based
on performance.
Type 2 connectivity uses cross-memory services when connecting to DB2 on z/OS, whereas
type 4 incurs network latency. However, with recent enhancements to type 4 connectivity, its
performance in some cases is actually better than that of type 2. Areas in which type 4
performs better than type 2 include cases where applications return multiple rows (cursor
processing) and progressive streaming of LOBs and XML, because these areas implement
limited block fetch capability. The enhancement to the JDBC type 2 driver in DB2 10 now
eliminates these performance gaps.
9.3.1 Limited block fetch
With limited block fetch, the DB2 for z/OS server attempts to fit as many rows as possible in a
query block. DB2 transmits the block of rows over the network. Data can also be pre-fetched
when the cursor is opened without needing to wait for an explicit fetch request from the
requester.
JDBC Type 2 driver behavior without limited block fetch
Figure 9-8 depicts the flow of calls from a Java application running on z/OS to DB2 on z/OS
on the same LPAR using type 2 connectivity to DB2 9.
Figure 9-8 JDBC type 2 with DB2 9
In this case, the Java application running on z/OS opens a connection to DB2, prepares an
SQL statement for execution, and executes the SQL statement. Let us assume that the
execution of the SQL statement qualifies 100 rows. The Java application then issues a fetch
statement to get each row. Each fetch operation results in a call to DB2 to return the data.
After the application has processed the row, it issues the next fetch statement, which means
for this example, 100 fetch calls. Each fetch call is a separate call to DB2 by the JDBC driver
to fetch one row. Then, after all the rows have been fetched and processed by the application,
it issues close, which is another call to the DB2 server. This process is the same regardless of
whether the application runs in an application server, such as WebSphere Application Server,
or as a stand-alone Java application.
JDBC Type 2 driver behavior with limited block fetch
Now let us examine the behavior in DB2 10. Figure 9-9 depicts the sequence of calls from a
Java application running on z/OS to DB2 on z/OS on the same LPAR using type 2
connectivity to DB2 10.
Figure 9-9 JDBC type 2 with DB2 10
In Figure 9-9, the Java application running on z/OS opens a connection to DB2, prepares a
SQL statement for execution, and executes the SQL statement. This action results in DB2
returning multiple rows. Let us assume that the execution of the SQL statement qualifies 100
rows. The Java application then issues a fetch statement to get each row. Because the JDBC
type 2 driver is enhanced to provide limited block fetch capability, the driver in this case
returns as many rows as possible that fit in a buffer from a single fetch call issued by the
application from DB2. The number of rows that are returned depends on the buffer size,
which is controlled by the DataSource/Connection queryDataSize property. The default is
32 KB. Valid values for a DB2 10 for z/OS target are 32, 64, 128, or 256 KB.
It is important to note that a block of rows is returned to the Java application. Thus, all the
rows that fit the buffer size (32 KB is the default) are available in the JVM. It is also important
to note that only one fetch is issued by the application, and the application still has not
processed rows. From the application side, it has issued a single fetch statement to fetch and
process a single row. In this case, the driver working with DB2 on z/OS has already fetched all
rows and placed them in memory in the application address space. After processing the first
row, when the application issues the next fetch statement to get the next row, the driver
actually returns the row that is in memory in the application address space and does not
make a call to DB2.
JDBC type 2 driver is also enhanced to implement early close of the cursors after all the rows
are returned. Instead of a separate call to close the cursor from the application, the driver
closes the cursor in DB2 implicitly.
A common question is how this process differs from multi-row fetch. Multi-row fetch also
returns multiple rows in a single call to DB2. However, multi-row fetch returns the columns of
each row into arrays that are declared in the application program, and the JDBC specification
does not currently support arrays. Thus, the driver would have to convert the arrays back into
rows after the fetch.
9.3.2 Conditions for limited block fetch
This limited block fetch capability and early close of cursors is available only when the
following conditions are met:
A forward-only, non-updateable cursor is used
Fully materialized LOBs are disabled and progressive streaming is enabled
Type 2 multi-row fetch is disabled
9.4 High performance DBAT
Prior to DB2 10, the RELEASE option of packages was not honored for distributed
applications. Packages, when allocated, were deallocated at commit for distributed
applications, mainly to allow functions such as issuing DDL, running utilities, and binding
packages, which would be impacted if the packages were not deallocated after commit.
Although this behavior helped, it came at a price. Performance analysis of inactive
connection processing showed that a large CPU expenditure occurred in package allocation
and deallocation, with a lesser CPU expense occurring in the processing to pool a DBAT and
then associate a pooled DBAT with a connection.
DB2 10 includes the following enhancements:
CPU reduction of inactive connection processing
An easy method to switch between RELEASE (COMMIT) and RELEASE (DEALLOCATE)
for existing applications without having to rebind, so that activities such as running DDL,
running utilities, and other activities can happen without an outage
9.4.1 High performance DBAT to reduce CPU usage
In DB2 10, DRDA database access threads (DBATs) are allowed to run accessing data under
the RELEASE bind option of the package. If a package that is associated with a distributed
application is bound with RELEASE(DEALLOCATE), the copy of the package remains
allocated to the DBAT until the DBAT is terminated. The DBATs hold package allocation locks
even while they are not being used for client unit-of-work processing. However, to minimize
the number of different packages that can possibly be allocated by any one DBAT, the
distributed data facility (DDF) does not pool the DBAT or disassociate it from its connection
after the unit-of-work ends. After the unit-of-work ends, DDF still cuts an accounting record
and deletes the WLM enclave, as it does in inactive connection processing. Thus, the client
application that requested the connection holds onto its DBAT, and only the packages that are
required to run the application accumulate allocation locks against the DBAT.
The key thing to remember is that if even one package among many is bound with
RELEASE(DEALLOCATE), the DBAT becomes a high performance DBAT, provided that
it meets the requirements. Also, similar to inactive connection processing, the DBAT is
terminated after it processes 200 units-of-work (a value that is not user changeable). The
connection at this point is made inactive. On the next request to start a unit-of-work by the
connection, a new DBAT is created or a pooled DBAT is assigned to process the unit-of-work.
Normal idle thread time-out detection is applied to these DBATs.
If the DBAT is in flight processing a unit-of-work and if it has not received the next message
from a client, DDF cancels the DBAT after the IDTHTOIN value has expired. However, if the
DBAT is sitting idle, having completed a unit-of-work for the connection, and if it has not
received a request from the client, then the DBAT is terminated (not cancelled) after
POOLINAC time expires.
9.4.2 Dynamic switching to RELEASE(COMMIT)
In DB2 10 new-function mode, the default behavior is to honor the bind option for distributed
application threads. You can switch between modes dynamically using the MODIFY DDF
command:
The MODIFY DDF PKGREL(BNDOPT) command causes DDF to honor the bind option
that is associated with the package of a distributed application.
The MODIFY DDF PKGREL(COMMIT) command causes DDF to request that any package
used for remote client processing be allocated under RELEASE(COMMIT) bind option
rules, regardless of the value of the package's RELEASE bind option.
To establish RELEASE(DEALLOCATE) behavior, you need to explicitly bind with
RELEASE(DEALLOCATE) and issue MODIFY DDF PKGREL(BNDOPT).
JCC and DB2 Connect 9.7 FP3a have changed the default bind option to
RELEASE(DEALLOCATE).
Ideally, during maintenance windows or periods of low activity, you can switch to the COMMIT
option so that you can run utilities, DDL, package binds, and other such activities without an
outage.
9.4.3 Requirements
This honoring of package bind options for distributed application threads is available only
under the following conditions:
KEEPDYNAMIC YES is not enabled.
CURSOR WITH HOLD is not enabled.
CMTSTAT is set to INACTIVE.
9.5 Use of RELEASE(DEALLOCATE) in Java applications
Rebinding the JCC packages, which are used by remote connections to DB2 10 for z/OS,
with the RELEASE(DEALLOCATE) and KEEPDYNAMIC(NO) options can save CPU by
eliminating the need for package allocation and deallocation. You can bind the packages
using the DB2Binder utility or through the client configuration assistant, or you can rebind
them at the DB2 10 for z/OS system.
Instead of using the default NULLID collection, you might consider different collection IDs for
packages bound with different options, such as RELEASE DEALLOCATE or COMMIT. You
might also consider defining different instances of the IBM Data Server Driver for JDBC and
SQLJ to be used at run time. For example, using the DB2Binder utility, you can bind the DB2
Data Server Driver packages at the DB2 for z/OS server using a -collection option with a
value such as RELDEAL for packages bound with the RELEASE(DEALLOCATE) option and
the -collection option with a value such as RELCOMM for packages bound with
RELEASE(COMMIT).
When using a collection ID other than the default NULLID, ensure that your data source
property jdbcCollection is set to the correct collection ID.
Both the older CLI-based JDBC driver and the IBM Data Server Driver for JDBC and SQLJ
type 2/4 connectivity use the same packages. They are also called DB2 CLI packages. You
can find more information in DB2 Version 9.5 for Linux, UNIX, and Windows Call Level
Interface Guide and Reference, Volume 2, SC23-5845.
Example 9-1 shows a REBIND job of the packages with the two options.
Example 9-1 REBIND job example
//WDRJCCRB JOB A,MSGLEVEL=(1,1),MSGCLASS=H,CLASS=A,
// USER=&SYSUID,NOTIFY=&SYSUID,REGION=0M,TIME=1440
/*JOBPARM S=S34
//USRLIB JCLLIB ORDER=WP1G.A10.PROCLIB
//JOBLIB DD DISP=SHR,DSN=WP1G0.S34.SDSNLOAD
//*
//JCCRBD EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(WP1G)
REBIND PACKAGE (RELDEAL.SYSLH100) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLH101) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLH102) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLH200) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLH201) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLH202) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLH300) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLH301) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLH302) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLH400) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLH401) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLH402) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLN100) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLN101) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLN102) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLN200) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLN201) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLN202) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLN300) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLN301) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLN302) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLN400) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLN401) RELEASE(DEALLOCATE)
REBIND PACKAGE (RELDEAL.SYSLN402) RELEASE(DEALLOCATE)
Under WebSphere Application Server, consider the following factors for local or Type 2
connections:
jdbcCollection does not apply to Type 2 Connectivity on DB2 for z/OS.
WebSphere on z/OS users should set the aged timeout connection pool option to 120
seconds.
You can find more information in DB2 9 for z/OS: Distributed Functions, SG24-6952, and DB2
9 for z/OS: Packages Revisited, SG24-7688.
See also:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.apdv.java.doc/doc/t0024156.html
9.6 Support for 64-bit ODBC driver
The DB2 9 implementation of the DB2 z/OS ODBC driver does not have 64-bit support. Many
applications that use products that support 64-bit, such as WebSphere Message Broker,
cannot take advantage of the memory relief because the ODBC driver does not support
64-bit.
DB2 10 enhances the ODBC driver to provide 64-bit support. For APIs that currently accept
pointer arguments, the pointer values are expanded to 64 bits, allowing for addressing up to
16 exabytes. For the most part, the change should not impact existing applications, except
that current ODBC applications that need to use 64-bit support must be recompiled with the
LP64 compiler option.
However, when using a 64-bit ODBC driver, keep in mind the following considerations:
You need to compile the application with the LP64 compiler option.
All APIs that currently accept pointer arguments or return pointer values are modified to
accommodate 64-bit values. Applications are required to pass only 64-bit pointer values
as input and output for all pointer arguments. Pointers that are returned as output values to
the applications are also expanded to 64 bit.
For APIs that have function arguments, such as SQLSetConnectAttr() and
SQLSetStmtAttr(), that are used to pass both pointers (8 bytes) and unsigned integer data
(4 bytes), the argument data must be contained in a 64-bit pointer variable. Depending on
the attributes specified, the driver either uses the argument data as an address or casts it
to an unsigned 32-bit integer. Because the integer values that are passed do not exceed
the maximum value for an unsigned 32-bit integer, no truncation occurs.
All ODBC symbolic data types and defined C types that are used for declaring variables
and arguments of integer type are assigned a base C type of int or unsigned int, which
ensures that a 4-byte integer value is preserved.
The following C type definitions are added to the ODBC standard:
SQLLEN
SQLULEN
These definitions are used specifically to declare integer function arguments in 64-bit
environments. The base C types for SQLLEN and SQLULEN can be compiled to either
32 bit or 64 bit, depending on the platform and the CLI/ODBC driver that the application is
using.
On the DB2 for z/OS platform, SQLLEN and SQLULEN are mapped as 32-bit integer type
to be consistent with the 32-bit and 64-bit CLI drivers. When running in 64-bit
environments, you need to change arguments that were previously defined with
SQLINTEGER and SQLUINTEGER to SQLLEN and SQLULEN, where appropriate, to
maximize the application's portability.
The SQLINTEGER and SQLUINTEGER type definitions are changed to have a base C type
of int and unsigned int respectively, so that applications can continue to use them to
declare 32-bit integer values. Declare 64-bit integer values using the SQLBIGINT type
definition, as is done today.
Because ints and longs are both 32 bit in 31-bit environments, the current driver uses
them indiscriminately while implicitly or explicitly assuming that they are interchangeable.
This use introduces a problem because some C defined types, such as SQLHENV,
SQLHDBC, and SQLHSTMT, are defined as type definitions of long. For these C defined
types, the base C type is changed to int to preserve them as 32-bit integers.
Currently, the ODBC driver uses the C wchar_t (wide character) data type to represent
UCS-2 encoded data. This data type works in 31-bit environments because the lengths of
wchar_t and UCS-2 data are both 2 bytes. Under LP64, wchar_t is defined as a typedef of
a 32-bit unsigned integer and, therefore, can no longer be used to represent UCS-2 data.
To accommodate this change, the driver is changed to map SQLWCHAR to unsigned
short instead of wchar_t and continues to handle the SQLWCHAR data type as a 2-byte
UCS-2 string.
9.7 DRDA support of Unicode encoding for system code pages
DB2 10 includes DRDA support of Unicode encoding (code page 1208) for system code
pages, such as DRDA command parameters and reply message parameters. This
enhancement can provide improved response time and less processor usage for remote CLI
and JDBC applications by removing the need for the drivers to convert DRDA instance
variables between EBCDIC and Unicode.
9.8 Return to client result sets
Prior to DB2 10, a stored procedure could only return result sets to the immediate caller. If the
stored procedure is in a chain of nested calls, the result sets must be materialized at each
intermediate nesting level, typically through a declared global temporary table (DGTT).
DB2 10 introduces 'Return to Client Result Set' support. With this enhancement, a result set
can be returned from a stored procedure at any nesting level directly to the client calling
application. No materialization through DGTTs is required. The new syntax on the DECLARE
CURSOR statement is:
WITH RETURN TO CLIENT
Using addressable storage: Although 64-bit mode provides larger addressable storage,
the amount of data that can be sent or retrieved to and from DB2 by an ODBC application
is limited by the amount of storage that is available below the 2 GB bar. For example, an
application cannot declare a 2 GB LOB above the bar and insert the entire LOB value into
a DB2 LOB column.
Result sets that are defined WITH RETURN TO CLIENT are not visible to any stored
procedures at the intermediate levels of nesting. They are only visible to the client that issued
the initial CALL statement. This feature is not supported for stored procedures called from
triggers or functions, either directly or indirectly.
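As a sketch, a stored procedure at any nesting level might return a result set directly to the client as follows. The procedure and table names are illustrative, not taken from this book.

```sql
-- Native SQL procedure; the cursor's result set bypasses any intermediate
-- callers and is returned straight to the client application
CREATE PROCEDURE INNER_PROC()
  DYNAMIC RESULT SETS 1
  LANGUAGE SQL
BEGIN
  DECLARE C1 CURSOR WITH RETURN TO CLIENT FOR
    SELECT EMPNO, LASTNAME
    FROM EMP;          -- hypothetical table
  OPEN C1;             -- leave the cursor open; the client fetches the rows
END
```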
9.9 DB2-supplied stored procedures
Several useful stored procedures that provide server-side database and system
administration functions are supplied by DB2 and installed using the DB2 installation jobs.
These stored procedures are often used by vendor products, but they also enable you to write
client applications that perform advanced DB2 and z/OS functions, such as utility execution,
application programming, performance management, and data administration functions.
The following jobs install and validate the installation of DB2-supplied routines. These jobs
are configured with the options that you specified on the DSNTIPR1 installation panel and the
DSNTIPRA through DSNTIPRP panels during the installation or migration procedure.
Job DSNTIJRT installs and configures all DB2-supplied routines.
Job DSNTIJRV validates the installation of the routines.
For more information about these installation jobs, see 12.13, Simplified installation and
configuration of DB2-supplied routines on page 523.
Depending on the function that is provided, you can find a description of the stored
procedures and information about how to install and execute them in the following
documentation:
Chapter 24. DB2-supplied stored procedures of DB2 9 for z/OS Stored Procedures:
Through the CALL and Beyond, SG24-7604
Chapter 7. Working with additional capabilities for DB2 of DB2 10 for z/OS Installation
and Migration Guide, GC19-2974
Appendix B. DB2-supplied stored procedures of DB2 10 for z/OS Utility Guide and
Reference, SC19-2984
Chapter 14. Calling a stored procedure from your application of DB2 10 for z/OS
Application Programming and SQL Guide, SC19-2969
Chapter 8. Installing the IBM Data Server Driver for JDBC and SQLJ of DB2 10 for z/OS
Application Programming Guide and Reference for Java, SC19-2970
Chapter 6. XML schema management with the XML schema repository (XSR) of DB2 10
for z/OS pureXML Guide, SC19-2981
Chapter 15. Scheduling administrative tasks and Appendix B. Stored procedures for
administration of DB2 10 for z/OS Administration Guide, SC19-2968
Chapter 11. Designing DB2 statistics for performance of DB2 10 for z/OS Managing
Performance, SC19-2978
9.9.1 Administrative task scheduler
The administrative task scheduler starts as a task on the z/OS system during DB2 startup or
initialization. The administrative task scheduler remains active unless it is explicitly stopped,
even when DB2 terminates.
Every DB2 subsystem has a coordinated administrative task scheduler address space that it
can start with a z/OS started task procedure. See Figure 9-10.
330 DB2 10 for z/OS Technical Overview
Figure 9-10 Administrative task scheduler
The administrative task scheduler has an SQL interface that currently consists of the following
DB2-supplied stored procedures:
SYSPROC.ADMIN_TASK_ADD
The ADMIN_TASK_ADD stored procedure adds a task to the task list of the administrative
task scheduler and runs in a WLM-established stored procedure address space using the
Resource Recovery Services attachment facility to connect to DB2. It takes the task
information as input and provides the task name and return code as output.
SYSPROC.ADMIN_TASK_REMOVE
The ADMIN_TASK_REMOVE stored procedure removes a task from the task list of the
administrative task scheduler. If the task is currently running, it continues to execute until
completion, and the task is not removed from the task list. If other tasks depend on the
execution of the task that is to be removed, the task is not removed from the task list of the
administrative task scheduler.
SYSPROC.ADMIN_TASK_CANCEL
The ADMIN_TASK_CANCEL stored procedure attempts to stop the execution of a task
that is currently running.
For a task that is running, this stored procedure cancels the DB2 thread or the JES job
that the task runs in, and issues a return code of 0 (zero). If the task is not running, the
stored procedure takes no action, and issues a return code of 12.
Chapter 9. Connectivity and administration routines 331
SYSPROC.ADMIN_TASK_UPDATE
The ADMIN_TASK_UPDATE stored procedure updates the schedule of a task that is in
the task list for the administrative task scheduler. If the task that you want to update is
running, the changes go into effect after the current execution finishes.
Generally MONITOR1 authority is required for the execution of the stored procedures. See
APAR PM02658 for the availability of these routines with DB2 9.
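As an illustration of how these stored procedures are invoked (a sketch: the task name MYTASK is illustrative, and the full parameter lists are documented in DB2 10 for z/OS Administration Guide, SC19-2968):

```sql
-- Remove a task from the scheduler task list; the last two
-- parameters receive the return code and the message text.
-- (Task name MYTASK is illustrative.)
CALL SYSPROC.ADMIN_TASK_REMOVE('MYTASK', ?, ?);

-- Attempt to stop the current execution of a task.
-- A return code of 12 indicates that the task was not running.
CALL SYSPROC.ADMIN_TASK_CANCEL('MYTASK', ?, ?);
```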
The stored procedures currently make use of the following user-defined table functions to
access the contents of the SYSIBM.ADMIN_TASKS and SYSIBM.ADMIN_TASKS_HIST
scheduler tables:
DSNADM.ADMIN_TASK_LIST
The ADMIN_TASK_LIST() function returns a table with one row for each of the tasks that
are defined in the administrative scheduler task list.
DSNADM.ADMIN_TASK_OUTPUT
The ADMIN_TASK_OUTPUT(task-name, num-invocations) function returns the output
parameter values and the result set values of the n-th execution of the specified stored
procedure task, where n is the value of the input parameter num-invocations. This function
returns a table containing one row for each output parameter value or result set value.
DSNADM.ADMIN_TASK_STATUS()
The ADMIN_TASK_STATUS function returns a table with one row for each task that is
defined in the administrative scheduler task list. Each row indicates the status of the task
for the last time it was run. You can review the last execution status of a task and identify
any messages or return codes that were passed back to the administrative task scheduler.
The task status is overwritten as soon as the next execution of the task starts.
DSNADM.ADMIN_TASK_STATUS(max-history)
The ADMIN_TASK_STATUS(max-history) function accesses the history of the execution
statuses and returns the contents. max-history is the maximum number of execution
statuses that is returned per task. If max-history is NULL, all available execution statuses
are returned. If max-history is 1, only the latest execution status is returned for each task
(the same as DSNADM.ADMIN_TASK_STATUS()).
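For example, the contents of the scheduler task list and the last execution statuses can be retrieved with simple queries against these table functions (a sketch; the authority to invoke the functions is required):

```sql
-- List all tasks defined in the administrative scheduler task list
SELECT * FROM TABLE (DSNADM.ADMIN_TASK_LIST()) AS T;

-- Show the last execution status of each task
SELECT * FROM TABLE (DSNADM.ADMIN_TASK_STATUS()) AS S;
```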
These routines and the task scheduler setup are described in DB2 10 for z/OS Administration
Guide, SC19-2968.
9.9.2 Administration enablement
The following stored procedures were a late addition to DB2 9:
SYSPROC.ADMIN_INFO_SMS
ADMIN_INFO_SMS provides a DB2 interface to SMS for storage space alerts. The
ADMIN_INFO_SMS stored procedure returns space information about copy pools and
their storage groups and volumes.
SYSPROC.ADMIN_INFO_SYSPARM
The ADMIN_INFO_SYSPARM stored procedure returns the system parameters,
application defaults module, and IRLM parameters of a connected DB2 subsystem, or
member of its data sharing group.
SYSPROC.ADMIN_INFO_SYSLOG
ADMIN_INFO_SYSLOG is a new stored procedure (provided by APAR PM22091) that
returns the system log entries. Filters, such as search string, system name, begin and end
date, begin and end time, and maximum entries, can be supplied to limit the system log
entries that are returned.
SYSPROC.ADMIN_INFO_SQL
The ADMIN_INFO_SQL procedure collects DDL, statistics, and column statistics, providing
information that helps in diagnosing query performance problems. This procedure is meant to
be used by DB2-provided programs and tools. The procedure is invoked by the DSNADMSB
program, which has functions equivalent to PLI8 for diagnosing DB2 optimization problems.
ADMIN_INFO_SQL is included in DB2 10 and retrofitted into DB2 9 with APAR PM11941.
Two DB2 10 APARs apply: PM25635 and PM24808.
ADMIN_INFO_SQL has several input and output parameters. In addition, a result set is
returned containing several scripts with all the data that was collected, such as DDL,
statistics, and other data. The result set can be returned to the caller or written to data sets
on the host.
Before executing DSNADMSB, complete the following tasks:
a. Create SYSPROC.ADMIN_INFO_SQL in DB2 and bind it prior to running this job. Use
job DSNTIJRT to create and bind DB2-supplied stored procedures.
b. Bind the package and plan for DSNADMSB prior to running this job. Use job
DSNTIJSG to bind the package and plan for DSNADMSB.
c. Decide which output vehicle to use: data sets or the job stream. Because the data can
get quite large, it is important to have space available to hold the results. The size of
the output is difficult to predict because tables can have many dependencies. Average
space is usually 2-3 MB, but some of the larger workloads can produce up to 20 MB.
Make sure that the column stats switch is set to N unless it is required, because this
setting can consume a lot of space.
d. Copy the sample JCL (member DSNTEJ6I in DB2HLQ.SDSNSAMP library). Follow the
directions suggested in the header notes by modifying the libraries and input
parameters.
For details, refer to the white paper Capture service information about DB2 for z/OS with
SYSPROC.ADMIN_INFO_SQL, which is available from:
http://www.ibm.com/developerworks/data/library/techarticle/dm-1012capturequery/
index.html?cmp=dw&cpb=dwinf&ct=dwnew&cr=dwnen&ccy=zz&csr=121610
9.9.3 DB2 statistics routines
DB2 10 delivers the following stored procedures that are used to monitor and execute
RUNSTATS autonomically:
SYSPROC.ADMIN_UTL_MONITOR
SYSPROC.ADMIN_UTL_MONITOR is a stored procedure that provides functions that
enable analysis of database statistics. These functions include alerts for out-of-date,
missing, or conflicting statistics, summary reports, and detailed table-level reports that
describe generated RUNSTATS statements.
SYSPROC.ADMIN_UTL_EXECUTE
ADMIN_UTL_EXECUTE is a stored procedure that solves alerts stored in the
SYSIBM.SYSAUTOALERTS catalog table within the maintenance windows that are
defined by the SYSIBM.SYSAUTOTIMEWINDOWS catalog table.
SYSPROC.ADMIN_UTL_MODIFY
ADMIN_UTL_MODIFY is a stored procedure that maintains the
SYSIBM.SYSAUTORUNS_HIST and SYSIBM.SYSAUTOALERTS catalog tables.
The ADMIN_UTL_MODIFY stored procedure removes all entries in the
SYSIBM.SYSAUTORUNS_HIST table that are older than a configurable threshold and
removes all entries in the SYSIBM.SYSAUTOALERTS table that are older than the
configured threshold and are in COMPLETE state.
When configured and scheduled in the Administrative Task scheduler, the
ADMIN_UTL_MONITOR stored procedure monitors statistics on the given tables and writes
alerts when inadequate statistics are identified. The ADMIN_UTL_EXECUTE stored
procedure then invokes RUNSTATS USE PROFILE within defined maintenance windows to
resolve the problem. The ADMIN_UTL_MODIFY stored procedure can be scheduled
periodically to clean up the log file and alert history of both of the other stored procedures.
These routines and their setup are described in DB2 10 for z/OS Managing Performance,
SC19-2978.
To use these routines, ensure that the following stored procedures are configured (job
DSNTIJRT) and that your authorization ID has CALL privileges for each:
ADMIN_COMMAND_DB2
ADMIN_INFO_SSID
ADMIN_TASK_ADD
ADMIN_TASK_UPDATE
ADMIN_UTL_EXECUTE
ADMIN_UTL_MODIFY
ADMIN_UTL_MONITOR
ADMIN_UTL_SCHEDULE
ADMIN_UTL_SORT
DSNUTILU
DSNWZP
Ensure that the following user-defined functions are configured (job DSNTIJRT) and that
your authorization ID has CALL privileges for each:
ADMIN_TASK_LIST()
ADMIN_TASK_STATUS()
Ensure that your authorization ID has privileges to select and modify data in the following catalog tables:
SYSIBM.SYSAUTOALERTS
SYSIBM.SYSAUTORUNS_HIST
SYSIBM.SYSAUTOTIMEWINDOWS
SYSIBM.SYSTABLES_PROFILES
Ensure that your authorization ID has privileges to read data in the following catalog tables:
SYSIBM.SYSTABLESPACESTATS
SYSIBM.SYSTABLESPACE
SYSIBM.SYSDATABASE
SYSIBM.SYSTABLES
SYSIBM.SYSINDEXES
SYSIBM.SYSKEYS
SYSIBM.SYSCOLUMNS
SYSIBM.SYSCOLDIST
SYSIBM.SYSDUMMY1
SYSIBM.UTILITY_OBJECTS
Copyright IBM Corp. 2010. All rights reserved. 335
Part 3 Operation and
performance
Technical innovations in operational compliance (both regulatory and governance) help teams
work more efficiently, within guidelines, and with enhanced auditing capabilities.
DB2 Utilities Suite for z/OS Version 10 (program number 5655-V41) delivers full support for
the significant enhancements in DB2 10 for z/OS and delivers integration with the storage
systems functions, such as FlashCopy, exploitation of new sort options (DB2 Sort), and
backup and restore. The key DB2 10 performance improvements are an overall reduction in
CPU time for many types of workloads, deep synergy with System z hardware and z/OS
software, improved performance and scalability for inserts and LOBs, and improved SQL
optimization.
DB2 10 also improves I/O performance. In some cases, DB2 10 saves disk space, which
improves performance. DB2 10 also reduces the elapsed time for many workloads by
improving parallelism, by reducing latch contention for many types of operations, and by
eliminating the UTSERIAL lock.
This version also resolves the issue of virtual storage used below the 2 GB bar and allows
more concurrent threads.
Installation and migration use the established conversion mode and new-function mode
statuses. A major addition is the ability to migrate to DB2 10 from either DB2 9 or DB2 V8.
We point out the key steps and the situations where skip-level migration might be
convenient.
This part contains the following chapters:
Chapter 10, Security on page 337
Chapter 11, Utilities on page 425
Chapter 12, Installation and migration on page 471
Chapter 13, Performance on page 533
Chapter 10. Security
For regulatory compliance reasons (for example, Basel II, Sarbanes-Oxley, EU Data
Protection Directive), and other reasons such as accountability, auditability, and increased
privacy and security requirements, many organizations focus on security functions when
designing their IT systems. DB2 10 for z/OS provides a set of options that improve and
further secure access to data held in DB2 for z/OS to address these challenges.
In this chapter, we highlight the following security topics:
Reducing the need for SYSADM authorities
Separating the duties of database administrators from security administrators
Protecting sensitive business data against security threats from insiders, such as
database administrators, application programmers, and performance analysts
Further protecting sensitive business data against security threats from powerful insiders
such as SYSADM by using row-level and column-level access controls
Simplifying security administration by preserving dependent privileges during SQL
REVOKE
Using the RACF profiles to manage the administrative authorities
Auditing access to business sensitive data through policy-based SQL auditing for tables
without having to alter the table definition
Auditing the efficiency of existing security policies using policy-based auditing capabilities
Benefitting from security features introduced recently by z/OS Security Server, including
support for RACF password phrases (z/OS V1R10) and z/OS identity propagation (z/OS
V1R11)
This chapter includes the following sections:
Policy-based audit capability
More granular system authorities and privileges
System-defined routines
The REVOKE dependent privilege clause
Support for row and column access control
Support for z/OS security features
10.1 Policy-based audit capability
DB2 provides a variety of authentication and access control mechanisms to establish rules
and controls. However, to protect against and to discover and eliminate unknown or
unacceptable behaviors, you need to monitor data access. DB2 10 assists you in this task by
providing a powerful and flexible audit capability that is based on audit policies and
categories, helping you to monitor application and individual user access, including
administrative authorities. When used together with the audit filtering options introduced in
DB2 9 for z/OS, policy-based auditing can provide granular audit reporting. For example, you
can activate an audit policy to audit an authorization ID, a role, and DB2 client information.
The auditing capability is available in DB2 10 new-function mode (NFM).
10.1.1 Audit policies
An audit policy provides a set of criteria that determines the categories that are to be audited.
Each category determines the events that are to be audited. You can define multiple audit
policies based on your audit needs.
Each policy has a unique name assigned, which you use when you complete the following
tasks:
Create an audit policy by inserting a row into the SYSIBM.SYSAUDITPOLICIES table
Enable an audit policy by issuing a START trace command
Display the status of an activated audit policy by issuing a DISPLAY TRACE command
Disable an audit policy by issuing a STOP TRACE command
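The steps above can be sketched as follows (the policy name AUDTEST and the SYSADMIN category setting are illustrative; the individual steps are shown in detail in the examples and figures later in this section):

```sql
-- 1. Create the policy by inserting a row into the catalog table
INSERT INTO SYSIBM.SYSAUDITPOLICIES
       (AUDITPOLICYNAME, SYSADMIN)
VALUES ('AUDTEST', '*');

-- 2. Enable it:   -START TRACE(AUDIT) AUDTPLCY(AUDTEST)
-- 3. Display it:  -DISPLAY TRACE(AUDIT) AUDTPLCY(AUDTEST)
-- 4. Disable it:  -STOP TRACE(AUDIT) AUDTPLCY(AUDTEST)
```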
Audit categories
There are several security events that must be audited to comply with laws and regulations
and to monitor and enforce the audit policy. For example, auditing is usually performed for the
following events:
Changes in authorization IDs
Unauthorized access attempts
Altering, dropping, and creating database objects
Granting and revoking authorities and privileges
Reading and changing user data
Running DB2 utilities and commands
Activities performed under system administrative and database administrative authorities
DB2 10 groups these events into audit categories. Audit categories were introduced in DB2
for LUW. The implementation in DB2 10 provides consistency across IBM DB2 server
platforms.
Additional information: For information about how to collect DB2 traces in SMF or GTF,
refer to DB2 10 for z/OS Managing Performance, SC19-2978.
In DB2 10, you can use the audit categories shown in Table 10-1 to define audit policies.
Table 10-1 Audit categories
Audit category Description
CHECKING Denied access attempts due to inadequate DB2 authorization (IFCID 140) and RACF authentication
failures (IFCID 83)
CONTEXT Utility start, object or phase change, or utility end (IFCID 23, 24, 25)
DBADMIN Generates IFCID 361 trace records when an administrative authority, DBMAINT,
DBCTRL, DBADM, PACKADM, SQLADM, System DBADM, DATAACCESS, ACCESSCTRL, or
SECADM satisfies the privilege that is required to perform an operation.
Database filtering can be performed through the DBNAME attribute for DBADM, DBCTRL, and
DBMAINT authorities. All databases are audited if no database name is specified in the DBNAME
Policy column.
Collection ID filtering can be performed through the COLLID attribute for the PACKADM authority.
All collection IDs are audited if no collection ID is specified in the COLLID column.
If the DB2/RACF Access Control Authorization Exit is active, only the operations that are performed
by the SECADM authority are audited.
EXECUTE A trace record is generated for every unique table change or table read, including the SQL
statement ID.
The SQL statement ID is used in the IFCID 143 and IFCID 144 trace records to record any changes
or reads by the SQL statement identified in the IFCID 145 trace records.
One audit policy can be used to audit several tables of a schema by specifying the table names with
the SQL LIKE predicate (for example, TBL%).
Only tables residing in universal table spaces (UTS), classic partitioned table spaces, and segmented
table spaces can be audited.
Table aliases cannot be audited. An audit policy can audit clone tables and tables that are implicitly
created for LOB or XML columns.
A trace record is generated for all unique table accesses in a unit of work. The qualifying tables are
identified in policy columns OBJECTSCHEMA, OBJECTNAME, and OBJECTTYPE. If the audit
policy is started while an SQL query is running, access to the table is audited for the next SQL
query that accesses the table.
OBJMAINT Table altered or dropped. The audit policy specifies the tables to be audited. One audit policy can be
used to audit several tables of a schema by specifying the table names with the SQL LIKE predicate
(for example, TBL%).
You can audit only tables defined in UTS, traditional partitioned table spaces, and segmented table
spaces.
An audit policy can also audit clone tables and tables that are implicitly created for XML columns.
The object type to audit is specified in the OBJECTTYPE column. The default OBJECTTYPE column
value of blank means that all supported object types are audited.
SECMAINT Grant and revoke privileges or administrative authorities (IFCID 141) and create and alter trusted
contexts (IFCID 270) and roles.
SECMAINT category also includes IFCID 271 (create, alter, drop row permissions and column
masks).
Creating an audit policy in SYSIBM.SYSAUDITPOLICIES
You create an audit policy by inserting a row into the SYSIBM.SYSAUDITPOLICIES catalog
table. Each row in SYSIBM.SYSAUDITPOLICIES represents an audit policy. An audit policy
can be identified uniquely by its policy name.
The policy name is stored in the AUDITPOLICYNAME table column. This column is also the
unique index key. SYSIBM.SYSAUDITPOLICIES provides one column for each audit
category.
An audit category is selected for auditing by populating its corresponding category policy
column. The columns of the SYSIBM.SYSAUDITPOLICIES table and its valid category
column values are listed in Table 10-2.
Table 10-1 Audit categories (continued)
SYSADMIN Generates IFCID 361 trace records when an administrative authority, installation
SYSADM, installation SYSOPR, SYSCTRL, or SYSADM satisfies the privilege that is required to
perform an operation.
If the DB2/RACF Access Control Authorization Exit is active, only the operations that are performed
by the installation SYSADM and installation SYSOPR authorities are audited.
VALIDATE New or changed authorization IDs (IFCID 55, 83, 87, 169, 269, 312), and establishment of
a new trusted connection or user switching within a trusted connection
Additional information: For details about DB2 10 audit categories, refer to DB2 10 for
z/OS Administration Guide, SC19-2968.
Table 10-2 The SYSIBM.SYSAUDITPOLICIES catalog table
Column name Data type Description
ALTERDTS TIMESTAMP NOT NULL WITH
DEFAULT
Alter timestamp
AUDITPOLICYNAME VARCHAR(128) NOT NULL Policy name
CHECKING CHAR(1) NOT NULL WITH DEFAULT A: Audit all failures
blank: None
IFCID 140, 83
COLLID VARCHAR(128) NOT NULL WITH
DEFAULT
Name of the package collection ID.
If no collection ID is specified, all collection IDs are
audited.
Applies only to the PACKADM authority in category
DBADMIN
CONTEXT CHAR(1) NOT NULL WITH DEFAULT A: Audit all utilities
blank: None
IFCID 23, 24, 25
CREATEDTS TIMESTAMP NOT NULL WITH
DEFAULT
Insert timestamp
DB2START CHAR(1) NOT NULL WITH DEFAULT Automatic policy start at DB2 startup, except for
DB2 restart light. Up to 8 policies can be started at
DB2 startup.
Y: Automatic start
N: No automatic start
DBADMIN VARCHAR(128) NOT NULL WITH
DEFAULT
Database administrative tasks.
*: Audit all,
B: System DBADM
C: DBCTRL
D: DBADM
E: SECADM
G: ACCESSCTRL
K: SQLADM
M: DBMAINT
P: PACKADM
T: DATAACCESS
zero length: Audit none
Can be a concatenated string of multiple
parameters, IFCID 361
DBNAME VARCHAR(24) NOT NULL WITH
DEFAULT
Database name.
If no database is specified, all databases are audited.
Applies only to the DBADM, DBCTRL, and
DBMAINT authorities in column DBADMIN
EXECUTE CHAR(1) NOT NULL WITH DEFAULT Tables qualifying through OBJECTSCHEMA,
OBJECTNAME, and OBJECTTYPE are audited.
A: Audit any SQL activity of that application
process
C: Audit SQL insert, update, or delete of that
application process
blank: None
Traces bind time information about SQL
statements qualified by OBJECTSCHEMA,
OBJECTNAME, OBJECTTYPE
IFCID 143, 144, 145
OBJECTNAME VARCHAR(128) NOT NULL WITH
DEFAULT
Object name.
Applies only to categories OBJMAINT, EXECUTE.
Can be specified in SQL LIKE notation
OBJECTSCHEMA VARCHAR(128) NOT NULL WITH
DEFAULT
Schema name.
Applies only to categories OBJMAINT, EXECUTE
OBJECTTYPE CHAR(1) NOT NULL WITH DEFAULT C: Clone Table
P: Implicit table for XML column
T: Table
Blank: All
Applies only to OBJMAINT and EXECUTE
categories
OBJMAINT CHAR(1) NOT NULL WITH DEFAULT A: Audit alter or drop activities when a table
qualifies through OBJECTSCHEMA,
OBJECTNAME, and OBJECTTYPE
blank: None
IFCID 142
SECMAINT CHAR(1) NOT NULL WITH DEFAULT A: Audit all
blank: None
IFCID 141 grant, revoke
IFCID 270 trusted context created or altered
Table 10-2 The SYSIBM.SYSAUDITPOLICIES catalog table (continued)
SYSADMIN VARCHAR(128) NOT NULL WITH
DEFAULT
System administrative tasks:
*: Audit all
I: Install SYSADM
L: SYSCTRL
O: SYSOPR
R: Install SYSOPR
S: SYSADM
zero length: Audit none
Can be a concatenated string of multiple
parameters, IFCID 361
VALIDATE CHAR(1) NOT NULL WITH DEFAULT A: Audit all
blank: None
Assignment or change of authorization ID:
IFCID 55, 83, 87, 169, 312
Trusted context establishment or user switch:
IFCID 269
Additional information: For details about all catalog table columns of
SYSIBM.SYSAUDITPOLICIES, refer to DB2 10 for z/OS SQL Reference, SC19-2983.
Added and changed instrumentation facility component identifiers
Table 10-3 lists the instrumentation facility component identifiers (IFCIDs) that are added or
changed to support DB2 audit policies and the administrative authorities. For more
information about these IFCIDs, refer to Appendix A, Information about IFCID changes
on page 615.
Table 10-3 New and changed IFCIDs
IFCID Description
106 System parameters in effect
Changed to account for the following DSNZPARMs:
SECADM1
SECADM2
SECADM1_TYPE
SECADM2_TYPE
SEPARATE_SECURITY
REVOKE_DEP_PRIVILEGES
140 Authorization failures
Changed to include constants for new authorities
141 Explicit grants and revokes
Changed to include constants for new authorities
143 Changed to include unique statement identifier
144 Changed to include unique statement identifier
145 Changed to trace the entire SQL statement and to include the unique statement identifier
361 Added to support auditing of administrative authorities
362 Added for auditing the start audit trace command
This trace is started automatically when a start trace command with the AUDTPLCY keyword is issued.
IFCID 362 documents whether an audit policy was started successfully.
Policy name considerations
The SYSIBM.SYSAUDITPOLICIES catalog table has a unique key constraint defined on the
AUDITPOLICYNAME column. If you try to insert a policy row using an AUDITPOLICYNAME
value that already exists in the table, you receive SQLCODE -803 (duplicate key) as you
would for any other user table. Thus, plan your naming standards for audit policy names
carefully.
Create audit policy samples
We created three audit policies for use and reference throughout this chapter. Example 10-1
and Example 10-2 show the SQL insert statements that we used for policy creation.
Audit all access to the DB2R5.EMP and DB2R5.DEPT tables
Example 10-1 inserts two policy rows into SYSIBM.SYSAUDITPOLICIES. The rows provide
the following attribute settings:
AUDITPOLICYNAME AUDEMP and AUDDEPT
OBJECTSCHEMA DB2R5
OBJECTNAME EMP and DEPT
OBJECTTYPE T to indicate table type
EXECUTE A to audit all access
Example 10-1 Audit all SQL for multiple tables
INSERT INTO SYSIBM.SYSAUDITPOLICIES
(AUDITPOLICYNAME, OBJECTSCHEMA, OBJECTNAME, OBJECTTYPE, EXECUTE)
VALUES('AUDEMP','DB2R5','EMP','T','A');
INSERT INTO SYSIBM.SYSAUDITPOLICIES
(AUDITPOLICYNAME, OBJECTSCHEMA, OBJECTNAME, OBJECTTYPE, EXECUTE)
VALUES('AUDDEPT','DB2R5','DEPT','T','A');
Audit use of SYSADM authorities
Example 10-2 inserts the AUDSYSADMIN audit policy of the SYSADMIN category into the
SYSIBM.SYSAUDITPOLICIES table. The table row has the following column settings:
AUDITPOLICYNAME AUDSYSADMIN
SYSADMIN S to audit SYSADMIN
Example 10-2 Audit policy for category SYSADMIN
INSERT INTO SYSIBM.SYSAUDITPOLICIES
(AUDITPOLICYNAME, SYSADMIN) VALUES('AUDSYSADMIN','S');
Audit policy to start automatically during DB2 startup
Example 10-3 updates the AUDSYSADMIN audit policy in the SYSIBM.SYSAUDITPOLICIES
table to mark the policy for automatic start during DB2 startup.
Example 10-3 Mark audit policy for automatic start during DB2 startup
UPDATE SYSIBM.SYSAUDITPOLICIES
SET DB2START = 'Y'
WHERE AUDITPOLICYNAME = 'AUDSYSADMIN';
Additional information: For a complete list of DB2 10 IFCIDs, refer to DB2 10 for z/OS
What's New, GC19-2985.
Secure Audit trace
APARs PM28296, PM26977, PM28543, PM30394, and PM32217 add support for secure
audit policy trace and removal of some system authorities for DBADM.
A new value, 'S', is added to the SYSIBM.SYSAUDITPOLICIES DB2START column. If 'S' is
specified, the audit policy is started automatically at DB2 startup and can be stopped only
by a user with SECADM authority, or it is stopped at DB2 stop.
If multiple audit policies are specified to start at DB2 startup, some with DB2START = 'Y'
and some with DB2START = 'S', two traces are started: one for the audit policies with
DB2START = 'Y' and another for the audit policies with DB2START = 'S'. To stop the audit
policies started at DB2 startup, all the audit policies that are assigned the same trace
number must be stopped simultaneously. The policies can then be started individually, as
needed.
After adding the entry in SYSAUDITPOLICIES with DB2START = 'S', start the trace to get
this behavior.
Any user with the necessary privilege can start a trace specified with DB2START = 'S'; the
SECADM restriction applies only to the stop trace command.
If the audit policy has already been started, then after you update the DB2START column,
the audit policy must be stopped and restarted to get the secure behavior.
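Following the pattern of Example 10-3, a policy can be marked as secured by setting DB2START to 'S' (the policy name AUDSYSADMIN comes from the earlier examples):

```sql
-- Mark the policy so that it starts automatically at DB2 startup
-- and can be stopped only by a user with SECADM authority
UPDATE SYSIBM.SYSAUDITPOLICIES
   SET DB2START = 'S'
 WHERE AUDITPOLICYNAME = 'AUDSYSADMIN';
```

If the policy is already started, stop and restart it for the secure behavior to take effect.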
Starting and stopping DB2 audit policies
Policy-based auditing is operated using start, stop, and display audit trace commands. In
DB2 10, these commands are enhanced to support an AUDTPLCY parameter (see
Figure 10-1). Using the policy name to manage audit traces provides additional simplification,
because DB2 chooses the IFCID information that is appropriate for the selected audit policy.
Figure 10-1 Start audit trace parameter AUDTPLCY
The AUDTPLCY parameter can take multiple policy names. We recommend providing as
many audit policies as possible in the AUDTPLCY keyword, because DB2 starts only one
trace for all policies that are specified, which helps to avoid the system limit of 32 started
traces. However, you cannot stop an individual audit policy if that policy was started with other
policies within the AUDTPLCY keyword. The AUDTPLCY keyword is mutually exclusive with
the IFCID and CLASS keywords.
Important: Only the install SYSADM and the SECADM authority have the privilege to
insert, update, or delete rows in SYSIBM.SYSAUDITPOLICIES and only SECADM can
grant these privileges to others.
If DSNZPARM SEPARATE_SECURITY is set to NO, the SYSADM authority implicitly holds
the SECADM system privilege and can, therefore, perform any work that requires
SECADM authority.
For simplicity, when we discuss the privilege of being able to modify
SYSIBM.SYSAUDITPOLICIES, we refer only to the SECADM authority, meaning SECADM,
install SYSADM, or SYSADM with DSNZPARM SEPARATE_SECURITY set to NO.
>--+-------------------------------------+---------------------><
| .-,---------------. |
| (1) V | |
'-AUDTPLCY----(-----policy-name---+-)-'
In the scenario illustrated in Figure 10-2, we started the two audit policies that we defined in
Example 10-1 on page 343 within one single start trace command. We then tried to stop just
the AUDEMP audit policy, and the command failed with DB2 message DSNW137I.
Figure 10-2 Use of AUDTPLCY in start, stop trace commands
Start, display, and stop audit trace samples
We ran the following commands to start, display, and stop the audit policies:
Start the AUDSYSADMIN audit policy successfully for the DB0BSYSADM role after the
policy is defined (see Figure 10-3). The audit trace records are written to destination GTF,
which requires the GTF trace for DB2 to be started.
Figure 10-3 Start AUDSYSADMIN audit policy
Start multiple audit policies by specifying multiple audit policies in the start trace
AUDTPLCY keyword (see Figure 10-4).
Figure 10-4 Start multiple audit trace policies with one start trace command
Display status of the audit policy that was activated in Figure 10-3. The DISPLAY
command was issued with detail level 1 and 2 to provide all detail information (see
Figure 10-5). The DISPLAY TRACE output shows the name of the audit policy that we
started and the role name for which the trace data is filtered.
-START TRACE(AUDIT) AUDTPLCY(AUDEMP,AUDDEPT)
....
-STOP TRACE(AUDIT) AUDTPLCY(AUDEMP)
DSNW137I -DB0B SPECIFIED TRACE NOT ACTIVE
DSN9022I -DB0B DSNWVCM1 '-STOP TRACE' NORMAL COMPLETION
-STOP TRACE(AUDIT) AUDTPLCY(AUDEMP,AUDDEPT)
DSNW131I -DB0B STOP TRACE SUCCESSFUL FOR TRACE NUMBER(S) 03
DSN9022I -DB0B DSNWVCM1 '-STOP TRACE' NORMAL COMPLETION
-START TRACE(AUDIT) AUDTPLCY(AUDDEPT) DEST(GTF)
DSNW130I -DB0B AUDIT TRACE STARTED, ASSIGNED TRACE NUMBER 03
DSNW192I -DB0B AUDIT POLICY SUMMARY
AUDIT POLICY AUDDEPT STARTED
END AUDIT POLICY SUMMARY
DSN9022I -DB0B DSNWVCM1 '-START TRACE' NORMAL COMPLETION
-START TRACE(AUDIT) AUDTPLCY(AUDSYSADMIN) DEST(GTF) ROLE(DB0BSYSADM)
DSNW130I -DB0B AUDIT TRACE STARTED, ASSIGNED TRACE NUMBER 04
DSNW192I -DB0B AUDIT POLICY SUMMARY
AUDIT POLICY AUDSYSADMIN STARTED
END AUDIT POLICY SUMMARY
DSN9022I -DB0B DSNWVCM1 '-START TRACE' NORMAL COMPLETION
-START TRACE(AUDIT) AUDTPLCY(AUDEMP,AUDDEPT) DEST(GTF)
DSNW130I -DB0B AUDIT TRACE STARTED, ASSIGNED TRACE NUMBER 04
DSNW192I -DB0B AUDIT POLICY SUMMARY
AUDIT POLICY AUDEMP STARTED
AUDIT POLICY AUDDEPT STARTED
END AUDIT POLICY SUMMARY
DSN9022I -DB0B DSNWVCM1 '-START TRACE' NORMAL COMPLETION
Figure 10-5 Display audit policy AUDSYSADM
Stop the audit trace activated in Figure 10-3 on page 345. The STOP AUDIT TRACE
command specifically stops the AUDSYSADMIN audit policy (see Figure 10-6).
Figure 10-6 Stop audit policy AUDSYSADMIN
As shown in Figure 10-5 on page 346 and in Figure 10-6, using the AUDTPLCY parameter is
sufficient to display and stop the DB2 audit policies. You do not need to provide the trace
number that DB2 assigned during START TRACE.
Error situations
DB2 10 introduces reason codes and messages that are related to audit policy processing.
These codes and messages can assist in problem assessment and determination. When we
tested our audit policy scenarios, we experienced the following common error situations:
Start the AUDSYSADMINFAIL audit policy. Because the AUDSYSADMINFAIL audit policy
contains an invalid parameter setting, DB2 returns error reason code 00E70022, as shown
in Figure 10-7.
Figure 10-7 Start AUDTPLCY reason code 00E70022
Start the AUDSYSADMIN audit policy before it is created in Example 10-2 on page 343.
Because the AUDSYSADMIN audit policy is not yet defined, the START command fails with
reason code 00E70021, as shown in Figure 10-8.
-DISPLAY TRACE(AUDIT) AUDTPLCY(AUDSYSADMIN) DETAIL(1,2)
DSNW127I -DB0B CURRENT TRACE ACTIVITY IS -
TNO TYPE CLASS DEST QUAL IFCID
04 AUDIT * GTF YES
*********END OF DISPLAY TRACE SUMMARY DATA*********
DSNW143I -DB0B CURRENT TRACE QUALIFICATIONS ARE -
DSNW152I -DB0B BEGIN TNO 04 QUALIFICATIONS:
ROLE: DB0BSYSADM
END TNO 04 QUALIFICATIONS
DSNW185I -DB0B BEGIN TNO 04 AUDIT POLICIES:
ACTIVE AUDIT POLICY: AUDSYSADMIN
END TNO 04 AUDIT POLICIES
DSNW148I -DB0B ******END OF DISPLAY TRACE QUALIFICATION DATA******
DSN9022I -DB0B DSNWVCM1 '-DISPLAY TRACE' NORMAL COMPLETION
-STOP TRACE(AUDIT) AUDTPLCY(AUDSYSADMIN )
DSNW131I -DB0B STOP TRACE SUCCESSFUL FOR TRACE NUMBER(S) 03
DSN9022I -DB0B DSNWVCM1 '-STOP TRACE' NORMAL COMPLETION
-START TRACE(AUDIT) AUDTPLCY(AUDSYSADMINFAIL) DEST(GTF)
DSNW192I -DB0B AUDIT POLICY SUMMARY
AUDIT POLICY AUDSYSADMINFAIL NOT STARTED, REASON 00E70022
END AUDIT POLICY SUMMARY
DSN9023I -DB0B DSNWVCM1 '-START TRACE' ABNORMAL COMPLETION
Chapter 10. Security 347
Figure 10-8 Start AUDTPLCY reason code 00E70021
Start the AUDPHONE and AUDSYSADMIN audit policies. The AUDPHONE audit policy
references the PHONENO table. The START command fails with reason code 00E70024,
because the PHONENO table that is referenced in the audit policy does not exist. DB2 does
not verify the existence of a table when you insert an audit policy into
SYSIBM.SYSAUDITPOLICIES. See Figure 10-9.
Figure 10-9 Start AUDTPLCY reason code 00E70024
Auditing start audit trace commands
DB2 10 externalizes an IFCID 362 trace record each time an audit policy starts or fails to
start. If a start audit policy command fails, DB2 issues the DSNW196I warning message. For
example, a START TRACE AUDTPLCY command can fail when the SYSIBM.SYSAUDITPOLICIES
table row contains an invalid value in any of the category columns or when you try to start a
policy that has not yet been created (as shown in Figure 10-7 on page 346 and Figure 10-8
on page 347).
Figure 10-10 shows a sample report of an IFCID 362 trace record formatted by IBM Tivoli
OMEGAMON XE for DB2 Performance Expert V5R1 (OMEGAMON PE).
Figure 10-10 OMEGAMON PE IFCID 362 RECTRACE report
-START TRACE(AUDIT) AUDTPLCY(AUDSYSADMIN ) DEST(GTF)
DSNW192I -DB0B AUDIT POLICY SUMMARY
AUDIT POLICY AUDSYSADMIN NOT STARTED, REASON 00E70021
END AUDIT POLICY SUMMARY
DSN9023I -DB0B DSNWVCM1 '-START TRACE' ABNORMAL COMPLETION
-START TRACE(AUDIT) AUDTPLCY(AUDPHONE,AUDSYSADMIN) DEST(SMF)
DSNW130I -DB0B AUDIT TRACE STARTED, ASSIGNED TRACE NUMBER 03
DSNW192I -DB0B AUDIT POLICY SUMMARY
AUDIT POLICY AUDSYSADMIN STARTED
AUDIT POLICY AUDPHONE NOT STARTED, REASON 00E70024
END AUDIT POLICY SUMMARY
DSN9022I -DB0B DSNWVCM1 '-START TRACE' NORMAL COMPLETION
SYSOPR DB0B C66CDC2F49AA 'BLANK' 'BLANK' 'BLANK'
SYSOPR 022.AUDT 'BLANK' 00:14:05.66735526 48 1 362 START/STOP TRC NETWORKID:
'BLANK' BL01 N/P AUDITPOLICY
STATUS : SUCCESS TYPE : START TRACE
DB2 START UP : 'BLANK' DATABASE NAME : 'BLANK'
AUDIT CATEGORIES : EXECUTE
AUDIT POLICY NAME: AUDDEPT
TABLE SCHEMA NAME: DB2R5
TABLE NAME : 'AUDDEPT'
Additional information: For information about DB2 start, stop, and display trace
command options, refer to DB2 10 for z/OS Command Reference, SC19-2972.
Changing an audit policy
You can change an existing audit policy using an SQL UPDATE statement while an audit trace
is active for that policy. However, updating the policy using SQL does not change the audit
trace criteria that DB2 selected when the audit trace started. To reflect policy changes in the
audit trace, use the DB2 command interface to stop and restart the audit policy.
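As a sketch of that workflow (the policy name AUDEMP and the EXECUTE value mirror the examples in this chapter; treat them as illustrative assumptions), the catalog update looks as follows:

```sql
-- Change the policy definition in the catalog; a running audit trace
-- for this policy does NOT pick up the change automatically
UPDATE SYSIBM.SYSAUDITPOLICIES
   SET EXECUTE = 'A'
 WHERE AUDITPOLICYNAME = 'AUDEMP';
```

You would then issue -STOP TRACE(AUDIT) AUDTPLCY(AUDEMP) followed by -START TRACE(AUDIT) AUDTPLCY(AUDEMP) DEST(GTF) so that DB2 refetches the changed policy definition.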
You can keep track of audit policy changes by having an appropriate audit policy started. In
our DB2 10 environment, we ran an update on SYSIBM.SYSAUDITPOLICIES under SYSADM
authority while the audit policy AUDSYSADMIN was started. Because of this policy, DB2
wrote an IFCID 361 trace record as a result of the UPDATE statement in Example 10-3 on
page 343.
Figure 10-11 shows the OMEGAMON PE record trace report that we created for IFCID 361.
Figure 10-11 OMEGAMON PE IFCID 361 record trace
Reason codes
To handle error situations that can occur during start, stop, and display audit trace commands
using the new AUDTPLCY keyword, DB2 10 introduces reason codes. For details about these
reason codes, refer to DB2 10 for z/OS Codes, GC19-2971.
Messages
DB2 10 introduces message IDs that are related to audit policy processing. These messages
can assist you in problem determination and in system automation. For details about these
messages for DB2 audit policies, refer to DB2 10 for z/OS Messages, GC19-2979.
10.1.2 Authorization
The following privileges are required to manage the policy-based auditing process:
INSERT privilege on the SYSIBM.SYSAUDITPOLICIES catalog table for defining the policy
TRACE privilege for activating the policy using the START AUDIT TRACE command
In DB2 10, both privileges are implicitly held by the SECADM authority, which simplifies DB2
auditing because only one authority is required for audit policy management and activation.
Because the SECADM authority can grant the privilege to perform inserts on the
SYSIBM.SYSAUDITPOLICIES catalog table, the management task of defining policies can be
delegated to other users, for example to users with TRACE authority. The SECADM authority
has the implicit privilege to issue the start, stop, and display trace commands. Any user who
has the necessary privileges to issue DB2 commands can specify the AUDTPLCY keyword
on the start, stop, and display trace commands.
PRIMAUTH CONNECT INSTANCE END_USER WS_NAME
ORIGAUTH CORRNAME CONNTYPE RECORD TIME DESTNO ACE IFC DESCRIPTION
PLANNAME CORRNMBR TCB CPU TIME ID
-------- -------- ---------- ------------ ------ --- --- -----------
DB2R5 DB2R5X TSO 17:36:53.164 421 3 361 AUDIT ADMIN
DSNTEP10 'BLANK' N/P AUTHORITIES
AUTHORITY TYPE : SYSADM AUTHID TYPE : AUTHORIZATION ID
PRIVILEGE CHECKED: 53 PRIVILEGE : UPDATE OBJECT TYPE : TABLE OR VIEW
AUTHORIZATION ID : DB2R5
SOURCE OBJ QUALIF: SYSIBM
SOURCE OBJ NAME : SYSAUDITPOLICIES
SQL STATEMENT:
UPDATE SYSIBM.SYSAUDITPOLICIES SET EXECUTE = 'A'
WHERE AUDITPOLICYNAME IN ('AUDEMP','AUDDEPT')
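For example, a SECADM might delegate policy definition and activation with ordinary grants. This is a sketch; the authorization ID AUDITADM is hypothetical:

```sql
-- Let a non-SECADM user define audit policies ...
GRANT INSERT, SELECT ON TABLE SYSIBM.SYSAUDITPOLICIES TO AUDITADM;
-- ... and activate them with the -START TRACE command
GRANT TRACE TO AUDITADM;
```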
The following authorities implicitly hold the SELECT privilege on the
SYSIBM.SYSAUDITPOLICIES catalog table:
ACCESSCTRL
DATAACCESS
System DBADM
SQLADM
SYSCTRL
SYSADM
10.1.3 Creating audit reports
With DB2 10, you can use OMEGAMON PE or another third-party report writer to format the
audit IFCID records. If you prefer to use a third-party report writer, check with your ISV for
compatibility with DB2 10 for z/OS. In our environment, we set up the JCL sample shown in
Figure 10-12 to create reports with OMEGAMON PE V5R1.
Figure 10-12 OMEGAMON PE V5R1 JCL sample
10.1.4 Policy-based SQL statement auditing for tables
DB2 10 implements policy-based SQL statement auditing of tables. The table audit capability
lets you dynamically activate SQL statement auditing for tables regardless of whether the
AUDIT clause is set in their table definition.
When table-based SQL statement auditing is active, DB2 writes IFCID 143, 144, and 145
trace records for every SQL statement that changes (SQL INSERT, UPDATE, or DELETE) or
reads (SQL SELECT) tables that are identified in the audit policy. IFCID 145 records
bind-time information, including the full SQL statement text and a unique SQL statement ID.
The SQL statement ID is used in the IFCID 143 and IFCID 144 trace records to relate any
changes or reads to the SQL statement identified in the IFCID 145 trace records.
//DB2R5PM JOB (999,POK),'JOSEF',CLASS=A,
// MSGCLASS=T,NOTIFY=&SYSUID,REGION=0M
/*JOBPARM S=SC63,L=9999
//DB2PM EXEC PGM=DB2PM,REGION=0K
//STEPLIB DD DISP=SHR,DSN=OMPE.V510.BETA.TKANMOD
//INPUTDD DD DISP=SHR,DSN=DB2R5.GTFDB2.SC63.D100809.T203916
//SYSIN DD *
AUDIT
REPORT
LEVEL(DETAIL)
TYPE(ALL)
TRACE
TYPE(ALL)
RECTRACE
TRACE
LEVEL(LONG)
EXEC
/*
For the tables identified in the policy, DB2 writes one audit record for each SQL statement ID
accessing the table. DB2 writes additional audit records when a table identified by the audit
policy is read or changed using a different SQL statement ID.
You create and manage the audit policy for SQL statement auditing for tables using the
interfaces that we described in 10.1.1, Audit policies on page 338.
DB2 10 introduces the following IFCID changes to support the new SQL statement ID based
auditing for tables:
IFCID 143 includes a unique statement identifier.
IFCID 144 includes a unique statement identifier.
IFCID 145 traces the entire SQL statement text and includes a unique statement identifier.
For dynamic SQL statements, the unique statement identifier (STMT_ID) is derived from the
dynamic statement cache, and for embedded static SQL statements, the STMT_ID is derived
from the SYSIBM.SYSPACKSTMT.STMT_ID catalog table column.
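For static SQL, the statement ID reported in the IFCID 143 and 144 records can therefore be mapped back to the statement text with a catalog query. The following sketch uses STMT_ID 24682 from the sample reports in this section; the exact set of SYSPACKSTMT columns that is available depends on your catalog level:

```sql
SELECT COLLID, NAME, STMT_ID, STATEMENT
  FROM SYSIBM.SYSPACKSTMT
 WHERE STMT_ID = 24682;
```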
You cannot use table-based auditing with tables that are defined in simple table spaces.
Audit policy for table-based SQL auditing
If you want to activate table-based auditing for SQL statements, you need to create an audit
policy for the EXECUTE category. Additionally, you need to provide further policy information
to specify the object type, object name, and the schema name that you want to audit. The
following SYSIBM.SYSAUDITPOLICY columns are relevant to the audit category EXECUTE:
AUDITPOLICYNAME
OBJECTSCHEMA
OBJECTNAME
OBJECTTYPE
EXECUTE
For more information about these columns, refer to Table 10-2 on page 340.
Sample scenario
We prepared a sample scenario with two sample tables, an embedded SQL program and a
small dynamic SQL workload, to illustrate table-based auditing. The sample scenario
contained the following objects:
Tables DB2R5.AUDEMP and DB2R5.AUDDEPT
Dynamic SQL
The dynamic SQL workload shown in Example 10-4 executes three different SQL
statements within one unit of work. All three statements access the same table. We
therefore see three IFCID 143 records within one unit of work, one for each distinct SQL
DML statement.
Example 10-4 AUD dynamic SQL for auditing
INSERT INTO DB2R5.AUDEMP
VALUES ( '300000' , 'JOSEF' , ' ' , 'KLITSCH' , 'X00' , '1234' ,
'1998-08-29' , 'ITJOB' , 42 , 'M' , '1958-09-13' , 99.99 , 99.99 , 578) ;
INSERT INTO DB2R5.AUDEMP
VALUES ( '300001' , 'JOSEF' , ' ' , 'KLITSCH' , 'X00' , '1234' ,
'1998-08-29' , 'ITJOB' , 42 , 'M' , '1958-09-13' , 99.99 , 99.99 , 578) ;
DELETE FROM DB2R5.AUDEMP WHERE EMPNO >= '300000' ;
COMMIT ;
Static SQL COBOL program: AUDSQL
The COBOL program AUDSQL executes the static SQL statements shown in
Example 10-5. The program contains two different SELECT statements, and each statement
is executed three times within each unit of work. We therefore see two IFCID 144 records
for each unit of work, one for each distinct SQL statement.
Example 10-5 AUD SQL static SQL for auditing
PERFORM 3 TIMES
PERFORM 3 TIMES
EXEC SQL
SELECT COUNT(*) INTO :V-EMP FROM DB2R5.AUDDEPT
END-EXEC
EXEC SQL
SELECT COUNT(*) INTO :V-EMP FROM DB2R5.AUDEMP
END-EXEC
END-PERFORM
EXEC SQL COMMIT END-EXEC
END-PERFORM.
Creating an EXECUTE category audit policy
Within this activity, we executed the SQL statement shown in Example 10-6 to create an
audit policy that audits any SQL activity on the DB2R5.AUD% tables. We set OBJECTNAME
to 'AUD%', which causes DB2 to use a LIKE predicate when determining the tables to be
audited. Example 10-6 inserts an audit policy that audits tables whose names begin with
AUD within schema DB2R5.
Example 10-6 Create SQL statement auditing policy
INSERT INTO SYSIBM.SYSAUDITPOLICIES
(AUDITPOLICYNAME, OBJECTSCHEMA, OBJECTNAME, OBJECTTYPE, EXECUTE)
VALUES('AUDTABLES','DB2R5','''AUD%''','T','A');
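A simple query (a sketch using the columns listed earlier in this section) confirms what was stored:

```sql
SELECT AUDITPOLICYNAME, OBJECTSCHEMA, OBJECTNAME, OBJECTTYPE, EXECUTE
  FROM SYSIBM.SYSAUDITPOLICIES
 WHERE AUDITPOLICYNAME = 'AUDTABLES';
```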
Starting the audit policy
Under an authority with the TRACE privilege, we successfully started the audit policy for
table-based SQL auditing (see Example 10-7).
Example 10-7 Start SQL statement auditing policy
-START TRACE(AUDIT) AUDTPLCY(AUDTABLES) DEST(GTF)
DSNW130I -DB0B AUDIT TRACE STARTED, ASSIGNED TRACE NUMBER 03
DSNW192I -DB0B AUDIT POLICY SUMMARY
AUDIT POLICY AUDTABLES STARTED
END AUDIT POLICY SUMMARY
DSN9022I -DB0B DSNWVCM1 '-START TRACE' NORMAL COMPLETION
Displaying audit policy status
If you are not sure whether an audit policy is started, you can use the DISPLAY TRACE
command with the AUDTPLCY parameter to verify that the audit policy started. To run the
DISPLAY TRACE command, you must have the DISPLAY TRACE privilege, which is included
in the system DBADM, SYSOPR, SYSCTRL, SYSADM, and SECADM authorities.
Example 10-8 shows the output of the DISPLAY TRACE command with the AUDTPLCY
parameter.
Example 10-8 Display SQL statement audit policy
-DIS TRACE(AUDIT) AUDTPLCY(AUDTABLES) DETAIL(1,2)
DSNW127I -DB0B CURRENT TRACE ACTIVITY IS -
TNO TYPE CLASS DEST QUAL IFCID
03 AUDIT * GTF NO
*********END OF DISPLAY TRACE SUMMARY DATA*********
DSNW143I -DB0B CURRENT TRACE QUALIFICATIONS ARE -
DSNW152I -DB0B BEGIN TNO 03 QUALIFICATIONS:
NO QUALIFICATIONS
END TNO 03 QUALIFICATIONS
DSNW185I -DB0B BEGIN TNO 03 AUDIT POLICIES:
ACTIVE AUDIT POLICY: AUDTABLES
END TNO 03 AUDIT POLICIES
DSNW148I -DB0B ******END OF DISPLAY TRACE QUALIFICATION DATA******
DSN9022I -DB0B DSNWVCM1 '-DIS TRACE' NORMAL COMPLETION
Stopping the audit policy
After successful SQL workload execution, we ran the command shown in Example 10-9 to
stop the audit policy. The command requires the TRACE privilege or the SECADM authority.
Example 10-9 Stop SQL statement audit policy
-STOP TRACE(AUDIT) AUDTPLCY(AUDTABLES)
DSNW131I -DB0B STOP TRACE SUCCESSFUL FOR TRACE NUMBER(S) 03
DSN9022I -DB0B DSNWVCM1 '-STOP TRACE' NORMAL COMPLETION
Reports for SQL auditing
In this section, we describe how we used OMEGAMON PE to format the IFCID traces that we
collected within our sample scenario.
OMEGAMON PE record trace for static SQL
Figure 10-13 shows the OMEGAMON PE record trace report that we created for our static
SQL sample workload. The report shows six IFCID 144 trace records (two for each DB2 unit
of work) that DB2 collected because of the SQL workload shown in Example 10-5 on
page 351.
Figure 10-13 OMEGAMON PE record trace for static SQL
DB2R5 BATCH C681E3C79388 'BLANK' 'BLANK' 'BLANK'
DB2R5 DB2R5R06 TSO 17:39:53.41335706 9 1 361 AUDIT ADMIN NETWORKID: USIBMSC
AUDSQL 'BLANK' N/P AUTHORITIES
AUTHORITY TYPE : SYSADM AUTHID TYPE : AUTHORIZATION ID
PRIVILEGE CHECKED: 64 PRIVILEGE : EXECUTE OBJECT TYPE : APPLICATION PLAN
AUTHORIZATION ID : DB2R5
SOURCE OBJ NAME : AUDSQL
SQL STATEMENT: 'BLANK'
................................................................................................
17:39:53.41637954 10 1 144 AUDIT FIRST 'BLANK'
N/P READ NETWORKID: USIBMSC LUNAME: SCPDB0B LUWSEQ: 1
DATABASE: 285 LOGRBA: X'000000000000'
PAGE SET: 222 TABLE OBID: 223
STMT ID : 24682
17:39:53.47192559 11 1 144 AUDIT FIRST 'BLANK'
N/P READ NETWORKID: USIBMSC LUNAME: SCPDB0B LUWSEQ: 1
DATABASE: 285 LOGRBA: X'000000000000'
PAGE SET: 219 TABLE OBID: 220
STMT ID : 24683
17:39:53.50356837 12 1 144 AUDIT FIRST 'BLANK'
N/P READ NETWORKID: USIBMSC LUNAME: SCPDB0B LUWSEQ: 2
DATABASE: 285 LOGRBA: X'000000000000'
PAGE SET: 222 TABLE OBID: 223
STMT ID : 24682
17:39:53.50384456 13 1 144 AUDIT FIRST 'BLANK'
N/P READ NETWORKID: USIBMSC LUNAME: SCPDB0B LUWSEQ: 2
DATABASE: 285 LOGRBA: X'000000000000'
PAGE SET: 219 TABLE OBID: 220
STMT ID : 24683
17:39:53.50402484 14 1 144 AUDIT FIRST 'BLANK'
N/P READ NETWORKID: USIBMSC LUNAME: SCPDB0B LUWSEQ: 3
DATABASE: 285 LOGRBA: X'000000000000'
PAGE SET: 222 TABLE OBID: 223
STMT ID : 24682
17:39:53.50428398 15 1 144 AUDIT FIRST 'BLANK'
N/P READ NETWORKID: USIBMSC LUNAME: SCPDB0B LUWSEQ: 3
DATABASE: 285 LOGRBA: X'000000000000'
PAGE SET: 219 TABLE OBID: 220
STMT ID : 24683
OMEGAMON PE record trace for dynamic SQL
Example 10-10 shows the OMEGAMON PE record trace report that we created for our
dynamic SQL sample workload. The report shows three IFCID 143 trace records that DB2
collected because of the SQL workload shown in Example 10-4 on page 350. We expected
this output because we executed the SQL DML statements within one DB2 unit of work.
Example 10-10 OMEGAMON PE record trace for dynamic SQL
DB2R5 BATCH C68208FEC3A1 'BLANK' 'BLANK' 'BLANK'
DB2R5 DB2R5X TSO
DSNTEP10 'BLANK'
20:26:23.40079720 83 1 143 AUDIT FIRST NETWORKID: USIBMSC LUNAME: SCPDB0B LUWSEQ: 1
N/P WRITE DATABASE: 285 LOGRBA: X'000031D4D1F2'
PAGE SET: 222 TABLE OBID: 223
STMT ID : 19
20:26:23.40120014 84 1 143 AUDIT FIRST 'BLANK'
N/P WRITE NETWORKID: USIBMSC LUNAME: SCPDB0B LUWSEQ: 1
DATABASE: 285 LOGRBA: X'000031D4D1F2'
PAGE SET: 222 TABLE OBID: 223
STMT ID : 20
20:26:23.40152556 85 1 143 AUDIT FIRST 'BLANK'
N/P WRITE NETWORKID: USIBMSC LUNAME: SCPDB0B LUWSEQ: 1
DATABASE: 285 LOGRBA: X'000031D4D1F2'
PAGE SET: 222 TABLE OBID: 223
STMT ID : 21
10.1.5 Summary
The auditing capability provided by DB2 10 enables you to dynamically activate audit policies
without having to alter objects in DB2. DB2 10 also simplifies the management of auditable
events by introducing audit policies and by providing the ability to group auditable events into
audit categories. Audit policies are stored in the SYSIBM.SYSAUDITPOLICIES catalog table,
which helps to simplify auditing processes and procedures. Storing the audit policies in this
table also makes the audit policy definitions available to all parties involved in administering
and managing audit policies throughout a DB2 subsystem or, if applicable, throughout all
members of a data sharing group.
The DB2 10 interface for starting and stopping audit policies is based entirely on the audit
policies that are stored in the SYSIBM.SYSAUDITPOLICIES table. For example, you can start an
audit policy using the AUDTPLCY keyword in the START AUDIT TRACE command. You do
not need to specify IFCIDs or trace classes. During the audit trace start, DB2 fetches the
audit policy from the catalog and determines the IFCIDs that it needs to collect to satisfy the
auditing request.
10.2 More granular system authorities and privileges
DB2 10 provides additional system authorities and privileges, as illustrated in Figure 10-14.
These authorities and privileges are designed to help businesses comply with government
regulations and to simplify the management of authorities. DB2 10 introduces the concepts of
separation of duties and least privilege to address these needs.
Separation of duties provides the ability for administrative authorities to be divided across
individuals without overlapping responsibilities, so that one user does not possess unlimited
authority (for example SYSADM).
In DB2 10, the system DBADM authority allows an administrator to manage all databases in a
DB2 subsystem. By default, the system DBADM has all the privileges of the DATAACCESS
and ACCESSCTRL authorities.
If you do not want a user with the system DBADM authority to GRANT any explicit
privileges, you can specify the WITHOUT ACCESSCTRL clause in the GRANT statement
when you grant the authority.
If you do not want a user with the system DBADM authority to access any user data in the
databases, you can specify the WITHOUT DATAACCESS clause in the GRANT statement
when you grant the authority. However, even when an authid or role has system DBADM
WITHOUT DATAACCESS, explicit privileges (such as SELECT) can still be granted to this
authid or role.
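As a sketch that follows the GRANT syntax shown in Figure 10-15 on page 357 (the role name DBADMIN is hypothetical), a restricted system DBADM could be granted as follows:

```sql
-- System DBADM that can manage objects but cannot read user data
-- and cannot grant or revoke privileges
GRANT DBADM WITHOUT DATAACCESS WITHOUT ACCESSCTRL TO ROLE DBADMIN;
```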
The SECADM authority provides the ability to manage access to tables in DB2 while not
holding the privilege to create, alter, drop, or access a table. Furthermore, a DATAACCESS
authority can access all user tables without being able to manage security or database
objects.
You can use these authorities to minimize the need for the SYSADM authority in day-to-day
operations by delegating security administration, system-related database administration,
and database access duties to the SECADM, system DBADM, and DATAACCESS
authorities. To go even further, you can separate security administration from the SYSADM
authority so that SYSADM can no longer perform security-related tasks.
The least privilege concept allows an administrator to delegate part of their responsibilities to
other users without providing extraneous privileges. Under this model, database
administrators can share administrative duties while ensuring a high level of security.
Figure 10-14 System authorities
System authorities
Before DB2 10
Installation SYSADM
SYSADM
DBADM
DBCTRL
DBMAINT
SYSCTRL
PACKADM
Installation SYSOPR
SYSOPR
New in DB2 10
SECADM
System DBADM
With DATAACCESS
With ACCESSCTRL
SQLADM
EXPLAIN
10.2.1 Separation of duties
Separation of duties is the ability to distribute administrative authorities to different users
without overlapping responsibilities. DB2 10 implements separation of duties by providing
the following auditable system administrative authorities:
SYSADM can now be set up so that it no longer manages access to secured objects, such
as roles, and no longer grants and revokes authorities or privileges.
The SECADM authority performs all security-related tasks against a DB2 subsystem, such
as managing security-related objects and granting and revoking authorities and
privileges.
The system DBADM authority manages objects but can be granted without access to user
data and without the ability to grant and revoke authorities and privileges.
The DATAACCESS authority controls database administrator access to user data in
DB2.
10.2.2 Least privilege
Least privilege allows an administrator to grant users only the privilege that is required to
perform a specific task. DB2 10 introduces privileges to help protect business-sensitive data
from users who should only be able to perform administrative tasks that do not require access
to user data. In that context, DB2 10 introduces the following new privileges and authorities:
The EXPLAIN privilege, to issue EXPLAIN, PREPARE, and DESCRIBE statements
without requiring the privilege to execute the SQL statements.
The SQLADM authority, to monitor and tune SQL without any additional privileges.
The ACCESSCTRL authority, which allows a SECADM authority to delegate the ability to
grant and revoke object privileges and most administrative authorities.
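As a sketch (the authorization ID PERFANA is hypothetical; Figure 10-15 on page 357 shows the complete grant syntax), the new privileges are granted like other system privileges:

```sql
-- Allow a performance analyst to EXPLAIN and tune SQL
-- without the ability to execute the statements or read the data
GRANT EXPLAIN TO PERFANA;
GRANT SQLADM TO PERFANA;
```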
10.2.3 Grant and revoke system privilege changes
Except for the SECADM authority, the administrative authorities are system privileges that
can be granted and revoked in DB2. In DB2 10, the interface to grant and revoke system
privileges is extended to support these new authorities. Because the SECADM authority is
not recorded in the catalog, you cannot grant or revoke it like other system
privileges (refer to 10.2.5, SECADM authority on page 359 for more information).
Figure 10-15 shows the syntax of the grant system privilege SQL.
Figure 10-15 SQL syntax grant system
Note that the WITH GRANT OPTION is also ignored for the CREATE_SECURE_OBJECT
privilege.
Syntax
,
GRANT ACCESSCTRL
ARCHIVE ON SYSTEM
BINDADD
BINDAGENT
BSDS
CREATEALIAS
CREATEDBA
CREATEDBC
CREATESG
CREATETMTAB
CREATE_SECURE_OBJECT
DATAACCESS
(1) WITH ACCESSCTRL WITH DATAACCESS
DBADM
WITHOUT ACCESSCTRL WITHOUT DATAACCESS
DEBUGSESSION
DISPLAY
EXPLAIN
MONITOR1
MONITOR2
RECOVER
SQLADM
STOPALL
STOSPACE
SYSADM
SYSCTRL
SYSOPR
TRACE
,
TO authorization-name
ROLE role-name
PUBLIC
(2)
WITH GRANT OPTION
Notes:
1 The ACCESSCTRL and DATAACCESS clauses can be specified in any order.
2 The WITH GRANT OPTION can be specified but is ignored for DBADM, DATAACCESS, and
ACCESSCTRL.
Revoke system privileges
In DB2 10, the SQL interface for revoking system privileges is changed to support the
ACCESSCTRL, DATAACCESS, system DBADM, EXPLAIN, and SQLADM authorities.
Figure 10-16 shows the syntax of the revoke system privilege SQL.
Figure 10-16 SQL syntax revoke system
,
REVOKE ACCESSCTRL
ARCHIVE
BINDADD
BINDAGENT
BSDS
CREATEALIAS
CREATEDBA
CREATEDBC
CREATESG
CREATETMTAB
CREATE_SECURE_OBJECT
DATAACCESS
DBADM
DEBUGSESSION
DISPLAY
EXPLAIN
MONITOR1
MONITOR2
RECOVER
SQLADM
STOPALL
STOSPACE
SYSADM
SYSCTRL
SYSOPR
TRACE
FROM
,
authorization-name
ROLE role-name
PUBLIC
,
BY authorization-name
ROLE role-name
ALL
INCLUDING DEPENDENT PRIVILEGES
(1) (2)
NOT INCLUDING DEPENDENT PRIVILEGES
Notes:
1 INCLUDING DEPENDENT PRIVILEGES must not be specified when ACCESSCTRL,
DATAACCESS, or DBADM is specified.
2 NOT INCLUDING DEPENDENT PRIVILEGES must be specified when ACCESSCTRL,
DATAACCESS, or DBADM is specified.
10.2.4 Catalog changes
Except for the SECADM authority, these new system administrative authorities are, like any
other DB2 system privilege, recorded in the SYSIBM.SYSUSERAUTH catalog table.
The AUTHHOWGOT catalog column records the authority by which the privilege was granted. In
DB2 10, new values for the AUTHHOWGOT column indicate whether a privilege was
granted by the SECADM or ACCESSCTRL authority.
The SYSIBM.SYSUSERAUTH catalog table
The SYSIBM.SYSUSERAUTH catalog table records the system privileges that are held by
users. In DB2 10, columns are added to record the EXPLAIN, SQLADM, system DBADM,
DATAACCESS, and ACCESSCTRL system authorities. Table 10-4 provides details about
these catalog columns.
Table 10-4 Catalog table changes for SYSIBM.SYSUSERAUTH
10.2.5 SECADM authority
DB2 10 introduces a security administrative authority, the SECADM authority, which allows you
to manage security-related DB2 objects and to control access to all database objects and
resources. SECADM has no implicit privilege to access user data or to manage database
objects. However, if a table privilege is granted to PUBLIC, everyone gets access, including
users with the SECADM authority.
Column name Data type Description
ACCESSCTRLAUTH CHAR(1) NOT NULL
WITH DEFAULT
Whether the grantee has system ACCESSCTRL authority:
blank: Privilege is not held
Y: Privilege is held without the GRANT option
CREATESECUREAUTH CHAR(1) NOT NULL
WITH DEFAULT
Whether the GRANTEE can create secured objects
(triggers and user-defined functions):
blank: Privilege is not held
Y: Privilege is held without the GRANT option
DATAACCESSAUTH CHAR(1) NOT NULL
WITH DEFAULT
Whether the grantee has system DATAACCESS authority:
blank: Privilege is not held
Y: Privilege is held without the GRANT option
EXPLAINAUTH CHAR(1) NOT NULL
WITH DEFAULT
Whether the grantee has EXPLAIN privilege:
blank: Privilege is not held
G: Privilege is held with the GRANT option
Y: Privilege is held without the GRANT option
SDBADMAUTH CHAR(1) NOT NULL
WITH DEFAULT
Whether the grantee has system DBADM authority:
blank: Privilege is not held
Y: Privilege is held without the GRANT option
SQLADMAUTH CHAR(1) NOT NULL
WITH DEFAULT
Whether the grantee has SQLADM authority:
blank: Privilege is not held
G: Privilege is held with the GRANT option
Y: Privilege is held without the GRANT option
Additional information: For a full description of the catalog tables referenced at 10.2.4,
Catalog changes on page 359, refer to DB2 10 for z/OS SQL Reference, SC19-2983.
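Using the columns in Table 10-4, a sketch of a query that lists who holds the new authorities (a blank value means the privilege is not held):

```sql
SELECT GRANTEE, SDBADMAUTH, DATAACCESSAUTH, ACCESSCTRLAUTH,
       SQLADMAUTH, EXPLAINAUTH
  FROM SYSIBM.SYSUSERAUTH
 WHERE SDBADMAUTH <> ' ' OR DATAACCESSAUTH <> ' '
    OR ACCESSCTRLAUTH <> ' ' OR SQLADMAUTH <> ' '
    OR EXPLAINAUTH <> ' ';
```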
The SECADM authority is available in DB2 10 conversion mode (CM). If you fall back to a
previous version, the DSNZPARMs are reassembled and SECADM is removed.
Privileges held by the SECADM authority
The SECADM authority implicitly holds the grantable privileges shown in Table 10-5.
Table 10-5 Privileges held by SECADM authority
Activating the SECADM authority
In DB2 10 NFM, the SECADM authority is activated either implicitly (if you do not provide the
SECADM related DSNZPARMs) or explicitly (if you do provide the SECADM related
DSNZPARMs).
DSNZPARMs for the SECADM authority
DB2 10 introduces the DSNZPARMs shown in Table 10-6 to configure and manage the
SECADM authority.
Table 10-6 DSNZPARMs for SECADM
Privilege category Description
SELECT Select on all catalog tables
INSERT, UPDATE, DELETE All updatable catalog tables including SYSIBM.SYSAUDITPOLICIES
GRANT, REVOKE
(SQL DCL)
Grant access to and revoke access from all database resources
Revoke explicit privileges that are granted by SECADM
Revoke explicit privileges granted by others using the BY
clause
Create, alter, drop, comment Column mask controls
Row permission controls
Alter user table to Activate and deactivate column level access control
Activate and deactivate row level access control
Create, drop, comment Role
Create, drop, comment, alter Trusted context
Online DSNZPARM changes SECADM can perform any online DSNZPARM change
Utilities Unload catalog tables only
DSN1SDMP
Commands Start, alter, stop, and display trace
CREATE_SECURE_OBJECT Alter the SECURED or NOT SECURED clause on triggers and
user-defined functions
DSNZPARM Description
SECADM1 Authorization ID or role name of first SECADM
AUTHID: 1 to 8 characters, starting with an alphabetic character
ROLE: Ordinary SQL identifier up to 128 bytes and must not use any of the reserved words
ACCESSCTRL, DATAACCESS, DBADM, DBCTRL, DBMAINT, NONE, NULL, PACKADM,
PUBLIC, SECADM, or SQLADM.
Default if not provided in DSNZPARM: SECADM
SECADM1_TYPE Specifies whether SECADM1 is an authorization ID or a role
Acceptable values: AUTHID, ROLE
Default: AUTHID
The DSNZPARMs described in Table 10-6 are online changeable by either the install SYSADM
or the SECADM authority. Any attempt by any other authorization ID or role to change any of
these online DSNZPARMs fails.
Defaults for SECADM related DSNZPARMs
If you do not specify any of the SECADM DSNZPARMs listed in Table 10-6, DB2 uses their
default values. With these defaults, the SECADM authority is not separated from the
SYSADM authority, and the SYSADM authority implicitly has SECADM authority and
continues to function as in DB2 10 CM or in previous versions of DB2.
At this point (with SEPARATE_SECURITY set to NO), you do not need to implement the
SECADM authority authorization ID that is provided by the DSNZPARM defaults, because
your SYSADM authority implicitly has SECADM authority and can exercise any privilege that
is implicitly held by the SECADM authority.
Setting DSNZPARM SEPARATE_SECURITY to NO
If you set DSNZPARM SEPARATE_SECURITY to NO, the SYSADM authority implicitly has
SECADM authority, in which case the SYSADM authority behaves exactly the same way as in
DB2 10 CM and in previous versions of DB2 for z/OS.
Setting DSNZPARM SEPARATE_SECURITY to NO is recommended for DB2 for z/OS
version migrations. With this DSNZPARM setting, you preserve the status of your existing
SYSADM authorities.
SECADM2 Authorization ID or role name of second SECADM
AUTHID: 1 to 8 characters, starting with an alphabetic character
ROLE: Ordinary SQL identifier up to 128 bytes and must not use any of the reserved words
ACCESSCTRL, DATAACCESS, DBADM, DBCTRL, DBMAINT, NONE, NULL, PACKADM,
PUBLIC, SECADM, or SQLADM.
Default if not provided in DSNZPARM: SECADM
SECADM2_TYPE Specifies whether SECADM2 is an authorization ID or a role
Acceptable values: AUTHID, ROLE
Default: AUTHID
SEPARATE_SECURITY Specifies whether security administrator duties (SECADM) are separated from system
administrator duties (SYSADM)
Acceptable values: YES or NO
Default value: NO
Notes regarding authorities:
INSTALL SYSADM always has the right to perform security-related tasks.
If SEPARATE_SECURITY is set to NO, the SYSADM authority implicitly holds the
SECADM authority.
If SEPARATE_SECURITY is set to YES, the SYSADM authority does not hold the
privilege to perform security-related tasks.
SECADM1, SYSADM, and install SYSADM can all grant and manage security objects
at the same time.
Setting DSNZPARM SEPARATE_SECURITY to YES
After you set DSNZPARM SEPARATE_SECURITY to YES, the SECADM authority can
perform the tasks listed in Table 10-5 on page 360. SYSADM no longer has the privileges to
grant and revoke these privileges in DB2. Install SYSADM is not affected by the
SEPARATE_SECURITY setting and can grant and revoke at any time. Only the granted
SYSADM authority is affected.
Specify your own SECADM DSNZPARM
You can activate a SECADM authority of your choice by setting at least one of the SECADM1
or SECADM2 DSNZPARMs to a ROLE or an authorization ID of your organization. Table 10-7
shows the SECADM related DSNZPARMs that we set explicitly in our DB2 10 environment.
Table 10-7 SECADM authority DSNZPARMs settings of our sample scenario
DSNZPARM            Parameter value
SECADM1             DB0BSECA
SECADM1_TYPE        AUTHID
SECADM2             DB0BSECA
SECADM2_TYPE        AUTHID
SEPARATE_SECURITY   NO

We set both DSNZPARMs, SECADM1 and SECADM2, to the same value, RACF group
DB0BSECA. In the command output provided in Figure 10-17, user DB2R53 is connected to
that RACF group. As a result, user DB2R53 can perform DB2 work that requires the
privileges of the SECADM authority. For example, DB2R53 can perform DSNZPARM online
changes of SECADM related DSNZPARMs and can perform any of the tasks listed in
Table 10-5 on page 360.

LG DB0BSECA
INFORMATION FOR GROUP DB0BSECA
SUPERIOR GROUP=SYS1 OWNER=RC63 CREATED=10.215
INSTALLATION DATA=SECADM RACF GROUP FOR DB2 SYSTEM DB0B
NO MODEL DATA SET
TERMUACC
NO SUBGROUPS
USER(S)= ACCESS= ACCESS COUNT= UNIVERSAL ACCESS=
DB2R53 USE 000000 NONE
CONNECT ATTRIBUTES=NONE
REVOKE DATE=NONE RESUME DATE=NONE
Figure 10-17 User DB2R53 is connected to RACF group DB0BSECA

For the SECADM authority to be exercised by user DB2R53, user DB2R53 must set its
CURRENT SQLID special register to DB0BSECA. Otherwise, user DB2R53 cannot grant and
revoke privileges, and any attempt to grant or revoke privileges under an SQLID other than
DB0BSECA fails with SQLCODE -551.

Revoking the SECADM authority
Similar to install SYSADM, the SECADM authority is not recorded in the DB2 catalog and
cannot be granted or revoked. Instead, SECADM can be modified only through an online
DSNZPARM change that must be performed either by the install SYSADM or the SECADM
authority. Removing the SECADM authority from an authorization ID or role through an online
DSNZPARM change does not revoke any dependent privileges that are granted by SECADM.
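As a sketch of such an online change, you would update the SECADM settings in your
DSNZPARM job (typically DSNTIJUZ), assemble and link-edit a new DSNZPARM load
module, and then load it with the SET SYSPARM command. The module name DB0BPARM
is a placeholder for your installation's own:

-SET SYSPARM LOAD(DB0BPARM)

Alternatively, -SET SYSPARM RELOAD reloads the load module that was last activated.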
Revoking privileges granted by others
If you use the SECADM authority to revoke privileges granted by others, you must specify the
BY clause in the revoke statement as shown in Example 10-11.
Example 10-11 SECADM to revoke privileges granted by others
REVOKE DATAACCESS ON SYSTEM FROM DB0BDA BY DB0BSECA
NOT INCLUDING DEPENDENT PRIVILEGES ;
RACF/DB2 Access Control Authorization Exit users
You can define the SECADM authority in RACF for use with the RACF/DB2 Access Control
Authorization Exit. For a list of RACF profiles that are available to manage the DB2 10
security authorities through RACF, refer to 10.2.11, Using RACF profiles to manage DB2 10
authorities on page 377.
Auditing the SECADM authority
You can use a category SECMAINT audit policy to keep track of grants and revokes
performed by SECADM authorities. In our DB2 10 environment, we used the SQL statement
shown in Example 10-12 to create an audit policy to audit all grants and revokes performed by
SECADM authorities.
Example 10-12 SECMAINT audit policy - grant - revoke auditing
INSERT INTO SYSIBM.SYSAUDITPOLICIES
(AUDITPOLICYNAME, SECMAINT) VALUES('AUDSECMAINT','A');
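The INSERT statement only defines the audit policy in SYSIBM.SYSAUDITPOLICIES; the
policy takes effect when the audit trace is started with the AUDTPLCY keyword on the
START TRACE command. A sketch (the trace destination shown is illustrative):

-START TRACE(AUDIT) DEST(SMF) AUDTPLCY(AUDSECMAINT)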
We then started the audit policy and used the SECADM authority authorization ID
DB0BSECA to grant the DATAACCESS privilege to authorization ID DB0BDA (see
Example 10-13).
Example 10-13 SECMAINT audit policy - grant DATAACESS SQL
SET CURRENT SQLID = 'DB0BSECA';
GRANT DATAACCESS ON SYSTEM TO DB0BDA;
As a result, an IFCID 141 audit trace record with an audit reason of SECADM was created by
DB2. The OMEGAMON PE formatted audit trace for this trace record is shown in
Figure 10-18.
TIMESTAMP   TYPE     DETAIL
----------- -------- ------------------------------------------------------------------
18:32:11.91 AUTHCNTL GRANTOR: DB0BSECA  OWNER TYPE: PRIM/SECOND AUTHID
                     REASON: SECADM  SQLCODE: 0
                     OBJECT TYPE: USERAUTH
                     TEXT: GRANT DATAACCESS ON SYSTEM TO DB0BDA
Figure 10-18 OMEGAMON PE category SECMAINT audit report

10.2.6 System DBADM authority
DB2 10 introduces the system DBADM administrative authority, which allows you to perform
administrative tasks throughout all databases of a DB2 system and to issue commands for a
DB2 subsystem without being able to access user data and without the ability to grant and
revoke privileges.
The system DBADM authority is available in DB2 10 NFM.
Privileges held by system DBADM
The system DBADM authority implicitly holds the grantable privileges shown in Table 10-8.
Table 10-8 System DBADM privileges
Privilege category Description
System privileges BINDADD
BINDAGENT
CREATEALIAS
CREATEDBA
CREATEDBC
CREATETMTAB
DISPLAY
MONITOR1
MONITOR2
TRACE
EXPLAIN
SQLADM
On all collections CREATEIN
All user databases CREATETAB
CREATETS
DISPLAYDB
DROP
IMAGCOPY
RECOVERDB
STARTDB
STOPDB
On all user tables except for tables with row
permissions or column masks defined
ALTER
INDEX
REFERENCES
TRIGGER
TRUNCATE
On all packages BIND, COPY
On all plans BIND
On all schemas CREATEIN
ALTERIN
DROPIN
On all sequences ALTER
On all distinct types ALTER
Use privileges USE of table spaces
On all catalog tables SELECT, UNLOAD
On all updatable catalog tables except for table
SYSIBM.SYSAUDITPOLICY
INSERT
UPDATE
DELETE
Management of trusted contexts, roles, row permissions, and column masks, as well as
altering or defining triggers and UDFs with the SECURED or NOT SECURED clause, is
entirely the responsibility of the SECADM authority. These objects protect business-sensitive
information, which is why SECADM manages them exclusively and system DBADM cannot
manage any of them.
The system DBADM authority does not have any of the ARCHIVE, BSDS, CREATESG, and
STOSPACE system privileges. Therefore, you need to grant these privileges separately if an
authorization ID with system DBADM needs to perform tasks that require these privileges.
The system DBADM authority can set the plan or package bind owner to any value, provided
that DSNZPARM SEPARATE_SECURITY is set to NO. However, the package owner
specified by system DBADM must have authorization to execute all statements that are
embedded in the package.
Granting and revoking the system DBADM privilege
The system DBADM authority can be granted or revoked only by an authorization ID or role
with SECADM authority. Revoking this authority does not revoke dependent privileges.
Under SECADM authority, we used the SQL statements shown in Example 10-14 to grant
and revoke the system DBADM privilege for the authorization ID DB0BSDBA. By default, the
DATAACCESS and ACCESSCTRL authorities are granted when the system DBADM
authority is granted.
Example 10-14 Grant or revoke system DBADM privilege
SET CURRENT SQLID = 'DB0BSECA';
GRANT DBADM ON SYSTEM TO DB0BSDBA;
REVOKE DATAACCESS ON SYSTEM FROM DB0BSDBA BY DB0BSECA
NOT INCLUDING DEPENDENT PRIVILEGES ;
If you do not want the system DBADM authority to have access to data and the ability to grant
and revoke privileges, you have to explicitly exclude the DATAACCESS and ACCESSCTRL
privileges from the grant system DBADM SQL statement as shown in Example 10-15.
Example 10-15 Grant system DBADM WITHOUT DATAACCESS WITHOUT ACCESSCTRL
GRANT DBADM WITHOUT DATAACCESS WITHOUT ACCESSCTRL ON SYSTEM TO DB0BSDBA;
Note that WITHOUT DATAACCESS does not exclude data access in a persistent way for all
objects: an ID granted system DBADM WITHOUT DATAACCESS can still be granted explicit
privileges on individual objects.
When you revoke the system DBADM privilege, you must specify the NOT INCLUDING
DEPENDENT PRIVILEGES clause. Otherwise, the revoke statement fails with SQLCODE
-104. If system DBADM is granted WITH DATAACCESS and WITH ACCESSCTRL, then
DATAACCESS and ACCESSCTRL must be revoked explicitly.
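To illustrate the preceding rule, the following sketch contrasts a revoke of the system DBADM
authority without and with the required clause (the SQL codes are those cited above):

REVOKE DBADM ON SYSTEM FROM DB0BSDBA BY DB0BSECA;   -- fails with SQLCODE -104
REVOKE DBADM ON SYSTEM FROM DB0BSDBA BY DB0BSECA
  NOT INCLUDING DEPENDENT PRIVILEGES;               -- succeeds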
Table 10-8 System DBADM privileges (continued)
Privilege category      Description
Utilities               UNLOAD on catalog tables only;
                        DSN1SDMP to start, stop, and capture monitor traces
User-defined routines   EXECUTE privilege on all system-defined routines
                        (functions and procedures)
Creating objects
The system DBADM authority can create databases, tables, aliases, sequences, table
spaces, and distinct types without requiring any additional privileges. System DBADM can
also create objects such as tables, views, indexes, and aliases for someone else. However, to
create functions, indexes, triggers, MQTs, procedures, or views, the system DBADM authority
might require additional privileges as described in Table 10-9.
Table 10-9 Additional privileges required by system DBADM
DB2 object                              Description
Function using a table as input         If the function uses a table as a parameter, one of the
parameter                               following privileges:
                                        SELECT privilege on any table that is an input
                                        parameter to the function
                                        DATAACCESS authority
Sourced function                        If the function is sourced on another function:
                                        EXECUTE privilege for the source function
                                        DATAACCESS authority
External function or procedure          If the function or procedure runs in a WLM application
required to run in a WLM APPLENV        environment (WLM APPLENV):
                                        Authority to create functions or procedures in that
                                        WLM APPLENV
JAVA function or procedure              If a .jar file name is specified in the external name
                                        clause, one of the following privileges:
                                        USAGE privilege on the .jar file
                                        DATAACCESS authority
Index on expression                     If the index is created on an expression, one of the
                                        following privileges:
                                        EXECUTE privilege on any UDF that is involved in
                                        the index expression
                                        DATAACCESS authority
Trigger                                 If the trigger accesses other DB2 objects, one of the
                                        following privileges:
                                        The privilege that satisfies the authorization
                                        requirement for the SQL statement contained in
                                        the trigger
                                        DATAACCESS authority
View                                    One of the following privileges:
                                        SELECT privileges on the table or view referenced
                                        by the view
                                        DATAACCESS authority

Altering and dropping objects
The system DBADM authority can alter and drop user objects without requiring object
ownership or the privilege to alter or drop the object. However, the system DBADM cannot
perform the security-sensitive alterations that are exclusively managed by the SECADM
authority. These alterations include:
Altering user tables to activate or deactivate row-level access control
Altering triggers to modify the SECURED or NOT SECURED clause
Altering functions to modify the SECURED or NOT SECURED clause
RACF/DB2 Access Control Authorization Exit users
You can define the system DBADM authority in RACF for use with the RACF/DB2 Access
Control Authorization Exit. For a list of RACF profiles that are available to manage the DB2 10
security authorities through RACF, refer to 10.2.11, Using RACF profiles to manage DB2 10
authorities on page 377.
Auditing the system DBADM authority
You can use a category DBADMIN audit policy to keep track of activities that are performed
by system DBADM authorities. In our DB2 10 environment, we used the SQL statement
shown in Example 10-16 to create an audit policy to audit all database administrative tasks
performed by system DBADM authorities. We set the DBADMIN column to * to audit all
authorities.
Example 10-16 DBADMIN audit policy - system DBADM auditing
INSERT INTO SYSIBM.SYSAUDITPOLICIES
(AUDITPOLICYNAME, DBADMIN) VALUES('AUDDBADMIN','*');
We then started the audit policy and used the system DBADM authority authorization ID
DB0BSDBA to create a plan table in schema DB0BEXPLAIN (see Example 10-17).
Example 10-17 DBADMIN audit policy - create table by system DBADM
SET CURRENT SQLID = 'DB0BSDBA';
CREATE TABLE DB0BEXPLAIN.PLAN_TABLE LIKE DB2R5.PLAN_TABLE;
As a result, DB2 created an IFCID 361 audit trace record for authorization type SYSDBADM.
Figure 10-19 shows the OMEGAMON PE formatted audit trace for this trace record.

AUTHCNTL AUTH TYPE: SYSDBADM
         PRIV CHECKED: CREATE TABLE  OBJECT TYPE: DATABASE
         AUTHID: DB0BSDBA
         SOURCE OBJECT
           QUALIFIER: SYSIBM
           NAME: DSNDB04
         TARGET OBJECT
           QUALIFIER: DB0BEXPLAIN
           NAME: PLAN_TABLE
         TEXT: CREATE TABLE DB0BEXPLAIN.PLAN_TABLE LIKE DB2R5.PLAN_TABLE
Figure 10-19 OMEGAMON PE auth type SYSDBADM audit report

Additional information: For more information about row level and column mask control,
refer to 10.5, Support for row and column access control on page 384.

10.2.7 DATAACCESS authority
DB2 10 introduces the DATAACCESS authority, which allows you to access data in all user
tables, views, and materialized query tables within a DB2 subsystem. The DATAACCESS
authority can furthermore execute all plans, packages, functions, and procedures.
The DATAACCESS authority is available in DB2 10 NFM.
Privileges held by DATAACCESS
The DATAACCESS authority implicitly has the grantable privileges shown in Table 10-10.
Table 10-10 DATAACCESS implicitly held grantable privileges
Privilege category                      Description
System privilege                        DEBUGSESSION
All user tables, views, MQTs            SELECT, INSERT, UPDATE, DELETE, TRUNCATE
All plans, packages, routines           EXECUTE
All databases                           RECOVERDB, REORG, REPAIR, LOAD
All jar files                           USAGE
All sequences                           USAGE
All distinct types                      USAGE
All catalog tables                      SELECT
All updatable catalog tables except     INSERT, UPDATE, DELETE
for SYSIBM.SYSAUDITPOLICIES

Granting and revoking the DATAACCESS authority
The DATAACCESS authority can be granted or revoked only by an authorization ID or role
with SECADM authority. Revoking this authority does not revoke dependent privileges. The
WITH GRANT option is not supported with DATAACCESS and is ignored when used in grant
statements. Under SECADM authority, we used the SQL statements shown in Example 10-18
to grant and revoke the DATAACCESS authority for the authorization ID DB0BDA.
Example 10-18 Grant or revoke the DATAACCESS privilege
SET CURRENT SQLID = 'DB0BSECA';
GRANT DATAACCESS ON SYSTEM TO DB0BDA;
REVOKE DATAACCESS ON SYSTEM FROM DB0BDA BY DB0BSECA
NOT INCLUDING DEPENDENT PRIVILEGES ;
When you revoke the DATAACCESS privilege, you must specify the NOT INCLUDING
DEPENDENT PRIVILEGES clause. Otherwise, the revoke statement fails with SQLCODE
-104.
RACF/DB2 Access Control Authorization Exit users
You can define the DATAACCESS authority in RACF for use with the RACF/DB2 Access
Control Authorization Exit. For a list of RACF profiles that are available for managing the
DB2 10 security authorities through RACF, refer to 10.2.11, Using RACF profiles to manage
DB2 10 authorities on page 377.
Auditing the DATAACCESS authority
You can use a category DBADMIN audit policy to keep track of activities that are performed
by DATAACCESS authorities. In our DB2 10 environment, we used the SQL statement shown
in Example 10-19 to create an audit policy to audit all database administrative tasks
performed by DATAACCESS authorities. We set the DBADMIN column to T to audit only
DATAACCESS authorities.
Example 10-19 DBADMIN audit policy: DATAACCESS auditing
INSERT INTO SYSIBM.SYSAUDITPOLICIES
(AUDITPOLICYNAME, DBADMIN) VALUES('AUDDBADMIN_T','T');
We then started the audit policy and used the DATAACCESS authority authorization ID
DB0BDA to query plan table DB2R5.PLAN_TABLE (Example 10-20).
Example 10-20 DBADMIN audit policy - query a table by DATAACCESS authority
SET CURRENT SQLID = 'DB0BDA';
SELECT * FROM DB2R5.PLAN_TABLE ;
As a result, DB2 created an IFCID 361 audit trace record for authorization type
DATAACCESS. Figure 10-20 shows the OMEGAMON PE formatted audit trace for this trace
record.
TYPE     DETAIL
-------- --------------------------------------------------------------------------------
AUTHCNTL AUTH TYPE: DATAACCS
         PRIV CHECKED: SELECT  OBJECT TYPE: TAB/VIEW
         AUTHID: DB0BDA
         SOURCE OBJECT
           QUALIFIER: DB2R5
           NAME: PLAN_TABLE
         TARGET OBJECT
           QUALIFIER: N/P
           NAME: N/P
         TEXT: SELECT * FROM DB2R5.PLAN_TABLE
Figure 10-20 OMEGAMON PE auth type DATAACCESS audit report

10.2.8 ACCESSCTRL authority
DB2 10 introduces the ACCESSCTRL authority, which allows you to grant and revoke explicit
privileges to AUTHIDs and ROLEs. The ACCESSCTRL authority itself does not hold any SQL
DML privileges that allow access to user tables.
The ACCESSCTRL authority is available in DB2 10 NFM.
Privileges held by ACCESSCTRL
ACCESSCTRL holds the grantable privileges shown in Table 10-11.
Table 10-11 ACCESSCTRL implicitly held grantable privileges
Privilege category              Description
All catalog tables              SELECT, UNLOAD
All updatable catalog tables    INSERT, UPDATE, DELETE, except for
                                SYSIBM.SYSAUDITPOLICIES
Grant, revoke                   Grant all grantable privileges except for system DBADM,
                                DATAACCESS, ACCESSCTRL, and CREATE_SECURE_OBJECT.
                                Revoke privileges granted by others using the BY clause, except
                                for system DBADM, DATAACCESS, ACCESSCTRL, and
                                CREATE_SECURE_OBJECT.

Granting and revoking the ACCESSCTRL authority
The ACCESSCTRL authority can be granted or revoked only by the SECADM authority.
Revoking ACCESSCTRL does not revoke privileges and authorities granted by
ACCESSCTRL. The WITH GRANT option is not supported with ACCESSCTRL and is
ignored when used in grant statements. In our DB2 10 environment, we used the SQL
statements shown in Example 10-21 to grant and revoke the ACCESSCTRL privilege for the
authorization ID DB0BAC.
Example 10-21 Grant and revoke the ACCESSCTRL privilege
GRANT ACCESSCTRL ON SYSTEM TO DB0BAC;
REVOKE ACCESSCTRL ON SYSTEM FROM DB0BAC BY DB0BSECA
NOT INCLUDING DEPENDENT PRIVILEGES;
When you revoke the ACCESSCTRL privilege, you must specify the NOT INCLUDING
DEPENDENT PRIVILEGES clause. Otherwise, the revoke statement fails with SQLCODE
-104.
Revoking privileges granted by others
If you use the ACCESSCTRL authority to revoke privileges granted by others, you must
specify the BY clause in the revoke statement as shown in Example 10-22.
Example 10-22 ACCESSCTRL to revoke privileges granted by others
REVOKE DATAACCESS ON SYSTEM FROM DB0BDA BY DB0BSECA
NOT INCLUDING DEPENDENT PRIVILEGES ;
If you do not specify the BY clause, the revoke statement fails with SQLCODE -556.
RACF/DB2 Access Control Authorization Exit users
You can define the ACCESSCTRL authority in RACF for use with the RACF/DB2 Access
Control Authorization Exit. For a list of RACF profiles that are available to manage the DB2 10
security authorities through RACF, refer to 10.2.11, Using RACF profiles to manage DB2 10
authorities on page 377.
Auditing the ACCESSCTRL authority
You can use a category DBADMIN audit policy to keep track of activities that are performed
by ACCESSCTRL authorities. In our DB2 10 environment, we used the SQL statement shown
in Example 10-23 to create an audit policy to audit all database administrative tasks
performed by ACCESSCTRL authorities. We set the DBADMIN column to G to audit only
ACCESSCTRL authorities.
Example 10-23 DBADMIN audit policy: ACCESSCTRL auditing
INSERT INTO SYSIBM.SYSAUDITPOLICIES
(AUDITPOLICYNAME, DBADMIN) VALUES('AUDDBADMIN_G','G');
We then started the audit policy and used the ACCESSCTRL authority authorization ID
DB0BAC to grant the select privilege on table DB2R5.PLAN_TABLE to public (refer to
Example 10-24).
Example 10-24 DBADMIN audit policy: Grant privilege by ACCESSCTRL authority
SET CURRENT SQLID = 'DB0BAC';
GRANT SELECT ON TABLE DB2R5.PLAN_TABLE TO PUBLIC;
As a result, DB2 created an IFCID 361 audit trace record for authorization type
ACCESSCTRL.
Figure 10-21 shows the OMEGAMON PE formatted audit trace for this trace record.
TYPE     DETAIL
-------- --------------------------------------------------------------------------------
AUTHCNTL AUTH TYPE: ACCSCTRL
         PRIV CHECKED: SELECT  OBJECT TYPE: TAB/VIEW
         AUTHID: DB0BAC
         SOURCE OBJECT
           QUALIFIER: DB2R5
           NAME: PLAN_TABLE
         TARGET OBJECT
           QUALIFIER: DB0BAC
           NAME: N/P
         TEXT: GRANT SELECT ON TABLE DB2R5.PLAN_TABLE TO PUBLIC
Figure 10-21 OMEGAMON PE auth type ACCESSCTRL audit report

10.2.9 Authorities for SQL tuning
DB2 10 introduces the EXPLAIN and SQLADM authorities. These authorities have no access
to user data; they hold only the privileges that are required for SQL query tuning activities,
such as explain, prepare, and describe table.
Using these authorities, you can enable application developers and SQL tuning specialists to
perform SQL tuning even in production environments, without exposing user data. This
function is supported by SQL tuning tools. The privileges that are provided by the SQLADM
authority are a superset of the privileges that are provided by the EXPLAIN privilege (see
Figure 10-22).
Figure 10-22 SQLADM authority and EXPLAIN privilege
EXPLAIN privilege
Using the EXPLAIN privilege, you can perform SQL explain, prepare, and describe table
statements without having the privilege to execute the explainable SQL statement that is to be
explained, prepared, or described.
Availability
The EXPLAIN privilege is available in DB2 10 NFM.
Privileges held by the EXPLAIN privilege
With the EXPLAIN privilege, you can perform the tasks described in Table 10-12 without
requiring access to user data.
Table 10-12 Privileges of EXPLAIN privilege
Privilege                   Description
Explain                     SQL EXPLAIN with options PLAN and ALL
Prepare                     Prepare statements without binding an executable statement
Describe table              Obtain table structure information
Bind                        Bind packages with the following options:
                            EXPLAIN(ONLY): explains the SQL statements contained in a DBRM;
                            cannot be used with rebind.
                            SQLERROR(CHECK): performs all syntax and semantic checks on the
                            SQL statements contained in a DBRM.
CURRENT EXPLAIN             Explain dynamic SQL statements executing under the special register
MODE = EXPLAIN              CURRENT EXPLAIN MODE = EXPLAIN

In addition to the EXPLAIN privileges listed in Table 10-12, Figure 10-22 shows that the
SQLADM authority provides: EXPLAIN STMTCACHE ALL, STMTID, and STMTTOKEN;
start, stop, and display profile commands; the RUNSTATS and MODIFY STATISTICS
utilities; insert, update, and delete on updatable catalog tables (except for
SYSIBM.SYSAUDITPOLICIES); catalog queries; and the MONITOR2 privilege.
Granting and revoking the EXPLAIN privilege
The EXPLAIN privilege can be granted or revoked by the SECADM authority or
ACCESSCTRL authority. An authorization ID or role that holds the EXPLAIN privilege with the
GRANT option can also revoke the EXPLAIN privileges that it granted. In our DB2 10
environment, we used the SQL statements shown in Example 10-25 to
grant and revoke the EXPLAIN privilege for user DB2R52. In this revoke example, DB2 does
not perform a cascading revoke of EXPLAIN privileges granted by authorization ID DB2R52.
Example 10-25 Grant, revoke the explain privilege
GRANT EXPLAIN ON SYSTEM TO DB2R52 WITH GRANT OPTION;
REVOKE EXPLAIN ON SYSTEM FROM DB2R52 BY DB0BSECA
NOT INCLUDING DEPENDENT PRIVILEGES;
Other privileges required by the EXPLAIN privilege
To execute the explain SQL statement, the authorization ID or role to which you assigned the
EXPLAIN privilege also needs plan or package execution authorization, which is required to
execute dynamic SQL. For example, you might want to grant that authorization ID the
execute privilege on plan DSNTEP2 for mainframe-based dynamic SQL access or on
package collection NULLID for DRDA-based dynamic SQL access.
The authorization ID or role to which you assign the EXPLAIN privilege also needs all privileges on
the set of EXPLAIN tables for that authorization ID or role. For example, if you grant only the
EXPLAIN privilege to authorization ID DB2R52 and subsequently create the EXPLAIN tables
in schema DB2R52, authorization ID DB2R52 does not become the EXPLAIN table owner. In
that case, you must grant all privileges on all EXPLAIN tables in schema DB2R52 explicitly to
authorization ID DB2R52 so that DB2R52 can further process EXPLAIN tables.
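For instance, under SECADM authority you might grant the EXPLAIN tables explicitly.
PLAN_TABLE is shown as one representative EXPLAIN table; repeat the grant for the other
EXPLAIN tables in the schema:

SET CURRENT SQLID = 'DB0BSECA';
GRANT ALL ON TABLE DB2R52.PLAN_TABLE TO DB2R52;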
CURRENT EXPLAIN MODE not to be used with EXPLAIN privilege
The EXPLAIN privilege provides only the privileges described in Table 10-12. Any attempt to
execute an SQL DML statement other than explain, prepare, or describe table fails. For
example, if you hold only the EXPLAIN privilege and use the CURRENT EXPLAIN MODE
special register to run an SQL query in EXPLAIN mode, your query fails with SQLCODE
-552, because the EXPLAIN privilege does not hold the EXPLAIN MONITORED
STATEMENTS privilege (see the SQL error message in Figure 10-23 for details).
---------+---------+---------+---------+---------+---------+---------+--------
set current explain mode = EXPLAIN;
---------+---------+---------+---------+---------+---------+---------+--------
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 0
---------+---------+---------+---------+---------+---------+---------+--------
select * from dsn_statemnt_table;
---------+---------+---------+---------+---------+---------+---------+--------
DSNT408I SQLCODE = -552, ERROR: DB2R51 DOES NOT HAVE THE PRIVILEGE TO PERFORM
OPERATION EXPLAIN MONITORED STATEMENTS
---------+---------+---------+---------+---------+---------+---------+--------
Figure 10-23 EXPLAIN privilege failure with CURRENT EXPLAIN MODE special register
If you have only the privileges that are implicitly held by the EXPLAIN privilege, you need to
use the SQL EXPLAIN statement to successfully explain the SQL query (see
Example 10-26).
Example 10-26 EXPLAIN
EXPLAIN ALL SET QUERYNO=578
FOR
SELECT * FROM DSN_STATEMNT_TABLE;
New bind options EXPLAIN(ONLY) and SQLERROR(CHECK)
DB2 10 introduces the EXPLAIN(ONLY) and SQLERROR(CHECK) package bind options.
These two bind options can be used only in BIND PACKAGE commands. They are not
available for the BIND PLAN and REBIND PACKAGE commands.
The EXPLAIN privilege holds the implicit privilege to bind packages with the EXPLAIN(ONLY)
and the SQLERROR(CHECK) bind options. The EXPLAIN(ONLY) bind package option
performs an SQL EXPLAIN for each SQL statement that is contained in the DBRM. The
SQLERROR(CHECK) bind package option performs a syntax check on SQL statements that
are contained in the DBRM.
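A sketch of both options follows; the collection and DBRM member names are placeholders
for your own:

BIND PACKAGE(TESTCOLL) MEMBER(MYDBRM) EXPLAIN(ONLY)
BIND PACKAGE(TESTCOLL) MEMBER(MYDBRM) SQLERROR(CHECK)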
RACF/DB2 Access Control Authorization Exit users
You can define the EXPLAIN privilege in RACF for use with the RACF/DB2 Access Control
Authorization Exit. For a list of RACF profiles that are available to manage the DB2 10
security authorities through RACF, refer to 10.2.11, Using RACF profiles to manage DB2 10
authorities on page 377.
Auditing the EXPLAIN privilege
The audit category DBADMIN does not provide auditing for the EXPLAIN privilege, because
auditing is performed on authorities, not privileges. The EXPLAIN privilege does not allow
access to sensitive information.
An IFCID 361 trace record is written for all successful accesses by a user if the trace is
started using the IFCID keyword.
SQLADM authority
The SQLADM authority implicitly holds the privileges of the EXPLAIN privilege. Additionally,
the SQLADM authority allows you to issue start, stop, and display profile commands and to
execute the RUNSTATS and MODIFY STATISTICS utilities. It also includes the MONITOR2
privilege, which allows SQLADM to start, read, and stop the instrumentation facility interface
(IFI) monitor traces.
Availability
The SQLADM authority is available in DB2 10 NFM.
Privileges held by the SQLADM authority
With the SQLADM authority, you can perform the tasks that are described in Figure 10-22 on
page 372 without requiring access to user data.
Granting and revoking the SQLADM authority
An authorization ID or role with SECADM or ACCESSCTRL authority can grant or revoke the
SQLADM privilege. In our DB2 10 environment, we used the SQL statements shown in
Example 10-27 to grant and revoke the SQLADM privilege for user DB2R51. In this example,
DB2 performs a cascading revoke of SQLADM privileges granted by authorization ID
DB2R51.
Example 10-27 Grant, revoke SQLADM privilege
GRANT SQLADM ON SYSTEM TO DB2R51 WITH GRANT OPTION;
REVOKE SQLADM ON SYSTEM FROM DB2R51 BY DB0BSECA
INCLUDING DEPENDENT PRIVILEGES;
CURRENT EXPLAIN MODE special register
With the SQLADM system privilege, you can set the CURRENT EXPLAIN MODE special
register to EXPLAIN to dynamically explain a workload of SQL statements. As shown in
Figure 10-24, you can dynamically explain a series of SQL statements using the CURRENT
EXPLAIN MODE special register.
set current explain mode = EXPLAIN
DB20000I The SQL command completed successfully.
select 1 from sysibm.sysdummy1
SQL0217W The statement was not executed as only Explain information requests
are being processed. SQLSTATE=01604
set current explain mode = NO
DB20000I The SQL command completed successfully.
SELECT PROGNAME,CREATOR,TNAME,ACCESSTYPE,STMTTOKEN FROM PLAN_TABLE
DB20000I The SQL command completed successfully.
PROGNAME CREATOR TNAME ACCESSTYPE STMTTOKEN
-------- -------- ------------------ ---------- ----------
SQLC2H20 SYSIBM SYSDUMMY1 R (NULL)
SQLC2H20 SYSIBM SYSDUMMY1 R (NULL)
Figure 10-24 SQLADM authority to use CURRENT EXPLAIN MODE special register

As indicated by SQL0217W, the query requested only explain information. As a result, no
executable statement was prepared and stored in the dynamic statement cache. For this
reason, the plan table has no statement token information for the SQL queries shown in
Figure 10-24.

RACF/DB2 Access Control Authorization Exit users
You can define the SQLADM authority in RACF for use with the RACF/DB2 Access Control
Authorization Exit. For a list of RACF profiles that are available to manage the DB2 10
security authorities through RACF, refer to 10.2.11, Using RACF profiles to manage DB2 10
authorities on page 377.
Auditing the SQLADM authority
You can use the category DBADMIN audit policy to keep track of activities performed by
SQLADM authorities. In our DB2 10 environment, we used the SQL statement shown in
Example 10-28 to create an audit policy to audit all database administrative tasks performed
by SQLADM authorities. We set the DBADMIN column to K to audit only SQLADM
authorities.
Example 10-28 DBADMIN audit policy: SQLADM auditing
INSERT INTO SYSIBM.SYSAUDITPOLICIES
(AUDITPOLICYNAME, DBADMIN) VALUES('AUDDBADMIN_K','K')
We then started the audit policy and used the SQLADM authority authorization ID DB2R51 to
query table SYSIBM.SYSVOLUMES (refer to Example 10-29).
Example 10-29 DBADMIN audit policy: Query table by SQLADM authority
SELECT * FROM SYSIBM.SYSVOLUMES
As a result, DB2 created an IFCID 361 audit trace record for authorization type SQLADM.
Figure 10-25 shows the OMEGAMON PE formatted audit trace for this trace record.
TYPE     DETAIL
-------- --------------------------------------------------------------------------------
AUTHCNTL AUTH TYPE: SQLADM
         PRIV CHECKED: SELECT  OBJECT TYPE: TAB/VIEW
         AUTHID: DB2R51
         SOURCE OBJECT
           QUALIFIER: SYSIBM
           NAME: SYSVOLUMES
         TARGET OBJECT
           QUALIFIER: N/P
           NAME: N/P
         TEXT: SELECT * FROM SYSIBM.SYSVOLUMES
Figure 10-25 OMEGAMON PE auth type SQLADM audit report

10.2.10 The CREATE_SECURE_OBJECT system privilege
DB2 10 introduces the CREATE_SECURE_OBJECT system privilege, which is required to
secure triggers or scalar functions that are used on row permission or column mask enforced
tables. For details about row and column mask enforced tables, refer to 10.5, Support for row
and column access control on page 384.
If you use triggers or functions on tables that are row permission or column mask enforced,
your trigger or function must be marked as secure.
10.2.11 Using RACF profiles to manage DB2 10 authorities
If you use the DB2 RACF Access Control Module to enforce security policy, you can use the
RACF profiles listed in Table 10-13 to manage the DB2 10 administrative authorities and
privileges.
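As a sketch of defining one of these profiles in RACF (assuming subsystem ID DB0B and the
profile names listed in Table 10-13; your class and naming conventions may differ):

RDEFINE DSNADM DB0B.SECADM UACC(NONE)
PERMIT DB0B.SECADM CLASS(DSNADM) ID(DB2R53) ACCESS(READ)
SETROPTS RACLIST(DSNADM) REFRESH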
Table 10-13 DB2 RACF profiles for DB2 10 new authorities

Authority name  RACF profile          RACF class
SECADM          DB2-ssid.SECADM       DSNADM
ACCESSCTRL      DB2-ssid.ACCESSCTRL   DSNADM
System DBADM    DB2-ssid.SYSDBADM     DSNADM
DATAACCESS      DB2-ssid.DATAACCESS   DSNADM
SQLADM          DB2-ssid.SQLADM       MDSNSM or GDSNSM
EXPLAIN         DB2-ssid.EXPLAIN      MDSNSM or GDSNSM

Additional information: For more details about DB2 RACF profiles, refer to DB2 10 for
z/OS RACF Access Control Module Guide, SC19-2982.
10.2.12 Separating SECADM authority from SYSADM and SYSCTRL authority
DB2 10 introduces DSNZPARM SEPARATE_SECURITY, which you can use to separate the
security administration authority from the system and database administration authorities.
You can set this DSNZPARM to YES or NO. The default value is NO.
DSNZPARM SEPARATE_SECURITY can be updated online. The online change can be
performed only by users with installation SYSADM or SECADM authority.
Security administration authorities not separated from SYSADM
If you specify NO (the default) for DSNZPARM SEPARATE_SECURITY, the install SYSADM
and SYSADM authorities implicitly have the SECADM authority and can continue to manage
all security objects and to grant and revoke privileges granted by others. Additionally, the
SYSADM authority manages security objects, including the row-level and column-level
access control security objects available with DB2 10.
Users with SYSCTRL authority implicitly have ACCESSCTRL authority and can continue to
perform security-related tasks with no authority to access user data. Additionally, SYSCTRL
can manage roles and can set the BIND OWNER to any value, provided that the specified
owner qualifies for the data access privilege that is required by the SQL DML statements
contained in the package.
378 DB2 10 for z/OS Technical Overview
Figure 10-26 illustrates the privileges implicitly held by the SYSADM authorities when DB2 is
configured with DSNZPARM SEPARATE_SECURITY set to NO. In this scenario, SYSADM
can execute all privileges, including those of the SECADM and ACCESSCTRL authorities.
Figure 10-26 SECADM and ACCESSCTRL not separated from SYSADM
Security administration authorities separated from SYSADM
Install SYSADM is not affected by the SEPARATE_SECURITY setting. Install SYSADM can
always grant, revoke, and manage security objects.
After you set the SEPARATE_SECURITY system parameter to YES, the SYSADM authority
cannot manage security objects or grant and revoke privileges, and the SYSCTRL authority
can no longer manage roles. From this point, the SECADM authority grants and revokes
privileges and manages these objects instead. SYSADM and SYSCTRL cannot revoke
privileges granted by others. Existing privileges granted by SYSADM and SYSCTRL are left
intact; you need a user with ACCESSCTRL or SECADM authority to revoke any of them.
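As a sketch of activating this separation (the parameter macro placement and module name are assumptions; verify against the installation documentation), the change is made in the DSNZPARM source generated by job DSNTIJUZ and then activated online by a user with install SYSADM or SECADM authority:

```
DSN6SPRM SEPARATE_SECURITY=YES        (reassemble the parameter module via DSNTIJUZ)
-SET SYSPARM LOAD(DSNZPARM)           (activate the rebuilt module online)
```
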
Figure 10-27 illustrates the separation of security administrative duties from the SYSADM
authorities when DB2 is configured with DSNZPARM SEPARATE_SECURITY set to YES. In
this scenario, the security administration authority SECADM is separated from SYSADM. The
SYSADM authority loses the power to execute the privileges held by the SECADM and
ACCESSCTRL authorities; install SYSADM, as noted above, is not affected.
Figure 10-27 SECADM and ACCESSCTRL separated from SYSADM
Important: The SYSADM authority can continue to grant and revoke privileges on tables
for which it is the table owner. If you do not desire that behavior, you can use the DB2
CATMAINT utility to change the table owner to a different authorization ID or role name.
Important: With system parameter SEPARATE_SECURITY set to YES, the following
conditions are true:
The SYSADM authority cannot set the CURRENT SQLID special register to an ID that
is not included in the list of secondary authorization IDs (RACF groups).
The SYSADM, SYSCTRL and system DBADM authorities cannot set BIND OWNER to
an ID that is not included in the list of secondary authorization IDs.
If you plan to separate the SECADM authority from the SYSADM authority by setting
DSNZPARM SEPARATE_SECURITY to YES, you must consider these runtime behavior
changes. For example, if your application runs under the SYSADM authority and is used to
set the CURRENT SQLID special register to an arbitrary value, your application might not
work correctly. To avoid similar situations, we strongly recommend using the DB2 10
policy-based audit facility to monitor and carefully analyze the use of SYSADM authorities
in your environment before setting DSNZPARM SEPARATE_SECURITY to YES.
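Mirroring Example 10-28, such an audit policy might be created and started as follows (a sketch; the SYSADMIN category column value and the policy name are assumptions to verify against the SYSIBM.SYSAUDITPOLICIES documentation):

```sql
INSERT INTO SYSIBM.SYSAUDITPOLICIES
  (AUDITPOLICYNAME, SYSADMIN) VALUES('AUDSYSADM', 'S');
-- then start it with: -STA TRACE(AUDIT) DEST(GTF) AUDTPLCY(AUDSYSADM)
```
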
10.2.13 Minimize need for SYSADM authorities
An authorization ID or role with SYSADM authority can access data from any table in the
entire DB2 subsystem. To minimize the risk of exposing sensitive data to too many people,
you can plan to reduce the number of authorization IDs or roles that have SYSADM authority.
In DB2 10, the SYSADM authority is required only for certain system administrative tasks
(provided the other administrative authorities are appropriately in use) and only during a short
window of time.
To limit the number of users with SYSADM authority, you can separate the privileges of the
SYSADM authority and migrate them to other administrative authorities, which allows you to
minimize the need for granting the SYSADM authority. For example, you can use this function
to perform the following tasks:
The INSTALL SYSADM authority grants SYSADM authority to a RACF group or role.
Users eligible to have SYSADM authority are connected to the SYSADM RACF group
just for the time the SYSADM authority is needed to perform the task that requires
SYSADM authority.
The SYSADM trusted context is enabled just for the time the SYSADM authority is
needed to perform the task that requires SYSADM authority
DSNZPARM SECADM1 and SECADM2 assign SECADM authority to security
administrators who perform security administration and manage access control.
The SECADM authority grants ACCESSCTRL authority to database administrators who
control access to DB2.
The SECADM authority grants system DBADM authority to database administrators who
only manage objects.
The SECADM authority grants DATAACCESS authority to database administrators who
only access data.
The SECADM or ACCESSCTRL authority grants SQLADM authority to performance
analysts who are responsible for analyzing DB2 performance.
The SQLADM authority grants the EXPLAIN privilege to application architects and
developers who need to explain SQL statements or collect metadata information about the
SQL statements.
The SECADM or ACCESSCTRL authority grants the SYSOPR authority and the ARCHIVE,
BSDS, CREATESG, and STOSPACE privileges to system administrators for performing
system administrative tasks.
The SECADM authority uses policy-based auditing to monitor the usage of the
administrative authorities.
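A sketch of such grants follows (the role names and authorization IDs are assumptions; verify the exact GRANT system-privilege syntax in the DB2 10 SQL Reference):

```sql
GRANT DBADM WITHOUT DATAACCESS WITHOUT ACCESSCTRL ON SYSTEM TO ROLE DBA_ROLE;
GRANT DATAACCESS ON SYSTEM TO ROLE DATA_ROLE;
GRANT SQLADM ON SYSTEM TO DB2R52;
GRANT EXPLAIN ON SYSTEM TO APPDEV1;
```
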
You can furthermore protect data from SQL access through SYSADM authorities by
implementing row access control on tables that contain business sensitive information. For
details about row access controls, refer to 10.5, Support for row and column access control
on page 384.
10.3 System-defined routines
In DB2 10, any function or procedure that is created or altered by the install SYSADM
authority is marked as system defined (the SYSIBM.SYSROUTINES.SYSTEM_DEFINED
catalog column is set to S). Furthermore, the system DBADM and SQLADM authorities hold
the implicit execute privilege on any routine that is marked as system defined and on any
package executed within such system-defined routines. In this section, we discuss the
installation of system-defined routines and the addition of user-defined routines.
We also discuss DB2-supplied routines in 9.9, DB2-supplied stored procedures on
page 329 and 12.13, Simplified installation and configuration of DB2-supplied routines on
page 523.
10.3.1 Installing DB2-supplied system-defined routines
When you install and configure the DB2 for z/OS supplied routines, you run install job
DSNTRIN under install SYSADM authority, which marks any routine installed by DSNTRIN as
system defined. As a result, these routines can be executed by the system DBADM and
SQLADM authorities, which fits well with popular SQL tuning tools. For example, IBM Optim
Query Tuner requires many of the DB2-supplied stored procedures to be available and
accessible by an authority that focuses on SQL tuning activities.
The following DB2-supplied system-defined routines are commonly used:
Stored procedures for administration
JDBC metadata stored procedures that are used by the JDBC driver when accessing the
DB2 catalog
WLM_REFRESH and WLM_SET_CLIENT_INFO
Stored procedures for XML schema administration
Real Time Statistics stored procedure DSNACCOX
SOAP user-defined functions
MQ user-defined functions
The DB2 utility stored procedures DSNUTILS and DSNUTILU
DSNWZP stored procedure for retrieving DB2 DSNZPARM and DSNHDECP parameter
settings
10.3.2 Define your own system-defined routines
If you want to mark one of your own routines as system-defined to make it implicitly
executable for authorization IDs and roles with system DBADM and SQLADM authority, you
need an install SYSADM authority to create or to alter that routine. For example, your install
SYSADM authorization ID can perform a dummy alter on that routine (for example, alter one
of the routine parameters to its current setting).
10.3.3 Mark user-provided SQL table function as system defined
In our DB2 10 environment, we granted user DB2R52 only the SQLADM authority. The
SQLADM authority holds the implicit execute privilege on system-defined routines. When
DB2R52 tried to execute the GET_SECURITY_ENFORCED_TABLES table UDF to check
whether any row or column access controls are activated on the customer table, SQLCODE
217 was issued, as shown in Figure 10-28, because at this point the table UDF is not marked
as system defined.
Additional information: For a complete list of DB2-supplied stored procedures, refer to
DB2 10 for z/OS Installation and Migration Guide, GC19-2974. For details about
DB2-supplied stored procedures for administration, refer to DB2 10 for z/OS Administration
Guide, SC19-2968.
Figure 10-28 SQLADM Execution failure non-system-defined routine
---------+---------+---------+---------+---------+---------+---------+---------
SELECT * FROM TABLE(DB2R5.GET_SECURITY_ENFORCED_TABLES(
'DB2R5','CUSTOMER'
)) AS A;
---------+---------+---------+---------+---------+---------+---------+---------
TBCREATOR TBNAME TYPE NAME
---------+---------+---------+---------+---------+---------+---------+---------
DSNT404I SQLCODE = 217, WARNING: THE STATEMENT WAS NOT EXECUTED AS ONLY
EXPLAIN INFORMATION REQUESTS ARE BEING PROCESSED
---------+---------+---------+---------+---------+---------+---------+---------
Under install SYSADM, we performed a dummy ALTER FUNCTION statement to mark the
table UDF as system defined, as shown in Example 10-30, which set the corresponding
SYSIBM.SYSROUTINES.SYSTEM_DEFINED catalog column to S.
Example 10-30 SQLADM marking UDF as system defined through dummy alter
SET CURRENT SQLID = 'DB0BINST';
ALTER FUNCTION DB2R5.GET_SECURITY_ENFORCED_TABLES
(XTBCREATOR VARCHAR(128),XTBNAME VARCHAR(128) ) RESTRICT
READS SQL DATA;
Now that the table UDF is marked as system defined, user DB2R52 can execute the
GET_SECURITY_ENFORCED_TABLES table UDF successfully (see Figure 10-29).
Figure 10-29 SQLADM run system-defined user provided UDF
---------+---------+---------+---------+---------+---------+---------+---------
SELECT * FROM TABLE(DB2R5.GET_SECURITY_ENFORCED_TABLES(
'DB2R5','CUSTOMER'
)) AS A;
---------+---------+---------+---------+---------+---------+---------+---------
TBCREATOR TBNAME TYPE NAME
---------+---------+---------+---------+---------+---------+---------+---------
DB2R5 CUSTOMER COLUMN INCOME_BRANCH
DB2R5 CUSTOMER ROW SYS_DEFAULT_ROW_PERMISSION__CUSTOMER
DB2R5 CUSTOMER ROW RA01_CUSTOMERS
Important: Do not use the install SYSADM authority to create or alter routines that should
not be marked as system defined. If, in your day-to-day operations, you use the install
SYSADM authority to create or alter routines that access business-sensitive data, you
unintentionally expose these routines to execution by the SQLADM and system DBADM
authorities.
10.4 The REVOKE dependent privilege clause
When you revoke privileges in DB2 9, the revoke cascades to dependent privileges. In some
situations, this behavior is not desirable, especially when you want to keep privileges that
were granted by an authority that you plan to revoke. For example, you might need to revoke
a SYSADM authority without losing the privileges that were granted by that authority.
DB2 10 addresses this issue by adding a dependent privileges clause to the REVOKE
statement that allows you to control whether dependent privileges go through cascade revoke
processing. The dependent privileges clause is available in DB2 10 NFM.
10.4.1 Revoke statement syntax
Figure 10-30 shows the DB2 10 SQL REVOKE syntax diagram.
Figure 10-30 Revoke syntax diagram
>>-REVOKE--authorization-specification-------------------------->
.-,----------------------.
V |
>--FROM----+-authorization-name-+-+----------------------------->
+-ROLE--role-name----+
'-PUBLIC-------------'
>--+------------------------------------+----------------------->
| .-,----------------------. |
| V | |
'-BY--+---+-authorization-name-+-+-+-'
| '-ROLE--role-name----' |
'-ALL------------------------'
(1)
.-RESTRICT-----.
>--+------------------------------------+--+--------------+----><
+-INCLUDING DEPENDENT PRIVILEGES-----+
'-NOT INCLUDING DEPENDENT PRIVILEGES-'
DB2 10 supports the new dependent privileges clause for all SQL REVOKE statements.
10.4.2 Revoke dependent privileges system default
The dependent privileges clause on the SQL REVOKE statement is controlled by
DSNZPARM REVOKE_DEP_PRIVILEGES. The system default for the dependent privileges
clause depends on the authority being revoked and the setting of DSNZPARM
REVOKE_DEP_PRIVILEGES. The possible system defaults are listed in Table 10-14.
Table 10-14 Default behavior for revoking dependent privileges

Privilege to be revoked is ACCESSCTRL,  DSNZPARM setting for     Dependent privileges
DATAACCESS, or system DBADM             REVOKE_DEP_PRIVILEGES    default
YES                                     NO/YES/SQLSTMT           NO
NO                                      NO                       NO
NO                                      YES                      YES
NO                                      SQLSTMT                  YES

When you revoke any of the ACCESSCTRL, DATAACCESS, or system DBADM
administrative authorities, you must specify the NOT INCLUDING DEPENDENT
PRIVILEGES clause. Otherwise, the revoke statement fails with SQLCODE -104. This
mechanism protects you from unintentional cascading revokes, which can potentially harm
the availability of the application system.
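For example (the grantee authorization ID and role name are assumptions for illustration), revokes that leave dependent grants intact might look like the following:

```sql
REVOKE SELECT ON DB2R5.CUSTOMER FROM DB2R53
  NOT INCLUDING DEPENDENT PRIVILEGES;
REVOKE DBADM ON SYSTEM FROM ROLE DBA_ROLE
  NOT INCLUDING DEPENDENT PRIVILEGES;
```

The second statement illustrates the rule above: for the system DBADM authority, the NOT INCLUDING DEPENDENT PRIVILEGES clause is mandatory.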
10.5 Support for row and column access control
DB2 10 introduces a method of implementing row and column access control as an additional
layer of security that you can use to complement the privileges model and to enable DB2 to
take part in your efforts to comply with government regulations for security and privacy. You
use row and column access control to control SQL access to your tables at the row level,
column level, or both, based on your security policy.
After you enforce row- or column-level access control for a table, any SQL DML statement
that attempts to access that table is subject to the row and column access control rules that
you define. During table access, DB2 transparently applies these rules to every user,
including the table owner and the install SYSADM, SYSADM, SECADM, system DBADM,
and DBADM authorities, which can help to close remaining security loopholes. For example,
you can use the new row and column access controls to transparently hide table rows from
users with SYSADM authority or to mask confidential table column information.
DB2 enforces row and column access control at table level, transparent to any kind of SQL
DML operation or applications that access that table through SQL DML. Existing views do not
need to be aware of row or column access controls because the access rules are enforced
transparently on their underlying tables and table columns. When a table with row- or
column-level control is accessed through SQL, all users of a table are affected, regardless of how they
access the table (through an application, through ad-hoc query tools, through report
generation tools, or other access) or what authority or privileges they hold.
Row and column access control is available in DB2 10 NFM.
10.5.1 Authorization
Only the SECADM authority has the privilege to manage row and column access controls,
row permissions and column masks. That privilege cannot be granted to others. The
management tasks to be performed exclusively by SECADM include:
Alter a table to activate or deactivate row access control
Alter a table to activate or deactivate column access control
Create, alter, and drop row permissions objects
Create, alter, and drop column mask objects
The SECADM authority does not need to have object or execute privileges on tables, views,
or routines that are referenced by row permissions and column mask objects.
10.5.2 New terminology
Before we discuss the new DB2 10 access control features in more detail, we clarify the new
terminology for these new controls and objects here:
Access control refers to the method that DB2 10 introduces to implement access control at
row and at column level.
Row access control is the DB2 security mechanism that uses SQL to control access to a
table at row level. When you activate row access control by altering the table, DB2 defines
a default row permission with a predicate of 1=0 to prevent any access to that table
(because the predicate 1=0 can never be true).
Column access control is the DB2 security mechanism that uses SQL to control access to
a table at column level. You activate column access control by altering the table.
A row permission is a table-based database object that describes the row filtering rule
DB2 implicitly applies to all table rows for every user whenever the table is accessed using
SQL DML. You create a row permission with the new CREATE PERMISSION DDL. As
shown in Figure 10-31, DB2 enforces a row permission only if the row permission is explicitly
enabled and row access control for the table is activated.
Figure 10-31 Row permission enforcement
(Figure 10-31 illustrates the decision flow: DB2 enforces a row permission only when the row
permission is enabled and row access control for the table is activated. When row access
control is activated but no enabled row permission exists, DB2 enforces the default row
permission predicate of 1=0.)
A column mask is a table-based database object that describes the column masking rule
DB2 implicitly applies to a table column for every user whenever the table column is
referenced using SQL DML. You create a column mask with the new CREATE MASK DDL.
As shown in Figure 10-32, DB2 enforces a column mask only if the column mask is
explicitly enabled and column access control for the table is activated.
Figure 10-32 Column mask enforcement
10.5.3 Object types for row and column based policy definition
DB2 row access and column access control is similar to a table-based switch that you can set
to activate or deactivate the enforcement of row and column security rules for a table. Except
for the default row access control predicate (1=0), activating row and column access control
itself does not define any of the row and column security policies that you want to enforce for
individual users, groups or roles.
In DB2 10, you can use the following object types to define table row and table column policy
rules:
Row permission object
Column mask object
These object types are managed through SQL DCL operations that are performed exclusively
by the SECADM authority. A row permission object is a row filter that DB2 implicitly applies
when the table is accessed through SQL DML. A column mask is similar to a scalar function
that returns a masked value for a column. If enforced for the column, the column mask is
implicitly applied by DB2 when that column is referenced in an SQL DML operation.
You can create and enable these objects any time, even if row access and column access
controls for the table are not enabled. However, DB2 enforces these object types only if you
explicitly enable them and if you alter the table to activate row access control (prerequisite to
enforce row permissions) or column access control (prerequisite to enforce column masks)
for the table. The order in which you activate controls or enable these object types is
irrelevant. DB2 enforces row permission and column mask objects as soon as you enable
these objects and their corresponding access controls. For more information about when DB2
enforces row permissions and column masks, refer to the illustrations provided in
Figure 10-31 and in Figure 10-32.
(Figure 10-32 illustrates the corresponding decision flow for column masks: DB2 enforces a
column mask only when the column mask is enabled and column access control for the table
is activated.)
10.5.4 SQL DDL for managing new access controls and objects
DB2 10 implements the following SQL DDL changes to support the creation and
management of the new access controls and objects:
Row access control
In DB2 10, the ALTER TABLE DDL statement is extended to support the activation and
deactivation of row access control, as shown in Figure 10-33. DB2 provides no support to
activate or deactivate row access control in the CREATE TABLE statement.
Figure 10-33 Alter table row access control
ACTIVATE ROW ACCESS CONTROL
DEACTIVATE ROW ACCESS CONTROL
Column access control
In DB2 10, the ALTER TABLE DDL statement is extended to support the activation and
deactivation of column access control, as shown in Figure 10-34. DB2 provides no support
to activate or deactivate column access control in the CREATE TABLE statement.
Figure 10-34 Alter table column access control
ACTIVATE COLUMN ACCESS CONTROL
DEACTIVATE COLUMN ACCESS CONTROL
Row permission
DB2 10 provides an SQL DDL interface to create and alter row permission objects.
Figure 10-35 shows the SQL DDL statement to create row permission objects.
Figure 10-35 CREATE PERMISSION SQL DDL
CREATE PERMISSION permission-name ON table-name
AS correlation-name
Figure 10-36 shows the SQL DDL statement to alter a row permission.
Figure 10-36 ALTER PERMISSION DDL
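The pieces above fit together as in the following sketch, which activates both controls on the DB2R5.CUSTOMER table used earlier in this chapter (the column name INCOME, the predicate, and the RACF group names are assumptions for illustration):

```sql
CREATE PERMISSION RA01_CUSTOMERS ON DB2R5.CUSTOMER
  FOR ROWS WHERE VERIFY_GROUP_FOR_USER(SESSION_USER, 'TELLER') = 1
  ENFORCED FOR ALL ACCESS
  ENABLE;
ALTER TABLE DB2R5.CUSTOMER ACTIVATE ROW ACCESS CONTROL;

CREATE MASK INCOME_BRANCH ON DB2R5.CUSTOMER
  FOR COLUMN INCOME RETURN
    CASE WHEN VERIFY_GROUP_FOR_USER(SESSION_USER, 'MANAGER') = 1
         THEN INCOME
         ELSE NULL
    END
  ENABLE;
ALTER TABLE DB2R5.CUSTOMER ACTIVATE COLUMN ACCESS CONTROL;
```

As discussed above, the order of the CREATE and ALTER TABLE statements does not matter; DB2 starts enforcing the objects as soon as both the object is enabled and the corresponding access control is activated.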
(1)
ASENSITIVE
INSENSITIVE
DYNAMIC
SENSITIVE
STATIC
NO SCROLL
SCROLL
holdability
returnability
rowset-positioning
fetch-first-clause
read-only-clause
update-clause
optimize-clause
isolation-clause
(2)
FOR MULTIPLE ROWS
FOR SINGLE ROW
(3)
ATOMIC
NOT ATOMIC CONTINUE ON SQLEXCEPTION
SKIP LOCKED DATA
WITHOUT EXTENDED INDICATORS
WITH EXTENDED INDICATORS
CONCENTRATE STATEMENTS OFF
CONCENTRATE STATEMENTS WITH LITERALS
Notes:
1 The same clause must not be specified more than one time. If the options are not specified, their
defaults are whatever was specified for the corresponding option in an associated statement.
2 The FOR SINGLE ROW or FOR MULTIPLE ROWS clause must only be specified for an INSERT
or a MERGE statement.
3 The ATOMIC or NOT ATOMIC CONTINUE ON SQLEXCEPTION clause must only be specified
for an INSERT statement.
Using parameter markers can provide a higher dynamic statement cache hit ratio. However,
using literals can provide a better access path. This choice is a trade-off that depends on
the performance characteristics of a dynamic SQL call. Using CONCENTRATE
STATEMENTS WITH LITERALS can at times result in degraded execution
performance. However, the biggest performance gain is for small SQL statements with
literals that get a cache hit now but did not before.
The REOPT bind option is enhanced to address this possibility of degraded performance.
Normally, REOPT(AUTO) is applicable only to dynamic SQL statements that reference
parameter markers (?). Now, if the REOPT(AUTO) bind option is specified and the
PREPARE for a dynamic SQL statement matches SQL in the dynamic statement cache with
the literal replacement character (&), then for each OPEN or EXECUTE of that dynamic
statement DB2 reevaluates that access path using the current instance of literal values. This
behavior is similar to the current REOPT(AUTO) behavior and host variable usage.
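For example, an application can request literal concentration for a single statement through the PREPARE attribute string (a sketch in embedded SQL; the host-variable names and statement text are assumptions):

```sql
-- :ATTRS   = 'CONCENTRATE STATEMENTS WITH LITERALS'
-- :STMTSQL = 'SELECT ACCT_BAL FROM TACCT WHERE ACCT_ID = 123456'
EXEC SQL PREPARE S1 ATTRIBUTES :ATTRS FROM :STMTSQL;
EXEC SQL DECLARE C1 CURSOR FOR S1;
EXEC SQL OPEN C1;
```

In the cache, the literal 123456 is replaced with the ampersand (&) so that subsequent prepares of the same statement shape can match.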
The LITERAL_REPL column is added to the DSN_STATEMENT_CACHE_TABLE to identify
those cached statements which have their literals replaced with ampersands (&). The
ampersands are also visible in the actual SQL statement text, which is externalized to the
STMT_TEXT column. Column LITERAL_REPL is defined as CHAR(1). Table 13-3 shows its
possible values.
Table 13-3 Column LITERAL_REPL values

LITERAL_REPL value  Meaning
R                   Literals are replaced with an ampersand (&).
D                   Literals are replaced with an ampersand (&); the statement can have a
                    STMT_TEXT value identical to that of another cached statement or
                    statements, but the literal reusability criteria differ.
blank               (Default) The statement was not prepared with CONCENTRATE
                    STATEMENTS WITH LITERALS, and no literal constants are replaced.

Example A-11 on page 623 shows the changes to the Dynamic Statement Cache Statement
Statistics trace record, IFCID 316. The QW0316LR field is included to indicate that the
statement used literal replacement.
Example A-1 on page 616 shows the changes to the Database Services Statistics trace
record, IFCID 002. In addition to using the existing fields QXSTFND (short prepare or cache
match found) and QXSTNFND (cache full prepare or cache match not found) for capturing
caching statistics, the new fields are also captured to track the increased benefit of using
CONCENTRATE STATEMENTS WITH LITERALS.
Finally, other trace records, such as IFCID 063, IFCID 317, and IFCID 350, which externalize
the SQL statement text, show the statement text with the literals replaced with
ampersands (&).
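To see which cached statements were concentrated, you can snap the cache with EXPLAIN STMTCACHE and query the result table (a sketch; it assumes a DSN_STATEMENT_CACHE_TABLE exists under your current SQLID):

```sql
EXPLAIN STMTCACHE ALL;
SELECT STMT_ID, LITERAL_REPL, STMT_TEXT
  FROM DSN_STATEMENT_CACHE_TABLE
 WHERE LITERAL_REPL IN ('R', 'D');
```
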
13.6 INSERT performance improvement
Heavy insert applications can hit a series of issues that hinder performance, scalability, and
availability. DB2 must remove these issues to take advantage of faster host and storage
servers.
Figure 13-8 summarizes the main enhancements on insert processing with DB2 9 and
DB2 10.
Figure 13-8 Summary of main insert performance improvements
DB2 9 introduced the following improvements for heavy INSERT processing applications:
Large index page support
Index pages larger than 4 KB (enable index compression and) reduce the index splits
proportionally to the increase in size of the page.
Asymmetric index split
Based on the insert pattern, DB2 splits the index page by choosing from several
algorithms. If an ever-increasing sequential insert pattern is detected for an index, DB2
splits index pages asymmetrically using approximately a 90/10 split. If an ever-decreasing
sequential insert pattern is detected in an index, DB2 splits index pages asymmetrically
using approximately a 10/90 split. If a random insert pattern is detected in an index, DB2
splits index pages with a 50/50 ratio.
Data sharing log latch contention and LRSN spin loop reduction
Allows for duplicate LRSN values for consecutive log records for different pages on a given
member.
More index look aside
DB2 keeps track of the index value ranges and checks whether the required entry is in the
leaf page accessed by the previous call. It also checks against the lowest and highest key
of the leaf page. If the entry is found, DB2 can avoid the getpage and traversal of the index
tree.
APPEND=YES
Requests data rows to be placed into the table by disregarding the clustering during SQL
INSERT and online LOAD operations. Rows are appended at the end of the table or
partition.
RTS LASTUSED column
RTS records the day the index was last used to process an SQL statement. Unused
indexes can be identified and dropped.
Remove log force write at new page through PK83735
Forced log writes are no longer done when inserting into a newly formatted or allocated
page for a GBP-dependent segmented or universal table space.
(Figure 13-8 also lists the DB2 10 improvements: in CM, space search improvement, index
I/O parallelism, log latch contention reduction and faster commit processing, and additional
index look aside; in NFM, INCLUDE index, MEMBER CLUSTER support in UTS, and
additional LRSN spin avoidance.)
For details, see DB2 9 for z/OS Performance Topics, SG24-7473.
DB2 10 further improves the performance for heavy INSERT applications, through:
I/O parallelism for index updates
Sequential inserts into the middle of a clustering index
13.6.1 I/O parallelism for index updates
When DB2 9 inserts a row into a table, it must perform a corresponding insert into all the
indexes that are defined on that table. All of these inserts into the indexes are done
sequentially. Each insert into an index must be completed before the insert into the next index
can start. If there are many indexes defined on the table and if the index pages that are
needed for the insert are not always in the buffer pool, the inserting transactions can suffer
from high response times due to index I/O wait times.
DB2 10 provides the ability to insert into multiple indexes that are defined on the same table in
parallel. Index insert I/O parallelism manages concurrent I/O requests on different indexes
into the buffer pool in parallel, with the idea being to overlap the synchronous I/O wait time for
different indexes on the same table. This processing can significantly improve the
performance of I/O-bound insert workloads. It can also reduce the elapsed times of LOAD
RESUME YES SHRLEVEL CHANGE utility executions, because the utility functions similarly
to a mass INSERT when inserting into indexes.
DB2 performs the insert on the first index. A conditional getpage returns the page if it is
already in the buffer pool, but it never initiates an I/O. Thus, if leaf pages are buffer hits, DB2
does not schedule the prefetch engine. When DB2 gets a buffer miss, it schedules a prefetch
engine and continues with the next index. Then, later it must do another getpage after the
prefetch completes.
Because DB2 cannot avoid waiting for I/O when reading the clustering index to find the
candidate data page, I/O parallelism cannot be performed against the clustering index. In
general, there is also no benefit in not waiting for the I/O to complete against the last index,
because DB2 will probably have to wait for the I/O to complete for the first index revisited
anyway. (An I/O is orders of magnitude slower than CPU processing time.) So, asynchronous
I/O is not scheduled for the last index or the clustering index.
In general, this enhancement benefits tables with three or more indexes. The
exceptions are tables defined with MEMBER CLUSTER, tables created with the APPEND
YES option, and tables created with the ORGANIZE BY HASH clause. In these cases,
indexes are not really used to position the rows in the table, so such tables benefit from this
enhancement with two or more indexes, rather than three or more.
All I/Os for non-leaf pages are still done synchronously. Only the leaf page I/Os can be
scheduled asynchronously.
I/O parallelism for index updates also does not apply to indexes that are defined on the
DB2 catalog and directory objects, whether DB2 created or user created. There is a small
CPU overhead in scheduling the prefetch. This enhancement benefits tables with three or
more indexes, especially in the case of poor disk performance. Tables with many large
indexes whose pages are not already in the buffer pool see the greatest performance
improvement.
I/O parallelism for index updates is active in CM, and a rebind or bind is not required.
However, it is only available for classic partitioned table spaces and universal table spaces
(partition-by-growth and range-partitioned). Segmented table spaces are not supported.
Index I/O parallelism likely reduces insert elapsed time and class 2 CPU time. Elapsed time
savings are greatest when I/O response times are high. Due to the extra overhead of
sequential prefetch, DBM1 service request block (SRB) time increases, and so does the total
CPU time. However, because the prefetch SRB time is zIIP eligible in DB2 10, the total CPU
cost can be reduced.
I/O parallelism for index updates can be disabled by setting the new online changeable
DSNZPARM parameter INDEX_IO_PARALLELISM to NO. The default is YES. You might
want to disable this function if the system has insufficient zIIP capacity to absorb the
redirected prefetch work.
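As a sketch, disabling the function involves updating the subsystem parameter module and
activating it online. The load module name DSNZPARM and the macro placement are
installation specific, so treat the following as illustrative only:

```
INDEX_IO_PARALLELISM=NO        (in the DSN6SPRM macro of the DSNZPARM module)
-SET SYSPARM LOAD(DSNZPARM)    (reload the rebuilt module without recycling DB2)
```

Because the parameter is online changeable, no DB2 restart is required for the change to
take effect.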
The new IFCID 357 and IFCID 358 are available in DB2 10 to trace the start and end of index
I/O parallel insert processing. You can use these IFCIDs to monitor, for each table row insert,
the degree of I/O parallelism, which is the number of synchronous I/O waits that DB2 avoided
during the insertions into the indexes.
Example A-8 on page 622 shows the contents of IFCID 357, and Example A-9 on page 622
shows the contents of IFCID 358.
13.6.2 Sequential inserts into the middle of a clustering index
When inserting rows into a table, a set of space management algorithms is in place to find
the candidate page where the row is to be inserted according to the clustering index. DB2 10
enhances the way the first candidate page is selected.
To select the initial candidate page, DB2 selects the data page where the row that contains
the next highest key to the row being inserted resides. If there is available space, DB2 inserts
the row into that page. However, if there is not enough space, DB2 searches for another
candidate page and eventually finds space to insert the row.
Now, consider the case where a second row is inserted that has a key higher than the row just
inserted but lower than the next highest existing row. DB2 selects the same initial candidate
page again, only to find it is still full. So, DB2 must repeat the process to find another
candidate page.
DB2 10 changes this behavior. Rather than choosing the page pointed to by the next highest
key as the initial candidate page, DB2 chooses the first candidate page based on the next
lower key. In our example that uses sequential inserts, DB2 chooses the page where the
previous row was inserted as the first candidate page to check. Because a row was just
inserted, the chances are reasonable that the page still contains enough space for this new
row. So, DB2 does not have to search for another candidate page.
This behavior helps sequential inserts into the middle of the table based on the clustering
index. On the second and subsequent sequential insert, DB2 does not have to repeatedly find
the first candidate page as full, which translates directly into CPU and getpage savings
because fewer candidate pages need to be searched for sequential insert workloads. This
performance improvement is available in CM with no rebind or bind required.
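As an illustration of the insert pattern that benefits, consider a table clustered on an
ascending sequence within a parent key. All object names here are hypothetical:

```sql
CREATE TABLE ORDER_ITEM
      (ORDER_NO  INTEGER NOT NULL,
       ITEM_SEQ  INTEGER NOT NULL,
       ITEM_DATA CHAR(100));
CREATE INDEX XOI ON ORDER_ITEM (ORDER_NO, ITEM_SEQ) CLUSTER;

-- Items for order 500 arrive in ascending ITEM_SEQ order and fall
-- between existing orders 499 and 501: sequential inserts into the
-- middle of the clustering key range. From the second insert on,
-- DB2 10 first checks the page that received the previous row.
INSERT INTO ORDER_ITEM VALUES (500, 1, 'first item');
INSERT INTO ORDER_ITEM VALUES (500, 2, 'second item');
INSERT INTO ORDER_ITEM VALUES (500, 3, 'third item');
```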
13.7 Referential integrity checking improvement
When inserting into a dependent table, DB2 must access the parent key for referential
constraint checking. DB2 10 changes help to reduce the CPU overhead of referential integrity
checking by minimizing index probes for parent keys:
DB2 10 allows sequential detection to trigger dynamic prefetch for parent key referential
integrity checking.
DB2 10 also enables index look-aside for parent key referential integrity checking. Index
look-aside is when DB2 caches key range values. DB2 keeps track of the index value
ranges and checks whether the required entry is in the leaf page accessed by the previous
call. If the entry is found, DB2 can avoid the getpage and traversal of the index tree. If the
entry is not found, DB2 checks the lowest and highest keys of the parent non-leaf page. If
the entry falls within the parent non-leaf range, DB2 must perform a getpage but can avoid
a full traversal of the index tree.
DB2 can also avoid index lookup for referential integrity checking, if the non-unique key to
be checked has been checked before.
INSERT KEY A, INSERT KEY A, .... INSERT KEY A, COMMIT;
For the first INSERT KEY A, DB2 checks the parent table index for RI; no RI checking
takes place for the subsequent inserts.
INSERT KEY A, COMMIT; INSERT KEY A, COMMIT;
Only the first INSERT checks the parent index; subsequent INSERTs do not check it.
So, for INSERTs within or outside the same commit scope, if the key is already in the child
table, DB2 does not check the parent key value again. If key A is already in the child table
(already committed), when you insert another key A (assuming a non-unique index on the
child), DB2 detects that key A is already there, so there is no need to check the parent
again: key A already satisfies the RI rule, otherwise it could not be in the child table.
However, there must be an index on the child table, with the relationship's primary key
column(s) defined as the leading columns in the index. Otherwise, you benefit only from
index look-aside on the parent table, not from the key-already-exists optimization.
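The behavior can be illustrated with a minimal parent/child sketch (object names are
hypothetical):

```sql
CREATE TABLE DEPT
      (DEPTNO   CHAR(3) NOT NULL PRIMARY KEY,
       DEPTNAME VARCHAR(36));
CREATE TABLE EMP
      (EMPNO  CHAR(6) NOT NULL,
       DEPTNO CHAR(3) NOT NULL,
       FOREIGN KEY (DEPTNO) REFERENCES DEPT);
-- Non-unique index on the child with the foreign key as leading
-- column: a prerequisite for the key-already-exists optimization.
CREATE INDEX XEMPDNO ON EMP (DEPTNO);

INSERT INTO EMP VALUES ('000010', 'A01');  -- parent index checked for 'A01'
INSERT INTO EMP VALUES ('000020', 'A01');  -- 'A01' already in child: no check
COMMIT;
INSERT INTO EMP VALUES ('000030', 'A01');  -- committed 'A01' found: no check
```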
Referential integrity checking can take advantage of these index enhancements, and hash
access can be used for parent key checking; however, referential integrity checking is not
externalized in the EXPLAIN tables.
13.8 Buffer pool enhancements
The buffer pool enhancements in DB2 10, which are all available in CM, allow you to increase
transaction throughput and take advantage of larger buffer pools by reducing latch class 14
and 24 contention, reducing buffer pool CPU overhead, and avoiding transaction I/O delays
by preloading objects into the buffer pool.
We describe these enhancements in the following sections:
Buffer storage allocation
In-memory table spaces and indexes
Reduce latch contention
13.8.1 Buffer storage allocation
In previous versions of DB2, storage was allocated for the entire size of the buffer pool
(VPSIZE) when the buffer pool was first allocated (at the first logical open of a page set), even
if no data was accessed in any table space or index using that buffer pool. Now, buffer pool
storage is allocated on demand as data is brought in. If a query touches only a few data
pages, only a small amount of buffer pool storage is allocated.
Here, logical open is when the page set is either physically opened (the first SELECT) or
pseudo opened (the first UPDATE after being physically opened for read). There is no buffer
pool allocation at pseudo open, because the storage was already allocated at read time.
In addition, for a query that performs index-only access, the buffer pool for the table space
does not need to have any buffer pool storage allocated. DB2 10 no longer performs logical
open of the table space page set for index-only access.
For buffer pools defined with PGFIX=YES, DB2 requests buffer pools to be allocated using
1 MB page frames if they are available, rather than 4 KB page frames. 1 MB page frames are
available on z10 and later. You define the number of 1 MB page frames that are available to
z/OS in the LFAREA parameter of SYS1.PARMLIB(IEASYSxx). Manipulating storage in 1 MB
chunks rather than 4 KB chunks can significantly reduce CPU overhead in memory
management, by increasing the hit ratio of the hardware translation lookaside buffer.
Note that although DB2 can request 1 MB page frames from the operating system, DB2 itself
still manages the buffer pools as 4 KB, 8 KB, 16 KB, and 32 KB pages. Nothing changes
inside DB2.
DB2 requests 1 MB page frames only for PGFIX=YES buffer pools, because the individual
4 KB buffer pool pages are already page fixed in memory for read/write operations and so
can be backed in 1 MB chunks. If there are no more 1 MB page frames available, DB2
requests 4 KB page frames instead. See 2.2.3, Increase of 64-bit memory efficiency on
page 5 for details.
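The two settings involved can be sketched as follows; the LFAREA value and the buffer pool
number are illustrative only:

```
LFAREA=2G                          (in SYS1.PARMLIB(IEASYSxx); requires an IPL)
-ALTER BUFFERPOOL(BP1) PGFIX(YES)  (takes effect at the next allocation of BP1)
```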
Buffer pools do not automatically shrink when page sets are logically closed. Once allocated,
the buffer pool storage remains until either all the page sets are physically closed or DB2 is
stopped.
Workload managed buffer pool support introduced in DB2 9 can dynamically reduce the size
of buffer pools when there is less demand. See DB2 9 for z/OS Performance Topics,
SG24-7473 for details about WLM buffer pool support.
There can be a temptation either to make buffer pools bigger than they normally would be or
to define more buffer pools with PGFIX=YES, thinking that the extra buffer pool space
might not be used because it is allocated only when it is needed. We advise that you resist
this temptation. You still need enough real storage to back the buffer pools to keep real paging
at an acceptable level, and your amount of real storage has not changed.
For BP0, BP8K0, BP16K0, and BP32K, the minimum size set by DB2 is 8 MB.
13.8.2 In-memory table spaces and indexes
In the past, the cost of physically opening a page set was borne by the first SQL statement to
access that data set. This cost adversely impacted application performance, typically after
DB2 restart. In addition, some tables might be critical for application performance, so they
need to be always resident in the buffer pools.
DB2 10 provides a buffer pool attribute that you can use to specify that all objects using that
buffer pool are in-memory objects. The data for in-memory objects is preloaded into the buffer
pool at the time the object is physically opened, unless the object has been opened for utility
access. The pages remain resident as long as the object remains open.
When the page set is first accessed (the first getpage request initiated by SQL) an
asynchronous task is scheduled under the DBM1 address space to prefetch the entire page
set into the buffer pool. The CPU for loading the page set into the buffer pool is therefore
charged to DB2. If this first getpage happens to be for a page that is being asynchronously
read at the time, then it waits. Otherwise, the requested page is read synchronously.
If a page set is opened as a part of DB2 restart processing, the entire index or table space is
not prefetched into the buffer pool.
Page sets can still be physically opened before the first SQL access by using the -ACCESS
DATABASE command introduced in DB2 9. Now, you can also preload all of the data into the
buffer pool or pools before the first SQL access.
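For example, assuming a database named PRODDB, the following command physically
opens its page sets before the first SQL access, which for in-memory objects also triggers
the preload into the buffer pool:

```
-ACCESS DATABASE(PRODDB) SPACENAM(*) MODE(OPEN)
```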
To realize the benefit of in-memory page sets, you still need to make sure that the buffer
pools are large enough to fit all the pages of all the open page sets. Otherwise, I/O delays can
occur as the buffer pool fills up and DB2 must steal buffers (on a FIFO basis in this case).
This behavior is the same as with previous versions of DB2.
In-memory page sets help DB2 to reduce overall least recently used (LRU) chain
maintenance and latch class 14 contention, because there is much less buffer pool activity
and buffer pools with in-memory page sets are managed with FIFO. They also avoid
unnecessary prefetch and latch class 24 contention, because the data is already in the buffer
pools and because prefetch is disabled for in-memory page sets.
Important: It is more important in DB2 10 to make sure that you have large enough buffer
pools to store all the in-memory page sets. If the buffer pools are too small to store all of
the data, then performance can be impacted, because DB2 might be using a nonoptimal
access plan that did not allow for the extra I/O. In addition, if DB2 needs to perform I/O to
bring a page into the buffer pool for processing, this I/O is synchronous because prefetch is
disabled.
A new option is available for the PGSTEAL parameter of the -ALTER BUFFERPOOL
command. PGSTEAL(NONE) indicates that no page stealing can occur. All the data that is
brought into the buffer pool remains resident. Figure 13-9 shows the new syntax.
Figure 13-9 ALTER BUFFERPOOL command
Altering the PGSTEAL value takes effect immediately. For PGSTEAL LRU or FIFO, new
pages added to the LRU chain take the new behavior immediately, but the ALTER does not
affect the pages already on the chain. Altering the buffer pool to PGSTEAL(NONE) also has
an immediate effect. The ALTER schedules prefetches for all of the page sets in the buffer
pool.
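For example, to make all objects in an (illustrative) buffer pool BP8 in-memory objects and
then verify the attribute:

```
-ALTER BUFFERPOOL(BP8) PGSTEAL(NONE)
-DISPLAY BUFFERPOOL(BP8) DETAIL
```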
You can define in-memory table spaces and indexes in DB2 10 CM. On fallback to DB2 9,
PGSTEAL=NONE reverts to its previous value, which is LRU if the parameter was never
changed. On remigration, PGSTEAL returns to NONE if it was set prior to fallback.
The following buffer manager display messages are modified to accommodate
PGSTEAL=NONE:
DSNB402I
The BUFFERS ACTIVE count is removed, because it reflects the number of buffers that have
ever been accessed, which is essentially the same behavior as BUFFERS ALLOCATED.
DSNB406I
DSNB519I
IFCID 201 records the buffer pool attributes changed by the -ALTER BUFFERPOOL
command. A new value of N indicates that PGSTEAL(NONE) is defined for the QW0201OK
and QW0201NK trace fields, which record the old and new values of PGSTEAL respectively.
Similarly, IFCID 202, which records the current attributes of a buffer pool, also uses N to
indicate PGSTEAL(NONE).
13.8.3 Reduce latch contention
DB2 10 introduces improvements to buffer pool management to reduce latch contention on
latch class 14 (buffer pool manager exclusive latch) and latch class 24 (buffer pool manager
page latch), particularly for large buffer pools, which is achieved through faster suspend and
resume processing.
Latch class 14 contention is reduced during commit of update transactions. During update
commit, DB2 must take an exclusive latch; DB2 10 takes the latch at the partition level rather
than the table space level. Latch class 24 contention is reduced because DB2 10 reduces the
serialization among concurrent read threads: DB2 must serialize when multiple threads,
typically with CPU parallelism, read the same page at the same time.
Class 14 hash latch contention is also reduced by providing 10 times more latches for a given
buffer pool size.
In-memory table spaces also significantly reduce buffer pool latch contention.
13.9 Work file enhancements
DB2 10 in NFM supports partition-by-growth table spaces in the work file database.
Declared global temporary tables (DGTTs) compete for space with other activities in the work
file database, but they cannot span work file table spaces. Partition-by-growth work file table
spaces help these applications to reduce SQLCODE -904 (unavailable resource) for lack of
space in the work file database by allowing you to control the maximum size of work file table
spaces using DSSIZE and MAXPARTITIONS.
For example, you can limit a partition-by-growth table space's size to 3 GB with the following
setting:
MAXPARTITIONS 3 DSSIZE 1G
With DB2-managed segmented table spaces, this control was not possible; you could limit
the growth only to 2 GB or less (using PRIQTY nK SECQTY 0).
If the DSNZPARM WFDBSEP is NO (the default), DB2 tries to use work file
partition-by-growth table spaces only for DGTTs; however, if no other table space is
available, DB2 also uses them for work files (for example, sorts and CGTTs).
With WFDBSEP YES, DB2 uses work file partition-by-growth table spaces only for DGTTs,
and if there is no other table space available, a work file application receives a resource
unavailable message.
For DGTTs, using a large DSSIZE, you can have larger work file table spaces (that is,
greater than 64 GB).
You can have a mixture of table space types (some segmented and some universal
partition-by-growth table spaces) in the work file database. A partition-by-growth table space
in the work file database must be created using the CREATE TABLESPACE statement when
in DB2 10 NFM. It cannot be altered to from other table space types.
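For example, in NFM you can create a partition-by-growth work file table space capped at
3 GB as follows. DSNDB07 is the typical work file database name in a non-data-sharing
system; the table space name and buffer pool are illustrative:

```sql
CREATE TABLESPACE WFPBG1 IN DSNDB07
  MAXPARTITIONS 3
  DSSIZE 1 G
  BUFFERPOOL BP0;
```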
The records of work files that are created for joins and large sorts can span multiple pages to
accommodate larger record lengths and larger sort key lengths for sort records. The
maximum length of a work file record is 65529 bytes. This enhancement reduces the
instances of applications receiving SQLCODE -670 when the row length of a large sort
record, or of a row resulting from a join, exceeds the maximum page size for a work file table
space.
Support for spanned work file records is available only in NFM; however, a rebind or bind is
not required.
The maximum limit of sort key length is also increased from 16000 to 32707 bytes, which
reduces instances of applications receiving SQLCODE -136.
In DB2 9, the use of in-memory work files is restricted to small work files that do not require
any predicate evaluation. DB2 10 extends the use of in-memory work files by allowing simple
predicate evaluation for work files. This enhancement helps to reduce the CPU time for
workloads that execute queries that require the use of small work files. The in-memory work
file enhancement is available in CM, and a rebind or bind is not required.
Finally, the maximum size of all 4 KB WORKFILE table spaces can now be up to
8,388,608,000 MB and the maximum size of all 32 KB WORKFILE table spaces can now be
up to 67,108,864,000 MB. However, when migrating to DB2 10, the limits both remain
32,768,000 MB, the same as DB2 9, because WORKFILE table spaces cannot be created as
partitioned-by-growth in DB2 10 CM. See Chapter 12, Installation and migration on
page 471 for details about changes to the installation process.
13.10 Support for z/OS enqueue management
DB2 V8 exploits Workload Manager for z/OS (WLM) enqueue management. When a
transaction has spent roughly half of the lock timeout value waiting for a lock, the WLM
priority of the transaction that holds the lock is increased to the priority of the lock waiter if
the latter has a higher priority. When the lock-holding transaction completes, it resumes its
original service class. If multiple transactions hold a common lock, this procedure is applied
to all of them.
DB2 10 uses IBM Workload Manager for z/OS enqueue management to more effectively
manage lock holders and waiters. DB2 also notifies WLM about threads that are being
delayed while holding some key resources such as enqueues and critical latches.
13.11 LOB enhancements
In this section, we discuss the following enhancements, which are mostly related to LOBs:
LOAD and UNLOAD with spanned records
File reference variable enhancement for 0 length LOBs
Streaming LOBs and XML between DDF and DBM1
Inline LOBs
APAR PM24721 provides BIND performance improvement on LOB table spaces.
13.11.1 LOAD and UNLOAD with spanned records
Prior to DB2 9, DB2 sometimes could not LOAD or UNLOAD large LOB or XML columns with
other non-LOB or non-XML columns into the same data set, because the I/O record size was
limited to 32 KB with VB type data sets.
In DB2 9, DB2 allows loading or unloading of LOB or XML data from or into separate data
sets using file reference variables. However, the UNLOAD utility's support of file reference
variables is restricted to partitioned data sets and UNIX file systems. File reference variables
cannot be used to unload all of the LOB or XML columns for an individual table space or
partition to a single sequential file, and cannot unload LOB or XML data to tape (because all
tape data sets are sequential). Writing all of the LOB or XML data to a partitioned data set or
UNIX file system is slow. Furthermore, the only way to unload LOB or XML data that is larger
than 32 KB is by using file reference variables.
DB2 10 introduces support for spanned records (RECFM = VS or VBS) in LOAD or UNLOAD
to allow LOB columns and XML columns of any size to be loaded or unloaded from or to the
same data set with other non-LOB columns. Spanned records overcome the limitations of file
reference variables, because all of the LOB or XML data of a given table space or partition
can be written to a single sequential file, which can reside on tape or disk and can span
multiple volumes.
You ask the LOAD or UNLOAD utilities to use spanned records by specifying the new
SPANNED keyword, as shown in Example 13-2.
Example 13-2 Unloading in spanned format
UNLOAD TABLESPACE TESTDB1.CLOBBASE SPANNED YES
FROM TABLE TB1
(ID
,C1 INTEGER
,C2 INTEGER
,C3 CHAR(100)
,C4 CHAR(100)
,C5 INTEGER
,C6 CHAR(100)
,C7 CHAR(100)
,C8 CHAR(100)
,CLOB1 CLOB
,CLOB2 CLOB
,CLOB3 CLOB)
DB2 ignores the RECFM when SPANNED YES is specified. All LOB and XML columns must
be ordered at the end of the record as specified in a field specification list. A field specification
list is required, and length and POSITION must not be specified on the LOB or XML field
specifications. TRUNCATE has no meaning. If the data is not ordered as DB2 UNLOAD
expects, spanned records are not used and DSNU1258I is issued. If DELIMITED is
specified, the data is not unloaded using spanned records. If SPANNED YES is specified,
NOPAD is the default. You can use spanned records only in NFM.
When unloading LOB or XML data to a spanned record data set, all non-LOB and non-XML
data (including file reference variables) are written in the record first and then LOBs and XML
documents are written, spanning to subsequent records if required. This is also the order and
format the LOAD utility needs to read when using a spanned record processing of LOB or
XML data. The SYSPUNCH generated by UNLOAD lists the LOB or XML data in a field
specification list in the corresponding order.
The LOAD utility uses spanned record functionality if FORMAT SPANNED YES is specified,
SYSREC has spanned record format, and a field specification list is provided with all LOB and
XML fields at the end of the record. A field specification list is required, and position must be
omitted or must use an asterisk (*) for the LOB and XML columns. If the fields are not
ordered as DB2 LOAD expects, then message DSNU1258I is issued and the utility terminates
with RC8.
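For example, a LOAD that reads back the spanned data set produced by the UNLOAD in
Example 13-2 might look as follows. This is a sketch: the input data set is assumed to have
the spanned record format, and the field specification list mirrors the unload order, with the
LOB columns last and their positions omitted:

```
LOAD DATA FORMAT SPANNED YES
 INTO TABLE TB1
 (ID
 ,C1 INTEGER
 ,C2 INTEGER
 ,C3 CHAR(100)
 ,C4 CHAR(100)
 ,C5 INTEGER
 ,C6 CHAR(100)
 ,C7 CHAR(100)
 ,C8 CHAR(100)
 ,CLOB1 CLOB
 ,CLOB2 CLOB
 ,CLOB3 CLOB)
```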
Performance of reading or writing from or to a single sequential file is much faster than
reading or writing from or to separate files or partition data set members, because the utilities
do not have as much open and close work to do.
In addition, LOAD REPLACE into a LOB table space in DB2 10 uses format writes in the
same way that LOAD has always done for non-LOB table spaces. This method is faster than
the preformatting of the table space that is done at CREATE TABLESPACE or CREATE
INDEX.
The following new messages are also introduced:
DSNU1256I csect-name - VIRTUAL STORAGE REQUIRED FOR SPANNED RECORDS EXCEEDS
DEFINED LIMIT.
DSNU1257I csect-name - REFERENCE TO LOB OR XML DATA IN A WHEN CLAUSE IS NOT
ALLOWED IF THE LOAD STATEMENT SPECIFIES FORMAT SPANNED YES.
DSNU1258I csect-name - ORDER OF FIELDS IS NOT VALID FOR KEYWORD SPANNED.
DSNU1259I csect-name - THE INPUT DATA SET DOES NOT HAVE THE ATTRIBUTE SPANNED
13.11.2 File reference variable enhancement for 0 length LOBs
Prior to DB2 10, unloading empty LOBs through file reference variables results in SYSREC
recording the file name of an empty file. UNLOAD creates a file for every empty LOB. LOAD
issues an error on a blank or zero length file reference variable. This behavior can cause
performance issues, because DB2 must open and close the data set for an empty LOB. DB2
Version 8 also does not determine emptiness before inserting the base row. Thus, DB2
inserts an empty LOB (occupying a page) in the LOB table space and does not mark the
column indicator as empty.
In some cases, you might be able to mark LOB columns as NULL instead of empty, but in
some cases the LOB column is not nullable. Also, for some applications, NULL and empty (or
zero length) have different meanings.
DB2 10 changes the way UNLOAD and LOAD handle file reference variables for empty LOBs.
An empty LOB is a LOB with a length of zero. It has no data bytes and is not NULL.
When unloading an empty LOB to file reference variables, UNLOAD writes a zero length
VARCHAR or blank CHAR in SYSREC and does not create a data set for the LOB. If the field
is nullable, the null byte is set to '00'x, indicating not NULL.
When loading a LOB file reference variable field, LOAD treats a zero length VARCHAR, blank
VARCHAR, or blank CHAR as an empty LOB. This behavior assumes the LOB is not nullable
or is not NULL as indicated by the NULL byte. A NULL file reference variable results in a
NULL LOB. Because LOAD can determine the length of the LOB before inserting the base
row, the zero length flag in the column indicator for the LOB is set on, and an empty LOB is
not inserted into the auxiliary table space. That is, the column indicator is used to determine
that the LOB is empty.
A file reference variable with a data set name that references an empty data set still
represents an empty LOB. In this case, LOAD cannot determine the length of the LOB before
inserting the base row, and an empty LOB is inserted into the auxiliary table space, which is
the same behavior as before.
This function was retrofitted by APAR PM12286 (PTF UK59680) to DB2 9, where it is even
more important. In DB2 10, the preferred method of unloading LOBs is spanned records.
13.11.3 Streaming LOBs and XML between DDF and DBM1
DB2 9 introduced LOB streaming, which eliminates the need for a client application to read
the entire LOB to get its total length prior to sending the LOB to a remote DB2 for z/OS
server. However, DB2 for z/OS still had to materialize the entire LOB in memory to get the
length of the entire LOB before inserting it into the database.
DB2 10 for z/OS offers additional performance improvements by extending the support for
LOB and XML streaming, avoiding LOB and XML materialization in more situations.
Figure 13-10 shows the differences between DB2 9 and DB2 10. The DDF server no longer
needs to wait for the entire LOB or XML to be received before it can pass the LOB or XML to
the data manager.
Figure 13-10 Streaming LOBs and XML
DB2 materializes up to 2 MB for LOBs and 32 KB for XML before passing the data to the
database manager. The storage allocated for this LOB or XML value is reused on subsequent
chunks until the entire LOB or XML is processed.
The LOAD utility also uses LOB streaming for LOBs and XML but only when file reference
variables are used.
Whether materialization is reduced, and by how much, depends on the following conditions:
JDBC 4.0 and later or ODBC/CLI V8 driver FP4 and later. If using JDBC 3.5, the
application has to specify the length of the LOB or XML to be -1.
There can be at most one LOB per row for INSERT, UPDATE, or LOAD with file reference
variable.
There can be at most one XML per row for INSERT or UPDATE with DRDA streaming.
There can be one or more LOB and XML values per row for INSERT, UPDATE, or LOAD
with file reference variables, and for LOB INSERT or UPDATE or cross-loader operations
that require CCSID conversion.
For UPDATE, an additional restriction applies: the UPDATE must qualify just one row,
where a unique index is defined on the target update column.
This enhancement is available in CM. It reduces virtual storage consumption and elapsed
time, and you can also see a reduction in class 2 CPU time.
Streaming of LOB and XML data to the DBM1 address space is also available to local
applications, for example SQL INSERT and UPDATE and the LOAD utility, when file
reference variables are used.
13.11.4 Inline LOBs
Prior to DB2 10, DB2 for z/OS stores each LOB column, at a minimum of one page per LOB
value, in a separate auxiliary (LOB) table space, regardless of the size of the LOB to be
stored. All accesses to each LOB (SELECT, INSERT, UPDATE, and DELETE) must access
the auxiliary table space using the auxiliary index.
A requirement with LOB table spaces is that two LOB values for the same LOB column
cannot share a LOB page. Thus, unlike a row in the base table, each LOB value uses a
minimum of one page. For example, if some LOBs exceed 4 KB but a 4 KB page size is used
to economize on disk space, then it might take two I/Os instead of one to read such a LOB,
because the first page tells DB2 where the second page is.
DB2 10 supports inline LOBs. Depending on its size, a LOB can now reside completely in the
base table space along with other non-LOB columns. Any processing of this inline LOB now
does not have to access the auxiliary table space. A LOB can also reside partially in the base
table space along with other non-LOB columns and partially in the LOB table space. That is, a
LOB is split between base table space and LOB table space. In this case any processing of
the LOB must access both the base table space and the auxiliary table space.
Inline LOBs offer the following benefits:
Small LOBs that reside completely in the base table space can now achieve performance
similar to that of similarly sized VARCHAR columns.
Inline LOBs avoid all getpages and I/Os that are associated with an auxiliary index and
LOB table space.
Inline LOBs can save disk space even if compression cannot be used on the LOB table
space.
The inline piece of the LOB can be indexed using index on expression.
Inline LOBs allow small LOB columns to be accessed with dynamic prefetch.
The inline portion of the LOB can be compressed.
A default value other than empty string or NULL is supported.
The DB2 10 LOAD and UNLOAD utilities can load and unload the complete LOB along with
other non-LOB columns.
Inline LOB support comes with DB2 10 in NFM.
Inline LOBs are only supported with either partition-by-growth or partition-by-range universal
table spaces. Reordered row format is also required. Inline LOBs take advantage of the
reordered row format and handle the LOB better for overall streaming and application
performance. Additionally, the DEFINE NO option allows the row to be used and the data set
for the LOB not to be defined. DB2 does not define the LOB data set until a LOB is saved that
is too large to be completely inlined.
DB2 introduces a new parameter, INLINE LENGTH, for the column definition clause of
ALTER TABLE, CREATE TABLE, ALTER TYPE, and CREATE TYPE statements. For BLOB
and CLOB columns, the integer specifies the maximum number of bytes that are stored in the
base table space for the column. The integer must be between 0 and 32680 (inclusive) for a
BLOB or CLOB column. For a DBCLOB column, the integer specifies the maximum number
of double-byte characters that are stored in the table space for the column. The integer must
be between 0 and 16340 (inclusive) for a DBCLOB column.
A new version of the table is generated after the LOB's inline length is altered. Increasing the
inline length is considered an immediate change. All new rows begin using the new value,
and the base table space is put in advisory REORG-pending (AREOR) status. However, if
you use the ALTER TABLE statement to reduce the length of an inline column, DB2 places
the base table space in REORG-pending status. The old inline quantity is used until REORG
is executed. When you change the length of an inline LOB, the REORG utility always takes
an inline image copy.
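As a sketch of such an alteration (the table and column names are hypothetical, and the
syntax follows the DB2 10 ALTER TABLE column definition clause):

```sql
ALTER TABLE mytab
  ALTER COLUMN clobcol SET INLINE LENGTH 200;
-- Increasing the length is immediate; the base table space goes into
-- AREOR status, and a subsequent REORG applies the new inline length
-- to existing rows, always taking an inline image copy.
```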
In addition, plans or packages and queries in the dynamic statement cache that access the
altered base table are invalidated when you change the length of inline LOB columns. Any
views that access these LOB columns are also regenerated.
To preserve data integrity between the base table and auxiliary table where two parts of the
same LOB column can reside, the RECOVER utility now enforces recovering the base table
and associated auxiliary table spaces, both LOB and XML, together to the same point in time.
DB2 enforces this restriction in both CM and NFM. This restriction was not enforced in earlier
versions of DB2, although it has always been highly recommended.
Inline LOBs allow you to index LOB columns by indexing the inlined piece of the LOB
column. DB2 10 allows you to specify LOB columns for indexes on expressions; however, only
the SUBSTR built-in function is allowed. If you can search on the inlined piece of a LOB
column, you no longer need to access the LOB table space to locate a specific LOB. This is
achieved by exploiting an index on expression.
For example, you can create a base table and an index on expression as follows:
CREATE TABLE mytab (clobcol CLOB(1M) INLINE LENGTH 10);
CREATE UNIQUE INDEX myindex1 on mytab (VARCHAR (SUBSTR(clobcol, 1,10))) ;
And you can issue the statement:
SELECT clobcol FROM mytab WHERE VARCHAR(SUBSTR(clobcol,1,10)) = 'ABCDEFGHIK'
If you can store the complete LOB inline, then you can potentially realize significant savings in
CPU time and DASD space, by avoiding I/Os when fetching the LOB column or columns, and
savings in not having to manage the auxiliary table space and index. Some preliminary
performance measurements with random access to small LOBs show as much as 70%
improvement on SELECTs and even higher improvement in INSERTs.
However, split LOBs incur the cost of both inline and out-of-line LOBs. Small inline LOBs use
approximately 5%-10% more CPU time than VARCHAR columns. Because the base table
becomes bigger, SQL that is limited to non-LOB columns can be impacted. So, there is no
performance advantage except to support indexing of the LOB columns. Table scans and
utilities that did not refer to the LOB columns will take longer, and image copies for the base
table will become larger. (Alternatively, the image copy for the LOB table space will become
smaller, or non-existent.) The buffer hit ratio for the base table can also be impacted.
DB2 10 introduces a DSNZPARM parameter, LOB_INLINE_LENGTH, to define the default
maximum length to be used for the inline portion of a LOB column. This value is used when a
value is not specified by the INLINE LENGTH parameter of the CREATE or ALTER TABLE
statement. The value is interpreted as number of bytes, regardless of the type of LOB column
to which it is applied. The default is zero (0), meaning no inline LOBs.
Be advised that specifying a default inline LOB length can hurt the performance of some table
spaces.
The following general tuning practices apply:
- Choose the INLINE LENGTH according to the LOB distribution. A size that is too small takes
  up more space on the base page and still requires frequent accesses to the auxiliary
  table space.
- Consider increasing the page size of the base table to accommodate larger rows.
- Reconsider whether to use compression for the base table. (If the inline LOBs do not
  compress well and they dominate the bytes in the row, turn off compression.)
- Retune the buffer pool configuration as needed.
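For example, a larger base table page size can be selected by assigning the table space to a larger-page buffer pool. A minimal sketch (the database, table space, and buffer pool assignments are hypothetical):

```sql
CREATE TABLESPACE TS16K IN MYDB
  MAXPARTITIONS 10     -- partition-by-growth universal table space
  BUFFERPOOL BP16K0    -- 16 KB pages to accommodate larger inlined rows
  COMPRESS NO;         -- compression turned off if inline LOBs do not compress well
```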
Table 13-4 lists the DB2 catalog table changes to support inline LOBs.

Table 13-4   Catalog table changes for inline LOBs
- SYSDATATYPES.INLINE_LENGTH (INTEGER NOT NULL WITH DEFAULT): The inline length
  attribute of the type if it is based on a LOB source type.
- SYSTABLEPART.CHECKFLAG (same as DB2 9): D indicates that the inline length of the
  LOB column that is associated with this LOB table space was decremented when the
  inline length was altered; I indicates that it was incremented.
- SYSCOPY.STYPE (same as DB2 9): I indicates that the inline length attribute of the
  LOB column was altered by REORG.
- SYSCOPY.TTYPE (same as DB2 9): When ICTYPE=A and STYPE=I, this column indicates
  that the inline length of a LOB column was altered: D indicates that REORG
  decremented the inline length of the LOB column; I indicates that REORG
  incremented it.

13.12 Utility BSAM enhancements for extended format data sets

z/OS R9 introduced support for long-term page fixing of basic sequential access method
(BSAM) buffers, and z/OS R10 introduced support for 64-bit BSAM buffers if the data set is
extended format. This function frees BSAM from the CPU-time intensive work of fixing and
freeing the data buffers itself.
Increasing the number of BSAM buffers uses more real storage but reduces the number of I/O
operations, which can save CPU time and elapsed time. As the performance of channel and
number of I/O operations increases. FICON Express 8, the z196 processor, and the DS8800
storage control unit are all examples of this. Thus, each new version of DB2 tries to make use
of the hardware advances by increasing the I/O quantity of its sequential I/O operations.
The I/O performance is improved by reducing both the processor time and the channel
start/stop time that is required to transfer data to or from virtual storage.
DB2 10 utilities exploit these recent z/OS enhancements by offering the following
improvements:
- Allocating 64-bit buffers for BSAM data sets
- Allocating more BSAM buffers for faster I/O
- Long-term page fixing of BSAM buffers
DB2 10 utilities, when opening BSAM data sets, increase MULTSDN from 6 to 10, and
MULTACC from 3 to 5. This increase enables DB2 to reduce the number of physical I/Os by
up to 40%.
MULTACC allows the system to process BSAM I/O requests more efficiently by not starting
I/O until a number of buffers are presented to BSAM.
MULTSDN is used to give a hint to OPEN processing so that it can calculate a better default
value for NCP (number of channel programs, the maximum number of blocks allowed per BSAM
I/O operation) instead of 1, 2, or 5.
These performance improvements are available in CM; however, the data sets must be defined
as extended format data sets to take advantage of these enhancements.
Unlike DB2 buffer pool buffers, BSAM buffers never go dormant. If real storage is
overcommitted while many DB2 9 utilities are running, those utilities already thrash real
storage. Therefore, long-term page fixing of the BSAM buffers in DB2 10 does not present a
new risk to your system.
Note also that if you currently explicitly specify BUFNO on your data set allocations, your
value overrides MULTSDN so nothing changes.
All utilities that use BSAM benefit from this change. For example, Copy and Unload are faster in
elapsed time. Copy, Unload, Load, and Recover all have less CPU time than they have
without these enhancements.
The elapsed time benefits are expected to increase as the hardware speeds up sequential
I/O.
13.13 Performance enhancements for local Java and ODBC
applications
Before DB2 10, Java and ODBC applications running locally on z/OS did not always perform
faster than the same application called remotely. This behavior is because the optimizations
built over the past few DB2 versions for DDF processing with the DBM1 address space have
not been available to local JDBC and ODBC applications. zIIP redirect of distributed
workloads also reduced the chargeable CP consumption of distributed Java applications
significantly.
DB2 10 brings the optimization functions that are already in place between the DDF and DBM1
address spaces, and that are exploited by distributed JDBC Type 4 applications, to local JCC
Type 2 and ODBC z/OS driver applications:
- Limited block fetch
- LOB progressive streaming
- Implicit CLOSE
DB2 triggers block fetch for static SQL only when it can detect that no updates or deletes are
in the application. For dynamic statements, because DB2 cannot detect what follows in the
program, the decision to use block fetch is based on the declaration of the cursor.
To use either limited block fetch (or continuous block fetch), DB2 must determine that the
cursor is not used for updating or deleting. The easiest way to indicate that the cursor does
not modify data is to add the FOR FETCH ONLY or FOR READ ONLY clause to the query in
the DECLARE CURSOR statement. Remember also that CURRENTDATA(NO) is required to
interpret ambiguous cursors as read only.
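For example, a cursor declared as read-only (the table and column names are hypothetical) is eligible for block fetch:

```sql
DECLARE C1 CURSOR FOR
  SELECT EMPNO, LASTNAME
    FROM EMPTAB
   FOR READ ONLY;
```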
With limited block fetch, DB2 attempts to fit as many rows as possible in a query block. DB2
then transmits the block of rows to your application. Data can also be pre-fetched when the
cursor is opened without needing to wait for an explicit fetch request from the requester.
Processing is synchronous. The Java/ODBC driver sends a request to DB2, which causes
DB2 to send a response back to the driver. DB2 must then wait for another request to tell it
what should be done next.
Block fetch is controlled by the OPTIMIZE FOR n ROWS clause. When you specify
OPTIMIZE FOR n ROWS, DB2 prefetches and returns only as many rows as fit into the query
block.
You can use the FETCH FIRST n ROWS ONLY clause of a SELECT statement to limit the
number of rows that are returned. However, the FETCH FIRST n ROWS ONLY clause does
not affect blocking. This clause improves performance of applications when you need no
more than n rows from a potentially large result table.
If you specify FETCH FIRST n ROWS ONLY and do not specify OPTIMIZE FOR n ROWS,
the access path for the statement uses the value that is specified for FETCH FIRST n ROWS
ONLY for optimization. However, DB2 does not consider blocking. When you specify both the
FETCH FIRST n ROWS ONLY and the OPTIMIZE FOR m ROWS clauses in a statement,
DB2 uses the value that you specify for OPTIMIZE FOR m ROWS for blocking, even if that
value is larger than the value that you specify for the FETCH FIRST n ROWS ONLY clause.
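For example, in the following query (hypothetical names), DB2 uses the value 100 for blocking even though at most 10 rows are returned:

```sql
SELECT EMPNO, LASTNAME
  FROM EMPTAB
 ORDER BY LASTNAME
 FETCH FIRST 10 ROWS ONLY
 OPTIMIZE FOR 100 ROWS;
```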
Limited block fetch requires fewer fetches, which in turn results in a significant increase in
throughput and a significant decrease in CPU time. These savings are now realized by local
JCC T2 and local ODBC applications.
LOB and XML progressive streaming is described in more detail in 13.11.3, Streaming LOBs
and XML between DDF and DBM1 on page 568.
Your application needs client side support to take advantage of LOB or XML progressive
streaming. JDBC 4.0 and later is needed. If using JDBC 3.5, the application has to specify the
length of the LOB or XML to be -1. CLI users must utilize the StreamPutData API (Version 8
driver FP 4 and later).
Additional information: Limited block fetch and continuous block fetch are described in
more detail in DB2 10 for z/OS Managing Performance, SC19-2978.
Fast implicit close means that DB2 automatically closes the cursor after it prefetches the nth
row if you specify FETCH FIRST n ROWS ONLY or when there are no more rows to return.
Fast implicit close can improve performance because it saves an additional interaction
between DB2 and your address space.
DB2 uses fast implicit close when the following conditions are true:
- The query uses limited block fetch.
- The query retrieves no LOBs.
- The cursor is not a scrollable cursor.
- Either of the following conditions is true:
  - The cursor is declared WITH HOLD, and the package or plan that contains the cursor
    is bound with the KEEPDYNAMIC(YES) option.
  - The cursor is not defined WITH HOLD.
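A cursor such as the following sketch (hypothetical names) meets these conditions: it is read-only so limited block fetch applies, it retrieves no LOBs, and it is neither scrollable nor declared WITH HOLD:

```sql
DECLARE C2 CURSOR FOR            -- not scrollable, not WITH HOLD
  SELECT EMPNO, LASTNAME         -- no LOB columns in the select list
    FROM EMPTAB
   WHERE WORKDEPT = ?
   FETCH FIRST 5 ROWS ONLY       -- DB2 closes the cursor after the fifth row
   FOR READ ONLY;
```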
You can expect to see significant performance improvements for applications with queries that
return more than one row and with queries that return LOBs.
13.14 Logging enhancements
As machines get faster and faster and as DB2 can do more and more work, latch class 19
contention can become a concern. This condition is especially true for systems with high
logging rates, such as applications that have heavy INSERT workloads. DB2 10 includes the
following enhancements, all active in CM, that reduce logging delays:
- Long-term page fixing of log buffers
- Log I/O enhancements
- Log latch contention reduction
13.14.1 Long term page fix log buffers
DB2 10 page fixes the log buffers permanently in memory. On DB2 start, all the buffers as
specified by OUTBUFF are page fixed. The OUTPUT BUFFER field of installation panel
DSNTIPL allows you to specify the size of the output buffer that is used for writing active log
data sets. The maximum size of this buffer is 400,000 KB, the default is 4,000 KB (4 MB), and
the minimum is 400 KB.
Generally, the default value is sufficient for good write performance. Increasing OUTBUFF
beyond the DB2 10 default might improve log read performance. For example, ROLLBACK
and RECOVER with the new BACKOUT option can benefit by finding more data in the buffers.
COPY with the CONSISTENT option might benefit too.
Whether a large OUTBUFF is desirable depends on the trade-off between log read
performance (especially in the case of a long-running transaction failing to COMMIT) and real
storage consumption. Review your OUTBUFF parameter to ensure that it is set to a realistic
trade-off value. (The QJSTWTB counter in the QJST section of IFCID 001 can indicate whether
the log buffer is too small; it represents the number of times that a log record write request
waited because no log buffers were available.) If OUTBUFF is too large (because of low LOG
activity), you can monopolize real frames that might be put to better use elsewhere in your
system.
13.14.2 LOG I/O enhancements
Todays enterprise-class controllers are considerably more reliable than controllers built in the
1980s. Battery backed non-volatile memory has provided the improved reliability making sure
that data is not lost in the event of a system wide power outage. DB2 10 improves log I/O
performance by taking advantage of this improved reliability.
With DB2 9, if a COMMIT needs to rewrite page 10, along with pages 11, 12, and 13, to the
DB2 log, DB2 first serially writes page 10 to log 1, then serially writes page 10 to log 2, and
then writes pages 11 through 13 to log 1 and log 2 in parallel. In effect, DB2 does four I/Os
and waits for the duration of three I/Os.
The first time a log control interval is written to disk, the write I/Os to the log data sets are
performed in parallel. However, if the same 4 KB log control interval is again written to disk,
the write I/Os to the log data sets must be done serially to prevent any possibility of losing log
data in case of I/O errors on both copies simultaneously.
Current DASD technology writes each I/O to a new area of the DASD cache rather than directly
to disk, so there is no possibility of a log page being corrupted while it is being rewritten.
Therefore, DB2 10 simply writes all four pages to log 1 and log 2 in parallel. Hence, DB2 10
writes these pages in two I/Os and waits for the duration of only one I/O (that is, whichever of
the two I/Os takes longer).
DB2 10 takes advantage of the non-volatile cache architecture of the I/O subsystem. DB2
rewrites the page asynchronously to both active log data sets. In this example, DB2
chains the write for page 10 with the write requests for pages 11, 12, and 13. Thus, DB2 10
reduces the number of log I/Os and improves the I/O overlap.
13.14.3 Log latch contention reduction
Since DB2 Version 1, DB2 has used a single latch for the entire DB2 subsystem to serialize
updates to the log buffers when a log record needs to be created. Basically, the latch is
obtained, an RBA range is allocated, the log record is moved into the log buffer, and then the
latch is released.
DB2 10 makes several changes to the way this latch process works, which increases logging
throughput significantly and reduces latch class 19 contention. The changes improve the
latch management and reduce the time that the latch is held.
13.15 Hash access
DB2 10 introduces hash access data organization for fast, efficient access to individual rows
using fully qualified unique keys rather than an index.
Consider a typical access path for randomly accessing an individual row through an index. If
the index has five levels, DB2 needs to perform a getpage for each level accessed as it
traverses the index tree, performing an index probe using the key. So, DB2 might need as
many as five getpage requests to traverse the index and then another getpage request to
access the row pointed to by the index. Even if the index was built to provide index-only
access, five getpage requests might still be needed. Assuming the top three levels of the
index are already in the buffer pool, DB2 might need as many as three physical I/Os to
access the right row.
Hash access provides more efficient access to the data. The hashing routine points
directly to the page that the data is on, so the access takes only one I/O. In fact, hashing methods
have been used in IBM and in DB2 (access to DB2 directory) for many years to provide fast
access. DB2 10 introduces a hashing algorithm for access to user data.
As the name suggests, hash access employs the use of a DB2 predetermined hashing
technique to transform the key into a physical location of the row. So, a query with an equal
predicate can almost always access the row with a single getpage request and probably a
single I/O. Hash access provides a reduction in CPU and elapsed time because the CPU
used to compute the row location is small compared with the CPU expended on index tree
traversal and all the getpage requests, together with the reduction in physical I/O.
The table space for a hash table consists of the fixed hash space and the overflow space, as
shown in Figure 13-11.
Figure 13-11 Hash space structure
On each data page, a number of ID map slots are reserved. The hashing algorithm uses the
key of the row to work out which page and ID map entry on the page to use. In the case of an
insert, the data is inserted on the page, and the displacement of that location is written to the
ID map. It is possible that the hashing algorithm can point more than one key to the same
page and ID map entry. When this situation occurs, a pointer record is set up to point to where
the row was placed. The two rows are conceptually chained off the same RID entry. If there is
no space on the data page, then the row is put into the overflow space, and an index entry is
created to point to this row.
The fixed hash space must remain fixed because the hashing algorithm uses this space to
ensure that the calculated target page for the row stays within the fixed area.
Although the two are similar, a hash table is different from an Information Management
System (IMS) HDAM database. IMS data is held in segments. Each segment can equate to a
row in a DB2 table. Each segment is linked to other segments using pointers. The IMS
segments represent a hierarchical structure.
An example of hierarchical structure is a bank that has many customers. Each customer can
have a number of accounts, and each account can have a number of transactions that are
related to each account. Thus, an IMS database can have a Customer segment (the root
segment) with an Account segment under the Customer segment for each account that the
customer holds. Each Account segment can have a number of Transaction segments or can
have no Transaction segments, if there are no transactions.
In an IMS HDAM database, either DFSHDC40 (the default randomizer) or a home-grown one
is used to calculate the root anchor point (RAP) and control interval where the key of the root
segment is to be placed. If there is already a root segment with another key pointed to by this
RAP, the new root segment is inserted in the next available space in the control interval,
and a 4-byte pointer points from the first root to the second, and so on. If the control interval is
full, IMS places the new root segment in the overflow area, which is pointed to from the
root addressable area.
A hash table can exist only in a universal table space, either a partition-by-growth or
partition-by-range table space, as shown in Figure 13-12.
Figure 13-12 Hash access and partitioning
For a range-partitioned table space, the partitioning key columns and hashed columns must
be the same; however, you can specify extra columns to hash. DB2 determines how many
pages are within each partition and allocates rows to partitions based on the partitioning key.
Row location within the partition is determined by the hash algorithm.
For a partition-by-growth table space, DB2 determines how many partitions and pages are
needed. Rows are inserted or loaded into the partition and page according to the hash
algorithm. Rows are, therefore, scattered throughout the entire table space.
Only reordered row format is supported. Basic row format is not supported nor is MEMBER
CLUSTER or the table APPEND option.
A hash table is expected to consume between 1.2 and 2 times the storage to obtain almost
single page access. Single page access is dependent on no rows being relocated to the
overflow space due to lack of space in the fixed hash access space. Thus, there is a trade-off
between extra DASD usage and performance.
Although the unique key is used as the hash key, this key might not be the table's traditional
primary key. A hash table can have other indexes defined.
In addition, hash key column values cannot be updated. The row must be deleted and
reinserted with the new key value.
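For example (hypothetical names), changing a hash key value requires a delete and a reinsert:

```sql
-- UPDATE of the hash key column (ACCOUNT_ID) is not allowed;
-- delete the row and insert it again with the new key value:
DELETE FROM ACCOUNT WHERE ACCOUNT_ID = 12345;
INSERT INTO ACCOUNT (ACCOUNT_ID, BALANCE) VALUES (67890, 100.00);
```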
Hash tables are not a good choice for all applications and should be viewed as yet another
design tool in the DBA toolbox. Use hash tables selectively to solve specific design problems.
In fact, a poor choice for hash access can severely impact performance through death by
random I/O because there is no concept of clustering.
Hash access is a good option for tables with a unique key and a known approximate number
of rows. Although the number of rows can be volatile through heavy inserts, updates, or
deletes, the space that the table space occupies must remain the same. Queries must be
single row access that use equals predicates on the unique key, such as OLTP applications.
Table spaces that have access requirements that are dependent on clustering are poor
choices.
For example, using the banking example that we described earlier, a transaction that selects a
customer's account details using the account_id (the unique key) is a good candidate for hash
access. Sometimes, when you choose a hash organization, some queries perform better and
some perform worse. In this case, consider the overall performance to decide whether a hash
organization is best for you.
A transaction that lists all the transactions for that account within a given period (either by a
BETWEEN clause or < and > clauses) cannot use hash access. An application that accesses a
table only by its unique key is the perfect candidate for hash access; however, these
situations are rare.
Consider the following starting approach to find candidate tables:
1. Find all the tables with unique indexes.
2. Disqualify the tables with small indexes. Target indexes with at least three levels. Large
tables with indexes with many levels have the greater potential for CPU saving.
3. Check the statements against the tables to see if they are using the unique index.
Statements with fully qualified equal predicates that use the unique index keys are good.
For example, if you have three column composite indexes but uniqueness is determined
by the first two columns, two qualified predicates are still acceptable. For joins, as long as
the unique index is used to qualify a row from the inner or outer table, it is still good.
Not fully qualified predicates or non-equal (range) predicates are not suitable. If the
statements use a non-unique index with a high cluster ratio, the table is also a poor choice.
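As a starting point for step 1 and step 2, a catalog query along these lines (a sketch; adjust the filters for your environment) lists unique indexes with at least three levels:

```sql
SELECT CREATOR, NAME, TBNAME, NLEVELS
  FROM SYSIBM.SYSINDEXES
 WHERE UNIQUERULE IN ('U', 'P')   -- unique and primary key indexes
   AND NLEVELS >= 3               -- target larger indexes for greater CPU saving
 ORDER BY NLEVELS DESC;
```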
Choose tables where the maximum size is somewhat predictable, such that the estimated
hash space is also predictable within a range of +/- 15-20%. Choose at least 20% extra space
for the hash space, to allow for efficient hashing. The average row size should not be too
small (less than 30 bytes) or too large (larger than 2000 bytes). Smaller rows can be used
with hash access, but they will require more space. In general, tables with only a few rows per
page are prone to having more overflow rows and therefore do not perform quite as well.

Important: To avoid disappointment and death by random I/O, we suggest that you take
time to thoroughly research table accesses before choosing candidates for hash access.
Hash access works best for truly random access where there is no access that is
dependent on clustering at all. Be warned that you might have access patterns to data that
you do not even know exist.
Determining whether any queries depend on the clustering of the cluster index is not easy.
After you have found a good candidate table, if you want to know whether prefetch was ever
used for the cluster index, you might consider moving the cluster index to a separate buffer
pool to check the use of prefetch.
When a row is updated and increases in size but cannot fit in the original page, the updated
row must be stored in the overflow area. Any time that DB2 hashes to the original page, that
page contains the RID pointing to the row's new location in the overflow area.
When converting to hash access, it is possible to retain the old cluster index as a non-cluster
index, which implies that the cluster ratio will be poor. If the optimizer selects that index, it can
choose list prefetch to prefetch the rows. Furthermore, because the list prefetch access path
destroys the ordering of the index keys, switching to list prefetch can require a sort where
none was needed before. That sort might cause the SQL statement to use more CPU time.
13.15.1 The hashing definitions
You can either create tables for hash access or alter existing universal table spaces, by
specifying the ORGANIZE BY HASH clause on the CREATE TABLE and ALTER TABLE
statements respectively, as shown in Figure 13-13.
Figure 13-13 ORGANIZE BY HASH clause
We examine the hashing clauses in the next sections.
ORGANIZE BY HASH UNIQUE(column-name,...)
This clause defines a list of column names that form the hash key, which is used to determine
where a row will be placed. The columns must be defined as NOT NULL, the number of
columns cannot exceed 64, and the sum of their lengths cannot exceed 255. In addition, the
columns cannot be a LOB, DECFLOAT, or XML data type or a distinct type that is based on
one of these data types. For range-partitioned table spaces, the list of column names must
specify all of the partitioning columns in the same order; however, more columns can be
specified. Remember that for range-partitioned table spaces, the partitioning columns define
the partition number where the row is to be inserted.
HASH SPACE integerK|M|G
This clause specifies the amount of fixed hash space to preallocate for the table. If the table is
a range-partitioned table space, this value is the space for each partition. The default value is
64 MB for a table in a partition-by-growth table space or 64 MB for each partition of a
range-partitioned table space. If you change the hash space using the ALTER TABLE
statement, the hash space value is applied when the table space is reorganized using the
REORG utility.
Do not over-allocate the hash space area so that there is little or no room left in the table
space for the overflow area. When you run out of space in the overflow area, inserts can fail
with a resource unavailable message.
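Continuing the banking example from earlier (the table and column names, and the HASH SPACE value, are hypothetical), a hash table for single-row access by account number might be defined as:

```sql
CREATE TABLE ACCOUNT
  (ACCOUNT_ID  BIGINT NOT NULL,
   CUSTOMER_ID INTEGER NOT NULL,
   BALANCE     DECIMAL(15,2) NOT NULL)
  ORGANIZE BY HASH UNIQUE (ACCOUNT_ID)  -- hash key: the unique account number
  HASH SPACE 256 M;                     -- preallocated fixed hash space
```

An existing table in a universal table space can instead be altered to hash organization (ALTER TABLE with an ADD ORGANIZE BY HASH clause, per the syntax in Figure 13-13), followed by REORG to materialize the hash organization; verify the exact ALTER syntax against the DB2 10 SQL reference.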
You can also specify the hash space at the partition level, as shown by Figure 13-14.
4. The values HN and MH are used to evaluate IN predicates.
The following statements qualify for hash access:
- Simple equal predicates (with literals, host variables, or parameter markers):
  SELECT * FROM EMPTAB WHERE EMPNO = 123
- IN list predicates:
  SELECT * FROM EMPTAB WHERE EMPNO IN (123, 456)
- OR predicates:
  SELECT * FROM EMPTAB WHERE EMPNO = 123 OR EMPNO = 456
- Predicates combined with predicates against other columns in the table (for example,
  DEPTNO):
  SELECT * FROM EMPTAB WHERE EMPNO = 123 AND DEPTNO = 'A001'
- Multi-column hash keys where all columns are specified (for example, a hash key on
  COMPONENT and PARTNO):
  SELECT * FROM PARTTAB WHERE COMPONENT = 'TOWEL' AND PARTNO = 123
  SELECT * FROM PARTTAB WHERE COMPONENT IN ('TOWEL', 'BOOK') AND PARTNO = 123
Hash access is not chosen if all columns in a multi-column hash key are not specified. The
following statements do not qualify for hash access:
SELECT * FROM PARTTAB WHERE COMPONENT = 'TOWEL' OR PARTNO = 123
SELECT * FROM PARTTAB WHERE COMPONENT = 'TOWEL'
SELECT * FROM PARTTAB WHERE COMPONENT = 'TOWEL' AND PARTNO > 123
SELECT * FROM PARTTAB WHERE COMPONENT NOT IN ('TOWEL', 'BOOK') AND PARTNO = 123
Hash access can be chosen for access to the inner table of a nested loop join. However, hash
access is not used in star joins. DB2 also restricts the use of parallelism with hash access in
specific cases. Although hash access itself does not use parallelism, other access paths on a
table organized by hash can use parallelism, if applicable. Parallel update of indexes works as
usual. If hash access is selected for a table in a parallel group, parallelism is not selected for
that parallel group. Parallelism can be selected for other parallel groups in the query.
Used with the correct application, hash access offers the following performance opportunities:
- Faster access to the row using the hash key rather than an index, with fewer
  getpages, fewer I/Os, and CPU saved in searching for the row.
- A possible saving in index maintenance if an existing unique index that is not used for
  sequential processing can be skipped (dropped), allowing for faster insert and delete
  processing. However, drop such an index carefully, by monitoring the real-time
  statistics column SYSINDEXSPACESTATS.LASTUSED for some time.
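For example, before dropping such an index, you might monitor its last use with a query like this sketch (the creator and index names are hypothetical):

```sql
SELECT CREATOR, NAME, LASTUSED
  FROM SYSIBM.SYSINDEXSPACESTATS
 WHERE CREATOR = 'MYSCHEMA'
   AND NAME = 'MYINDEX1';
```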
Be aware of the following considerations:
- LOAD of data into a hash table is slower than into a non-hash table because of the
  non-sorted distribution.
- You might see a possible increase in I/O or buffer pool space in some cases. This
  situation can happen if the active rows were colocated or clustered without hashing, but
  with hashing the rows are spread randomly across the whole hash area. Many more
  pages can be touched, and the working set might no longer fit in the buffer pool,
  increasing the CPU usage, I/O, and elapsed time.
  If you encounter an increase in synchronous I/Os, check your original index access. Does
  it have clustering in the accesses of which you were not aware? You might also consider a
  larger buffer pool size or not using hashed tables.
- Member cluster and clustering indexes cannot be used because the hash determines the
  placement of the rows. Any performance benefits from these clustering methods are lost.
- Performance degrades as space becomes over-used. The degradation should be gradual,
  not dramatic. So, be careful with spaces that increase in size.
Table 13-5 summarizes the new columns and column values to various DB2 catalog tables to
support hash access.
Table 13-5 Catalog table changes for hash access
SYSTABLESPACE.ORGANIZATIONTYPE (CHAR(1) NOT NULL WITH DEFAULT)
Type of table space organization: blank = not known (default); H = hash organization.

SYSTABLESPACE.HASHSPACE (BIGINT NOT NULL WITH DEFAULT)
The amount of space, in KB, specified by the user to be allocated to the table space or
partition. For partition-by-growth table spaces, the space applies to the whole table
space. For range-partitioned table spaces, the space applies to each partition.

SYSTABLESPACE.HASHDATAPAGES (BIGINT NOT NULL WITH DEFAULT)
The total number of hash data pages to be preallocated for the hash space. For
partition-by-growth, this includes all pages in the fixed part of the table space. For
range-partitioned, this is the number of pages in the fixed hash space in each partition
unless it is overridden by providing hash space at the partition level. This value is
calculated by DB2 from HASH SPACE, or at REORG with automatic estimation of space,
and is used in the hash algorithm. The value is zero for non-hash table spaces and for
table spaces that have been altered for hash access but have not been reorganized.

SYSTABLEPART.HASHSPACE (BIGINT NOT NULL WITH DEFAULT)
For partition-by-growth table spaces, this is zero. For range-partitioned table spaces,
this is the amount of space, in KB, specified by the user at the partition level to override
the space specification at the general level. If no override is provided, it is the same as
SYSTABLESPACE.HASHSPACE.

SYSTABLEPART.HASHDATAPAGES (BIGINT NOT NULL WITH DEFAULT)
For partition-by-growth table spaces, this is zero. For range-partitioned table spaces,
this is the number of hash data pages corresponding to SYSTABLEPART.HASHSPACE for
each partition. The value is zero for table spaces that have been altered for hash access
but have not been reorganized.

SYSCOLUMNS.HASHKEY_COLSEQ (SMALLINT NOT NULL WITH DEFAULT)
The column's numeric position within the table's hash key. The value is zero if the
column is not part of the hash key. This column is applicable only if the table uses hash
organization.

SYSTABLES.HASHKEYCOLUMNS (SMALLINT NOT NULL WITH DEFAULT)
The number of columns in the hash key of the table. The value is zero if the row
describes a view, an alias, or a created temporary table.
DB2 follows the normal locking scheme based on the application's or SQL statement's
isolation level and the LOCKSIZE definition of the table space. However, a new lock type of
CHAIN is introduced to serialize hash collision chain updates when the page latch is not
sufficient, such as when the hash collision chain continues into the hash overflow area.
Hash overflow indexes differ from normal indexes in that they do not have an entry
for every row in the table. So, RUNSTATS handles these indexes like XML node indexes and
avoids updating column statistics (SYSCOLUMNS). These indexes are sparse, and any
statistics collected might not be usable.
The following entries continue Table 13-5:

SYSINDEXES.UNIQUERULE (CHAR(1) NOT NULL WITH DEFAULT)
A new value of C is used to enforce the uniqueness of hash key columns.

SYSINDEXES.INDEXTYPE (CHAR(1) NOT NULL WITH DEFAULT)
A value of 2 indicates a Type 2 index or a hash overflow index.

SYSINDEXES.HASH (CHAR(1) NOT NULL WITH DEFAULT)
Whether the index is the hash overflow index for a hash table.

SYSINDEXES.SPARSE (CHAR(1) NOT NULL WITH DEFAULT)
Whether the index is sparse. N = No (default): every data row has an index entry.
Y = Yes: this index might not have an entry for each data row in the table.

SYSTABLESPACESTATS.REORGSCANACCESS (BIGINT)
The number of times data is accessed for SELECT, FETCH, searched UPDATE, or
searched DELETE since the last CREATE, LOAD REPLACE, or REORG. A null value
indicates that the number of times data is accessed is unknown.

SYSTABLESPACESTATS.REORGHASHACCESS (BIGINT)
The number of times data is accessed using hash access for SELECT, FETCH, searched
UPDATE, searched DELETE, or used to enforce referential integrity constraints since the
last CREATE, LOAD REPLACE, or REORG. A null value indicates that the number of times
data is accessed is unknown.

SYSTABLESPACESTATS.HASHLASTUSED (DATE NOT NULL WITH DEFAULT 1/1/0001)
The date when hash access was last used for SELECT, FETCH, searched UPDATE,
searched DELETE, or used to enforce referential integrity constraints.

SYSINDEXSPACESTATS.REORGINDEXACCESS (BIGINT)
The number of times the index was used for SELECT, FETCH, searched UPDATE,
searched DELETE, or used to enforce referential integrity constraints. For hash overflow
indexes, this is the number of times DB2 has used the hash overflow index. A null value
indicates that the number of times the index has been used is unknown.

SYSCOPY.STYPE (data type unchanged)
A new value of H indicates that the hash organization attributes of the table were altered.

SYSCOPY.TTYPE (data type unchanged)
When ICTYPE=W or X and STYPE=H, this column indicates the prior value of
HASHDATAPAGES.
The biggest percentage DB2 CPU reduction is for SELECT SQL calls, followed by
OPEN/FETCH/CLOSE and UPDATE, and then other SQL calls. However, hash tables are not
for general use, for the following reasons:
They might require more space or larger buffer pools in some cases, especially if there is a
small active working set.
They are not suitable for sequential insert (no MEMBER CLUSTER support).
Performance slowly degrades as space becomes overused. Monitor utilization of
space and the number of overflow entries regularly to determine when to REORG.
They take longer to be created, because the entire hash table has to be preformatted.
Utility performance of LOAD is slower, because the rows are loaded according to the hash
key rather than LOAD using the input in clustering order.
Consider the following method as a performance alternative to loading data into a hash
table:
a. CREATE the table as non-hash.
b. LOAD the data.
c. ALTER the table to hash access.
d. REORG the entire table space.
Update of a hash key is not allowed because it requires the row to move physically from
one location to another. This move causes DB2 semantic issues, because you can see the
same row twice or miss it completely. (The same issue existed with updating partitioning
keys.) DB2 returns SQLCODE -151 with SQLSTATE 42808. The row must be deleted and
reinserted with the new key value.
Declared and global temporary tables cannot be defined as hash tables.
REORG SHRLEVEL NONE is not supported on hash tables.
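The LOAD alternative outlined above (create as non-hash, load, alter, reorganize) can be sketched as follows. The table, database, hash key, and HASH SPACE values are hypothetical:

```sql
-- Step a: create the table without hash organization.
CREATE TABLE MYSCHEMA.ACCOUNT
       (ACCT_ID BIGINT NOT NULL,
        BALANCE DECIMAL(15,2))
       IN MYDB.MYTS;

-- Step b: run the LOAD utility against the table (not shown).

-- Step c: alter the table to hash organization.
ALTER TABLE MYSCHEMA.ACCOUNT
      ADD ORGANIZE BY HASH UNIQUE (ACCT_ID)
      HASH SPACE 64 M;

-- Step d: REORG the entire table space to enable hash access,
--         for example: REORG TABLESPACE MYDB.MYTS AUTOESTSPACE YES
```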
13.15.3 Monitoring the performance of hash access tables
If you decide to implement hash access for some tables, then you are most likely interested in
the performance of those tables. The performance of hash tables is sensitive to the size of the
hash space and to the number of rows that flow into overflow. If the fixed hash space is too
small, then performance might suffer. One I/O is required to access the page pointed to by the
hash routine. If there is no room on the page, then one or more I/Os are required to access
the overflow index and another I/O is required to access the data in overflow. So, ongoing
monitoring of space usage is important.
During insert, update, and delete operations, DB2 real time statistics (RTS) maintains a
number of statistics in the SYSTABLESPACESTATS and SYSINDEXSPACESTATS catalog
tables. These statistics are relevant to monitoring the performance of hash access tables.
These values are also used by the DB2 access path selection process to determine if using
hash access is suitable.
SYSIBM.SYSTABLESPACESTATS.TOTALROWS contains the actual number of rows in
the table.
SYSIBM.SYSTABLESPACESTATS.DATASIZE contains the total number of bytes used by
the rows.
SYSIBM.SYSINDEXSPACESTATS.TOTALENTRIES contains the total number of rows
with keys.
TOTALROWS and DATASIZE apply to the whole table space, so HASH SPACE from the
DDL when the hash table was created should be close to DATASIZE. Ideally, TOTALENTRIES
should be less than 10% of TOTALROWS. If TOTALENTRIES is high, then you need to either
ALTER the HASH SPACE and REORG the table space or let DB2 automatically recalculate
the hash space upon next REORG.
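A monitoring query along the following lines can flag hash table spaces whose overflow entries exceed the 10% guideline. This is a sketch; the join between the table space and its hash overflow index statistics is simplified, and the DBNAME predicate is a placeholder:

```sql
-- Flag table spaces where overflow entries exceed 10% of total rows.
SELECT TS.DBNAME, TS.NAME, TS.TOTALROWS, IX.TOTALENTRIES
FROM SYSIBM.SYSTABLESPACESTATS TS,
     SYSIBM.SYSINDEXSPACESTATS IX
WHERE TS.DBNAME = IX.DBNAME
  AND TS.DBNAME = 'MYDB'
  AND IX.TOTALENTRIES > TS.TOTALROWS * 0.10;
```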
The REORG utility is enhanced to support hash access tables. The
AUTOESTSPACE(YES/NO) parameter directs the REORG utility to calculate the size of the
hash space either by using the RTS values (the default) or by using the user-specified HASH
SPACE values stored in SYSTABLESPACE and SYSTABLEPART. Note that automatic space
calculation does not change the catalog values.
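For example, the following REORG statement (a sketch; the table space name is hypothetical) lets DB2 estimate the hash space from the RTS values:

```
REORG TABLESPACE MYDB.MYTS AUTOESTSPACE YES SHRLEVEL REFERENCE
```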
DB2 calculates the size of the hash space (when you specify AUTOESTSPACE YES) by
estimating that about 5%-10% of the rows need to go into overflow. You can influence this by
specifying a FREESPACE percentage for the table space. However, FREEPAGE is ignored.
Although the REORG utility probably cannot remove all overflow entries, run it regularly to
clean up the majority of overflow entries and reset hash chains, reducing the need to look into
overflow. However, if your hash area is sized appropriately, then you might be able to run
REORG less often.
DB2 also maintains the following RTS columns that you need to monitor:
SYSIBM.SYSTABLESPACESTATS.REORGHASHACCESS records the number of times
data is accessed using hash access for SELECT, FETCH, searched UPDATE, searched
DELETE, or used to enforce referential integrity constraints since the last CREATE, LOAD
REPLACE, or REORG.
SYSIBM.SYSINDEXSPACESTATS.REORGINDEXACCESS records the number of times
DB2 has used the hash overflow index for SELECT, FETCH, searched UPDATE, searched
DELETE, or used to enforce referential integrity constraints since the last CREATE, LOAD
REPLACE, or REORG.
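Comparing these two counters shows whether hash access is actually being chosen. A query such as the following sketch lists table spaces where data is still mostly reached by scans rather than hash access:

```sql
-- Table spaces where scan access dominates hash access since the
-- last CREATE, LOAD REPLACE, or REORG.
SELECT DBNAME, NAME, REORGHASHACCESS, REORGSCANACCESS
FROM SYSIBM.SYSTABLESPACESTATS
WHERE REORGHASHACCESS IS NOT NULL
  AND REORGSCANACCESS > REORGHASHACCESS
ORDER BY REORGSCANACCESS DESC;
```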
13.15.4 New SQLCODEs to support hash access
In addition to changes to many existing SQLCODEs, DB2 10 introduces the following
SQLCODEs to support hash access:
-20300 THE LIST OF COLUMNS SPECIFIED FOR THE <clause> CLAUSE IS NOT ALLOWED IN
COMBINATION WITH THE LIST OF COLUMNS FOR THE PARTITIONING KEY FOR THE TABLE
-20487 HASH ORGANIZATION CLAUSE IS NOT VALID FOR <table-name>
-20488 SPECIFIED HASH SPACE IS TOO LARGE FOR THE IMPLICITLY CREATED TABLE
SPACE. REASON <reason-code>. (PARTITION <partition>)
DB2 10 introduces the following utility messages:
DSNU851I csect-name RECORD rid CANNOT BE LOCATED USING HASH ACCESS ON TABLE
table-name , REASON CODE: reason
DSNU852I csect-name RECORD rid FOR PARTITION part-no CANNOT BE LOCATED USING HASH ACCESS
ON TABLE table-name , REASON CODE: reason
DSNU853I csect-name THE LOCATED RECORD rid HAS A DIFFERENT RID OR KEY THAN THE
UNLOADED ROW, REASON CODE: reason
DSNU1174I csect-name PARTITION LEVEL REORG CANNOT BE PERFORMED AFTER
ALTER ADD ORGANIZATION OR ALTER DROP ORGANIZATION
Note: Monitor overflow entries closely.
DSNU1181I csect-name THE HASH SPACE HAS BEEN PREALLOCATED AT x FOR
TABLE table-name
DSNU1182I csect-name - RTS VALUES ARE NOT AVAILABLE FOR ESTIMATING HASH
SPACE VALUE FOR table-name
DSNU1183I csect-name PARTITION LEVEL REORG CANNOT BE PERFORMED AFTER
ALTER ADD ORGANIZATION OR ALTER DROP ORGANIZATION
DSNU2700I csect-name UNLOAD PHASE STATISTICS: NUMBER OF RECORDS=xxxx
(TOTAL) , NUMBER OF RECORDS (OVERFLOW)=yyyy
DB2 10 includes the following reason codes:
00C9062E
Check Data utility identified a rid which represents a row that cannot be located in the
fixed hash space using hash key column values.
00C9062F
Check Data utility identified a rid which represents a row that cannot be located in the
hash overflow space using hash key column values to probe the hash overflow index.
00C90630
Check Data utility identified a rid which represents a row that locates a different rid
using hash key column values.
Finally, IFCID 013 and IFCID 014 record the start and end of a hash scan. See Example A-3 on
page 617.
13.16 Additional non-key columns in a unique index
Consider the situation where you have an index that is used to enforce uniqueness across
three columns, but you need five columns in the index to achieve index-only access on
columns that are not part of the unique constraint during queries. Before DB2 10, you need
to create another index and endure the cost of more CPU time in INSERT/DELETE
processing and increased storage and management costs.
DB2 10 allows you to define additional non-key columns in a unique index, so you do not
need the extra five-column index and the processing cost of maintaining it. DB2 10 in NFM
includes a new INCLUDE clause on the CREATE INDEX and ALTER INDEX statements.
In DB2 9, you need to:
CREATE UNIQUE INDEX ix1 ON t1 (c1,c2,c3)
CREATE UNIQUE INDEX ix2 ON t1 (c1,c2,c3,c4,c5)
In DB2 10, use either:
CREATE UNIQUE INDEX ix1 ON t1 (c1,c2,c3) INCLUDE (c4,c5)
or
ALTER INDEX ix1 ADD INCLUDE (c4);
ALTER INDEX ix1 ADD INCLUDE (c5);
and
DROP INDEX ix2
The INCLUDE clause is valid only for UNIQUE indexes. The extra columns specified in the
INCLUDE clause do not participate in the uniqueness constraint.
Indexes with extra INCLUDE columns can still participate in referential integrity, either by
enforcing uniqueness of primary keys or in referential integrity checking. However, the extra
columns are not used in referential integrity checking.
In addition, INCLUDE columns are not allowed in the following situations:
Non-unique indexes
Indexes on expression
Partitioning index where the limit key values are specified on the CREATE INDEX
statement (that is, index partitioning)
Auxiliary indexes
XML indexes
Hash overflow indexes
You cannot ALTER ADD additional unique columns to an index that has INCLUDE columns.
The index must be dropped and re-created instead.
If you add a column to both the table and the index (using the INCLUDE clause) in the same
unit of work, then the index is placed in advisory REORG-pending (AREO*) state. Otherwise,
the index is placed in rebuild pending (RBDP) state or page set rebuild pending (PSRBD)
state. Packages are not invalidated; however, a rebind or bind is strongly recommended.
Table 13-6 lists the DB2 catalog changes for the INCLUDEd columns.
Table 13-6 Catalog table changes for INCLUDE
Preliminary measurements comparing two indexes against one index with INCLUDE
columns show a 30% CPU reduction in INSERT with the same query performance using the
indexes. There is a trade-off in the performance of INSERT or DELETE versus SELECT.
Depending on how much bigger an INCLUDE index is, there can be a significant retrieval
performance regression, especially for index-only access, as the price of having fewer
indexes to benefit INSERT/DELETE performance.
SYSKEYS.ORDERING (data type unchanged)
Order of the column in the key. A blank indicates that this is an index based on
expressions or that the column is specified for the index using the INCLUDE clause.

SYSINDEXES.UNIQUE_COUNT (SMALLINT NOT NULL WITH DEFAULT)
The number of columns or key targets that make up the unique constraint of an index,
when other non-constraint-enforcing columns or key targets exist. Otherwise, the value
is zero.
The open page set lock and unlock of the table space during index-only access is eliminated,
which has a bigger impact when INDEX INCLUDE is exploited to promote more index-only
access.
A query can be issued against the catalog for identifying possible candidate columns to be
evaluated for inclusion. See Example 13-3 and note the comments in the disclaimer.
Example 13-3 Query to identify indexes for possible INCLUDE
//CONSOLQZ JOB 'USER=$$USER','<USERNAME:JOBNAME>',CLASS=A,
// MSGCLASS=A,MSGLEVEL=(1,1),USER=ADMF001,REGION=0M,
// PASSWORD=C0DECODE
/*ROUTE PRINT STLVM.MS
//**********************************************************************
//*
//* DISCLAIMER:
//* - THIS QUERY IDENTIFIES INDEXES THAT HAVE THE SAME LEADING
//* COLUMNS IN THE SAME ORDER AS A SUBSET INDEX THAT IS UNIQUE.
//* WHILE IT DOES MATCH INDEXES WITH COLUMNS IN THE SAME
//* SEQUENCE, IT:
//* - DOES NOT CONSIDER THE ORDERING ATTRIBUTE OF EACH
//* COLUMN (ASC, DESC OR RANDOM)
//* - DOES NOT CONSIDER IF A SUBSET/SUPERSET INDEX IS
//* UNIQUE WHERE NOT NULL
//* - DOES NOT CONSIDER WHETHER THE INDEXES ARE PARTITIONED OR
//* NOT
//* - DOES NOT CONSIDER THE IMPACT DUE TO REMOVING THE EXISTING
//* INDEXES AND USING THE NEW ONES
//* - THEREFORE, USE THIS AS A GUIDE FOR INDEXES
//* THAT SHOULD BE CONSIDERED, BUT NOT GUARANTEED, TO BE
//* CONSOLIDATED INTO A UNIQUE INDEX WITH INCLUDE COLUMNS.
//*
//*********************************************************************
//STEP1 EXEC TSOBATCH,DB2LEV=DB2A
//SYSTSIN DD *
DSN SYSTEM(VA1A)
RUN PROGRAM(DSNTEP3)
//SYSIN DD *
SELECT CASE WHEN SK.COLSEQ = 1 THEN
CAST(A.DBID AS CHAR(5))
ELSE
' '
END AS DBID,
CASE WHEN SK.COLSEQ = 1 THEN
SUBSTR(STRIP(A.CREATOR, TRAILING) CONCAT
'.' CONCAT STRIP(A.NAME,TRAILING),1,27)
ELSE ' '
END AS SIMPLEST_UNIQUE_IX,
CASE WHEN SK.COLSEQ = 1 THEN
CAST(A.OBID AS CHAR(5))
ELSE
' '
END AS OBID,
CASE WHEN SK.COLSEQ = 1 THEN
SUBSTR(STRIP(B.CREATOR, TRAILING) CONCAT
'.' CONCAT STRIP(B.NAME,TRAILING),1,27)
ELSE
' '
END AS IX_WITH_CAND_INCLUDE_COLS,
CASE WHEN SK.COLSEQ = 1 THEN
CAST(B.OBID AS CHAR(5))
ELSE
' '
END AS OBID,
CASE WHEN SK.COLSEQ <= A.COLCOUNT
AND (SK.COLSEQ <= A.UNIQUE_COUNT OR
A.UNIQUE_COUNT = 0) THEN
'UNIQUE'
ELSE
'INCLUDE'
END AS COLTYPE,
SUBSTR(SK.COLNAME,1,27) AS COLNAME
FROM SYSIBM.SYSINDEXES A,
SYSIBM.SYSINDEXES B,
SYSIBM.SYSKEYS SK
WHERE A.UNIQUERULE IN ('U', 'N', 'P', 'R')
AND B.UNIQUERULE IN ('U', 'N', 'P', 'R', 'D')
AND (A.UNIQUERULE <> 'N' OR
(A.UNIQUERULE = 'N' AND B.UNIQUERULE IN ('D', 'N')))
AND NOT (A.CREATOR = B.CREATOR
AND A.NAME = B.NAME)
AND A.TBCREATOR = B.TBCREATOR
AND A.TBNAME = B.TBNAME
AND SK.IXNAME = B.NAME
AND SK.IXCREATOR = B.CREATOR
AND A.COLCOUNT <= B.COLCOUNT
AND A.COLCOUNT =
(SELECT COUNT(*)
FROM SYSIBM.SYSKEYS X,
SYSIBM.SYSKEYS Y
WHERE X.IXCREATOR = A.CREATOR
AND X.IXNAME = A.NAME
AND Y.IXCREATOR = B.CREATOR
AND Y.IXNAME = B.NAME
AND X.COLNAME = Y.COLNAME
AND X.COLSEQ = Y.COLSEQ)
ORDER BY A.CREATOR, A.NAME, B.CREATOR, B.NAME, SK.COLSEQ
WITH UR;
/*
//
Example 13-4 lists the sample output.
Example 13-4 Possible INCLUDE candidates
+------------------------------------------------------------------------------------------------------+
| DBID | SIMPLEST_UNIQUE_IX | OBID | IX_WITH_CAND_INCLUDE_COLS | OBID | COLTYPE | COLNAME |
+------------------------------------------------------------------------------------------------------+
1_| 274 | ADMF001.IX1 | 4 | ADMF001.IX2 | 6 | UNIQUE | C1 |
2_| | | | | | UNIQUE | C2 |
3_| | | | | | UNIQUE | C3 |
4_| | | | | | INCLUDE | C4 |
5_| | | | | | INCLUDE | C5 |
+------------------------------------------------------------------------------------------------------+
13.17 DB2 support for solid state drive
A solid state drive (SSD) is a storage device that stores data on solid-state flash memory
rather than traditional hard disks. An SSD contains electronics that enable it to emulate a
hard disk drive (HDD). These drives have no moving parts and use high-speed memory, so
they are fast and energy-efficient. They have been enhanced to be usable by enterprise-class
disk arrays, where they are treated like any HDD.
SSDs are available and can be used for allocating table space or index data sets just like
HDDs. The use of SSDs for enterprise storage is transparent to both DB2 and z/OS; however,
measurements have shown that SSDs can improve DB2 query performance by two to eight
times as the result of a faster I/O rate than HDDs. Other DB2 functions are also affected by
the drive type.
The need to REORG to improve query performance is reduced when the table space is on
SSD.
Refer to Ready to Access DB2 for z/OS Data on Solid-State Drives, REDP-4537.
DB2 10 tracks the device type on which the page sets reside. The column DRIVETYPE is
added to the DB2 catalog tables SYSIBM.SYSTABLESPACESTATS and
SYSIBM.SYSINDEXSPACESTATS.
Table 13-7 describes the new column. The DRIVETYPE value is inserted during CREATE. If
the drive type changes after page sets are created, the value is updated synchronously by
utilities. The value is also updated asynchronously by the RTS service task when the page
sets are physically opened or go through extent processing or EOV processing. Finally, the
value can also be updated manually by SQL.
Table 13-7 Column DRIVETYPE
For multi-volume or striped data sets, the drive type is set to HDD if any volume is on HDD.
Whether the data sets are DB2 managed or user managed, we recommend that all volumes
that can contain secondary extents have the same drive type as the primary extent volume
and that all pieces of a multi-piece data set be defined on volumes with the same drive type.
Column DRIVETYPE provides a catalog column that can be referenced to determine the
drive type on which a data set is defined. The column is used by the stored procedure,
DSNACCOX, when deciding if a REORG is warranted.
The stored procedure DSNACCOX is enhanced to consider the drive type that the data set is
on when making REORG recommendations. When the data set is defined on an SSD, and
queries have been run that are sensitive to clustering (REORGCLUSTERSENS not equal to
0), the threshold value is changed to two times the current unclustered insert percentage
DRIVETYPE (CHAR(3) NOT NULL WITH DEFAULT 'HDD')
Drive type that the table space or index space data set is defined on:
HDD = hard disk drive; SSD = solid state drive.
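For example, a query such as the following sketch lists the page sets that RTS has recorded as residing on SSD:

```sql
-- Table spaces whose data sets are recorded as solid state drive.
SELECT DBNAME, NAME, PARTITION, DRIVETYPE
FROM SYSIBM.SYSTABLESPACESTATS
WHERE DRIVETYPE = 'SSD';
```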
(RRTUnclustInsPctreorg). In a future release of DB2, the cost-based optimizer can also
consider the drive type in access path selection.
13.18 Extended support for the SQL procedural language
The SQL Procedural Language (SQL PL) is the language used to develop SQL procedures
for the DB2 family. DB2 9 introduced native SQL procedures, which execute natively (without
the need for a compiled C module) inside the DBM1 address space rather than in a WLM
stored procedure address space.
DB2 10 extends SQL PL by allowing you to develop and debug SQL scalar functions written
in SQL PL. Prior to DB2 10 NFM, an SQL scalar function is restricted to a single statement
that returns the result of an expression defined within the function. This type of function is
not associated with a package, because the expression is inlined into the invoking SQL.
DB2 10 introduces non-inline scalar functions,5 which can also be used to contain SQL PL
logic. DB2 10 also introduces support for simple SQL PL table functions.6
NFM also brings enhancements to native SQL procedures, introduced in version 9. Together
with these functional enhancements, a number of performance optimizations are also
introduced which bring significant CPU reduction to a number of commonly used areas. The
SQL PL assignment statement, SET, is extended to allow you to set multiple values with a
single SET statement. This is similar to the existing support for the SET :host-variable statement.
Figure 13-15 shows the syntax of the SET statement. Reference to any SQL variables or
parameters in the multiple assignment statement always use the values that were present
before any assignment is made. Similarly, all expressions are evaluated before any
assignment is made.
Figure 13-15 SQL PL multiple assignment statement
5. See 6.1, Enhanced support for SQL scalar functions on page 126.
6. See 6.2, Support for SQL table functions on page 144.
The SET statement has the following general forms:

SET transition-variable = { expression | DEFAULT | NULL } [, ...]

SET ( transition-variable [, ...] ) = ( { expression | DEFAULT | NULL } [, ...] )   (1)

SET ( transition-variable [, ...] ) = VALUES { expression | DEFAULT | NULL }
SET ( transition-variable [, ...] ) = VALUES ( { expression | DEFAULT | NULL } [, ...] )

Notes:
1 The number of expressions, DEFAULT, and NULL keywords must match the number of
transition-variables.
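For example, the following SQL PL fragment (a sketch) swaps two variables in one statement. Because all right-hand expressions are evaluated before any assignment is made, no temporary variable is needed:

```sql
-- Inside an SQL procedure body:
DECLARE V1, V2 INTEGER;
SET (V1, V2) = (1, 2);
SET (V1, V2) = (V2, V1);  -- V1 is now 2, V2 is now 1
```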
Multiple assignments, together with other optimizations, combine to reduce CPU consumption
in commonly used statements in SQL PL procedures. Other SQL PL optimizations include
path length reduction when evaluating IF statements and, most significantly, CPU reduction
for the SET statement with a function.
Preliminary performance results for an OLTP workload using SQL PL show a 20% reduction
in CPU and a 5% improvement in response time, running in DB2 10 in CM. The SQL
procedures need to be regenerated first.
13.19 Preemptable backout
Prior to DB2 10, abort and backout processing was scheduled under a non-preemptable
SRB. On a single-CPU system, this processing can give the impression that DB2 is hung or
looping. Now, DB2 creates an enclave. This enclave is used to classify the work and to enable
preemptable SRB backout processing for must-complete processing, which includes DB2
abort, rollback, or backout processing and commit phase 2 processing.
Preemptable enclaves allow the z/OS dispatcher and workload manager to interrupt these
tasks for more important tasks, whereas a non-preemptable unit of work can also be
interrupted but must receive control after the interrupt is processed.
13.20 Eliminate mass delete locks for universal table spaces
The mass delete lock is used to serialize a mass delete operation with an insert at the table
level. Mass delete locks are used for classic partitioned table spaces in V8, DB2 9, and
DB2 10. The mass delete process gets an exclusive mass delete lock. Then, insert (if a user
wanted to allocate a deallocated segment of pages) asks for a shared mass delete lock to
make sure that the mass delete that released the segment is committed, eliminating any
possibility that the mass delete can roll back and reallocate the segment. The mass delete
lock is also used to serialize uncommitted readers on indexes with a mass delete.
DB2 10 avoids the need for a mass delete lock to be held for universal table spaces.
Serialization is performed through a mass delete timestamp recorded in the header page.
In data sharing, for other members of a data sharing system to see an update to the mass
delete time stamp, the data set must now be closed and re-opened. In DB2 10, a mass delete
causes a UTS to be drained, which causes the table space to be closed on the other
members.
However, a mass delete on a classic segmented table space still must use a mass delete lock
and an X-lock on the table, instead of a drain. Thus, the data set is not closed and, therefore,
other members might not see the updated mass delete time stamp. This situation presents no
problem.
Prerequisite: You need to be in CM and recreate or regenerate SQL PL procedures to
take full advantage of all of these performance enhancements.
13.21 Parallelism enhancements
DB2 10 includes the following changes to enhance query parallelism performance:
Remove some restrictions
Improve the effectiveness of parallelism
Straw model for workload distribution
Sort merge join improvements
You can take advantage of these parallelism enhancements in CM; however, a rebind or bind
is required.
13.21.1 Remove some restrictions
Two significant restrictions are removed. DB2 now allows parallelism for multi-row fetch
queries when the cursor is read only. Parallelism is also allowed if a parallel group contains a
work file created from view or table expression materialization.
Support for parallelism in multi-row fetch
In previous releases, parallelism is disabled when multi-row fetch is used for simple queries
such as SELECT * FROM TABLE T1. You were, therefore, forced to choose between
parallelism and multi-row fetch to improve the performance of such queries. You could not
have both.
DB2 10 removes this restriction for multi-row fetch read-only queries only. However,
parallelism is still not allowed for multi-row fetch ambiguous queries.7
Support for parallelism when a parallel group contains a work file
In previous versions, when an inner work file is used and when the number of partitioning
keys and sort keys are different or they have different data types or lengths, then either
parallelism is disabled or the inner work file is not partitioned. When the inner work file is not
partitioned, then each parallel child task must scan through the whole work file which
consumes extra CPU.
In DB2 10, if the join is not a full outer join, DB2 does not attempt to partition the inner work
file. DB2 partitions only the outer table and builds a sparse index on the inner work file. Each
parallel child task then uses the sparse index to position on the starting point to commence its
processing.
Probing the sparse index for positioning is almost like having the work file partitioned. In this
way, DB2 can exploit parallelism more often. However, this enhancement applies only to CP
parallelism.
DB2 generates temporary work files when a view or table expression is materialized;
however, parallelism was disabled for these work files. DB2 10 now supports parallelism if a parallel
group contains a work file created from view or table expression materialization. However,
only CP parallelism is supported.
DB2 also generates temporary work files for sort merge join (SMJ), which also benefit from
parallelism improvements in work files. See 13.21.4, Sort merge join improvements on
page 597 for information about these improvements.
7. A query, or more precisely a cursor, is considered ambiguous if DB2 cannot tell whether it
is used for update or read-only purposes.
13.21.2 Improve the effectiveness of parallelism
The DB2 optimizer currently chooses the lowest-cost sequential access plan and then
attempts to parallelize it. So, sometimes the best sequential access plan is not the most
efficient access plan for parallelism.
To evaluate parallelism, DB2 chooses key ranges that are decided at bind time by the
optimizer based on statistics (LOW2KEY, HIGH2KEY, and column cardinality) and the
assumption of uniform data distribution between LOW2KEY and HIGH2KEY. This makes
DB2 dependent on the availability and accuracy of the statistics. Also, the assumption of
uniform data distribution does not always hold. Figure 13-16 shows an example.
Figure 13-16 Key range partitioning
DB2 normally partitions on the leading table (Medium_T in this example) using page range or
key range based on catalog statistics. Here, although at bind time DB2 chose a parallelism
degree of three, the majority of the qualifying rows in the leading table are in only one
parallelism group.
In previous releases, DB2 might not have had effective parallelism because of the following
situations:
Parallelism might be disabled because the leading table was a work file.
The leading table only has one row.
Parallelism might not be chosen for an I/O bound query if the leading table is not
partitioned.
The degree of parallelism might be limited by the column cardinality. For example, if the
matching column in the leading table has a cardinality of two, then the degree of
parallelism will be limited to two.
For page range partitioning, the degree of parallelism is limited to the number of pages in the
leading table.
The figure depicts the following query, joining Medium_T (10,000 rows) with Large_T
(10,000,000 rows):

SELECT *
FROM Medium_T M, Large_T L
WHERE M.C2 = L.C2
AND M.C1 BETWEEN (CURRENTDATE-90) AND CURRENTDATE

The 2,500 qualifying Medium_T rows are sorted on C2 into a work file, which is partitioned
into key ranges for 3-degree parallelism. M.C1 is a date column; assuming the current date is
08-31-2007, after the BETWEEN predicate is applied only rows with dates between
06-03-2007 and 08-31-2007 survive, but the optimizer chops up the key ranges within the
whole year after the records are sorted.
To overcome this issue, DB2 10 introduces the concept of dynamic record range based
partitioning. DB2 materializes the intermediate result at some point in the sequence of the
join process, and the result is divided into ranges with an equal number of records.
Figure 13-17 shows the same example but using dynamic record based partitioning.
Figure 13-17 Dynamic record based partitioning
Partitioning now takes place on the work file after it is sorted. Each parallel task now has the
same number of rows to process.
This division does not have to be on the key boundary unless it is required for group by or
distinct function. Record range partitioning is dynamic because partitioning is no longer
based on the key ranges decided at bind time. Instead, it is based on the number of
composite side records and the number of workload elements. All the problems associated
with key partitioning, such as the limited number of distinct values, lack of statistics, data skew
or data correlation, are bypassed and the composite side records are distributed evenly.
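The contrast between bind-time key-range partitioning and dynamic record range partitioning can be sketched as follows (a simplified Python illustration, not DB2 internals; the data values and key boundaries are invented for the example):

```python
# Illustrative sketch: contrast static key-range partitioning with
# dynamic record-range partitioning of a sorted work file.

def key_range_partition(rows, boundaries):
    """Split sorted rows into ranges using fixed key boundaries decided 'at bind time'."""
    parts = [[] for _ in range(len(boundaries) + 1)]
    for key in rows:
        for i, b in enumerate(boundaries):
            if key <= b:
                parts[i].append(key)
                break
        else:
            parts[-1].append(key)
    return parts

def record_range_partition(rows, degree):
    """Split sorted rows into 'degree' ranges with (near-)equal record counts."""
    base, extra = len(rows) // degree, len(rows) % degree
    parts, start = [], 0
    for i in range(degree):
        size = base + (1 if i < extra else 0)
        parts.append(rows[start:start + size])
        start += size
    return parts

# Skewed data: most qualifying keys fall into one key range.
rows = sorted([5] * 800 + list(range(100, 300)))
static = key_range_partition(rows, boundaries=[99, 199])
dynamic = record_range_partition(rows, degree=3)
print([len(p) for p in static])   # [800, 100, 100] - badly skewed
print([len(p) for p in dynamic])  # [334, 333, 333] - even
```

With skewed data, fixed key boundaries leave one parallel task with most of the rows, while equal-record ranges distribute the composite side evenly regardless of the key distribution.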
13.21.3 Straw model for workload distribution
Prior to DB2 10, DB2 divides the number of keys or pages to be processed by the number
representing the parallel degree. One task is allocated per degree of parallelism; each range
is processed, and the task ends. So, parallel tasks can take different times to complete.
DB2 might, therefore, have bad workload balancing, as depicted by Figure 13-16 on
page 594, caused by lack of statistics or data skew. Even with dynamic record based
partitioning on the leading table, a workload can go out of balance through repeated nested
joins.
DB2 10 introduces a new straw model workload distribution method, as depicted by
Figure 13-18. More key or page ranges are allocated than the number of parallel degrees.
The same number of parallel processing tasks are allocated as before (the same as the
degree of parallelism). However, after a task finishes its smaller range, it does not terminate,
but it processes another range until all ranges are processed. Even if data is skewed, this
process should make processing faster, because all parallel processing tasks finish at about
the same time.
Figure 13-18 Workload balancing straw model
The degree of parallelism for both examples is 3. In the old method (on the left), the key
ranges are divided into three tasks, but the middle task takes the longest time to process
because it has far more rows to process. In the straw model (on the right), the key ranges are
split into 10, even though the degree of parallelism is still 3. The three tasks are allocated
smaller key ranges, and when each task finishes, it processes another key range. In the
old method, the second task processed most of the rows, but in the straw model all three
tasks share the work, making the query run quicker.
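The effect described above can be sketched with a simple scheduling simulation (illustrative Python, not DB2 internals; the range costs are invented):

```python
# Illustrative sketch: with a skewed range, static assignment of one range
# per task is bounded by the slowest range, while the straw model lets each
# of the 3 tasks pull the next small range as soon as it finishes.
import heapq

def elapsed_time(range_costs, degree):
    """Greedy simulation: each of 'degree' tasks takes the next range
    from the shared list as soon as it becomes free."""
    finish = [0.0] * degree          # per-task finish times
    heapq.heapify(finish)
    for cost in range_costs:
        t = heapq.heappop(finish)    # first task to become free
        heapq.heappush(finish, t + cost)
    return max(finish)

# Old method: key range split into 3; the middle range is badly skewed.
print(elapsed_time([10, 80, 10], degree=3))              # 80.0

# Straw model: same total work split into 10 smaller ranges, still degree 3.
print(elapsed_time([10] * 10, degree=3))                 # 40.0
```

The total work is 100 units in both cases, but the straw model's smaller ranges let all three tasks finish at about the same time instead of waiting on the skewed range.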
You can see whether the straw model is used by viewing the DSN_PGROUP_TABLE
EXPLAIN table. Table 13-8 shows the new column.
Table 13-8 DSN_PGROUP_TABLE changes
The straw model is a new concept for providing better workload distribution in SQL
parallelism. An IFCID 363 is introduced to monitor the use of the straw model in parallelism.
IFCID 363 is started through Performance Trace class 8. The contents of IFCID 363 are listed
in Example A-11 on page 623.
Column name: DSN_PGROUP_TABLE.STRAW_MODEL
Data type: CHAR(1) NOT NULL WITH DEFAULT
Description: Y: The parallel group is processed in straw model. N: The parallel group is not
processed in straw model (default value).
(Figure 13-18 content: SELECT * FROM Medium_T M WHERE M.C1 BETWEEN 20 AND 50,
against Medium_T (10,000 rows; columns C1, C2; index on C1). On the left, the old method
splits the key range 20-50 into three ranges for degree = 3; on the right, the straw model
splits the same key range into 10 smaller ranges while the degree is still 3.)
13.21.4 Sort merge join improvements
Prior to DB2 10, DB2 uses the same key range to partition both the outer table and the inner
table for a sort merge join (SMJ). However, in some cases, the inner work file cannot be
partitioned. See "Support for parallelism when a parallel group contains a work file" on
page 593 for a more detailed discussion. DB2 10 builds a sparse index for the inner work file
and uses the index to position on the starting point for each child parallel task before starting
normal SMJ processing. Even if parallelism is disabled at run time, DB2 still uses the sparse
index to access the work file.
Table 13-9 shows the two new columns in the PLAN_TABLE.
Table 13-9 Sort merge join PLAN_TABLE changes
13.22 Online performance buffers in 64-bit common
31-bit ECSA has been a precious resource in versions prior to DB2 10, and member
consolidation in DB2 10 is likely to further increase demand for 31-bit ECSA. In DB2 10, the
Instrumentation Facility Component (IFC) is a significant exploiter of 64-bit common storage.
DB2 online performance buffers have moved to 64-bit common, and the buffers now support
a maximum size of 64 MB. The -START TRACE command keyword BUFSIZE therefore
allows a maximum value of 67108864; any larger value is interpreted as 67108864. The
minimum and default sizes are also increased to 1 MB (originally 16 MB and 256 KB,
respectively).
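The documented BUFSIZE limits can be summarized in a small sketch (the clamping of values below the minimum is an assumption; the text states only the maximum, minimum, and default):

```python
# Sketch of the documented -START TRACE BUFSIZE handling: values above the
# 64 MB maximum are interpreted as the maximum; the minimum and default are
# 1 MB. (Raising sub-minimum values to the minimum is an assumption.)
MIN_BUFSIZE = 1 * 1024 * 1024        # 1 MB
MAX_BUFSIZE = 67108864               # 64 MB

def effective_bufsize(requested=None):
    if requested is None:
        return MIN_BUFSIZE           # default is also 1 MB
    return max(MIN_BUFSIZE, min(requested, MAX_BUFSIZE))

print(effective_bufsize(100_000_000))  # 67108864
print(effective_bufsize())             # 1048576
```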
We also suggest that you define additional 64-bit common storage to accommodate other
potential exploiters. If a monitor application issues group-scope IFI requests, up to 128 MB of
data times the number of members in the group can be returned to the requesting member.
That LPAR should be configured to accommodate this potential spike in the real storage
required to manage these requests.
If the shared virtual space remaining is less than 128 GB or if there is less than 6 GB of 64-bit
COMMON storage available for this DB2, DB2 abends with reason code 00E8005A and
issues the following message:
DSNY011I csect_name z/OS DOES NOT HAVE SUFFICIENT 64-BIT SHARED AND
COMMON STORAGE AVAILABLE
Column Name Data type Description
PLAN_TABLE.MERGEC CHAR(1)
NOT NULL WITH DEFAULT
Y: The composite table is
consolidated before join.
N: The composite table is not
consolidated before join (default
value).
PLAN_TABLE.MERGEN CHAR(1)
NOT NULL WITH DEFAULT
Y: The new table is consolidated
before join.
N: The new table is not
consolidated before join (default
value).
Minimum requirements: Verify that you have a minimum of 128 GB of SHARED storage
and 6 GB of 64-bit COMMON storage for each DB2 subsystem that you plan to start on
each MVS image.
13.23 Enhanced instrumentation
DB2 instrumentation also includes the following enhancements in DB2 10:
One minute statistics trace interval
Statistics trace records are generated to SMF every minute, regardless of what you
specify in DSNZPARM.
IFCID 359 for index leaf page split
A new IFCID to help you to monitor index leaf page splits.
New monitor class for statement detail level monitoring
A new monitor class to monitor SQL statements at the system-wide level rather than the
thread level.
Separate DB2 latch and transaction lock in Accounting class 8
DB2 10 externalizes both types of waits in two different fields.
Storage statistics for DIST address space
You can now monitor storage used by the DIST address space.
Accounting: zIIP SECP values
zAAP on zIIP and zIIP SECP values.
Package accounting information with rollup
Package accounting statistics are also rolled up.
Compression of SMF records
An MVS function can be invoked to compress DB2 SMF records.
DRDA remote location statistics detail
More granularity in monitoring DDF locations.
13.23.1 One minute statistics trace interval
As processors get faster, more happens in a shorter time. For example, a uniprocessor z10
executes the equivalent of roughly 60 billion instructions in a single minute. The current
default statistics interval of 5 minutes is too long to see trends occurring in the system.
To help capture events that happen inside DB2 for performance or problem diagnosis,
DB2 10 always generates an SMF type 100 trace record (statistics trace) at a one-minute
interval, no matter what you specify in the STATIME DSNZPARM parameter. This setting
applies to selected statistics records that are critical for performance problem diagnosis.
This trace interval should not severely impact SMF data volumes, because only 1359 records
are produced per DB2 subsystem per day.
13.23.2 IFCID 359 for index leaf page split
Index leaf page splits can cause performance problems, especially in a data sharing
environment. So, DB2 10 introduces a new trace record, IFCID 359, to help you to monitor
leaf page splits. IFCID 359 records when index page splits occur and on what indexes. IFCID
359 is discussed in more detail in 5.8, IFCID 359 for index split on page 119.
13.23.3 Separate DB2 latch and transaction lock in Accounting class 8
Prior to DB2 10, plan and package accounting IFCID data does not differentiate between the
time threads wait for locks and the time threads wait for latches. Accounting records produce
a single field with the total for both types of wait time reported.
DB2 10 externalizes both types of waits in two different fields in all relevant IFCID records,
IFCID 3, IFCID 239, and IFCID 316.
OMEGAMON PE V5.1 externalizes both counters in both the Accounting Detail report and
Accounting Trace, as shown in Figure 13-19.
Figure 13-19 Accounting suspend times
Prior to DB2 10, field IRLM LOCK+LATCH shows the accumulated elapsed time spent by the
package or DBRM waiting for lock and latch suspensions. DB2 10 breaks this counter down.
Field IRLM LOCK+LATCH shows the accumulated elapsed time waiting in IRLM for locks and
latches. Field DB2 LATCH now records the accumulated elapsed time waiting for latches in
DB2.
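The relationship between the three report fields can be checked with the values shown in Figure 13-19:

```python
# Checking the class 8 suspension breakdown from the sample report:
# total LOCK/LATCH = IRLM LOCK+LATCH + DB2 LATCH.
lock_latch_total = 5.954479   # LOCK/LATCH (seconds)
irlm_lock_latch  = 5.000000   # time waiting in IRLM for locks and latches
db2_latch        = lock_latch_total - irlm_lock_latch
print(round(db2_latch, 6))    # 0.954479
```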
13.23.4 Storage statistics for DIST address space
DB2 Version 7 introduced two IFCIDs to help manage and monitor the virtual storage usage
in the DBM1 address space. IFCID 225 provides summary storage information, and IFCID
217 provides more detailed information. Both IFCIDs have become key components for
performance analysis and system tuning.
DB2 10 restructures IFCID 225 to take into account the DB2 10 memory mapping and 64-bit
addressing. The trace record is also now divided into data sections for a more logical
grouping of data. IFCID 225 also contains information about storage in the DIST address
space.
IFCID 217 is also overhauled; data that duplicated IFCID 225 is removed. Enable both
IFCID 225 and IFCID 217 to generate a detailed system storage profile. However, in most
cases IFCID 225 is sufficient for general monitoring and reporting.
Example A-5 on page 618 lists the restructured IFCID 225.
Report: Trace:
PACKAGE AVERAGE TIME AVG.EV TIME/EVENT DSNTEP2 TIME EVENTS TIME/EVENT
------------------ ------------ ------ ------------ ------------------ ------------ ------ ------------
LOCK/LATCH 5.954479 23.6K N/C LOCK/LATCH 0.000000 0 N/C
IRLM LOCK+LATCH 5.000000 23.0K IRLM LOCK+LATCH 0.000000 0 N/C
DB2 LATCH 0.954479 0.6K DB2 LATCH 0.000000 0 N/C
SYNCHRONOUS I/O 1:03:11.9093 557.7K 0.033630 SYNCHRONOUS I/O 0.000000 0 N/C
OTHER READ I/O 58:53.212253 260.8K N/C OTHER READ I/O 0.000000 0 N/C
OTHER WRITE I/O 0.000000 0.00 N/C OTHER WRITE I/O 0.000000 0 N/C
SERV.TASK SWITCH 1.148303 6.00 0.887426 SERV.TASK SWITCH 0.000191 2 0.000095
ARCH.LOG(QUIESCE) 0.000000 0.00 N/C ARCH.LOG(QUIESCE) 0.000000 0 N/C
ARCHIVE LOG READ 0.000000 0.00 N/C ARCHIVE LOG READ 0.000000 0 N/C
DRAIN LOCK 0.000000 0.00 N/C DRAIN LOCK 0.000000 0 N/C
CLAIM RELEASE 0.000000 0.00 N/C CLAIM RELEASE 0.000000 0 N/C
PAGE LATCH 0.000000 0.00 N/C PAGE LATCH 0.000000 0 N/C
NOTIFY MESSAGES 0.000000 0.00 N/C NOTIFY MESSAGES 0.000000 0 N/C
GLOBAL CONTENTION 0.000000 0.00 N/C GLOBAL CONTENTION 0.000000 0 N/C
TCP/IP LOB 0.000000 0.00 N/C TCP/IP LOB 0.000000 0 N/C
TOTAL CL8 SUSPENS. 2:02:12.2243 842.1K 0.318229 TOTAL CL8 SUSPENS. 0.000191 2 0.000095
OMEGAMON PE V5.1 externalizes the new DIST address space storage statistics in two new
trace blocks in the Statistics Short and Long reports, as shown in Figure 13-20 and
Figure 13-21.
Figure 13-20 Statistics, DIST storage above 2 GB
Figure 13-21 Statistics, DIST storage below 2 GB
13.23.5 Accounting: zIIP SECP values
z/OS V1.11 is enhanced with a new function that can enable System z Application Assist
Processor (zAAP) eligible workloads to run on System z Integrated Information Processors
(zIIPs). This function can enable you to run zIIP- and zAAP-eligible workloads on the zIIP.
This capability is ideal for customers without enough zAAP or zIIP-eligible workload to justify
a specialty engine today. The combined eligible workloads can make the acquisition of a zIIP
cost effective. This capability is also intended to provide more value for customers having only
zIIP processors by making Java and XML-based workloads eligible to run on existing zIIPs.
This capability is available with z/OS V1.11 (and z/OS V1.9 and V1.10 with the PTF for APAR
OA27495) and all System z9 and System z10 servers. This capability does not provide an
overflow so that additional zAAP-eligible workload can run on the zIIP; rather, it enables
zAAP-eligible work to run on the zIIP when no zAAP is defined.
PK51045 renames the redirection-eligible zIIP CPU time (IIPCP CPU TIME) to SECP CPU
and the consumed zIIP CPU time (IIP CPU TIME) to SE CPU TIME, to more accurately
reflect what they are: specialty engine times, which can have both zAAP and zIIP
components. These fields are externalized in the OMEGAMON PE DB2 Accounting reports.
Beginning with DB2 10, the possible redirection value SECP, which used to indicate either the
estimated redirection (PROJECTCPU) or the zIIP overflow to CP when the zIIP is too busy, is
no longer supported and is always zero. However, the actual redirected CPU time continues
to be available in the SE CPU TIME field. The reason is that z/OS cannot provide possible
redirection values for zIIP-on-zAAP engines to DB2, so the value cannot represent all the
specialty engines as the name implies. SECP CPU is still reported for DB2 9 and earlier;
however, it lists only the possible redirection for zIIP-eligible processes. SE CPU TIME
remains the actual CPU time consumed on both zAAP and zIIP processors.
(Figure 13-20 fields, DIST STORAGE ABOVE 2 GB: GETMAINED STORAGE (MB), FIXED
STORAGE (MB), VARIABLE STORAGE (MB), and STORAGE MANAGER CONTROL
BLOCKS (MB). Figure 13-21 fields, DIST AND MVS STORAGE BELOW 2 GB: TOTAL DIST
STORAGE BELOW 2 GB (MB), TOTAL GETMAINED STORAGE (MB), TOTAL VARIABLE
STORAGE (MB), NUMBER OF ACTIVE CONNECTIONS, NUMBER OF INACTIVE
CONNECTIONS, TOTAL FIXED STORAGE (MB), TOTAL GETMAINED STACK STORAGE
(MB), TOTAL STACK STORAGE IN USE (MB), STORAGE CUSHION (MB), 24 BIT LOW
PRIVATE (MB), 24 BIT HIGH PRIVATE (MB), 24 BIT PRIVATE CURRENT HIGH ADDRESS,
31 BIT EXTENDED LOW PRIVATE (MB), 31 BIT EXTENDED HIGH PRIVATE (MB), 31 BIT
PRIVATE CURRENT HIGH ADDRESS, and EXTENDED REGION SIZE (MAX) (MB).)
In DB2 10, you need to review the RMF Workload Activity reports to get an indication of how
much work is eligible for zIIP or zAAP processing (PROJECTCPU) or how much work
overflowed to CP because zIIP or zAAP was too busy. Look at the AAPCP (for zAAP) and
IIPCP (for zIIP) in the APPL% section of the RMF workload activity report Service or
Reporting class section as shown in the following example:
---APPL %---
CP 131.62
AAPCP 0.00
IIPCP 79.00
AAP 0.00
IIP 0.00
13.23.6 Package accounting information with rollup
DB2 currently provides the facility to accumulate accounting trace data. A rollup record is
written with accumulated counter data for the following types of threads:
Query parallelism child tasks, if the DSNZPARM PTASKROL parameter is set to YES
DDF and RRSAF threads, if the DSNZPARM ACCUMACC parameter is greater than or
equal to 2
For query parallelism child tasks, a rollup record is written with accumulated counter data
when the parent task (agent) deallocates on an originating DB2 or when an accumulating
child task deallocates on an assisting DB2. The rollup data is an accumulation of all the
counters for that field for each child task that completed and deallocated.
For DDF and RRSAF threads, a rollup record is written with accumulated counter data for a
given user when the number of occurrences for that user reaches the DSNZPARM value for
ACCUMACC. The user is the concatenation of the following values, as defined by the
DSNZPARM ACCUMUID parameter:
End user ID (QWHCEUID, 16 bytes)
End user transaction name (QWHCEUTX, 32 bytes)
End user workstation name (QWHCEUWN, 18 bytes)
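The rollup behavior for DDF and RRSAF threads can be sketched as follows (illustrative Python, not DB2 code; the event tuples and the single CPU counter are invented stand-ins for the accounting fields):

```python
# Illustrative sketch: roll up accounting counters per user, where "user" is
# the concatenation of end user ID, transaction name, and workstation name,
# and a rollup record is cut when ACCUMACC occurrences are reached.
from collections import defaultdict

ACCUMACC = 3  # write a rollup after this many occurrences per user

def rollup(events):
    acc = defaultdict(lambda: {"count": 0, "cpu": 0.0})
    written = []
    for user_id, tx, wksta, cpu in events:
        key = (user_id, tx, wksta)       # the concatenated "user"
        acc[key]["count"] += 1
        acc[key]["cpu"] += cpu
        if acc[key]["count"] >= ACCUMACC:
            written.append((key, acc.pop(key)))  # cut the rollup record
    return written

events = [("U1", "TXA", "WS1", 0.1)] * 3 + [("U2", "TXB", "WS2", 0.2)]
records = rollup(events)
print(records[0][1]["count"])   # 3
```

U2 has only one occurrence, so no rollup record is cut for it yet; its counters keep accumulating until the ACCUMACC threshold is reached.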
Accounting statistics are rolled up only at the plan level (Accounting class 1, 2, and 3) in
releases prior to DB2 10.
DB2 10 enhances accounting rollup to the package level (Accounting classes 7, 8, and 10, if
available).
This change impacts IFCID 003, IFCID 239, IFCID 147, IFCID 148, and SMF type 100 and
101 records (subtypes in header).
13.23.7 DRDA remote location statistics detail
Prior versions of DB2 collect statistics for all locations accessed by DRDA, group them
under one collection name, DRDA REMOTE LOCS, and report them in the Statistics Detail
reports in one single block, DRDA REMOTE LOCATIONS. DB2 10 introduces a number of
enhancements to IFI tracing to address this lack of granularity.
Because many thousands of remote clients can potentially be in communication with a DB2
subsystem, DB2 10 introduces a statistics class 7 trace to externalize DRDA statistics data by
location. The new statistics class 7 trace triggers a new IFCID, 365, to be activated.
When a statistics trace is started with class 7 specified in the class list, the location data is
only written out at the interval specified by the STATIME DSNZPARM parameter. During
normal DB2 statistics trace processing (CLASS 1, 2, 4, 5, or 6), only the statistics for the
location named DRDA REMOTE LOCS are externalized to the specified statistics destination,
which defaults to SMF. The information is written to the specified destination every minute.
To get the new detail location statistics, you must either specify CLASS(7) or IFCID(365) on a
-START TRACE or -MODIFY TRACE command, which activates a new or modifies an
existing statistics trace. DB2 then writes new IFCID 365 records to the specified destination of
the statistics trace for the remote locations that are communicating with the subsystem.
The new record includes only the statistics for those locations with activity since the last
record was generated. The statistics for up to 95 locations are written in one record, with
more than one record being written until all locations are retrieved. The default DRDA
location, DRDA
REMOTE LOCS, which has all location statistics, is still written every time the default
statistics trace classes are written, but it is not written with the other detail location statistics in
the IFCID 365 trace.
When a monitor trace is started for IFCID 365, multiple IFCID 365 records are returned in
response to a READS request. The issuer of the READS must provide a buffer of suitable
size. The number of IFCID 365 records written to the buffer depends on the size of the buffer
passed as a parameter of the READS. The buffer must be large enough to hold at least one
QW0365 location detail section and the QW0365HE header section. Again, only the locations
that have activity since the last READS were processed are returned.
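The READS buffer behavior can be sketched as follows (illustrative Python, not the IFI API; the header and section lengths are assumptions, not the actual QW0365HE/QW0365 sizes):

```python
# Illustrative sketch: pack as many fixed-size location detail sections as
# fit into the caller-supplied READS buffer after a header section; the
# remaining locations wait for the next READS request.
HEADER_LEN = 64     # assumed size of the QW0365HE-style header section
SECTION_LEN = 200   # assumed size of one QW0365-style location section

def reads_365(locations, buffer_len):
    if buffer_len < HEADER_LEN + SECTION_LEN:
        raise ValueError("buffer too small for the header plus one section")
    capacity = (buffer_len - HEADER_LEN) // SECTION_LEN
    return locations[:capacity], locations[capacity:]

locs = [f"LOC{i}" for i in range(10)]
got, rest = reads_365(locs, buffer_len=1024)
print(len(got), len(rest))   # 4 6
```

A larger buffer returns more location sections per call; the six remaining locations here would be returned by subsequent READS requests, assuming they still show activity.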
DDF continues to capture statistics about all DRDA locations in one location named DRDA
REMOTE LOCS. The statistics for this default DRDA location are the only location statistics
written with the default statistics trace data, which is a change from DB2 9, where the
statistics for up to 32 locations were written.
DDF now captures statistics about each location that it is in communication with as a server
or a requester. The statistics for these locations are written every time the STATIME interval
has elapsed. The statistics detail for up to 75 locations is written in one IFCID 365 record.
Multiple records are produced until the detail statistics for all locations showing activity are
written.
DDF continues to capture statistics at the accounting level for every location referenced by
the thread during a single accounting interval or a rollup of multiple accounting intervals. The
information is written every time an accounting record is requested to be sent to disk.
IFCID 365 trace data can be accessed through the monitor interface by starting a user
defined class monitor trace with IFCID 365 specified. The location statistics are returned in
response to a READS request.
In addition, some location statistics counters collected are private protocol only and need to
be removed from the statistics that are captured. DB2 10 also restructures trace records to
expand DDF counters and remove redundant private protocol counters. The layout for the
DRDA remote locations reporting has not changed in OMEGAMON PE statistics reports.
Obsolete fields are shown as N/A.
Note: The STATIME subsystem parameter applies only to IFCIDs 0105, 0106, 0199, and
0365.
13.24 Enhanced monitoring support
Although DB2 for z/OS has support for problem determination and performance monitoring at
the statement level for both dynamic and static SQL, primarily through tracing, the following
limitations apply:
Identifying the specific SQL statements involved in a problem (for example, deadlocks,
timeouts, and unavailable-resource conditions) can be difficult and costly. In addition, the
process of narrowing down the application in question can be time-consuming and can
involve resources from many different teams (including DBAs, application developers,
system administrators, and users). You might need to examine traces after the original
problem occurs because insufficient information was captured at the point where the
problem first occurred. Trace and log files can be huge and can change the timing behavior
of applications.
For distributed workloads, low priority or poorly behaving client applications can
monopolize DB2 resources and prevent high-priority applications from executing. There is
no way to prioritize traffic before it arrives into DB2. WLM classification of workload does
not help because it does not occur until after the connection has been accepted and a
DBAT is associated to the connection. WLM makes sure that high priority work completes
more quickly, but if there is a limited number of DBATs or connections, low priority work
has equal access to the threads or connections and can impact the higher priority work.
(In a data sharing environment, you can use LOCATION ALIAS NAMES to connect to a
subset of members but this has limited granularity.)
For distributed workloads, the number of threads, the number of connections and idle thread
timeout are controlled by system level parameters (MAXDBAT, CONDBAT, and IDTHTOIN),
so all distributed clients are treated equally. This situation is a problem if the client
applications do not represent equal importance to the business.
DB2 10 for z/OS enhances performance monitoring support and monitoring support for
problem determination for both static and dynamic SQL. This new support uses the IFI to
capture and externalize monitoring information for consumption by tooling.
To support problem determination, the statement type (dynamic or static) and statement
execution identifier (STMT_ID) are externalized in several existing messages (including those
related to deadlock, timeout, and lock escalation). In these messages, the statement type and
statement identifier are associated with thread information (THREAD-INFO) that can be used
to correlate the statement execution on the server with the client application on whose behalf
the server is executing the statement.
To support performance monitoring, several existing trace records that deal with statement
level information are modified to capture the statement type, statement identifier, and new
statement-level performance metrics. New trace records are introduced to provide access to
performance monitoring statistics in real time and to allow tooling to retrieve monitoring data
without requiring disk access. In addition, profile monitoring is introduced to increase
granularity and to improve threshold monitoring of system level activities. The monitor profile
supports monitoring of idle thread timeout, the number of threads, and the number of
connections, as well as the ability to filter by role and client product-specific identifier.
13.24.1 Unique statement identifier
DB2 10 introduces a unique statement execution identifier (STMT_ID) to facilitate monitoring
and tracing at the statement level. The statement ID is defined at the DB2 for z/OS server,
returned to the DRDA application requester, and captured in IFCID records for both static and
dynamic SQL.
For dynamic statements, the statement identifier is available when the dynamic statement
cache is enabled. For static statements, the statement identifier matches the value of
STMT_ID, a new column in the SYSIBM.SYSPACKSTMT table.
To use the enhanced monitoring support functions, you must bind or rebind any existing
pre-DB2 10 package in DB2 10 new-function mode so that the STMT_ID column is
populated and loaded into the package.
Through DRDA, the statement identifier is returned to the application requester together with
a 2-byte header that includes the SQL type (indicating a dynamic or static statement) and a
10-byte compilation time in timestamp format (yyyymmddhhmmssnnnnnn; the last bind time
for static SQL and the compilation time for dynamic SQL). This information is contained in
the DRDA MONITORID object and is returned through the DRDA MONITORRD reply. When
the DRDA MONITORID object is returned in the MONITORRD reply data, the DB2 for z/OS
server also returns the DRDA SRVNAM object in the reply data, identifying the specific
server.
The following IFCIDs are changed to externalize the new statement identifier:
IFCID 58 (End SQL) is enhanced to return the statement type, statement execution identifier,
and statement-level performance metrics. For related statements such as OPEN, FETCH,
and CLOSE, only the IFCID 58 record of the CLOSE request provides information that
reflects the statistics collected on the OPEN and FETCHes as well.
IFCID 63 and 350 (SQL Statement) are enhanced to return statement type, statement
execution identifier, and the original source CCSID of the SQL statement. This new
information is returned for the statement types that are candidates to be in the DB2
dynamic statement cache (SELECT, INSERT, UPDATE, DELETE, MERGE, etc) when the
dynamic statement cache is enabled.
IFCID 124 (SQL Statement Record) is enhanced to return the statement type and
statement execution identifier.
IFCID 66 (Close Cursor) is enhanced to trace implicit close statement. A new field is
added to indicate if the close statement is an explicit close statement or an implicit close
statement.
IFCID 65 (Open Cursor) is enhanced to trace the cursor attribute on the implicit commit
request. A new field is added to indicate if implicit commit cursor attribute is specified.
IFCID 172 and IFCID 196 (Deadlock/Timeout) are enhanced to return the statement type and
statement execution identifier for both dynamic and static statements when a deadlock or
timeout condition is detected.
IFCID 337 (Lock Escalation) is enhanced to return the statement type and statement
execution identifier for both dynamic and static statements when a lock escalation condition
is detected.
In addition, the THREAD-INFO token is extended to include statement type information,
besides role and statement ID information. The affected messages are DSNL030I,
DSNV436I, DSNL027I, DSNI031I, DSNT318I, DSNT375I, DSNT376I, DSNT377I, DSNT378I,
DSNT771I, and DSNT772I.
13.24.2 New monitor class 29 for statement detail level monitoring
Monitor class 9 tracing provides real-time statement-level detail on a per-thread basis, with
IFCID 124. Monitor class 29 is introduced to monitor detailed trace information in real time for
all the dynamic statements in the dynamic statement cache and all static statements currently
in the EDM pool. Monitor class 29 looks at the system-wide level rather than the thread level.
Monitor class 29 has the following IFCIDs:
IFCID 318 is an existing IFCID that is merely a flag to instruct DB2 to collect more detailed
statistics about statements in the dynamic statement cache.
IFCID 316 is an existing IFCID that provides detailed information about the dynamic
statement cache when prompted by an IFI READS request. IFCID 316 is also written when
a statement is evicted from the dynamic statement cache.
IFCID 400 is a new IFCID that essentially mirrors the behavior of IFCID 318.
IFCID 401 is a new IFCID that, when prompted, externalizes all static statements in the
EDM pool together with their detailed statistics. IFCID 401 is also written when a statement
is removed from the EDM pool. STMTID support (as well as IFCID 401 generation)
requires a REBIND in new-function mode.
The statement identifiers and new IFCID 401 are utilized in the new Extended Insight
feature of OMEGAMON PE V5.1 within the Optim Performance Manager infrastructure.
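The eviction-driven writing of IFCID 316 and IFCID 401 can be illustrated with a small cache simulation (Python sketch, not DB2 internals; the capacity and statement names are invented):

```python
# Illustrative sketch: an LRU statement cache that emits a trace event when
# a statement is evicted, analogous to IFCID 316/401 being written when a
# statement leaves the dynamic statement cache or the EDM pool.
from collections import OrderedDict

class StatementCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # stmt_id -> execution count
        self.trace = []              # "IFCID-like" eviction records

    def execute(self, stmt_id):
        if stmt_id in self.cache:
            self.cache.move_to_end(stmt_id)   # mark as most recently used
            self.cache[stmt_id] += 1
        else:
            if len(self.cache) >= self.capacity:
                evicted_id, count = self.cache.popitem(last=False)
                self.trace.append({"stmt_id": evicted_id, "executions": count})
            self.cache[stmt_id] = 1

c = StatementCache(capacity=2)
for s in ["S1", "S2", "S1", "S3"]:   # S3 evicts least-recently-used S2
    c.execute(s)
print(c.trace)   # [{'stmt_id': 'S2', 'executions': 1}]
```

Writing the accumulated statistics at eviction time is what lets a monitor see a complete picture: statements still cached are read on demand, and statements leaving the cache are captured before their counters disappear.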
13.24.3 System level monitoring
DB2 9 for z/OS introduced profiles to allow you to identify a query or set of queries. It is
possible to identify SQL statements by authorization ID and IP address, or to specify
combinations of plan name, collection ID, and package name. These profiles are specified in
the table SYSIBM.DSN_PROFILE_TABLE. Profile tables allow you to record information
about how DB2 executes or monitors a group of statements. SQL statements identified by a
profile are executed based on keywords and attributes in the table
SYSIBM.DSN_PROFILE_ATTRIBUTES. System parameters such as NPGTHRSH,
STARJOIN and SJTABLES can be specified as keywords, providing greater granularity than
DSNZPARM parameters.
All statements identified by such a profile can be monitored. The following tables are required
to monitor statements:
SYSIBM.DSN_PROFILE_TABLE
SYSIBM.DSN_PROFILE_HISTORY
SYSIBM.DSN_PROFILE_ATTRIBUTES
SYSIBM.DSN_PROFILE_ATTRIBUTES_HISTORY
SYSIBM.DSN_STATEMENT_RUNTIME_INFO
The following tables might also be required, depending on the information that DB2 is to
record:
SYSIBM.DSN_OBJECT_RUNTIME_INFO
SYSIBM.PLAN_TABLE
SYSIBM.DSN_STATEMNT_TABLE
SYSIBM.DSN_FUNCTION_TABLE
Other EXPLAIN tables that are used by optimization tools
Statement execution is controlled by the following keywords:
NPAGES THRESHOLD
STAR JOIN
MIN STAR JOIN TABLES
Other keywords control the amount of monitoring data recorded.
The information collected in these tables was used by tools such as the Optimization Service
Center and Optimization Expert. See IBM DB2 9 for z/OS: New Tools for Query Optimization,
SG24-7421.
DB2 10 enhances support for filtering and threshold monitoring for system related activities,
such as the number of threads, the number of connections, and the period of time that a
thread can stay idle.
Two new scope filters on role (available through trusted context support) and client
product-specific identifier are added to provide more flexibility to monitor the activities across
the DB2 system. Allowing filtering by role and client product-specific identifier gives a finer
degree of control over the monitor profiles.
Similarly, the addition of new function keywords related to the number of threads, the number
of connections and idle thread timeout values allows thresholds (limits) that were previously
available only at the system level through DSNZPARM to be enforced at a more granular
level.
Together, these enhancements provide a greater flexibility and control with regard to
allocating resources to particular clients, applications or users according to their priorities or
needs.
The enhancements in DB2 10 apply only to the system level monitoring related to threads
and connections, not to the statement level monitoring and tuning.
Briefly, to use system level monitoring, you must first create the following tables, if they do not
already exist, by running either installation job DSNTIJSG or DSNTIJOS:
SYSIBM.DSN_PROFILE_TABLE
Has one row per monitoring or execution profile. A row can apply to either statement
monitoring or system level monitoring but not both. Multiple profile rows can apply to an
individual execution or process, in which case the more restrictive profile is applied.
SYSIBM.DSN_PROFILE_HISTORY
Tracks the state of rows in the profile table or why a row was rejected.
SYSIBM.DSN_PROFILE_ATTRIBUTES
Relates profile table rows to keywords that specify monitoring or execution attributes,
which define how the monitoring or execution is directed.
SYSIBM.DSN_PROFILE_ATTRIBUTES_HISTORY
Tracks the state of rows in the attributes table or why a row was rejected.
To monitor system level activities, you must then:
1. Create a profile to specify what activities to monitor.
2. Define the type of monitoring. (There are three keywords and two attribute columns.)
3. Load or reload profile tables and start the profile.
4. Stop monitoring.
To create a profile, add a row to the table SYSIBM.DSN_PROFILE_TABLE. DB2 10 provides
the ROLE and PRDID columns for additional filtering. ROLE filters by the role that is assigned
to the user who is associated with the thread. PRDID filters by the client product-specific
identifier that is currently associated with the thread, for example JCC03570. In addition,
different profiles can apply to different members of a data sharing group.
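As a sketch of what such profile rows might look like (the profile IDs and filter values are hypothetical, and columns not shown are left NULL), the new ROLE and PRDID filtering scopes could be populated with SQL INSERT statements like the following. Because ROLE and PRDID belong to different filtering categories, they go in separate rows:

```sql
-- Hypothetical sketch: one profile row filtering on ROLE, and one
-- filtering on the client product-specific identifier (PRDID).
INSERT INTO SYSIBM.DSN_PROFILE_TABLE
       (PROFILEID, ROLE, PROFILE_ENABLED)
VALUES (10, 'DB2DEV', 'Y');

INSERT INTO SYSIBM.DSN_PROFILE_TABLE
       (PROFILEID, PRDID, PROFILE_ENABLED)
VALUES (13, 'JCC03570', 'Y');
```

The rows do not take effect until the profile tables are loaded into memory with the START PROFILE command, described later in this section.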
The following filtering categories are supported for system level monitoring:
IP Address (IPADDR)
Product Identifier (PRDID)
Role and Authorization Identifier (ROLE, AUTHID)
Collection ID and Package Name (COLLID, PKGNAME)
The filtering criteria in different filtering categories cannot be specified together. For example,
IP Address and Product ID cannot be specified together as a valid filtering criteria because IP
Address and Product ID are in different filtering categories.
The newly introduced ROLE and Product ID filtering scope can also only apply to the system
related monitoring functions or keywords regarding threads and connections.
The product-specific identifier of the remote requester (PRDID) is an 8-byte alphanumeric
field of the form pppvvrrm, where:
ppp Identifies the specific database product.
vv Identifies the product version.
rr Identifies the product release level.
m Identifies the product modification level.
For example, PRDID JCC03570 identifies the IBM Data Server Driver for JDBC and SQLJ
(JCC), version 03, release 57, modification level 0.
When a filtering category contains more than one filtering scope, each scope can be
specified independently of the other. For example, in the role and authorization ID filtering
category, you can filter by role alone, by authorization ID alone, or by both together. The
same applies to the collection ID and package name filtering category: you can filter by
collection ID alone, by package name alone, or by both together. These filtering category
rules apply only to the system-related monitoring functions or keywords for threads and
connections.
Figure 13-22 shows profiles 11 and 15 as invalid because there are different filtering
categories defined in the same row. There are corresponding rows in
SYSIBM.DSN_PROFILE_HISTORY that show the reason for the rejection in the STATUS
column.
Figure 13-22 System level monitoring - Invalid profile

PROFILE_ENABLED  PROFILEID  PKGNAME  COLLID  PRDID     IPADDR         AUTHID  ROLE
Y                10         Null     Null    Null      Null           Tom     DB2DEV
Y                11         Null     Null    SQL09073  Null           Null    DB2DEV
Y                12         Null     Null    Null      Null           Tom     Null
Y                13         Null     Null    JCC03570  Null           Null    Null
Y                14         Null     Null    Null      129.42.16.152  Null    Null
Y                15         Null     COLL1   Null      129.42.16.152  Null    Null
Multiple qualified filtering profiles from different filtering categories can all apply if the
filtering criteria match the thread or connection for system level monitoring. For example, in
Figure 13-22, profiles 10 and 14 might both apply to Tom's thread or connection.
Within the same filtering category, only one qualified profile is applied. When more than one
profile qualifies within the same filtering category, the most specific profile is chosen. In the
role and authorization ID filtering category, role takes precedence over authorization ID. In
the collection ID and package name filtering category, collection ID takes precedence over
package name. In the previous example, if profiles 10 and 12 both apply to Tom's thread or
connection for the same keyword, row 10 applies, because ROLE takes precedence.
You define the type of monitoring that you want to perform by inserting a row into the table
SYSIBM.DSN_PROFILE_ATTRIBUTES with the following attributes:
Profile ID column
Specify the profile that defines the system activities that you want to monitor. Use a value
from the PROFILEID column in SYSIBM.DSN_PROFILE_TABLE.
KEYWORDS column
Specify one of the following monitoring keywords:
MONITOR THREADS The number of active threads for this profile. This value
cannot exceed MAXDBAT
MONITOR CONNECTIONS The total number of connections, active plus inactive, for
this profile. This value cannot exceed CONDBAT
MONITOR IDLE THREADS The number of seconds before idle threads timeout, for this
profile. This value can exceed IDTHTOIN
ATTRIBUTE1, ATTRIBUTE2, and ATTRIBUTE3 columns
Specify the appropriate attribute values depending on the keyword that you specify in the
KEYWORDS column.
Two messaging levels (DIAGLEVEL1 and DIAGLEVEL2), described later in this section, are
introduced for system profile monitoring related to threads and connections. You choose the
messaging level that you prefer by defining a value in the ATTRIBUTE1 column in the
DSN_PROFILE_ATTRIBUTES table.
You specify the threshold for the corresponding keyword in ATTRIBUTE2.
Currently, ATTRIBUTE3 does not apply to any of the system-related monitoring functions,
so there is no need to enter a value in the ATTRIBUTE3 column. If a value is specified in
ATTRIBUTE3 for a row with a system level monitoring keyword, a row is inserted into the
SYSIBM.DSN_PROFILE_ATTRIBUTES_HISTORY table to indicate that the row is
invalidated.
You do not need to specify values for any other columns of
SYSIBM.DSN_PROFILE_ATTRIBUTES.
MONITOR THREADS is used to monitor the total number of concurrent active threads on
the DB2 subsystem. This monitoring function is subject to the filtering on IPADDR, PRDID,
ROLE/AUTHID, and COLLID/PKGNAME defined in SYSIBM.DSN_PROFILE_TABLE.
MONITOR CONNECTIONS is used to monitor the total number of remote connections
from the remote requesters using TCP/IP, which includes the current active connections
and the inactive connections. This monitoring function is subject to the filtering on the
IPADDR column only in the SYSIBM.DSN_PROFILE_TABLE for remote connections.
Active connections are those currently associated with an active DBAT or have been
queued and are waiting to be serviced. Inactive connections are those currently not
waiting and not associated with a DBAT.
MONITOR IDLE THREADS is used to monitor the approximate time (in seconds) that an
active server thread should be allowed to remain idle.
It is important to note that these system level function keywords are to monitor system
related activities such as the number of threads, the number of connections, and idle time
of the threads, not to monitor any SQL statements activities.
ATTRIBUTE1 specifies either WARNING or EXCEPTION and the messaging level, as
follows:
WARNING (defaults to DIAGLEVEL1)
WARNING DIAGLEVEL1
WARNING DIAGLEVEL2
EXCEPTION (defaults to DIAGLEVEL1)
EXCEPTION DIAGLEVEL1
EXCEPTION DIAGLEVEL2
ATTRIBUTE2 specifies the actual threshold value, for example the number of active
threads or total connections, or seconds for idle threads
ATTRIBUTE3 must remain blank.
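For illustration, the following sketch inserts an attributes row corresponding to the MONITOR THREADS example in Figure 13-23: an exception at 10 threads with DIAGLEVEL2 messaging. The profile ID is hypothetical, and the exact spelling of the ATTRIBUTE1 literal follows the list above:

```sql
-- Hypothetical sketch: define MONITOR THREADS for profile 1 with an
-- exception threshold of 10 and DIAGLEVEL2 messaging. ATTRIBUTE2
-- holds the threshold; ATTRIBUTE3 is left NULL, as required.
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2)
VALUES (1, 'MONITOR THREADS', 'EXCEPTION_DIAGLEVEL2', 10);
```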
If a WARNING is triggered, either message DSNT771I or DSNT772I is issued, depending
on the DIAGLEVEL specified, and processing continues. However, if an EXCEPTION is
triggered, processing varies depending on the monitoring keyword:
MONITOR THREADS
Threads are either queued or suspended, as follows:
If filtering on IPADDR, return RC00E30506 and queue the threads
If filtering on PRDID, ROLE, or AUTHID, return RC00E30507, queue and suspend up to
the threshold, then fail additional connection requests with SQLCODE -30041
If filtering on COLLID or PKGNAME, return RC00E30508, queue and suspend up to the
threshold, then fail the SQL statement and return SQLCODE -30041
MONITOR CONNECTIONS (IPADDR filtering only)
Issue RC00E30504 and reject new connection requests
MONITOR IDLE THREADS (thread terminated)
Issue RC00E30502
Note that the system-wide installation parameters related to maximum connections (for
example, CONDBAT), maximum threads (for example, MAXDBAT), and idle thread timeout
(IDTHTOIN) still apply. If a system-wide threshold is set lower than a monitor profile
threshold, the system-wide threshold is enforced before the monitor profile threshold can
apply.
When DIAGLEVEL1 is chosen, a DSNT771I console message is issued at most once every 5
minutes for any profile threshold that is exceeded. This level of messaging provides minimal
console message activity and limited information in the message text itself. Refer to the
statistics trace records to determine the accumulated occurrences of a specific profile
threshold being exceeded under a specific profile ID.
DSNT771I csect-name A MONITOR PROFILE condition-type CONDITION OCCURRED
number-of-times TIME(S).
When DIAGLEVEL2 is chosen, a DSNT772I console message is issued at most once every 5
minutes for each unique occurrence of the profile threshold that is exceeded in a specific
profile ID. This level of messaging provides more information in the message text itself,
including the specific profile ID and the specific reason code. You can also refer to the
statistics trace records to determine the accumulated occurrences of a specific profile
threshold being exceeded under a specific profile ID.
DSNT772I csect-name A MONITOR PROFILE condition-type CONDITION OCCURRED
number-of-times TIME(S) IN PROFILE ID=profile-id WITH PROFILE FILTERING
SCOPE=filtering-scope WITH REASON=reason
For both message levels, when a profile warning or exception condition occurs, a DB2
statistics class 4 IFCID 402 trace record is written at a statistical interval which is defined in
the STATIME DSNZPARM. Each statistics trace record written can contain up to 500 unique
profiles. Multiple trace records can be written if there are more than 500 unique profiles
whose profile thresholds are exceeded in a given STATIME interval.
Figure 13-23 shows sample rows from SYSIBM.DSN_PROFILE_ATTRIBUTES. The
REMARKS column is not shown.

Figure 13-23 System level monitoring - Attributes table

ProfileID  Keywords              Attribute1            Attribute2  Attribute3  Attribute Timestamp
1          MONITOR THREADS       Exception_diaglevel2  10                      2009-12-19...
2          MONITOR CONNECTIONS   Warning               50                      2009-12-19
3          MONITOR IDLE THREADS  Exception_diaglevel1  300                     2009-12-21

(Callout for profile 1: queue connections 11 through 20; reject the 21st.)

The first row indicates that DB2 monitors the number of threads that satisfy the scope that is
defined by PROFILEID 1 in SYSIBM.DSN_PROFILE_TABLE. When the number of threads in
the DB2 system exceeds 10, the threshold that is defined in the ATTRIBUTE2 column, a
DSNT772I message is issued to the system console (unless another DSNT772I message
was issued within the last 5 minutes because this particular profile threshold in PROFILEID 1
was exceeded), and DB2 queues or suspends up to 10 new connection requests, the defined
exception threshold in ATTRIBUTE2, because EXCEPTION_DIAGLEVEL2 is defined in the
ATTRIBUTE1 column.
When the total number of threads that are queued or suspended reaches 10, DB2 begins to
fail connection requests with SQLCODE -30041 (up to 10 active threads for this profile plus
up to 10 queued connections for this profile; the 21st connection for this profile receives
-30041). The exception is when the profile filters on IPADDR, in which case any number of
connections is allowed, up to the MONITOR CONNECTIONS threshold (if such a row is
provided for this profile with EXCEPTION in ATTRIBUTE1) or until CONDBAT is reached in
the system.
The second row indicates that DB2 monitors the number of connections that satisfy the scope
that is defined by PROFILEID 2 in SYSIBM.DSN_PROFILE_TABLE. When the number of
connections in the DB2 system exceeds 50, the threshold that is defined in the ATTRIBUTE2
column, a DSNT771I message is issued to the system console (unless another DSNT771I
message was issued within the last 5 minutes), and DB2 continues to service new connection
requests because WARNING is defined in the ATTRIBUTE1 column.
The third row indicates that DB2 monitors the period of time that a thread can remain idle, for
threads that satisfy the scope that is defined by PROFILEID 3 in
SYSIBM.DSN_PROFILE_TABLE. When a thread stays idle for more than 5 minutes (300
seconds), the threshold that is defined in the ATTRIBUTE2 column, a DSNT771I message is
issued to the system console (unless another DSNT771I message was issued within the last
5 minutes), and DB2 terminates the thread because EXCEPTION_DIAGLEVEL1 is defined in
the ATTRIBUTE1 column.
Next, you need to load or reload the profile tables into memory by issuing the following
command (the command has no parameters):
-START PROFILE
Any rows with a Y in the PROFILE_ENABLED column in SYSIBM.DSN_PROFILE_TABLE
are now in effect. DB2 monitors any system activities related to threads and connections that
meet the specified criteria. When a threshold is reached, DB2 takes action according to the
value specified in the ATTRIBUTE1 column, EXCEPTION or WARNING. Refer to the
description of the SYSIBM.DSN_PROFILE_ATTRIBUTES changes earlier in this section for
the specific action that DB2 takes. It is important to note that when you issue the START
PROFILE command, DB2 begins to monitor the next thread or the next connection activities
for the monitoring functions that are in effect.
When you finish monitoring these system related threads and connections activities, stop the
monitoring process by performing one of the following tasks:
To disable the monitoring function for a specific profile, delete that row from
SYSIBM.DSN_PROFILE_TABLE, or change the PROFILE_ENABLED column value to N.
Then, reload the profile table by issuing the START PROFILE command.
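As a sketch (the profile ID is hypothetical), disabling a single profile without deleting its row could look like this:

```sql
-- Hypothetical sketch: disable profile 13 while keeping its definition,
-- so it can be re-enabled later by setting PROFILE_ENABLED back to 'Y'.
UPDATE SYSIBM.DSN_PROFILE_TABLE
   SET PROFILE_ENABLED = 'N'
 WHERE PROFILEID = 13;
```

After the UPDATE commits, issue -START PROFILE from the console so that the in-memory profiles are reloaded and the change takes effect.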
To disable all monitoring for both system related and statement related monitoring that are
specified in the profile tables, issue the following command:
-STOP PROFILE
DB2 provides the following commands to manage performance profiles:
START PROFILE
To load or reload active profiles into memory. This command can be issued from the z/OS
console through a batch job or IFI. This command can also be used by tools, such as
Optim Query Tuner.
DISPLAY PROFILE
To display whether profiling is active or inactive. This command can be issued from the
z/OS console, through a batch job or the instrumentation facility interface (IFI). This
command can also be used by tools, such as Optim Query Tuner.
STOP PROFILE
To stop or disable the profile monitoring function. This command can be issued from the
z/OS console, using a batch job or IFI. This command can also be used by tools, such as
Optim Query Tuner.
By using profiles, you can easily monitor system level activities. Profiling provides granular
control for warning and exception handling, based on the ability to filter on idle threads, the
number of connections, and the number of threads. Thread and connection resources can
now be monitored and controlled by filters that correspond to business priorities.
Part 4 Appendixes
Appendix A. Information about IFCID changes
This appendix includes the details of new or changed IFCIDs that we mention in the chapters
of this book. For more information about IFCIDs, refer to Chapter 12, "Planning for DB2 10
for z/OS", in DB2 10 for z/OS What's New?, GC19-2985.
DB2 for z/OS has system limits, object and SQL limits, length limits for identifiers and strings,
and limits for certain data type values. Restrictions exist on the use of certain names that are
used by DB2. In some cases, names are reserved and cannot be used by application
programs. In other cases, certain names are not recommended for use by application
programs though not prevented by the database manager.
For information about limits and name restrictions, refer to the appendix "Additional
information for DB2 SQL" in DB2 10 for z/OS SQL Reference, SC19-2983.
You can find up-to-date mapping in the SDSNMACS data set that is delivered with DB2.
A.1 IFCID 002: Dynamic statement cache
Example A-1 lists the changes to IFCID 002 to support literal replacement in the dynamic
statement cache.
Example A-1 Changes to IFCID 002 - Dynamic statement cache
QXSTCWLP DS D # of times DB2 parsed dynamic statements
* because CONCENTRATE STATEMENTS WITH
* LITERALS behavior was in effect for
* the prepare of the statement for
* the dynamic statement cache
QXSTCWLR DS D # of times DB2 replaced at least one
* literal in a dynamic statement because
* CONCENTRATE STATEMENTS WITH LITERALS
* was in effect for the prepare of
* the statement for dynamic statement cache
QXSTCWLM DS D # of times DB2 found a matching reusable
* copy of a dynamic statement in statement
* cache during prepare of a statement that
* had literals replaced because of
* CONCENTRATE STATEMENTS WITH LITERALS
QXSTCWLD DS D # of times DB2 created a duplicate stmt
* instance in the statement cache for
* a dynamic statement that had literals
* replaced by CONCENTRATE STATEMENTS WITH
* LITERALS behavior and the duplicate stmt
* instance was needed because a cache
* match failed solely due to literal
* reusability criteria
A.2 IFCID 002 - Currently committed data
Example A-2 lists the changes to the Data Manager Statistics Block (DSNDQIST) of IFCID
002 to record the use of currently committed data.
Example A-2 IFCID 002 - Currently committed data
QISTRCCI DS F /* RCCI: Read Currently */
* /* Committed Insert */
* /* How many rows skipped by */
* /* read transaction due to */
* /* uncommitted insert when */
* /* using Currently Committed */
* /* semantic for fetch. */
QISTRCCD DS F /* RCCD: Read Currently */
* /* Committed Delete */
* /* How many rows accessed by */
* /* read transaction due to */
* /* uncommitted delete when */
* /* using Currently Committed */
* /* semantic for fetch. */
QISTRCCU DS F /* RCCU: Read Currently */
* /* Committed Update */
* /* How many rows accessed by */
* /* read transaction due to */
* /* uncommitted update when */
* /* using Currently Committed */
* /* semantic for fetch. */
A.3 IFCID 013 and IFCID 014
Example A-3 shows IFCID 013 and IFCID 014, which record the start and end of a hash
access scan.
Example A-3 IFCID 013 and IFCID 014
********************************************************************
* IFC ID 0013 FOR RMID 14 RECORDS INPUT TO HASH SCAN *
********************************************************************
*
QW0013 DSECT IFCID(QWHS0013)
QW0013S1 DS 0F SELF DEFINING SECTION 1 - QWT02R10
QW0013DB DS XL2 DATABASE ID (DBID)
QW0013PS DS XL2 PAGESET OBID
QW0013OB DS XL2 RECORD OBID
ORG QW0013
QW0013S2 DS 0F SELF DEFINING SECTION 2 - QWT02R2O
* NEXT FIELDS DESCRIBE PREDICATE FOR SCAN
* REPEATING GROUP - MAXIMUM 10
QW0013C1 DS XL2 FIRST COLUMN NUMBER
QW0013OP DS CL2 OPERATOR - NE,G,GE,LE ETC
QW0013CO DS CL1 CONNECTOR A=AND, O=OR, BLANK
QW0013TF DS CL1 T=TRUE / F=FALSE / OR BINARY ZERO
QW0013TP DS CL1 C=COLUMN OR V=VALUE FOLLOW
DS CL1 RESERVED
QW0013VA DS CL8 FIRST EIGHT BYTES OF VALUE
ORG QW0013VA
QW0013C2 DS XL2 SECOND COLUMN NUMBER
DS XL6 RESERVED
SPACE 2
*
*
********************************************************************
* IFC ID 0014 FOR RMID 14 RECORDS END OF HASH SCAN *
********************************************************************
*
QW0014 DSECT IFCID(QWHS0014)
QW0014RT DS F RETURN CODE - 0 SUCCESSFUL
QW0014RE DS F (S)
* (S) = FOR SERVICEABILITY
SPACE 2
A.4 IFCID 106
Example A-4 shows a new reason code in IFCID 106 to record that DB2 attempted to delete
all the CF structures on restart, as directed by the new DEL_CFSTRUCTS_ON_RESTART
DSNZPARM parameter.
Example A-4 Changes to IFCID 106
*
QWP1FLG2 DS X
QWP1CSMF EQU X'80' Compress trace records destined for SMF
QWP1DCFS EQU X'40' During restart, attempt delete of CF
* structures, including the SCA, IRLM lock
* structures and allocated group buffer
* pools
A.5 IFCID 225
Example A-5 lists the restructured IFCID 225.
Example A-5 Changes to IFCID 225
* ! IFCID225 summarizes system storage usage
* ! The record is divided into data sections described as follows:
* !
* ! Data Section 1: Address Space Summary (QW0225)
* ! This data section can be a repeating group.
* ! It will report data for DBM1 and DIST
* ! address spaces. Use QW0225AN to identify
* ! the address space being described.
* ! Data Section 2: Thread information (QW02252)
* ! Data Section 3: Shared and Common Storage Summary (QW02253)
* ! Data Section 4: Statement Cache and xPROC Detail (QW02254)
* ! Data Section 5: Pool Details (QW02255)
*
* Data Section 1: Address Space Summary
QW0225 DSECT
QW0225AN DS CL4 ! Address space name (DBM1 or DIST)
QW0225RG DS F ! MVS extended region size
QW0225LO DS F ! MVS 24-bit low private
QW0225HI DS F ! MVS 24-bit high private
QW0225EL DS F ! MVS 31-bit extended low private
QW0225EH DS F ! MVS 31-bit extended high private
QW0225TP DS A ! Current high address of 24-bit private region
QW0225EP DS A ! Current high address of 31-bit private region
QW0225CR DS F ! 31-bit storage reserved for must complete
QW0225MV DS F ! 31-bit storage reserved for MVS
QW0225SO DS F ! Storage cushion warning to contract
QW0225GS DS F ! Total 31-bit getmained stack
QW0225SU DS F ! Total 31-bit stack in use
QW0225VR DS F ! Total 31-bit variable pool storage
QW0225FX DS F ! Total 31-bit fixed pool storage
QW0225GM DS F ! Total 31-bit getmained storage
QW0225AV DS F ! Amount of available 31-bit storage
DS F ! Reserved
QW0225VA DS D ! Total 64-bit private variable pool storage
QW0225FA DS D ! Total 64-bit private fixed pool storage
QW0225GA DS D ! Total 64-bit private getmained storage
QW0225SM DS D ! Total 64-bit private storage for storage
* ! manager control structures
QW0225RL DS D ! Number of real 4K frames in use for
* ! 31 and 64-bit private
QW0225AX DS D ! Number of 4K auxiliary slots in use
* ! for 31 and 64-bit private
QW0225HVPagesInReal DS D ! Number of real 4K frames in use for
* ! 64-bit private (available in
* ! >= z/OS 1.11)
QW0225HVAuxSlots DS D ! Number of 4K auxiliary slots in use
* ! for 64-bit private (available in
* ! >= z/OS 1.11)
QW0225HVGPagesInReal DS D ! High water mark for number of real 4K
* ! frames in use for 64-bit private
* ! (available in >= z/OS 1.11)
QW0225HVGAuxSlots DS D ! High water mark for 4K auxiliary slots
* ! in use for 64-bit private
* ! (available in >= z/OS 1.11)
*
* Data Section 2: Thread information
QW02252 DSECT
QW0225AT DS F ! # of active threads
QW0225DB DS F ! # of active and disconnected DBATs
QW0225CE DS F ! # of castout engines
QW0225DW DS F ! # of deferred write engines
QW0225GW DS F ! # of GBP write engines
QW0225PF DS F ! # of prefetch engines
QW0225PL DS F ! # of P-lock/notify exit engines
*
* Data Section 3: Shared and Common Storage Summary
QW02253 DSECT
QW0225EC DS F ! MVS extended CSA size
QW0225FC DS F ! Total 31-bit common fixed pool storage
QW0225VC DS F ! Total 31-bit common variable pool storage
QW0225GC DS F ! Total 31-bit common getmained storage
QW0225FCG DS D ! Total 64-bit common fixed pool storage
QW0225VCG DS D ! Total 64-bit common variable pool storage
QW0225GCG DS D ! Total 64-bit common getmained storage
QW0225SMC DS D ! Total 64-bit common storage for
* ! storage manager control structures
QW0225SV DS D ! Total 64-bit shared variable pool storage
QW0225SF DS D ! Total 64-bit shared fixed pool storage
QW0225SG DS D ! Total 64-bit shared getmained storage
QW0225SMS DS D ! Total 64-bit shared storage for
* ! storage manager control structures
QW0225GSG_SYS DS D ! Total 64-bit shared system agent stack
QW0225SUG_SYS DS D ! Total 64-bit shared system agent stack
* ! in use
QW0225GSG DS D ! Total 64-bit shared non-system agent
* ! stack
QW0225SUG DS D ! Total 64-bit shared non-system agent
* ! stack in use
DS F ! Reserved
QW0225SHRNMOMB DS F ! Number of shared memory objects
* ! allocated for this MVS LPAR
QW0225SHRPAGES DS D ! Number of 64-bit shared memory pages
* ! allocated for this MVS LPAR (this
* ! count includes hidden pages)
QW0225SHRGBYTES DS D ! High water mark for number of 64-bit
* ! shared bytes for this MVS LPAR
QW0225SHRINREAL DS D ! Number of 64-bit shared pages backed
* ! in real storage (4K pages) for this
* ! MVS LPAR
QW0225SHRAUXSLOTS DS D ! Number of auxiliary slots used for
* ! 64-bit shared storage for this
* ! MVS LPAR
QW0225SHRPAGEINS DS D ! Number of 64-bit shared pages
* ! paged in from auxiliary storage for
* ! this MVS LPAR
QW0225SHRPAGEOUTS DS D ! Number of 64-bit shared pages
* ! paged out to auxiliary storage
* ! for this MVS LPAR
*
* Data Section 4: Statement Cache and xPROC Detail
QW02254 DSECT
QW0225SC DS F ! Total 31-bit xPROC storage for dynamic
* ! SQL used by active threads
* ! (31-bit DBM1 private variable pool)
QW0225LS DS F ! Allocated 31-bit xPROC storage for dynamic
* ! SQL used by active threads
QW0225SX DS F ! Total 31-bit xPROC storage for static
* ! SQL statements
* ! (31-bit DBM1 private variable pool)
QW0225HS DS F ! High water mark allocated for 31-bit xPROC
* ! storage for dynamic SQL used by active
* ! threads
QW0225LC DS F ! # of statements in 64-bit agent local pools
* ! (64-bit shared agent local variable pools)
QW0225HC DS F ! High water mark # of statements in 64-bit
* ! agent local pools at high storage time
* ! (64-bit shared agent local variable pools)
QW0225L2 DS D ! Allocated statement cache storage in 64-bit
* ! agent local pools
* ! (64-bit shared agent local variable pools)
QW0225H2 DS D ! High water mark for allocated statement
* ! cache storage in 64-bit agent local pools
* ! (64-bit shared agent local variable pools)
QW0225HT DS CL8 ! Timestamp of high water mark for storage
* ! allocated in 64-bit agent local pools
* ! since the last IFCID225 was written
* ! (64-bit shared agent local variable pools)
QW0225S2 DS D ! Total 64-bit STMT CACHE BLOCKS 2G
* ! storage
* ! (64-bit shared variable pool)
QW0225F1 DS D ! (S)
QW0225F2 DS D ! (S)
*
* Data Section 5: Pool Details
QW02255 DSECT
QW0225AL DS F ! Total agent local storage
* ! (31-bit DBM1 private variable pools)
QW0225AS DS F ! Total system agent storage
* ! (31-bit DBM1 private variable pools)
QW0225ALG DS D ! Total agent local storage
* ! (64-bit shared variable pools)
QW0225ASG DS D ! Total system agent storage
* ! (64-bit shared variable pools)
QW0225BB DS D ! Total buffer manager storage blocks
* ! (31-bit DBM1 private variable pool)
QW0225RP DS D ! Total RID pool storage
* ! (64-bit shared fixed pool)
QW0225CD DS D ! Total compression dictionary storage
* ! (64-bit DBM1 private getmained)
*
A.6 IFCID 267
Example A-6 shows a new reason code in IFCID 267 to record a detected restart delay and
lock structure rebuild.
Example A-6 Changes to IFCID 267
DSNDQW04
QW0267 DSECT IFCID(QWHS0267)
...
QW0267RR EQU C'R' Rebuild started due to RESTART delay
A.7 IFCID 316
Example A-7 lists the changes to IFCID 316 to support literal replacement in the dynamic
statement cache.
Example A-7 Changes to IFCID 316
QW0316LR DS CL1 Cache literal replacement indicator:
QW0316LRB EQU C' ' Blank = no literal replacement was done
QW0316LRR EQU C'R' 'R' = literals were replaced in statement
QW0316LRD EQU C'D' 'D' = Same as 'R' but cached statement is
* a duplicate cache entry instance
* because cache match failed solely due
* to literal reusability criteria
A.8 IFCID 357
Example A-8 lists the new IFCID 357, which records the start of index I/O parallel insert
processing.
Example A-8 New IFCID 357
*
QW0357 DSECT IFCID(QWHS0357)
QW0357DB DS CL2 DATA BASE ID
QW0357TB DS CL2 TABLE OBID
QW0357PS DS CL2 INDEX SPACE PAGE SET ID
*
A.9 IFCID 358
Example A-9 lists the new IFCID 358 which records the end of index I/O parallel insert
processing.
Example A-9 New IFCID 358
*
QW0358 DSECT IFCID(QWHS0358)
QW0358DB DS CL2 DATA BASE ID
QW0358TB DS CL2 TABLE OBID
QW0358PS DS CL2 INDEX SPACE PAGE SET ID
QW0358DE DS XL2 DEGREE OF PARALLELISM
*
A.10 IFCID 359
Example A-10 lists the new IFCID 359, which records index page split events.
Example A-10 New IFCID 359
**********************************************************************
* IFCID 0359 to record index page split
**********************************************************************
*
QW0359 DSECT IFCID(QWHS0359)
QW0359DB DS CL2 DATA BASE ID
QW0359OB DS CL2 INDEX PAGE SET ID
QW0359PT DS CL2 PARTITION NUMBER
QW0359FL DS CL1 FLAGS
DS CL1 RESERVED
QW0359PG DS CL4 SPLITTING PAGE NUMBER
QW0359TS DS CL8 TIMESTAMP AT BEGINNING OF SPLIT
QW0359TE DS CL8 TIMESTAMP AT ENDING OF SPLIT
QW0359DP EQU X'80' INDEX IS GBP DEPENDENT DURING SPLIT
*
A.11 IFCID 360
IFCID 0360 records information about queries that are incrementally rebound because
parallelism was chosen in packages that were created before DB2 10. This record is written
when performance trace class 3 or 10 is on.
A.12 IFCID 363
Example A-11 shows the new IFCID 363, which monitors the use of the straw model in query
parallelism.
Example A-11 New IFCID 363
***********************************************************************
* IFCID 0363 FOR RMID 22 - PARALLEL GROUP STRAW MODEL TRACE *
***********************************************************************
QW0363 DSECT IFCID(QWHS0363)
QW0363TS DS CL8 Time-stamp (consistency token)
QW0363SN DS F Statement number... Same
* as QUERYNO in PLAN_TABLE
* if PLAN_TABLE exists
QW0363QN DS H Query block number... Same
* as QBLOCKNO in PLAN_TABLE
* if PLAN_TABLE exists
QW0363GN DS H Parallel group number
QW0363BD DS H Planned (bind time) degree
QW0363RK DS CL1 Partition kind of the parallel
QW0363C1 EQU C'1' Key range
* Constant for QW0363RK
QW0363C2 EQU C'2' Page range
* Constant for QW0363RK
QW0363C3 EQU C'3' Rid
* Constant for QW0363RK
QW0363C4 EQU C'4' Record on key boundary
* Constant for QW0363RK
QW0363C5 EQU C'5' Record not on key boundary
* Constant for QW0363RK
QW0363OD DS CL1 Record order. Use only
* if QW0363RK = 4 or 5
QW0363CA EQU C'A' Ascending order
* Constant for QW0363OD
QW0363CD EQU C'D' Descending order
* Constant for QW0363OD
QW0363IW DS CL1 In memory workfile
QW0363IY EQU C'Y' Is in-memory workfile
* Constant for QW0363IW
QW0363IN EQU C'N' Not in-memory workfile
* Constant for QW0363IW
QW0363RI DS CL1 (S)
QW0363RY EQU C'Y' (S)
QW0363RN EQU C'N' (S)
QW0363RO DS CL1 (S)
QW0363OY EQU C'Y' (S)
QW0363ON EQU C'N' (S)
QW0363RV DS CL1 (S)
QW0363VY EQU C'Y' (S)
QW0363VN EQU C'N' (S)
QW0363NE DS XL4 Number of total input workload elements
QW0363AE DS XL4 Number of actual workload elements
QW0363NR DS XL8 Total number of input records
QW0363PD DS XL4 Pipe degree
QW0363PS DS CL10 Pipe created time
QW0363PT DS CL10 Pipe terminated time
QW0363EN DS H Number of QW0363WE entries in QW0363E
QW0363NQ DS H Number of QW0363 records that
* together complete this
* series of QW0363 records
QW0363TR DS H The sequence number of the
* QW0363 records in a series of
* QW0363 records
QW0363LN_OFF DS H Offset from QW0363 to location
* name
QW0363PC_OFF DS H Offset from QW0363 to
* package collection id
QW0363PN_OFF DS H Offset from QW0363 to program
* name
QW0363LN_D DSECT Use if QW0363LN_OFF ¬= 0
QW0363LN_LEN DS H Length of the following field
QW0363LN_VAR DS 0CL128 %U LOCATION NAME (RDB NAME)
*
QW0363PC_D DSECT Use if QW0363PC_OFF ¬= 0
QW0363PC_LEN DS H Length of the following field
QW0363PC_VAR DS 0CL128 %U Package collection id
*
QW0363PN_D DSECT Use if QW0363PN_OFF ¬= 0
QW0363PN_LEN DS H Length of the following field
QW0363PN_VAR DS 0CL128 %U Program name
*
QW0363E DSECT
QW0363NM DS H Total length of all QW0363WE
* entries + 8
QW0363RE DS CL6 Unused
QW0363WE DS 0CL128 Individual workload element entry
* There are multiple QW0363WE
* entries. See field QW0363EN for
* the number of times that QW0363WE
* structure repeats
QW0363IX DS XL4 The sequence number of workload element
QW0363PI DS XL4 Subpipe index - task number
QW0363LP DS XL4 Page number of low bound of
* logical partition. Use only
* if QW0363RK = 2
QW0363HP DS XL4 Page number of high bound of
* logical partition. Use only
* if QW0363RK = 2
QW0363PB DS CL10 Subpipe start time
QW0363PE DS CL10 Subpipe end time
QW0363FF DS CL4 Unused
QW0363CN DS XL8 Counter for input #rid, #record. Use only
* if QW0363RK = 3, 4 or 5
QW0363NI DS XL4 Number of rows consumed
QW0363NO DS XL4 Number of rows produced
QW0363BI DS XL4 (S)
QW0363EI DS XL4 (S)
QW0363LB DS XL32 Low key buffer data.
* Use only if QW0363RK = 1
QW0363HB DS XL32 High key buffer data
* Use only if QW0363RK = 1
* (S) = FOR SERVICEABILITY
The IFCID 363 record is made up of a set of the following mapped sections:
QWT02PSO is mapped by DSNDQWHS.
QWT02R1O is mapped by QW0363.
QWT02R2O is mapped by QW0363E. In one QW0363E, QW0363WE can repeat up to
100 times, as indicated by the field QW0363EN.
The straw model trace for one parallel group is written as a series of IFCID 363 records.
Each record except the last contains information about 100 workload elements; the number
of QW0363WE repetitions in the last record is the remainder of the actual number of
workload elements divided by 100.
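As an illustrative sketch (not part of the product interface), the record-series arithmetic above can be expressed as follows. The struct layout is our assumption derived from the DSECT field sizes (big-endian, no padding), and the helper name series_layout is hypothetical; a real monitor program would map these fields from the IFI return area.

```python
import math
import struct

# Fixed portion of QW0363 as laid out in the DSECT above (assumption:
# big-endian, no padding): CL8 TS, F SN, 3x H (QN, GN, BD),
# 6x CL1 (RK, OD, IW, RI, RO, RV), 2x XL4 (NE, AE), XL8 NR, XL4 PD,
# 2x CL10 (PS, PT), then 6x H (EN, NQ, TR and the three *_OFF fields).
QW0363_FIXED = struct.Struct(">8s i 3h 6c 2I Q I 10s 10s 6h")

def series_layout(actual_elements, per_record=100):
    """Number of IFCID 363 records in one parallel group's series and
    the QW0363WE repetitions carried by the last record, per the rule
    that every record except the last holds 100 workload elements."""
    if actual_elements <= 0:
        return (0, 0)
    records = math.ceil(actual_elements / per_record)
    # Interpreting "remainder of actual elements / 100" as a full 100
    # when the count divides evenly (the remainder would otherwise be 0).
    last = actual_elements % per_record or per_record
    return (records, last)

print(QW0363_FIXED.size)   # 76 bytes of fixed header
print(series_layout(250))  # (3, 50): two full records plus one of 50
print(series_layout(200))  # (2, 100): exact multiple, last record full
```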
Appendix B. Summary of relevant
maintenance
With a new version of DB2 reaching general availability, the maintenance stream becomes
extremely important. Feedback from early users and the development of additional functions
generate a flow of APARs that enrich and improve the product code.
In this appendix, we look at recent maintenance for DB2 10 for z/OS that generally relates to
new functionality and critical corrections:
DB2 APARs
z/OS APARs
OMEGAMON PE APARs
These lists of APARs represent a snapshot of current maintenance at the time of writing. As
such, they might be incomplete or even outdated by the time of reading. They are included
here to help identify areas of functional improvement. Contact your IBM Service
Representative for the most current maintenance at the time of your installation. Also check
RETAIN for the applicability of these APARs to your environment, as well as to verify pre- and
post-requisites.
We recommend that you use the Consolidated Service Test (CST) as the base for service.
B.1 DB2 APARs
In Table B-1 we present a list of APARs providing functional enhancements to DB2 10 for
z/OS.
This list is not and cannot be exhaustive; check RETAIN and the DB2 website.
Table B-1 DB2 10 current function-related APARs
APAR # Area Text PTF and notes
II10817 Virtual storage Info APAR for storage usage fixlist
II14219 zIIP zIIP exploitation support use information
II14334 LOBs Info APAR to link together all the LOB support delivery
APARs
II14401 Migration Info APAR to link together all the migration APARs
II14426 XML Info APAR to link together all the XML support delivery
APARs
II14441 Incorrout PTFs Recommended DB2 9 SQL INCORROUT PTFs
II14464 Migration DB2 V8 migration/fallback info APAR to/from DB2 9
(continued from II14401)
II14474 Migration Prerequisites for migration to DB2 10 from DB2 V8
II14477 Migration Prerequisites for migration to DB2 10 from DB2 9
PK28627 DCLGEN Additional DCLGEN option DCLBIT to generate
declared SQL variables for columns defined as FOR
BIT DATA (COBOL, PL/I, C, and C++).
UK37397
PK76100 Star join (EN_PJSJ) Enable dynamic index ANDing for star join (pair-wise) UK44120
V9
PM00068 DSMAX Support up to 100000 open data sets in the DB2 DBM1
address space
UK58204/5
V8 and V9
PM01821 Migration Conversion of DBRMs to packages UK53480/1
V8 and V9
PM04968 Premigration This APAR adds the V10 DSNTIJPM job to DB2 V8 and
V9 under the name DSNTIJPA
V8 (UK56305) and
V9 (UK56306)
PM13466 Pricing Use the IFAUSAGE FBFE keyword when SMF 89
detailed data collection is enabled
UK62326/7
V8 and V9
PM13467 Pricing Use the IFAUSAGE FBFE keyword when SMF 89
detailed data collection is enabled
UK62328/9
V8 and V9
PM13525 SQL procedures Implicit auto regeneration for native SQL procedures UK67267
also V9
PM13631 IX on expression Implicit auto regeneration for index on expression UK68476
PM17542 Open/close Enable new z/OS 1.12 allocation interface UK60887/8
V8 and V9
PM18196 DFSORT Optimization of DFSORT algorithms for selecting the
sort technique that makes the best use of available
storage
UK62201
PM18557 Restart DB2 to support z/OS R12 changes to improve high
allocation requests processing
UK59887/8
V8 and V9
PM19034 DSNZPARM CHECK_FASTREPLICATION parameter to control
FASTREPLICATION keyword on DSSCOPY command
of CHECK utilities.
UK63215
(also V8, V9)
PM19584 LOAD LOAD utility performance improvement option for data
that is presorted in clustering key order.
See also PM35284.
UK68097
also V9
PM21277 DB2 utilities REORG, CHECK INDEX, and REBUILD INDEX users
who use DB2 Sort for z/OS
UK61213 V9
also V8
PM21747 DB2 Sort Performance enhancements UK60466
PM24721 BIND BIND performance improvement when using LOBs in
DB2 catalog. Also RTS fix.
UK63457
PM24723 IFCID 225 Provide real storage value with z/OS support UK68652
PM24808 DB2 installation Several installation changes UK63971
also V9
PM24937 Optimizer Various fixes for optimization hints UK66087
PM25282 IRLM IRLM Large Notify support. IRLM uses ECSA storage to
handle response data to a NOTIFY request. Prior to
DB2 10, the amount of data per member was 4 MB. In
DB2 10, this limit has been increased to 64 MB. IRLM
NOTIFY RESPONSE exceeding 4 MB is processed
using IRLM private storage.
UK64370
PM25357 Optimizer Getpage increase using subquery UK63087
PM25525 REORG PARALLEL keyword on REORG with LISTDEF
PARTLEVEL (apply with PM28654/UK64589)
UK64588
also V9
PM25635 ADMIN_INFO_SQL Corrected DDL and STATs issues (also for DSNADMSB) UK62150
PM25648 REORG Improving REORG concurrency on defer ALTER
materialization
UK70310
PM25652 DSNACCOX Enhancement to recommend REORG on Hash Access
objects based on overflow index ratio vs. total rows
UK66610
PM25679 Optimizer Enhancement for APREUSE/APCOMPARE UK70233
PM26480 DDF Availability (MODIFY DDF ALIAS...) new functions UK63820
PM26762 DSNZPARM FLASHCOPY_PPRC (V10 only) and
REC_FASTREPLICATION for use by utilities
UK63366
(also V8, V9)
PM26781 DDF Availability (MODIFY DDF ALIAS...) preconditioning UK63818
PM26977 Security Separate system privileges from system DBADM
authority
UK65205
PM27073 SPT01 Inline LOB preconditioning UK65379
PM27097 WLM Support to start and maintain a minimum number of
WLM stored procedure address spaces
UK65858
also V9
PM27099 LISTDEF Empty lists change from RC8 to RC4. Important since
the new LISTDEF DEFINED keyword defaults to YES
UK64424
PM27811 SPT01 Inline LOB UK66379
PM27828 UPDATE UTS update with small record goes to overflow
unnecessarily
UK64389
PM27835 Security DB2 now supports a TCB level ACEE for the authid
used in an IMS transaction that calls DB2.
UK70647
also V9
PM27872 SMF Decompression routine DSNTSMFD and sample JCL
DSNTEJDS to call DSNTSMFD
UK64597
PM27973 Segmented TS Better use of free space for SEGMENTED pagesets, including UTS
PBG and PBR UK65632
PM28100 Stored
procedures
Support JCC JDBC 4.0 driver for Java stored
procedures
UK65385
PM28296 Security Support for secure audit policy trace start UK65951
PM28458 Casting Timestamp with timezone, add restrictions for extended
implicit cast for set operators
UK63890
PM28500 System profile Filters on client information fields UK68364
PM28543 Security Implicit system privileges have to be separated from
system DBADM authority
UK65253
PM28796 SYSROUTINEAUTH Inefficient access to the DB2 catalog during GRANT on stored
procedures UK65637
PM28925 DSNZPARM Force deletion of CF structures at group restart UK66376
PM29037 LOBs Altered LOB inline length materialization via REORG
SHRLEVEL CHANGE
UK70302
PM29124 CHAR
incompatibility
Help with handling the release incompatible change for
the CHAR(decimal) built-in function on DB2 10
UK67578
PM29900 Built-in functions Additions UK66476
PM29901 Built-in functions More additions UK66046
PM30394 Security DBADM authorities enhancements UK67132
PM30425 Optimizer Optimization hints enhancements UK67637
also V9
PM30468 zIIP Prefetch and deferred write CPU, when running on a
zIIP processor, is to be reported by WLM under the
DBM1 address space, not under the MSTR
UK64423
PM30991 RECOVER Prohibit RECOVER BACKOUT YES after mass delete
on segmented or UTS
UK66327
PM31003
PM31004
PM31006
PM31009
Data sharing DELETE data sharing member UK65750
UK67512
UK67958
UK69286
PM31243 REORG REORG FORCE to internally behave like CANCEL
THREAD
OPEN
PM31313 Temporal ALTER ADD COLUMN to propagate to History Tables UK70215
PM31314 Temporal TIMESTAMP WITH TIMEZONE UK71412
PM31614 Packages Improvement in package allocation UK66374
PM31641 LOGAPSTG Default changed for Fast Log Apply from 100 to 500 MB UK66964
PM31807 IRLM IRLM support for DB2 DSNZPARM
DEL_CFSTRUCTS_ON_RESTART (PM28925) to
delete IRLM lock structure on group restarts.
UK65920
PM31813 DSNZPARM New DSNZPARM DISABLE_EDMRTS to specify
whether to disable collection of real time statistics (RTS)
by the DB2 Environmental Descriptor Manager (EDM).
This system configuration parameter requires
PM37672 to be fully enabled.
UK69055
PM33501 DSNZPARM Add a DSNZPARM to disable implicit DBRM to package
conversion during BIND PLAN with MEMBER option
and automatic REBIND processing.
UK68743
PM33767 Optimizer Various enhancements and fixes (OPTHINT,
APRETAINDUP, EXPLAIN PACKAGE).
UK69377
PM33991 Migration Several installation jobs fixes. UK69735
also V8 and V9
PM35190 Catalog Enable SELECT from SYSLGRNX and SYSUTILX (see
also PM42331).
UK73478
PM35284 LOAD Companion APAR to PM19584 LOAD/UNLOAD
FORMAT INTERNAL and LOAD PRESORTED.
UK68098
also V9
PM37057 SSL Additional enhancements to DB2 10 for z/OS digital
certificate authentication support.
UK73180
PM36177 DSNZPARM Pre-conditioning APAR that includes changes to
support the enhancement for IRLM timeout value for
DDL statements enabled by APAR PM37660.
UK69029
PM37112 REORG Enhancement for log apply phase of REORG
SHRLEVEL CHANGE when run at partition level (see
also PM45810).
UK71128
Also V9
PM37300 DDF Authorization changes when there is no private protocol
(see also PM17665 and PM38417). DSN6FAC
PRIVATE_PROTOCOL reinstated in V10 with new
option AUTH.
UK67639
also V8 and V9
PM37622 REORG Enablement of zIIP processor exploitation of the
UNLOAD phase of the REORG TABLESPACE utility.
See also PM47091 for V9.
UK71458
also V9
PM37625 DSNZPxxx
module
Preconditioning support in DB2 subsystem parameter
macro DSNDSPRM for APARs PM24723, PM36177,
PM31813, and PM33501.
UK67634
PM37630 REORG For REORG TABLESPACE SORTDATA against a non-partitioned table
space containing a single table, REORG can avoid sorting on the table OBID in the
clustering data sort and in the clustering index keys when parallel index build is used.
UK71437
also V9
PM37647 Real storage monitoring External enablement for APAR PM24723 (IFCID 225 real
storage statistics enhancements) UK68659
PM37660 DSNZPARM DDLTOX in DSN6SPRM introduces a separate timeout factor for DDL
and DCL (GRANT, REVOKE, and LOCK) statements. The timeout value is the product of
DDLTOX and the IRLM timeout value specified by DSN6SPRM.IRLMRWT.
UK69030
PM37816 DSNZPARM Follow-on to APAR PM33501; it adds DSNZPARM
DISALLOW_DEFAULT_COLLID in DSN6SPRM, which can prevent using
DSN_DEFAULT_COLLID_plan-name on implicitly generated packages during the DB2
automatic DBRM to package conversion process.
UK69199
PM38164 Access path During access path selection, index probing can have
repeated access to SYSINDEXSPACESTATS to
retrieve NLEAF. This occurs if NLEAF is NULL.
Make index probing more fault tolerant when NLEAF is
NULL.
UK71333
PM38417 DDF Complete DSNZPARM related changes for PM37300
(security with private protocol removal)
UK74175
also V8 and V9
PM39342 Online
compression
Build Dictionary routine for compression during INSERT
was modified not to be redriven by subsequent
INSERTs if the dictionary could not be built and
MSGDSNU235I is issued. See also PM45651.
UK68801
PM41447 DDF security Resolve issues with CICS or IMS related new user
processing at a DB2 10 for z/OS TCP/IP requester
system.
UK70483
PM42331 Catalog Foundation for SELECT from SYSLGRNX and
SYSUTILX.
UK71875
PM42528 Data sharing Delete data sharing member. See also PM51945 and
PM54873.
UK74381
PM42924 RUNSTATS Optimize sequential prefetch requests during
RUNSTATS TABLESAMPLE
UK70844
PM43292 DDF Allow RACF protected userIDs to be PassTicket
authenticated.
UK72212
also V9
PM43293 DDF MAXCONQN and MAXCONQW new ZPARMs UK90325
PM43597 REORG ALTER MAXROWS to set AREO* rather than AREOR UK71467
PM43817 Statistics DB2 code has been modified to improve execution
statistics reported by both IFCIDs 316 and 401 when
many concurrent threads execute same SQL statement
UK73630
PM45650 LOB LOB pageset support for RECOVER BACKOUT YES UK77584
PM45651 Online compression The Build Dictionary routine for compression during INSERT
now periodically redrives attempts to build a compression dictionary, to maximize the
benefits of compression while minimizing the cost of unsuccessful attempts to build a
dictionary. UK72447
PM45810 REORG Enhancement for log apply phase of REORG
SHRLEVEL CHANGE when run at partition level.
(See also PM37112)
UK71128
PM47091 REORG Completion for PM37622 UK71459
PM47618 XML Addition of XQuery support UK73139
PM51467 Data sharing Reduce high coupling facility utilization (from 30% up to 90%) after
migrating to V10 because of much more Delete-Name processing (during pseudo-close).
See also OA38419 and PM51655. UK75324
PM51655 Data sharing The castout logic has been modified to not use a new
restart token algorithm.
UK73864
PM51945 Data sharing Delete member completion. See PM42528. UK74381
PM52012 REORG New zParm REORG_LIST_PROCESSING to control
the default behavior of REORG TABLESPACE LIST
partition processing
UK76650
PM52724 Mass delete Lock escalation on SYSCOPY. Follow on to PM30991 UK80113
PM52327 DDF Reduce CPU for excessive block scanning for each
DDF call to a remote location.
UK74981
PM53237 RESTORE RESTORE to work with Space Efficient FlashCopy UK77490
PM53243 Stored
procedures
Monitoring improvements to easily identify problematic
stored procedures or a statement within that stored
procedure
UK78514
PM53254 REORG REORG to ignore free space for PBG UK78208
PM54873 Data sharing Delete member, more code UK74381
PM55051 NPSI SORTNPSI parm in REORG and ZPARM (also
PM60449)
UK78229
also V9
PM56631 SPSS support Pack/Unpack built-in functions (also PM55928) UK79243
PM56845 Access plan DSNZPARM for OPTIMIZE FOR 1 ROW to allow sort
access plans
UK77500
PM57206 Accounting IBM specialty engine eligible time that runs on a general
purpose CP will be reported in a serviceability field
QWACZIIP_ELIGIBLE in DB2 accounting records.
UK79406
PM57632 Load LOAD SHRLEVEL CHANGE partition parallelism with a
single input dataset
UK78632
PM57878 XML Performance improvement for XQuery constructors UK77739
PM58177 REORG REORG to accept a mapping table defined in a PBG
table space
UK78241
also V9
PM62797 Accounting New accounting aggregation (every 1 minute, with statistics)
functionality into an IFC statistics record, by connection type UK81047
PM66287 Catalog SYSPACKAGE LASTUSED interference with RTS or AUTOBIND UK82732
PM67544 Data sharing CF DELETE_NAME, performance UK82633
PM69522 DB2 Sort DB2 Sort 1.3 performance with DB2 utilities UK81520
also V9
PM70046 Access plan Re-establishes OPTIOWGT UK83168
PM70270 Buffer pool Lower buffer pool hit ratio or more synchronous I/O UK82555
PM70575 Data sharing Coupling facility level 18 cache write-around to reduce the impact of
updates to the GBP (also XES OA37550) OPEN
PM71114 Stored procedures New SYSPROC.ADMIN_UPDATE_SYSPARM stored procedure
to change the value of DSNZPARMs UK83171
also V9
PM72526 XML Asynchronous deletion of unneeded XML document versions is now a
candidate for execution on zIIP UK91203
PM80779 Performance ACCESS DB performance improvement (also PM91930) with parallel
tasks UK95407
PM82301 LOB Users get a down-level page for LOB objects defined with GBPCACHE
SYSTEM, which caused a storage overlay (also PM84750). UK82633
PM86952 Storage Contraction for multi-block memory segments now implemented for 64-bit
pools OPEN
PM94885 DDF Follow-on to PM43293 OPEN
B.2 z/OS APARs
In Table B-2 we present a list of APARs providing additional enhancements for z/OS.
This list is not and cannot be exhaustive; check RETAIN and the z/OS website.
Table B-2 z/OS DB2-related APARs
APAR # Area Text PTF and notes
OA03148 RRS exit RRS support. UA07148
OA31116 1 MB page Large frames support. UA57254
OA33106 ECSA memory
for SRB
Reduce SRB storage usage. UA56174
OA33529 1 MB page Large frames support. UA57243
OA33702 1 MB page Large frames support. UA57704
OA34865 SMF Remove cause for accumulation of storage use in
subpool 245 (SQA/ESQA)
UA55970
OA35057 Media manager Important media manager fixes. UA58937
OA35885 RSM RSM IARV64 macro provides an interface to obtain the real frame and
auxiliary storage in use to support an input high virtual storage range. UA60823
OA37550 Coupling facility Full function small programming enhancement (SPE) for z/OS
V1R12 (HBB7770) and V1R13 (HBB7780) to provide coupling facility cache structure
performance enhancements and RAS improvements. UA66416
OA38419 Coupling facility Add two new operands to control the execution flow when the
DELETE_NAME command is deleting both directory entries and the associated data
elements. One option controls generation or suppression of XI signals as entries are
deleted. The other option controls whether the command should delete changed directory
entries and data. See PM51467. UA66420
B.3 OMEGAMON PE APARs
In Table B-3 we present a list of APARs providing additional enhancements for IBM Tivoli
OMEGAMON XE for DB2 PE on z/OS V5.1.0, PID 5655-W37.
This list is not and cannot be exhaustive; check RETAIN and the DB2 tools website.
Table B-3 OMEGAMON PE GA and DB2 10 related APARs
APAR # Area Text PTF and notes
II14438 Info APAR for known issues causing high CPU
utilization.
PM22628 Various fixes and improvements to be considered part
of the GA installation package.
UK61094
PM23887 Various fixes and improvements to be considered part
of the GA installation package.
UK61093
PM23888 Various fixes and improvements to be considered part
of the GA installation package.
UK61142
PM23889 DB2 10 support for PLAN_TABLE for component
EXPLAIN.
UK61139
PM24082 Various fixes and improvements to be considered part
of the GA installation package.
UK61317
PM24083 Updates including columns to ACCOUNTING FILE
DDF, GENERAL and PROGRAM.
UK65325
PM32638 Collection of new functionality and development fixes. UK65399
PM32647 Collection of new functionality and development fixes. UK65412
PM35049 Collection of new functionality and development fixes. UK65924
PM47871 Batch Record Trace can now "FILE" IFCID 316 and 401
trace data (dynamic and static SQL statements evicted
from caches, IFI READA data) into a DB2 LOAD format.
UK72590
PM67565 Support stored procedure monitoring as delivered by
DB2 APAR PM53243/UK78514
UK8112
PM70645 Follow on for PM53243 UK81124
Abbreviations and acronyms
AC Autonomic computing
ACS Automatic class selection
AIX Advanced Interactive eXecutive from IBM
APAR Authorized program analysis report
API Application programming interface
AR Application requester
ARM Automatic restart manager
AS Application server
ASCII American Standard Code for Information Interchange
B2B Business-to-business
BCDS DFSMShsm backup control data set
BCRS Business continuity recovery services
BI Business Intelligence
BLOB Binary large objects
BPA Buffer pool analysis
BSDS Boot strap data set
CBU Capacity BackUp
CCA Channel connection address
CCA Client configuration assistant
CCP Collect CPU parallel
CCSID Coded character set identifier
CD Compact disk
CDW Central data warehouse
CF Coupling facility
CFCC Coupling facility control code
CFRM Coupling facility resource management
CICS Customer Information Control System
CLI Call level interface
CLOB Character large object
CLP Command line processor
CM conversion mode
CMOS Complementary metal oxide semiconductor
CP Central processor
CPU Central processing unit
CRCR Conditional restart control record
CRD Collect report data
CRUD Create, retrieve, update or delete
CSA Common storage area
CSF Integrated Cryptographic Service
Facility
CTE Common table expression
CTT Created temporary table
CUoD Capacity Upgrade on Demand
DAC Discretionary access control
DASD Direct access storage device
DB Database
DB2 Database 2
DB2 PE DB2 Performance Expert
DBA Database administrator
DBAT Database access thread
DBCLOB Double-byte character large object
DBCS Double-byte character set
DBD Database descriptor
DBID Database identifier
DBM1 Database master address space
DBRM Database request module
DCL Data control language
DDCS Distributed database connection
services
DDF Distributed data facility
DDL Data definition language
DES Data Encryption Standard
DLL Dynamic load library manipulation
language
DML Data manipulation language
DNS Domain name server
DPSI Data partitioning secondary index
DRDA Distributed Relational Data
Architecture
DSC Dynamic statement cache, local or
global
DSNZPARMs DB2's system configuration parameters
DSS Decision support systems
DTT Declared temporary tables
DWDM Dense wavelength division
multiplexer
DWT Deferred write threshold
EA Extended addressability
EAI Enterprise application integration
EAS Enterprise Application Solution
EBCDIC Extended binary coded decimal
interchange code
ECS Enhanced catalog sharing
ECSA Extended common storage area
EDM Environmental descriptor manager
EJB Enterprise JavaBean
ELB Extended long busy
ENFM enable-new-function mode
ERP Enterprise resource planning
ERP Error recovery procedure
ESA Enterprise Systems Architecture
ESP Enterprise Solution Package
ESS Enterprise Storage Server
ETR External throughput rate, an
elapsed time measure, focuses on
system capacity
EWLC Entry Workload License Charges
EWLM Enterprise Workload Manager
FIFO First in first out
FLA Fast log apply
FTD Functional track directory
FTP File Transfer Protocol
GB Gigabyte (1,073,741,824 bytes)
GBP Group buffer pool
GDPS Geographically Dispersed Parallel
Sysplex
GLBA Gramm-Leach-Bliley Act of 1999
GRS Global resource serialization
GUI Graphical user interface
HALDB High Availability Large Databases
HPJ High performance Java
HTTP Hypertext Transfer Protocol
HW Hardware
I/O Input/output
IBM International Business Machines
Corporation
ICF Internal coupling facility
ICF Integrated catalog facility
ICMF Integrated coupling migration
facility
ICSF Integrated Cryptographic Service
Facility
IDE Integrated development
environments
IFCID Instrumentation facility component
identifier
IFI Instrumentation Facility Interface
IFL Integrated Facility for Linux
IMS Information Management System
IORP I/O Request Priority
IPL initial program load
IPLA IBM Program Licence Agreement
IRD Intelligent Resource Director
IRLM Internal resource lock manager
IRWW IBM Relational Warehouse
Workload
ISPF Interactive system productivity
facility
ISV Independent software vendor
IT Information technology
ITR Internal throughput rate, a
processor time measure, focuses
on processor capacity
ITSO International Technical Support
Organization
IU information unit
IVP Installation verification process
J2EE Java 2 Enterprise Edition
JDBC Java Database Connectivity
JFS Journaled file systems
JNDI Java Naming and Directory
Interface
JTA Java Transaction API
JTS Java Transaction Service
JVM Java Virtual Machine
KB Kilobyte (1,024 bytes)
LCU Logical Control Unit
LDAP Lightweight Directory Access
Protocol
LOB Large object
LPAR Logical partition
LPL Logical page list
LRECL Logical record length
LRSN Log record sequence number
LRU Least recently used
LSS Logical subsystem
LUW Logical unit of work
LVM Logical volume manager
MAC Mandatory access control
MB Megabyte (1,048,576 bytes)
MBps Megabytes per second
MLS Multi-level security
MQT Materialized query table
MTBF Mean time between failures
MVS Multiple Virtual Storage
NALC New Application License Charge
NFM new-function mode
NFS Network File System
NPI Non-partitioning index
NPSI Nonpartitioned secondary index
NVS Non volatile storage
ODB Object descriptor in DBD
ODBC Open Database Connectivity
ODS Operational Data Store
OLE Object Linking and Embedding
OLTP Online transaction processing
OP Online performance
OS/390 Operating System/390
OSC Optimizer service center
PAV Parallel access volume
PCICA Peripheral Component Interface
Cryptographic Accelerator
PCICC PCI Cryptographic Coprocessor
PDS Partitioned data set
PIB Parallel index build
PLSD page level sequential detection
PPRC Peer-to-Peer Remote Copy
PR/SM Processor Resource/System
Manager
PSID Pageset identifier
PSP Preventive service planning
PTF Program temporary fix
PUNC Possibly uncommitted
PWH Performance Warehouse
QA Quality Assurance
QMF Query Management Facility
QoS Quality of Service
QPP Quality Partnership Program
RACF Resource Access Control Facility
RAS Reliability, availability and
serviceability
RBA Relative byte address
RBLP Recovery base log point
RDBMS Relational database management
system
RDS Relational data system
RECFM Record format
RI Referential Integrity
RID Record identifier
RLSD row level sequential detection
ROI Return on investment
RPO Recovery point objective
RR Repeatable read
RRS Resource recovery services
RRSAF Resource recovery services attach
facility
RS Read stability
RTO Recovery time objective
RTS Real-time statistics
SAN Storage area networks
SBCS Single-byte character set
SCT Skeleton cursor table
SCUBA Self contained underwater
breathing apparatus
SDM System Data Mover
SDP Software Development Platform
SLA Service-level agreement
SMIT System Management Interface Tool
SOA Service-oriented architecture
SPL Selective partition locking
SPT Skeleton plan table
SQL Structured Query Language
SQLJ Structured Query Language for
Java
SRM Service Request Manager
SSL Secure Sockets Layer
SU Service Unit
TCO Total cost of ownership
TPF Transaction Processing Facility
UA Unit Addresses
UCB Unit Control Block
UDB Universal Database
UDF User-defined functions
UDT User-defined data types
UOW Unit of work
UR Uncommitted read
UR Unit of recovery
vCF Virtual coupling facility
VIPA Virtual IP Addressing
VLDB Very large database
VM Virtual machine
VSE Virtual Storage Extended
VSIP Visual Studio Integrator Program
Related publications
We consider the publications that we list in this section particularly suitable for a more
detailed discussion of the topics that we cover in this book.
IBM Redbooks publications
For information about ordering these publications, see How to get Redbooks publications on
page 643. Note that some of the documents referenced here might be available in softcopy
only.
DB2 9 for z/OS and Storage Management, SG24-7823
DB2 9 for z/OS Performance Topics, SG24-7473
DB2 9 for z/OS Technical Overview, SG24-7330
DB2 9 for z/OS: Distributed Functions, SG24-6952
Extremely pureXML in DB2 for z/OS, SG24-7915
Data Studio and DB2 for z/OS Stored Procedures, REDP-4717
z/OS Version 1 Release 12 Implementation, SG24-7853
DB2 9 for z/OS: Configuring SSL for Secure Client-Server Communications, REDP-4630
DB2 UDB for z/OS Version 8: Everything You Ever Wanted to Know, ... and More,
SG24-6079
DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond, SG24-7604
DB2 9 for z/OS: Packages Revisited, SG24-7688
Securing DB2 and Implementing MLS on z/OS, SG24-6480
IBM DB2 9 for z/OS: New Tools for Query Optimization, SG24-7421
z/OS Version 1 Release 10 Implementation, SG24-7605
z/OS Version 1 Release 11 Implementation, SG24-7729
IBM z/OS V1R11 Communications Server TCP/IP Implementation Volume 2: Standard
Applications, SG24-7799
Other publications
These publications are also relevant as further information sources:
DB2 Version 8 Installation Guide, GC18-7418-11
DB2 10 for z/OS Administration Guide, SC19-2968
DB2 10 for z/OS Application Programming and SQL Guide, SC19-2969
DB2 10 for z/OS Application Programming Guide and Reference for Java, SC19-2970
DB2 10 for z/OS Codes, GC19-2971
DB2 10 for z/OS Command Reference, SC19-2972
DB2 10 for z/OS Data Sharing: Planning and Administration, SC19-2973
DB2 10 for z/OS Installation and Migration Guide, GC19-2974
DB2 10 for z/OS Internationalization Guide, SC19-2975
DB2 10 for z/OS Introduction to DB2 for z/OS, SC19-2976
IRLM Messages and Codes for IMS and DB2 for z/OS, GC19-2666
DB2 10 for z/OS Managing Performance, SC19-2978
DB2 10 for z/OS Messages, GC19-2979
DB2 10 for z/OS ODBC Guide and Reference, SC19-2980
DB2 10 for z/OS pureXML Guide, SC19-2981
DB2 10 for z/OS RACF Access Control Module Guide, SC19-2982
DB2 10 for z/OS SQL Reference, SC19-2983
DB2 10 for z/OS Utility Guide and Reference, SC19-2984
DB2 10 for z/OS What's New?, GC19-2985
DB2 10 for z/OS Diagnosis Guide and Reference, LY37-3220
DB2 Version 9.5 for Linux, UNIX, and Windows Call Level Interface Guide and Reference,
Volume 2, SC23-5845
Program Directory for DB2 10 for z/OS, GI10-8829
Program Directory for z/OS Application Connectivity to DB2 10 for z/OS, GI10-8830
Program Directory for DB2 Utilities Suite for z/OS V10, GI10-8840
z/OS V1R10.0 Security Server RACF Security Administrator's Guide, SA22-7683
IBM z/OS V1R11 Communications Server TCP/IP Implementation Volume 2: Standard
Applications, SG24-7799
z/OS V1R11.0 Communications Server New Function Summary z/OS V1R10.0-V1R11.0,
GC31-8771
Effective zSeries Performance Monitoring Using Resource Measurement Facility,
SG24-6645
DFSMSdss Storage Administration Reference, SC35-0423
IBM DB2 Sort for z/OS Version 1 Release 1 User's Guide, SC19-2961
Online resources
The following websites are also relevant as further information sources:
DB2 10 for z/OS
http://www.ibm.com/software/data/db2/zos/
DB2 Tools for z/OS
http://www.ibm.com/software/data/db2imstools/products/db2-zos-tools.html
Data Studio
http://www.ibm.com/software/data/optim/data-studio/
For z/OS V1R10 IPv6 certification, refer to Special Interoperability Test Certification of the
IBM z/OS Version 1.10 Operating System for IBM Mainframe Computer Systems for Internet
Protocol Version 6 Capability, US Government, Defense Information Systems Agency,
Joint Interoperability Test Command
http://jitc.fhu.disa.mil/adv_ip/register/certs/ibmzosv110_dec08.pdf
DB2 Sort for z/OS
http://www.ibm.com/software/data/db2imstools/db2tools/db2-sort/
DB2 Information Center
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=/com.ibm.db29.doc/db2prodhome.htm
IBM zEnterprise 196 I/O Performance Version 1, technical paper
ftp://public.dhe.ibm.com/common/ssi/ecm/en/zsw03169usen/ZSW03169USEN.PDF
How to get Redbooks publications
You can search for, view, or download Redbooks, Redpapers, Technotes, draft publications
and Additional materials, as well as order hardcopy Redbooks publications, at this website:
ibm.com/redbooks
Help from IBM
IBM Support and downloads
ibm.com/support
IBM Global Services
ibm.com/services
Index
Numerics
CURRENT TIMESTAMP WITH TIME ZONE special register 174
A
ABIND 478
Acceptable value 58
access 61, 82, 116, 132, 223, 287, 310, 337, 431, 535,
617
access control 133
catalog changes 390
Access Control Authorization Exit 367
access data 367
-ACCESS DATABASE 121
access path 134, 224, 401–402, 487–488, 535, 537
erratic changes 224
wild swings 224
ACCESSCTRL 339
ACCESSCTRL authority 356, 365, 369
grant privilege 371
active DB2
group member 474
subsystem 110
Active log 98–99, 477–478, 518
active version 130, 138–139
ADD VERSIONING USE HISTORY TABLE clause 208
administrative authority 339
ADVISORY REORG 75, 89
advisory REORG-pending (AREOR) 71
aggregate functions 184
aggregation group 184, 187, 189
aggregation group boundaries 194
ALTER
restrictions 77
ALTER BUFFERPOOL 486, 563
ALTER Function 128, 366
main portion 133
remaining portion 134
ALTER statement 71–72, 116, 130, 206, 259, 410
ALTER COLUMN clause 208
COLUMN clause 208
ALTER TABLE 81, 163, 259, 391, 459, 570
DB2R5.CUSTOMER 395
DROP Organization 85
statement 85, 91, 100, 103, 209, 263, 396, 570, 579
syntax 84–85
TB1 90
TRYTEMPORAL_B 216
TRYTEMPORAL_E 207
ALTER Table 70, 180, 207, 216, 570, 580
alter table
DDL statement 387
SQL statement 391
ALTER TABLESPACE 55, 73, 75, 115
CHANGES keyword 91
clause 80
command 78, 91, 103
DSSIZE 78–79, 87
FREEPAGE 74, 79, 91
ITSODB.TS1 BUFFERPOOL BP16K0 78
ITSODB.TS1 BUFFERPOOL BP8K0 75–76
ITSODB.TS1 BUFFERPOOL BP8K9 77
ITSODB.TS1 Drop 76
ITSODB.TS1 DSSIZE 79
ITSODB.TS1 DSSIZE 8 G 86
ITSODB.TS1 SEGSIZE 8 86
ITSODB.TS2 MEMBER Cluster 92
MAXPARTITIONS 73–74, 87
option 78, 92
SQL code 77
statement 74, 76, 78
ALTER TABLESPACE statement 74, 84, 116
APAR 55, 120–121, 224–225, 244, 600
APAR II4474 473
APAR II4477 473
APIs 413
APPEND=YES 558
application 52, 55, 67, 125, 262, 310, 317, 337, 535
application developer 166, 204, 371, 603
application period 205
AREO* 71
AREOR 71–72, 75, 116, 487, 570
AREOR state 74
AREOR status 73, 78
argument 130, 245, 400
AS ROW BEGIN attribute 206
AS TRANSACTION START ID attribute 206
attribute 62, 73, 127, 231, 245, 339, 516, 554
ATTRIBUTE2 column 610–611
Audit category 338
CONTEXT 339
DBADMIN 339
EXECUTE 339
group auditable events 354
OBJMAINT 339
SECMAINT 339
SYSADMIN 340
VALIDATE 340
Audit category CHECKING 339
audit policy 338, 488
auditing capability 337
AUDITPOLICYNAME value 343
AUDTPLCY keyword 342
AUDTPLCY parameter 344
display trace command 351
auth ID
DB0BAC 370–371
DB0BDA 363, 368
DB2R51 375–376
DB2R52 373
DB2R53 419
auth Id
trusted context thread reuse 424
AUTHORIZATION Id 60, 130, 338, 514, 605
authorization Id
SECADM authority 362
automatic class selection (ACS) 516
automatic GRECP recovery 120
AUX keyword 452
auxiliary index 271, 569
auxiliary table 92, 277, 454, 569
space 454, 569–570
auxiliary table space 569
B
base table 88, 269, 452, 541
base table space
asynchronous multi-page read requests 545
online reorganization 107
pending definition changes 92
base tables 452
BASE TABLESPACE 454
Basic row format (BRF) 577
BIGINT 149, 249, 515, 582
BIND DEPLOY 140
bind option 128
bind package options 374
bind parameter 537
BIT 134, 287
BLOB 280, 408, 489, 570
BLWLINTHD 35
BLWLTRPCT 35
BSDS 63, 100, 365
BSDS expanded format 477
buffer pool 56, 73, 75, 109–110, 132, 486, 536, 545, 558
activity 119, 562
attribute 77, 562–563
BP0 91
change 521
complete scans 114
CPU overhead 559–560
data entries 120
directory 119–120, 559
entire size 561
manager 563
manager page 563
name 77, 119
page sets 563
relevant pages 119
scan 114
setting 520–521
size 55, 75, 77, 114, 505, 561, 581
space 561
storage 561
support 561
buffer pool management 563
buffer pool scan avoidance 114
buffer pools 114, 120, 618
BUSINESS_TIME period 205, 216
temporal table 216
BUSINESS_TIME semantics 218
C
catalog table 55, 116, 127, 163, 206, 223–224, 307, 332, 340, 496, 571
BRANCH column 397
pertinent records 225
CATMAINT 379, 517
CATMAINT Update 517
CCSID 91, 127, 492
CF 118, 506, 618
Changing the DSSIZE of a table space 80
Changing the page size of a table space 75
Changing the page size of an index 78
Changing the segment size 78
character string 150, 250, 408, 415
explicit cast 156
CHECK DATA 108, 269, 464
CHECK INDEX 269, 465, 546
CHECK LOB 108, 464
CHECK utility 108, 464–465
main purpose 465
Checkpoint Type (CHKTYPE) 97
CICS 110, 235, 309
class 54, 57, 184, 288, 377, 490
CLI 308
CLOB 73, 176, 280, 408, 489, 570
CLUSTER 6263, 109, 114, 492, 535
CM 107, 224, 361, 535
CM* 476
COLGROUP 87
COLLID 53, 127, 241, 339
colon 155, 317
Column
XML 272
column DRIVETYPE 590
Column mask 360
column access control 398
column mask 209, 367, 376, 384, 395, 497
Column name 72, 88, 340, 359, 474, 571, 579, 582
column value 219, 390, 395, 582
command line
processor 137, 412
COMMENT 55, 408
components 296, 317, 416, 485
compression 3, 55, 64, 103, 225, 277, 518, 621
Compression dictionary 105
compression dictionary 103
condition 137, 251, 391, 604
CONNECT 137, 348
connected DB2 subsystem
IRLM parameters 331
conversion mode 57, 60, 98
parallelism enhancements 593
conversion mode (CM) 473, 475–476, 535–536, 538, 540, 545, 548, 550–551, 559–560,
563, 565, 569–570, 572, 574, 592–593
COPY 86, 105, 364, 478
corresponding history table
table name 207
corresponding system-period temporal table
table name 207
COUNT 137, 351
coupling facility (CF) 118
CPU overhead 549, 561
CPU reduction 518, 535, 587
CPU time 57, 559
significant savings 570
total cost 559
CREATE TABLE
LIKE 367
CREATE_SECURE_OBJECT system privilege 376
CREATEIN 364
CREATOR 375
cross invalidation 114, 119
CS 132, 239, 405
CTHREAD 52, 503
CURRENT EXPLAIN MODE 373
CURRENT SQLID 362363
CURRENT TIMESTAMP 138, 159, 174, 300
time zone 174
customer table 250, 381, 390
column access control 390, 395
column mask objects 398
following tasks 398
row access control 391
row policies 393
D
data 3, 68, 109, 125, 243, 309, 337, 426, 473, 615
data page 79, 83, 105, 115, 118, 558–559, 576
different set 115
data set 53, 57, 61–62, 69, 73, 80, 274, 311, 418, 477, 561, 565
maximum number 503
data sharing 55, 77, 110, 132, 354, 468, 474, 598
Data type 72, 88, 90, 132, 134, 154, 205–206, 340, 359, 485, 554, 571
data type
returns information 166
DATAACCESS 339, 486
DATAACCESS authority 356, 367, 488
database administrator 204, 337
Database name 341
database object 68, 91, 338, 355, 385, 460
database system 154, 156, 183, 203204
existing applications 154
timestamp data 165
DATASIZE 106, 584
DATE 132, 290, 362, 478, 583
date format 132
datetime constants 154
DB0B DSNWVCM1 345
DB2 10 1–2, 3, 51–52, 67–69, 109–110, 125, 203–204, 243, 337–338, 471, 475, 531, 575
64-bit COMMON storage 597
access control feature 385
access plan stability support 229
address 107, 354
audit category 340
authority 348
base 484
break 599
BSAM buffers 572
buffer pool enhancements 560
catalog 63, 489
CDB table SYSIBM.USERNAMES 413
CM 458
change 405, 413, 559
CM9 489
conversion mode 52, 475
conversion mode (CM) 535–536, 538, 540, 545, 548, 550–551, 559–560, 563, 565,
569–570, 572, 574, 592–593
conversion mode 8 476
conversion mode 9 476
CM 107, 361, 458, 473
data 57
DB2 catalog 489
disables compression 56
early code 111
ENFM 476
enhancement 230
environment 349, 362
externalize 347, 599
format 514
function 520
group 338
identity propagation 417418
index scan 545
installation CLIST 502
Load 569
make 240
member consolidation 597
memory mapping 599
migration planning 474
new BIND QUERY feature 498
new column 359
new column value 359
new DSNZPARMs 505
new enhancements 606
new function 518
new z/OS identity propagation 417
new-function mode 118, 148, 166, 231, 476, 519
NFM 62, 116, 360, 458, 490, 532
object changes enhancement 497
online changes 71
packaging 480
page fix 574
policy 379
premigration 52
pre-migration health check job 473
running 418
ship 570
SQL 383
SQL function 530
SQL functionality 530
supports DRDA 531
supports index 540
table spaces 499
terminate 474
tolerate 515, 518
track 590
use 565
utility 572
DB2 10 for z/OS
performance improvements 533
DB2 8 68, 103, 126, 460
DB2 9 7, 52, 55, 57, 68, 109–110, 130, 137, 139, 224, 243, 309, 312, 338, 383, 416, 468,
538, 545
big incentive 545
DB2 10 56–57, 71, 77, 130, 229, 315, 383, 468
described behavior 465
documentation 518
dynamic prefetch 546
IPv6 support 316
new driver 53
package 224, 227
partitioned table spaces 458
subsequent REBINDs 227
system 121, 473
DB2 9 XML additional functions 244
DB2 authorization 339
DB2 Catalog 500
DB2 catalog 77, 85, 140, 163, 474, 484, 490, 559
SMS definitions 517
DB2 command
interface 348
DB2 family 125, 591
other products 129
DB2 issue 226, 347, 503
DB2 member 109–111
DB2 QMF
Enterprise Edition 482
DB2 startup 340, 574
Automatic policy start 340
automatic start 343
DB2 subsystem 1, 51, 54, 97, 110–111, 132, 354, 356, 464–465, 475, 536, 575
query tables 367
DB2 system 107, 215, 359, 480, 606, 610
DB0A 412–414
privilege 359
DB2 utility 61, 63, 381, 427, 468, 484
additional EAV support 63
DB2 V8 63, 68, 100, 120, 227, 312, 458, 473, 475
behavior 120
CM 475
end of service 472
function 312
group attachment behavior 120
NFM 13, 475, 517
supported column type 515
system 474
DB2 V9 54
56-way system 54
DB2 Version
7 599
8 120, 458
8 Installation Guide 478
9 562, 601
DB2 Version 1 54, 575
DB2 Version 5 115, 553
DB2 Version 8 111–112
DB2R5.AUDEMP 350
Value 406
DB2R5.CUSTOMER 392
PERMISSION RA02_CUSTOMERS 402
DB2R5.CUSTOMER 391
PERMISSION RA01_CUSTOMERS 393
DB2-supplied routine 523
Validate deployment 529
DBA toolbox 578
DBADM authority 363
DBADMIN column 367, 369
DBAT 315, 548, 603
DBCLOB 176, 280, 570
DBD01 121, 491
DBET 121, 465
DBID/OBID 405
DBM1 address space 52, 530, 536
virtual storage usage 599
DBMS 118, 204, 243
DBNAME 72, 76, 339
DBPROTOCOL 531
DBRM 233–234, 372, 478, 599
DDF 109, 414, 535
DDF command 117, 312313, 549
DDF indoubt URs 117
DDF location 598
DDF restart light 117
DDL 55, 73, 91, 307, 385, 491, 584
DDLTOX 632
DECFLOAT 132, 149, 251, 579
decimal 132, 251
decomposition 246, 482
default value 57, 106, 130, 225, 231, 245, 361, 377, 502,
569, 572
deferred write 47
DEFINING SECTION (DS) 616
definition change 71–72, 74, 487
DEGREE 53, 131, 622
DEL_CFSTRUCTS_ON_RESTART 506, 618
DELETE 119, 278, 350, 466, 474, 569
delete 237, 282, 341, 496, 616
delete name 119120
dependent privilege 383
cascading revokes 383
DEV Type 94
DFSMS 428, 473
DIS DB 72, 459–460
DIS LOG
command output 99
Output 98
dis thd 416
DIS Trace 352
-DISPLAY DDF 312
-DISPLAY GROUP DETAIL 111
-DISPLAY LOCATION 311
-DISPLAY LOCATION DETAIL 311
display profile 374
DISPLAY THREAD 416
Distributed 418
distributed data facility (DDF) 3, 52, 109, 308, 310
availability 310
Domain Name 315
domain name 312, 315
DRDA 165, 262, 310, 373, 481
DS8000 3, 62
DSN_STRUCT_TABLE 404
DSN_XMLVALIDATE 266
DSN088I 313–314
DSN1COMP 105
DSN1COPY 116
DSN6SPRM 59, 225, 501
DSN6SYSP 501
DSNAME 94, 262
DSNDB01.SYSUTILX 468
DSNJU003 98
DSNJU004 312, 468, 477
DSNL519I 315
DSNL523I 315
DSNLEUSR 413
DSNT404I SQLCODE 75, 89, 382
DSNT408I 77, 90, 373
DSNT408I SQLCODE 77, 103
DSNT415I SQLERRP 75, 77
DSNT416I SQLERRD 75, 77
DSNT418I SQLSTATE 75, 77
DSNTEP2 137, 373
DSNTIJEN job 495
DSNTIJIN 99, 490
DSNTIJMV 528
DSNTIJPM 52, 473
DSNTIJTM 60
DSNTIJXA 510
DSNTIJXB 511
DSNTIJXC 513
DSNTIJXZ 520
DSNTIP7 306
DSNTIP9 58, 538
DSNTIPK 111
DSNTIPN 64
DSNTIPSV 520
DSNTSMFD 64
DSNTWFG 60
DSNTXTA 510
DSNTXTB 511
DSNUM 94
DSNWZP 381
DSNZPARM 59, 119, 225, 274, 344, 501, 538, 618
ABIND 478
CACHEDYN 240
CHKFREQ 97
CHKLOGR 97
CHKMINS 97
CHKTYPE 97
FCCOPYDDN 427
FLASHCOPY_COPY 426
FLASHCOPY_PPRC 426
IMPDSDEF 306
LOB_INLINE_LENGTH 570
PLANMGMT 225
REVOKE_DEP_PRIV 383
REVOKE_DEP_PRIVILEGES 383
SECADM1 380
SEPARATE_SECURITY 344, 361, 377
SMFCOMP 64
TCPALVER 413
DSNZPARM parameter
ACCUMACC 601
ACCUMUID 601
PTASKROL 601
dsnzparm SEPARATE_SECURITY 365
DSNZPARM SPRMRRF 69
DSNZPARMs
for the SECADM authority 360
DSSIZE 57–58, 60, 68, 89, 490, 564
duplicate LRSN value 119, 557
duplicate row 190
alternative orderings 193
possible orderings 193
Dynamic prefetch 545
Dynamic SQL
access path 224
query 221
dynamic SQL 131, 224, 350, 553, 620
access path stability 230
OMPE record trace 354
dynamic SQL statement 405
Dynamic statement cache 535, 616
Dynamic VIPA 315
E
EAV volume
format 62
list 62
EDMPOOL 56
efficiency 337
EIM 416
element 154, 247, 624
empty Lob 567
enable-new-function mode (ENFM) 476
end column 205
ENFM 468
enterprise identity mapping (EIM) 416
ENVID value 390
environment 53, 72, 77, 110, 115, 140, 223, 262, 348,
485, 537, 627
Equivalent SQL 164
error message 76, 89–90, 314, 373, 428, 452, 516, 529
EXEC SQL 233, 351, 555
EXPLAIN 132, 541
QUERYNO 374
Explain 241, 482
EXPLAIN authority 371
EXPLAIN output 403, 541
explicit cast 149, 232
truncation semantics 151
expression 126, 232, 245, 366, 476, 540
extended address volume
Support 62
extended address volume (EAV) 62
extended addressability (EA) 516
Extended addressing space (EAS) 62
extended correlation token 316
extended format
data sets 516
extended format (EF) 62, 516, 571
extended indicator variable 231
extended indicator variables 230
extended timestamp precisions
SQLTYPE 392/393 530
EXTERNAL Action 127
external length 157
F
FALLBACK SPE
APAR 474
Applied 474
Fallback SPE 474
fallback SPE 474
FCCOPYDDN 427
FETCH 275, 535
fetch 131, 274, 415, 540, 616
FICON 3
file reference variables 565–567, 569
FINAL TABLE 280
FlashCopy 1–2, 3, 22, 425, 535
FlashCopy enhancements 426
FlashCopy image copy 426, 444
with the COPY utility 426
flexibility 13, 100, 115, 517, 606
fn
contains 252
FOR BIT DATA 287
FOR SYSTEM_TIME AS OF timestamp-expression 213,
216
fractional second 155, 157
missing digits 158
function 3, 56–57, 107, 125, 204, 244, 361, 465
FUNCTION TRYXML 136
function version 133, 139
G
GB bar 52
data stores 53
virtual storage 52
GENERATED ALWAYS 277, 407
GENERATED BY DEFAULT 92
getpage 546
given table space
currently pending definition change 91
other running processes 461
XML data 566
GRANT ACCESSCTRL 380
GRANT DATAACCESS 363
GRANT option 359
graphic string 148
implicit cast 149
GRECP 120
group attach name 110
group attachment 110, 120
GRP 110
H
handle 51, 86, 120, 230, 449, 518, 567
hard disk drive (HDD) 590
Hash access
bad choice 578
perfect candidate 578
hash access 499, 536, 560
hash key 85, 577578
new columns 580
unique hash overflow index 580
HASH space 81, 83, 578, 584
hash space 576
extra space 578
hash table 576
hash overflow index 583
other access path options 580
space 80, 85
table space 577, 580
heavy INSERT
application 92
environment 92, 115
HIGH DSNUM 94
High Performance Option (HPO) 486
HISTOGRAM 87
history table 205206
corresponding columns 210
ROWID values 210
host variable 163, 231–232, 280, 415, 537, 540, 581
target column 231
host variables 231, 281, 415, 537
I
I/O 122, 502, 535, 544, 622
I/O parallelism 558
I/O subsystem 575
IBM DB2
Driver 482
server platform 338
Utility 484
IBMREQD 73, 76, 408
ICTYPE 87, 96, 451452, 571
IDCAMS 62–63, 99
identity propagation 416
IEAOPTxx 35
IFC DESCRIPTION (ID) 338
IFCID 140 340
IFCID 141 341, 363
audit trace record 363
grant 339
IFCID 142 341
IFCID 143 341, 349
IFCID 144 351
IFCID 145 405
IFCID 148 539
IFCID 225 599, 618
Duplicate data 599
IFCID 269 342
IFCID 270 341
IFCID 3 599
IFCID 306 55
IFCID 359 119, 598, 622
IFCID 361 348, 367
audit trace record 369
trace record 348
IFCID 362 342
RECTRACE report 347
trace record 347
IFCID 365
record 602
trace 602
trace data 602
IFCID record 411, 599
IFCID 23 340
II10817 628
II14219 628
II14334 628
II1440 628
II14426 244, 628
II14438 635
II14441 628
II14464 628
II4474 628
II4477 628
image copy 93, 426, 519, 570
IMMEDWRITE 132, 226
IMPLICIT 77, 391
implicit cast 148–149
implicit casting 148
Examples 150
implicit casting optimization 154
implicit time zone 170
possible difference 175
IMS 235, 309, 484, 576
INCLUDE XML TABLESPACES 270
index 70, 109, 114, 163, 251, 340, 489, 622
index ORing 539
index page
split 119, 557, 622
indicated time zone
time zone value 176–177
Indicator variable 231
indicator variable 181, 230231
negative value 234
special values 231
infix arithmetic operators 150
informational APARs 473
inline copy 96, 454
INLINE Length 570–571
inline LOB 569
inline Lob 569
inline SQL scalar function 127
KM2MILES 136
return 137
IN-LIST enhancement 540
IN-list predicate 541
transitive closure predicates 541
inner workfile 593
input parameter 60
INSERT 92, 103, 109, 152, 231, 247, 343, 555
insert 53, 122, 152, 230, 247, 341, 496, 535, 616
installation 52, 131, 225, 306, 340, 471, 565
installation CLIST 60, 477, 502
different name 521
installation job
DSNTIJRT 413
DSNTIJSG 606
INSTANCE 94, 348, 527
instrumentation facility interface (IFI) 603
Interactive Storage Management Facility (ISMF) 516
International Standards Organization (ISO) 155
IP address 310, 312, 317, 419–420, 488, 605, 607
IPLIST 310
IPNAMES 310, 413
IPv6 support 316
IRLM 118, 468, 473, 599, 618
IS 60–61, 77, 98, 114, 281, 346, 428, 475, 567, 622
ISO format 154–155
IX 114
J
Japanese industrial standard (JIS) 132
Java 156, 308309, 482, 535
JCC 308, 554
JDBC 110, 280, 381, 482
JDBC driver 381, 485
K
KB 55, 75, 500, 504, 538
KB buffer pool 56, 545
KB chunk 561
KB page 548, 561
keyword 72, 75, 158, 261, 312, 342, 487, 554
L
LANGUAGE SQL 127, 136, 304
LARGE 75, 585
LDAP 416
left hand side
UTC representation 172
LENGTH 157, 460, 570
LIKE 137, 280, 339
list 59, 111, 137, 232, 343, 535, 618
list prefetch 537
LOAD 116, 181, 237, 261, 368, 426, 501, 546
LOB 77, 149, 243, 408, 452, 481
LOB column 53, 79, 491, 566–567, 570
inline length 571
inline length attribute 571
inline portion 570
inlined piece 570
LOB object 77, 88
LOB table 70, 77, 271, 453, 489, 567
LOB table space 69–70, 79, 89, 307, 454–455, 458, 491, 497, 514, 567
REORG SHRLEVEL options 458
space reclamation 458
LOB/XML 306, 566
LOBs 56, 149, 271, 481, 489, 536
processing 308, 566
location alias name 311
LOCATIONS 310, 413
LOCK 118
locking 51, 55, 109, 279, 489, 583
locks 117, 273, 462, 504
LOG 73, 94, 487, 575
log record 54, 96, 503, 575
log record sequence number 118
log record sequence number (LRSN) 118
LOGGED 94, 492
logging 51, 67, 118
logical aggregation group 192
LOGICAL Part 94
logical partition 100, 102, 460, 624
logical unit of work (LUW) 338
Logical Unit of Work identifier 316
LOGICAL_PART 101–102
LOGRBA 353
long-running reader 106, 461, 504
LOW DSNUM 94
LPAR 111
LPL 121
LRSN 94, 118, 277, 557
LRSN spin avoidance 118
LUMODES 310
M
M 81, 154, 292, 341
machine-readable material (MRM) 73
MAINT mode 520
maintenance 119, 121, 128, 243, 332, 474, 627
map page 115
MASK/PERM 405
mass delete 239
materialization 74, 78, 308, 538
materialized query tables 400
maximum number 59–61, 69, 80, 165, 503, 538, 550
default value 504
MAXOFILR 52
MAXTEMPS 538
MB page
frame 561
MEMBER CLUSTER 74, 115, 558
MEMBER Cluster 74, 115, 536, 577
member cluster 92
MEMBER CLUSTER column 499
MEMBER CLUSTER for UTS 114
MEMBER Name 94, 110
MERGE 103, 231, 278, 604
Migration 381
MODIFY 311, 488, 549
-MODIFY DDF ALIAS 311
MONSIZE 503
moving aggregates 184
MSTR address space 107
msys for Setup 481
MVS LPAR 620
N
namespace 255
native SQL procedure 148, 301
native SQL stored procedures 300
new-function mode 55, 115, 473
DB2 10 57, 115, 126, 476
DB2 9 118, 148, 476
NFM 103, 224, 338, 535
node 252, 583
non-inline SQL scalar function 127
non-LOB 565566
non-XML data 566
NORMAL Completion 98, 345, 460
NPAGESF 94
NPI 546
NULL 72, 126, 220, 245, 340, 514, 567
null value 130, 177, 213, 231, 397, 583
numeric value 149
Implicit cast 150
NUMPARTS 57, 116, 507
O
OA03148 634
OA17735 35
OA24811 426
OA25903 262
OA31116 634
OA33106 634
OA33529 634
OA33702 634
OA34865 634
OA35057 635
OA35885 635
OA38419 633, 635
Object 72, 341
OBJECTNAME 339
OBJECTTYPE 339
OBJECTTYPE column 339
ODBC 53, 110, 165, 280, 484, 535
ODBC application 53, 165, 572
ODBC driver 53
OLAP specification 184
important concepts 184
OLAP specifications 184
Online Communications Database changes 310
Online DDF location alias name changes 311
online performance (OP) 539
online reorg 106, 108, 452, 490
online schema change 71, 93, 518
optimization 132, 223, 307, 535
Optimization Service Center (OSC) 482
options 1–2, 72–73, 127, 225, 271, 337, 434, 541
ORDER 56, 102, 184, 246, 398, 539
ORDER BY 188, 250, 544
ORDER BY clause 188, 250
ordering 188
P
package bind
owner 365
page access 545
page level sequential detection 547
page set 63, 71–72, 88, 93, 110, 114, 119, 353, 431, 548, 561
page size 56, 60, 68, 70, 77, 492, 500, 545, 564
different buffer pool 78
valid combinations 80
parameter marker 132, 165, 231, 537
PART 83
PARTITION 6061, 86, 88, 180, 585, 622
partition-by-growth 68
partition-by-growth (PBG) 57, 490
partition-by-growth table space 69–70, 73
MAXPARTITIONS 73, 87
structure 73
partition-by-growth table spaces
in workfiles 57
PARTITIONED 103
partitioned data set (PDS) 477
partitioned data set extended (PDSE) 477
partitioned table space 116, 432, 506
disjoint partition ranges 107
Partitioning 187, 587
partitioning 184, 187, 277, 501, 577
partitions 57, 121, 132, 308, 458, 501, 577
password phrase 411
RACF user 412
PATH option 245
PBG table space 457, 492–494, 577, 579–580
PBR table space 453, 577, 579
pending changes 71
Performance 1–2, 17, 338, 486, 535
performance 1, 3, 57, 109, 115, 204, 243, 337, 485
performance improvement 57, 148, 557, 559
period 204
PGFIX 561
PHASE Statistic 454, 586
physical aggregation
group 189
physical aggregation group 189
PIC X 163
PIT LRSN 94
PK28627 628
PK51537 258
PK51571 244, 250, 254
PK51572 244, 250, 254
PK51573 244, 250, 254
PK52522 224
PK52523 224
PK55585 258
PK55783 251
PK55831 258
PK55966 279
PK56922 474
PK57409 244
PK58291 63
PK58292 63
PK58914 244
PK62876 478
PK65772 511
PK70423 516
PK76100 628
PK79327 120
PK79925 478
PK80320 121
PK80375 55, 225
PK80732 252, 257
PK80735 252, 257
PK80925 122
PK81151 63
PK81260 251
PK82631 257
PK83072 53
PK83735 122, 558
PK85068 508
PK85833 478
PK85856 49
PK85889 49
PK90032 50, 262
PK90040 50, 262
PK94122 122
PK96558 252
PK99362 53
PM00068 628
PM01821 478, 628
PM02631 121
PM02658 331
PM03795 121
PM04968 474, 628
PM05255 121
PM05664 244
PM06324 121
PM06760 121
PM06933 121
PM06972 121
PM07357 121
PM11441 121
PM11446 121
PM11482 244
PM11941 332
PM12286 567
PM12819 467
PM13466 628
PM13467 628
PM13525 148, 480, 628
PM13631 148, 628
PM17542 9, 63, 628
PM18196 49, 628
PM18557 9, 629
PM19034 629
PM19584 629
PM21277 467, 629
PM21747 467, 629
PM22091 331
PM22628 635
PM23887 635
PM23888 635
PM23889 635
PM24082 635
PM24083 635
PM24721 565, 629
PM24723 629
PM24808 332, 629
PM24808 501
PM24937 230, 629
PM25282 629
PM25357 629
PM25525 629
PM25635 332, 629
PM25648 629
PM25652 629
PM25679 629
PM26480 310, 629
PM26762 426, 629
PM26781 310, 629
PM26977 344, 629
PM27073 629
PM27099 630
PM27811 630
PM27828 630
PM27835 630
PM27872 64, 630
PM27973 630
PM28100 630
PM28296 344, 630
PM28458 630
PM28500 630
PM28543 344
PM28796 630
PM28925 630
PM29037 630
PM29124 480, 630
PM29900 630
PM29901 630
PM30394 344
PM30425 630
PM30468 47, 630
PM30991 630, 633
PM31003 112, 630
PM31004 112
PM31006 112
PM31007 112
PM31009 112
PM31243 631
PM31313 631
PM31314 631
PM31614 631
PM31641 631
PM31807 631
PM31813 631
PM32217 344
PM32638 635
PM32647 635
PM33501 631
PM33767 631
PM33991 631
PM35049 635
PM35190 631
PM35284 631
PM36177 631
PM37057 631
PM37112 631
PM37300 631
PM37622 631, 633
PM37625 631
PM37630 632
PM37647 632
PM37660 632
PM37816 632
PM38164 632
PM38417 632
PM39342 632
PM41447 632
PM42331 632
PM42528 632–633
PM42924 632
PM43292 632
PM43293 632
PM43597 632
PM43817 632
PM45650 632
PM45651 632–633
PM45810 633
PM47091 631, 633
PM47616 633
PM47871 636
PM51467 633, 635
PM51655 633
PM51945 632–633
PM52012 633
PM52327 633
PM52724 633
PM53237 633
PM53243 633
PM53254 633
PM54873 632–633
PM55051 633
PM55928 633
PM56631 633
PM56845 633
PM57206 633
PM57632 633
PM57878 633
PM58177 633
PM62797 634
PM66287 634
PM67544 634
PM67565 636
PM69522 634
PM70046 634
PM70270 634
PM70575 634
PM70645 636
PM71114 634
PM72526 634
PM80779 634
PM82301 634
PM84750 634
PM86952 634
PM91930 634
PM94885 634
policy based audit 338
POLICY Integer 207
PORT 312–313, 413
POSITION 566
precision 12 157
timestamp value 158
precision x 159
time zone 173
precompiler 483
predicate 57, 196, 232, 250, 339
prefetch quantity 545
Additional requests 545
prefix 110, 150
prior DB2 release 127
Private protocol 481, 529, 602
Procedural Language (PL) 126, 536
PROCESSING SYSIN 428, 517
progressive streaming 308, 573
PSRBD 81
PTF 262, 526, 600
pureXML 7, 243
Q
qualified name 268
QUALIFIER 141, 226, 367, 428
query 1, 52, 106, 116, 127, 224, 245, 317, 339, 476, 535
query performance 402, 485, 587
QUERYNO 241, 402, 623
R
RACF 25, 337, 428, 473
RACF group 362
RACF profile 367, 377
RACF profiles 363
RACF user
DB2R51 419
Id 416
RANDOM 111
RANDOM ATTACH 111
RANDOMATT 120
range-partitioned table space 69–71, 74
SEGSIZE 89
range-partitioned table spaces 68
RBA 54, 94, 277, 449, 575
RBDP 71, 78, 580
re/bind 535
read stability (RS) 504
READS 127, 304, 382, 605
Real 57, 381
real storage 561, 620
potential spike 597
reason code 262, 346, 531, 610, 618
REBIND PACKAGE 128, 225, 487
REBIND Package 128, 225, 374
SWITCH option 227
rebuild pending 580
RECOVER 93, 116, 487, 570
Redbooks Web site
Contact us xxxiii
referential integrity 500
REGION 349
relational data system (RDS) 404
RELCREATED 76, 90, 408
remote location 310, 412–413, 530–531, 602
remote server 309310
RENAME INDEX 55, 92
REOPT 132, 537
reordered row format 277, 577
reordered row format (RRF) 69
REORG 79, 83, 115–116, 163, 274, 368, 426, 452, 481, 486, 535, 546
REORG Index 73, 78, 88
REORG TABLESPACE 73, 78, 86, 89, 452–453
control statement 458
DSN00063.PART TB 455
job 458
pending definition change 74
ROTATE.TS1 2
4 460
SHRLEVEL Change 91, 458
SHRLEVEL Reference 86, 458
SHRLEVEL REFERENCE job output 86
statement 94, 107
syntax 454
TESTDB.TEST TS SHRLEVEL Reference 453
TESTDB.TS1 PART 3 459
utility 88, 91, 462
utility execution 95
REORG utility 86, 104–105, 116, 274, 453, 570, 579
REORP 71
REPAIR 95, 368
repeatable read (RR) 504
REPORT 93–94, 269, 313, 349, 468
REPORT RECOVERY utility 469
repository 263, 488
requirements 116, 231, 243, 309–310, 337, 472, 536
system software 473
RESET 102
resource unavailable 579
RESTART 98, 100, 475, 621
restart light 117, 340
RESTRICT 72, 146, 382
return 54, 87, 89, 138, 245, 312, 392, 428, 537
RETURN Code 75, 428, 456, 517, 527
RETURNS VARCHAR 140
Revoke statement syntax 383
RID 59, 465, 504, 621
RIDs 59, 538
ROLLBACK 100, 137
root anchor point (RAP) 577
ROUND_HALF_UP mode 152
row access
control 380
control activation 391
control activation DB2 insert 390
control rule 393
control search condition 394
rule 385, 393
row and column access control 384
row format 277, 569
row length 56
row level sequential detection 547
Row level sequential detection (RLSD) 546
Row permission 360, 376, 389, 497
DB2 description 408
enforcement 385
new SQL statements 411
object 386
row permissions 209, 364–365
row-begin column 205
row-end column 208
maximum timestamp value 212
ROWID 73, 408
RRSAF 110, 235
RTS 83, 104, 558
RUNSTATS 86, 145, 332, 374, 476
S
same page 73, 77, 118, 121, 547
next row 547
same way 109, 149, 361, 567
SCA 118, 506, 618
scalar aggregate functions 184
scalar functions 184
SCHEMA 240, 347, 497
Schema 92, 263, 341, 514
schema 6768, 127, 221, 250, 339, 482
SCOPE PENDING 269
SCOPE XMLSCHEMAONLY 270
SDSNLINK 63
SECADM authority 339, 344, 359, 488
auth ID 390
SECADM DSNZPARMs 361
second ALTER
change 91
TABLESPACE command 91
SECURITY 423
segment size 58–59, 76, 506
segmented table space 58, 69, 507
SEGSIZE 58, 74, 115, 492
segsize 74, 87, 115
SEGSIZE value 115
SELECT CURRENT_TIMESTAMP 160
SELECT Id 189
SELECT Policy 211
SELECT TRYTMSPROMO 171
SEPARATE_SECURITY 377
Separation of duties 354
SEQ 94
Server 286, 315, 337, 473
SESSION TIME ZONE 175
SESSION TIMEZONE 175
alternative syntax 175
SET 90, 111, 137, 234, 260, 315, 343, 428, 478, 544, 622
-SET SYSPARM command 111
SGRP 110
shared communications area 118
SHR LVL 94
SHRLEVEL 78, 272, 481, 558
SHRLEVEL Change 74, 426
SHRLEVEL None 85
SHRLEVEL Reference 74, 431, 458
Shutdown and restart times 64
side 79, 101, 172, 286, 542
simple table space 69, 87
SKIP LOCKED DATA 238
skip-level migration 472
SMF 9, 64, 338, 506, 618
SMS definition 517
SNA 316
solid state drive (SSD) 590
sort key 56, 189, 593
data type 196
length 56
ordered sequence 190
sort merge join (SMJ) 593
sort record 56, 564
row length 56
sort key length 56
source control data set (SCDS) 516
source text 73, 408
space map 89, 105, 115
page 105, 115
sparse index 593
spin avoidance 118
SPT01 481
SPUFI 137, 301
SQL 55, 75, 116–117, 203, 244, 317, 337, 339, 460, 476, 535, 553, 620
SQL access
path 224
plan 373, 537
SQL Data 127, 382
SQL DDL
interface 387
statement 387
SQL DML
attempt 373, 395
operation 384, 386, 391
operation reference 397
privilege 369, 373
statement 350, 373, 384
SQL interface 330, 358
SQL PL 126, 278, 591
SQL procedure 148, 301, 532, 535, 591
SQL procedures 137, 224, 490, 535
SQL Reference 126, 342
DB2 10 128, 359
SQL scalar 126, 250
function 126, 302
function body 126
function PRIVET 141
function processing 147
function TRYTMSPROMO 170
function version 139
SQL scalar function 126, 302
ALTER FUNCTION 133
examples 136
syntax changes 141
versioning 138
SQL scalar functions 126
SQL statement 52, 56, 81, 89, 126–127, 147, 206, 230–231, 250, 341–342, 485, 517,
545–546, 620
authorization requirement 366
DISTINCT specifications 56
metadata information 380
original source CCSID 604
semantics checks 372
syntax check 374
time information 349
trace records 349, 556
SQL statement text 349, 556
SQL stored procedure 301
SQL stored procedures 300
SQL table 126
SQL table function
ALTER FUNCTION 146
syntax changes 148
SQL table functions 144
SQL variable 126, 132, 302, 591
SQL/XML 246
SQL0217W 375
SQLADM 339
SQLADM authority 356, 358, 371
query table 376
SQLCODE 56, 72, 75, 116, 153, 165, 262, 343, 362, 497,
564
SQLCODE +610 75
SQLCODE -104 365, 368
SQLCODE 20497 170
SQLERROR 128, 372, 486
SQLID DB0B 399
SQL DML statement 400
SQLJ 240, 280, 482
SQLJ applications 287, 482
SQLSTATE 56, 75, 165, 375, 497, 584
SQLSTATE 22007 170
SSID 60
SSL 312
START LRSN 94–95
START Trace 338, 488, 597
startup procs 528
statement 52, 56, 115, 126, 247, 317, 535, 616
Static SQL
COBOL program 351
static SQL 132, 224, 350–351, 540, 553
DB2 9 VSCR 224
host variables 553
last bind time 604
OMPE record trace 353
statistics 57, 86–87, 145, 332, 374, 476
STATUS 102, 313, 347, 460, 517, 537
STATUS VARCHAR 207
STMTID 241
STOP keyword 312
STOSPACE 365
straw model 595–596, 623
string representation 132, 149, 155, 169
standard formats 155
time zone value 170, 172
string value 149, 248
STYPE 87, 451, 571
subgroup attach 110
subgroup attach name 110
subquery predicate 543
SUBSTR 151, 308, 391, 570
SUBSYSTEM Id 514
synchronous I/O 546
Syntax Diagram 128, 383
SYSADM 61, 337, 520
SYSADM authority 343–344, 348, 361
Audit use 343
SECADM authority 355, 379
security administration 355
security administrative duties 379
SQL access 380
SYSCOLUMNS 86, 163, 389, 493, 582
SYSCOPY 86–87, 500, 571
SYSCOPY entry 94
SYSCTRL 340
SYSFUN.DSN_XMLVALIDATE 262
SYSIBM.DSN_PROFILE_ATTRIBUTES 317, 605
SYSIBM.DSN_PROFILE_TABLE 317, 605
SYSIBM.IPLIST 316
SYSIBM.SYSAUDITPOLICY 340, 343
audit policy 340
catalog table columns 342
policy row 343
SYSIBM.SYSCOLUMNS 206, 390, 407, 499, 516
DEFAULT column 206
PERIOD column 206
SYSIBM.SYSCONTROLS 389, 390, 497
SYSIBM.SYSCOPY 87, 93, 426, 432
corresponding row 451
entry 94
Record 452
table 451
SYSIBM.SYSDBAUTH 517
SYSIBM.SYSDEPENDENCIES 389, 390, 394, 409
SYSIBM.SYSDUMMY1 138, 140, 293, 375, 402
SELECT 1 375
SELECT SESSION TIME ZONE 176
SELECT SESSION TIMEZONE 176
SYSIBM.SYSENVIRONMENT 390
SYSIBM.SYSINDEXSPACESTATS 499, 584
SYSIBM.SYSOBJROLEDEP 409
SYSIBM.SYSPACKDEP 406
SYSIBM.SYSPENDINGDDL 72, 76, 116, 497
new row 78
SYSIBM.SYSQUERY 221, 498
SYSIBM.SYSROUTINES 127, 380, 409, 499
SYSIBM.SYSTABLEPART 101, 307, 499
SYSIBM.SYSTABLES 207, 389, 390, 410, 499
column access control 390
SYSIBM.SYSTABLESPACE 74, 83, 116, 499
updates column DATAPAGES 84
SYSIBM.SYSTABLESPACESTATS 106, 499, 584
SYSIBM.SYSTRIGGERS 410
SYSIBM.SYSUSERAUTH 359, 410, 499
SYSIN 94, 99, 237, 349, 428, 517
SYSLGRNX 449, 495
SYSOPR 340, 520
SYSPACKAGE 53, 127, 220, 239, 493
SYSPACKSTMT 350, 491, 604
SYSPARM command 111
SYSPRINT 61, 99, 237, 428, 521
SYSTABLEPART 86, 101, 307, 493, 571
system authorities and privileges 354
System DBADM 339, 341, 349
auth ID 365
System Level
Backup 468
system parameter 62, 342, 501, 605
SEPARATE_SECURITY 378
system period 205
System z 3, 51, 472, 485, 533
System z9 600
SYSTEM_TIME period 205–206
end column 209
system-period data 205
temporal table 209
system-period data versioning 205
system-period temporal table 205
new period-specification clauses 213
T
table expression 542
TABLE OBID 353
table owner 373, 379, 384
table row 249, 343, 559
table space
AREOR state 87
automation 96
buffer pool 75
CHKP status 465
CONSIST.TS1 465
copy information 469
data 590
data pages 79, 115, 561
data set 74
DB1.TS1 89–90
DB2 data compression 105
definition 72, 74, 497, 546
DFSMS data class 502
DSNDB06.SYSTSCTD 497
execution 102
fixed part 582
FREESPACE percentage 585
GBP-dependent status 121
HASHDB.HASHTS 81
intent 548
level 116, 549
lock 239
LOCKSIZE definition 583
logical end 100–101
name 492, 499
NOAREORPEND state 96
option 72, 92, 306, 578
organization 68, 83, 582
original SEGSIZE 74
other tables 211
page 561
page set 76, 432
page size 75
partition 79
pending definition changes 72, 86, 91
REORG TABLESPACE 82–83, 86
restrictive states 108
scan 537
segment size 78
structure 68
SYSPACKAGE 242
tiny clusters 81
type 564
type conversion 69
unconditional separation 59
user data sets 86
table space (TS) 61, 68, 73, 78, 106, 109, 115, 209, 339,
432, 482, 504, 536, 545
table space scans 545
tables 55, 61, 80, 115, 130, 246, 310, 317, 337, 474, 538
TABLESPACE statement 57, 68, 75, 115–116, 449, 506
TABLESPACE TESTDB.TESTTS
PART 1 454
PART 2 454, 456
PART 3 454
TBNAME 382, 497, 540
TCP/IP 312, 423
temporal 204
TEXT 628, 634–635
tm 166
TIME 86, 290, 348, 600
time stamp data 290
TIME Zone 129, 155, 166, 168, 205
2-byte representation 168
Alternative spelling 167
C1 TIMESTAMP 175
C2 TIMESTAMP 175
CURRENT TIMESTAMP 174
hour component 179
minute component 166
new data type TIMESTAMP 166
nullable timestamp 182
precision 0 168
precision 12 173
precision 6 173
second byte 168
SELECT CURRENT_TIMESTAMP 174
TMSTZ_DFLT TIMESTAMP 167
valid range 168
valid string representation 178
time zone value 168–170
times DB2 583, 585, 616
TIMESTAMP 73, 86, 153, 166, 204, 290, 340, 408, 478,
622
timestamp column 158, 205–206
timestamp data 153, 156–157, 163, 530
type 156–157
timestamp data type 156–157
timestamp precision 156–157
input data 165
timestamp value 155–156, 159, 212
character-string or graphic-string representations 176
String representation 157
TIMESTAMP WITH TIME ZONE 166
TOTALROWS 584
TRACE privilege 348
Trace record
OMPE formatted audit trace 363
trace record 339, 347, 504, 506, 556, 598, 618
traces 338, 468, 488, 603
transitive closure 541
triggers 204, 242, 360
TRUNCATE 364, 566
Trusted context 422
TRYIMPCAST Value 152
TS 102, 453
TYPE 60, 74, 116, 127, 249, 346, 460, 516, 570
Type 2 583
Type 4 308, 573
U
UA07148 634
UA55970 634
UA56174 634
UA57243 634
UA57254 634
UA57704 634
UA58937 635
UA60823 635
UA66416 635
UA66420 635
UDF 366, 381382
UDFs 300, 365
UK33456 279
UK37397 628
UK44120 628
UK50918 53
UK52302 53
UK53480/1 628
UK56305 628
UK56306 628
UK58204 628
UK59100 467
UK59101 467
UK59680 567
UK59887 629
UK60466 467, 629
UK60887 628
UK61093 635
UK61094 635
UK61139 635
UK61142 635
UK61213 629
UK61317 635
UK62150 629
UK62201 628
UK62326 628
UK62328 628
UK63087 629
UK63215 629
UK63366 629
UK63457 629
UK63818 629
UK63820 629
UK63890 630
UK63971 629
UK64370 629
UK64389 630
UK64423 47, 630
UK64424 630
UK64588 629
UK64597 64, 630
UK65205 629
UK65253 630
UK65325 635
UK65379 629
UK65385 630
UK65399 635
UK65412 635
UK65632 630
UK65637 630
UK65750 630
UK65859 629
UK65920 631
UK65924 635
UK65951 630
UK66046 630
UK66087 629
UK66327 630
UK66374 631
UK66376 630
UK66379 630
UK66476 630
UK66610 629
UK66964 631
UK67132 630
UK67267 628
UK67512 630
UK67578 630
UK67634 631
UK67637 630
UK67639 631
UK67958 630
UK68097 629
UK68098 631
UK68364 630
UK68476 628
UK68652 629
UK68659 632
UK68743 631
UK68801 632
UK69029 631
UK69030 632
UK69055 631
UK69199 632
UK69286 630
UK69377 631
UK69735 631
UK70215 631
UK70233 629
UK70302 630
UK70310 629
UK70483 632
UK70647 630
UK70844 632
UK71128 631, 633
UK71333 632
UK71412 631
UK71437 632
UK71458 631
UK71459 633
UK71467 632
UK71875 632
UK72212 632
UK72447 633
UK72590 636
UK73139 633
UK73180 631
UK73426 632
UK73478 631
UK73630 632
UK73864 633
UK74175 632
UK74381 633
UK74981 633
UK75324 633
UK76650 633
UK77490 633
UK77500 633
UK77584 632
UK77739 633
UK78208 633
UK78229 633
UK78514 633
UK78632 633
UK79243 633
UK79406 633
UK80113 633
UK81047 634
UK8112 636
UK81124 636
UK81520 634
UK82555 634
UK82633 634
UK82732 634
UK83168 634
UK83171 634
UK90325 632
UK91203 634
UK95407 634
unary arithmetic operators 150
Unicode 149, 277, 473
UNIQUE 81, 570
unique index 80, 82, 85, 215, 535, 578
additional non-key columns 586
unit of recovery (UR) 238, 450
Universal table space
support 74
universal table space 116, 274, 339, 558
universal table space (UTS) 115, 425, 559
universal table spaces (UTS) 68
UNLOAD 95, 181, 287, 364, 454, 565
UPDATE 86, 115, 119, 133, 231, 261, 343, 474, 569
URI 261
USAGE 182, 281, 368
use PLANMGMT 227
user data 87–89, 166, 338, 356
user DB2R53 362
user-defined function 126, 176, 268, 360, 474, 523
registers different types 126
special registers 176
user-defined functions
types 126
UTC representation 172
UTF-8 128, 277
UTILITY Execution 86, 105, 428, 456, 517
UTS 115, 339, 507
UTSERIAL lock 61, 468
V
VALUE 152, 281, 542, 617
VALUES 152, 233, 265, 343, 428, 555
VARCHAR 72, 140, 233, 245, 340, 502, 567
variable 83, 114, 126, 250, 390, 618
VERSION 94, 127
Version 54, 58, 107, 111, 535
VERSION V1 140
VERSION V2 140
VERSIONING_SCHEMA 207
VERSIONING_TABLE 207
versions 3, 51, 55, 106, 115, 138, 204, 259, 316, 361,
470
virtual storage
constraint relief 51, 503
consumption 569
relief 51–52, 503
use 52
VPSIZE 76, 561
W
well-formed XML 258
WFDBSEP 59
whitespace 262
window specification 187
WITH 104, 106, 231, 340, 428, 474, 616
WLM 3, 130, 262, 312, 366, 481, 561
WLM environment 131, 523, 526
address space procs 528
minimal set 526
work file 56, 538
parallelism improvements 593
work load manager (WLM) 565
workfile 56, 273, 477, 535, 623
workfile database 57
available space 57
in-memory workfiles 57
KB table spaces 58
table space 57
workfile record 56
workfile table space 56, 59
space calculations 60
workfiles 57
Workload Manager (WLM) 130–131, 312, 481, 565
X
XA recover processing 117
XML 1, 243, 481, 535
XML and LOB streaming 243
XML column 245, 341
XML columns 259, 339, 565
XML data 136, 243, 565
XML data type 7, 136, 243, 579
XML documents 245, 566
XML index 251
XML multi-versioning 274
XML record 522
XML schema 250
XML schema repository 266
XML schema validation 262
XML support 244, 535
XML System Services 262
XML type modifier 258, 263
XMLEXISTS 250
XMLEXISTS predicate 250
XMLMODIFY function 266
xmlns 246
XMLPARSE 262
XMLQUERY 250
XMLQUERY function 250
XMLTABLE table function 244
XMLXSROBJECTID scalar function 268
XPath 244
XPath expression 245
XPath functions 258
XPATH performance enhancement 252
XPath scan 257
XSR 263
Z
z/Architecture 34
z/OS 1, 51, 110, 243
z/OS enhancement 572
z/OS Installation 381, 518
DB2 10 381
z/OS Security Server
identity propagation 416
V1R11 416
V1R8 416
z/OS Security Server V1R11 416
z/OS V1R10 9, 337, 473
z/OS V1R11 9, 337, 417, 473
zIIP 262, 536