SG 248104
Experiences with
Oracle 11gR2 on
Linux on System z
Installing Oracle 11gR2 on Linux on
System z
Managing an Oracle environment
Provisioning an Oracle
environment
Sam Amsavelu
Kathryn Arrell
Gaylan Braselton
Armelle Cheve
Ivan Dobos
Damian Gallagher
Hélène Grosch
Michael MacIsaac
Romain Pochard
Barton Robinson
David Simpson
Richard Smrcina
ibm.com/redbooks
SG24-8104-00
Note: Before using this information and the product it supports, read the information in Notices on
page xi.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Chapter 1. Why customers are choosing to use Oracle products on Linux on IBM
System z . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Virtualization capabilities of IBM System z . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Ability to use existing disaster recovery plans. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Trusted Security and Resiliency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 System z is optimized for High Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 Total cost of ownership advantages of IBM System z . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6 Ease of interfacing with traditional data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.7 Increased performance and scalability capabilities of System z, including zEC12, z114,
and z196. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.8 Specialty engines available on IBM System z. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.9 IBM zEnterprise BladeCenter Extension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.10 End-to-end solution for dynamic infrastructure data center . . . . . . . . . . . . . . . . . . . . . 7
1.11 Cost savings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.12 Ability to easily add more capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.13 IBM Cloud Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.14 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.15 Oracle solutions available on IBM System z . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Part 1. Setting up and installing Oracle 11gR2 on Linux on System z . . . . . . . . . . . . . . . . . . . . . . . . . 11
Chapter 2. Getting started on a proof of concept project for Oracle Database on Linux
on System z. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.1 Single Instance database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.2 Single Instance with Cluster Ready Services or RAC One-Node . . . . . . . . . . . . . 15
2.1.3 Two-node RAC on the same LPAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.4 Multinode RAC on more than one LPAR on one CPC . . . . . . . . . . . . . . . . . . . . . 17
2.1.5 Multinode RAC in two or more CPCs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.1.6 Data Guard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.1.7 Using GoldenGate for replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.1.8 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2 Sizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.1 Sizing tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2.2 Memory sizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2.3 Threads for dedicated processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2.4 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3 I/O considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3.1 Fibre Channel Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3.2 ECKD and DASD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Chapter 4. Setting up SUSE Linux Enterprise Server 11 SP2 and Red Hat Enterprise
Linux 6.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.1 Installing Oracle 11gR2 on SUSE Linux Enterprise Server guest. . . . . . . . . . . . . . . . . 56
4.1.1 Linux required RPMs for SUSE Linux Enterprise Server 11 . . . . . . . . . . . . . . . . . 56
4.1.2 Network Time Protocol TIME option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.2 Installing Oracle 11.2.0.3 on a Red Hat Enterprise Linux 6 guest . . . . . . . . . . . . . . . . 58
4.2.1 Verify SELinux is permissive or disabled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.2.2 Linux required RPMs for Red Hat Enterprise Linux installations . . . . . . . . . . . . . 60
4.2.3 Setting NTP TIME for Red Hat Enterprise Linux (optional only for Oracle Grid
installations) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.3 Customization that is common to SUSE Linux Enterprise Server and Red Hat Enterprise
Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.3.1 Required parameters for Oracle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.3.2 Oracle RAC installations only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.3.3 Create and verify required UNIX groups and Oracle user accounts . . . . . . . . . . . 64
4.3.4 Setting file descriptors limits for the oracle and grid users . . . . . . . . . . . . . . . . . . 65
4.3.5 Pre-create user directories for product installs . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.3.6 Other rpm for grid installs for SUSE Linux Enterprise Server and Red Hat Enterprise
Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Chapter 5. Using the Cloud Control agent to manage Oracle databases . . . . . . . . . . 71
5.1 Basic Enterprise Manager Cloud Control Architecture . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.2 Creating Cloud Control infrastructure on x86 Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.2.1 Downloading and extracting the installation files . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.2.2 Installing and configuring the Enterprise Manager Cloud Control 12c . . . . . . . . . 75
5.3 Updating the Cloud Control Software Library in online mode . . . . . . . . . . . . . . . . . . . . 89
5.3.1 Upgrading Software Library by using the Self Update Feature in online mode . . 89
5.4 Updating the Cloud Control Software Library in offline mode . . . . . . . . . . . . . . . . . . . . 95
5.4.1 Upgrading Software Library by using the Self Update Feature in offline mode . . 95
5.5 Deploying the Agents from Cloud Control console . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
10.2.1 Preparing to install Red Hat Enterprise Linux 6.2 on the golden image . . . . . . 205
10.2.2 Installing Red Hat Enterprise Linux 6.2 on the golden image . . . . . . . . . . . . . . 206
10.2.3 Configuring the Red Hat Enterprise Linux 6.2 golden image . . . . . . . . . . . . . . 210
10.2.4 Copying a REXX EXEC on z/VM for cloning support . . . . . . . . . . . . . . . . . . . . 219
10.2.5 Testing the cloning of a virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
10.3 SaaS for Oracle stand-alone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
10.3.1 Configuring a Linux system for the Oracle boot script . . . . . . . . . . . . . . . . . . . 223
10.3.2 Cloning a virtual server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
10.3.3 Silently installing Oracle Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
Appendix B. Installing Oracle and creating a database 11.2.0.3 on Red Hat Enterprise
Linux 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
B.1 Obtaining the Oracle code and documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
B.2 Installing the Oracle code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
B.3 Upgrading to the latest patch set update level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
B.4 Creating an Oracle database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Appendix C. Working effectively with Oracle support . . . . . . . . . . . . . . . . . . . . . . . . . 349
C.1 Oracle Support for Linux on System z . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
C.2 Oracle patching process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
C.3 Prior planning and preparation prevent poor performance . . . . . . . . . . . . . . . . . . . . 352
C.3.1 Gathering information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
My Oracle Support notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
xi
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX
BladeCenter
CloudBurst
DB2
DirMaint
ECKD
FICON
Global Business Services
HiperSockets
IA
IBM
IMS
InfoSphere
POWER7
RACF
Redbooks
Redpaper
Redpapers
Redbooks (logo)
S/390
Smarter Planet
System p
System x
System z
Tivoli
Velocity
WebSphere
XIV
z/OS
z/VM
z/VSE
z10
zEnterprise
zSeries
xii
Preface
Linux on System z offers many advantages to customers who rely on the IBM mainframe
systems to run their businesses. Linux on System z makes use of the qualities of service in
the System z hardware and in z/VM, making it a robust industrial strength Linux. This
provides an excellent platform for hosting Oracle solutions that run in your enterprise.
This IBM Redbooks publication is divided into several sections to share the following
experiences that were gained while installing and testing Oracle Database 11gR2:
Setting up Red Hat Enterprise Linux 6 for Oracle
Managing an Oracle on Linux on System z environment
Provisioning Linux guests using several tools
It also includes many general hints and tips for running Oracle products on IBM System z with
Linux and z/VM.
Interested readers include database consultants, installers, administrators, and system
programmers. This book is not meant to replace Oracle documentation, but to supplement it
with our experiences from installing and using Oracle products.
Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, Poughkeepsie Center.
Sam Amsavelu is a Certified Consulting IT Architect in the IBM Advanced Technical Support
Organization, supporting Oracle on System z Linux and Siebel on System z customers. He
has more than 25 years of IT experience in IBM products.
Kathryn Arrell is an Oracle Specialist at the IBM/Oracle International Competency Center at
IBM San Mateo. Previously, she worked as an ERP specialist at the ITSO in Poughkeepsie,
New York.
Gaylan Braselton is Sales Specialist for System z Oracle Solutions and responsible for
System z Sales with Oracle for North America. He has worked with Oracle Solutions on
System z since 2000.
Armelle Cheve is a System z Client Technical Specialist in the Advanced Technical Support
(ATS) IBM Oracle Center (IOC), in Montpellier, France. She has 19 years of IT experience,
including development in C/C++, SQL, and performance benchmark tests for European
customer accounts. Since 2001, she has provided presales support for new customer
workloads for IBM, and later for Oracle solutions on Linux on System z. This support includes
technical assistance, consolidation sizings, architecture, and performance studies.
Ivan Dobos is a Project Leader at the International Technical Support Organization. He has
15 years of experience with IBM System z. He joined IBM in 2003 and worked in different
sales and technical roles supporting mainframe clients: as Technical Leader for Linux on
System z projects in the System z Benchmark Center, IT Optimization Consultant in the
System z New Technology Center, and Mainframe Technical Sales Manager in Central and
Eastern Europe.
During the past 10 years, he worked with many clients and supported new workloads on
System z projects.
xiii
Damian Gallagher is a Global Technical Lead with Oracle Global Support, working with
Oracle solutions on IBM System z. He has worked with databases and specialized in
performance for over 25 years.
Hélène Grosch is an IT Architect who has worked since 2010 in the IBM Oracle Center in
Montpellier, France. She is dedicated to Oracle solutions on Linux on System z projects. Her
previous focus was on consolidation projects on Linux on System z solutions in the New
Technology Center, Montpellier. She has 15 years of experience in IT. Before joining IBM in
2007, she was a software developer for business applications on mainframes in a consultant
and system integration company, mainly for the public sector.
Michael MacIsaac has worked for IBM for 25 years. He is now a z/VM and Linux consultant
working in the field with customers on infrastructure and workloads, such as Oracle. He has
written technical documentation and presents at conferences and users group meetings.
Romain Pochard is a System z IT Specialist in the IBM Client Center in Montpellier, France.
As part of the Advanced Technical Support organization, he provides technical presales
support on Linux on System z and z/VM. His areas of expertise include consolidation,
virtualization, and Cloud Computing solutions on IBM zEnterprise.
Barton Robinson is Founder and CTO of Velocity Software. After working for IBM in
Poughkeepsie in VM Planning, Washington Systems Center for VM Performance, and then in
San Jose as a senior manager in the System Performance Evaluation Lab, he started a
company to provide VM Performance management products. After 25 years of developing
performance software for Velocity Software and providing performance analysis and
education, he continues to develop instrumentation and understanding for optimizing Linux
performance in a z/VM environment.
David Simpson is a System z Oracle Specialist working for Advanced Technical Support in
the Americas Sales and Distribution team. David previously worked for IBM Global Business
Services to host Oracle systems for consulting engagements.
Richard Smrcina is a Senior System Engineer with Velocity Software. He has been in the
industry over 25 years, primarily as a z/VM and z/VSE Systems Programmer. He has
installed and supported z/VM and z/VSE his entire career. He was involved with the original
IBM Linux for S/390 proof of concept and was instrumental in implementing Linux as one of
the first customers to use it in production. With Velocity Software, he provides customer
support and is involved in presales, education, customer product installation, and
development.
Thanks to the following people for their contributions to this project:
Joe Comitini
Steve Conger
Joanne Lazarz
Giri Krishnan
ADP
Dan Davenport
IBM Dallas
Andrew Braid
Sylvain Carta
Sebastien Llaurency
Claire Noirault
IBM France, Montpellier
xiv
Roy Costa
International Technical Support Organization, Poughkeepsie Center
Eugene Ong
Stephen McGarrillI
IBM STG WW Mainframe Benchmark Center
David Ong
Sonia Campins-Punter
Sandra Skehin
Oracle
Terry Elliott
Thomas Kennelly
Leon Rich
IBM USA
Juergen Doelle
IBM Germany
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online "Contact us" review Redbooks form found at this website:
http://www.ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Preface
xv
xvi
Chapter 1. Why customers are choosing to use Oracle products on Linux on IBM System z
Ease of management:
Fast cloning or provisioning of preinstalled and configured Linux images. This can be
accomplished in minutes instead of days or weeks.
Reduced space, electric connections, or network cables.
Compatible with the data center practice of standardizing on strategic software stacks
with consistent levels and patches.
The capability to rapidly deploy Linux guests with Oracle databases is used by many
customers in their infrastructure simplification strategy.
Tip: For more information about the benefits of virtualization on System z, see Using IBM
Virtualization to Manage Cost and Efficiency, REDP-4527-00, at this website:
http://www.redbooks.ibm.com/abstracts/redp4527.html
Chapter 1. Why customers are choosing to use Oracle products on Linux on IBM System z
By using z/VM, you can create a virtual machine for short-term Oracle database use; for
example, when a DBA must resolve a unique problem or run a specific test and then recycle
the resources back to the pool when completed, rather than installing and uninstalling a
physical box.
The mainframe includes the following green features:
Distributed servers (including production servers, development servers, test servers) often
run at average usage levels of 5% - 20%.
Virtualization and workload management enable standardization and consolidation on the
mainframe.
Run multiple images on fewer processors
Achieve usage levels of 85% or more without performance degradation
Become lean and green through IT consolidation and infrastructure simplification, as
shown in Figure 1-1.
zBX is the new infrastructure for extending System z governance and management
capabilities across a set of integrated, fit-for-purpose POWER7, and IBM System x compute
elements in the zEnterprise System. This expands the zEnterprise portfolio to applications
that are running on AIX, Linux on System x, and Microsoft Windows. Attachment of the zBX to
the zEnterprise Central Processing Complex (CPC) is via a secure high-performance private
network. The blades can help increase flexibility in fit-for-purpose application deployment.
1.14 Summary
IBM System z brings the following advantages to Oracle and Linux:
The most reliable hardware platform available:
System z offers the ultimate in virtualization with z/VM, which virtualizes everything with
high levels of usage:
Part 1. Setting up and installing Oracle 11gR2 on Linux on System z
In the first part of this book, we describe the following processes:
How to get started with a project that is running the Oracle Database on Linux on
System z in Chapter 2, Getting started on a proof of concept project for Oracle Database
on Linux on System z on page 13.
How to set up the Network Infrastructure for Grid Control in Chapter 3, Network
connectivity options for Oracle on Linux on IBM System z on page 29.
How to install and set up a Red Hat Enterprise Linux 6 and SUSE Linux Enterprise Server
11 guest for Oracle in Chapter 4, Setting up SUSE Linux Enterprise Server 11 SP2 and
Red Hat Enterprise Linux 6.2 on page 55.
For more information, see Appendix A, Setting up Red Hat Enterprise Linux 6.3 for
Oracle on page 311 and Appendix B, Installing Oracle and creating a database 11.2.0.3
on Red Hat Enterprise Linux 6 on page 335.
How to install the Grid Control Agent in Chapter 5, Using the Cloud Control agent to
manage Oracle databases on page 71.
The objective is to provide you with current best practice information for setting up Linux for
Oracle, installing the Oracle Database binaries, and creating an Oracle single instance
database.
For more information about installing Real Application clusters, see the following publications:
Silent Installation Experiences with Oracle Database 11gR2 Real Application Clusters on
Linux on System z, REDP-9131
Installing Oracle 11gR2 RAC on Linux on System z, REDP-4788
Chapter 2. Getting started on a proof of concept project for Oracle Database on Linux on
System z
2.1 Architecture
There are several options for how you can configure your environment to run an Oracle
Database on Linux on System z. The choice depends on the requirements for High
Availability (HA) and disaster recovery. The following choices are described in this chapter:
Tip: Figures in this chapter show the architecture under z/VM. You also have the option of
running Linux native on the IFL with no z/VM virtualization.
Figure 2-1 shows a single instance database: application servers connect through a vSwitch
to one Linux guest running an Oracle DB instance in z/VM LPAR 1. Comments: sufficient for
many databases; can have access to all IFLs and memory in the LPAR.
You can add to this setup by using Oracle's ASM, as shown in Figure 2-2.
Figure 2-2 shows several single instance databases in z/VM LPAR 1: each Linux guest runs
an Oracle DB instance that uses ASM and CRS, and application servers connect to the
guests through a vSwitch.
Figure 2-3 shows a single instance with a hot standby: a production Linux guest and a
standby Linux guest in z/VM LPAR 1, each with ASM and CRS, serve the same Oracle
database. Comments: a hot standby Linux guest is added for the same Oracle DB instance.
This RAC One Node option also can be accomplished across LPARs that use HiperSockets
connections in the same CPC. It can also be accomplished across different System z
systems by using appropriate network connectivity. It is only allowed between Oracle
Databases that use the same binaries (in this case, Linux on System z) at the same level.
Oracle CRS and Oracle RAC do not support heterogeneous platforms in the same cluster.
For example, you cannot have one node in the cluster that is running Linux on System z and
another node in the same cluster that is running Solaris UNIX. All nodes must run the same
operating environment; that is, they must be binary compatible. Oracle RAC does not support
machines that have different chip architectures in the same cluster. However, you can have
machines of different speeds and sizes in the same cluster.
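The binary-compatibility rule can be checked mechanically before a cluster is built: every candidate node must report the same machine architecture (for example, s390x from uname -m). The following shell sketch is illustrative only; the helper function and the sample values are assumptions, not part of any Oracle-supplied tool.

```shell
#!/bin/sh
# Sketch: confirm that candidate RAC nodes are binary compatible.
# In practice, each value would come from running "uname -m" on a node.

check_compat() {
    # All reported architectures must match the first (reference) value.
    ref="$1"
    shift
    for arch in "$@"; do
        if [ "$arch" != "$ref" ]; then
            echo "INCOMPATIBLE: $arch differs from $ref"
            return 1
        fi
    done
    echo "COMPATIBLE: all nodes report $ref"
}

# Two Linux on System z guests: allowed in the same cluster.
check_compat s390x s390x
# Mixed chip architectures: not allowed in the same cluster.
check_compat s390x x86_64 || true
```

Machines of different speeds and sizes would pass this check; only a differing architecture string marks the cluster as unsupported.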
The availability option of RAC One Node is less capable than that of a full RAC
implementation; with this option, outages are expected to be of short duration. Some
database recovery might need to occur on the failover node.
One advantage of System z in this scenario is that the failover guest uses few IFL and
memory resources until they are needed for an actual failover. After the failover occurs, IFL
and memory resources from the production guest become available to the failover guest, which
negates the need for significant duplicate IFL and memory resources that are required in
other hardware architectures.
Figure 2-4 shows a two-node RAC in one z/VM LPAR: two production Linux guests, each
running an Oracle DB instance with ASM and CRS, share one Oracle database over a private
interconnect.
Interconnect: the private network communication link that is used to synchronize the memory
cache of the nodes in the cluster. It can be a vSwitch, Linux channel bonding, or Oracle HAIP.
Comments: unlike a hot standby, there is little impact to the end users resulting from a Linux
node or instance failure. All IFLs and memory can be shared across nodes.
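Of the interconnect options named, Linux channel bonding joins two network interfaces into one logical device so that the interconnect survives a single interface failure. The following is a minimal sketch of an active-backup bonding configuration on Red Hat Enterprise Linux 6; the device names and addresses are illustrative assumptions, not values from this book.

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0  (illustrative)
DEVICE=bond0
IPADDR=192.168.100.10
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth1  (one of two slave interfaces;
# a matching ifcfg-eth2 would be defined the same way)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
```

With mode=active-backup, only one slave carries traffic at a time and the other takes over on link failure, which suits a RAC private interconnect where redundancy matters more than aggregate bandwidth.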
In addition to availability, RAC can be used for workload distribution because all work does
not have to go through all nodes. Oracle RAC can be deployed in the following ways:
In the same LPAR for test and development applications
Across LPARs for LPAR maintenance or software failures (most common implementation)
Across CPCs when taking entire systems down is a common occurrence
These options are shown in Figure 2-4 on page 16, Figure 2-5, and Figure 2-6 on page 18.
[Figure 2-5 residue: application servers connect through a VSWITCH in each of two z/VM LPARs
on one System z to a Linux guest that runs an Oracle DB instance with CRS; the private
interconnect between the LPARs uses HiperSockets. The legend notes coverage for a Linux OS
or Oracle DB failure and for maintenance to z/VM, Linux, and possibly Oracle DB in either
Prod guest.]
Comments: IFLs can be shared across LPARs, but memory cannot. HiperSockets is a
memory-based technology that provides high-speed TCP/IP connectivity between LPARs within a
System z.
Chapter 2. Getting started on a proof of concept project for Oracle Database on Linux on System z
[Figure 2-6 residue: application servers connect through a VSWITCH on each of two physically
separate System z machines (System z #1 and System z #2, each running z/VM) to a Linux guest
that runs an Oracle DB instance with CRS; a private interconnect joins the nodes of the one
Oracle Database. The legend notes coverage for hardware failure, Linux OS or Oracle DB
failure, and maintenance to z/VM, Linux, and possibly Oracle DB in either Prod guest.]
Comments: Physically separate System z machines.
Figure 2-6 Multi-node RAC on two or more CPCs
[Figure 2-7 residue: application servers use the primary Oracle Database (Oracle DB
instance) while a standby Oracle DB instance on a Linux guest in another LPAR or CPC,
possibly over long distances, receives the replicated data.]
Comments: Think of Oracle Data Guard for disaster recovery as well as traditional RMAN
backups.
As shown in Figure 2-7 on page 18, the standby database receives replicated data from the
primary database. Oracle Data Guard performs the replication to the standby database and
includes the following features:
Uses redo log shipping with Redo Apply (physical standby) or SQL Apply (logical standby)
Transmits less data than full replication
Offers synchronous or asynchronous modes
Supports various configurations of logical and physical standby databases
Important: Support for Data Guard on heterogeneous systems is not certified; therefore,
the primary system and the standby system must match for endian formats, chip sets, and
headers.
Data Guard often is deployed between CPCs.
2.1.8 Summary
System z is recognized as a highly available platform. This is based on the attention to detail
over decades of engineering. It has a fault tolerant (HA) design that is based on the
elimination of single points of failure.
Oracle Database HA options blend well with IBM System z to provide a highly available
solution. The synergies of System z HA design can be augmented with the various levels of
Oracle Database HA to achieve the required level of availability. Additionally, each guest that
is running Oracle Database (see Figure 2-2 on page 15), can be customized to meet those
levels of availability instead of requiring a one size fits all approach.
2.2 Sizing
The sizing process is the most important step in the planning stage for a successful PoC or a
production implementation.
This section describes the sizing tool options that are available and the sources of vital input
data.
The objective is to plan for the correct number of IFLs and the memory that is needed for the
Linux guest to run the Oracle Database.
[Figure residue (sizing tool spectrum): the available tools, arranged from general to
detailed accuracy and by customer data/methodology, include zPSG, CCL Sizer, RACEv, zPCR,
z/VM Planner, SCON, SCON with SURF, CP2KVMXT, and zCP3000.]
[Figure 2-9 residue: a questionnaire and measured data are gathered from the distributed
servers (DB, HTTP, and so on). Typical questions: server make and model, speed (MHz), peak
average utilization (%), and workload type (for example, DB, mail, HTTP).]
Figure 2-9 Process for sizing with SCON tool and SURF report
Increase the estimate when the Oracle SGA is large and hundreds of dedicated server
connections are expected, or use Linux hugepages with Oracle 11gR2.
2.2.4 Summary
Remember the following key points about sizing:
Use the most accurate data that is available. Take the time to look at the Oracle AWR
reports. The garbage in, garbage out rule applies here. Do not make assumptions about
or guess the input to the sizing tool.
Choose the most appropriate time of day or month to measure the existing load so you are
working with meaningful data.
After you complete the sizing process and are ready to implement it, ensure that the CPU
(IFL) capacity and the memory that is required is available, and have a high-quality I/O
infrastructure in place.
Contact an IBM System z Oracle Specialist to help with the sizing process or request
sizing assistance from IBM Techline.
Consider separating redo logs from database files for better performance.
Make redo logs large enough to reduce frequent log switches.
Consider the use of Linux hugepages for large SGAs (Oracle DB 11gR2 only).
The more dedicated connections the database has, the larger the reduction in page table
overhead. However, this choice has the limitation that Oracle Automatic Memory
Management (AMM) cannot be used with hugepages.
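As a rough sketch (the SGA size here is an illustrative assumption, not a value from this book), the number of hugepages to reserve can be derived from the SGA size and the hugepage size that is reported by the "Hugepagesize:" line of /proc/meminfo:

```shell
# Assumed: a 12 GiB SGA and 1 MiB hugepages (the typical hugepage size
# on Linux on System z).
sga_mb=12288
hugepage_mb=1
# Reserve a small cushion beyond the SGA itself.
nr_hugepages=$(( sga_mb / hugepage_mb + 16 ))
echo "vm.nr_hugepages = $nr_hugepages"
# As root, the resulting value would then be added to /etc/sysctl.conf.
```

A setting that is too low causes the instance to fall back to ordinary pages, so verify the allocation in /proc/meminfo after the database starts.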
Configure appropriate multipathing for SCSI disks.
Review the following options to move an Oracle Database to Linux on System z:
Use transportable tablespaces or transportable database for the metadata when the endian
formats are the same.
Other steps, such as RMAN conversions, are required for unlike endian formats.
Import/export might be required when the source database is older than 10gR2.
I/O performance
SGA and PGA usage via automatic memory management (see Figure 2-10 on
page 22, Figure 2-11 on page 22, and Figure 2-12 on page 23)
Tip: Database administrators, Linux administrators, and z/VM system programmers must
work as a team in any virtualized environment.
Stand-alone database
RAC with Active/Active or Active/Passive
Use of multiple physical System z machines
Data Guard for Disaster Recovery
Is there sufficient IFL capacity, memory, and I/O capacity for production?
Are you ready to measure capacity usage over the long term?
Are the latest Oracle patches applied?
Consider z/VM prioritization to appropriately manage the large number of guests.
Chapter 3. Network connectivity options for Oracle on Linux on IBM System z
3.1 Overview
Oracle RAC with Linux on System z was one of the first certified and supported virtualized
platforms for running Oracle RAC in a virtual environment with z/VM. Table 3-1 shows the
current certification matrix for running Oracle RAC with Linux on System z.
Table 3-1 Supported Virtualization Technologies for Oracle database and RAC product releases (1)

Platform: IBM System z
Virtualization Technology: z/VM
Operating System: Red Hat Enterprise Linux; SUSE Linux Enterprise Server
Certified Oracle Single Instance Database Releases: 10gR2 (a), 11gR2 (a)
Certified Oracle RAC Database Releases: 10gR2 (a), 11gR2 (a)

a. Oracle RAC is certified to run virtualized with z/VM and by using native logical partitions.
A key component for a successfully running Oracle RAC database is the network interconnect
between nodes. The default for Oracle RAC in 11gR2 allows up to 30 seconds of network
interruption between nodes before the cluster evicts an unresponsive node.
The performance of the cluster interconnect connectivity is an important consideration for
reducing latencies, lost fragments, and lost Oracle data blocks between the connected nodes.
Table 3-2 shows the certified Oracle RAC interconnect configurations for running RAC on
System z.
Table 3-2 RAC Technologies Matrix for Linux Platforms (1)

Platform: IBM System z Linux

Technology category: Server Processor Architecture
Technology: IBM System z
Notes: Certified and supported on certified distributions of Linux running natively in LPARs
or as a guest OS on z/VM virtual machines, deployed on IBM System z 64-bit servers

Technology category: Network Interconnect Technologies
IBM System z is an ideal platform for consolidating Oracle RAC workloads. One example of
this is the unique System Assist Processors (SAPs) of System z. SAPs are internal System z
processors that offload network and I/O CPU cycles from the main processors that an Oracle
RAC node might be using.
Source: http://www.oracle.com/technetwork/database/clustering/tech-generic-linux-new-086754.html
CPU offload helps prevent Oracle RAC interconnect CPU starvation wait events. CPU usage
still should be monitored to avoid interconnect waits that are related to CPU starvation.
Consolidating databases in a shared System z RAC environment is supported, if the network
traffic for the private interconnect is restricted to networks with similar performance and
availability characteristics. If the environment requires increased security between RAC
clusters, VLAN tagging can be used between distinct Oracle cluster nodes.
For example, if the Oracle private interconnect is configured with a public routable IP address,
it is possible for other systems to affect the Oracle RAC databases interconnect traffic.
Although not mandatory, it is recommended and a best practice to use private IP address
ranges for the interconnect configuration.
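As an illustrative check (the address is the lab interconnect IP that appears later in Figure 3-1), a short shell fragment can classify an interconnect address against the RFC 1918 private ranges:

```shell
# Classify an address against the RFC 1918 private ranges
# (10/8, 172.16/12, 192.168/16).
ip="10.1.28.10"
case "$ip" in
    10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[01].*) kind="private" ;;
    *) kind="public" ;;
esac
echo "$ip is a $kind address"
```

A "public" result for an interconnect address is the situation the paragraph above warns about.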
An Oracle RAC workload sends a mixture of short messages of 256 bytes and, for the long
messages, database blocks of the database block size. Another important consideration is
to set the Maximum Transmission Unit (MTU) size to be a little larger than the database
block size for the database. To support a larger MTU size, the network infrastructure
(switches) should be configured to support jumbo frames for the Oracle Interconnect
network traffic.
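The arithmetic behind this recommendation is simple; a sketch with the 8K block size and the 8992-byte MTU that is used later in this chapter:

```shell
# An 8K database block must fit in one MTU-sized frame (with room for
# protocol headers) to avoid IP fragmentation and reassembly on the
# interconnect.
db_block=8192
mtu=8992
headroom=$(( mtu - db_block ))
echo "MTU $mtu leaves $headroom bytes of headroom over an $db_block-byte block"
```

With the default 1492-byte MTU, every 8K block must be split and reassembled, which is what drives the reassembly counts shown in Table 3-3.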
Table 3-3 shows the extra number of network reassemblies that are required when the MTU
size is not set to a value that is larger than the database block size (8K).
Table 3-3 Example of setting the MTU size to a value that is greater than the DB block size

netstat -s of interconnect   MTU smaller than block size   MTU larger than block size
Before reassemblies          43,530,572                    1,563,179
After reassemblies           54,281,987                    1,565,071
Delta reassemblies           10,751,415                    1,892
The smaller MTU results in a much higher number of network reassemblies. High reassembly
counts on the receive and transmit sides can increase CPU usage because of the breaking
apart and then reassembling of the network packets.
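The deltas in Table 3-3 are simply the difference between the two netstat -s snapshots; for example:

```shell
# Reassembly counters before and after the test interval (numbers taken
# from Table 3-3).
before=43530572
after=54281987
delta=$(( after - before ))
echo "reassemblies during the interval: $delta"
```

A delta of this magnitude over a test run is a strong hint that the interconnect MTU is smaller than the database block size.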
In the physical switch configuration that is uplinked from the System z machine, it is
recommended to prune out the private Oracle Interconnect traffic from the rest of the network.
The following methods can be used to configure an HA solution for a System z environment
that uses a multi-node configuration:
Virtual Switch (Active/Passive): When one Open System Adapter (OSA) network port
fails, z/VM moves the workload to another OSA card port; z/VM handles the failover.
Link Aggregation (Active/Active): Allows up to eight OSA-Express adapters to be
aggregated per virtual switch. Each OSA-Express port must be exclusive to the virtual
switch (for example, it cannot be shared); z/VM handles the load balancing of the network
traffic.
Linux Bonding: Create two Linux interfaces (for example, eth1 and eth2) and a bonded
interface bond0 that is made up of eth1 and eth2, which the application uses. Linux can
be configured in various ways to handle various failover scenarios.
Oracle HAIP: New in 11.2.0.2+, Oracle can have up to four Private Interconnect interfaces
to load balance Oracle RAC Interconnect traffic. Oracle handles the load balancing and is
exclusive to Oracle Clusterware implementations.
Figure 3-1 shows a shared Active/Passive VSWITCH configuration. An Active/Passive
VSWITCH configuration provides HA if an OSA card fails. The failover time to the
redundant OSA card must be considered when you are working in an Oracle RAC
configuration so that the performance of the cluster is not affected.
[Figure 3-1 residue: in each z/VM system, a public z/VM VSWITCH (#1) and a private z/VM
VSWITCH (#2), backed by OSA cards 1-4, carry the guest's eth0 (public, VLAN 100,
129.40.18.10, network 255.255.255.224) and eth1 (private, VLAN 183, 10.1.28.10)
interfaces. LAN Switch 1 and LAN Switch 2 are layer-2 adjacent over a trunked
inter-switch link; the active connection is one hop between System z nodes, with a
passive connection as backup.]
Figure 3-1 Shared Private Virtual Switch across Multiple System z systems (CPCs)
Similar to the Active/Passive VSWITCH configuration is z/VM Link Aggregation. z/VM Link
Aggregation allows for up to eight OSA cards to be aggregated to provide failover and extra
bandwidth capabilities. One restriction with link aggregation is that any OSA ports that are
defined with Link Aggregation must be exclusive to the VSWITCH in the LPAR and cannot be
shared with other LPARs or VSWITCHES in that LPAR.
[Figure residue (Link Aggregation): two private z/VM VSWITCHes (#1 and #2), each with an
OSA card dedicated to the VSWITCH with Link Aggregation (OSA 1-4), connect the public
and private networks (private network 255.255.255.248) to LAN Switch 1 and LAN Switch 2.]
Another HA option is to use Linux bonding across multiple network OSA cards, as shown in
Figure 3-3 on page 36. If one network path fails, the other network interface, which uses
a separate network/OSA card and switch, provides the failover capability for the Oracle
Interconnect.
[Figure 3-3 residue: in each of two z/VM LPARs, Linux guests (ora-raca-1/2, ora-racb-1/2,
and ora-racc-1/2) use a public VSwitch on eth0 and a private Bond0 interface that is built
from eth1 and eth2. The private interconnects run over VLANs 182, 183, and 184 across OSA
cards 1-4.]
Figure 3-3 Linux Bonding Oracle Interconnect across Multiple System z systems (CPCs)
The final HA option to consider is Oracle's new 11gR2 HAIP capability. Similar to Linux
bonding, two separate OSA channels that are configured on separate cards and
switches are presented to the Linux guest (for example, as eth1 and eth2). Oracle HAIP
provides the failover and load balances the Oracle Interconnect traffic across both
interconnect connections (eth1 and eth2), as shown in Figure 3-4.
[Figure 3-4 residue: in each of two z/VM LPARs, guests ora-raca-1/2, ora-racb-1/2, and
ora-racc-1/2 use a public VSwitch on eth0 and two private interfaces, eth1 and eth2, over
VLANs 182, 183, and 184 across OSA cards 1-4; Oracle HAIP load balances the interconnect
traffic across the two private interfaces.]
If business requirements allow, HiperSockets across multiple LPARs on the same System z
machine (CPC) is recommended for production environments because it provides protection
during any LPAR or z/VM maintenance and delivers key performance benefits. HiperSockets
can provide latencies in the range of 50 microseconds for a 250-byte packet, compared to
300 microseconds for an OSA network card.
[Figure 3-5 residue: Oracle Clusterware spans three LPARs that run Linux on System z under
z/VM (alongside z/OS), each LPAR with several Linux guests and its own OSA cards, memory,
IFLs, and FC cards, sharing ASM storage. Server pools: E-Business Suite (Min 3, Max 3,
Imp 10), Data Warehouse (Min 2, Max 3, Imp 5), Peoplesoft (Min 1, Max 2, Imp 2), plus
available guests.]
Cluster management that uses Oracle Server Pools can be used for planned or unexpected
outages. For example, if one of the systems that is shown in Figure 3-5 needs maintenance in
LPAR 1, Oracle Clusterware migrates the instance to a free available node in LPAR 3. The
relocation occurs while the database is fully operational to users.
Oracle Server Pools work well in a System z environment because resources such as
network interfaces, IFL (CPU) capacity, and disk storage are shared. When the database
moves to another node, the resources can be used by the other nodes.
To change the MTU size for the interconnect interfaces to a value that is slightly larger than
the 8K database block size, we modified the VSWITCH MTU size to 8992 from the default
1492 size, as shown in the following example:
SET VSWITCH ORACHPR PATHMTUD VAL 8992
To make the change permanent, the SYSTEM CONFIG file that is shown in Example 3-1
requires updating.
Example 3-1 Sample SYSTEM CONFIG for VLAN Tagging
/*------------------------------------------------------------------*/
/* Perf VSWITCH Definition                                           */
/*------------------------------------------------------------------*/
DEFINE VSWITCH ORACHPR ETHERNET VLAN 100 RDEV 1100 3870 PORTTYPE TRUNK
MODIFY VSWITCH ORACHPR GRANT MAINT
MODIFY VSWITCH ORACHPR GRANT ORARACA2 VLAN 182
MODIFY VSWITCH ORACHPR GRANT ORARACB2 VLAN 183
MODIFY VSWITCH ORACHPR GRANT ORARACC2 VLAN 184
VMLAN MACPREFIX 021112 USERPREFIX 020000
For Ethernet vswitches, a unique MACPREFIX must be defined for each vswitch, even when the
vswitches exist on different CPCs, if they must communicate with each other. Example 3-2
shows the access list of a vswitch with VLAN tagging.
Example 3-2 VLAN Tagging access list
Example 3-3 shows a detailed configuration for the Private Interconnect VSWITCH with
VLAN tagging.
Example 3-3 Detailed VLAN Tagging access list
When you are switching from this setup to use Link Aggregation, it is important to remember
that the entire OSA card is used for a port group, not just a subset of three addresses.
If you try to use a range of addresses on this card for another function, then you see
exclusive use errors.
The network administrator also must change the physical switch to enable port groups.
Otherwise, you see a message stating: LACP (Link Aggregation Control Protocol) was not
enabled on partner.
The advantages of a Link Aggregation setup include increased throughput, resiliency, and
bandwidth because I/O that is sent from one of the interfaces (in this case, 1100 interface)
can be returned via the other interface, which in our case was 3870.
Table 3-5 Network interfaces that were configured

Interface Name   Description
eth0             Public interface
eth1             Private interconnect (z/VM VSWITCH)
eth2             Private interconnect
eth3             Private interconnect
Table 3-5 also shows the multiple Linux interfaces that were configured. If you are using
Linux bonding, eth0 and eth1 can be bonded for the public interface. We configured two
private interfaces (eth1 with VSWITCH Aggregation) and used eth2 and eth3 for the bonded
private interconnect solutions.
For the private interconnect, we used two physical OSA cards on each CPC. To segregate
different networks (per RAC cluster) that use the same NIC (such as a VSWITCH), we
configured VLAN tagging.
MTU='8992'
NAME='OSA Express Network card (0.0.1423)'
ora-raca-1:/etc/sysconfig/network # cat ifcfg-bond0
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=0 miimon=100'
BONDING_SLAVE0='eth2'
BONDING_SLAVE1='eth3'
BOOTPROTO='static'
STARTMODE='auto'
MTU='8992'
NAME='OSA Express Network card (0.0.1423 and 1303 bond)'
ora-raca-1:/etc/sysconfig/network # cat ifcfg-bond0_182
BOOTPROTO='static'
ETHERDEVICE='bond0'
IPADDR='10.1.28.3/29'
NETMASK='255.255.255.248'
BROADCAST='10.1.28.7'
STARTMODE='auto'
MTU='8992'
NAME='OSA Express Network card (0.0.1423 and 1303 bond)'
Important: The ETHERDEVICE statement is required on SUSE Linux Enterprise Server for
VLAN tagging and is the link back to the bond0 device.
For bonding options, the mode is the bonding policy such as round robin, active/backup,
and so on. The default is 0 (round robin). The miimon option specifies how often each slave
is monitored for link failures.
Also, in the ifcfg-bond0 file the BONDING_SLAVE0 and BONDING_SLAVE1 statements link the
bond back to the eth2 and eth3 devices.
7. Reboot the system and run the ifconfig command to verify (only the eth2, eth3, bond0,
and bond0_182 interfaces are shown here):
ora-raca-1:~ # ifconfig
bond0
Link encap:Ethernet HWaddr 02:00:00:8F:36:87
inet6 addr: fe80::ff:fe8f:3687/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:8992 Metric:1
RX packets:2 errors:0 dropped:0 overruns:0 frame:0
TX packets:19 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:136 (136.0 b) TX bytes:1582 (1.5 Kb)
bond0_182 Link encap:Ethernet HWaddr 02:00:00:8F:36:87
inet addr:10.1.28.3 Bcast:10.1.28.7 Mask:255.255.255.248
inet6 addr: fe80::ff:fe8f:3687/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:8992 Metric:1
RX packets:2 errors:0 dropped:0 overruns:0 frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:128 (128.0 b) TX bytes:438 (438.0 b)
eth2
eth3
BOOTPROTO=none
USERCTL=no
MTU=8992
BONDING_OPTS='mode=0 miimon=100'
[root@ora-racb-1 network-scripts]# cat ifcfg-bond0_183
DEVICE=bond0_183
BOOTPROTO=static
IPADDR=10.1.28.11
MTU="8992"
NETMASK=255.255.255.248
VLAN=yes
Tips: The VLAN=yes statement is required for RHEL 6 VLAN tagging.
Unlike SUSE Linux Enterprise Server, each ifcfg script that is used for bonding
states whether that device is a slave and which device is the master (in our case, bond0).
For bonding options, the mode is the bonding policy, such as round robin, active/backup,
and so on. The default is 0 (round robin). The miimon option specifies how often each slave
is monitored for link failures.
4. Reboot the system and issue ifconfig to verify (only the eth2, eth3, bond0, and bond0_183
interfaces are shown here):
[root@ora-racb-1 ~]# ifconfig
bond0
Link encap:Ethernet HWaddr 02:00:00:3F:FD:F3
inet6 addr: fe80::ff:fe3f:fdf3/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:8992 Metric:1
RX packets:79 errors:0 dropped:0 overruns:0 frame:0
TX packets:31 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:16217 (15.8 KiB) TX bytes:4657 (4.5 KiB)
bond0_183 Link encap:Ethernet HWaddr 02:00:00:3F:FD:F3
inet addr:10.1.28.11 Bcast:10.1.28.15 Mask:255.255.255.248
inet6 addr: fe80::ff:fe3f:fdf3/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:8992 Metric:1
RX packets:79 errors:0 dropped:0 overruns:0 frame:0
TX packets:23 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:16217 (15.8 KiB) TX bytes:3917 (3.8 KiB)
eth2
eth3
3.9 Summary
The performance of each of these HA solutions is similar, regardless of the operating
system version that was selected.
With all of these solutions, careful monitoring is needed from the Oracle AWR reports to the
Linux netstat output to the z/VM network or HMC Network usages reports to ensure network
bandwidths are not exceeded.
Figure 3-6 shows the Interconnect Ping Latency section of an Oracle AWR report, which is
used to monitor the performance of the Oracle RAC Interconnect.
Important: The decision to add more HA into your network IT solutions is based on your
business requirements, hardware availability (OSA features), and the skill sets that are
available.
VSWITCH aggregation, Linux bonding, and Oracle's Redundant Interconnect offer many
benefits in providing more HA for Oracle databases that run on Linux on System z
during planned or unplanned network outages.
Chapter 4. Setting up SUSE Linux Enterprise Server 11 SP2 and Red Hat Enterprise Linux 6.2
Tip: For SUSE Linux Enterprise Server 11 SP2, be sure to review the My Oracle Support
note OHASD fails to start on SuSE 11 SP2 on IBM: Linux on System z [ID 1476511.1].
Required RPMs
The following rpm packages are required for each version of Linux. The RPM release
numbers can be higher than the minimum versions listed here:
Review the Note: 1383381.1 - 11.2.0.3 PREREQ CHECK WARNING FOR MISSING
compat-libstdc++-33.3.2.3-47.3 ON IBM: LINUX ON SYSTEM Z ON SLES 11
Important: Certain packages require both the 31-bit (s390) and the 64-bit (s390x)
versions of the rpm to be installed.
The following packages should be installed as part of a base installation:
binutils-2.20.0-0.7.9.s390x.rpm
glibc-2.11.1-0.17.4.s390x.rpm
glibc-32bit-2.11.1-0.17.4.s390x.rpm
ksh-93t-9.9.8.s390x.rpm
libaio-0.3.109-0.1.46.s390x.rpm
libaio-32bit-0.3.109-0.1.46.s390x.rpm
libstdc++33-3.3.3-11.9.s390x.rpm
libstdc++33-32bit-3.3.3-11.9.s390x.rpm
libstdc++43-4.3.4_20091019-0.7.35.s390x.rpm
libstdc++43-32bit-4.3.4_20091019-0.7.35.s390x.rpm
libgcc43-4.3.4_20091019-0.7.35.s390x.rpm
make-3.81-128.20.s390x.rpm
The remaining rpm requirements can be installed by selecting all the C Libraries and
extensions or by manually installing each of the following rpms:
libaio-devel-0.3.109-0.1.46.s390x.rpm
libaio-devel-32bit-0.3.109-0.1.46.s390x.rpm
sysstat-8.1.5-7.9.56.s390x.rpm
glibc-devel-2.11.1-0.17.4.s390x.rpm (requires
linux-kernel-headers-2.6.32-1.4.13.noarch.rpm)
gcc-4.3-62.198.s390x.rpm (requires gcc43-4.3.4_20091019-0.7.35.s390x.rpm)
glibc-devel-32bit-2.11.1-0.17.4.s390x.rpm
gcc-32bit-4.3-62.198.s390x.rpm (requires
gcc43-32bit-4.3.4_20091019-0.7.35.s390x.rpm and
libgomp43-32bit-4.3.4_20091019-0.7.35.s390x.rpm)
libstdc++43-devel-4.3.4_20091019-0.7.35.s390x.rpm
gcc-c++-4.3-62.198.s390x.rpm (requires
gcc43-c++-4.3.4_20091019-0.7.35.s390x.rpm)
libstdc++43-devel-32bit-4.3.4_20091019-0.7.35.s390x.rpm
libstdc++-devel-4.3-62.198.s390x.rpm
libcap1-1.10-6.10.s390x.rpm
The following rpm command is used to verify the full extensions of the rpms. Some of the
requirements need the s390 (31 bit), and some need the s390x (64 bit) version of the rpm:
# rpm -qa --queryformat="%{n}-%{v}-%{r}.%{arch}.rpm" | grep <package>
Chapter 4. Setting up SUSE Linux Enterprise Server 11 SP2 and Red Hat Enterprise Linux 6.2
#NTPD_OPTIONS="-g -u ntp:ntp"
NTPD_OPTIONS="-x -g -u ntp:ntp"
Restart the network time protocol daemon after you complete this task by running the
following command as the root user:
# /sbin/service ntp restart
Shutting down network time protocol daemon (NTPD)
done
Starting network time protocol daemon (NTPD)
done
# ps -ef | grep ntp | grep -v grep
ntp 56945 1 0 11:06 00:00:00 /usr/sbin/ntpd -p /var/run/ntp/ntpd.pid -x -g -u
ntp:ntp -i /var/lib/ntp -c /etc/ntp.conf
Next, we must configure the system by using the following command so that the NTP daemon
is started on reboot:
# chkconfig --level 35 ntp on
You might encounter the problem that is shown in Example 4-2 when Oracle runs its system
pre-checks.
Example 4-2 Clock synchronization error
PRVE-0029 : Hardware clock synchronization check could not run on node xxxxx"
To resolve this problem, add the following lines to the /etc/init.d/halt.local file:
CLOCKFLAGS="$CLOCKFLAGS --systohc"
#/sbin/hwclock --systohc
You can now proceed to 4.3, Customization that is common to SUSE Linux Enterprise
Server and Red Hat Enterprise Linux on page 60.
For more information about how to install Red Hat Enterprise Linux 6 for an Oracle Database,
see Appendix A, Setting up Red Hat Enterprise Linux 6.3 for Oracle on page 311.
For Oracle Database 11gR2 (11.2.0.3), the minimum version is RHEL 6.2, kernel 2.6.32-220
or higher. This was certified in Q1 2013.
To check the version of RHEL you have installed, use the following command:
# cat /proc/version
Linux version 2.6.32-220.el6.s390x (mockbuild@s390-001.build.bos.redhat.com)
(gcc version 4.4.5 20110214 (Red Hat 4.4.5-6) (GCC) ) #1 SMP Wed Nov 9 08:20:08
EST 2011.
You also should review the following notes:
Note 1377392.1: How to Manually Configure Disk Storage devices for use with Oracle
ASM 11.2 on IBM: Linux on System z Red Hat 6
Note 1459030.1: 11.2.0.3 Grid Installer Hangs at 75% When Using DASD Softlink Device
Note 1514012.1: runcluvfy stage -pre crsinst generates reference data is not available for
verifying prerequisites for RHEL 6.
4.2.2 Linux required RPMs for Red Hat Enterprise Linux installations
For more information about how to set up a Linux guest with all the required rpms for an
Oracle Database installation, see Appendix A, Setting up Red Hat Enterprise Linux 6.3 for
Oracle on page 311.
The rpm checker for RHEL 6 is ora-val-rpm-EL6-DB-11.2.0.3-1.s390x.rpm.
4.2.3 Setting NTP time for Red Hat Enterprise Linux (optional; only for Oracle
Grid installations)
Oracle Grid/ASM performs a system check to verify that the Cluster Time Synchronization
Service is set in such a way as to prevent the system time from being adjusted backward.
If you are installing Oracle Grid for Single Instance ASM or Oracle RAC, you should modify
the NTP configuration to include the slewing option with the -x parameter.
To do this on RHEL, edit the /etc/sysconfig/ntpd file and add the -x flag, as shown in the
following example:
# cat /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no
# Additional options for ntpdate
NTPDATE_OPTIONS=""
Restart the network time protocol daemon after you complete this task as the root user with
the following command:
# /sbin/service ntpd restart
Next, configure the system so that the NTP daemon is started on reboot by using the
following command:
# chkconfig --level 35 ntpd on
Proceed to 4.3, Customization that is common to SUSE Linux Enterprise Server and Red
Hat Enterprise Linux.
Kernel parameters
Oracle User Groups and accounts
File Descriptor limits
User directories
Other RPMs
Kernel parameters
As the root user, ensure that the required kernel parameters are set in the
/etc/sysctl.conf file, as shown in Example 4-3. The recommended kernel settings are listed
in My Oracle Support notes, such as Requirements for Installing Oracle 11gR2 on SLES 11 on
IBM: Linux on System z (s390x), ID 1290644.1, and Requirements for Installing Oracle
11.2.0.3 RDBMS on RHEL 6 on IBM: Linux on System z (s390x), ID 1470834.1.
Example 4-3 Sample /etc/sysctl.conf
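A representative /etc/sysctl.conf fragment follows, based on Oracle's generic 11gR2 guidance rather than on this book's lab values; verify every value against the My Oracle Support notes that are cited above before using it:

```
# Illustrative /etc/sysctl.conf entries for Oracle 11gR2
kernel.sem = 250 32000 100 128
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
```

Run sysctl -p as root to apply the values without a reboot.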
Network configuration
You should comment out any IPv6 entries (see number 2 in Example 4-4) in your
/etc/hosts file if you are not using IPv6 addresses. Also, the first line of the
/etc/hosts file should contain the local host entry, as shown in number 1 in Example 4-4.
Example 4-4 The hosts file in the lab environment.
# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost 1
# special IPv6 addresses
#localhost ipv6-localhost ipv6-loopback 2
9.82.34.164
ora1.wsclab.washington.ibm.com ora1
# Additional Required Only for Oracle RAC install
9.82.34.165
ora2.wsclab.washington.ibm.com ora2
10.0.0.164
ora1-priv.wsclab.washington.ibm.com ora1-priv
10.0.0.165
ora2-priv.wsclab.washington.ibm.com ora2-priv
9.82.34.167
ora1-vip.wsclab.washington.ibm.com ora1-vip
9.82.34.168
ora2-vip.wsclab.washington.ibm.com ora2-vip
#
# If Not using Oracle SCAN IP's for Oracle then setup 2 DNS entries as below
#
#9.82.34.166
ora-cluster.wsclab.washington.ibm.com ora-cluster crs
#9.82.34.169
ora-cluster-scan.wsclab.washington.ibm.com ora-cluster-scan
Oracle also requires that the host name be the fully qualified domain name, with a
corresponding entry in the /etc/hosts file, as shown in the following example:
# hostname
ora1.wsclab.washington.ibm.com
10.0.0.164
10.0.0.165
ora1-priv.wsclab.washington.ibm.com ora1-priv
ora2-priv.wsclab.washington.ibm.com ora2-priv
Tip: For other steps and requirements, see Chapter 3, Network connectivity options for
Oracle on Linux on IBM System z on page 29 and Appendix B in the Installing Oracle
11gR2 RAC on Linux on System z, REDP4788.
One other network interface must be created on each server, such as hsi0, a virtual
HiperSocket. This network interface is used between the Linux guests for Oracle's
Interconnect and should be on a private, non-routable subnet (such as 192.168.x.x or
10.x.x.x). Only the nodes in the RAC cluster should contact the private interface.
You also require two other IP addresses for the Oracle Virtual IPs (VIPs) that must be on the
same subnet as the public eth0 interface, as shown in Example 4-6.
Example 4-6 Oracle VIPs
9.82.34.167  ora1-vip.wsclab.washington.ibm.com ora1-vip
9.82.34.168  ora2-vip.wsclab.washington.ibm.com ora2-vip
Finally, you need three SCAN IP addresses to be defined as DNS A records, as shown
in Example 4-7 (three IP addresses map to one DNS name) for the new 11gR2 Oracle
RAC systems. These should also be on the same subnet as the public interface.
Example 4-7 DNS SCAN entries
rac-scan IN A 9.82.34.166
rac-scan IN A 9.82.34.167
rac-scan IN A 9.82.34.168
# Note: 3 IPs map to one DNS name (not a hosts file entry; we require DNS
# entries for this to work)
#
9.82.34.166 rac-scan.<domain name> rac-scan
9.82.34.167 rac-scan.<domain name> rac-scan
9.82.34.168 rac-scan.<domain name> rac-scan
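Whether a zone actually carries the three required A records can be checked before the installer does. A sketch, using a here-document in place of a live DNS query (the sample mirrors Example 4-7):

```shell
# Count the A records a zone-file fragment defines for the SCAN name;
# Oracle 11gR2 expects three.
cat <<'EOF' | awk '$1 == "rac-scan" && $2 == "IN" && $3 == "A" { n++ } END { print n, "A records for rac-scan" }'
rac-scan IN A 9.82.34.166
rac-scan IN A 9.82.34.167
rac-scan IN A 9.82.34.168
EOF
# prints: 3 A records for rac-scan
```

Against a live name server, `nslookup rac-scan.<domain name>` should likewise return all three addresses in round-robin order.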
If you cannot set up DNS SCAN entries at this time, you can define two /etc/hosts entries
on each of the nodes, but you receive a warning during the installation that can be ignored,
as shown in Example 4-8.
Example 4-8 Non-SCAN Oracle RAC configuration
9.82.34.166  ora-cluster.wsclab.washington.ibm.com ora-cluster crs
9.82.34.169  ora-cluster-scan.wsclab.washington.ibm.com ora-cluster-scan
Important: When the two network interfaces for Oracle RAC are configured (public and
private interfaces), you must have ARP enabled (that is, NOARP must not be configured).
The root.sh script fails on the first node if NOARP is configured.
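One way to catch a NOARP misconfiguration early is to scan the link flags. A sketch that filters captured `ip -o link`-style output (the sample lines below are illustrative, not from the lab systems):

```shell
# Print the names of interfaces whose link flags include NOARP
check_noarp() { awk -F': ' '/NOARP/ {print $2}'; }

cat <<'EOF' | check_noarp
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
3: hsi0: <BROADCAST,MULTICAST,NOARP,UP> mtu 8192
EOF
# prints: hsi0
```

On a live node, pipe `ip -o link` into the same filter; any interface it names needs NOARP removed before running root.sh.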
Example 4-9 shows the output of the ifconfig -a command, run as the root user.
Example 4-9 ifconfig output (truncated; only the interface names survive)
eth0 ...
hsi0 ...
4.3.3 Create and verify required UNIX groups and Oracle user accounts
When Oracle 11gR2 is installed, Oracle recommends that two groups be created: one group
named dba and another named oinstall. (It is possible to install with only one group; for
example, dba.)
If only the database executable files are installed, often a single user account called oracle
is created.
If Oracle Grid for Oracle ASM or a Real Application Cluster (RAC) system is installed, another
user account called grid should be created to manage the grid infrastructure components.
As part of a grid infrastructure installation, Oracle changes certain directories and files to
have root access privileges. Having separate user IDs (one for grid, and one for oracle)
makes it easier to configure the environment variables that are required to maintain each
environment.
To verify that the Linux groups and users were created, you can view the group and password
files by using the following commands:
# cat /etc/passwd | grep oracle
# cat /etc/group
If your users and groups were not created, run the commands that are shown in
Example 4-10 to create them. Consistent group IDs (for example, 501) and user IDs (for
example, 502) across nodes are required, particularly if you share storage or files between
systems.
Example 4-10 Commands to create users
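The body of this example did not survive extraction. A hypothetical sketch of the commands follows (the group and user names come from the text; the specific numeric IDs are assumptions); the generated script would be run as root on every node:

```shell
# Write the group/user creation commands to a script for review.
# GIDs/UIDs must be identical cluster-wide; 501/502 are illustrative.
cat <<'EOF' > /tmp/mkusers.sh
groupadd -g 501 oinstall
groupadd -g 502 dba
useradd -u 501 -g oinstall -G dba -m grid
useradd -u 502 -g oinstall -G dba -m oracle
EOF
# Syntax-check the script without executing it (execution requires root)
bash -n /tmp/mkusers.sh && echo "script syntax OK"
```

After running it on each node, `id oracle` and `id grid` should report the same numeric IDs everywhere.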
4.3.4 Setting file descriptors limits for the oracle and grid users
As the root user, edit or verify the /etc/security/limits.conf file. If you created a separate
user for the Oracle Grid infrastructure, the file descriptor limit (ulimit) entries for the grid
user should be created, as shown in Example 4-11.
Example 4-11 ulimit entries
#vi /etc/security/limits.conf
grid    soft  nofile   1024
grid    hard  nofile   65536
grid    soft  nproc    2047
grid    hard  nproc    16384
#
oracle  soft  nofile   1024
oracle  hard  nofile   65536
oracle  soft  nproc    2047
oracle  hard  nproc    16384
#
# Use memlock for Huge Pages support (commented out)
#*      soft  memlock  3145728
#*      hard  memlock  3145728
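The settings above can be verified mechanically rather than by eye. A sketch that extracts one value from a limits.conf-style fragment (the sample mirrors Example 4-11):

```shell
# Extract the hard nofile limit for the oracle user from a limits.conf fragment
cat <<'EOF' | awk '$1=="oracle" && $2=="hard" && $3=="nofile" {print $4}'
oracle soft nofile 1024
oracle hard nofile 65536
EOF
# prints: 65536
```

Pointing the same awk filter at the real /etc/security/limits.conf on each node confirms that all nodes carry identical limits.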
Ensure that the /etc/pam.d/login file has an entry for pam_limits.so. If you change
/etc/pam.d/login, make a backup first and test any changes with a superuser login before
logging off, because a typographical error can make future logins problematic, as shown in
Example 4-12.
Example 4-12 Making a backup and verification
# cp /etc/pam.d/login /etc/pam.d/login.old
# cat /etc/pam.d/login
#%PAM-1.0
auth     required  pam_nologin.so
session  optional  pam_mail.so standard
session  required  /lib/security/pam_limits.so
session  required  pam_limits.so
session  optional  pam_mail.so standard
To increase the limits at oracle logon, verify (as the oracle user) the oracle user's profile
(for example, /home/oracle/.profile for ksh users) and ensure that the following lines were
added:
#vi .profile
ulimit -n 65536
ulimit -u 16384
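After the profile change, a quick check at logon confirms that the limits took effect; on a configured node, an oracle login should report 65536 and 16384 (the command itself is generic and works in any shell session):

```shell
# Report the per-process limits that Oracle cares about for the current shell
echo "nofile=$(ulimit -n) nproc=$(ulimit -u)"
```

If the reported values are still the defaults, recheck limits.conf, the pam_limits entry, and the profile in that order.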
Another method is to modify the main system profile by adding the following lines to the
/etc/profile file, as shown in Example 4-13. Adjust this if the oracle user uses a different
shell, such as csh or bash.
Example 4-13 Modifying the main system profile
# su - oracle
$ ulimit -a
address space limit (kbytes)   (-M)  unlimited
core file size (blocks)        (-c)  0
cpu time (seconds)             (-t)  unlimited
data size (kbytes)             (-d)  unlimited
file size (blocks)             (-f)  unlimited
locks                          (-L)  unlimited
locked address space (kbytes)  (-l)  unlimited
nice                           (-e)  0
nofile                         (-n)  65536
nproc                          (-u)  16384
pipe buffer size (bytes)       (-p)  4096
resident set size (kbytes)     (-m)  unlimited
rtprio                         (-r)  0
socket buffer size (bytes)     (-b)  4096
stack size (kbytes)            (-s)  10240
threads                        (-T)  not supported
process size (kbytes)          (-v)  unlimited
$ cat .profile
export ORACLE_BASE=/u01/grid/base
export GRID_BASE=/u01/grid
#export ORACLE_HOME=$GRID_BASE/11.2
#
# comment out the following lines for use later, do not have set for runInstaller
#
#export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH:. 1
#export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH 2
umask 022
#defaults for shell startup for ulimits of oracle user
ulimit -u 16384
ulimit -n 65536
Example 4-17 Example of /home/oracle/.profile
$cat .profile
export ORACLE_BASE=/u01/oracle
#export ORACLE_HOME=$ORACLE_BASE/11.2
#
# comment out the following lines for use later, do not have set for runInstaller
#
#export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH:. 1
#export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH 2
umask 022
#defaults for shell startup for ulimits of oracle user
ulimit -u 16384
ulimit -n 65536
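A profile like the ones above can be spot-checked without logging in again. A sketch using a trimmed copy (the sample below is a subset, not the full profile):

```shell
# Create a trimmed copy of a user profile (illustrative subset)
cat <<'EOF' > /tmp/profile.sample
umask 022
ulimit -u 16384
ulimit -n 65536
EOF
# Count the ulimit settings the profile would apply at logon
grep -c '^ulimit' /tmp/profile.sample
```

Running the same grep against the real /home/oracle/.profile and /home/grid/.profile confirms both limit lines are present for both users.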
4.3.6 Other rpm for grid installs for SUSE Linux Enterprise Server and Red Hat
Enterprise Linux
If you are performing an Oracle RAC install, you must install the cvuqdisk-1.0.9-1 rpm
package from the Oracle 11gR2 distribution media, as shown in Figure 4-1.
You can create a fixup script or install the RPM from the software distribution on each of the
nodes in the RAC cluster.
To run the fixup scripts, complete the following steps:
1.
2.
3.
4.
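Before starting the grid install, it is worth confirming that the package is actually present on every node. A sketch (query only; if it is missing, install it as root with `rpm -iv` from the media path, which varies by distribution layout):

```shell
# Check whether the cluster verification utility disk package is installed;
# the fallback message fires on nodes where it is absent.
rpm -q cvuqdisk 2>/dev/null || echo "cvuqdisk not installed"
```

Running this in a loop over all cluster nodes (for example, via ssh) gives a quick pre-install inventory.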
Chapter 5.
Figure 5-1 shows the Cloud Control Architecture as shown in the Oracle Enterprise Manager
Grid Control Basic Installation Guide.
In the server, we created the Oracle Enterprise Manager Cloud Control infrastructure by
following these steps:
1.
2.
3.
4.
5.
6.
Oct 11 08:33 ./
Oct 11 08:34 ../
Aug 31 17:02 install/
Aug 31 17:01 jdk/
Aug 31 17:03 libskgxn/
Aug 31 16:52 oms/
Aug 31 17:04 plugins/
Sep  4 12:49 Release_Notes.pdf*
Aug 31 17:02 response/
Feb 10  2010 runInstaller*
Aug 31 17:03 stage/
Aug 31 17:02 wls/
Aug 31 16:52 WT.zip*
5.2.2 Installing and configuring the Enterprise Manager Cloud Control 12c
The following tasks are completed by the installation wizard as part of a new Enterprise
Manager system:
Install the Middleware components in the Middleware home (in our example,
/u01/app/mw).
Install Oracle Management Agent 12c Release 2 (12.1.0.2) in the agent base
directory that is specified during installation (outside the Middleware home; in our
example, /u01/app/oracle/agent).
Create an Oracle WebLogic domain called GCDomain, a default user account
(weblogic) that is used as the administrative user, and a node manager account.
Configure Oracle Management Service in the Instance Base location (gc_inst) in the
Middleware home for storing all configuration details that are related to Oracle
Management Service 12c (in our example, /u01/app/mw/gc_inst).
Configure the Oracle Management Repository in the existing Oracle Database (in our
example, Oracle SID: orcl).
Configure the various installed components.
Start the installation wizard as the oracle user from the directory where the downloaded
installation files were extracted. As shown in Figure 5-2, the installation starts by asking for
the Oracle Support credentials. (The Software Updates option is skipped in Figure 5-2.)
During the prerequisite checks, the installer verifies the requirements for installation. We
made sure that all the checks were successful.
However, during software packages checks, we encountered the following software error, as
shown in Figure 5-3:
Checking for glibc-devel-2.5-49.i386; Not found. failed <<<<
In SUSE Linux Enterprise Server 11, the following package also failed:
Checking for libstdc++43-4.3; Not found. Failed <<<<
However, the warnings that are shown in Figure 5-3 can be ignored for these packages if
equivalent packages are already installed on your system, as described in the My Oracle
Support documents.
Recommended My Oracle Support documents: For more information, see the following
documents:
EM 12c: Installation on OEL6 64-bit Fails At Pre-requisite Check Due To Missing
Package 'glibc-devel-2.5-49.i386', ID 1478035.1
EM 12c: Agent Installation on SLES11 fails at Pre-requisite check Checking for
libstdc++-4.1.0; Not found. Failed, ID 1471398.1
Ignore the warnings and continue the installation.
As shown in Figure 5-4, we selected Create a New Enterprise System and the Advanced
option.
As shown in Figure 5-5, we specified the middleware home location, agent base directory
location, and the host name where the installation is carried out.
As shown in Figure 5-6, the mandatory plug-ins are automatically grayed out and we can
select any other plug-ins that are needed. In our example, we left the default selection.
As shown in Figure 5-8, we specified the required information for the connection to the
database that is installed on the server. We also chose SMALL as the deployment size.
When we clicked Next, we encountered the error that is shown in Figure 5-9. Although we did
not configure the database for Enterprise Manager when we created the database, we still
received the error.
The Oracle Cost-Based Optimizer (CBO) statistics gathering job prerequisite appeared, as
shown in Figure 5-11, and we chose Yes to fix the issue automatically.
The database configuration prerequisite warnings were shown (see Figure 5-12) and we
chose to fix the database configuration as recommended by the installation wizard. We
used the SQL*Plus tool to change the parameters and then clicked OK.
The Repository configuration password details were entered, as shown in Figure 5-13.
The configuration values were reviewed and we clicked Install, as shown in Figure 5-15.
The installation started and we see the progress of the installation, as shown in Figure 5-16.
When the installation and configuration completed, we ran the allroot.sh command as the
root user based on the instructions, as shown in Figure 5-17.
The Enterprise Manager Cloud Control configuration and installation took more than 40
minutes to complete and displayed the status. As shown in Figure 5-18, the URL and port
number access information for Enterprise Manager Cloud Control and Admin Server was
shown.
When we used the Enterprise Manager Cloud Control URL, we were asked to trust the
certificate and add a security exception. Then, the Enterprise Manager Cloud Control
window opened and we logged on as sysman with the assigned password.
We accepted the license requirement and this completed our Enterprise Manager Cloud
Control 12c Server installation.
These tasks are for a simple configuration installation process. Enterprise Manager Cloud
Control offers multiple configurations. For more information about advanced installation and
configuration options, see the Oracle manuals.
5.3.1 Upgrading Software Library by using the Self Update Feature in online mode
In the following example, we show how the Software Library can be updated in online mode
on an Enterprise Manager Cloud Control that is running on x86-64 Linux architecture to get
the management agent for Linux on System z.
In general, the process includes the following steps:
1.
2.
3.
4.
5. As shown in Figure 5-24, on the Administration page, we chose OMS Shared Filesystem
and selected Add to add a new OMS shared file system.
6. As shown in Figure 5-22, in the Add OMS Shared File System Location panel, we entered
the Name (emlib) and Location (/oracle/emlib) on the OMS host.
7. When we clicked OK, a metadata registration job was submitted (as shown in Figure 5-23)
because this was the first time that the process was done. The metadata registration job
imports all of the metadata information of all the installed plug-ins from the Oracle home
of the OMS.
The progress of the job can be tracked from the Enterprise menu by selecting Job, and then
clicking Activity.
On the Job Activity page, in the Advanced Search region, enter the name of the job, choose
Targetless as the Target Type, and then click Search. Typically, the name of the job starts
with SWLIBREGISTERMETADATA_*. If the job status shows as Succeeded, the Software
Library is configured properly.
3. As shown in Figure 5-25, select the Setup menu at the upper right of the page.
4. Select Extensibility → Self Update, and then select Check Updates to see the complete
list of available updates for the Agent software.
A background job is submitted to get the new updates from Oracle. The output log shows
the output of the job.
Complete the following steps to download Management Agent software in online mode:
1. As shown in Figure 5-26, select the Setup menu at the upper right of the page and then
click Extensibility → Self Update.
2. Select the entity type Agent Software and choose Open from the Action menu. The entity
type page shows agent software for different platforms.
a. We selected IBM: Linux on System z OS Platform and 12.1.0.2 version from the list of
available updates.
b. We clicked Download and scheduled the download job for an immediate run, as
shown in Figure 5-27.
5.4.1 Upgrading Software Library by using the Self Update Feature in offline
mode
In the following example, we describe how the Software Library can be updated in offline
mode on an Enterprise Manager Cloud Control that is running on x86-64 Linux architecture to
get the management agent for Linux on System z. Oracle requires the use of the Enterprise
Manager command-line interface (EMCLI) to update the EM Cloud Control software.
In general, the process includes the following tasks:
1.
2.
3.
4.
5. Set up the EMCLI client by running the following command from the directory where the
EMCLI client is installed:
./emcli setup -url="https://9.12.5.131:7802/em" -username=sysman
-dir=/zCode/patches -trustall -certans=yes
6. Use the ./emcli setup command to see how the EMCLI client was set up in that
environment, as shown in Figure 5-29.
5. By using the provided link (you must provide MOS logon credentials), the .zip file is
downloaded, as shown in Figure 5-32.
6. Use the emcli import_update_catalog command, specifying the location of the
downloaded patch file and the omslocal option, as shown in Figure 5-33.
In the Self Update window, the status is shown as Downloaded for the IBM: Linux on
System z agent type.
5. Select Apply for the Downloaded Agent.
The agent software is staged in the Software Library and is available to the Add Targets
wizard, which we used to install the agent on System z Linux host machines.
After the job is completed, the status is shown as Applied.
The network between the Cloud Control server where the OMS is running and the
destination hosts must be accessible.
We used ping by host name to make sure that the OMS and the hosts can reach each other.
We used the following process to install Oracle Management Agent 12c for Linux on System z
from Enterprise Manager Cloud Control 12c.
To add or install an agent on a host, the software distribution of the agent that corresponds to
the host's platform must be available in the Software Library.
Complete the following steps to verify the availability of the Linux on System z agent in the
EM Cloud Control server:
1. Log on to Enterprise Manager Cloud Control 12c.
2. Select Setup → Extensibility → Self Update, as shown in Figure 5-35.
3. In the Status section of the Self Update window, select Agent Software as the type, as
shown in Figure 5-36.
4. In the Agent Software Updates section (see Figure 5-37 on page 101), you can see that
Agent Software for the Linux on System z shows a status of Applied. When the rows are
highlighted, the bottom of the window shows the status, as it did when the agent software
was available, downloaded, and applied.
5. To deploy the agent, select Setup → Add Target → Add Targets Manually, as shown in
Figure 5-38.
6. On the Add Targets Manually page, select Add Host Targets and then click the Add Host
button.
7. On the Host and Platform page (see Figure 5-39 on page 102), complete the following
steps:
a. Accept the default name that is assigned for this session.
b. Click Add and enter the fully qualified name of the host. Select IBM: Linux on
System z as the platform of the host on which we want to install the Management
Agent. Click Next.
8. On the Installation Details page (see Figure 5-40), complete the following steps:
a. In the Deployment Type section, select Fresh Agent Install.
b. In the Installation Details section, enter the path to the base directory for Installation
Base Directory (the software binaries, security files, and inventory files of
Management Agent are copied here). In our case, it is /u01/oracle/agentHome.
c. For the Instance Directory, we accept the default instance directory location (all
Management Agent-related configuration files can be stored here). In our case, the
/u01/oracle/agentHome/agent_inst directory is used.
Recommendation: Oracle recommends that the instance directory be maintained
inside the installation base directory.
9. From the Named Credential list, add a new profile whose credentials are used for
setting up the SSH connectivity between the OMS and the remote host. They are also
used for installing a Management Agent, as shown in Figure 5-41. Click Next.
10.On the Review page (see Figure 5-42), review the details and then click Deploy Agent to
install the Management Agent.
The progress of installation can be monitored in the Add Hosts Status window, as shown
in Figure 5-43.
During the prerequisite check stage, the deployment failed with root.sh authorization
messages, as shown in Figure 5-44.
Continue the installation by selecting the Continue all hosts option, as shown in
Figure 5-45.
An Agent Deployment Summary message is shown when the process is complete. The
deployment of the agent shows the status that is shown in Figure 5-46.
11. Run root.sh on the host (as recommended) and then click Done.
By selecting Targets → Hosts in the Cloud Control (as shown in Figure 5-47), we can see
the availability of the hosts in the Hosts window, as shown in Figure 5-48.
From the output, you can see that IBM: Linux on System z, 12.1.0.2.0 is available in the
Enterprise Manager Cloud Control Server.
2. Run the ./emcli get_agentimage command to download the agent software, as shown
in Figure 5-50.
3. The software needs at least 1 GB of storage. Copy this file to the destination host and
extract the .zip file there, as shown in Figure 5-51.
At the destination host, the agent.rsp file was customized, as shown in Figure 5-52.
This completes the Agent Deployment in silent mode process. From the Grid Control Server
Console, we can add the databases that are running on the Destination host to be monitored.
2. In the Host field of the Add Database Instance Target: Specify Host window, specify the
fully qualified host name lnxcl2n1.itso.ibm.com and click Continue, as shown in
Figure 5-55.
3. The agent discovers the database (REMOTEDB), the ASM instance, and the listener, as
shown in Figure 5-56.
4. Configure the database by selecting Configure at the REMOTEDB line and specifying the
database-related parameters. Save the configuration.
The database configuration is now saved, and the databases that are running on Linux on
System z can be monitored in Oracle Enterprise Manager Cloud Control.
5.8 Summary
In this chapter, we shared our experiences with installing a Cloud Control server on an
x86-based Linux server and deploying agents from there to monitor the databases that are
running on Linux on System z.
Before deploying the agents, we also updated the Cloud Control Software Library with the
required levels of agent software and plug-ins by connecting to the Oracle repository site
online.
Starting with Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2), we showed
how the command-line utility emcli can also be used to update the Grid Control Software
Library in offline mode. The agents were deployed from the Cloud Control Console and the
silent agent deployment option was used at the Linux guests.
We also showed how to enable the cloud control to monitor Oracle databases.
Part 2
Managing an Oracle
environment on Linux
on System z
In this part, we provide information about the following topics to help you manage your
environment:
Using the new z/VM Live Guest Relocation (LGR) feature to move an Oracle Linux guest
from one LPAR to another.
Important: This feature is not certified by Oracle at the time of this writing. For more
information, see Chapter 6, Using z/VM Live Guest Relocation to relocate a Linux
guest on page 117.
Considerations for tuning your environment to improve the performance of your Oracle
database, covering z/VM, Linux, and Oracle tuning possibilities, as described in Chapter 7,
Tuning z/VM, Linux, and Oracle to run on IBM System z on page 137.
Options that were used to migrate Oracle Database instances to the Linux on System z
platform, as described in Chapter 8, Cross-platform migration overview on page 155.
Options that can be used to provide High Availability and Disaster Recovery solutions
when Oracle Database is run on Linux on System z. These options include Oracle
components and IBM components to provide a highly available environment, as described
in Chapter 9, High Availability and Disaster Recovery environment for Oracle on
page 179.
Chapter 6. Using z/VM Live Guest Relocation to relocate a Linux guest
User directory
DASD volumes
User minidisks
Spool files
Network devices
Members can be on the same or separate Central Processor Complexes (CPCs). They can
be first-level or second-level z/VM systems. SSI enables the members of the cluster to be
managed as one system, which allows service to be applied to each member of the cluster,
thus avoiding an outage to the entire cluster.
6.1.2 LGR
With the IBM z/VM Single System Image, a running Linux on System z virtual machine can be
relocated from one member system to any other, a process that is known as LGR. LGR
occurs without disruption to the business. It provides application continuity across planned
z/VM and hardware outages and flexible workload balancing that allows work to be moved to
available system resources.
You might need to relocate a running virtual server for the following reasons:
Maintaining hardware or software
Fixing performance problems
Rebalancing workload
Relocating virtual servers can be useful for load balancing and for moving workload off a
physical server or member system that requires maintenance. After maintenance is applied to
a member, guests can be relocated back to that member, which allows you to maintain z/VM
and keep your Linux on System z virtual servers available.
118
LGR support: Linux on System z is the only guest environment that is supported by LGR.
Because the LGR process is not yet certified by Oracle, it is not recommended for
relocating an active Oracle RAC node.
In this chapter, we describe two scenarios of Oracle relocation between two members of a
z/VM SSI cluster: an Oracle single instance, and a two-node Oracle RAC in which each
node is stopped before its relocation.
(Figure: two z196 LPARs hosting SSI members MOPVMEM1 and MOPVMEM2, connected
by a CTC link; each member runs TCPIP, DIRMAINT, and VMPTK.)
Use the CP command QUERY SSI to display information about your SSI cluster, as shown in
the following example:
==> query SSI
SSI Name: MOPVMSSI
SSI Mode: Stable
Cross-System Timeouts: Enabled
SSI Persistent Data Record (PDR) device: VMPCOM on 70BE
SLOT SYSTEMID STATE     PDR HEARTBEAT      RECEIVED HEARTBEAT
   1 MOPVMEM1 Joined    11/28/12 18:08:07  11/28/12 18:08:07
   2 MOPVMEM2 Joined    11/28/12 18:08:16  11/28/12 18:08:16
   3 -------- Available
   4 -------- Available
Ready; T=0.01/0.01 18:08:28
(Figure: the ITSOORSI guest being relocated by LGR to member MOPVMEM2, with
Swingbench application servers driving the workload.)

Minidisk  Disk type    Size (MB)  Mount
100       3390 mod 9   7042
101       VDisk        146
102       3390 mod 9   7042       Second-level swap
200       3390 mod 9   7042       /u01/oracle
ITSOORSI is connected to an OSA adapter through a layer 2 virtual switch (VSWITCH), SWCLO.
The SWCLO VSWITCH was defined identically in both members of the SSI cluster to allow
LGR.
As a requirement for a guest to be eligible for LGR, we added the OPTION
CHPIDVIRTUALIZATION ONE statement to the user directory, as shown in the following example:
USER ITSOORSI XXXXXX 8G 8G G
COMMAND COUPLE 0D20 SYSTEM SWCLO
COMMAND DEFINE VFB-512 AS 0101 BLK 299008 1
CPU 00 BASE
CPU 01
MACHINE ESA 4
OPTION CHPIDVIRTUALIZATION ONE 2
SCR INA WHI NON STATA RED NON CPOUT YEL NON VMOUT GRE NON INRED TUR
CONSOLE 0009 3215 T
NICDEF 0D20 TYPE QDIO DEVICES 3 LAN SYSTEM SWCLO 3
SPOOL 000C 2540 READER *
SPOOL 000D 2540 PUNCH A
SPOOL 000E 1403 A
MDISK 0100 3390 1 10016 CL2B03 MR 4
MDISK 0102 3390 1 10016 CL2B04 MR 5
MDISK 0200 3390 1 10016 CL2B05 MR 6
The definition of ITSOORSI virtual machine includes the following entries (the numbers in the
following list refer to the numbers in the preceding example):
1.
2.
3.
4.
5.
6.
This file is used by Oracle clients to retrieve the following information that is required to
connect to the database (the numbers in the following list refer to the numbers in the
preceding example):
1. Connection name
2. Listener address of the database
3. Database service name
In Swingbench GUI that is shown in Figure 6-3, we define the connection information to
access the database (the service name as defined in the tnsnames.ora file), the number of
users to generate, and other benchmark runtime parameters.
We completed the following steps to verify the connectivity between Swingbench tool and
Oracle database check:
1. TCP sockets were opened between Swingbench and the Oracle database guest
ITSOROSI, as shown in the following example:
# netstat -a | grep swingbench | grep 10.3.58.50
tcp   0   0 swingbench.:60814   10.3.58.50:ncube-lm   ESTABLISHED
tcp   0   0 swingbench.:60667   10.3.58.50:ncube-lm   ESTABLISHED
tcp   0   0 swingbench.:60631   10.3.58.50:ncube-lm   ESTABLISHED
tcp   0   0 swingbench.:60769   10.3.58.50:ncube-lm   ESTABLISHED
tcp   0   0 swingbench.:60731   10.3.58.50:ncube-lm   ESTABLISHED
tcp   0   0 swingbench.:60776   10.3.58.50:ncube-lm   ESTABLISHED
tcp   0   0 swingbench.:60749   10.3.58.50:ncube-lm   ESTABLISHED
tcp   0   0 swingbench.:60652   10.3.58.50:ncube-lm   ESTABLISHED
tcp   0   0 swingbench.:60700   10.3.58.50:ncube-lm   ESTABLISHED
2. TCP sockets were opened between Swingbench and ITSOROSI, as shown in the following
example:
# netstat -a | grep swingbench | grep 10.3.58.50 | wc -l
200
A total of 200 connections are opened, which correspond to the 200 users that are defined in
Swingbench.
Mon Dec ... ttl=255 time=7.77 ms
Mon Dec ... ttl=255 time=0.499 ms
Mon Dec ... ttl=255 time=0.561 ms
Mon Dec ... ttl=255 time=0.590 ms
Mon Dec ... ttl=255 time=0.797 ms
In the Swingbench GUI, we can observe that during the quiesce time, the transactions froze
(as shown in Figure 6-4) in the Transaction per seconds graph. However, all of the users
remained connected to the database.
When one of the nodes is shut down, the clients are automatically reconnected to the
surviving node. Figure 6-5 shows the tested infrastructure.
(Figure 6-5: Two-member SSI cluster MOPVMSSI; guest ITSOTST5 on MOPVMEM1 and
ITSOTST6 on MOPVMEM2, each connected to public VSwitch SWZGORA1, with LGR
between the members; Swingbench application servers drive the workload.)
MDISK 0300 3390 1 10016 CL2B0C W 8
MDISK 0301 3390 1 10016 CL2B0D W 8
MDISK 0302 3390 1 10016 CL2B0E W 8
MDISK 0303 3390 1 10016 CL2B0F W 8
MDISK 0304 3390 1 10016 CL2B10 W 8
Node      Public IP   Virtual IP  Interconnect IP
ITSOTST5  10.3.58.51  10.3.58.55  10.7.17.10
ITSOTST6  10.3.58.52  10.3.58.56  10.7.17.11
(FAILOVER_MODE=(TYPE=SELECT)(METHOD=BASIC))
)
)
This example included the following components (the following numbers correspond to the
numbers that are shown in the preceding example):
1. The connection name.
2. By setting LOAD_BALANCE=on, Oracle RAC distributes the user connections fairly
across the different nodes accessing the same database.
3. The address of the SCAN listener.
4. The database service name.
5. The definition of the FAILOVER_MODE to provide client recovery capabilities when a
node is shut down.
Tip: Make sure that the remaining node (or nodes) can handle all of the connections that
are redirected by the TAF from the failed node.
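Only a fragment of the connect descriptor survives above. A sketch of a comparable tnsnames.ora TAF entry follows; the connection name, host, port, and service name are assumptions for illustration, while LOAD_BALANCE and FAILOVER_MODE reflect the settings described in the text:

```
ORCL_TAF =
  (DESCRIPTION =
    (LOAD_BALANCE = on)
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC))
    )
  )
```

With TYPE=SELECT, in-flight queries are replayed on the surviving node after a failover; with METHOD=BASIC, the failover connection is established only when needed rather than pre-created.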
In accordance with our TAF configuration, Swingbench generates a workload of 100 users
by using the single IP address of the SCAN listener, and RAC balances the connections
between the two nodes.
To verify the connectivity between the Swingbench tool and Oracle RAC, check the following
components:
The state of the nodes, as shown in the following example:
itsotst5:~ # su - grid
grid@itsotst5:/home/grid> crsctl status server
NAME=itsotst5
STATE=ONLINE
NAME=itsotst6
STATE=ONLINE
The TCP sockets that are opened between Swingbench and the Oracle RAC, as shown in
the following example:
swingbench:~ # netstat -a | grep swingbench | grep 10.3.58.5
tcp   0   0 swingbench.:46789   10.3.58.56:ncube-lm   ESTABLISHED
tcp   0   0 swingbench.:46777   10.3.58.56:ncube-lm   ESTABLISHED
tcp   0   0 swingbench.:59034   10.3.58.55:ncube-lm   ESTABLISHED
tcp   0   0 swingbench.:59030   10.3.58.55:ncube-lm   ESTABLISHED
tcp   0   0 swingbench.:59042   10.3.58.55:ncube-lm   ESTABLISHED
tcp   0   0 swingbench.:59038   10.3.58.55:ncube-lm   ESTABLISHED
tcp   0   0 swingbench.:59026   10.3.58.55:ncube-lm   ESTABLISHED
tcp   0   0 swingbench.:46785   10.3.58.56:ncube-lm   ESTABLISHED
tcp   0   0 swingbench.:46781   10.3.58.56:ncube-lm   ESTABLISHED
tcp   0   0 swingbench.:46773   10.3.58.56:ncube-lm   ESTABLISHED
The number of TCP sockets that are opened between Swingbench and ITSOTST5, as
shown in the following example:
swingbench:~ # netstat -a | grep swingbench | grep 10.3.58.55 | wc -l
50
There are 50 open connections, which corresponds to half of the 100 users that are
defined in Swingbench.
The number of TCP sockets that are opened between Swingbench and ITSOTST6, as
shown in the following example:
swingbench:~ # netstat -a | grep swingbench | grep 10.3.58.56 | wc -l
50
There are 50 open connections, which corresponds to half of the 100 users that are
defined in Swingbench.
As shown in Figure 6-7, when ITSOTST6 is stopped, transactions per second freeze for a
short time across the RAC. Some transactions are rolled back by TAF and automatically
rebalanced to the second node. However, all users stay connected to the database.
After ITSOTST6 rejoins the cluster, the 100 users on ITSOTST5 cannot be rebalanced between
the two nodes. Only new user connections are automatically balanced.
We can apply the steps that are described in Scenario 2 to the Oracle cluster on ITSOTST5:
relocate it with LGR, and then restart it.
By using Oracle failover capabilities and z/VM LGR, we relocated our entire Oracle RAC
infrastructure to a new z/VM with only a few seconds of service interruption during the
relocation and with no loss of user connections, queries, or transactions to the database.
Chapter 7.
7.1.1 Architecture
In this section, the Linux and z/VM architecture is described.
Linux
From a Linux perspective, processor resources are assigned to the kernel or to processes. Processes can belong to Oracle, to the kernel, or to any of many infrastructure or other applications. The important part of analyzing processor use at the Linux level is knowing which processes are using how much CPU. See Figure 7-4 on page 142 for an example.
When analyzing delay, it is common for installations to look at steal time. This is when the
Linux server is waiting because another server or logical partition (LPAR) is using the CPU
resource and Linux is not dispatched. This is the perfect example of the value of having
higher level information: knowing there is a problem is not as valuable as knowing why there
is a problem. From inside Linux, it is difficult to determine the total IFL usage that is causing
the steal time.
z/VM
For our purposes, Linux servers run under z/VM, which, in turn, runs in one of possibly many LPARs. Given a specific number of IFL processors in one System z server, the processors are shared first across LPARs, then across virtual machines, and then, within Linux, possibly across multiple virtual processors. The one metric that is important to monitor is the total CPU utilization. Processor distribution is handled within Linux by using nice, within z/VM by using share, and for LPARs by using weights, none of which matter at low usage. At higher usage, it is up to the installation to decide which processes, virtual machines, and LPARs are allowed access to the real processors.
From a Linux perspective, nicing processes lower (negative) gives them more access to the processors. From a z/VM perspective, setting shares higher gives virtual machines more access to the physical processors. From an LPAR perspective, setting weights by using the Hardware Management Console (HMC) determines how much processing power each LPAR can access.
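The Linux half of this control can be sketched with the standard nice command. Note that a lower (negative) nice value, which gives a process more processor access, requires root; raising niceness needs no privilege. Running nice with no arguments prints the current niceness, so the following line starts a command at niceness 10 and shows the value it runs at:

```shell
# Start a command at niceness 10; 'nice' with no arguments reports the
# niceness that the command inherits.
nice -n 10 nice
```

A production setup would apply renice to specific Oracle process IDs rather than to a demonstration command.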
Linux
Best practices for Linux require an understanding of costs. This is a shared-resource environment, and resources that are used by untuned applications, unneeded applications, or infrastructure take away resources from productive work. An untuned application that uses excessive resources has a cost in terms of capital expenditure because, at some point, more IFLs are needed than if the application were tuned. An IFL has a cost that can be identified.
z/VM
Without giving a full description of the z/VM scheduler, share settings are used to control CPU distribution and can be absolute or relative. Follow these guidelines when you set shares for virtual servers:
There is a choice between absolute shares and relative shares. Both are normalized based on the current load on the system in terms of logged-on servers, and the scheduler uses the normalized share to schedule servers that are requesting service. A relative share gives a server a portion of the processing power relative to other servers, and that portion drops as more servers log on to the system. Absolute shares are fixed and independent of load; as more servers log on, an absolute server's share of the CPU resource grows as compared to relative servers. Thus, servers whose service requirements increase as load increases should be absolute, and servers that should compete for CPU resource should be relative. For example, most Linux servers should be relative; TCP/IP and RACF should be absolute.
Shares are divided by the number of virtual processors that are defined to the server. Thus, a default share of Relative 100 for a virtual server with two virtual processors gives each virtual processor a relative share of 50. To maintain a baseline performance target per processor, scale the share with the number of virtual processors: with two virtual processors, the relative share should be 200; with four virtual processors, it should be set to 400.
Changing shares should be done in small increments.
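The per-processor scaling rule above can be sketched as a small calculation that emits the CP command to issue (for example, through vmcp from an authorized guest). The guest name LNXORA1 and the baseline share are assumptions, not values from this residency:

```shell
# Keep the per-vCPU share constant when virtual processors are added:
# an N-vCPU guest needs N times the single-vCPU relative share.
base_share=100   # baseline relative share per virtual processor
vcpus=4
echo "CP SET SHARE LNXORA1 RELATIVE $(( base_share * vcpus ))"
```

For a four-vCPU guest this prints a SET SHARE command with RELATIVE 400, matching the guideline in the text.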
Linux
For more information about Linux performance on this architecture, see Linux on IBM System
z: Performance Measurement and Tuning, SG24-6926.
z/VM
Performance analysis for the processor starts at the LPAR level. When there are multiple
LPARs, as in Figure 7-1, the graph shows the total IFL utilization and the total General
Purpose processor utilization.
Linux
Storage size is critical to Linux and Oracle performance. When the Oracle SGA and PGA do not fit into Linux storage, Linux storage management moves pages to the swapping subsystem. The following storage is measured:
Minimizing storage requirements in this shared-storage environment means that more storage is available for other servers and other work. An IBM System z environment supports, and expects, much higher I/O bandwidth than distributed environments, which allows Linux on System z to reduce cache sizes at the expense of more I/O. With the objective of maximizing throughput, oversizing storage reduces the configuration's potential for work.
z/VM
The z/VM LPAR is allocated storage that is then shared between the virtual servers. When storage is over-allocated and all virtual servers do not fit into storage, the situation is called overcommitment, expressed as a level of overcommitment. When storage is overcommitted, pages are paged to the paging subsystem on Expanded Storage or disk.
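The overcommitment level itself is a simple ratio. The guest and LPAR sizes below are illustrative assumptions, not measurements from this residency:

```shell
# Storage overcommit level = sum of the logged-on guests' defined
# virtual storage divided by the real storage of the z/VM LPAR.
virtual_total_gb=48   # sum of all guests' virtual storage (assumed)
real_gb=24            # central storage of the z/VM LPAR (assumed)
awk -v v=$virtual_total_gb -v r=$real_gb \
    'BEGIN { printf "overcommit level: %.1f\n", v / r }'
```

A level of 1.0 means no overcommitment; the higher the level, the more the paging subsystem must absorb.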
z/VM analysis
In environments where servers do not drop from the scheduler queue because of processes
that poll on a 10 ms basis, it becomes difficult to differentiate between servers doing work and
those that are only polling. When z/VM Control Program (CP) takes storage from a server that
is polling, there is little cost if that page goes to disk. When CP takes a page from a server
that is doing work, there is a high likelihood that the page is needed again by the server and
there is a high cost when that page has gone to disk. Expanded Storage becomes the buffer,
and the larger the buffer the better. In total, 20% - 25% of your storage should be configured
as Expanded Storage to ensure active pages are not moved directly to disk.
Recommendation: The 20% - 25% suggestion for Expanded Storage is for z/VM LPARs
up to 8 GB. For larger z/VM LPARs, we suggest 2 GB - 4 GB of Expanded Storage, based
on observed z/VM paging rates.
QDROP (the process that CP uses to remove servers from the run queue, thus defining them as inactive) is important. Avoid running noisy processes that poll every 10 ms, which keeps the server active. Servers that do not drop from queue require more real storage at a lower overcommitment level.
Linux
When storage is analyzed, the example in Figure 7-2 and Figure 7-3 shows three Oracle servers and their storage layout. Understanding the storage layout from the Linux perspective highlights both the issues and the opportunities. In the following examples, the node lnxsa3 is a 4 GB server with approximately 80 MB used for the kernel and page structure tables (4096 MB minus 4015.7 MB). Swap space holds few pages, so at some point this server used all of its storage, likely during the Oracle installation. The buffer is 75 MB, and the page cache is 2,800 MB. Knowing the storage layout of all the servers shows where to focus from a system level.
Screen: ESAUCD2 ITSO          1 of 2 LINUX UCD Memory Analysis Report
         Node/      <--------- Real Storage (MB) --------->
Time     Group      Total   Avail    Used  Shared  Buffer
-------- --------  ------  ------  ------  ------  ------
13:56:00 lnxsa3    4015.7   892.7    3123     0.0    75.2
         lnxsa1     996.5   502.0   494.5     0.0    30.9
         lnxcl2n1  4015.7    1219    2796     0.0    40.7
To understand where the Linux storage is allocated, split screen mode is available, as shown in Figure 7-4 on page 142. In this case, the LNXCL2N1 node was selected in split screen, with the ESAUCD2 window showing the Linux perspective and the ESALNXP window showing the Resident Storage Size (RSS) for the active processes. Linux storage management shares many pages between processes. Thus, although two Oracle processes might show 40 MB and 39 MB, the overlap between the two might account for most of it. The Shared metric is not yet implemented in Linux, so it is not possible to know the overlap between these processes. In this case, the Java process is using considerably more than all the Oracle processes combined. This scenario highlights that the fact that a server runs Oracle should not lead to assumptions about how the storage resource is being used.
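Because RSS double-counts pages that processes share, summing RSS overstates real use. On more recent kernels than the ones described here, the Pss field of /proc/&lt;pid&gt;/smaps apportions each shared page across its users, giving a truer per-process figure. A minimal sketch, totaled for the current shell:

```shell
# Sum the proportional set size (Pss) of the current process: each
# shared page is divided by the number of processes mapping it.
awk '/^Pss:/ { sum += $2 } END { print sum " kB proportional set size" }' \
    /proc/self/smaps
```

Run against an Oracle shadow process PID (with suitable permissions), this gives a per-process figure that can be added across processes without the RSS overlap problem.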
Screen: ESALNXP ITSO
                             <-Process Ident->
Time     Node     Name          ID  PPID   GRP
-------- -------- --------  ----- ----- -----
13:52:00 lnxcl2n1 *Totals*      0     0     0
                  init          1     1     1
                  snmpd      2610     1  2609
                  ohasd.bi   2688     1  2688
                  oraagent   3039     1  3039
                  oracle     3207     1  3207
                  oracle     5126     1  5126
                  java      26947 26902 26902

Screen: ESAUCD2 ITSO   ESAMON 4.110 10/18 13:54-13:55
2 of 2 LINUX UCD Memory Analysis Report
         Node/      <--------- Real Storage (MB) --------->
Time     Group      Total   Avail    Used  Shared  Buffer
-------- --------  ------  ------  ------  ------  ------
13:55:00 lnxcl2n1  4015.7    79.4    3936     0.0   144.2

Figure 7-4 Split Screen Mode showing Linux storage and active processes
z/VM
Real storage in a z/VM system is assigned to one of many different types of address spaces.
To analyze storage, first understand where the existing storage is allocated. The example in
Figure 7-5 shows from the system level that there are 6.8 million pages.
Screen: ESASTR1 ITSO          1 of 2 Main Storage Analysis          ESAMON 4.11
                  <------------------Pages------------------>
          System  Fixed   Non-  Free  Frame  <Available>   Capture
Time     Storage  Store  Pgble  Stor  Table   <2gb  >2gb     Ratio
-------- -------  -----  -----  ----  -----  ----- -----  -------
12:06:00 6815744   2458   5058   252  53248   178K 2094K    0.998
12:05:00 6815744   2458   5058   252  53248   178K 2094K    0.998
Figure 7-5 Main Storage Analysis
Figure 7-6 shows the storage for each address space type. In this case, users are using 2.1 million pages, saved segments are using 139 K pages, VDisk address spaces are using 56,000 pages, and minidisk cache (MDC) has 175 K pages. MDC should never be allowed to use that much storage, especially in a Linux environment. The exception is for shared-root and shared-binary implementations where multiple servers share some number of read-only minidisks.
Screen: ESASTR1 ITSO          2 of 2 Main Storage Analysis
         <----------------------Pages---------------------->
         Systm   User  NSS/DCSS  <-AddSpace->  VDISK  <MDC>  Diag  Capture
Time     ExSpc  Resdnt Resident  Systm   User  Rsdnt  Rsdnt    98    Ratio
-------- -----  ------ --------  -----  -----  -----  -----  ----  -------
12:55:00  5148   2105K   139189  47416      0  56597   175K  2304    0.998
12:54:00  5142   2106K   139186  47416      0  56597   175K  2304    0.998
12:53:00  5145   2107K   139186  47416      0  56597   175K  2304    0.998
Each type of address space can be further analyzed. For example, the number of pages in use by Named Saved Systems (NSS) and Discontiguous Saved Segments (DCSS) seemed high at 139,189 pages. Looking at the ESADCSS display (as shown in Figure 7-7) revealed that VSMDCSS, the SMAPI saved segment, accounted for more than 90% of that total.
Screen: ESADCSS ITSO          1 of 3 NSS/DCSS Analysis      ESAMON 4.110
DCSS *
Time     Name
-------- --------
13:14:00 System
         CMS
         CMSFILES
         CMSPIPES
         CMSVMLIB
         GCS
         INSTSEG
         MONDCSS
         NLSAMENG
         SCEE
         SCEEX
         VSMDCSS
         ZMON
         ZVWS
A best practice for managing many Linux servers is to classify those servers. In Figure 7-8, our cloned Linux servers are in TheUsrs, with other servers classified according to their use. In this analysis, because we were concerned about how much storage would be used when we started 100 servers, it was important to watch storage use to avoid having the whole system abend because of a lack of resources (our paging subsystem was only 8 GB).
Screen: ESAUSPG ITSO          1 of 2 User Storage Analysis
         UserID    <Address Spaces>
                   <Pages Resident>
Time     /Class    VirtDisk  AddSpce
-------- --------  --------  -------
15:48:00 System:     132629        0
         ORACloud     73520        0
         Cluster1     31154        0
         Cluster2      9369        0
         StnAlone     16721        0
         OthLinux      1838        0
         TheUsrs          0        0
         Servers          0        0
         Velocity        27        0
         KeyUser          0        0
With a potential for hundreds of servers on the LPAR, it is more important to understand resource requirements by workload than by server. After the workloads were analyzed (as shown in Figure 7-9), the next step was to look at the cloud workload. Zooming into the ORACLOUD workload shows all the servers in that workload. In this experiment, we cloned 100 servers in about 60 minutes and then logged them on.
Screen: ESAUSPG ITSO          1 of 2 User Storage Analysis
         UserID    <Address Spaces>
                   <Pages Resident>
Time     /Class    VirtDisk  AddSpce
-------- --------  --------  -------
15:51:00 S11OR001      1838        0
         S11OR003      1838        0
         S11OR002      1838        0
         S11OR025      1838        0
         S11OR039      1838        0
         S11OR040      1838        0
         S11OR022      1838        0
         S11OR030      1838        0
         S11OR024      1838        0
         S11OR032      1838        0
         S11OR013      1838        0
         S11OR018      1838        0
Extended count key data (ECKD) versus Fibre Channel Protocol (FCP)
Disk size or logical unit number (LUN) size
HyperPAV or parallel access volume (PAV)
FICON Channel Extension (FCX)/High Performance FICON (HPF)
Logical Volume Manager (LVM), striped or non-striped, and stripe size
Linux I/O scheduler
Direct I/O versus buffered I/O
Different technologies have different costs, performance, and management capabilities. For more information about disk I/O and Oracle, see the Oracle Database on Linux on System z - Disk I/O Connectivity Study white paper. It provides guidelines for disk configuration and tuning hints for ECKD and FCP disk devices, specifically for an Oracle stand-alone database that uses a transactional workload (OLTP) in an LPAR on IBM zEnterprise System:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102234
7.2 Oracle
This section describes the Oracle-specific aspects of performance tuning.
7.2.1 Architecture
Oracle is a standard Linux application in that it uses various processes to perform tasks on behalf of the database and its users. Each database process runs the Oracle kernel code and addresses the System Global Area (SGA). For this reason, the SGA must be in a shared memory area. This method permits each process to behave as though the SGA were uniquely assigned to it, while the SGA is in fact shared among the processes.
Oracle processes perform the following tasks:
Database background tasks: These tasks perform session management and cleanup, write I/O, cache management, pool management, and lock detection. They also manage logging, log archival locally and remotely, checkpointing, and corruption detection. In a clustered environment, they manage inter-instance communication, log management, and cluster membership. In an Automatic Storage Management (ASM) configuration, they manage segment allocation and storage pool management. Other tasks include time keeping, job scheduling, memory management, and Automatic Workload Repository (AWR) statistics collection.
Database server processes: User tasks that perform operations on behalf of user requests. They can exist one per user session (Dedicated Server) or as Shared Servers that multiplex user sessions onto processes, which saves resources.
User processes are critical for performance measurement because they run the SQL and
also perform read I/O on behalf of the user session into the buffer cache. Because a database
is principally designed to service SQL and perform I/O to get the data to perform that service,
most performance measurement and management is concentrated on these processes.
For more information about the Oracle Process architecture, see Chapter 15 Process
Architecture of Oracle Database Concepts 11g Release 2 (11.2), E25789-01, which is
available at this website:
http://docs.oracle.com/cd/E11882_01/server.112/e25789/process.htm#i7265
It is critical to keep up with maintenance and patching. For example, Release 11.2.0.3 includes the following improvements:
The Virtual Keeper of Time (VKTM) process uses slightly less CPU (about 0.08 CPU seconds per minute versus 0.09 with 11.2.0.2).
The ora_dia0 process is greatly improved (about 0.07 CPU seconds per minute versus 0.28 with 11.2.0.2).
Install only the database modules that are needed, and be aware of the following results:
When the database is installed with no options, the gettimeofday function is called 300 times every 15 seconds.
When the database is installed with all options (Java, XML, Text, Spatial, APEX, and so on), the gettimeofday function is called 1500 times every 15 seconds.
Consider whether it is appropriate to use the Oracle Resource Manager, especially under z/VM, because you then have multiple layers of CPU scheduling and time slicing that can potentially interfere with each other. You can disable it with the following Oracle parameter:
resource_manager_plan = ''
Additionally, you need to disable the Maintenance Window Resource Plan, as shown in the
following example:
select window_name, resource_plan from dba_scheduler_windows;

WINDOW_NAME                    RESOURCE_PLAN
------------------------------ ------------------------------
MONDAY_WINDOW                  DEFAULT_MAINTENANCE_PLAN

execute dbms_scheduler.set_attribute('MONDAY_WINDOW','RESOURCE_PLAN','');

WINDOW_NAME                    RESOURCE_PLAN
------------------------------ ------------------------------
MONDAY_WINDOW
For more information about using AWR to diagnose performance issues, see How to Use
AWR reports to Diagnose Database Performance Issues, 1359094.1.
Based on a real-world scenario of a RAC system, we show a worked example in Figure 7-10.
Here, the DB Time is a high multiple of the elapsed time, which indicates that this system and
all nodes on the cluster are busy.
If we look at the top timed events that are shown in Figure 7-11 (remembering that these are not necessarily waits), we see that most of the time is spent in I/O and RAC interconnect. However, what is important to notice is that the average response times are 245 ms for log file sync, 30 ms - 200 ms for the interconnect, and even 42 ms for local I/O. These numbers are unusual for a System z, so we can suspect that what we see in the AWR is an artifact of another problem.
In general, the processors are over 75% busy (as an average over the period), which does not seem too bad. However, AWR is not virtualization-aware, and Oracle does not know that it is not running on a dedicated machine. By taking a step back and looking through the Linux performance view, perhaps we can see the cause of the issue.
In this case, we use the vmstat command to see the Linux view of performance, as shown in
Figure 7-13.
We can readily see that we have a high run queue, which indicates that we cannot service the resource demand. Free memory is rapidly decreasing, and although AWR reports free CPU, that CPU is taken up by steal time, which AWR does not take into account.
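A quick way to spot this pattern is to filter vmstat output for high steal time. The numbers in the sample below are illustrative, not measurements from this system; in the default vmstat layout, column 17 is st, the percentage of time stolen from the guest:

```shell
# Flag vmstat intervals where steal time (st, column 17) exceeds 10%.
vmstat_sample='procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa st
12  0 524288  81920  10240 204800   80   90   400   300  900  1200 55 20  5  2 18
 3  0 524288 524288  10240 204800    0    0   100    80  400   600 30 10 55  3  2'
echo "$vmstat_sample" | awk 'NR > 2 && $17 > 10 {print "high steal:", $17 "%"}'
```

In live use, pipe `vmstat 5` through the same awk filter; a sustained high st value is the signature of CPU contention above the Linux level.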
To summarize this example: the guests were running with insufficient memory to service the SGA and PGA demands, which led to swapping. The CPU demands of this swapping across the LPAR reduced the CPU available to service the database, which caused a queue of database, network, and I/O tasks to build up; these are reported in the AWR as slow I/O and interconnect. This is an example of how a shortage of one resource can appear as a different issue and is diagnosable only by using multiple layers of diagnostic information.
For more information about Linux performance on this architecture, see Linux on IBM System
z: Performance Measurement and Tuning, SG24-6926.
The Oracle Database 11g AMM feature is enabled by the MEMORY_TARGET / MEMORY_MAX_TARGET instance initialization parameters. These parameters cause the SGA to be allocated in the /dev/shm temporary file system, which must be configured to be at least as large as the sum of MEMORY_TARGET / MEMORY_MAX_TARGET for all instances that use AMM on the guest or LPAR.
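The sizing rule can be sketched as a sum over the instances. The three instance sizes below are assumptions for illustration; the script prints the remount command to run rather than running it:

```shell
# /dev/shm must hold the sum of MEMORY_MAX_TARGET across all AMM
# instances on this guest or LPAR; compute the total and print the
# remount command.
targets_mb="4096 2048 1024"   # MEMORY_MAX_TARGET of three instances, in MB
total=0
for t in $targets_mb; do
    total=$(( total + t ))
done
echo "mount -o remount,size=${total}m /dev/shm"
```

To make the size persistent, place the equivalent size option on the /dev/shm (tmpfs) entry in /etc/fstab.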
One of the principal issues with large SGA sizes with large user populations is the amount of
memory that is required to manage the memory structures: the page tables. Each Oracle
process must map the SGA as though it were a private memory area and maintain a page
table to perform the page translation from the process local address to the real page address
in the shared memory segment. These tables take 4 bytes per 4k SGA page addressed, so it
can easily be seen that a 60 GB SGA with 3500 active users needs an enormous amount of
memory to maintain these tables. This memory often is not tracked, and if you are not aware
of it, it can cause some unexpected surprises and out-of-memory conditions. To view the
usage, you can use the following command:
$ cat /proc/meminfo | grep -i page
AnonPages:        1039248 kB
PageTables:      98820932 kB
HugePages_Total:        0
HugePages_Free:         0
HugePages_Rsvd:         0
Hugepagesize:        2048 kB
The best way to avoid this condition is to use Hugepages to increase the page size of the
shared memory segment, which reduces the number of mapping page tables that are
required per process.
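The arithmetic behind the 60 GB / 3500-user example in the text can be worked through directly (4 bytes of page table entry per mapped 4 KiB page, per process):

```shell
# Page table cost of mapping the SGA with 4 KiB pages.
sga_gb=60
sessions=3500
pages=$(( sga_gb * 1024 * 1024 / 4 ))            # 4 KiB pages in the SGA
pt_per_proc_mb=$(( pages * 4 / 1024 / 1024 ))    # page tables per process, MiB
total_gb=$(( pt_per_proc_mb * sessions / 1024 )) # across all sessions
echo "page tables: ${pt_per_proc_mb} MiB per process, ~${total_gb} GiB for ${sessions} sessions"
```

About 60 MiB of page tables per process, or roughly 205 GiB across 3500 sessions: far more than the SGA itself, which is why Hugepages matter at this scale.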
In either case, you must also increase the memlock ulimit in /etc/security/limits.conf. Set the value (in KiB) slightly smaller than the installed RAM. For example, with 64 GB of RAM installed, you can set the following parameters:
*    soft    memlock    60397977
*    hard    memlock    60397977
We also strongly recommend setting the following Oracle parameter:
use_large_pages = ONLY
This setting ensures that you do not silently fall back to small pages when you expect to use large pages (Hugepages); on a busy production system, such a fallback can cause serious performance issues. In general, set ONLY for the instances where you expect Hugepages, and FALSE for the instances with small SGAs that you know do not require them.
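Sizing the Hugepage pool is a straightforward division. The 16 GiB SGA below is an assumption for illustration; with 2 MiB Hugepages (the Hugepagesize reported above), the kernel needs vm.nr_hugepages set to at least SGA size divided by Hugepage size:

```shell
# Number of 2 MiB Hugepages needed to back an SGA, rounded up.
sga_mb=16384       # assumed SGA size in MiB
hugepage_mb=2
echo "vm.nr_hugepages = $(( (sga_mb + hugepage_mb - 1) / hugepage_mb ))"
```

Place the resulting line in /etc/sysctl.conf; Oracle's documented practice is to size the pool from the actual shared memory segments of the running instances.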
Tip: For more information about Hugepages, see the following My Oracle Support Notes:
HugePages on Linux: What It Is... and What It Is Not..., 361323.1
HugePages and Oracle Database 11g Automatic Memory Management (AMM) on
Linux, 749851.1
ORA-00845 When Starting Up An 11g Instance With AMM Configured, 460506.1
USE_LARGE_PAGES To Enable HugePages In 11.2, 1392497.1
The Linux default I/O scheduler is CFQ for 2.6 kernels and deadline for 3.x kernels.
CFQ is optimized for direct access to physical disks and is not suitable for the typical storage servers that are used with System z.
The I/O scheduler is configurable by setting the elevator= boot parameter in the /etc/zipl.conf file.
You can verify your current settings by checking the following parameters:
# cat /sys/block/<device>/queue/scheduler
noop anticipatory [deadline] cfq
For Linux on System z Oracle environments, we recommend the deadline or noop I/O scheduler.
It is important to investigate the best I/O scheduler for your environment by testing with tools such as Oracle's io_calibrate or Orion.
On one system, changing the I/O scheduler by setting elevator=noop in the zipl.conf parameters helped reduce CPU usage for the SAN environment that we were using.
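The scheduler can also be switched at run time for testing, before committing the change to the boot parameters. The DASD device name dasda is an assumption; the run-time change takes effect immediately but is lost at reboot, so also set elevator= in /etc/zipl.conf and rerun zipl:

```shell
# Switch the I/O scheduler for one block device at run time, if present.
dev=dasda
if [ -w /sys/block/$dev/queue/scheduler ]; then
    echo deadline > /sys/block/$dev/queue/scheduler
    cat /sys/block/$dev/queue/scheduler   # active scheduler shown in brackets
else
    echo "device $dev not present on this system"
fi
```

This makes it easy to compare schedulers with io_calibrate or Orion runs without a reboot between tests.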
If a non-ASM file system is used for your database files, we recommend reducing the Linux read-ahead for your database LVM file systems with the following command:
# lvchange -r none <lv device name>
It used to be said: "Because disk I/O is slow, make use of large Oracle and OS buffers." Disk is still not as fast as memory, but with much faster disks (especially in the low-millisecond service time range), it is now practical for random transactional I/O to go to disk. If a transaction can get its data in a few milliseconds, overall transaction time is unaffected.
Tip: When you migrate from another platform, use direct, asynchronous I/O to bypass the OS cache, and tune the Oracle buffer cache for optimal operation; this tuning usually means reducing it extensively.
It also used to be said that you should avoid network operations. Although latency still matters, SAN devices over modern channel and network attachments, coupled with multipath or PAV, mean that network or pseudo-network devices are no longer the slow option they once were. Drive the I/O subsystem as hard as you can: you paid for the capacity, so use it.
7.3 Summary
This is a constantly evolving subject. The latest information about this topic often is presented
at the Oracle Collaborate, SHARE User Group, and zSeries Oracle Special Interest Group
Conferences.
For more information, see this website:
http://www.zseriesoraclesig.org
Chapter 8.
Cross-platform migration
overview
Attention: The following command conventions are used in this chapter:
Linux commands that are running as root are prefixed with #
Linux commands that are running as non-root are prefixed with $
In this chapter, we provide an overview of cross-platform migration topics.
We first highlight what you need to consider before any migration to choose the appropriate
technique. Then, we describe the most popular migration techniques with a set of best
practices to perform an efficient migration.
We also provide a real example in which we describe the steps of a migration by using Oracle
Export/Import Data Pump utilities.
This chapter includes the following topics:
Introduction
Considerations before any migration
Migration techniques
Best practices
Example of migration by using Export/Import Data Pump
Summary
8.1 Introduction
Cross-platform migration has become a common operation in the IT world. Companies perform this operation for the following reasons:
After the decision to migrate from one platform to another is made, several options are available. The choice of migration technique is guided by technical criteria and organizational criteria.
In this chapter, we describe the most commonly used techniques (based on our experience) to give an overview of what is available and of the advantages and limitations of each technique. We do not recommend one technique over another because each case is specific. Furthermore, a migration can also combine several techniques.
In our example, we focus on the Oracle Export/Import Data Pump utility, which is likely the most used technique for migrating from one platform to another.
This document does not replace any IBM or Oracle documents. We assume that the user is
reasonably skilled in the following areas:
Oracle database administration activities
Linux System administration skills
8.2.1 Downtime
Each migration leads to database downtime. Depending on the technique, this downtime can range from a few minutes to more than a day. For critical applications that must always be available, downtime is the main criterion that clients use to choose the appropriate technique.
Endianness
Endianness describes the order in which a platform stores the bytes of a data item in memory. Depending on the platform, this can be little endian or big endian.
Some cross-platform migration methods require the same endianness on the source and the target.
PLATFORM_NAME                    ENDIAN_FORMAT
Solaris OE (32-bit)              Big
Solaris OE (64-bit)              Big
Microsoft Windows IA (32-bit)    Little
Linux IA (32-bit)                Little
AIX-Based Systems (64-bit)       Big
HP-UX (64-bit)                   Big
HP Tru64 UNIX                    Little
HP-UX IA (64-bit)                Big
Linux IA (64-bit)                Little
HP Open VMS                      Little
Microsoft Windows IA (64-bit)    Little
IBM zSeries Based Linux          Big
Linux x86 64-bit                 Little
Apple Mac OS                     Big
Microsoft Windows x86 64-bit     Little
Solaris Operating System (x86)   Little
IBM Power Based Linux            Big
Use the following query to check the endian format for your platform:
select d.platform_id, d.platform_name, tp.endian_format
from v$database d, v$transportable_platform tp
where d.platform_id = tp.platform_id;
Objects
Some objects cannot be migrated with certain techniques, as shown in the following examples:
Streams cannot handle SecureFiles Character Large Objects (CLOB), National Character Large Objects (NCLOB), Binary Large Objects (BLOB), and other types. For more information, see "Migration techniques" on page 159.
The Export/Import Data Pump utilities cannot be used for XML types.
8.2.5 Network
If the migration technique uses the network (for example, replication techniques), you need to ensure that the network is efficient in terms of bandwidth and latency; otherwise, this potential bottleneck dramatically increases the duration of the migration.
The chosen technique must also take into account the locations of the source and target servers. Constraints can include that they are geographically dispersed or that they cannot communicate with each other.
8.2.7 Skills
A migration can be considered a risky operation. Depending on the products and techniques
that are already used in your environment, you might prefer one technique over another.
Tip: Whenever possible, perform the migration with known products to mitigate the risks.
Advantages
This technique features the following advantages:
Limitations
This technique includes the following limitations:
Cannot be used with database versions earlier than Oracle 10g. For older versions, use a standard Export/Import, which is slower and has the following restrictions:
BINARY_DOUBLE and BINARY_FLOAT data types cannot be exported with the EXP utility
Java classes, resources, and procedures that are created with Enterprise JavaBeans are not processed
Data is not stored in compressed format when it is imported
Dump files that are generated by the Data Pump Export utility are not compatible with dump files that are generated by the original Export utility
Downtime can be significant for large databases
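A minimal Data Pump parameter file sketches the flow. The directory object name, dump file pattern, and parallel degree are assumptions for illustration, not values from this example:

```
# full_exp.par -- used as: expdp system parfile=full_exp.par (on the source)
full=Y
directory=dump_dir
dumpfile=full_%U.dmp
parallel=4
logfile=full_exp.log
```

The same parameter file, with the logfile name changed, drives impdp on the target. The %U substitution generates one dump file per parallel worker, and the directory object must exist in each database and point at file system space that is large enough for the dump set.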
Advantage
The main advantage of this technique is that it can be used across different endian formats. If the endian formats are the same, you can instead use the Transportable Database feature, as described in "Other techniques" on page 164.
Limitations
This technique includes the following limitations:
Requires a larger time investment to test the migration and to develop methods of validating the database and application. Consider whether the additional testing time, complexity, and risk are worth the potential reduction in migration downtime.
Requires a higher level of skill from the database administrator and application administrator compared to the use of Data Pump full database Export and Import.
Does not transport objects in the SYSTEM tablespace or objects that are owned by special Oracle users, such as SYS or SYSTEM. Applications that store objects in the SYSTEM tablespace or create objects as SYS or SYSTEM require more steps and increase the complexity of the platform migration.
Only self-contained Oracle tablespaces can be moved between platforms.
If the destination database already contains a tablespace with the same name, you must rename or drop it.
Triggers, packages, and procedures must be re-created on the target database.
Only user tablespaces can be transported. SYSTEM and SYSAUX objects must be created at the target.
Tablespaces must be self-contained. (Materialized views or contained objects, such as partitioned tables, are not transportable unless all of the underlying or contained objects are in the tablespace set.)
The source and target databases must have the same character set.
Not all system privileges are imported into the upgraded database.
Resetting sequences and recompiling invalid objects might be needed.
The Transportable Tablespaces approach does not allow re-architecting the database (logical and physical layout) as part of the migration.
Fragmented data still exists.
Advantages
This technique features the following advantages:
There is no need for extra space for dump files because you copy directly from the source
to the target.
This technique can be used for one or several large tables.
Limitations
This technique includes the following limitations:
Because this technique uses the network, the network traffic can be significant and can slow down other operations, depending on the size of the tables.
It can be used to migrate one or several tables, but it is not practical for an entire database.
This technique can be combined with other techniques (Export/Import or Transportable Tablespaces, for example).
Advantages
This technique includes the following advantages:
Limitations
This technique includes the following limitations:
Some setup activity is required.
Some data types are not supported for capture processes, so an Export/Import of objects with the following types is also required:
SecureFiles CLOB, NCLOB, and BLOB
BFILE
ROWID
User-defined types (including object types, REFs, arrays, and nested tables)
XMLType stored object-relationally or as binary XML
The following Oracle supplied types:
Any types
URI
Spatial
Media
Advantages
This technique includes the following advantages:
Near zero downtime
Works across platforms without conversion
Failback is possible
Limitations
This technique includes the following limitations:
Associated extra license costs
Memory and CPU overhead (3% - 5% CPU impact of Oracle GoldenGate Replication on
the source system, depending on the number of redo logs that are generated)
The following data types are not supported:
ORDDICOM
ANYDATA
ANYDATASET
ANYTYPE
BFILE
MLSLABEL
TIMEZONE_ABBR
TIMEZONE_REGION
URITYPE
UROWID
Transportable database
With transportable database, you can transport an entire database (user data and the Oracle
dictionary) to a platform with the same endian format.
This technique uses Recovery Manager (RMAN) and is similar to transportable tablespaces,
but the main limitation is that the endianness must be the same for the source and the target.
For more information, see Chapter 10 of Experiences with Oracle Solutions on Linux for IBM
System z, SG24-7634.
8.3.7 Considerations when migrating from File System to ASM or vice versa
The following organization types are available for your database files:
File System
Automatic Storage Management (ASM)
ASM is built into the Oracle kernel and provides the DBA with a way to manage many disk
drives for single and clustered instances of Oracle. ASM is a file system/volume manager for
all Oracle physical database files (such as data files, online redo logs, control files, archived
redo logs, RMAN backup sets, and SPFILEs). All of the database files (and directories) to be
used for Oracle are contained in a disk group.
If you decide to change the way your Oracle database files are organized, you can use RMAN
Backup/Restore capabilities.
For more information about migrating databases to and from ASM by using recovery
manager, see this website:
http://docs.oracle.com/cd/B14117_01/server.101/b10734/rcmasm.htm
CPU
To evaluate the CPU that is needed (quantity of IFLs), you can ask IBM Techline for a sizing,
or use the IBM SURF and SCON tools if you have access to them.
For this evaluation you need the following information:
Details about your source server (server type and model, number of CPUs and cores,
type and speed of cores, and so on)
The average and peak CPU usage (from NMON or SAR data for Linux, Perfmon for
Windows, or an equivalent product)
The type of workload (for example: production database)
You can ask your IBM representative to get this evaluation.
Memory
On Linux on System z, do not oversize the memory, especially in a virtualized
environment.
You need to make sure that the source database memory is optimized. For more information,
see the SGA target advisory and PGA target advisory sections of the Oracle AWR reports.
Assuming that the source system memory is optimized, the suggestion is to use the same
quantity of memory on the target database as on the source database.
To get a representative Oracle AWR report, you must take your snapshots during the peak
period of your workload. You need to determine the peak period within the most loaded day of
the week (or month at certain periods). You can run an ADDM report from your Oracle
Enterprise Manager Database Control for this purpose.
You can find the memory that is used by Oracle in the AWR report, in the Memory Statistics
section, as shown in Figure 8-1.
You can find the quantity of memory that is allocated to the Oracle database in the AWR
report, in the init.ora section, as shown in Figure 8-2.
To make sure SGA and PGA are optimized, you can check the AWR reports advisory section.
For SGA, you find the information in the AWR SGA Target Advisory section, as shown in
Figure 8-3.
For information about PGA, see the AWR PGA Target Advisory section, as shown in
Figure 8-4.
For the dedicated server processes, you can calculate the memory that is needed as shown
in the following example:
Memory needed for dedicated server processes = Max(logons concurrent) X memory
used per thread
(On average, dedicated connections use 4.5 MB per connection, which is application
dependent.)
You can find the concurrent logons in the AWR report, in the Instance Activity Stats - Absolute
Values section, as shown in Figure 8-5.
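The calculation above can be sketched in a few lines of shell. The logon count here is illustrative; substitute the value from your own AWR report:

```shell
# Hypothetical sizing sketch: memory for dedicated server processes.
# max_logons comes from the AWR "Instance Activity Stats - Absolute Values"
# section; 4.5 MB per connection is the rough average cited in the text.
max_logons=200
mb_per_conn_tenths=45   # 4.5 MB, kept in tenths for integer arithmetic
echo "Dedicated server memory: $(( max_logons * mb_per_conn_tenths / 10 )) MB"
```

With 200 concurrent logons, this prints 900 MB; repeat the calculation with your own peak logon count.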
I/O information
You can find the I/O information in the AWR report, in the Load Profile section, as shown in
Figure 8-6 on page 169. The Physical reads and Physical writes values help you size the I/O
for this workload on System z.
You can also collect I/O and CPU statistics with operating system tools such as SAR,
NMON, VMSTAT, and IOSTAT.
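As an illustration of working with such operating system data, the following sketch extracts the peak utilization sample from collected output. The file, its format, and the numbers are invented for the example; real sar output has more columns and header lines:

```shell
# Hypothetical example: find the peak CPU sample in sar-style output.
# The three sample lines below stand in for real 'sar -u' data
# (timestamp, CPU id, %user, %nice, %system, %iowait, %idle).
cat > cpu.txt <<'EOF'
12:00:01 all 35.2 0.1 4.7 0.3 59.7
12:10:01 all 72.8 0.0 6.1 0.2 20.9
12:20:01 all 51.4 0.2 5.0 0.1 43.3
EOF
# Track the maximum %user value and report it with its timestamp.
awk 'NR==1 || $3>max {max=$3; t=$1} END {print t, max"%"}' cpu.txt
```

For the sample data above, this reports the 12:10:01 interval as the peak, which is the kind of value you would feed into the sizing discussion with IBM Techline.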
8.4.10 Considerations when you are migrating from Oracle on z/OS to Oracle
on Linux on System z
In this section, we describe the considerations about the character set and Export/Import with
Data Pump utilities.
Character set
z/OS does not have the same character set as Linux; z/OS is EBCDIC whereas Linux is
ASCII. Unicode solves code page mapping issues. For more information, see My Oracle
Support note: Choosing a database character set means choosing Unicode, 333489.1.
8.5.1 Infrastructure
The source infrastructure is IBM AIX 6.1/Power 7 based. The source DB version is Oracle
11gR2.
The target infrastructure is SUSE Linux Enterprise Server 11 SP1 on zEnterprise. The target
DB version is Oracle 11gR2.
Task                     Location   Role name
Snapshot of DB status    Source     DBA
Export source DB         Source     DBA
                         Source     DBA
                         Source     DBA
Import source DB         Target     DBA
Open DB                  Target     DBA
Check snapshot           Target     DBA
USERNAME                          USER_ID
APEX_030200                            77
OWBSYS                                 78
OWBSYS_AUDIT                           82
SCOTT                                  83
SOE                                    85
SH                                     86
XS$NULL                        2147483638
32 rows selected.
2. Identify the users that you are migrating. In our case, we are interested in SOE and SH.
3. Check the number of objects of each type before and after the migration, as shown in the
following example:
select owner, object_type, count(*) from dba_objects
where owner in ('SOE','SH')
group by owner, object_type order by 1,2;
OWNER   OBJECT_TYPE           COUNT(*)
------- ------------------- ----------
SH      INDEX                       10
SH      TABLE                        8
SOE     INDEX                       23
SOE     PACKAGE                      1
SOE     PACKAGE BODY                 1
SOE     SEQUENCE                     2
SOE     TABLE                        9
SOE     VIEW                         2
8 rows selected.
4. Check the number of invalid objects, as shown in the following example:
select owner ||' - '|| object_name ||' - '|| object_type from dba_objects where
status = 'INVALID';
no rows selected
This information is important, even for those objects that are not included in the previous
schemas.
5. Check the grants for each user, as shown in the following example:
select 'grant '||granted_role||' to '||grantee||';'
from dba_role_privs
where grantee ='SOE'
UNION
select
'grant '||privilege||' on '||owner||'.'||table_name
||' to '||grantee||';'
from dba_tab_privs
where grantee ='SOE'
UNION
select
'grant '||privilege||' ('||column_name
||') on '||owner||'.'||table_name
||' to '||grantee||';'
from dba_col_privs
where grantee ='SOE';
'GRANT'||PRIVILEGE||'TO'||GRANTEE||';'
Chapter 8. Cross-platform migration overview
export.log
soe01.dmp
soe02.dmp
soe03.dmp
soe04.dmp
soe05.dmp
soe06.dmp
Compressing the files to reduce the transfer time is a recommended best practice.
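One way to follow that practice is a plain gzip pass over the dump files. The file names match the example above; the compression level is an arbitrary choice, and any compression tool works:

```shell
# Hypothetical example: compress the Data Pump dump files before transfer.
# gzip -9 trades CPU time for the smallest files; each soeNN.dmp becomes
# soeNN.dmp.gz, which you transfer and then gunzip on the target system.
gzip -9 -v soe0*.dmp
```

Remember to decompress the files on the target before running the import.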
3. Check the target availability, as shown in the following example:
$ ping 10.3.58.126
PING 10.3.58.126 (10.3.58.126): 56 data bytes
64 bytes from 10.3.58.126: icmp_seq=0 ttl=63 time=0 ms
...
4. Transfer the files, as shown in the following example:
$ sftp 10.3.58.126
The authenticity of host '10.3.58.126 (10.3.58.126)' can't be established.
RSA key fingerprint is 10:00:25:b6:de:9a:50:d8:15:c9:b9:b6:d5:cd:f0:a5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.3.58.126' (RSA) to the list of known hosts.
oracle@10.3.58.126's password:
Connected to 10.3.58.126.
sftp> cd /opt/oracle/export
sftp> mput *.dmp
Uploading soe01.dmp to /opt/oracle/export/soe01.dmp
soe01.dmp
100% 312MB 26.0MB/s
00:12
Uploading soe02.dmp to /opt/oracle/export/soe02.dmp
soe02.dmp                      100%  541MB  25.8MB/s  00:21
Uploading soe03.dmp to /opt/oracle/export/soe03.dmp
soe03.dmp                      100%  339MB  26.1MB/s  00:13
Uploading soe04.dmp to /opt/oracle/export/soe04.dmp
soe04.dmp                      100%  243MB  27.0MB/s  00:09
Uploading soe05.dmp to /opt/oracle/export/soe05.dmp
soe05.dmp                      100%  303MB  25.2MB/s  00:12
Uploading soe06.dmp to /opt/oracle/export/soe06.dmp
soe06.dmp                      100%   25MB  24.7MB/s  00:01
sftp> bye
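For repeatable migrations, the interactive session above can be driven by an sftp batch file instead. This sketch reuses the host and directory from the example; the batch-file name is an assumption, and the non-interactive run requires key-based SSH authentication, because sftp -b does not prompt for passwords:

```shell
# Hypothetical batch file that performs the same transfer non-interactively.
cat > transfer.batch <<'EOF'
cd /opt/oracle/export
mput *.dmp
bye
EOF
# Run it when the target is reachable and SSH keys are in place:
# sftp -b transfer.batch oracle@10.3.58.126
```

This makes the transfer step scriptable, which is useful when the migration must be rehearsed several times.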
SOE     PACKAGE                      1
SOE     PACKAGE BODY                 1
SOE     SEQUENCE                     2
SOE     TABLE                        9
SOE     VIEW                         2
8 rows selected.
5. Check the grants for each user, as shown in the following example:
select 'grant '||privilege||' to '||grantee||';'
from dba_sys_privs
where grantee ='SOE'
UNION
select 'grant '||granted_role||' to '||grantee||';'
from dba_role_privs
where grantee ='SOE'
UNION
select
'grant '||privilege||' on '||owner||'.'||table_name
||' to '||grantee||';'
from dba_tab_privs
where grantee ='SOE'
UNION
select
'grant '||privilege||' ('||column_name
||') on '||owner||'.'||table_name
||' to '||grantee||';'
from dba_col_privs
where grantee ='SOE'
;
'GRANT'||PRIVILEGE||'TO'||GRANTEE||';'
-------------------------------------------------------------------------------
grant ALTER SESSION to SOE;
grant CONNECT to SOE;
grant CREATE MATERIALIZED VIEW to SOE;
grant CREATE VIEW to SOE;
grant QUERY REWRITE to SOE;
grant RESOURCE to SOE;
grant UNLIMITED TABLESPACE to SOE;
7 rows selected.
You can see that one grant is missing. Add it as shown in the following example:
grant EXECUTE on SYS.DBMS_LOCK to SOE;
Grant succeeded.
This was an example of a simple migration. There were no database links, no scheduled jobs,
and no external tables.
8.6 Summary
In Table 8-3, we summarize the features and the limitations of most of the techniques that
were described in this chapter. This overview can be a good starting point to evaluate what
you can and cannot use in your organization.
Table 8-3 Migration techniques summary
Technology                  Complexity  Space requirements           Downtime                     Limitations
EXP/IMP Data Pump           Simple      Significant (database size)
Transportable Tablespaces   Complex     Significant (database size)
Create Table As Select                  No extra space needed        Significant (database size)  Table-by-table
Streams                     Complex                                  Minimum                      Additional repository database needed; some data types not supported
Oracle GoldenGate           Complex     Space for the archive logs   Minimum                      Additional product required; some data types not supported
Transportable Database      Simple      No extra space needed        Significant (database size)  Same endianness required
Oracle Data Guard           Simple                                   Minimum                      Cross-platform limitations
Tip: Testing is the key part of the migration. You need several iterations of tests to
successfully perform your migration.
Chapter 9. High Availability and Disaster Recovery environment for Oracle
This chapter is an introduction to planning a highly available and disaster recovery (HADR)
environment for Oracle Databases that are running on Linux on System z in a virtualized
environment. IBM System z hardware is designed for continuous availability and offers a set
of reliability, availability, and serviceability features (RAS).
Oracle Database is one of the leading technologies with built-in High Availability options. The
combination of IBM System z and Oracle Database provides a system that is comprehensive,
reliable, and capable of deploying highly available environments that offer varying levels of
data, application, and infrastructure resilience.
Many tiers of an HADR solution are possible. Oracle recommends Maximum Availability
Architecture (MAA) as the best practices blueprint for an HA environment. The right HADR
configuration is a balance between recovery time and recovery point requirements and cost.
Based on our experiences in implementing Oracle on Linux on System z, we provide a road
map in this chapter to plan an HADR environment for Oracle databases.
A highly available environment is a combination of technology, coordination across multiple
teams, change control, skills, enterprise culture, and operational discipline. This chapter is an
introduction to the various technology options that are available to users (in-depth information
that is necessary to implement the architectures is not included here). We encourage the
reader to review Oracle MAA white papers that are available on the Oracle Technology
Network website for more in-depth descriptions about implementing the right solutions for
complex environments.
For more information about High Availability, see Chapter 2, Getting started on a proof of
concept project for Oracle Database on Linux on System z on page 13, Chapter 3, Network
connectivity options for Oracle on Linux on IBM System z on page 29, and Chapter 6, Using
z/VM Live Guest Relocation to relocate a Linux guest on page 117.
High Availability
Oracle technologies for High Availability
High Availability with z/VM
Disaster Recovery solutions
Summary
Figure 9-1 shows the layers of this environment: the Data Center, System z server, LPAR,
z/VM, Linux guest, and Oracle instance, together with the network, firewall, applications,
users, environment, facility, security, and storage components.
Data Center
The Data Center is the top layer where all the components that are needed for Oracle
databases on Linux on System z are running. This center encompasses all of the servers,
storage, network, facility, human resources, software, and other components that are
required to run the databases.
Servers
A System z server is the hardware that provides the computing power (CPU, memory, I/O
connections, and power supply).
Logical partitions
System z servers are typically divided into multiple logical partitions (LPARs) to share the
System z hardware resources.
z/VM
z/VM is a hypervisor operating system that is running in an LPAR.
Linux guests
The z/VM hypervisor that is running in an LPAR can host one or more Linux operating
systems in entities that are known as virtual machines. We refer to a virtual machine that is
running an operating system as a guest. A Linux operating system can also run natively in an
LPAR.
Oracle instances
In a Linux guest, one or more Oracle instances are running.
Disk Storage
z/VM, Linux, and Oracle need non-volatile storage for their operations; for example, Oracle
instances keep their database files on storage.
In the environment that is shown in Figure 9-1 on page 182, there can be failures in any one
of the components, which can cause unavailability. Planned or unplanned downtime is costly
and in the following sections we describe the general causes for planned and unplanned
outages in the environment that is shown in Figure 9-1 on page 182.
It might be surprising to note that the largest share of time that a database is rendered
unavailable is because of planned maintenance activities.
In many situations, the planned downtime activities can be coordinated with users in advance.
With the understanding of the business requirements and proper planning, the Service Level
Agreements (SLA) can be met and the effect to the user community can be minimized.
Data Center
The Data Center where the systems are deployed might not be available because of any of
the following factors:
Natural disasters
Sabotages
Power failures
Network or firewall failures
Hardware components
The hardware components that are hosting the applications might not be available for any of
the following reasons:
Hardware failures (CPU, memory, power supply, cables, storage, or switches)
Bottleneck of resources (CPU, memory, storage, or network)
Application components
The applications, load balancers, web servers, and other associated software components
might fail because of the following factors:
Application Logic
Application overload
Oracle database
Oracle databases might not be available for any of the following reasons:
Instance failure
Components (listeners) are not available
Data file corruption or deletion
Logical data corruption
Security violations
Administration issues (file sizes cannot be extended)
Performance issues
Scalability issues (additional users)
Requirements to apply critical patches
SLA
The SLA is the downtime for an application that is agreed upon with the user. This might span
from system availability to functional availability in the application. Typically, the query
response time during online processing or the report creation time during batch processing
dictates this requirement.
In business environments, users also divide their applications into multiple tiers. Typically,
Tier 1 applications can have the strictest RTO, RPO, and SLA requirements. Tier 2 and Tier 3
have less stringent requirements. The HADR solutions for Tier 1 applications normally are
costlier to implement.
Note: The right High Availability configuration is a balance between the recovery
requirements and cost.
For Oracle database backup, Oracle's Recovery Manager (RMAN) utility is the ideal choice
for most users. Many third-party vendors, such as IBM Tivoli, integrate with RMAN
to offer value-added services for backup and recovery.
Using RMAN for backup includes the following advantages:
RMAN automatically determines what files are to be backed up and what files must be
used for media-recovery operations.
Online database backups are done without placing tablespaces in backup mode.
Block-level incremental backups and data block integrity checks are done during backup
and restore operations.
Automated tablespace point-in-time recovery and block media recovery is available.
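As an illustration of those capabilities, a minimal RMAN job might look like the following sketch. The command-file name and the choice of a level 0 backup plus obsolete-file cleanup are assumptions for the example, not part of the original text; the commands themselves use standard RMAN syntax:

```shell
# Hypothetical example: write an RMAN command file for a level 0 backup
# of the database plus archived logs, then clean up obsolete backups.
cat > level0.rman <<'EOF'
BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
DELETE NOPROMPT OBSOLETE;
EOF
# Invoke RMAN on a database server (requires an Oracle environment):
# rman target / cmdfile=level0.rman log=level0.log
```

Subsequent runs could use INCREMENTAL LEVEL 1 to take the block-level incremental backups that are mentioned above.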
HADR applicability
A solid backup and recovery process is the foundation and part of any HADR configuration.
HADR applicability
HADR applicability includes data corruption that is caused by human errors.
ASM features
ASM includes the following features:
An ASM disk group is a collection of disks that is managed as a unit. A disk group can
have as many as 10,000 disks, and each disk can have a maximum size of 2 TB.
Each disk group is self-contained and has its own ASM metadata. An ASM instance
manages that ASM metadata.
In Oracle 11.2, three disk groups are specified: one for data, one for flash recovery and
archive, and another for SPFILE, voting, and Oracle Cluster Registry (OCR) files.
In large enterprises, the data disks can be grouped based on the storage tiers. The
best practice is to use disks of a similar performance level and similar size within a group.
The disk size is not an influential factor, and a minimum of four disks is recommended per
group.
ASM looks for disks in the operating system location that is specified by the
ASM_DISKSTRING initialization parameter.
For 11gR2, the SCAN listener runs from the Grid Infrastructure (GI) home and the database
listener from the database home.
Oracle recommends RMAN to back up and transport database files in ASM.
ASM benefits
ASM features the following benefits:
ASM spreads data evenly across all disks in a disk group. This software-controlled striping
evenly distributes the database files to eliminate the hot spots.
Optionally, ASM supports two-way mirroring in which each file extent receives one
mirrored copy. It also supports three-way mirroring in which each file extent receives two
mirrored copies. Additionally, ASM mirrors at file level, and the mirrored copy is kept at a
disk other than the original copy disk. This configuration improves the availability.
Dynamic addition of disks and removal facility of ASM improves the storage availability.
ASM can now store Voting and OCR files for Oracle clusters.
ASM reduces administrative tasks by enabling files that are stored in Oracle ASM disk
groups to be Oracle Managed Files. It reduces the complexity of managing thousands of
files in a large environment.
HADR applicability
HADR applicability includes the following factors:
Data corruption
Storage failures
HADR applicability
HADR applicability includes the following features:
Databases can be consolidated into a single cluster for efficient administration. If a server
fails, they can be quickly relocated.
Ready to scale and upgrade to multi-node Oracle RAC for scalability.
HADR applicability
HADR applicability includes the following features:
Unlike Oracle Data Guard, RAC is a single database (no secondary database) and data
corruptions, lost writes, or database-wide failures are possible.
Storage complexity.
HADR applicability
HADR applicability includes the following features:
FAN
FAN emits events when database conditions change, such as when a service, instance, or
site goes up or down. The events are propagated by Oracle Notification System (ONS) or
Streams Advanced Queuing (AQ). Compared to a TCP/IP timeout, FAN provides fast
detection of the condition change and fast notification.
The FAN events can be used by the applications or users that connect to a new primary
database upon failover by using Fast Connection Failover (FCF).
FCF
FCF is an Oracle High Availability feature for Java Database Connectivity (JDBC) applications
and supports the JDBC Thin and JDBC Oracle Call Interface (OCI) drivers. FCF works with
the JDBC connection caching mechanism, FAN, and Oracle RAC.
FCF provides the following High Availability features for client connections in planned and
unplanned outages:
Rapid detection of database service, instance, or node failures, followed by stopping and
removing invalid connections from the pool
Recognition of new nodes that join an Oracle RAC cluster
Load balancing the connection requests to all active Oracle RAC instances
Chapter 9. High Availability and Disaster Recovery environment for Oracle
HADR applicability
A server failure, Linux crash, or other fault can cause the crash of an individual Oracle
instance in an Oracle RAC database. To maintain availability, application clients that are
connected to the failed instance are quickly notified of the failure and immediately establish
a new connection to the surviving instances of the Oracle RAC database.
An Oracle Data Guard configuration can include any combination of these types of standby
databases.
A physical standby database can be used for taking full and incremental backups, creating
reports, and creating clone databases.
HADR applicability
HADR applicability includes the following features:
Data Guard technology addresses High Availability and Disaster Recovery requirements.
Data Guard technology complements Oracle RAC.
Provides one or more synchronized standby databases and protects data from failures,
disasters, errors, and corruptions.
HADR applicability
HADR applicability includes the following features:
Maintains transactional integrity; it is resilient against interruptions and failures.
Heterogeneous replication, transformations, subsetting, and multiple topologies.
All sites fully active (read/write).
HADR applicability
z/VM offers the following technologies to enhance the High Availability of Oracle databases
on Linux on System z environment:
Single points of system failures can be avoided by implementing z/VM multi-system
clustering technology, as described in Chapter 6, Using z/VM Live Guest Relocation to
relocate a Linux guest on page 117.
Using multiple LPARs and distributing Linux guests across them reduces several
potential single points of failure at the system-image level.
The applications that are running on LPARs in a single System z server can communicate
with each other by using HiperSockets, which are memory-to-memory data transfers. This
avoids any external traffic and is a good choice to implement Oracle RAC interconnect
requirements.
Virtual switch (VSWITCH) under z/VM and OSA Channel Bonding under LPAR can be
used to avoid network single point of failures.
For ECKD DASD devices that are accessed over FICON channels, redundant
multipathing is provided and handled invisibly to the Linux operating system.
For SCSI (fixed-block) LUNs that are accessed over System z FCP channels, each path to
each LUN appears to the Linux operating system as a different device. The Linux kernel (2.6
and later) multipath facility handles this configuration and provides High Availability.
Tip: For more information about High Availability with System z, see High-Availability of
System Resources: Architectures for Linux on IBM System z Servers,
ZSW03236-USEN-01, which is available at this website:
http://public.dhe.ibm.com/common/ssi/ecm/en/zsw03236usen/ZSW03236USEN.PDF
Hardware
Software
Network
Site facilities
Human Resources
Under-used standby resources
No immediate ROI until a disaster occurs
Highly reliable
Low complexity
Proven technologies
Less expensive to implement
Many System z customers have well-established business processes for DR scenarios,
which usually use the Capacity BackUp (CBU) features of System z. Their current DR
environments also can be easily extended to include Oracle databases that are running in
the Linux on System z environment. For Oracle databases, the major requirement for DR is
data resiliency, which can be achieved by any of the following technologies:
Storage array-based remote mirroring solutions
Extended cluster solutions (Extended RAC)
Oracle Data Guard-based solutions.
Oracle MAA recommends that you build a DR solution that is based on Oracle Data Guard
technology for Oracle databases for the following reasons:
9.5 Summary
In this chapter, we described how to plan for a highly available and disaster recovery (HADR)
environment for Oracle Databases that are running on Linux on System z in a virtualized
environment.
HADR solutions are possible by using the Oracle MAA blueprint with IBM System z
hardware's proven design for continuous availability, which offers a set of reliability,
availability, and serviceability (RAS) features. The right HADR configuration is a balance
between recovery time and recovery point requirements and cost.
http://www-03.ibm.com/systems/z/advantages/resiliency/datadriven/cuod.html#temporary
Part 3
Provisioning an Oracle
environment on Linux
on System z
In this section, we describe several alternatives for provisioning a Linux guest for an Oracle
Database on System z. The following methods are available:
Provisioning an environment by using scripts (which are provided), as described in
Chapter 10, Automating Oracle on System z on page 199.
Provisioning an environment by using Tivoli Products, as described in Chapter 11,
Provisioning an Oracle environment on page 231.
Provisioning an environment by using Velocity software z/PRO, which is described in
Chapter 12, Using z/Pro as a Cloud infrastructure for Oracle on page 279.
Chapter 10. Automating Oracle on System z
In the environment that is described in this chapter, z/VM 6.2 at the latest service level (1201)
was installed on an LPAR and sufficient processors (four), memory (26 GB central, 2 GB
expanded), disk space, and networking resources (OSA devices and TCP/IP addresses)
were available.
Complete the tasks that are described in this section to implement IaaS.
For the SYSTEM CONFIG file configuration, some details are provided in this section. For more
information about TCP/IP, the z/VM FTP server, DirMaint, and SMAPI, see the sections
Configure TCP/IP and Turn on the z/VM FTP server in Chapter 18 of The Virtualization
Cookbook for z/VM 6.2 RHEL 6.2 and SLES 11 SP2, which is available at this website:
http://www.vm.ibm.com/devpages/mikemac/CKB-VM62.PDF
The SYSTEM CONFIG file is the first configuration file that z/VM processes when it loads initially.
A layer 3 virtual switch named VSWITCH1 is created for the z/VM TCP/IP stack. A layer 2 virtual
switch named VSWITCH2 is created for the Linux systems primary network interfaces. Oracle
Grid requires a private interconnect among all nodes in a cluster. A third layer 2 virtual switch
named VSWITCH3 with no OSA connection is created for this purpose. Layer 2 virtual switches
are now recommended over layer 3 because they are required for DHCP and IPv6.
To customize the SYSTEM CONFIG file, complete the following steps:
1. Link and access the PMAINT CF0 disk, as shown in the following example:
==> link pmaint cf0 cf0 mr
==> acc cf0 f
2. Edit the SYSTEM CONFIG file, as shown in the following example:
==> x system config f
3. The following highlighted items are configured in this file. At the top, the many DASD
volumes are added as User_Volume_List statements so that z/VM can use them as
minidisks:
/**********************************************************************/
/* User volumes for local minidisks
*/
/**********************************************************************/
User_Volume_List LX7W01 LX7U1R
User_Volume_List LX6601
User_Volume_List LX6602
...
4. Add the Disconnect_Timeout off and Vdisk clauses in the Features statement and
configure the system so that disconnected users are not forced off, and Linux virtual
machines can create virtual disks for in-memory swap spaces, as shown in the following
example:
...
Features ,
  Disable ,
    Set_Privclass ,
    Auto_Warm_IPL ,
    Clear_TDisk ,
  Enable,
    STP_TZ,
  Retrieve ,
    Default 99 ,
    Maximum 255 ,
  MaxUsers noLimit ,
  Passwords_on_Cmds ,
    Autolog yes ,
    Link yes ,
    Logon yes ,
  Disconnect_Timeout off ,
  Vdisk ,
    Syslim infinite ,
    Userlim infinite
...
5. Define virtual switches. In this example, devices 2040-2045 are OSA devices on one
CHPID and OSA card, while devices 2060-2065 are on a different CHPID on another OSA
card. This architecture prevents VSWITCH1 and VSWITCH2 from having a single point of
failure, as shown in the following example:
/**********************************************************************/
/*                           VSWITCHes                                */
/* VSWITCH1 - layer 3 - z/VM TCPIP stack                              */
/* VSWITCH2 - layer 2 - Linux primary interfaces                      */
/* VSWITCH3 - layer 2 - Linux secondary interconnect - no OSA         */
/**********************************************************************/
DEFINE VSWITCH VSWITCH1 ...
MODIFY VSWITCH VSWITCH1 ...
DEFINE VSWITCH VSWITCH2 ...
DEFINE VSWITCH VSWITCH3 ...
The SHUTDOWN time is set to 10 minutes (600 seconds). With this setting, when z/VM is shut
down, it sends a signal to all guests and each Linux system has up to 10 minutes to shut
down cleanly, as shown in the following example:
/**********************************************************************/
/*
Set signal for shutdown
*/
/**********************************************************************/
SET SIGNAL SHUTDOWN 600
...
z/VM should now be sufficiently configured to install Oracle.
USER RH62GOLD ... 1G 6G G
MDISK ... 10016 LX9A12 MR
MDISK ... 30050 LX6602 MR
MDISK ... 10016 LX9A13 MR
A similar virtual machine, S112GOLD, is created for the SUSE Linux Enterprise Server 11 SP2
golden image.
The DEDICATE 0400 and 0500 statements supply FCP devices for access to the SAN, as
shown in the following example:
USER LNXC2N1 ORACLE 4G 6G G
INCLUDE LNXDFLT
MDISK 0100 3390 0001 10016 LX9A1D MR
MDISK 0101 3390 0001 30050 LX6606 MR
MDISK 0200 3390 1 1000 LX9A0E MR
MINIOPT NOMDC
MDISK 0201 3390 1001 1000 LX9A0E MR
MINIOPT NOMDC
MDISK 0202 3390 2001 1000 LX9A0E MR
MINIOPT NOMDC
MDISK 0302 3390 20033 10016 LX6705 MR
MINIOPT NOMDC
DEDICATE 0400 B803
DEDICATE 0500 B903
Define other virtual machines to be part of the cluster. There are LINK statements to the
seven Oracle ASM disks on the first node in the cluster in MW mode. Normally, this link
mode is dangerous; however, Oracle CRS is designed to work with multiple systems that
are writing to the same disks, as shown in the following example:
USER LNXC2N2 ORACLE 4G 6G G
INCLUDE LNXDFLT
MDISK 0100 3390 0001 10016 LX9A0A MR
MDISK 0101 3390 0001 30050 LX6702 MR
LINK LNXC1N1 0200 0200 MW
LINK LNXC1N1 0201 0201 MW
LINK LNXC1N1 0202 0202 MW
MDISK 0302 3390 0001 10016 LX9A0B MR
DEDICATE 0400 B804
DEDICATE 0500 B904
This completes the profile and virtual machine definitions for a reference environment. This
configuration can be considered IaaS.
10.2 PaaS
Now that a virtual machine is defined, an operating system can be added to it. This can be
considered PaaS. A Linux distribution must be obtained. This chapter focuses on Red Hat
Enterprise Linux (RHEL) 6.2, which is now formally certified by Oracle.
To set up PaaS, complete the tasks that are described in this section.
10.2.1 Preparing to install Red Hat Enterprise Linux 6.2 on the golden image
When RHEL 6.2 Linux is installed, two CMS files are needed: a parameter file and a
configuration file. The parameter file has a few common settings and it points to the
configuration file. It is common to keep all Linux parameter and configuration files on the
same disk, with each Linux virtual machine linking to them read-only. In this example, this is
the LNXMAINT 192 disk.
Create the parameter and configuration files on the LNXMAINT 192 disk with the user ID being
the file name. This disk is the read-only 191 disk for all Linux virtual machines. Following are
the contents of those two files:
==> type rh62gold parm-rh6 d
root=/dev/ram0 ro ip=off ramdisk_size=40000
CMSDASD=191 CMSCONFFILE=RH62GOLD.CONF-RH6
vnc vncpassword=12345678
==> type rh62gold conf-rh6 d
DASD=100-101,300-302
HOSTNAME=rh62gold.itso.ibm.com
NETTYPE=qeth
IPADDR=9.12.7.2
SUBCHANNELS=0.0.0600,0.0.0601,0.0.0602
NETMASK=255.255.240.0
SEARCHDNS=itso.ibm.com
GATEWAY=9.12.4.1
DNS=9.12.6.7
MTU=1500
PORTNAME=DONTCARE
PORTNO=0
LAYER2=1
IPADDR2=10.1.1.2
All variables are recognized by the RHEL installer except for IPADDR2. This variable is used
later by the first boot script, boot.onetime, for the private interconnect.
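Because the CONF file consists of simple NAME=value lines, a first boot script can extract IPADDR2 with standard tools. The following sketch is illustrative only (the real boot.onetime script is supplied in the SG248104.tgz file); the sample values are copied from the CONF file above, and the commented ifconfig call shows where the address would be applied:

```shell
# Stand-in for the CONF file contents shown above (NAME=value lines)
conf_text='HOSTNAME=rh62gold.itso.ibm.com
IPADDR=9.12.7.2
IPADDR2=10.1.1.2'

# Extract the private-interconnect address from the IPADDR2 line
IPADDR2=$(printf '%s\n' "$conf_text" | sed -n 's/^IPADDR2=//p')
echo "$IPADDR2"

# At first boot, the script could then configure the second interface:
# ifconfig eth1 "$IPADDR2" netmask 255.255.255.0 up
```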
10.2.2 Installing Red Hat Enterprise Linux 6.2 Linux on the golden image
RHEL 6.2 is installed onto the RH62GOLD virtual machine (a detailed description of the
installation is outside the scope of this book). For more information, see Chapter 8 of The
Virtualization Cookbook for z/VM 6.2 RHEL 6.2 and SLES 11 SP2, which is available at this
website:
http://www.vm.ibm.com/devpages/mikemac/CKB-VM62.PDF
The virtual machines are given a 3390-9 (approximately 7 GB) for the Linux system, a
3390-27 (approximately 21 GB) for the Oracle binaries, and another 3390-9 for a swap space
on disk.
The root file system is not put into a logical volume, but many other file systems are. The
rationale for this is that logical volumes allow file system sizes to be easily extended. The root
file system is in a single partition because if there is any problem with logical volumes, the
Linux system still boots. Because the root file system cannot grow and is relatively small
(512 MB), it is important that it is not allowed to fill up. Data of large sizes should be put into
logical volumes such as /tmp/ or /opt/.
The FCP/SCSI LUNs for Oracle data are not added during the Linux installation. This is done
after cloning is completed.
Mount point  Size    Volume group  Logical volume  Minidisk
/            512 MB  None          None            100
/tmp/        1 GB    system_vg     tmp_lv          100
/usr/        3 GB    system_vg     usr_lv          100
/var/        512 MB  system_vg     var_lv          100
/opt/        20 GB   opt_vg        opt_lv          101
swap         7 GB    None          None            302
After starting the RHEL 6.2 install, but before starting an SSH session as the user install,
there is a possible intermediate step that is necessary. There is an issue in which the Red Hat
installer does not recognize disks that were formatted with CPFMTXA. If you used the dasdfmt
command to format the minidisks, you can skip this step. If not, start an SSH session and use
the dasdfmt command to format the disks. Complete the following steps:
1. Start an SSH session to the installation system and log in as root. A password is not required.
2. Run the lsdasd command to observe the disks. In the following example, dasdb, dasdc,
and dasdf are minidisks that must be formatted:
# lsdasd
Bus-ID     Status    Name   Device  Type  BlkSz  Size     Blocks
==============================================================================
0.0.0100   active    dasdb  94:4    ECKD  ???    7042MB   ???
0.0.0101   active    dasdc  94:8    ECKD  ???    21128MB  ???
0.0.0300   active    dasdd  94:12   FBA   ???    256MB    ???
0.0.0301   active    dasde  94:16   FBA   ???    512MB    ???
0.0.0302   active    dasdf  94:20   ECKD  ???    7042MB   ???
3. Use a bash for loop and put the dasdfmt commands in the background so the formats can
be performed in parallel, as shown in the following example:
# for i in b c f
> do
>   dasdfmt -b 4096 -y -f /dev/dasd$i &
> done
[1] 640
[2] 641
[3] 642
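Because the dasdfmt jobs run in the background, any script that automates this step should block with wait until all of them finish. A minimal sketch of the pattern follows, with echo standing in for the actual dasdfmt command:

```shell
for i in b c f; do
    # In a real run this would be: dasdfmt -b 4096 -y -f /dev/dasd$i &
    echo "formatting /dev/dasd$i" &
done
wait    # block until every background format job has completed
echo "all formats complete"
```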
4. When you are prompted by the question from the installer: Which type of installation
would you like?, select Create Custom Layout. Use the GUI tools to create file systems,
swap spaces, and logical volumes. As shown in Figure 10-1, the Summary window shows
the disk and swap space layout.
Figure 10-1 File system and swap space layout in RHEL 6.2 golden image
5. Complete the following steps to choose the software during the installation process:
a. Click Customize now at the bottom of the main panel, as shown in the top of
Figure 10-2 on page 209.
b. Remove some package groups from the Base System group, as shown in
Figure 10-2 on page 209.
c. Add two package groups to the Development group. The development tools were
added because many software products require the GNU Compiler Collection (gcc)
and the associated tools and libraries that it pulls in.
Figure 10-2 Software installation choices in Base System and Development groups
After the installation is complete, start an SSH session and observe the file systems and
swap spaces with the df -h and swapon -s commands. The reference system is shown in the
following example:
# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/dasdd1                   504M  162M  317M  34% /
tmpfs                         498M     0  498M   0% /dev/shm
/dev/mapper/opt_vg-opt_lv      21G  172M   20G   1% /opt
/dev/mapper/system_vg-tmp_lv 1008M   34M  924M   4% /tmp
/dev/mapper/system_vg-usr_lv  3.0G  1.5G  1.4G  53% /usr
/dev/mapper/system_vg-var_lv  504M   62M  417M  13% /var
# swapon -s
Filename     Type       Size     Used  Priority
/dev/dasdc1  partition  259956   0     -1
/dev/dasdb1  partition  519924   0     -2
/dev/dasda1  partition  7211416  0     -3
The RHEL 6.2 golden image should now be installed and is ready to be configured.
10.2.3 Configuring the Red Hat Enterprise Linux 6.2 golden image
The tasks that are described in this section are recommended to configure the golden image.
While some of these tasks might seem specific to Oracle servers, often they are helpful for all
Linux systems regardless of the running workload.
Name: DONTCARE     Devices: 3   VSWITCH: SYSTEM VSWITCH2
Name: UNASSIGNED   Devices: 3   VSWITCH: SYSTEM VSWITCH3
There is also a second NIC starting at virtual device address 700. This was created in the
profile LNXDFLT and it is attached to VSWITCH3.
2. List the configured NIC by using the znetconf -c command, as shown in the following
example:
# znetconf -c
Device IDs                 Type    Card Type      CHPID Drv.  Name  State
--------------------------------------------------------------------------
0.0.0600,0.0.0601,0.0.0602 1731/01 GuestLAN QDIO  00    qeth  eth0  online
This shows that the 600 NIC is configured and associated with the eth0 interface.
3. List the unconfigured NIC by using the znetconf -u command, as shown in the following
example:
# znetconf -u
Scanning for network devices...
Device IDs                 Type    Card Type   CHPID Drv.
----------------------------------------------------------
0.0.0700,0.0.0701,0.0.0702 1731/01 OSA (QDIO)  01    qeth
This shows that the 700 NIC is not yet configured.
4. Configure all NICs by using the znetconf -A command and again view the configured
NICs, as shown in the following example:
# znetconf -A
Scanning for network devices...
Successfully configured device 0.0.0700 (eth1)
# znetconf -c
Device IDs                 Type    Card Type      CHPID Drv.  Name  State
--------------------------------------------------------------------------
0.0.0600,0.0.0601,0.0.0602 1731/01 GuestLAN QDIO  00    qeth  eth0  online
0.0.0700,0.0.0701,0.0.0702 1731/01 GuestLAN QDIO  01    qeth  eth1  online
This shows that the 700 NIC is now configured and associated with eth1.
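Note that znetconf -A configures the device for the running system only. On RHEL 6, persistence across reboots also requires an interface configuration file; the following sketch shows plausible contents for the 700 NIC, with the address and options assumed from the examples above:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1  (illustrative contents)
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.1.1.2
NETMASK=255.255.255.0
ONBOOT=yes
NETTYPE=qeth
SUBCHANNELS=0.0.0700,0.0.0701,0.0.0702
OPTIONS="layer2=1"
```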
8. Start a new SSH session as root and run the ifconfig eth1 command. You should see
the new network interface, as shown in the following example:
The system is going down for reboot NOW!
...
# ifconfig eth1
eth1
Link encap:Ethernet HWaddr 02:00:00:00:00:08
inet addr:10.1.1.2 Bcast:10.1.1.255 Mask:255.255.255.0
inet6 addr: fe80::ff:fe00:8/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:830 (830.0 b)
If this interface is not present or does not show an IP address, you must remedy the
problem.
You should see a long list of package groups that was obtained from the installation
server. If you do not see this list, you must remedy the problem.
libXdmcp.s390x 0:1.0.3-1.el6
libXp.s390x 0:1.0.0-15.1.el6
libXxf86misc.s390x 0:1.0.2-1.el6
libxkbfile.s390x 0:1.0.6-1.1.el6
xkeyboard-config.noarch 0:2.3-1.el6
xorg-x11-xkb-utils.s390x 0:7.4-6.el6
Complete!
2. The VNC server configuration file is /etc/sysconfig/vncservers. Edit the file by adding
one line at the bottom (another line is commented that can be used after the oracle and
grid users are added), as shown in the following example:
# cd /etc/sysconfig
# vi vncservers
...
# VNCSERVERS="2:myusername"
# VNCSERVERARGS[2]="-geometry 800x600 -nolisten tcp -localhost"
VNCSERVERS="1:root"
4. Start the VNC server. This creates some initial configuration files under the /root/.vnc/
directory, as shown in the following example:
# service vncserver start
Starting VNC server: 1:root
New 'rh62gold.itso.ibm.com:1 (root)' desktop is rh62gold.itso.ibm.com:1
Starting applications specified in /root/.vnc/xstartup
Log file is /root/.vnc/rh62gold.itso.ibm.com:1.log
[  OK  ]
5. The directory /root/.vnc/ is where configuration files are kept. Change to that directory
and list the files, as shown in the following example:
# cd /root/.vnc
# ls
passwd rh62gold.itso.ibm.com:1.log
rh62gold.itso.ibm.com:1.pid
xstartup
6. The file xstartup is the script that is run when the VNC server starts and where the
window manager is set. It is recommended that you change from the Tiny window manager,
twm, to the more usable Motif window manager, mwm, as shown in the following example:
# vi xstartup // change last line
...
xsetroot -solid gray
vncconfig -iconic &
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
mwm &
7. Verify that the Motif window manager is available by using the which command, as shown
in the following example:
# which mwm
/usr/bin/mwm
8. Restart the VNC server by using the service command, as shown in the following
example:
# service vncserver restart
Shutting down VNC server: 1:root [ OK ]
Starting VNC server: 1:root
New 'rh62gold.itso.ibm.com:1 (root)' desktop is rh62gold.itso.ibm.com:1
Starting applications specified in /root/.vnc/xstartup
Log file is /root/.vnc/rh62gold.itso.ibm.com:1.log
[  OK  ]
You can now use the VNC client to connect to the IP address of the Linux administration
system with a :1 appended. Supply the root password to start a session.
oracleRedbook-SG248104/linux/boot.oracle
oracleRedbook-SG248104/linux/boot.onetime
oracleRedbook-SG248104/linux/database.rsp
This creates one directory, oracleRedbook-SG248104/, with subdirectories linux/ for Linux
files and vm/ for z/VM files.
Copy the two Linux boot scripts and the two response files to the /etc/init.d/ directory on
the RHEL 6.2 golden image. Also, copy the other files that are necessary for Oracle and
Velocity Software, if applicable, as shown in Table 10-2.
Table 10-2 Files that are needed on golden image
File (RHEL 6.2)                               Source             Description
/etc/init.d/boot.onetime                      SG248104.tgz file  Configure network
/etc/init.d/boot.oracle                       SG248104.tgz file
/tmp/database.rsp                             SG248104.tgz file
/tmp/grid.rsp                                 SG248104.tgz file
/tmp/LV11R6.s390x.rpm                         Velocity Software
/tmp/ora-val-rpm-EL6-DB-11.2.0.3-1.s390x.rpm  Oracle             Pre-requisite checker
/tmp/cvuqdisk-1.0.9-1.rpm                     Oracle
You should now have all the additional files on the golden image.
5. Add the flag -10 to the /etc/init.d/snmpd service script, as shown in the following
example:
# cd /etc/init.d
# vi snmpd
...
# diff snmpd /tmp/snmpd.orig
45c45
<       daemon -10 --pidfile=$pidfile $binary $OPTIONS
---
>       daemon --pidfile=$pidfile $binary $OPTIONS
The system should now be configured for Velocity Software products.
Customizing rc.local
The file /etc/rc.d/rc.local is a boot script that is put in place for local configurations. To
customize rc.local, perform the following steps:
1. Create a backup copy, then edit the file /etc/rc.d/rc.local, which is run at boot time, as
shown in the following example:
# cd /etc/rc.d
# cp rc.local /tmp/rc.local.orig
2. Edit the file and add the following lines:
# vi rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
ftp 9.12.4.200
user MAINT
put CLONE.EXEC
quit
4. Edit the configuration file and modify the HOSTNAME and IPADDR variables for the new host
name and primary IP address. Also, modify the IPADDR2 variable, which the Red Hat
installer does not reference but the boot.onetime script does, to that of the secondary IP
address, as shown in the following example:
==> x lnxsa2 conf-rh6
DASD=100-101,200-202,210-213,300-302
HOSTNAME=lnxsa2.itso.ibm.com
NETTYPE=qeth
IPADDR=9.12.7.4
SUBCHANNELS=0.0.0600,0.0.0601,0.0.0602
NETMASK=255.255.240.0
SEARCHDNS=itso.ibm.com
GATEWAY=9.12.4.1
DNS=9.12.6.7
MTU=1500
PORTNAME=DONTCARE
PORTNO=0
LAYER2=1
IPADDR2=10.1.1.4
5. Log on to MAINT.
6. Start the CLONE EXEC with the source user ID and the target user ID as parameters, as
shown in the following example:
==> clone rh62gold lnxsa2
HCPCQU045E RH62GOLD not logged on
Are you sure you want to overwrite disks on lnxsa2 (y/n)?
y
Trying FLASHCOPY of 1100 to 2100 ...
Command complete: FLASHCOPY 1100 0 10015 TO 2100 0 10015
DASD 1100 DETACHED
DASD 2100 DETACHED
Trying FLASHCOPY of 1101 to 2101 ...
Command complete: FLASHCOPY 1101 0 30049 TO 2101 0 30049
DASD 1101 DETACHED
DASD 2101 DETACHED
Trying FLASHCOPY of 1302 to 2302 ...
Command complete: FLASHCOPY 1302 0 10015 TO 2302 0 10015
DASD 1302 DETACHED
DASD 2302 DETACHED
Starting new clone LNXSA2
Command accepted
In this example, FLASHCOPY succeeded so the cloning process took only a few seconds.
7. Quickly log out of MAINT and log on to the new clone. You should see boot messages
scrolling on the console. If Linux does not boot, there was a problem copying the disks that
must be remedied. You should see some messages from S01boot.onetime similar to those
shown in the following example:
AUTO LOGON  ***      LNXSA2   USERS = 65
HCPCLS6056I XAUTOLOG information for LNXSA2: The IPL command is verified by the
IPL command processor.
...
dasd-eckd 0.0.0191: DASD with 4 KB/block, 360000 KB total size, 48 KB/track,
linux disk layout
dasdf:CMS1/ LXM192: dasdf1
...
The next step is SaaS.
(Figure: FCP SAN access paths. The Linux virtual machine uses two FCP devices, 400 on
CHPID 1 and 500 on CHPID 2, that are defined in the IOCP. Each device reaches the FCP
SAN over its own FICON channel and SAN port, and both paths address the same WWPNs and
LUNs, which provides redundancy.)
At a high level, the SaaS first boot script, boot.oracle, performs the following tasks:
5. Set the script to start at boot time by using the chkconfig command, as shown in the
following example:
# chkconfig boot.oracle on
6. Shut down the golden image.
After the golden image is cloned and the clone is booted, the boot.oracle script runs near the
end of run level 3 and reads the configuration file on the LNXMAINT 192 disk.
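The boot.oracle script itself is supplied in the SG248104.tgz file. The following hypothetical skeleton is not that script; it only illustrates the usual shape of a run-once init script, including the chkconfig header comments that the chkconfig boot.oracle on command reads (the marker file path is an assumption for illustration):

```shell
#!/bin/bash
# boot.oracle - hypothetical skeleton of a run-once first-boot script
# chkconfig: 35 99 01
# description: one-time Oracle setup on first boot of a clone

MARKER=/tmp/boot.oracle.done    # illustrative marker; a real script might use /var/run

if [ -e "$MARKER" ]; then
    echo "boot.oracle: already ran, nothing to do"
else
    echo "boot.oracle: performing first-boot Oracle setup"
    # ... read the CONF file from the LNXMAINT 192 disk, run the silent install ...
    touch "$MARKER"
fi
```

On the second and later boots, the marker file causes the script to exit without repeating the setup.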
Attention: If your disk array does not support FLASHCOPY, or if the service is not ready (for
example, if you ran two clones in a short period), the code falls back to the z/VM DDR
command. The output is different and the process is much slower, as shown in the
following example:
==> clone rh62gold lnxsa2
HCPCQU045E RH62GOLD not logged on
Are you sure you want to overwrite disks on LNXSA2 (y/n)?
y
Trying FLASHCOPY of 1100 to 2100 ...
Command complete: FLASHCOPY 1100 0 10015 TO 2100 0 10015
DASD 1100 DETACHED
DASD 2100 DETACHED
Trying FLASHCOPY of 1101 to 2101 ...
HCPCMM296E Status is not as required - 1101; an unexpected condition
HCPCMM296E occurred while executing a FLASHCOPY command, code = AE.
FLASHCOPY failed, falling back to DDR ...
z/VM DASD DUMP/RESTORE PROGRAM
HCPDDR696I VOLID READ IS 0X0101
HCPDDR696I VOLID READ IS 0X0101
COPYING   0X0101
COPYING DATA 10/21/12 AT 12.15.11 GMT FROM 0X0101 TO 0X0101
INPUT CYLINDER EXTENTS        OUTPUT CYLINDER EXTENTS
  START      STOP               START      STOP
...
4. Start an SSH session to the newly cloned Linux system.
5. Change the directory to /tmp/ and review the output file, /tmp/boot.oracle.out.
The system should now be prepared for an Oracle database installation.
oracle.all.db.config.starterdb.fileSystemStorage.dataLocation=/oradata
oracle.all.db.config.asm.ASMSNMPPassword=xxxx
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false
DECLINE_SECURITY_UPDATES=true
oracle.installer.autoupdates.option=SKIP_UPDATES
3. Use the grep command to view variables that must be set for your enterprise, as shown in
the following example:
# grep xxxx database.rsp
ORACLE_HOSTNAME=xxxx
oracle.all.db.config.starterdb.password.ALL=xxxx
oracle.all.db.config.asm.ASMSNMPPassword=xxxx
These are replaced by the boot.onetime and boot.oracle first boot scripts based on
variables set in the CMS CONF file. The other values should be correct for the settings that
are made by the boot.oracle script. The values for ORACLE_HOME and ORACLE_BASE point to
/opt/, over which a large logical volume is mounted:
/dev/mapper/opt_vg-opt_lv      21G  172M   20G   1% /opt
6. This process runs for some time. It is possible to use the tail --follow command to
monitor the log file as messages are added. Eventually, you should see a success
message, as shown in the following example:
The installation of Oracle Database 11g was successful.
Please check '/opt/oraInventory/logs/silentall2012-11-10_06-07-16AM.log' for
more details.
As a root user, execute the following script(s):
1. /opt/oraInventory/oraRoot.sh
2. /opt/oracle/product/11.2.0/dbhome_1/root.sh
Successfully Setup Software.
7. Exit from the oracle user to return to the root shell, as shown in the following example:
$ exit
8. Run the first script, as shown in the following example:
# /opt/oraInventory/oraRoot.sh
Changing permissions of /opt/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /opt/oraInventory to oinstall.
The execution of the script is complete.
9. Run the second script, as shown in the following example:
# /opt/oracle/product/11.2.0/dbhome_1/root.sh
Check /opt/oracle/product/11.2.0/dbhome_1/all/root_lnxsa2.itso.ibm.com_2012-11-10_06-52-22.log
for the output of root script
10.Check the output of the second script, as shown in the following example:
# cat /opt/oracle/product/11.2.0/dbhome_1/all/root_lnxsa2.itso.ibm.com_2012-11-10_06-52-22.log
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /opt/oracle/product/11.2.0/dbhome_1
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
This output shows that the script was successful.
11.Verify that Oracle is working by using the sqlplus command, as shown in the following
example:
$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Mon Nov 12 10:54:05 2012
Copyright (c) 1982, 2011, Oracle.
Chapter 11. Provisioning an Oracle environment
Attention: The following command conventions are used in this chapter:
Linux commands that are running as root are prefixed with #
Linux commands that are running as non-root are prefixed with $
This chapter describes the steps to provision an Oracle Linux guest by using Tivoli products.
This chapter includes the following topics:
Introduction
Customizing a new Linux reference for Oracle
Optimizing the Linux environment for Oracle workload
Linux configuration for Tivoli Service Automation Manager environment
Installing a new Oracle Single Instance database and recording the Silent Install file on a
test server
Customizing a script
Creating the Tivoli Service Automation Manager and Tivoli Provisioning Manager objects
and workflows for PaaS provisioning
Summary
11.1 Introduction
Many IBM System z customers use Linux on System z in a z/VM hypervisor environment to
achieve the benefits of server virtualization. While some of these customers used
home-grown tools to install, configure, and manage their Linux servers, there is a need for
standardized management tools to automate these system management tasks.
IBM Tivoli Service Automation Manager provides such a set of tools. This solution enables
customers to rapidly create, configure, provision, and de-provision Linux on System z servers
running in the IBM z/VM host environment. It also provides the tools to help you make a
staged entrance into cloud computing. Cloud computing is an IT model that facilitates better
use of existing IT resources with fewer provisioning processes, lower capital, and operating
expenses.
Tivoli Service Automation Manager supports the automated provisioning, management, and
de-provisioning of cloud resources, which consist of hardware servers, networks, operating
systems, middleware, and application-level software. Several virtualization environments
(hypervisors) are supported for individual virtual server provisioning.
Tivoli Service Automation Manager helps you define and automate services that are
lifecycle-oriented; for example, a service to establish and administer an IT server network for
a limited period to satisfy increased demand for processing capacity, or to serve as a test
environment. Predefined service definitions determine the overall framework for the services.
The actual service instances are requested by using these service definitions.
Tivoli Service Automation Manager provides a Self-Service Virtual Server Management
environment, which cloud users employ to request provisioning and to manage their virtual
environments. A user can request the provisioning of projects that consist of virtual
servers that are based on IBM System x, System p, or System z, or on the WebSphere
CloudBurst Appliance product.
Figure 11-1 on page 233 shows the following components of Tivoli Service Automation
Manager:
The Managed From component model refers to resources that are needed for Tivoli
Services Automation Manager (Tivoli Service Automation Manager/Tivoli Provisioning
Manager).
The Managed Through component model refers to z/VM, MAPSRV, Linux Master,
DIRMAINT, and VSMSERVE.
The Managed To component model refers to the Linux instances to be provisioned in
z/VM.
(Figure 11-1: The Managed From component is the TSAM/TPM server. The Managed Through
components are z/VM with MAPSRV, DirMaint, and VSMSERVE. The Managed To components are
the Oracle 11.2.0.3 reference image and the provisioned Oracle 11.2.0.3 Linux guests in
z/VM.)
Note: This chapter is not meant to replace the IBM z/VM, Linux, Tivoli Service Automation
Manager, or Oracle Databases Enterprise Server products documentation. z/VM, Linux, or
Tivoli Service Automation Manager installations are not in the scope of this chapter.
For more information about how to deploy a Tivoli provisioning environment on System z, see
Tivoli Service Automation Manager Version 7.2.2 - Installation and Administration Guide,
SC34-2657.
In this chapter, we focus on the Oracle for Linux on System z provisioning automation
environment. The following methods can be used to provision an Oracle Linux guest by using
Tivoli Service Automation Manager:
Define a new offering in the Tivoli Service Automation Manager Services catalog and
expose it through the Web User Interface.
Use Tivoli Service Automation Manager's predefined service Project Linux Servers under
z/VM and use an Oracle image for Oracle middleware installation. (We used this
deployment option in this chapter.)
The following steps are developed later in this chapter:
Create a SUSE Linux Enterprise Server 11 SP1 guest.
Customize the guest for Oracle and Tivoli Service Automation Manager.
Install a new Oracle Single Instance Database and record the Silent Install file on a test
server.
Customize a shell script (preporacle.sh) to check Linux environment parameters and run
the Oracle silent installation.
Create the Tivoli Service Automation Manager/Tivoli Provisioning Manager objects and
Workflows.
Provision a new Oracle 11gR2 Linux PaaS (Platform as a Service), ready for DB
installation.
11.2.1 Requirements
The minimum requirement to install Oracle 11gR2 is SUSE Linux Enterprise Server 10 SP3
(or later); kernel 2.6.16.60-0.54.5 or later is required for an Oracle 11gR2 installation.
SUSE Linux Enterprise Server 11 SP1 (kernel 2.6.32.12-0.7 or later) is available and is
preferable for Oracle 11gR2 because it incorporates various features of System z hardware.
Verify the release by using the cat /proc/version command.
The initial 11.2.0.2 base release software can be downloaded from E-Delivery and is
available at this website:
http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112020-zli
nux64-352074.html
11.2.0.3 Patchset1 can be downloaded from My Oracle Support (MOS), Patches and Updates
Patch:10404530. Check the readme file to determine which .zip files you require.
In line with other platforms, 11.2.0.3 is now a full release that consists of six .zip files.
Figure 11-2 shows the Oracle Reference Disk Layout that is based on our installation.
(Figure 11-2: The Oracle reference layout places the Linux system on minidisk 100, with
dasda1 as / and dasda2 as swap, and uses minidisks 400, 401, 402, through 40n for the
Oracle software and RDBMS instance.)
Software
The following software was used:
Procedure
Download the appropriate rpm checker from My Oracle Support (MOS) Note 1306465.1. The
rpm checker verifies whether the required rpms for Oracle Grid and database are installed.
This prevents problems with the installation of Oracle. You must log on to the Oracle secure
website and select one of the rpm checkers. S11 Grid Infrastructure/Database RPM checker
11.2.0.2 (1.38 KB) (SUSE Linux Enterprise Server 11 Checker) was used for this chapter. For
more information, see this website:
https://support.oracle.com/CSP/main/article?cmd=show&type=ATT&id=1086769.1:DB_S11_
11202_ZLINUX
Extract the download .zip file and then install the extracted rpm to verify your Linux rpm
requirements. The rpm checker does not actually install anything. Instead, the checker uses
the dependencies of rpm to check your system. Run the RPM checker command as the root
user, if possible.
Call the rpm checker by using the following command:
rpm -ivh ora-val-rpm-S11-DB-11.2.0.2-1.s390x.rpm
Install all missing libraries that are shown in the rpm checker output. Check your installed rpm
again, as shown in Figure 11-5.
Procedure
For an Oracle Grid install, install the cvuqdisk-1.0.9-1 rpm package from the Oracle 11gR2
distribution media.
The Cluster Verification Utility (CVU) requires root privilege to gather information about
the SCSI disks during discovery. A small binary uses the setuid mechanism to query disk
information as root. This process is read-only, with no adverse effect on the system. To
keep this secure, the binary is packaged in the cvuqdisk rpm, and root privilege is needed
to install it on a machine.
When this package is installed on all the nodes, CVU performs discovery and shared storage
accessibility checks for SCSI disks. Otherwise, it complains about the missing package
cvuqdisk. You can disable the SCSI device check feature by setting the
CV_RAW_CHECK_ENABLED to FALSE in $CV_HOME/cv/admin/cvu_config file. CVU does not
complain about the missing rpm if this variable is set to FALSE.
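Assuming the NAME=value format that such configuration files use, disabling the check is a one-line setting in that file:

```shell
# $CV_HOME/cv/admin/cvu_config -- disable the SCSI/raw device check
CV_RAW_CHECK_ENABLED=FALSE
```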
This rpm can be found on the grid installation disk; it can be sent via FTP to the target
server and installed by using the following command:
rpm -iv cvuqdisk-1.0.9-1.rpm
fbset off
network-remotefs off
postfix off
splash off
splash_early off
smartd off
alsasound off
kbd off
xdm off
Important: When the two (public and private) network interfaces for Oracle RAC are
configured, you must have ARP enabled (that is, NOARP must not be configured). The
root.sh script fails on the first node if NOARP is configured.
Oracle requires a host name with a fully qualified domain name and a corresponding entry
in the /etc/hosts file.
Procedure
Back up the sysctl.conf file before any modification is done; the -p option preserves the
file's mode, ownership, and timestamps, as shown in the following example:
cp -p /etc/sysctl.conf /etc/sysctl.conf.orig
Several parameters should be reviewed based on the expected workload (see the
comments in this file).
To check the kernel parameters, enter the following commands:
cat /proc/sys/fs/file-max
or
# sysctl -A | grep file-max (file-max parameter for example)
To check all, enter the following command:
# sysctl -A > collect_sysctl
To add or change the kernel parameters, edit the sysctl.conf file. Add or change the
following values:
# Oracle Kernel Specific parameters
#
#fs.file-max = 512 x oracle processes (for example 6815744 for 13312 processes)
fs.file-max = 6815744
# fs.aio-max-nr = 3145728 (use for large concurrent I/O databases)
fs.aio-max-nr = 1048576
#kernel.shmall = set to (sum of all SGAs on system) / 4096 or a default of 2097152
kernel.shmall = 2097152
#kernel.shmmax = MAX(1/2 the virtual RAM, largest SGA_MAX_SIZE/SGA_TARGET on system)
kernel.shmmax = 4218210304
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
kernel.spin_retry = 2000
#vm.nr_hugepages = 4000 (Use for large SGAs > 10 GB)
Note: The minimum value that is required for shmmax is 0.5 GB. However, Oracle
recommends that you set the value of shmmax to 2 GB for optimum performance of the
system.
By specifying the values in the /etc/sysctl.conf file, they persist when the system restarts.
However, on SUSE Linux Enterprise Server systems, enter the following command to ensure
that the system reads the /etc/sysctl.conf file when it restarts:
# /sbin/chkconfig boot.sysctl on
Run sysctl -p for the kernel parameter changes to take effect.
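The arithmetic behind two of the commented formulas can be spot-checked with shell arithmetic; the 13312-process figure comes from the fs.file-max comment above, and 2097152 pages at 4 KB each corresponds to 8 GB of total SGA:

```shell
# fs.file-max = 512 x expected Oracle processes
echo $((512 * 13312))                      # matches fs.file-max = 6815744

# kernel.shmall = (sum of all SGAs) / 4096; here, 8 GB of total SGA
echo $((8 * 1024 * 1024 * 1024 / 4096))    # matches kernel.shmall = 2097152
```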
11.3.6 Creating and verifying the required UNIX groups and Oracle user
accounts
In this section, we describe the process to create and verify the required UNIX groups and
Oracle user accounts. This step is managed by the preporacle shell script.
Procedure
When Oracle 11gR2 Database Server single instance is installed, Oracle recommends
creating the following groups:
dba
oinstall
It is possible to install with one group (for example, only dba). If you are installing only
database executable files, often one user account called oracle is created.
To verify that the Linux groups and users were created, you can view the group and password
files by using the following commands:
# grep oinstall /etc/group
# grep dba /etc/group
# grep oracle /etc/passwd
Updates can be verified with the commands that are shown in the previous example.
/u01/oracle                            (ORACLE_BASE)
    /oraInventory
    /product/11.2.0/dbhome-1           (ORACLE_HOME)
Procedure
Table 11-1 describes the disk space requirements for software files and data files for each
installation type on IBM Linux on System z.
Table 11-1 Disk space requirements for software and data files
Installation type    Software files  Data files
Enterprise Edition   4.9 GB          2 GB
Standard Edition     4.5 GB          1.5 GB
More disk space on a file system or an Oracle ASM disk group is required for the fast
recovery area if you choose to configure automated backups of the following components:
Oracle executable files
Oracle software is installed under the root file system.
Oracle data files
These files can be handled by ASM or be placed in a Linux Logical Volume.
Oracle data directories that have the correct permission bits
Create the ORACLE_BASE directory, which is the Oracle directory tree root, /u01/oracle,
as shown in the following example:
# mkdir -p /u01/oracle
# chown -R oracle:oinstall /u01/oracle
Create the ORACLE_HOME directory, which holds the Oracle software executable files,
/u01/oracle/11.2.0/db_home1/, as shown in the following example:
# mkdir -p /u01/oracle/11.2.0/db_home1/
11.3.8 Setting file descriptors limits for Oracle and grid users
Too few open file descriptors (sockets) can significantly reduce the performance and the
load that Oracle can generate. In this section, we describe the process that is used to
update the Oracle user file descriptors.
This step is managed by the preporacle shell script.
Procedure
Complete the following steps to set the file descriptors limits for Oracle and grid users:
1. Edit the /etc/security/limits.conf file by adding or modifying the following lines:
# oracle
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
# Use memlock for Huge Pages support
#* soft memlock 3145728
#* hard memlock 3145728
# End oracle
To increase the limits at Oracle logon as the oracle user, verify the oracle user's .profile
(for example, /home/oracle/.profile for KSH users) and ensure that the following lines
were added:
ulimit -n 65536
ulimit -u 16384
ulimit -s 32768
Another method is to modify the main system profile by adding these lines to the
/etc/profile file. Change this approach if the oracle user uses a different shell,
such as csh or bash.
2. When you log in to the Linux machine, the login program reads the /etc/pam.d/login file.
The following line was included in this file, which instructs the login program to load the
pam_limits.so module during the login session:
session required pam_limits.so
Browse to the /etc/pam.d directory. The session required pam_limits.so setting can
be added to /etc/pam.d/login, /etc/pam.d/sshd, or /etc/pam.d/su, depending on
whether you want to set limits on login, SSH, or su types of logins. The PAM module
(pam_limits.so) is not loaded by default for various applications, such as login, SSH, or
su. By adding and loading it explicitly, you can limit the login sessions. Ensure that the
/etc/pam.d/sshd, login, and su files each have an entry for pam_limits.so.
Important: After any change, test a new login or sshd session before all opened sessions
are closed.
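The limits configured above can be spot-checked from a new session. The following sketch only reads the effective limits; the values that are reported depend on limits.conf and on whether PAM loaded pam_limits.so:

```shell
# Query the effective limits for the current session and compare them with the
# limits.conf targets (nofile hard limit 65536, nproc hard limit 16384).
nofile=$(ulimit -n)   # open file descriptors
nproc=$(ulimit -u)    # max user processes
echo "nofile=$nofile nproc=$nproc"
```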
Procedure
Complete the following steps to activate the swap in memory:
1. Ensure that all your Linux production guests define a virtual swap. Check that Linux users
are allowed to define VDISKS in the SYSTEM CONFIG file, as shown in the following example:
VDisk, /* Allow VDISKS for Linux swaps */
Syslim infinite,
Userlim infinite
2. Ensure that SWAPGEN EXEC is installed on LNXMAINT, the virtual machine that is
dedicated to Linux user administration.
3. Ensure that SWAPGEN is called by the Linux users' PROFILE EXEC at Linux IPL, as shown in
the following example:
/* PROFILE EXEC for zLinux */
'CP SET RUN ON'                /* CP READ will not stop server */
'CP SET PF11 RETRIEVE FORWARD' /* Next command                 */
'CP SET PF12 RETRIEVE'         /* Previous command             */
'SWAPGEN 101 250000'           /* VDisk swap space; Q VDSK U   */
'CP IPL 100 CLEAR'             /* Let's roll                   */
4. Enable this virtual device as a swap in Linux (as shown in Figure 11-7) by completing the
following steps:
a. Set the specified device online, as shown in the following example:
# chccwdev -e 0.0.101
b. Check whether the swap device is available in Linux, as shown in the following
example:
# lsdasd
c. Display the swap summary by device, as shown in the following example:
# swapon -s
5. Edit the /etc/fstab file to add the new swap device. Give the in-memory swap device a
higher priority so that the first amounts of swapped data go to memory.
6. Set up a Linux swap area on the swap device, as shown in the following example:
# mkswap /dev/disk/by-path/ccw-0.0.0101-part1
7. Make available all devices that are marked as swap in /etc/fstab, as shown in the
following example:
# swapon -a
8. Display again the swap summary by device and verify that the new swap was added, as
shown in Figure 11-8:
# swapon -s
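The /etc/fstab entry from step 5 might look like the following sketch. The device path matches the VDISK used in the earlier steps, and the pri= value is an assumption; Linux uses higher-priority swap areas first, so the in-memory VDISK is consumed before any DASD swap:

```
# /etc/fstab: in-memory VDISK swap with a higher priority than DASD swap
/dev/disk/by-path/ccw-0.0.0101-part1  swap  swap  pri=10  0 0
```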
To get a better display with your VNC client, choose Hextile encoding instead of Tight. Go to
the VNC Connection Options panel and select Hextile from the Format and Encodings
drop-down list, as shown in Figure 11-12.
If no errors were encountered, the output of the runInstaller script looks similar to what is
shown in Figure 11-15.
5. Click Save Response File in the Summary panel, as shown in Figure 11-18.
6. Accept the home directory as the target directory for the response file for access
authorization reasons. Click Install.
7. Run the orainstRoot.sh script to change access permissions for the Oracle central inventory.
8. Run the root.sh script. This script contains all the product installation actions that require
root privileges.
9. Return to the Execute Configuration Scripts panel and click OK. The Install Product Status
for Execute Root Scripts is now green. Click Next.
The Finish panel displays the message, The installation of Oracle Database was
successful. Click Close.
The installation log can be found in the following directory:
/u01/oraInventory/logs/installActions<timestamp>.log.
2. Look for a correct installation message in the tail of the log file, as shown in the following
example:
$ tail -10 /u01/oraInventory/logs/installActions<timestamp>.log | grep
"Successfully executed"
3. As root user during the post-installation process, run the orainstRoot.sh and root.sh
scripts, which contain all of the product installation actions that require root privileges, as
shown on Figure 11-20.
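The log check in step 2 can be wrapped in a small script. This sketch uses a stand-in log file, so the path and contents are illustrative only; the real check targets installActions<timestamp>.log:

```shell
# Create a stand-in installer log for illustration, then scan its tail for the
# success marker, as the real check does against installActions<timestamp>.log.
log=/tmp/installActions_demo.log
printf 'Starting install\nSuccessfully executed commands\n' > "$log"
if tail -10 "$log" | grep -q "Successfully executed"; then
  result="install OK"
else
  result="install FAILED"
fi
echo "$result"
```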
#-------------------------------------------------------------------------------
# Specify the hostname of the system as set during the install. It can be used
# to force the installation to use an alternative hostname rather than using the
# first hostname found on the system. (e.g., for systems with multiple hostnames
# and network interfaces)
#-------------------------------------------------------------------------------
ORACLE_HOSTNAME=l1oradb1.mop.ibm.com
Note: This part is included in preporacle.sh script that is started by the Tivoli Service
Automation Manager/Tivoli Provisioning Manager workflow.
2. Users and Team management can be set in this panel. Create an Oracle team, which
includes all Oracle Users, as shown in Figure 11-22.
3. As shown in Figure 11-23, create other Oracle users that belong to the Oracle
Administration team, ORAADM, that was created in Step 2 (you can complete the Oracle
instances provisioning later). In the Security Group panel, choose Security Group for
Customer Level Policy.
A customer is associated with one or more Cloud Server Pools. Assign Resources Pools that
were previously created to the new Cloud Provider.
Tivoli Service Automation Manager maintains a large collection of objects and their attributes,
which forms its DCM that is stored in a separate database. Each object in Tivoli Service
Automation Manager Data model is assigned a unique identification number. The objects
description can be retrieved from the Data Model Object Finder, which is in the main page of
the Tivoli Service Automation Manager Start Center.
Cloud Storage Pools: The Cloud Storage Pools are not used in our case. Instead, we
use the MAPSRV server default pool. A z/VM discovery gathers disk information about the
default server pool MAPSRV (no storage pool is required).
Master Images, as shown in Figure 11-26:
For more information, see the Configuring the service provider section of the Tivoli Service
Automation Manager Administration Guide.
Configuring the XML files that are imported into the DCM
This step describes the process that is used to tailor the XML files that are used to import
requested objects into Tivoli Service Automation Manager. This process includes the
following overall steps:
1. Tivoli Service Automation Manager distributes sample XML files with the product.
2. These files must be tailored to the local Linux and z/VM environment.
3. The resulting XML input files are incorporated into the Tivoli Service Automation Manager
database and the DCM.
4. Any text editor (for example, Notepad) can be used to work with .xml files.
5. All objects that are contained in the .xml file are imported into the DCM.
To build an Oracle image and use it with the default Tivoli Service Automation Manager
service for Linux that is running as a guest under z/VM, we prepared and built the following
elements in a Tivoli Service Automation Manager environment:
Oracle Master Image
Software Stack
Virtual Server Template
We need to register a new image in the Tivoli Service Automation Manager image library
for the Linux under z/VM. To instantiate new Oracle environments, we must create a Tivoli
Provisioning Manager workflow. This workflow is started after the Linux OS installation in
the virtual machine, as described in Updating the Software Stack on page 265.
Tivoli Service Automation Manager provides a way to directly define DCM Objects in its
interface or to build .xml files and import them by using the administrative interface to
import the following components:
Software Stack
Master Image
Virtual Server Template
Master Image Template: OraclezLinuxImageSLES11SP1_zVMMOP2012.xml
To have a Master Image available, the virtual server template must have some information
in it, such as, Linux Prototype, Linux reference name, and the MDisk that is to be cloned,
as shown in Figure 11-29.
Importing resources
This step consists of importing the .xml files that were created in Configuring the XML files
that are imported into the DCM on page 260 into the DCM. This task is done by the Tivoli
Service Automation Manager Administrator.
From the Start Center main page, click Go To → Service Automation → Configuration →
Cloud Server Pool Administration, as shown on Figure 11-31 on page 263.
Click Import DCM Objects. This import creates the DCM objects in the Tivoli Service
Automation Manager's database. We use those objects to register our new Oracle image.
This is the image that we select with the default Tivoli Service Automation Manager service,
Create Project Linux under z/VM.
Search for keyword Oracle in the Virtual Server Templates search field. The results are
shown in Figure 11-34.
This displays the SLES11 for Oracle Software Stack. In the Select Action box, select Add
Stack Entry.
In the Add Stack Entry panel (as shown on Figure 11-36 on page 266), search for Software
definitions that contain Oracle, and then select the Oracle definition that contains your
workflow. For more information about designing the workflow, see Sample workflow for Linux
and Oracle installation on page 268.
Select the chosen Oracle template and click Submit. The Oracle software (workflow) is
added to the Software Installable section of the Software Stack configuration for Oracle
provisioning.
Save the new Software Stack configuration by clicking the Save icon. The Oracle 11gR2
Software Installable module is added to the Linux SLES11 OS Image for Oracle 11gR2. This
results in a new software module that can install Oracle software immediately after the new
Linux instance is generated in a single project. The oracle11g_SoftwareInstallable_Install
workflow is now associated with this Software Stack, as you can verify in the Software
Products window.
2. Click the link to edit the code and then copy it into the clipboard.
3. Complete the following steps to create a Post Install workflow for Oracle:
a. Click the New Workflow icon at the top of the page.
b. Paste the code that was previously copied to the clipboard, and then save the new
workflow with a new name: Oracle_SoftwareInstallable_InstallPost, as shown in
Figure 11-38.
c. Add the Post Installation workflow, as described in Oracle Post Install actions on
page 268. Add debug information to facilitate problem analysis.
d. Click the Save icon at the top of the page. The new code is compiled. Click OK when
completed.
4. Add the workflow to the Oracle 11gR2 software stack. Open the software stack as
described in Updating the Software Stack on page 265, and shown in Figure 11-35 on
page 265.
5. From the Software Stack tab, select the Workflows tab in the Installable Files section, and
click Assign Provisioning Workflow.
6. In the Assign Workflow list, search for the InstallPost available workflows, as shown in
Figure 11-39 on page 268.
7. Select the Oracle post installation workflow and click OK, then agree to save the changes.
The new post installation workflow is added to the Software Installable files.
An external script is useful when the script is long and easier to maintain separately, or when
the script is used by multiple workflows or workflows in different automation packages.
An external script file (called preporacle.sh) is used in this chapter and performs the
following tasks:
1. Uses the Device.CopyFile device operation to copy the script to the target computer.
First, the script is manually uploaded to the Tivoli Provisioning Manager server in a
repository directory. Then, the Device.CopyFile function copies the script from Tivoli
Provisioning Manager repository to the new provisioned guest.
2. Runs the shell script by using the Device.ExecuteCommand operation.
The following function parameters are available:
Device.CopyFile: SourceDeviceID, SourcePath, SourceFile, DestinationDeviceID,
DestinationPath, DestinationFile, ClientCredentialsKey, HostCredentialsKey, and
TimeoutInSeconds
Device.ExecuteCommand: DeviceId, ExecuteCommand, WorkingDirectory, CredentialsKey,
TimeoutInSeconds, TreatTimeoutAs, ReturnCode, ReturnErrorString, and ReturnResult
For more information about the Workflow design process, see the Tivoli Information Center at
this website:
http://publib.boulder.ibm.com/infocenter/tivihelp/v10r1/index.jsp?topic=%2Fcom.ibm.tsam.doc_7.2%2Ft_config_zvm_setup.html
4. A new workflow editor is displayed in the browser. Copy and paste your code inside the
Workflow editor, as shown on Figure 11-41. Click the Save icon. The new code is
compiled.
Click OK when completed, as shown in Figure 11-42. Return to the workflow list with the
List tab and select the new workflow.
5. Click the green arrow to start the workflow, as shown in Figure 11-43.
6. Enter the requested parameters in the workflow panel, as shown in Figure 11-44. Here,
the requested parameter is the ID of a Linux Instance where Oracle code is installed. Click
Run and then click Yes to open the Provisioning Task Tracking window.
7. Click the Workflow Log ID to show the Provisioning Task Tracking log, as shown in
Figure 11-45.
The Oracle installation on the new Instance, which is started by the Tivoli Provisioning
Manager server, can also be monitored by using the following methods from the new
instance environment:
List the /mnt directory files (mounted by the preporacle script).
List the temporary files directory /tmp and the created Oracle logs.
List the Oracle installation process by using the ps -fu oracle command.
Monitor the system performance that shows Java and I/O activity by using the top or
equivalent command.
8. After the workflow completes successfully, the status is shown in the Workflow Execution
Logs tab, as shown in Figure 11-46.
7. Adjust the values in the Resources window (if needed) and then click Next.
8. Make any changes (if needed) in the Network Configuration window and click Next.
9. A Summary window opens. Click Finish.
The Tivoli Service Automation Manager application workflow sends an email to the requester
that shows the approval of the service, as shown in Figure 11-50.
You receive an email from the Tivoli Service Automation Manager administrator. Follow the
progress in the same way as described in Requesting a new Oracle Service on page 274.
11.8 Summary
For more information, see the following resources:
z/VM and Linux on IBM System z: The Virtualization Cookbook for SLES 11 SP1,
SG24-7931-00
Directory Maintenance Facility Tailoring and Administration Guide; Version 6 Release 1,
SC24-6190-00
Deploying a Cloud on IBM System z, REDP-4711-00
Provisioning Linux on IBM System z with Tivoli Service Automation Manager,
REDP-4663-00
Installing Oracle 11gR2 RAC on Linux on System z, REDP-4788
Experiences with a Silent Install of Oracle Database 11gR2 RAC on Linux on System z
(11.2.0.3), REDP-9131
Tivoli Service Automation Manager Version 7.2.2: Installation and Administration Guide,
SC34-2657-00
Tivoli Information Center, which is available at this website:
http://publib.boulder.ibm.com/infocenter/tivihelp/v10r1/index.jsp?topic=%2Fcom.ibm.tsam.doc_7.2%2Ft_config_zvm_setup.html
Chapter 12.
zPRO introduction
Cloud implementation overview
Shared Binary Linux implementation: SUSE Linux Enterprise Server 11 SP2
Shared Binary Oracle implementation: Oracle 11g
Cloning 100 Oracle Servers for Development: Oracle 11g
References
12.2.1 Requirements
Managing a large z/VM system is a specialized skill. It involves understanding the concepts
and facilities that z/VM provides and how to configure and use them to construct the
environment that is needed for your organization. The z/VM Systems Management
Application Programming Interface (SMAPI) simplifies the management of z/VM with a
standardized, platform-independent programming interface that can reduce the required skills
in this environment.
SMAPI must be configured before it can be used. Because the intent is to use zPRO with the
SMAPI, these configuration changes include those that are required for zPRO.
The following overall steps are required:
1. Directory Maintenance (Dirmaint) changes
2. SMAPI configuration
The zPRO software is also needed for the cloud implementation.
ALL VSMGUARD * 140A ADGHOPSM
ALL VSMGUARD * 150A ADGHOPSM
ALL VSMWORK1 * 140A ADGHOPSM
ALL VSMWORK1 * 150A ADGHOPSM
ALL VSMWORK2 * 140A ADGHOPSM
ALL VSMWORK2 * 150A ADGHOPSM
ALL VSMWORK3 * 140A ADGHOPSM
ALL VSMWORK3 * 150A ADGHOPSM
ALL ZPRO01 * 140A ADGHOPSM
ALL ZPRO01 * 150A ADGHOPSM
ALL ZPRO02 * 140A ADGHOPSM
ALL ZPRO02 * 150A ADGHOPSM
ALL ZPRO03 * 140A ADGHOPSM
ALL ZPRO03 * 150A ADGHOPSM
4. Create a file that is called CONFIGSM DATADVH D (as shown on Figure 12-2) by using the
following command:
==> xedit configsm datadvh d
/* SMAPI config */
ALLOW_ASUSER_NOPASS_FROM= VSMGUARD *
ALLOW_ASUSER_NOPASS_FROM= VSMWORK1 *
ALLOW_ASUSER_NOPASS_FROM= VSMWORK2 *
ALLOW_ASUSER_NOPASS_FROM= VSMWORK3 *
ALLOW_ASUSER_NOPASS_FROM= ZPRO01 *
ALLOW_ASUSER_NOPASS_FROM= ZPRO02 *
ALLOW_ASUSER_NOPASS_FROM= ZPRO03 *
ASYNCHRONOUS_UPDATE_NOTIFICATION_EXIT.TCP= DVHXNE EXEC
ASYNCHRONOUS_UPDATE_NOTIFICATION_EXIT.UDP= DVHXNE EXEC
Figure 12-2 CONFIGSM DATADVH file
The ALLOW_ASUSER_NOPASS_FROM entries permit the SMAPI server machines and zPRO
server machines to issue Dirmaint calls as another user.
The ASYNCHRONOUS_UPDATE_NOTIFICATION_EXIT entries notify the SMAPI of changes that
are made to the directory.
5. Restart Dirmaint by using the following command:
==> dvhbegin
Comment out the section of the file at the end under the comment, as shown in
Figure 12-3.
**************************************************************
*** the following machines are only available in ensembles ***
**************************************************************
* Default Management Network Server
*:server.VSMREQIM
*:type.REQUEST
*:protocol.AF_MGMT
*:address.INADDR_ANY
*:port.44446
* Primary Vswitch Controller
*:server.DTCENS1
*:type.VCTRL
* Backup Vswitch Controller
*:server.DTCENS2
*:type.VCTRL
* Management Guest
*:server.ZVMLXAPP
*:type.MG
Figure 12-3 DMSSIVR NAMES
When the configuration steps are completed, start the SMAPI server machines by using the
following command from an authorized user:
==> XAUTOLOG VSMGUARD
This virtual machine automatically starts the other SMAPI server machines. To verify that the
SMAPI is running and listening to requests, run the following commands:
==> vmlink tcpmaint 592
==> netstat
As shown in Figure 12-4, the results show listeners at ports 44444 and 55555, which are the
SMAPI TCP/IP services.
VM TCP/IP Netstat Level 620

Conn  Local Socket        Foreign Socket      State
----  ------------        --------------      -----
1001  *..FTP-C            *..*                Listen
1079  *..TELNET           *..*                Listen
 ...  (established TELNET connections, a UDP socket on port 1024, and
      HTTP listeners on ports 80 and 81 omitted)
1036  *..44444            *..*                Listen
1067  *..55555            *..*                Listen
ZPROCFG                                  ZPRO PROD1320

First server                             ZPRO01
Server list                              ZPRO01 ZPRO02 ZPRO03
Log server                               ZPROLOG
zPRO Log Location                        SFSZVPS:ZPROLOG.LOGS
zPRO Log Prune Age                       30
SMAPI-authorized userid                  ZPRO01
SMAPI-auth userid password               VSIZPRO
Show passwords?                          NO
Socket timeout                           60
TCP/IP userid                            TCPIP
z/VM image hostname                      ZVM
SMAPI Port                               44444
Client Port                              8889
Network domain                           ITSO.COM
Directory manager                        DIRMAINT
Dir Mgr Command Location                 MAINT 19E
ESM                                      NONE
DIRM2RACF                                NO
Directory Mgr Src Location               DIRMAINT 1DF
Directory Mgr Cfg Location               DIRMAINT 11F
RACF default owner                       IBMUSER
RACF default group                       SYS1
Proxy user for batch RACF processing     ZPROXY
Proxy user to set unexpired passwords    ZPROXY
RSCSAUTH config location                 RSCSAUTH 191
Max logon time                           480
E-mail server                            VM:SMTP
E-mail from address                      ZPRO@ITSO.COM
Expire active servers                    SHUTDOWN
IP parameter files location              ZPRO 391
IP parm minidisk read pw                 READ
IP parm minidisk write pw                WRITE
Debug                                    YES
Javascript debugging                     NO
zPRO Hourly Exits

PF1: Help   PF2: Validate/Save   PF3: Exit   PF10: Default   PF12: Cancel
In this window, there are various zPro configuration parameters. Most of these can be left at
their defaults. Verify that the settings are correct if RACF is in use, and modify the Network
domain and E-mail from address settings for your organization. Make sure that the
SMAPI-auth userid password is the CP (or RACF) password for the ZPRO01 virtual machine.
When the changes are complete, press PF2 to validate the changes, and then press PF2
again to save them.
Now that the configuration is complete, zPro can be started. Use the XAUTOLOG command to
start ZPRO01, as shown in the following example:
==> xautolog zpro01
This automatically brings up the other zPro servers.
On the right side of the page under the node name is a Log In button. Click the button and the
zPRO Login window opens, as shown in Figure 12-7.
The user ID that was logged in is shown in the upper right. The timer underneath ticks down
as an indication of how much time is left in the session. The VM node name is under the
timer, and the Logout button is under the node name. The zPro release number is in the
upper middle of the window.
The blue bar is the main menu window. Hovering the mouse over a menu shows the contents.
Click a menu item to select that function. Figure 12-9 shows the Manage Users menu.
For example, click Directory Maintenance to show all of the defined virtual machines.
Figure 12-10 on page 289 shows the resulting Directory Maintenance window with the
defined virtual machines listed on the left, Factories along the top and the Work zone
underneath the factories. A Factory is a function. Drag a virtual machine to a factory to
perform that function on it. At the bottom of the window is the Action Log, which is used to
display messages about functions that are performed.
In the User List on the left side of the window, some virtual machines are displayed in different
colors. Those that have a purple background are logged on. Virtual machines that have a gold
background are considered Gold machines, or template machines. They can be used for
cloning new virtual machines.
IP address setup
zPro can assign IP addresses to virtual machines that are cloned. The range of IP addresses
that zPro should assign is entered in the IP Address Maintenance window under the
Manage Users menu, as shown in Figure 12-11.
Figure 12-12 on page 290 shows the maintenance window. Enter the start and end of the IP
address range in the fields that are provided and click Add. zPro redisplays the window with
the entire range in the Available IP addresses section. Addresses that cannot be used for
any reason can be removed by entering the address (or range) and clicking Delete on the far
right. They are removed from the list of available addresses.
IP Address Assignment
Changing the IP address of any clones that are created by zPro requires action on the Linux
machine as it comes up for the first time. A shell script is included as part of zPro to
accomplish this task.
When the IP addresses are added to zPro, they are put on a list of available IP addresses.
During the cloning process, an IP address from the available list is selected. It is assigned to
the clone by writing it to a file on a common minidisk with the name of the new virtual
machine. The new hostname and interface device address are also written to the file. If there
are multiple cloned machines, they are assigned IP addresses, starting with the first one that
is selected from the list.
The golden image requires the shell script to be installed into /etc/rc.d/boot.d. The
chkconfig command is used to turn on the new boot service. The following commands
demonstrate the process:
# cp boot.vsisetup /etc/rc.d/boot.d
# cd /etc/rc.d/boot.d
# chkconfig boot.vsisetup on
Verify that it is turned on, as shown in the following example:
s112gold:/etc/rc.d # chkconfig -A --list | grep boot.vsisetup
boot.vsisetup   0:off  1:off  2:off  3:off  4:off  5:off  6:off  B:on
When the Linux machine is first booted after cloning, the script runs and reads the file
that was created for it on the common minidisk. The new IP address and hostname are
obtained from the file, and the appropriate files are changed on Linux. This process occurs
before the network starts. After the script ends, the boot process continues normally. When
networking is started, the new IP address and hostname are in effect, and no additional
reboots are needed.
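The mechanism can be sketched as follows. This is not the zPro boot.vsisetup script itself; the file name, location, and variable names below are assumptions made for illustration:

```shell
# Sketch: read per-clone parameters from a file keyed by the guest name, then
# apply the hostname and IP address before networking starts.
guest=CLONE01
parmfile=/tmp/${guest}.parms               # stand-in for the common-minidisk file
printf 'NEWIP=10.1.1.42\nNEWHOST=clone42\n' > "$parmfile"  # example contents
. "$parmfile"                              # load the NEWIP= and NEWHOST= settings
echo "$NEWHOST" > /tmp/HOSTNAME.demo       # the real script writes /etc/HOSTNAME
echo "configured $NEWHOST with $NEWIP"
```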
12.2.7 Cloning
Cloning is the process of creating a virtual machine that looks exactly like an existing virtual
machine and copying any read/write disk areas from the source machine (the gold image) to
new locations for the destination virtual machine (the clone). The intent is to mass-produce
new virtual machines from an existing virtual machine.
In addition to the system administration capabilities, zPro can create virtual machine clones.
To clone a machine, it must be in the gold list. To add a virtual machine to the gold list, go to
the Directory Maintenance page and drag the name of a machine from the User List on the
left of the display to the Add Gold factory. Messages in the Action Log at the bottom confirm
when the process is completed.
Linux should be installed and prepared to start when the clone is initially loaded. It should
contain the software packages and configuration settings that are generally required for any
Linux machine in your organization. Before it is cloned, the gold image Linux must be logged
off; as a safeguard, zPro does not allow the cloning of an active gold image.
Any number of gold images can be created. They can be plain Linux machines, or a Linux
machine set up for a specific task, such as Oracle or IBM WebSphere.
To start the cloning process, select Cloning under the Manage Users menu. The zPRO
Cloning window opens, as shown in Figure 12-13.
Figure 12-15 on page 292 shows the zPro User Authorization window. The User List on the
left side of the window shows the defined zPro users. The Factories are tasks that can be run.
By using the New Factory, you can create a zPro user from scratch or based on a user. By
using the Edit Factory, you can modify a zPro user. By using the Delete Factory, you can
delete a zPro user.
An existing CMS user can become a zPro user by creating a zPro profile for them. Dragging
the New user to the New Factory displays a blank user profile that can be used to build a zPro
user, as shown in Figure 12-16.
In the Work Zone, the tabs across the top are the various security areas that might need to be
completed, based on the level of access the new user requires. Start by completing the User
field and selecting whether this new user is an Admin. Then, select the tabs one at a time and
complete the fields in those tabs to authorize the user for various functions.
Any field that has the More Information graphic
Clicking the More Information graphic for the Admin drop-down menu shows a window in
which the selections are described and how they affect the user, as shown in Figure 12-17.
Click the Return button to close the window.
Another way to create a zPro user is to drag an existing user to the New Factory. This creates
a zPro user profile that has the same authorizations as the old user. It is necessary to
complete the User field to give the new user a name.
Figure 12-18 shows a new user that is called JADMIN. It is based on the delivered ZPRO user.
When all of the fields in the various tabs are complete (or modified), click Save to save the
new profile.
Any new users that are defined to zPro also must be authorized for Dirmaint. Add them to
AUTHFOR CONTROL, as shown in the following example:
ALL JADMIN * 140A ADGHOPSM
ALL JADMIN * 150A ADGHOPSM
12.3.1 Architecture
Two virtual machines were used as templates for read-only root. The first machine, S11ROMST,
contains the read/write root file system for the remaining images. All root file system changes
that were needed to make read-only root work properly were made there. It is a complete and
functioning Linux virtual machine. The second machine, S11ROGLD, was created as an image
of the first, including its root file system. Then, the directory entry was changed, replacing the
root file system minidisk with a link to the root file system of S11ROMST. S11ROGLD can then be
cloned.
Figure 12-19 shows the directory entry view function of zPro displaying S11ROMST. It occupies
a 3390 model 9 DASD. There is also a link to the LNXSA3 101 minidisk that contains the Oracle
binaries. As each machine is cloned, Oracle can be installed.
A private subnet was used for the cloned machines that allowed the creation of more virtual
machines with no concern about addressing limitations on the primary network. The LNXSA3
machine was used as a router between the primary network and the new subnet.
When the networking parameters were established, LNXSA3 was set up to properly route traffic
to the new network. This includes turning on Enable IP Forwarding on the Routing tab in
YaST. Additionally, the following command was issued on any machine that needed access to
the new clones:
# route add 10.1.1.0 mask 255.255.255.0 9.12.7.5
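The effect of this route can be sketched with simple prefix matching; the function below is illustrative only and not part of the book's setup (a 255.255.255.0 mask means the first three octets select the clone subnet):

```shell
# Decide the next hop the way the added route does: 10.1.1.0/24 traffic goes
# via LNXSA3 (9.12.7.5); everything else uses the default gateway.
next_hop() {
  case "$1" in
    10.1.1.*) echo "9.12.7.5" ;;   # clone subnet, routed through LNXSA3
    *)        echo "9.12.4.1" ;;   # default gateway from the route table
  esac
}
next_hop 10.1.1.42    # a cloned guest
next_hop 9.12.5.239   # a machine on the primary network
```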
The Windows route print command on our testing machine then displayed the new route
that was created:
Network Destination          Netmask          Gateway     Interface  Metric
          0.0.0.0            0.0.0.0         9.12.4.1    9.12.5.239      10
         9.12.0.0      255.255.240.0       9.12.5.239    9.12.5.239      10
       9.12.5.239    255.255.255.255        127.0.0.1     127.0.0.1      10
    9.255.255.255    255.255.255.255       9.12.5.239    9.12.5.239      10
         10.1.1.0      255.255.255.0         9.12.7.5    9.12.5.239       1
        127.0.0.0          255.0.0.0        127.0.0.1     127.0.0.1       1
        224.0.0.0          240.0.0.0       9.12.5.239    9.12.5.239      10
  255.255.255.255    255.255.255.255       9.12.5.239    9.12.5.239       1
Default Gateway:            9.12.4.1
The S11ROMST virtual machine and its clones use the new subnet. In YaST, configure it with an
IP address on the 0700 network device (eth1). After it is functioning, the 0600 network device
(eth0) can be removed and deleted from the directory entry.
zPro must also know about the new subnet. From Manage Users, click IP Address
Maintenance. Add the IP addresses for the new subnet, as shown in Figure 12-20.
Modify /etc/rwtab
The /etc/rwtab file must be modified. It contains a list of the files and directories that
should be available in read/write mode. Our /etc/rwtab file is shown in the following example:
dirs    /var/cache/man
#dirs   /var/gdm
dirs    /var/lock
dirs    /var/log
dirs    /var/run

empty   /tmp
#empty  /var/cache/foomatic
#empty  /var/cache/logwatch
#empty  /var/cache/mod_ssl
#empty  /var/cache/mod_proxy
#empty  /var/cache/php-pear
empty   /var/cache/systemtap
#empty  /var/db/nscd
#empty  /var/lib/dav
#empty  /var/lib/dhcp
#empty  /var/lib/dhclient
empty   /var/lib/dhcpcd
#empty  /var/lib/php
#empty  /var/lib/ups
empty   /var/tmp
empty   /var/tux
empty   /media

files   /etc/adjtime
files   /etc/ntp.conf
files   /etc/resolv.conf
files   /etc/yp.conf
files   /etc/lvm/.cache
files   /var/account
files   /var/adm/netconfig/md5
files   /var/arpwatch
files   /var/cache/alchemist
files   /var/lib/iscsi
files   /var/lib/logrotate.status
files   /var/lib/ntp
files   /var/lib/xen
files   /root
files   /var/lib/gdm
#files  /etc/X11/xdm

state   /var/lib/misc/random-seed
state   /etc/ssh
state   /etc/fstab
state   /etc/HOSTNAME
state   /etc/hosts
state   /etc/sysconfig/network
The last four lines of the file contain the entries that were added to support the process of
modifying IP addresses dynamically. The file is only referenced when read-only root is used.
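The way these entry types behave at boot can be sketched with a small parser. This is an illustration of the semantics only (the sample entries and file name are ours, not the actual SLES implementation): dirs and empty entries receive volatile scratch copies, while files and state entries receive preserved read/write copies, and comment lines are ignored:

```shell
# A tiny rwtab-style sample file for the demonstration.
cat > /tmp/rwtab.demo <<'EOF'
dirs /var/log
#dirs /var/gdm
empty /tmp
files /etc/resolv.conf
state /etc/hosts
EOF

# Classify each entry the way a read-only-root boot script would:
# dirs/empty -> scratch disk, files/state -> state disk. Lines whose
# first field starts with "#" match neither pattern and are skipped.
awk '$1 == "dirs"  || $1 == "empty" { print "scratch: " $2 }
     $1 == "files" || $1 == "state" { print "state:   " $2 }' /tmp/rwtab.demo
```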
Modifying /etc/zipl.conf
The file zipl.conf must be modified to tell the boot procedure that read-only root is used and
to identify the state and scratch disk devices. The following example shows the SLES11_SP2
boot section from our read-only root template:
[SLES11_SP2]
image = /boot/image-3.0.13-0.27-default
target = /boot/zipl
ramdisk = /boot/initrd-3.0.13-0.27-default,0x2000000
parameters = "root=/dev/disk/by-path/ccw-0.0.0100-part1 hvc_iucv=8
TERM=dumb vmpoff=LOGOFF vmhalt=LOGOFF"
The following example shows the modified boot section. The parameters line was changed
to add readonlyroot, together with the scratch and state keywords. The values
of scratch and state are the new minidisks that were added to the Linux machine to support
the read/write parts of the file system:
[SLES11_SP2]
image = /boot/image-3.0.13-0.27-default
target = /boot/zipl
ramdisk = /boot/initrd-3.0.13-0.27-default,0x2000000
parameters = "readonlyroot scratch=/dev/disk/by-path/ccw-0.0.0103-part1
state=/dev/disk/by-path/ccw-0.0.0104-part1
root=/dev/disk/by-path/ccw-0.0.0100-part1 hvc_iucv=8 TERM=dumb vmpoff=LOGOFF
vmhalt=LOGOFF"
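Before the reboot, it is worth checking that all three new parameters made it into the file. The following sketch runs the checks against a scratch copy; in practice, you would point it at the real /etc/zipl.conf:

```shell
# A scratch copy of the relevant parameters line (illustrative content).
cat > /tmp/zipl.conf.demo <<'EOF'
parameters = "readonlyroot scratch=/dev/disk/by-path/ccw-0.0.0103-part1
state=/dev/disk/by-path/ccw-0.0.0104-part1 root=/dev/disk/by-path/ccw-0.0.0100-part1"
EOF

# Report each required keyword that is present; a missing keyword prints nothing.
for kw in readonlyroot scratch= state=; do
  grep -q "$kw" /tmp/zipl.conf.demo && echo "found: $kw"
done
```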
Symlink /etc/mtab
The next step is to replace /etc/mtab with a symlink to /proc/mounts. This is an
important step: it allows the mount table to be updated even though the /etc directory is
read-only. Enter the following command:
s11romst:/etc # ln -sf /proc/mounts /etc/mtab
s11romst:/etc # ll mtab
lrwxrwxrwx 1 root root 12 Oct 16 15:11 mtab -> /proc/mounts
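The effect of ln -sf can be rehearsed in a scratch directory first; the -f flag removes the existing regular file and creates the symlink in one step:

```shell
# Rehearse the replacement in a throwaway directory.
mkdir -p /tmp/mtab-demo
touch /tmp/mtab-demo/mtab                 # stands in for the original /etc/mtab
ln -sf /proc/mounts /tmp/mtab-demo/mtab   # replace the file with the symlink
readlink /tmp/mtab-demo/mtab              # prints /proc/mounts
```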
Modify /etc/fstab
At boot time, it is normal for Linux to check the integrity of any file systems that it uses. This
check is done at regular intervals. Because a read-only file system cannot be checked, the file
/etc/fstab must be modified to indicate that the root file system should not be checked. The
following example shows the /etc/fstab file before it is changed to turn off file system
checking:
/dev/system/root                      /      ext3  acl,user_xattr  1 1
/dev/disk/by-path/ccw-0.0.0200-part1  /boot  ext3  acl,user_xattr  1 0
The last field on each line indicates the sequence in which a file system is checked. Changing
this value to zero for the root file system disables checking.
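A change of this kind is easy to script. The following sketch edits a copy rather than the real /etc/fstab (the sample entries mirror the example above), setting the sixth (fsck pass) field to 0 for the root file system:

```shell
# Sample fstab content, illustrative only.
cat > /tmp/fstab.demo <<'EOF'
/dev/system/root / ext3 acl,user_xattr 1 1
/dev/disk/by-path/ccw-0.0.0200-part1 /boot ext3 acl,user_xattr 1 0
EOF

# Set the fsck pass number (field 6) to 0 on the line whose mount point is "/".
awk '$2 == "/" { $6 = 0 } { print }' /tmp/fstab.demo
```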
Chapter 12. Using z/Pro as a Cloud infrastructure for Oracle
Running zipl
Because /etc/zipl.conf was changed, run the zipl command to re-create the boot data.
Reboot
S11ROMST was shut down, logged off, and logged back on. When it came back up, the
read-only root startup took effect.
The lsdasd command shows the disk configuration, including the new disk devices for
read-only root, as shown in the following example:
s11rogld:/etc # lsdasd
Bus-ID    Status  Name   Device  Type  BlkSz  Size     Blocks
==============================================================================
0.0.0100  active  dasda  94:0    ECKD  4096   7042MB   1802880
0.0.0101  active  dasdb  94:4    ECKD  4096   21128MB  5409000
0.0.0103  active  dasdc  94:8    ECKD  4096   35MB     9000
0.0.0300  active  dasdd  94:12   FBA   512    256MB    524288
0.0.0301  active  dasde  94:16   FBA   512    512MB    1048576
0.0.0104  active  dasdf  94:20   ECKD  4096   35MB     9000
All of the files or directories after the /var mount point are mounted over the two new disk
devices, as shown in the following example:
s11rogld:/etc # df
Filesystem                   1K-blocks  Used Available Use% Mounted on
rootfs                          507624   ...
udev                            510208   ...
tmpfs                           510208   ...
/dev/dasda1                     507624   ...
/dev/mapper/system_vg-opt_lv    253920   ...
/dev/mapper/system_vg-tmp_lv     34764   ...
/dev/mapper/system_vg-usr_lv   3096336   ...
/dev/mapper/system_vg-var_lv    516040   ...
/dev/dasdf1                      34764   640     32332   2% /var/lib/readonlyroot/state
/dev/dasdc1                      34764  5056     27916  16% /var/lib/readonlyroot/scratch
/dev/dasdc1                      34764  5056     27916  16% /var/cache/man
/dev/dasdc1                      34764  5056     27916  16% /var/lock
/dev/dasdc1                      34764  5056     27916  16% /var/log
/dev/dasdc1                      34764  5056     27916  16% /var/run
/dev/dasdc1                      34764  5056     27916  16% /tmp
/dev/dasdc1                      34764  5056     27916  16% /var/lib/dhcpcd
/dev/dasdc1                      34764  5056     27916  16% /var/tmp
/dev/dasdc1                      34764  5056     27916  16% /media
/dev/dasdc1                      34764  5056     27916  16% /etc/ntp.conf
/dev/dasdc1                      34764  5056     27916  16% /etc/resolv.conf
/dev/dasdc1                      34764  5056     27916  16% /etc/yp.conf
/dev/dasdc1                      34764  5056     27916  16% /etc/lvm/.cache
/dev/dasdc1                      34764  5056     27916  16% /var/adm/netconfig/md5
/dev/dasdc1                      34764  5056     27916  16% /var/lib/logrotate.status
/dev/dasdc1                      34764  5056     27916  16% /var/lib/ntp
/dev/dasdc1                      34764  5056     27916  16% /root
/dev/dasdf1                      34764   640     32332   2% /var/lib/misc/random-seed
/dev/dasdf1                      34764   640     32332   2% /etc/ssh
/dev/dasdf1                      34764   640     32332   2% /etc/fstab
/dev/dasdf1                      34764   640     32332   2% /etc/HOSTNAME
/dev/dasdf1                      34764   640     32332   2% /etc/hosts
/dev/dasdf1                      34764   640     32332   2% /etc/sysconfig/network
2. The Create a list filter window opens. Enter S11* in the first field, as shown in
Figure 12-23. Click Apply.
3. The list of users now contains only those that match the filter criteria that were entered.
From the filtered list, drag S11ROMST to the Add Gold Factory. It is added to the gold list and
can now be cloned.
4. From Manage Users, click Cloning to open the window that is shown in Figure 12-24. This
shows the cloning window with our gold images on the left of the display, including
S11ROMST. Drag it to the Clone Factory to see the entry fields that prepare it to be cloned to
S11ROGLD.
5. A password for the new machine must be entered, along with the minidisk allocation
parameters. No IP address parameters need to be entered because the clone uses the same
IP address as the gold image; this is not a concern provided that both machines are not
running at the same time. The cloning operation allocates new minidisks from the DirMaint
group called MOD9. Click Start Cloning on the right. This creates a directory entry that
is based on the gold image's directory entry. The minidisks are created based on any
read/write minidisks in the gold image, and their contents are copied from the gold image
to the clone.
After the cloning operation is complete, S11ROGLD must be modified to replace the root (200)
minidisk with a LINK to S11ROMST 200. When it is initially loaded, it uses a read-only version of
the root file system.
The network device on the clone looks similar to the following example that shows that eth1 is
on our private 10 subnet:
s11rogld:/etc # ifconfig
eth1
Link encap:Ethernet HWaddr 02:00:00:00:00:E5
inet addr:10.1.1.101 Bcast:10.1.1.255 Mask:255.255.255.0
inet6 addr: fe80::ff:fe00:e5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1492 Metric:1
RX packets:263 errors:0 dropped:0 overruns:0 frame:0
TX packets:234 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:21240 (20.7 Kb) TX bytes:43457 (42.4 Kb)
s11rogld:/etc # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.1.1.100      0.0.0.0         UG    0      0        0 eth1
10.1.1.0        0.0.0.0         255.255.255.0   U     0      0        0 eth1
127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth1
The TCP/IP stack on z/VM must also be configured to access the new subnet so that ZVPS
can monitor each of the new Linux machines. The following example shows the modified
GATEWAY section:
GATEWAY
; Network         Subnet            First      Link             MTU
; Address         Mask              Hop        Name             Size
; --------------- ----------------- ---------- ---------------- -----
10.1.1.0/24                         9.12.7.5   OSA3020          1500
DEFAULTNET                          9.12.4.1   OSA3020          1500
; (End GATEWAY Static Routing information)
Moving the mouse over a gold image name displays the bubble that shows part of the
virtual machine configuration. From this bubble, we can see that there are two 50-cylinder
minidisks.
When you are cloning multiple virtual machines, you can enter the new name pattern in
this form. The pattern consists of the characters that are common to every new virtual
machine name; the remaining part of the name is an increasing numeric value, up to the
number of machines that are created. The pattern must leave room for all of the possible
numbers to be generated. Because 100 machines are cloned, the pattern must be five or
fewer characters. The pattern S11OR was chosen in our example.
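The naming rule can be checked mechanically: the pattern plus the largest numeric suffix must still fit within the eight-character z/VM user ID limit. A quick sketch:

```shell
# Generate the 100 clone names from the S11OR pattern and flag any name
# that exceeds the eight-character z/VM user ID limit.
pattern=S11OR
count=100
for i in $(seq 1 $count); do
  echo "${pattern}${i}"
done > /tmp/clones.txt

# Count names that are too long; S11OR100 is exactly eight characters,
# so the expected result is zero.
awk 'length($0) > 8 { bad++ } END { print (bad ? bad : 0) " too-long names" }' /tmp/clones.txt
```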
The next field is the CP (or ESM) password that is used for all of the new machines.
Next is the starting value that is to be generated for the first clone, then the number of
clones to be created.
The next section is where the IP address is selected for the first clone. IP addresses are
assigned automatically to subsequent clones from that point in the list.
The next section determines the lifetime of the cloned images. A value entered here causes
the clones to expire (and be deleted) after the time frame that is indicated. An optional
expiration email can be sent to indicate that the clones are about to expire. The server
lifetime can be changed in the Manage Users menu section of the zPro Server Expirations
window.
The minidisks for the clones were put in our MOD27 group, which is made up of 3390 Model
27 devices. The last section of the cloning form shows the group that was selected. To see
how much disk space is available in that group, click Check Freespace. The results are
shown in Figure 12-27 on page 304. It shows that there is plenty of disk space for 100
copies of 100 cylinders of minidisks.
2. When you are satisfied that the values are entered correctly, click Start Cloning on the
right side of the form. As shown in Figure 12-28, the action log (which appears at the
bottom of the window) shows the current step of the cloning process for each new virtual
machine.
A window opens that shows the clone that is in the process of being created, as shown in
Figure 12-29.
After all of the clones are created, the user status display shows them all as Inactive, as
shown in Figure 12-30.
3. Click a virtual machine name to open the window that is shown in Figure 12-31.
4. Select the Start option, then click Go to activate (XAUTOLOG) the virtual machine. The Alert
window opens, as shown in Figure 12-32, in which the start is confirmed.
5. To activate many virtual machines, select the check boxes to the left of the machine
names that must be activated, then click Action at the top of the list to open the same
window. Click Start Go.
As shown in Figure 12-33, many of the virtual machines are activated. The column to the
right of the machine name shows DSC, which indicates that it is running and is
disconnected.
12.6 References
The following documents and books were used as reference material in writing this chapter:
z/VM V6R2 Directory Maintenance Facility Tailoring and Administration Guide,
SC24-6190-02
z/VM V6R2 Systems Management Application Programming, SC24-6234-03
z/VM and Linux on IBM System z: The Cloud Computing Cookbook for z/VM 6.2 RHEL 6.2
and SLES11 SP2
SUSE Linux Enterprise Server 11 SP2 Release Notes, which is available at this website:
https://www.suse.com/releasenotes/x86_64/SUSE-SLES/11-SP2/
Sharing and maintaining Linux under z/VM, REDP-4322
Part 4
Appendixes
We included the following appendixes in this publication to provide information that might be
useful when you set up your Oracle database on Linux on System z:
Appendix A, Setting up Red Hat Enterprise Linux 6.3 for Oracle on page 311, which describes
setting up Linux for an Oracle Database. Oracle products require that additional rpms are installed.
Appendix B, Installing Oracle and creating a database 11.2.0.3 on Red Hat Enterprise
Linux 6 on page 335. The following Redpapers are available about how to install Oracle
Real Application Clusters on Linux on System z:
http://www.redbooks.ibm.com/abstracts/redp4788.html?Open
http://www.redbooks.ibm.com/abstracts/redp9131.html?Open
Appendix C, Working effectively with Oracle support on page 349. There is a dedicated
team working on this platform.
Appendix D, Additional material on page 357.
Appendix A.
Setting up Red Hat Enterprise Linux 6.3 for Oracle
This appendix includes the following topics:
Introduction
Step 1: Starting the Red Hat bootstrap loader
Step 2: Installing Red Hat Enterprise Linux
Step 3: Running oravalidate rpm to import all other rpms
Installing and setting up vncserver
Step 4: Customizing the Linux setup for Oracle
Summary
A.1 Introduction
Before you begin this process, ensure that the z/VM guest directory entries were prepared and
that the user can log in to z/VM and use CMS. The Red Hat installation process includes the
following two major steps, plus a third step to add the rpms that are needed for Oracle:
1. Starting the Red Hat bootstrap loader
2. Installing Red Hat Enterprise Linux
3. Running oravalidate rpm to bring in all the other rpms
These steps are described next.
Transfer to z/VM the kernel and the initial RAM disk (initrd) that are necessary to begin the
installation. Be sure to set the logical record length to 80 before transferring the kernel
and initrd (QUOTE LOCSITE FIX 80 if transferring via FTP from z/VM, or SITE FIX 80 if
transferring via FTP to z/VM).
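The reason for the record-length setting is that these are fixed-format files: with FIX 80, every record is exactly 80 bytes, padded with blanks. A small sketch of the idea (the parameter text is illustrative):

```shell
# Pad a sample parameter line to a fixed 80-byte record, the way a
# FIX 80 file stores it (no newline, blank padding on the right).
printf 'ro ramdisk_size=40000\n' |
  awk '{ printf "%-80s", $0 }' > /tmp/fix80.demo

wc -c < /tmp/fix80.demo    # 80
```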
Next, create the EXEC that is shown in Example A-4, then run it to begin the installation.
Example A-4 Example Punch file: Redhat exec
The PUNCH commands in the EXEC load the necessary images into the reader in the correct
order and prepare them to be loaded. The last command IPLs the reader, which loads the
files that were punched.
When the initial program load (IPL) is run, the reader is loaded (as shown in Example A-5)
and the Linux guest is ready to load.
Example A-5 RDRLIST
PAZXXT11 RDRLIST  S0  V                              Col=1 Alt=0
Cmd  Filename Filetype       Records  Date   Time
     KERNEL   IMG             113488  11/09  13:02:28
     GENERIC  PRM                  2  11/09  13:02:33
     INITRD   IMG             223040  11/09  13:02:33
Because the CONF file contains the networking and DASD information, the installation
proceeds in silent mode, which brings up Figure A-6 on page 317.
GENERIC  PRM   A1  V  53       2     1  11/30/12  8:19
INITRD   IMG   A1  F  80  226772  4430  11/12/12  9:45
KERNEL   IMG   A1  F  80  113591  1723  11/12/12  9:44
REDHAT   EXEC  A1  V  38       9     1  11/26/12 17:26
RHU3     CONF  A1  F  80      12     1  11/26/12 17:26
The installation process takes you through several windows, as shown in Figure A-1,
Figure A-2 on page 315, Figure A-3 on page 316, Figure A-4 on page 316, and Figure A-5 on
page 317.
Figure A-1 Choose the language to use during Red Hat Installation
Figure A-2 Identify the media that contains Red Hat 6.3 code
We selected the two available DASD disks to use for storage, as shown in Figure A-8.
These disks were used before, so we confirmed that we wanted to write over the existing
data, as shown in Figure A-9 on page 319.
Enter the host name to be used by the Linux guest, as shown in Figure A-10 on page 320.
Choose your password and make sure that it is noted, as shown in Figure A-14.
We chose the target DASDs for the installation, as shown in Figure A-16.
The disks are now formatted and an LVM layout is created, as shown in Figure A-17.
We chose the Basic Server installation and added the other rpms for Oracle later by using the
oravalidate process, as shown in Figure A-18.
This stage takes 10 - 20 minutes to complete. You see the packages as they are installed, as
shown in Figure A-19.
After you reboot the Linux guest, install the errata that are recommended for Red Hat
6.3 on Linux on System z, which are available at this website:
http://rhn.redhat.com/errata/RHSA-2012-1156.html
An example of the errata is shown in Example A-7.
Example A-7 Example of the installation of the errata
Packages:
kernel-2.6.32-279.9.1.el6.s390x.rpm
kernel-firmware-2.6.32-279.9.1.el6.noarch.rpm
kernel-devel-2.6.32-279.9.1.el6.s390x.rpm
kernel-headers-2.6.32-279.9.1.el6.s390x.rpm
Install:
[root@pazxxt10 RHEL6.3]# rpm -ivh kernel-devel-2.6.32-279.9.1.el6.s390x.rpm
warning: kernel-devel-2.6.32-279.9.1.el6.s390x.rpm: Header V3 RSA/SHA256
Signature, key ID fd431d51: NOKEY
Preparing...        ########################################### [100%]
   1:kernel-devel   ########################################### [100%]
[root@pazxxt10 RHEL6.3]# rpm -ivh kernel-headers-2.6.32-279.9.1.el6.s390x.rpm
warning: kernel-headers-2.6.32-279.9.1.el6.s390x.rpm: Header V3 RSA/SHA256
Signature, key ID fd431d51: NOKEY
Preparing...        ########################################### [100%]
   1:kernel-headers ########################################### [100%]
[root@pazxxt10 RHEL6.3]# rpm -ivh kernel-firmware-2.6.32-279.9.1.el6.noarch.rpm
Lines of output are produced as the other needed rpms are installed.
Example A-8 Running oravalidate
Dependencies Resolved
==============================================================================
 Package                Arch   Version          Repository                Size
==============================================================================
Installing:
 ora-val-rpm-EL6-DB     s390x  11.2.0.3-1       /ora-val-rpm-EL6-DB-11.2.0.3-1.s390x  0.0
Installing for dependencies:
 cloog-ppl              s390x  0.15.7-1.2.el6   rhel-source   89 k
 compat-libcap1         s390x  1.10-1           rhel-source   18 k
 compat-libstdc++-33    s390   3.2.3-69.el6     rhel-source  182 k
 compat-libstdc++-33    s390x  3.2.3-69.el6     rhel-source  186 k
 cpp                    s390x  4.4.6-3.el6      rhel-source  3.2 M
 elfutils-libelf-devel  s390x  0.152-1.el6      rhel-source   31 k
 gcc                    s390x  4.4.6-3.el6      rhel-source  6.5 M
 gcc-c++                s390x  4.4.6-3.el6      rhel-source  4.2 M
 glibc                  s390   2.12-1.47.el6    rhel-source  3.8 M
 glibc-devel            s390   2.12-1.47.el6    rhel-source  973 k
 glibc-devel            s390x  2.12-1.47.el6    rhel-source  974 k
 glibc-headers          s390x  2.12-1.47.el6    rhel-source  593 k
 kernel-headers         s390x  2.6.32-220.el6   rhel-source  1.6 M
 ksh                    s390x  20100621-12.el6  rhel-source  709 k
 libaio                 s390   0.3.107-10.el6   rhel-source   21 k
 libaio-devel           s390   0.3.107-10.el6   rhel-source   13 k
 libaio-devel           s390x  0.3.107-10.el6   rhel-source   13 k
 libgcc                 s390   4.4.6-3.el6      rhel-source   85 k
 libstdc++              s390   4.4.6-3.el6      rhel-source  306 k
 libstdc++-devel        s390   4.4.6-3.el6      rhel-source  1.6 M
 libstdc++-devel        s390x  4.4.6-3.el6      rhel-source  1.6 M
 mpfr                   s390x  2.4.1-6.el6      rhel-source  162 k
 nss-softokn-freebl     s390   3.12.9-11.el6    rhel-source  156 k
 ppl                    s390x  0.10.2-11.el6    rhel-source  1.2 M
Transaction Summary
==============================================================================
Install
25 Package(s)
Total download size: 28 M
Installed size: 79 M
Is this ok [y/N]:
Downloading Packages:
------------------------------------------------------------------------------
Total                                            27 MB/s |  28 MB     00:01
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : libstdc++-devel-4.4.6-3.el6.s390x                         1/25
Installing : elfutils-libelf-devel-0.152-1.el6.s390x                   2/25
Installing : kernel-headers-2.6.32-220.el6.s390x                       3/25
Appendix A. Setting up Red Hat Enterprise Linux 6.3 for Oracle
Installing : libgcc-4.4.6-3.el6.s390                                   4/25
Installing : nss-softokn-freebl-3.12.9-11.el6.s390                     5/25
Installing : glibc-2.12-1.47.el6.s390                                  6/25
Installing : glibc-headers-2.12-1.47.el6.s390x                         7/25
Installing : glibc-devel-2.12-1.47.el6.s390                            8/25
Installing : mpfr-2.4.1-6.el6.s390x                                    9/25
Installing : libaio-0.3.107-10.el6.s390                               10/25
Installing : libstdc++-4.4.6-3.el6.s390                               11/25
Installing : libstdc++-devel-4.4.6-3.el6.s390                         12/25
Installing : libaio-devel-0.3.107-10.el6.s390                         13/25
Installing : libaio-devel-0.3.107-10.el6.s390x                        14/25
Installing : cpp-4.4.6-3.el6.s390x                                    15/25
Installing : glibc-devel-2.12-1.47.el6.s390x                          16/25
Installing : compat-libstdc++-33-3.2.3-69.el6.s390x                   17/25
Installing : compat-libcap1-1.10-1.s390x                              18/25
Installing : ksh-20100621-12.el6.s390x                                19/25
Installing : ppl-0.10.2-11.el6.s390x                                  20/25
Installing : cloog-ppl-0.15.7-1.2.el6.s390x                           21/25
Installing : gcc-4.4.6-3.el6.s390x                                    22/25
Installing : gcc-c++-4.4.6-3.el6.s390x                                23/25
Installing : compat-libstdc++-33-3.2.3-69.el6.s390                    24/25
Installing : ora-val-rpm-EL6-DB-11.2.0.3-1.s390x                      25/25
****************************************************************************
*  Validation complete - please install any missing rpms                   *
*  The following output should display both (s390) - 31-bit and            *
*  (s390x) 64-bit rpms - Please provide the output to Oracle               *
*  Support if you are still encountering problems.                         *
****************************************************************************
Found    glibc-dev (s390)
Found    glibc-dev (s390x)
Found    libaio (s390)
Found    libaio (s390x)
Found    libaio-devel (s390)
Found    libaio-devel (s390x)
Found    compat-libstdc++-33 (s390)
Found    compat-libstdc++-33 (s390x)
Found    glibc (s390)
Found    glibc (s390x)
Found    libgcc (s390)
Found    libgcc (s390x)
Found    libstdc++ (s390)
Found    libstdc++ (s390x)
Found    libstdc++-devel (s390)
Found    libstdc++-devel (s390x)
Found    libaio-dev (s390)
Found    libaio-dev (s390x)
rhel-source/productid                                    | 1.7 kB     00:00
...
Installed products updated.
Installed:
ora-val-rpm-EL6-DB.s390x 0:11.2.0.3-1
Dependency Installed:
  cloog-ppl.s390x 0:0.15.7-1.2.el6
  compat-libcap1.s390x 0:1.10-1
  compat-libstdc++-33.s390 0:3.2.3-69.el6
  compat-libstdc++-33.s390x 0:3.2.3-69.el6
  cpp.s390x 0:4.4.6-3.el6
  elfutils-libelf-devel.s390x 0:0.152-1.el6
  gcc.s390x 0:4.4.6-3.el6
  gcc-c++.s390x 0:4.4.6-3.el6
  glibc.s390 0:2.12-1.47.el6
  glibc-devel.s390 0:2.12-1.47.el6
  glibc-devel.s390x 0:2.12-1.47.el6
  glibc-headers.s390x 0:2.12-1.47.el6
  kernel-headers.s390x 0:2.6.32-220.el6
  ksh.s390x 0:20100621-12.el6
  libaio.s390 0:0.3.107-10.el6
  libaio-devel.s390 0:0.3.107-10.el6
  libaio-devel.s390x 0:0.3.107-10.el6
  libgcc.s390 0:4.4.6-3.el6
  libstdc++.s390 0:4.4.6-3.el6
  libstdc++-devel.s390 0:4.4.6-3.el6
  libstdc++-devel.s390x 0:4.4.6-3.el6
  mpfr.s390x 0:2.4.1-6.el6
  nss-softokn-freebl.s390 0:3.12.9-11.el6
  ppl.s390x 0:0.10.2-11.el6
Complete!
The Oracle required rpms are installed and you can now prepare for the Oracle installation.
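Output in the Found format shown above can also be checked mechanically for rpms that are missing one of the two architectures. This sketch uses a small sample file; in practice you would feed it the real oravalidate output:

```shell
# Sample "Found" output; libaio is deliberately missing its s390x entry.
cat > /tmp/found.demo <<'EOF'
Found glibc-dev (s390)
Found glibc-dev (s390x)
Found libaio (s390)
EOF

# Record each (package, arch) pair, then report any package that was not
# found for both the 31-bit (s390) and 64-bit (s390x) architectures.
awk '{ arch = $3; gsub(/[()]/, "", arch); seen[$2 SUBSEP arch] = 1; pkgs[$2] = 1 }
     END { for (p in pkgs)
             if (!((p SUBSEP "s390") in seen) || !((p SUBSEP "s390x") in seen))
               print "missing an arch: " p }' /tmp/found.demo
```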
echo "kernel.shmmni=4096" >>/etc/sysctl.conf
echo "kernel.sem=250 32000 100 128" >>/etc/sysctl.conf
echo "fs.file-max=65536" >>/etc/sysctl.conf
echo "net.ipv4.ip_local_port_range=1024 65000" >>/etc/sysctl.conf
echo "net.core.rmem_default=1048576" >>/etc/sysctl.conf
echo "net.core.rmem_max=1048576" >>/etc/sysctl.conf
echo "net.core.wmem_default=262144" >>/etc/sysctl.conf
echo "net.core.wmem_max=262144" >>/etc/sysctl.conf
echo "oracle soft ..." >>/etc/security/limits.conf
echo "oracle hard ..." >>/etc/security/limits.conf
echo "oracle soft ..." >>/etc/security/limits.conf
echo "oracle hard ..." >>/etc/security/limits.conf
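Note that appending with >> is not idempotent: running the same script twice duplicates the keys. A guarded append, sketched here against a scratch stand-in for /etc/sysctl.conf, avoids that:

```shell
conf=/tmp/sysctl.demo
: > "$conf"

# Append a key=value setting only if the key is not already present.
setparm() {
  grep -q "^${1%%=*}=" "$conf" || echo "$1" >> "$conf"
}

setparm "kernel.shmmni=4096"
setparm "kernel.shmmni=4096"          # re-running is a no-op
setparm "kernel.sem=250 32000 100 128"

grep -c '^kernel\.' "$conf"           # 2
```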
A.7 Summary
You now have a Red Hat Enterprise Linux 6.3 guest ready for the installation of an Oracle
Database 11gR2 (11.2.0.3).
You should review Chapter 4, Setting up SUSE Linux Enterprise Server 11 SP2 and Red Hat
Enterprise Linux 6.2 on page 55 for customization information. Then, review Appendix B,
Installing Oracle and creating a database 11.2.0.3 on Red Hat Enterprise Linux 6 on
page 335 as a guide to install an Oracle Database with the My Oracle Support notes and the
Oracle installation manuals.
Appendix B.
Installing Oracle and creating a database 11.2.0.3 on Red Hat Enterprise Linux 6
Not all of the panels are included here. A detailed installation is described in the Oracle
documentation and the Red Hat documentation.
This appendix includes the following topics:
The panels that are shown in Figure B-3 are new with 11gR2.
Appendix B. Installing Oracle and creating a database 11.2.0.3 on Red Hat Enterprise Linux 6
Figure B-4 Error message you receive for RHEL 6 as it is a new version
Because RHEL 6 is a new version of Linux, the verification file is not included and you see an
error in the install_log file.
Select the type of installation you want, as shown in Figure B-5. In our case, we selected to
install the software only for a single instance database.
We chose the group ID, as shown in Figure B-11; we used the dba group.
After the steps in the DBCA installation process are completed, Figure B-18 shows that the
database was successfully created.
Oracle Database 11gR2 code was installed and a single instance database was created.
Appendix C.
Working effectively with Oracle support
That is, there is a team for every product that runs on or interfaces with a database on any
platform.
These teams can handle any service request (SR) that does not have platform-specific
dependencies. Because it is the same Oracle code that is running the same underlying
operations, the platform is irrelevant, and thus there are thousands of support staff members
that are trained to handle 99.9% or more of the issues you might encounter in your operation.
In our experience, platform-specific issues (those that occur only on the Linux on System z
architecture, or are caused by a specific problem in it) are rare.
However, in addition to the generic support organization, Oracle provides a team of dedicated
Linux on System z engineers. This is not because platform-specific issues are expected, but
because some features of the platform require skilled, knowledgeable engineers to assist at
times. An example is z/VM, which is not encountered on any other platform; an SR that
involves the interaction between z/VM, Linux, and Oracle (a performance issue, for example)
is greatly assisted by these technical specialists.
This team does not work independently, but with the generic teams to provide specific
relevant technical expertise and customer knowledge to a broader generic issue. For
example, the following issues require such knowledge:
Installation and performance: It helps to know that the Java JDK is provided by IBM, not by
Oracle (Sun) as it is on other platforms.
Clustering: Specific knowledge of z/VM Virtual Switch or Hipersocket architecture might
be required.
If you have an SR that you believe could benefit from this team's input, you can request in the
SR that the Linux on System z team is engaged by including the following text: Please
request the assistance of the Database Specialized Mainframe/Linux on System z team.
When you are opening a new SR, if you choose the following specific information, the SR
should be routed directly to the specialist team:
The correct platform: IBM System z with Linux
Problem Type: Issues on Linux on System z
Problem Clarification:
The source change is done at one of two levels: the current development level (12c) or the
current maintenance level (11.2.0.n). This change is made on the current reference
development platform, which changes periodically (currently, Oracle Enterprise Linux). A
determination is made whether the change can be merged into older versions (back-level
versions). If so, we generate the information to back port the change.
This back porting process is performed in response to customer request, that is, in response
to a service request. If we identify that a customer issue is resolved by a particular bug or
patch, we can request that the object code is generated for this bug on the customer's
platform and Oracle release combination. This generates the patch package (which is applied
with the OPatch tool) and loads it to My Oracle Support for a customer to download and
apply. A patch of this nature is known as
an interim patch.
Sometimes, there might be a conflict between one patch that was applied and a new one to
be applied. OPatch can detect this, and Support usually asks for an opatch lsinventory
report, a detailed listing that shows the patches that are already applied. Support can then
detect potential conflicts before patches are generated.
Conflicts occur because both patches contain changes to the same module, so we must
merge these changes before a new module is produced. This process takes longer to perform
because there are manual code changes and Quality Assurance tests to perform; however,
after this is complete, the porting process continues as before.
Because the Interim Patch process is cumbersome and the potential for conflicts can cause
delay in providing a solution, we now strongly recommend that customers make full use of the
patch collections we provide: Patch Sets and Patch Set Updates.
Patch Sets are major collections of patches and source updates that generally appear
annually and contain solutions for most of the serious issues that were discovered since the
previous patch set was delivered. The work that is required to integrate the hundreds of bug
fixes is enormous. For that reason, there is no specific time frame or schedule for delivery on
every platform because quality takes precedence over hard shipping times.
This might sound like a problem, but it was addressed by the second collection type: the
Patch Set Update. This consists of solutions for the most serious issues that are discovered
that are related to security, integrity, and availability. These are delivered quarterly on a
specific schedule, simultaneously on all platforms, including Linux on System z. The only
exception to this rule is when a patch set ships close to the scheduled date of a Patch Set
Update: work on the update cannot begin until the Patch Set is available, so in this case the
update is delayed by the time that is required to perform this work. However, this is a rare
occurrence.
Thus, by placing Patch Set Updates into their normal maintenance cycle, customers can
continually keep their systems at the highest available service level, which proactively avoids
many potential issues, and thus maintains service quality standards.
Important: Do not confuse Severity with Escalation. Severity reflects the effect on your
business, whereas escalation means bringing management attention to your SR and,
where appropriate, more resources. The next action plan comes from this direct, two-way
communication with a Support manager; severity increases are discussed during this
communication.
If you are dissatisfied with the progress or response to an SR, escalate rather than change
the severity because this gets a manager on the case immediately.
C.5 Tools
The following list of tools is not comprehensive, but it is a starting point. Some tools might
already be installed; others might need to be installed or authenticated:
z/VM:
Performance Toolkit, which is available at this website:
http://www.vm.ibm.com/related/perfkit/
Velocity Software zVPS, which is available at this website:
http://www.velocity-software.com/product.html
Linux:
Sysstat, which is available at this website:
http://sebastien.godard.pagesperso-orange.fr/
OSWatcher (for more information, see OSWatcher Black Box Analyzer User Guide,
Doc ID 461053.1)
ksar, which is available at this website:
http://sourceforge.net/projects/ksar/
Oracle:
AWR or Statspack (for more information, see Performance Tuning Using Advisors and
Manageability Features: AWR, ASH, ADDM and Sql Tuning Advisor, Doc ID 276103.1)
LTOM (for more information, see LTOM - The On-Board Monitor User Guide, Doc ID
352363.1)
RDA/OSWatcher/ProcWatcher
Appendix D.
Additional material
Attention: Linux commands that are running as root are prefixed with # in this appendix.
This book refers to additional material that can be downloaded from the Internet as described
in the following sections.
Description: Code samples in a compressed tar file

File          Location
boot.onetime  /etc/init.d/
boot.oracle   /etc/init.d/
database.rsp  /tmp/
For more information about how to use these files, see Chapter 10, Automating Oracle on
System z on page 199.
CLONE EXEC
The following CLONE EXEC attempts to use FLASHCOPY, then falls back to DDR if that does not
succeed. It clones a Linux guest on z/VM that is assumed to have minidisks 100 (Linux),
101 (Oracle), and 302 (swap):
/*+------------------------------------------------------------------+*/
/* EXEC to clone minidisks 100 101 and 302 using FLASHCOPY            */
/*+------------------------------------------------------------------+*/
Parse Arg sourceID targetID .
If sourceID = '' | sourceID = '?' | targetID = '' Then Do
say 'Syntax is:'
say 'CLONE sourceID targetID'
exit 1
End
/* verify that the source ID is logged off */
'CP QUERY' sourceID
If rc <> 45 Then Do
Say sourceID 'does not exist or is not logged off?'
exit 2
End
Say 'Are you sure you want to overwrite disks on' targetID '(y/n)?'
Parse upper pull answer .
If answer <> 'Y' then
exit 3
/* FLASHCOPY the 100, 101 and 302 disks from sourceID to targetID */
call copyDisk sourceID '100 1100' targetID '100 2100'
call copyDisk sourceID '101 1101' targetID '101 2101'
call copyDisk sourceID '302 1302' targetID '302 2302'
/* start the target virtual machine */
say "Starting new clone" targetID
'CP XAUTOLOG' targetID
exit
/*+------------------------------------------------------------------+*/
copyDisk:
/* copy a minidisk by linking the source R/O and the target R/W then */
/* try FLASHCOPY - if it fails, fall back to DDR                     */
/* Parm 1: source user ID                                            */
/* Parm 2: rdev of the minidisk to copy from R/O                     */
/* Parm 3: temporary rdev of the source disk                         */
/* Parm 4: target user ID                                            */
/* Parm 5: rdev of the minidisk to copy to R/W                       */
/* Parm 6: temporary rdev of the target disk                         */
/*+------------------------------------------------------------------+*/
Arg sourceID vdev1 vdev2 targetID vdev3 vdev4 .
/* Link source disk read-only then target disk read-write */
'CP LINK' sourceID vdev1 vdev2 'RR'
If rc <> 0 Then Do
say 'CP LINK' sourceID vdev1 vdev2 'RR failed with' rc
exit 4
End
'CP LINK' targetID vdev3 vdev4 'MR'
If rc <> 0 Then Do
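The copy step of copyDisk follows a common try-the-fast-path-then-fall-back pattern: attempt FLASHCOPY, check the return code, and run DDR only if FLASHCOPY fails. A minimal sketch of that control flow in shell (fast_copy and slow_copy are hypothetical stand-ins, not z/VM commands):

```shell
# Sketch of the FLASHCOPY-then-DDR fallback logic; fast_copy and slow_copy
# are stand-ins for the real z/VM facilities.
fast_copy() { return 1; }               # pretend the fast path is unavailable
slow_copy() { echo "copied with DDR"; }

copy_disk() {
    if fast_copy; then                  # try the fast path first
        echo "copied with FLASHCOPY"
    else                                # fall back only when it fails
        slow_copy
    fi
}

copy_disk
```

The same shape appears in the EXEC: the return code of the fast copy decides whether the slower, always-available copy runs.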
boot.onetime script
The following boot.onetime script sets the IP address and host name for a newly cloned
Linux at first boot:
#!/bin/bash
#
# /etc/init.d/boot.onetime
#
# chkconfig: 345 01 99
### BEGIN INIT INFO
# Provides: boot.onetime
# Description: upon first boot find/modify IP@ + hostname, gen SSH keys
### END INIT INFO
#
# This script requires two RHEL 6 configuration files to exist on the user ID's
# 191 disk: (1) the file RH62GOLD CONF-RH6 - the configuration file of the
# golden image and (2) $userid CONF-RH6 - configuration file of the clone, where
# $userid is the ID of the user that is running the script. It then modifies
# the IP address, host name and fully qualified domain name in three
# configuration files that contain this info. It also regenerates SSH keys
# and checks the SOFTWARE variable to determine if additional scripts need be
# copied. Finally it turns itself off via "chkconfig" so it only runs once.
#
# IBM DOES NOT WARRANT OR REPRESENT THAT THE CODE PROVIDED IS COMPLETE
# OR UP-TO-DATE. IBM DOES NOT WARRANT, REPRESENT OR IMPLY RELIABILITY,
# SERVICEABILITY OR FUNCTION OF THE CODE. IBM IS UNDER NO OBLIGATION TO
# UPDATE CONTENT NOR PROVIDE FURTHER SUPPORT.
# ALL CODE IS PROVIDED "AS IS," WITH NO WARRANTIES OR GUARANTEES WHATSOEVER.
# IBM EXPRESSLY DISCLAIMS TO THE FULLEST EXTENT PERMITTED BY LAW ALL EXPRESS,
# IMPLIED, STATUTORY AND OTHER WARRANTIES, GUARANTEES, OR REPRESENTATIONS,
# INCLUDING, WITHOUT LIMITATION, THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR
# A PARTICULAR PURPOSE, AND NON-INFRINGEMENT OF PROPRIETARY AND INTELLECTUAL
# PROPERTY RIGHTS. YOU UNDERSTAND AND AGREE THAT YOU USE THESE MATERIALS,
# INFORMATION, PRODUCTS, SOFTWARE, PROGRAMS, AND SERVICES, AT YOUR OWN
# DISCRETION AND RISK AND THAT YOU WILL BE SOLELY RESPONSIBLE FOR ANY DAMAGES
# THAT MAY RESULT, INCLUDING LOSS OF DATA OR DAMAGE TO YOUR COMPUTER SYSTEM.
# IN NO EVENT WILL IBM BE LIABLE TO ANY PARTY FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES OF ANY TYPE
# WHATSOEVER RELATED TO OR ARISING FROM USE OF THE CODE FOUND HEREIN, WITHOUT
# LIMITATION, ANY LOST PROFITS, BUSINESS INTERRUPTION, LOST SAVINGS, LOSS OF
# PROGRAMS OR OTHER DATA, EVEN IF IBM IS EXPRESSLY ADVISED OF THE POSSIBILITY
# OF SUCH DAMAGES. THIS EXCLUSION AND WAIVER OF LIABILITY APPLIES TO ALL
# CAUSES OF ACTION, WHETHER BASED ON CONTRACT, WARRANTY, TORT OR ANY OTHER
# LEGAL THEORIES.
#
#+--------------------------------------------------------------------------+
function msg()
# A wrapper around echo that also sends output to the output file in /tmp
# Args: text to echo
#+--------------------------------------------------------------------------+
{
echo "$this: $@"
echo "$@" >> $outFile
} # msg()
#+--------------------------------------------------------------------------+
function enable191disk()
# Enable the 191 disk and set the global variable disk191
#+--------------------------------------------------------------------------+
{
local devLine
local diskName
msg "Enabling the 191 disk"
chccwdev -e 191 > /dev/null 2>&1
rc=$?
if [ $rc != 0 ]; then # unable to enable 191 disk
echo "$this: Unable to enable 191, rc from chccwdev = $rc"
exit 1
fi
sleep 1                              # wait a second for disk to be ready
udevadm settle
local devLine=`grep "^0.0.0191" /proc/dasd/devices` # the line with 191
if [ $? = 0 ]; then
diskName=`echo $devLine | sed -e 's/.* is //g' | awk '{print $1}'`
disk191="/dev/$diskName"
else
msg "Error: 191 disk not found in /proc/dasd/devices"
exit 2
fi
} # enable191disk()
#+--------------------------------------------------------------------------+
function initialize()
# set up for customizing new clone
#+--------------------------------------------------------------------------+
{
this=`basename $0`
echo "$this - starting at `date`" > $outFile
thisID=$(cat /proc/sysinfo | grep "VM00 Name" | awk '{print $3}')
if [ $thisID = "RH62GOLD" ]; then # don't do anything on this ID
msg "Warning: on golden image RH62GOLD - exiting"
exit
fi
msg "this userID = $thisID"
enable191disk
} # initialize()
#+--------------------------------------------------------------------------+
function findSourceIP()
# Get the source IP address and hostName
# Args: none
#+--------------------------------------------------------------------------+
{
sourceConf="$sourceID.$confType"
cmsfslst -d $disk191 | grep $sourceID | grep $confType > /dev/null
rc=$?
if [ $rc != 0 ]; then
echo "$0: $sourceConf not found on 191 minidisk. Exiting"
exit 2
fi
export local $(cmsfscat -a -d $disk191 $sourceConf)
# set global variable names escaping any dots (.) in the strings
sourceName=$(echo "$HOSTNAME" | sed -e 's:\.:\\\.:g')
sourceHost=${HOSTNAME%%.*} # Chop domain name off to leave host name
msg "source host name = $sourceHost"
sourceIP=$(echo "$IPADDR" | sed -e 's:\.:\\\.:g')
msg "source IP address = $sourceIP"
sourceIP2=$(echo "$IPADDR2" | sed -e 's:\.:\\\.:g')
msg "source IP address 2 = $sourceIP2"
} # findSourceIP()
#+--------------------------------------------------------------------------+
function findTargetIP()
# Get my new IP address and hostname
# Args: none
#+--------------------------------------------------------------------------+
{
targetParm="$thisID.$confType"
msg "targetParm = $targetParm"
cmsfslst -d $disk191 | grep $thisID | grep $confType > /dev/null
rc=$?
if [ $rc != 0 ]; then
echo "$0: $targetParm not found on 191 minidisk. Exiting"
exit 3
fi
export local $(cmsfscat -a -d $disk191 $targetParm)
targetName=$HOSTNAME
targetHost=${HOSTNAME%%.*} # Chop domain name off to leave host name
msg "target host name = $targetHost"
targetIP=$IPADDR
msg "target IP address = $targetIP"
targetIP2=$IPADDR2
msg "target IP address 2 = $targetIP2"
} # findTargetIP()
#+--------------------------------------------------------------------------+
function modifyIP()
# Modify IP address and host name in /etc/hosts, /etc/sysconfig/network and
# /etc/sysconfig/network-scripts/ifcfg-eth0
# Args: none
#+--------------------------------------------------------------------------+
{
# TODO: this function should also modify DNS, gateway, broadcast, and so on.
eth0file="/etc/sysconfig/network-scripts/ifcfg-eth0"
eth1file="/etc/sysconfig/network-scripts/ifcfg-eth1"
msg "Modifying network values"
sed --in-place -e "s/$sourceName/$targetName/g" /etc/hosts
sed --in-place -e "s/$sourceHost/$targetHost/g" /etc/hosts
sed --in-place -e "s/$sourceIP/$targetIP/g" /etc/hosts
sed --in-place -e "s/$sourceHost/$targetHost/g" /etc/sysconfig/network
sed --in-place -e "s/$sourceIP/$targetIP/g" $eth0file
sed --in-place -e "s/$sourceIP2/$targetIP2/g" $eth1file
hostname $targetHost
# change the hostname in the two Oracle response files
msg "Modifying values in Oracle response files"
sed --in-place -e "s/HOSTNAME=xxxx/HOSTNAME=$targetName/g" /tmp/database.rsp
} # modifyIP()
#+--------------------------------------------------------------------------+
function rmSSHkeys()
# Remove the host SSH keys - when sshd starts they will be recreated
# Args: none
#+--------------------------------------------------------------------------+
{
rm /etc/ssh/ssh_host_*
} # rmSSHkeys()
#+--------------------------------------------------------------------------+
# global variables
disk191=""                       # device file name of the 191 disk
sourceID="RH62GOLD"              # VM user ID where first Linux was installed
confType="CONF-RH6"              # File type of configuration file on 191 disk
outFile="/tmp/boot.onetime.out"  # the output file
this=""                          # the name of this command
# main()
if [ "$1" = "start" ]; then # configure the system
initialize "$@"
findSourceIP
findTargetIP
modifyIP
rmSSHkeys
chkconfig boot.onetime off
fi
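boot.onetime reads its per-clone settings from a CMS file of type CONF-RH6 via cmsfscat and exports the values it finds. The file is plain key=value pairs; the variable names below come from the script, but the values are examples only:

```shell
# Example CONF-RH6 contents are plain key=value pairs (values hypothetical):
#   HOSTNAME=lnx01.example.com
#   IPADDR=10.1.1.101
#   IPADDR2=192.168.1.101
# boot.onetime exports them, then derives the short host name the same way
# as sourceHost=${HOSTNAME%%.*} does:
HOSTNAME=lnx01.example.com
short=${HOSTNAME%%.*}      # chop the domain name off
echo "$short"
```

The `%%.*` parameter expansion removes the longest suffix starting with a dot, which turns a fully qualified domain name into the short host name.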
boot.oracle script
The following boot.oracle script prepares a newly cloned RHEL 6.2 Linux system for an
installation of Oracle stand-alone or grid software:
#!/bin/bash
#
# boot.oracle - Configure this virtual machine for Oracle standalone or grid
#
# chkconfig: 345 98 2
#
# IBM DOES NOT WARRANT OR REPRESENT THAT THE CODE PROVIDED IS COMPLETE
# OR UP-TO-DATE. IBM DOES NOT WARRANT, REPRESENT OR IMPLY RELIABILITY,
# SERVICEABILITY OR FUNCTION OF THE CODE. IBM IS UNDER NO OBLIGATION TO
# UPDATE CONTENT NOR PROVIDE FURTHER SUPPORT.
# ALL CODE IS PROVIDED "AS IS," WITH NO WARRANTIES OR GUARANTEES WHATSOEVER.
# IBM EXPRESSLY DISCLAIMS TO THE FULLEST EXTENT PERMITTED BY LAW ALL EXPRESS,
# IMPLIED, STATUTORY AND OTHER WARRANTIES, GUARANTEES, OR REPRESENTATIONS,
# INCLUDING, WITHOUT LIMITATION, THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR
# A PARTICULAR PURPOSE, AND NON-INFRINGEMENT OF PROPRIETARY AND INTELLECTUAL
# PROPERTY RIGHTS. YOU UNDERSTAND AND AGREE THAT YOU USE THESE MATERIALS,
# INFORMATION, PRODUCTS, SOFTWARE, PROGRAMS, AND SERVICES, AT YOUR OWN
# DISCRETION AND RISK AND THAT YOU WILL BE SOLELY RESPONSIBLE FOR ANY DAMAGES
# THAT MAY RESULT, INCLUDING LOSS OF DATA OR DAMAGE TO YOUR COMPUTER SYSTEM.
# IN NO EVENT WILL IBM BE LIABLE TO ANY PARTY FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES OF ANY TYPE
# WHATSOEVER RELATED TO OR ARISING FROM USE OF THE CODE FOUND HEREIN, WITHOUT
# LIMITATION, ANY LOST PROFITS, BUSINESS INTERRUPTION, LOST SAVINGS, LOSS OF
# PROGRAMS OR OTHER DATA, EVEN IF IBM IS EXPRESSLY ADVISED OF THE POSSIBILITY
# OF SUCH DAMAGES. THIS EXCLUSION AND WAIVER OF LIABILITY APPLIES TO ALL
# CAUSES OF ACTION, WHETHER BASED ON CONTRACT, WARRANTY, TORT OR ANY OTHER
# LEGAL THEORIES.
#
#+--------------------------------------------------------------------------+
function msg()
# A wrapper to send messages to console and to the output file
# Args: text to echo
#+--------------------------------------------------------------------------+
{
echo "$this: $@"
echo "$@" >> $outFile
} # msg()
#+--------------------------------------------------------------------------+
function section()
# A wrapper to send messages for the start of a new section
# Args: text to echo
#+--------------------------------------------------------------------------+
{
echo "" >> $outFile
msg "$@"
} # section()
#+--------------------------------------------------------------------------+
function exitCmd()
# Issue a command, check return code and abort if non-zero
# Arg 1: Exit code
# Remaining args: Command to issue
#+--------------------------------------------------------------------------+
{
local exitVal=$1
shift
cmd="$@"
msg "running: $cmd"
$cmd
rc=$?
if [ "$rc" != 0 ]; then # issue message and abort
msg "Severe error: $cmd returned $rc"
msg "Exiting with $exitVal"
chkconfig $this off
exit $exitVal
fi
return $rc
} # exitCmd()
#+--------------------------------------------------------------------------+
function checkForRPMs()
# Check for co-requisite Oracle RPMs
# Arg 1: "ora" or "grid"
#+--------------------------------------------------------------------------+
{
local type=$1
if [ ! -f "$oravalRPM" ]; then # co-req RPM not found => error
msg "Error: $oravalRPM not found - exiting with 5"
exit 5
fi
if [ "$type" = "grid" -a ! -f "$cvuqdiskRPM" ]; then # error
msg "Error: $cvuqdiskRPM not found - exiting with 6"
exit 6
fi
} # checkForRPMs()
#+--------------------------------------------------------------------------+
function yumInstallRPMs()
# Install s390 and s390x RPMs for Oracle in the global variable "allRPMs"
# Also install "ksh" which only has a s390x flavor. Finally, install the
# Oracle RPM that tests co-reqs: ora-val-rpm-EL6-DB-11.2.0.3-1.s390x.rpm
# Arg 1: "ora" or "grid"
#+--------------------------------------------------------------------------+
{
local type=$1
local nextRPM
section "Installing all RPMs with yum"
for nextRPM in $allRPMs; do # install this RPM
warnCmd yum -y -q install $nextRPM.s390
warnCmd yum -y -q install $nextRPM.s390x
done
# install the ksh RPM which only has an s390x flavor
warnCmd yum -y -q install ksh
# install the Oracle RPM that tests co-reqs
section "Installing RPM $oravalRPM to test co-reqs"
warnCmd rpm -i $oravalRPM
# for grid servers, also install the cvuqdisk RPM
if [ "$type" = "grid" ]; then # also install the cvuqdisk RPM
warnCmd rpm -i $cvuqdiskRPM
fi
} # yumInstallRPMs()
#+--------------------------------------------------------------------------+
function createGroup()
# Create a group if it doesn't already exist
# Arg 1: The group name
# Arg 2: The group ID
#+--------------------------------------------------------------------------+
{
# set the three required parameters
local theGroup=$1
local gid=$2
# check if the group exists
grep "^$theGroup" /etc/group > /dev/null 2>&1
if [ $? = 0 ]; then # group exists => issue warning message
msg "group $theGroup already exists"
else # group doesn't exist => create it
warnCmd groupadd -g $gid $theGroup
fi
} # createGroup()
#+--------------------------------------------------------------------------+
function createUser()
# Create one user if it doesn't already exist
# Arg 1: The user name
# Arg 2: The user ID
# Arg 3: The primary group
# Arg 4: Additional groups separated by commas
#+--------------------------------------------------------------------------+
{
# set the three required parameters
local user=$1
local uid=$2
local mainGrp=$3
local suppGrps
local ksh="/bin/ksh"
# check if the user exists
id $user > /dev/null 2>&1
if [ $? = 0 ]; then # user exists => issue warning message
msg "user $user already exists"
local profile
user=$1
if [ "$user" = "ora" ]; then # use oracle global var
profile=$oraProfile
else
profile=$gridProfile
fi
echo 'ulimit -n 65536' >> $profile
echo 'ulimit -u 16384' >> $profile
} # setUlimits()
#+--------------------------------------------------------------------------+
function createOraProfile()
# Set environment variables and PATH variable in oracle's .profile
# Args: none
#+--------------------------------------------------------------------------+
{
if [ ! -d /home/oracle ]; then
msg "Error: /home/oracle does not exist"
exit 7
fi
# set environment variables for the oracle user
echo "export ORACLE_HOME=/opt/oracle/11.2" > $oraProfile
echo "export ORACLE_BASE=/opt/oracle" >> $oraProfile
echo "export ORACLE_SID=orcl" >> $oraProfile
echo 'export PATH=$PATH:$ORACLE_HOME/bin' >> $oraProfile
chown oracle.oinstall $oraProfile
# add ulimit commands
setUlimits oracle
} # createOraProfile()
#+--------------------------------------------------------------------------+
function createGridProfile()
# Set environment variables and PATH variable in grid's .profile
# Args: none
#+--------------------------------------------------------------------------+
{
if [ ! -d /home/grid ]; then
msg "Error: /home/grid does not exist"
exit 8
fi
# set environment variables for the grid user
echo "export GRID_HOME=/opt/grid/??" > $gridProfile
echo "export GRID_BASE=/opt/grid" >> $gridProfile
echo 'export PATH=$PATH:$GRID_HOME/bin' >> $gridProfile
chown grid.oinstall $gridProfile
# add ulimit commands
setUlimits grid
} # createGridProfile()
#+--------------------------------------------------------------------------+
function enableLUNs()
# Create the config file /etc/zfcp.conf and run zfcpconf.sh which onlines LUNs
# Args: none
# return: 0 = success
#         1 = one of 4 FCP variables not set
#+--------------------------------------------------------------------------+
{
if [ ${#FCP400WWPN} = 0 ]; then
msg "Warning: FCP400WWPN is not set in $parmFile"
return 1
fi
if [ ${#FCP500WWPN} = 0 ]; then
msg "Warning: FCP500WWPN is not set in $parmFile"
return 1
fi
if [ ${#FCPLUN1} = 0 ]; then
msg "Warning: FCPLUN1 is not set in $parmFile"
return 1
fi
if [ ${#FCPLUN2} = 0 ]; then
msg "Warning: FCPLUN2 is not set in $parmFile"
return 1
fi
# export the variables in the CONF file and create 4 LUNs in /etc/zfcp.conf
section "Enabling LUNs in /etc/zfcp.conf with zfcpconf.sh"
echo "0.0.0400 $FCP400WWPN $FCPLUN1" >> /etc/zfcp.conf
echo "0.0.0400 $FCP400WWPN $FCPLUN2" >> /etc/zfcp.conf
echo "0.0.0500 $FCP500WWPN $FCPLUN1" >> /etc/zfcp.conf
echo "0.0.0500 $FCP500WWPN $FCPLUN2" >> /etc/zfcp.conf
# run zfcpconf.sh to configure LUNs which reads /etc/zfcp.conf
msg "cmd: /sbin/zfcpconf.sh"
/sbin/zfcpconf.sh # can't check rc as it always returns 1
} # enableLUNs()
#+--------------------------------------------------------------------------+
function mkLogicalVolume()
# Make a logical volume named from the partitions passed in with the
# following characteristics:
#   Number of stripes = num partitions (-i $numPartitions)
#   Stripe size = 64 KB (-I 64)
#   Read ahead = off (-r 0)
#   Use all space (-l 100%VG)
#   LV name = oradata_lv (-n oradata_lv) - set in the dataName global var
# Args: partitions from which to make the logical volume
#+--------------------------------------------------------------------------+
{
local allPartitions="$@"
local numPartitions="$#"
# make physical volumes of each of the partitions
section "Creating logical volume for data"
function setFCPdevices()
# Given two FCP devices (400 and 500) and two LUNs, set the LUNs online,
# make an LVM out of them and mount it over /oradata
# Further assumptions are that there are two LUNs and one WWPN for each
# of the 400 and 500 FCP devices. The values are set in variables:
# FCP<device>WWPN, FCP<device>LUN1 and FCP<device>LUN2
# Arg 1: "ora" or "grid"
# return: 0 = success
#         2 = enableLUNs() failed
#+--------------------------------------------------------------------------+
{
local type=$1
# enable the two LUNs over two channel paths
enableLUNs
if [ $? != 0 ]; then # unable to enable LUNs
msg "Warning: enableLUNs() failed - not setting up LUNs"
return 2
fi
# for Oracle standalone, make a logical volume of the LUNs then mount it
if [ "$type" = "ora" ]; then # make LV and mount it
mkLogicalVolume /dev/mapper/mpatha /dev/mapper/mpathb
mountLogicalVolume /dev/${dataName}_vg/${dataName}_lv /$dataName
else # voting disks and data FCP LUNs will be controlled by ASM
setDiskOwnership
fi
} # setFCPdevices()
#+--------------------------------------------------------------------------+
function mkDirectories()
# Make HOME and BASE directories for oracle and grid
# Arg 1: "ora" or "grid"
#+--------------------------------------------------------------------------+
{
local type=$1
# make the data directory first
warnCmd mkdir /$dataName
warnCmd chown oracle.oinstall /$dataName
echo '# change ownership of disks for Oracle ASM' > $udevFile
echo 'KERNEL=="dasd*1",ID=="0.0.0200",OWNER="grid",GROUP="dba",MODE="0660"' >> $udevFile
echo 'KERNEL=="dasd*1",ID=="0.0.0201",OWNER="grid",GROUP="dba",MODE="0660"' >> $udevFile
echo 'KERNEL=="dasd*1",ID=="0.0.0202",OWNER="grid",GROUP="dba",MODE="0660"' >> $udevFile
# set the ownership of the two data disks with the /etc/init.d/rc.local file
echo "" >> $rcFile
echo "# Set ownership of two data disks for Oracle ASM" >> $rcFile
echo "chown oracle:asmadmin /dev/mapper/mpatha" >> $rcFile
echo "chown oracle:asmadmin /dev/mapper/mpathb" >> $rcFile
} # setDiskOwnership()
#+--------------------------------------------------------------------------+
function configureOracle()
# Configure a new Linux system for Oracle, either standalone database or grid
# cluster
# Args: none
#+--------------------------------------------------------------------------+
{
if [ "$SOFTWARE" = "OracleStandalone" ]; then   # Oracle standalone server
   msg "Preparing for Oracle standalone installation"
   checkForRPMs ora         # check for Oracle RPMs
   createGroupsUsers ora    # define users and groups
   yumInstallRPMs ora       # add RPMs necessary for Oracle with yum
   setNTP                   # configure the NTP service
   setLimitsDotConf         # configure limits.conf file
   createOraProfile         # set up .profile for oracle user
   setKernelParms ora       # configure kernel parameters
   setFCPdevices ora        # set FCP devices online
   mkDirectories ora        # make directories for Oracle standalone
   mkResponseFile ora       # customize the database response file
elif [ "$SOFTWARE" = "OracleGrid" ]; then       # Oracle CRS cluster server
   msg "Preparing for Oracle grid installation"
   checkForRPMs grid        # check for Oracle RPMs
   createGroupsUsers grid   # define users and groups
   yumInstallRPMs grid      # add RPMs necessary for Oracle with yum
   setNTP                   # configure the NTP service
   setLimitsDotConf         # configure limits.conf file
   createOraProfile         # set up .profile for the oracle user
   createGridProfile        # set up .profile for the grid user
   setKernelParms grid      # configure kernel parameters
   setFCPdevices grid       # set FCP devices online
   mkDirectories grid       # make directories for Oracle grid
   mkResponseFile grid      # customize the grid response file
else                                            # not an Oracle system
   msg "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
   msg "SOFTWARE variable not set to 'OracleStandalone' or 'OracleGrid' - exiting"
   msg "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
   exit 11
fi
} # configureOracle()
#+--------------------------------------------------------------------------+
function successMsg()
# Issue a success message
# Args: none
#+--------------------------------------------------------------------------+
{
msg "*"
msg "********************************************************"
msg "Successfully completed!"
msg "********************************************************"
} # successMsg()
#+--------------------------------------------------------------------------+
# global variables
allRPMs="compat-libcap1 compat-libstdc++-33 elfutils-libelf-devel libaio-devel"
confType="CONF-RH6"              # File type of configuration file
dataName="oradata"               # Oracle data mount point, vg name, etc
baseDir="/opt"                   # directory with LV for Oracle binaries
disk191=""                       # device file name of the 191 disk
goldenID="RH62GOLD"              # user ID of the golden image
outFile="/tmp/boot.oracle.out"   # the output file
gridProfile=/home/grid/.profile  # the grid user's profile
oraProfile=/home/oracle/.profile # the oracle user's profile
cvuqdiskRPM="/tmp/cvuqdisk-1.0.9-1.rpm" # two required Oracle RPMs
oravalRPM="/tmp/ora-val-rpm-EL6-DB-11.2.0.3-1.s390x.rpm"
parmFile=""                      # the CMS parameter file
this=""                          # the name of this command
# main()
if [ "$1" = "start" ]; then      # configure the system
   initialize                    # set up
   configureOracle               # do the real work
   chkconfig boot.oracle off     # turn self off so it runs just once
   successMsg                    # success if we fall through to here
fi
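The createGroup() and createUser() functions above check for an existing group or user before creating one, so boot.oracle can safely run more than once. The same check-before-act idea, reduced to a self-contained sketch (the names here are illustrative, not from the script):

```shell
# Check-before-create pattern: only act when the item is absent, so a
# second run is a no-op.
ensure_in_list() {              # append $1 to list $2 only if it is missing
    local item=$1 list=$2
    case " $list " in
        *" $item "*) echo "$list" ;;        # already present: no change
        *)           echo "$list $item" ;;  # absent: append it
    esac
}

groups=$(ensure_in_list dba "oinstall")
groups=$(ensure_in_list dba "$groups")      # second call changes nothing
echo "$groups"
```

Scripts written this way are idempotent, which matters for first-boot automation: if the boot script is interrupted and re-run, already-created groups and users are simply skipped.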
preporacle.sh script
The following preporacle.sh script prepares a SLES 11 SP1 Linux system for a silent
installation of Oracle Database 11.2.0.3:
ORACLE_ROOT=/u01/oracle
ORACLE_INVENTORY=/u01/oraInventory
ORACLE_SOURCE_NFS=l5ntcdom.mop.ibm.com:/drivers
ORACLE_INSTALL_MNT=/mnt
ORACLE_SI_SCRIPT=db-install-11.2.0.3-SLES11SP1.rsp
ORACLE_DATA=/$ORACLE_ROOT/oradata
ORAOUTPUT=/tmp/preporacle.out
#######################################################
echo "Starting Oracle Environment Customization" > $ORAOUTPUT
echo "################################################################################" >> $ORAOUTPUT
echo "Setting the environment with the variables below:" >> $ORAOUTPUT
echo "Oracle UserId = " $ORACLE_INSTALL_UID >> $ORAOUTPUT
echo "Oracle Name = " $ORACLE_INSTALL_USER >> $ORAOUTPUT
echo "Oracle Group Name = " $ORACLE_INSTALL_GROUP >> $ORAOUTPUT
echo "Oracle GroupId = " $ORACLE_INSTALL_GID >> $ORAOUTPUT
echo "DBA GroupId = " $ORACLE_DBA_GID >> $ORAOUTPUT
echo "OPERATOR GroupId= " $ORACLE_OPER_GID >> $ORAOUTPUT
echo "Oracle Linux Home Directory = " $ORACLE_USER_HOME >> $ORAOUTPUT
echo "Oracle Installation Base Directory = " $ORACLE_ROOT >> $ORAOUTPUT
echo "Oracle Inventory Referential Directory = " $ORACLE_INVENTORY >> $ORAOUTPUT
echo "Oracle RAC Home Directory = " $ORACLE_RAC >> $ORAOUTPUT
#######################################################
#
# Unmount/mount nfs source directory
#
#######################################################
cd /
umount /mnt 1>/dev/null 2>&1
mount -t nfs $ORACLE_SOURCE_NFS /mnt >> $ORAOUTPUT
vmount=`ls -l /mnt | wc -l`
if [ $vmount -lt 2 ] ; then
echo "Oracle Source Mounted Filesystem is empty !" >> $ORAOUTPUT
#return -1
else
#######################################################
#
# Groups creation
# oracle
vgroup=`cat /etc/group | grep oinstall | wc -l `
if [ $vgroup -ne 1 ] ; then
echo "creating group oinstall" >> $ORAOUTPUT
groupadd -g $ORACLE_INSTALL_GID $ORACLE_INSTALL_GROUP
fi
echo "group oinstall OK" >> $ORAOUTPUT
# oracle dba
vgroup=`cat /etc/group | grep dba | wc -l `
if [ $vgroup -ne 1 ] ; then
echo "creating group dba" >> $ORAOUTPUT
groupadd -g $ORACLE_DBA_GID dba
fi
echo "group dba OK" >> $ORAOUTPUT
# oracle operator
echo ' if ( $3 > 262144 ) { vOra_kernel = vOra_kernel " " $3 }' >> /tmp/sysctl.txt
echo ' else { vOra_kernel = vOra_kernel " " 262144 }' >> /tmp/sysctl.txt
echo ' vline=vOra_kernel' >> /tmp/sysctl.txt
echo ' vnetcorewmemdefault=1 ' >> /tmp/sysctl.txt
echo ' }' >> /tmp/sysctl.txt
#
echo 'if ( $1 ~ /^net.core.wmem_max/ ) {' >> /tmp/sysctl.txt
echo ' if ( $3 > 1048576 ) { vOra_kernel = vOra_kernel " " $3 }' >> /tmp/sysctl.txt
echo ' else { vOra_kernel = vOra_kernel " " 1048576 }' >> /tmp/sysctl.txt
echo ' vline=vOra_kernel' >> /tmp/sysctl.txt
echo ' vnetcorewmemmax=1 ' >> /tmp/sysctl.txt
echo ' }' >> /tmp/sysctl.txt
#
echo 'if ( $1 ~ /^kernel.spin_retry/ ) {' >> /tmp/sysctl.txt
echo ' if ( $3 > 2000 ) { vOra_kernel = vOra_kernel " " $3 }' >> /tmp/sysctl.txt
echo ' else { vOra_kernel = vOra_kernel " " 2000 }' >> /tmp/sysctl.txt
echo ' vline=vOra_kernel' >> /tmp/sysctl.txt
echo ' vkernelspinretry=1 ' >> /tmp/sysctl.txt
echo ' }' >> /tmp/sysctl.txt
#
echo 'if ( $1 ~ /^vm.hugetlb_shm_group/ ) {' >> /tmp/sysctl.txt
echo ' vline = "vm.hugetlb_shm_group = "' $ORACLE_INSTALL_GID >> /tmp/sysctl.txt
echo ' vmhugetlbshmgroup=1 ' >> /tmp/sysctl.txt
echo ' }' >> /tmp/sysctl.txt
echo 'print vline' >> /tmp/sysctl.txt
echo '}' >> /tmp/sysctl.txt
echo 'END { ' >> /tmp/sysctl.txt
echo ' print "# Oracle parameters"; ' >> /tmp/sysctl.txt
echo ' if ( vkernelsem == 0 ) { print "kernel.sem = 250 32000 100 128" } ' >> /tmp/sysctl.txt
echo ' if ( vkernelshmall == 0 ) { print "kernel.shmall = 2097152" } ' >> /tmp/sysctl.txt
echo ' if ( vkernelshmmax == 0 ) { print "kernel.shmmax = 4218210304" } ' >> /tmp/sysctl.txt
echo ' if ( vkernelshmmni == 0 ) { print "kernel.shmmni = 4096" } ' >> /tmp/sysctl.txt
echo ' if ( vfsfilemax == 0 ) { print "fs.file-max = 6815744" } ' >> /tmp/sysctl.txt
echo ' if ( vfsaiomaxnr == 0 ) { print "fs.aio-max-nr = 1048576" } ' >> /tmp/sysctl.txt
echo ' if ( vnetipv4iplocalportrange == 0 ) { print "net.ipv4.ip_local_port_range = 9000 65500" } ' >> /tmp/sysctl.txt
echo ' if ( vnetcorermemdefault == 0 ) { print "net.core.rmem_default = 262144" } ' >> /tmp/sysctl.txt
echo ' if ( vnetcorermemmax == 0 ) { print "net.core.rmem_max = 4194304" } ' >> /tmp/sysctl.txt
echo ' if ( vnetcorewmemdefault == 0 ) { print "net.core.wmem_default = 262144" } ' >> /tmp/sysctl.txt
echo ' if ( vnetcorewmemmax == 0 ) { print "net.core.wmem_max = 1048576" } ' >> /tmp/sysctl.txt
export ORACLE_INSTALL_USER
export ORACLE_USER_HOME
export ORACLE_SI_SCRIPT
echo "su -c \"cd /home/$ORACLE_INSTALL_USER;. ./.profile;cd $ORACLE_INSTALL_MNT/database; ./runInstaller -silent -ignorePrereq -force -responseFile $ORACLE_USER_HOME/$ORACLE_SI_SCRIPT\" $ORACLE_INSTALL_USER" >> $ORAOUTPUT
su -c "cd /home/$ORACLE_INSTALL_USER;. ./.profile;cd $ORACLE_INSTALL_MNT/database; ./runInstaller -silent -ignorePrereq -force -responseFile $ORACLE_USER_HOME/$ORACLE_SI_SCRIPT" $ORACLE_INSTALL_USER
echo "End runInstaller."
#
##################################################################
# VERIFICATIONS
##################################################################
echo "################################################################################" >> $ORAOUTPUT
echo "Environment variables set:" >> $ORAOUTPUT
echo "Oracle UserId = " $ORACLE_INSTALL_UID >> $ORAOUTPUT
echo "Oracle Name = " $ORACLE_INSTALL_USER >> $ORAOUTPUT
echo "Oracle Group Name = " $ORACLE_INSTALL_GROUP >> $ORAOUTPUT
echo "Oracle GroupId = " $ORACLE_INSTALL_GID >> $ORAOUTPUT
echo "DBA GroupId = " $ORACLE_DBA_GID >> $ORAOUTPUT
echo "OPERATOR GroupId= " $ORACLE_OPER_GID >> $ORAOUTPUT
echo "Oracle Linux Home Directory = " $ORACLE_USER_HOME >> $ORAOUTPUT
echo "Oracle Installation Base Directory = " $ORACLE_ROOT >> $ORAOUTPUT
echo "Oracle Inventory Referential Directory = " $ORACLE_INVENTORY >> $ORAOUTPUT
##################################################################
echo "################################################################################" >> $ORAOUTPUT
echo "Verify directories creation" >> $ORAOUTPUT
ls -al /u01 >> $ORAOUTPUT
echo "################################################################################" >> $ORAOUTPUT
echo "Verify user and groups creation" >> $ORAOUTPUT
cat /etc/group | grep oinstall >> $ORAOUTPUT
cat /etc/group | grep dba >> $ORAOUTPUT
cat /etc/group | grep oper >> $ORAOUTPUT
cat /etc/passwd | grep $ORACLE_INSTALL_USER >> $ORAOUTPUT
echo "################################################################################" >> $ORAOUTPUT
echo "Verify oracle home creation" >> $ORAOUTPUT
ls -al /home/$ORACLE_INSTALL_USER >> $ORAOUTPUT
cat /home/$ORACLE_INSTALL_USER/.profile >> $ORAOUTPUT
echo "################################################################################" >> $ORAOUTPUT
echo "Verify /usr/local/bin chmod" >> $ORAOUTPUT
ls -l /usr/local >> $ORAOUTPUT
echo "################################################################################" >> $ORAOUTPUT
echo "Verify /etc/sysctl.conf" >> $ORAOUTPUT
#cat /etc/sysctl.conf >> $ORAOUTPUT
echo "################################################################################" >> $ORAOUTPUT
echo "Verify /etc/security/limits.conf" >> $ORAOUTPUT
cat /etc/security/limits.conf | grep $ORACLE_INSTALL_USER >> $ORAOUTPUT
echo "################################################################################" >> $ORAOUTPUT
echo "Verify /etc/pam.d/sshd" >> $ORAOUTPUT
cat /etc/pam.d/sshd | grep pam_limits.so >> $ORAOUTPUT
echo "################################################################################" >> $ORAOUTPUT
echo "Oracle Environment Customization End" >> $ORAOUTPUT
fi
#return 0
# End preporacle.sh
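The awk program that preporacle.sh writes to /tmp/sysctl.txt merges /etc/sysctl.conf with Oracle's minimum values: for each known parameter, it keeps the existing value when it already exceeds the minimum and substitutes the minimum otherwise. The rule for one parameter, extracted into a standalone pipeline (the input line is a sample value, not from any real system):

```shell
# Apply Oracle's minimum of 1048576 to net.core.wmem_max, keeping any
# larger existing value (same comparison the generated awk performs).
echo "net.core.wmem_max = 131072" | awk '
    $1 ~ /^net.core.wmem_max/ {
        if ($3 > 1048576) v = $3; else v = 1048576
        print $1 " = " v
    }'
```

Because 131072 is below the minimum, the pipeline emits `net.core.wmem_max = 1048576`; a value above 1048576 would pass through unchanged.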
Reference books
The following Redbooks publications are available:
- z/VM and Linux on IBM System z: The Virtualization Cookbook for SLES 11 SP1, SG24-7931-00
- Directory Maintenance Facility Tailoring and Administration Guide, Version 6 Release 1, SC24-6190-00
- Deploying a Cloud on IBM System z, REDP-4711
- Provisioning Linux on IBM System z with Tivoli Service Automation Manager, REDP-4663
- Installing Oracle 11gR2 RAC on Linux on System z, REDP-4788
- Experiences with a Silent Install of Oracle Database 11gR2 RAC on Linux on System z (11.2.0.3), REDP-9131
- Tivoli Service Automation Manager Version 7.2.2 - Installation and Administration Guide, SC34-2657-00
For more information about IBM publications, see the Redbooks website at:
http://www.redbooks.ibm.com
For more information about Tivoli, see the "Configuring the z/VM environment for Tivoli
Service Automation Manager" topic in the Tivoli Information Center at this website:
http://pic.dhe.ibm.com/infocenter/tivihelp/v10r1/index.jsp
Related publications
The publications that are listed in this section are considered particularly suitable for a more
detailed discussion of the topics that are covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topics in this
document. Note that some publications that are referenced in this list might be available in
softcopy only:
Experiences with Oracle Solutions on Linux for IBM System z, SG24-7634
Silent Installation Experiences with Oracle Database 11gR2 Real Application Clusters on
Linux on System z, REDP-9131
Installing Oracle 11gR2 RAC on Linux on System z, REDP-4788
Sharing and maintaining Linux under z/VM, REDP-4322
Optimizing Your Oracle Investment with IBM Storage Solutions, REDP-4421
An Introduction to z/VM Single System Image (SSI) and Live Guest Relocation (LGR),
SG24-8006
Using z/VM v 6.2 Single System Image (SSI) and Live Guest Relocation (LGR),
SG24-8039
Using IBM Virtualization to Manage Cost and Efficiency, REDP-4527-00
Linux on IBM System z: Performance Measurement and Tuning, SG24-6926
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, Web Docs, drafts, and additional materials at the following website:
http://www.ibm.com/redbooks
Other publications
The following publications are also relevant as further information sources:
z/VM and Linux on IBM System z: The Virtualization Cookbook for z/VM 6.2, RHEL 6.2, and
SLES 11 SP2, which is available at the following websites:
http://www.vm.ibm.com/devpages/mikemac/CKB-VM62.PDF
http://www.vm.ibm.com/devpages/mikemac/CKB-VM62.tgz
z/VM and Linux on IBM System z: The Cloud Computing Cookbook for z/VM 6.2, RHEL
6.2 and SLES 11 SP2, which is available at the following websites:
http://www.vm.ibm.com/devpages/mikemac/CKB0VM62.PDF
http://www.vm.ibm.com/devpages/mikemac/CKB-VM62.tgz
High-Availability of System Resources: Architectures for Linux on IBM System z Servers:
http://public.dhe.ibm.com/common/ssi/ecm/en/zsw03236usen/ZSW03236USEN.PDF
The following My Oracle Support notes are also relevant:
Note 1082253, Requirements for Installing Oracle 10gR2 RDBMS on SLES 10 zLinux (s390x)
Note 741646.1, Requirements for Installing Oracle 10gR2 RDBMS on RHEL 5 on zLinux (s390x)
Note 415182.1, DB Install Requirements Quick Reference - zSeries based Linux
Online resources
The following websites are also relevant as further information sources:
Oracle Technology Network:
http://otn.oracle.com
Oracle Support web page (My Oracle Support):
https://support.oracle.com
Special Interest Group of Oracle users on the mainframe:
http://www.zseriesoraclesig.org
Linux on System z:
http://www.ibm.com/developerworks/linux/linux390/
z/VM Performance and Tuning Tips, Capacity planning:
http://www.vm.ibm.com/perf/tips
IBM Tivoli Service Automation Manager, Version 7.2.2, Setting up z/VM for Linux
provisioning:
http://publib.boulder.ibm.com/infocenter/tivihelp/v10r1/index.jsp?topic=%2Fcom.ibm.tsam.doc_7.2%2Ft_config_zvm_setup.html
Hints and Tips for tuning Linux on System z:
http://www.ibm.com/developerworks/linux/linux390/perf/index.html
Back cover

Experiences with Oracle 11gR2 on Linux on System z

Installing Oracle 11gR2 on Linux on System z
Managing an Oracle environment
Provisioning an Oracle environment

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, customers, and partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

ISBN 0738438715