Oracle Real Application Clusters (RAC) 12c Best Practices
Markus Michalewicz
Oracle
Redwood Shores, CA, USA
Introduction
It is no secret that standardization can lead to significant cost savings by re-using the same parts as well as by reducing individual workflows in favor of standardized workflows that are executed repeatedly. The same ideas apply to the installation and management of Oracle Real Application Clusters (RAC) and to ensuring that best practices are applied to a given system. The installation flow of the Oracle Universal Installer (OUI), which is used to install Oracle Grid Infrastructure as well as an Oracle RAC-enabled database home, attempts to cover the most common installation scenarios, assuming that best practices are followed during the system setup (hardware and software). This article describes the steps that will allow OUI to work efficiently. It points to the critical steps that need special consideration depending on the future use of the system, and it lists the post-installation steps that may be required.
Oracle Grid Infrastructure for a Cluster requires a cluster setup at the hardware level and thereby slightly increases the pre-installation requirements in terms of storage and network setup. Realistically, however, those requirements are met by most of today's systems by default. All that may be required is a subtle change in how these systems are configured.
Oracle Grid Infrastructure for a Cluster requires shared storage (SAN or NAS devices) as well as a so-called interconnect (a dedicated connection between the servers) in addition to the public network (the network that is used to connect to the server from a client). Most of today's systems already use a SAN or NAS for non-local storage. Hence, meeting the shared storage requirements should rather be a matter of software configuration, since the physical setup has already been established. As a matter of fact, systems that are set up to either function as a failover cluster or use a virtualization solution that allows for inter-server failover or migration of virtual instances between servers are most likely already prepared to host an Oracle Grid Infrastructure for a Cluster deployment.
It is therefore Oracle's recommendation to standardize on a cluster setup for any deployment, be it a Single Instance Database or an Oracle RAC Database. This recommendation is the standard on Oracle's Engineered Systems: the Oracle Database Appliance (ODA) as well as Oracle Exadata systems use a 2-node cluster setup as their minimum configuration. It needs to be noted in this context that the cluster setup itself does not impose any additional licensing, as Oracle GI is free of charge. The licensing of such a system is determined by the software being hosted in the cluster.
Standardizing on a cluster allows for scalability in the future and immediately improves High Availability (HA) as permitted by such architectures. With respect to an Oracle Grid Infrastructure deployment, and despite the fact that Oracle Grid Infrastructure for Standalone Deployments and Oracle Grid Infrastructure for a Cluster use the same binaries, it is recommended to standardize on the cluster setup (even for a 1-node cluster). The main reason is that changing from a standalone setup to a clustered configuration requires a re-configuration, as shown in illustration 1.
Whether or not the Oracle Database can immediately benefit from the cluster setup is determined by
the database type that is deployed in the cluster. Potential database types are: 1) an Oracle Single
Instance Database, 2) an Oracle RAC One Node Database, 3) an Oracle RAC Database.
Oracle RAC databases, either Oracle RAC One Node or Oracle RAC, provide the highest level of flexibility1. Oracle RAC One Node utilizes multiple features that make it the ideal standard database deployment. From an application point of view, Oracle RAC One Node appears for all practical purposes as an Oracle Single Instance Database. Therefore, there is hardly ever any need to certify an application with Oracle RAC One Node explicitly, as long as the application is certified to run against an Oracle Database.
From an administration point of view, Oracle RAC One Node shares and benefits from the same architecture that has been used for Oracle RAC for more than a decade. This design allows Oracle RAC One Node to use the Online Database Relocation feature for uninterrupted database service in case of planned downtime, typically while patching the OS or the database home, for example. A Single Instance database, even when deployed on a cluster, needs to be taken down in this case.
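As a hedged sketch (the database name racone1, the target node and the timeout value are examples only, not taken from this paper), relocating an Oracle RAC One Node database online could look as follows:

# Relocate an Oracle RAC One Node database to another node in the cluster
# (database name, target node and timeout in minutes are example values)
srvctl relocate database -db racone1 -node dancer -timeout 30
# Verify on which node the single instance is now running
srvctl status database -db racone1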
For more information regarding Oracle RAC One Node and Oracle RAC, see http://www.oracle.com/goto/rac,
especially: http://www.oracle.com/technetwork/products/clustering/rac-one-node-wp-12c-1896130.pdf and
http://www.oracle.com/technetwork/products/clustering/rac-wp-12c-1896129.pdf
On Linux, Oracle provides pre-installation packages, one for Oracle Database 11g Rel. 2 and one for Oracle Database 12c, which can be downloaded using yum (if configured) and are named accordingly:
[root@dasher Desktop]# yum list oracle-*
oracle-rdbms-server-11gR2-preinstall.x86_64    1.0-7.el6    ol6_latest
oracle-rdbms-server-12cR1-preinstall.x86_64    1.0-8.el6    ol6_latest
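Assuming the corresponding yum repository (ol6_latest in the listing above) is configured, the 12c package can then be installed as follows; the package name is taken from the listing above:

# Install the pre-installation package for Oracle Database 12c Release 1
# (creates the required OS groups and the oracle user, and sets kernel parameters)
yum install oracle-rdbms-server-12cR1-preinstall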
The pre-installation packages will configure the OS to host an Oracle Single Instance database of the respective version. The deployment of the packages will configure OS groups and user structures as well as certain kernel parameters in accordance with Oracle Database Single Instance best practices. As these best practices do not include Oracle Grid Infrastructure, deploying those packages is only recommended as a first step; additional steps need to be performed to optimize the OS for the deployment of an Oracle Grid Infrastructure for a Cluster configuration. Those additional steps include, but are not limited to:
While the Oracle RAC installation is designed to integrate best practices into the installation flow, spending a bit more time on the preparation of the system can go a long way. In order to set up a system that is as homogeneous as possible, cloning the OS once the first server has been configured properly can simplify the deployment.
Regardless of the exact deployment model for the OS, choosing an appropriate hostname for each server to be deployed in the cluster is important, as changing the hostname is not supported once Oracle GI is configured. A hostname change will require a re-configuration of the cluster stack on this server. Each server in the cluster needs to maintain a unique hostname; ideally, all-lowercase hostnames are used (not required by design, but recommended due to bugs encountered in practice). It is arguable whether or not uniqueness should be enforced by numeric values. This paper recommends using a naming scheme for hostnames that is flexible enough to allow for cluster expansion in multiple ways. For the purpose of this paper, a 4-node cluster with the server names dasher, dancer, comet and vixen is used.
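A hedged sketch of a possible /etc/hosts layout for such a 4-node cluster is shown below; all IP addresses and the domain are placeholders, and in a real deployment the public names and the SCAN would typically be resolved via DNS:

# Public network (example addresses and domain)
192.168.1.101   dasher.example.com   dasher
192.168.1.102   dancer.example.com   dancer
192.168.1.103   comet.example.com    comet
192.168.1.104   vixen.example.com    vixen
# Private interconnect (example addresses)
10.0.0.1        dasher-priv
10.0.0.2        dancer-priv
10.0.0.3        comet-priv
10.0.0.4        vixen-priv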
Using Huge Pages is a standing recommendation for high-performance Oracle RAC systems. Configuring the system for the use of Huge Pages can be performed either as part of the OS setup or later. More information:
My Oracle Support note 361323.1 HugePages on Linux: What It Is... and What It Is Not...
My Oracle Support note 401749.1 Shell Script to Calculate Values for Linux HugePages
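A minimal sketch of a Huge Pages configuration on Linux follows; the page count below assumes roughly 3GB of combined SGA size and 2MB huge pages and must be recalculated for the actual system, for example with the script from MOS note 401749.1:

# /etc/sysctl.conf - reserve huge pages for the Oracle SGAs
# (1536 x 2MB pages = ~3GB; adjust to the actual SGA sizes)
vm.nr_hugepages = 1536

# Apply without reboot and verify the allocation
sysctl -p
grep Huge /proc/meminfo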
Example: using OUI to install Oracle Grid Infrastructure 12c and retaining the pre-selections in the first four dialogs that appear after OUI has been started will lead to an Oracle GI for a Cluster deployment (the recommendation), using a Standard Cluster (as opposed to an Oracle Flex Cluster, discussed later in this paper) and a Typical Installation. With the exception of the Typical Installation, this is the recommended deployment for Oracle GI 12c.
http://www.oracle.com/technetwork/products/clusterware/overview/oracle-clusterware-12c-overview-1969750.pdf
However, the main reason for recommending the Advanced Installation option is the choice to install Oracle Flex ASM as well as the Oracle Grid Infrastructure Management Repository (GIMR), both of which are new features available with Oracle Grid Infrastructure 12c.
Oracle Flex ASM is a new Oracle Automatic Storage Management (ASM) deployment model that increases database instance availability and reduces the Oracle ASM-related resource consumption. Oracle Flex ASM facilitates cluster-based database consolidation, as it ensures that Oracle Database 12c instances running on a particular server will continue to operate should the Oracle Flex ASM instance on that server fail.
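Whether a cluster actually runs Oracle Flex ASM, and how many ASM instances it is configured to run, can be verified after the installation, for example as follows (a sketch; run from the Grid Infrastructure home):

# Show whether the ASM cluster runs in Flex mode
asmcmd showclustermode
# Show the configured ASM cardinality and the current state of the ASM instances
srvctl config asm
srvctl status asm -detail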
The Grid Infrastructure Management Repository (GIMR) is a Single Instance Oracle Database 12c which will be installed on one of the nodes in the cluster. It is managed as a failover database and contains the Cluster Health Monitor (CHM) data. The GIMR is stored in the first ASM disk group that is created as part of the configuration. In addition, resources required by Quality of Service Management (QoS Management), such as OC4J, are added to the cluster configuration as part of the GIMR creation. Therefore, and because creating the GIMR as a post-installation step requires additional steps and is currently not supported, it is strongly recommended to configure the GIMR upfront.
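After the installation, the GIMR and the CHM data it stores can be examined, for example as follows (a sketch; the exact output varies by system):

# Show on which node the GIMR (management database) is currently running
srvctl status mgmtdb
# Dump the Cluster Health Monitor data of the last five minutes for all nodes
oclumon dumpnodeview -allnodes -last "00:05:00"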
In addition, the Advanced Installation option allows for making further choices regarding:
For those entities, the recommendations are simple and summarized as follows:
The Prerequisite Checks performed as part of the Oracle GI installation attempt to ensure that the system is configured properly to host an Oracle GI for a Cluster installation and, by extension, an Oracle Real Application Clusters database. Following the various instructions provided as part of the dialog is vital for a successful cluster operation.
The Cluster Verification Utility (CVU) will be started by OUI in the background and will check the system, providing detailed feedback on the state of the servers. For each negative check result, it will list the check performed, its status and whether or not the issue is fixable by CVU, as shown in illustration 3. The status can vary between Failed and Warning, while a particular issue is either fixable (Yes) or not (No).
While all checks listed in this dialog can be ignored (by checking the Ignore All option in the top right corner as shown in illustration 3), it is strongly recommended to address all issues prior to proceeding with the installation and configuration of an Oracle RAC Database.
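The same checks can also be run manually before OUI is started, using the Cluster Verification Utility shipped on the installation media; a sketch for the example cluster used in this paper (the -fixup option generates fix-up scripts for fixable issues):

# Pre-installation check for an Oracle Clusterware installation on all four nodes
./runcluvfy.sh stage -pre crsinst -n dasher,dancer,comet,vixen -fixup -verbose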
One check that was introduced with Oracle GI 12c is the check for physical memory, which tries to ensure that at least 4GB of physical memory are available on the server and returns negatively if the server does not provide this amount of memory. This check does not intend to indicate that Oracle GI will use this amount of memory. It attempts to ensure a stable environment once the installation is complete, as memory pressure has proven to be one of the main reasons for server failures.
Depending on the choices made during the Oracle GI installation, up to three pre-configured Oracle instances (the GIMR management database, an Oracle ASM instance and an APX instance used for ACFS when Flex ASM is enabled) could be running on a given server in the cluster (most likely the one which was used to install the stack), in addition to the Oracle Database instances that are meant to be hosted on the cluster.
That said, the main reason for requiring 4GB of physical memory may not be so obvious: shared memory. On Linux, shared memory is managed as a tmpfs (typically mounted under /dev/shm). The default size for this tmpfs is roughly 50% of the physical memory available on the server, which on a 4GB server means approximately 2GB. These 2GB will just be enough to run the three pre-configured instances plus a small database instance in addition. Less than 4GB of physical memory may lead to instances not being able to start, unless these values are manually adapted.
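A sketch of how the tmpfs size can be checked and, if necessary, increased on Linux follows; the 3G value is an example, and the change also needs to be made persistent in /etc/fstab:

# Check the current size and usage of the shared memory file system
df -h /dev/shm
# Temporarily increase it to 3GB (example value); add the size option to /etc/fstab as well
mount -o remount,size=3G /dev/shm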
Oracle Multitenant, a new option of Oracle Database 12c, introduces a multitenant container database (CDB) that can hold many pluggable databases (PDBs). The idea is that an existing database can simply be adopted, with no changes in the application tier, as a pluggable database. In this architecture, Oracle RAC provides the local high availability that is required when consolidating various business-critical applications on one system.
When using PDBs with Oracle RAC, the multitenant container database (CDB) is based on Oracle RAC. Each pluggable database can be made available on either every instance of the RAC CDB or a subset of instances. In any case, access to and management of the PDBs are regulated using Dynamic Database Services, which will also be used by applications to connect to the respective PDB, as they would in a Single Instance Oracle Database using Oracle Net Services for connectivity.
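As a hedged example, a dynamic database service for a PDB in a policy-managed RAC CDB could be created as shown below; the service name hr_svc is chosen for illustration, while the CDB raccdb1, the PDB HR and the server pool frontoffice are taken from the example configuration described later in this paper:

# Create a dynamic database service that opens the HR PDB on the instances
# of the raccdb1 CDB running in the frontoffice server pool
srvctl add service -db raccdb1 -service hr_svc -pdb hr -serverpool frontoffice
srvctl start service -db raccdb1 -service hr_svc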
The remaining steps are rather self-explanatory, and the official Oracle documentation can provide additional details as needed. Attention needs to be paid to the configuration of the disk groups used by the database, as DBCA does not automatically enforce separating the data and Fast Recovery Area (FRA) disk groups, depending on the configuration used on the system.
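A minimal sketch of pointing a database at separate disk groups for data and the FRA is shown below; the disk group names +DATA and +FRA as well as the 100G size are examples only:

# Run as SYSDBA on one instance: place data files and the Fast Recovery Area
# in separate ASM disk groups (names and size are example values)
sqlplus / as sysdba <<EOF
ALTER SYSTEM SET db_create_file_dest='+DATA' SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_recovery_file_dest_size=100G SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_recovery_file_dest='+FRA' SCOPE=BOTH SID='*';
EOF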
Milestone: What has been configured so far?
Following the steps outlined in this paper, the installation of the new Oracle RAC 12c system has
reached a critical milestone with Oracle Grid Infrastructure 12c installed and configured and at least
one database created. Depending on the choices made during the installation process, the system at
this point in time could look like the one outlined in illustration 4.
The system outlined in illustration 4 consists of four nodes (dasher, dancer, comet and vixen), all of which use Oracle Linux 6.4 with the UEK kernel (choice of the author) and have Oracle Grid Infrastructure 12c for a Cluster deployed on each server. The servers also share an Oracle RAC-enabled Database 12c home, which in this particular installation is hosted on an ACFS-based cluster file system. Two user-created server pools have been created in this cluster (frontoffice and backoffice). The multitenant container database (called raccdb1) is an Oracle RAC-based, policy-managed database, currently running on nodes dasher and dancer.
Illustration 5: A 4-node Cluster Example with 1 CDB and 2 PDBs (HR / CRM)
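As a sketch (output omitted; the database name raccdb1 is taken from the example above), the resulting configuration can be verified from any node in the cluster:

# Overview of all cluster resources, the user-created server pools
# and the policy-managed container database
crsctl stat res -t
srvctl config srvpool
srvctl status database -db raccdb1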
Contact address:
Markus Michalewicz, Oracle
500 Oracle Parkway M/S 4OP840
Redwood Shores, CA, 94065, USA
Phone:
Email:
Internet:
5 For more information see MOS note 1268927.1 - RACcheck - RAC Configuration Audit Tool
6 For more information see The Oracle Trace File Analyzer: http://www.oracle.com/technetwork/products/clustering/overview/tracefileanalyzer-2008420.pdf