D78850GC20 sg2
Editors
Arijit Ghosh
Malavika Jinka
Smita Kommini
Graphic Designer
Maheshwari Krishnamurthy
Publishers
Glenn Austin
Jayanthy Keshavamurthy
Srividya Rameshkumar
THESE eKIT MATERIALS ARE FOR YOUR USE IN THIS CLASSROOM ONLY. COPYING eKIT MATERIALS FROM THIS COMPUTER IS STRICTLY PROHIBITED
Contents
1 Introduction
Objectives 1-2
Curriculum Context 1-3
Suggested Schedule 1-4
2 Getting Started
Objectives 2-2
Naming the Core Components of an Oracle Database Server 2-3
Oracle Database Server Architecture: Overview 2-4
What You Already Know About Database Storage Architecture 2-6
Naming Logical and Physical Database Structures 2-8
What You Already Know About Process Architecture 2-10
Process Structures 2-11
Reviewing Processes 2-13
Summary 3-28
Practice Overview: Configuring for Recoverability 3-29
6 Performing Backups
Objectives 6-2
RMAN Backup Types 6-3
Incrementally Updated Backups 6-5
Incrementally Updated Backups: Example 6-6
Fast Incremental Backup 6-7
Maintaining Block Change Tracking File 6-8
Monitoring Block Change Tracking 6-9
Automatic Disk-to-Disk Backup and Recovery 6-10
Oracle-Suggested Backup 6-11
Reporting on Backups 6-12
Using Dynamic Views 6-13
Managing Backups: Cross-Checking and Deleting 6-14
Quiz 6-15
Summary 6-17
Practice Overview: Creating Incremental Backups 6-18
9 Diagnosing Failures
Objectives 9-2
Reducing Problem Diagnosis Time 9-3
Automatic Diagnostic Workflow 9-4
Automatic Diagnostic Repository 9-5
ADR Command-Line Tool (ADRCI) 9-6
V$DIAG_INFO View 9-7
Interpreting RMAN Message Output 9-8
DEBUG Option 9-9
Interpreting RMAN Error Stacks 9-10
Data Recovery Advisor 9-11
11 Performing Recovery I
Objectives 11-2
Ensuring Backups Are Available 11-3
Restoring in NOARCHIVELOG Mode 11-4
Recovery with Incremental Backups in NOARCHIVELOG Mode 11-5
12 Performing Recovery II
Objectives 12-2
Recovery from Loss of Server Parameter File 12-3
Restoring the Server Parameter File from the Control File Autobackup 12-4
Loss of a Control File 12-5
Recovering from the Loss of All Control File Copies: Overview 12-6
Restoring the Control File from Autobackup 12-7
Restoring the SPFILE and the Control File 12-8
Quiz 12-9
Recovering NOLOGGING Database Objects 12-10
Loss of a Redo Log File 12-11
Log Group Status: Review 12-13
Recovering from the Loss of a Redo Log Group 12-14
Clearing a Log File 12-15
Re-creating a Password Authentication File 12-16
Recovering from a Lost Index Tablespace 12-18
Recovering a Read-Only Tablespace 12-19
Automatic Tempfile Recovery 12-20
Restoring and Recovering the Database on a New Host 12-21
15 Flashback Database
Objectives 15-2
Flashback Database: Continuous Data Protection 15-3
Flashback Database 15-4
Flashback Database Architecture 15-5
Configuring Flashback Database 15-6
Flashback Database: Examples 15-7
Flashback Database Considerations 15-8
Monitoring Flashback Database Information 15-9
16 Transporting Data
Objectives 16-2
18 Duplicating a Database
Objectives 18-2
Using a Duplicate Database 18-3
Choosing Database Duplication Techniques 18-4
Duplicating an Active Database with “Push” 18-5
“Push” Versus “Pull” Methods of Duplication 18-6
Duplicating a Database with a Target Connection 18-7
Duplicating a Database with Recovery Catalog Without Target Connection 18-8
Duplicating a Database Without Recovery Catalog or Target Connection 18-9
Creating a Backup-Based Duplicate Database 18-10
Creating an Initialization Parameter File for the Auxiliary Instance 18-11
Specifying New Names for Your Destination 18-12
Using the SET NEWNAME Clauses 18-13
Substitution Variables for SET NEWNAME 18-14
Specifying Parameters for File Naming 18-15
Starting the Instance in NOMOUNT Mode 18-17
Ensuring That Backups and Archived Redo Log Files Are Available 18-18
Allocating Auxiliary Channels 18-19
Understanding the RMAN Duplication Operation 18-20
Specifying Options for the DUPLICATE Command 18-22
Using Additional DUPLICATE Command Options 18-23
Substitution Variables for SET NEWNAME 18-24
Quiz 18-25
Summary 18-26
Practice Overview: Duplicating a Database 18-27
20 Workshop Overview
Objectives 20-2
Workshop Structure and Approach 20-3
Business Requirements for the Workshop Database 20-5
Diagnosing the Failures 20-6
Summary 20-7
A Your Learning
Overview A-2
Enterprise Manager Database Express Menus A-5
Request Handling in EM Express A-6
Oracle SQL Developer: Connections A-7
Oracle SQL Developer: DBA Actions A-8
Continuing Your Learning A-9
Further Information A-10
Suggested Oracle University ILT Courses A-11
C Cloud Computing
Cloud Based? C-2
Cloud Computing Explained C-3
Objectives
Note: Oracle Secure Backup releases do not follow the same numbering as the database releases. OSB 10.4 is the correct release for Oracle Database 12c. For up-to-date information, check OTN.
• The product home page can be accessed via the Oracle Technology Network (OTN):
http://www.oracle.com/technetwork/products/secure-backup/overview/index.html.
• Product documentation can be downloaded from OTN:
http://www.oracle.com/technetwork/products/secure-backup/documentation/index.html.
As shown in the slide, you can access Oracle Secure Backup in four different ways
depending on what you want to do:
• Enterprise Manager provides a graphical interface for database tape backup and restore
operations through integration with RMAN. It includes a link to the Oracle Secure
Backup web tool for performing file-system backup and restore operations.
• Use RMAN to back up your databases directly to tape. RMAN can be accessed either
through the RMAN command-line client or through the graphical Enterprise Manager.
RMAN communicates with Oracle Secure Backup through the system backup to tape
(SBT) interface.
• The web tool is a GUI application for OSB-related tasks. It enables you to configure
administrative domains, manage operations, browse the backup catalog, and back up
and restore data. It provides a graphical and interactive interface to access the obtool
utility. You should use this interface when making backups of file-system data.
• The obtool utility provides a command-line interface to Oracle Secure Backup.
The backup infrastructure for both file-system data and Oracle database data is easily administered with Oracle Secure Backup and RMAN. Defining what data to back up is conceptually similar for file-system and database data: both require that you define what to include in the backup.
• For the database, you use RMAN backup sets.
• When backing up file-system data, OSB uses a "dataset" definition, which describes the file-system data being backed up. You can use either the web tool or the obtool utility to define the dataset for Oracle Secure Backup.
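A dataset definition is a small text file written in the OSB dataset language. As a sketch (the host name, path, and exclusion pattern are hypothetical; see the Oracle Secure Backup documentation for the full dataset language):

```
include host dbserver01 {
    include path /home
    exclude name *.tmp
}
```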
After defining what data to back up, you must determine what type of backup is most
appropriate to meet your backup and restore requirements. For the Oracle database, RMAN
offers full and incremental backup levels that are backed up to tape by Oracle Secure Backup.
For file-system backups, OSB offers multiple levels, including a full backup, multiple incremental levels, and an off-site backup. The off-site level is a full backup performed without interfering with any incremental backup strategies.
Schedule your database backups through RMAN or EM and your file-system backups through
Oracle Secure Backup.
After you have defined what, how, and how often to back up your data through scheduling,
Oracle Secure Backup can automatically implement your backup schedules, only requiring
manual intervention for hardware errors or media needs.
Oracle Database 12c: Backup and Recovery Workshop 13 - 5
The backup of an Oracle database, created by RMAN, results in a backup set (an RMAN-
specific logical structure), which contains at least one backup piece (an RMAN-specific
physical file containing the backed-up data).
Oracle Secure Backup backs up each RMAN backup piece written to tape and maintains its backup metadata within its own catalog. You can browse backup pieces with the obtool command-line interface or the Oracle Secure Backup web tool.
Note: The best practice is to update backup pieces through RMAN, not manually by using Oracle Secure Backup.
If you manage the backup pieces stored on tape by using Oracle Secure Backup utilities
instead of RMAN, the Oracle Secure Backup catalog and the RMAN repository can become
unsynchronized. If that were to happen, use the RMAN CROSSCHECK command before you
take additional corrective actions.
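For example, a resynchronization after such a mismatch might be sketched as follows (these are standard RMAN commands, but verify the crosscheck results against your own catalog before deleting anything):

```
RMAN> CROSSCHECK BACKUP DEVICE TYPE SBT;
RMAN> DELETE EXPIRED BACKUP DEVICE TYPE SBT;
```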
This slide provides an overview of the RMAN and OSB components. On the left side, it shows data files at the OS level, how they relate to RMAN image copies and backup pieces, and how these relate to OSB backup images. File-system files, which of course do not have an RMAN equivalent, relate directly to OSB backup images.
The right side depicts that OSB backup images are stored as backup image sections on a volume, within a volume set, which belongs to a media family in a tape library.
[Slide graphic: EM and RMAN on the database server (the OSB client holding the data being backed up), connected to the media server, with the flow numbered in steps 1-5.]
1. RMAN initiates backup and passes the database backup storage selector to OSB. If
RMAN is started from the Enterprise Manager (EM) interface, you must configure the
administrative server in EM (a one-time task).
2. Oracle Secure Backup creates the backup job. Typically, the OS namespace associated
with the Oracle Secure Backup user of the current session is used.
3. Oracle Secure Backup executes the job (transfers data from client to media).
4. Oracle Secure Backup updates its own catalog.
5. RMAN updates its repository.
The following pages explain the basic flow and relevant parts in more detail.
Quiz
Answer: b
Setup tasks:
• Install the Oracle Secure Backup software.
• Define host roles for the administrative server, media server(s), and clients (done as part of the installation).
This slide provides an overview of the initial setup tasks, discussed on the following pages. The tasks are executed by a highly privileged user.
You should determine the host roles before beginning an installation because you can use the
OSB installation process for the initial host role configuration.
The recommended directory for installing the Oracle Secure Backup software is
/usr/local/oracle/backup. This lesson refers to it as OSB_Home.
Note: Unlike the ORACLE_HOME variable used with Oracle Database installations, there is no default OSB_Home environment variable; this lesson simply uses OSB_Home to refer to this directory.
After your OSB_Home directory is created, change your current directory to the OSB_Home
directory, and execute the setup program from your staging area, which in this training
example is the /stage/osb_installmedia directory.
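Using the directories named above, the sequence might look like this (run as a privileged OS user; the staging path is the one used in this training example):

```
# mkdir -p /usr/local/oracle/backup
# cd /usr/local/oracle/backup
# /stage/osb_installmedia/setup
```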
Some examples:
• View Oracle Secure Backup processes in Linux:
$ ps -e | grep ob
The installation process creates default objects. The slide shows some examples of how you can verify your installation with obtool commands:
ob> lsuser
admin admin
oracle oracle
ob> lsmf --long
OSB-CATALOG-MF:
Write window: 7 days
Keep volume set: 14 days
Appendable: yes
Volume ID used: unique to this media family
Comment: OSB catalog backup media family
RMAN-DEFAULT:
Keep volume set: content manages reuse
Appendable: yes
Volume ID used: unique to this media family
Comment: Default RMAN backup media family
ob> logout
• To access the Oracle Secure Backup software, you must enter a username and
password or use preauthorization. Each Oracle Secure Backup user is assigned to a
class, which defines the actions that are permitted for that user.
• All hosts in the administrative domain use SSL and X.509 certificates for identity
verification and authentication. Sensitive data is encrypted before transmittal over the
network. The web server requires a signed X.509 certificate and associated public and
private keys to establish an SSL connection with a client browser. The X.509 certificate
for the web server is self-signed by the installation script when you install OSB on the
administrative server.
Note: Currently, the Network Data Management Protocol (NDMP) does not include a
mechanism to accommodate the negotiation of an SSL connection to NDMP filers.
• For your database backups, you have a choice of RMAN and OSB encryption. For your
file-system backups, use OSB encryption. Your choices for database encryption:
- RMAN backup encryption, which encrypts data within the database
- Oracle Secure Backup encryption, which encrypts data after RMAN has passed
the data through the SBT to Oracle Secure Backup
Preauthorization
RMAN script:
run { …
allocate channel oem_sbt_backup1 type 'SBT_TAPE' format '%U';
…}
You can preauthorize Oracle Secure Backup users for the use of the obtool command line
(cmdline), rman, or both.
Preauthorization for file-system backups is primarily used to avoid logging in to Oracle Secure Backup when running custom scripts. Without cmdline preauthorization, the script would fail because access to Oracle Secure Backup is not granted without a user login.
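As a sketch, a preauthorized user could be created with obtool roughly as follows (the user, class, host, and OS user names are illustrative; check the exact mkuser syntax in the Oracle Secure Backup Reference):

```
ob> mkuser obrman --class oracle --preauth dbserver01:oracle+rman+cmdline
```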
RMAN preauthorization is required to successfully back up or restore the Oracle database.
Oracle database backups are invoked from RMAN or Enterprise Manager. When Oracle
Secure Backup receives communication from RMAN (through sbt), Oracle Secure Backup
verifies that an OSB user meets the following requirements:
1. RMAN preauthorization on that host
2. Matching the OS user identity of the Oracle instance associated with the database
(which is, for example, oracle)
3. Assignment to a class with the rights to back up or restore Oracle databases; that is, an OSB class with:
- access Oracle backups (set to owner, class, or all)
- perform Oracle backups and restores
If these three criteria are not successfully met, Oracle Secure Backup does not perform the
RMAN backup or restore requests.
By defining retention periods within RMAN, you can use a combination of disk and tape backups to meet your recovery requirements. When using the fast recovery area and Oracle Secure Backup, the recommended RMAN retention policy is the user-defined RECOVERY WINDOW option. This means that you define a period of time within which point-in-time recovery must be possible. When defining this recovery window, also consider the following:
• Base retention on recovery needs.
• Size the fast recovery area based on the desired disk recovery capability.
• Schedule disk and tape backups (frequency and scope).
If your recovery plan allows for restoration from disk for a certain number of hours each day,
the fast recovery area should be of sufficient size to hold the recovery-related files for this time
period. The length of time in which backups remain in the fast recovery area is determined by
the amount of available disk space, not by a specific time setting.
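For example, a seven-day recovery window (the number of days is illustrative) is configured in RMAN as:

```
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
```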
Sample obtool command to create a media family for RMAN backups:
ob> mkmf --vidunique --writewindow forever content-man-family
Oracle Secure Backup automates tape recycling, reusing tapes after the backups or volumes
have expired, depending on their user-defined recycling method.
Time-managed expiration policies: The expiration time is associated at the volume level for
time-managed media families. When the volume expiration date is reached, the volume
becomes eligible to be overwritten. Each volume in a volume set will have an expiration date,
which is determined as follows:
• The user-defined Write window determines how long the tape may be appended to after
the first tape write event (optional).
• The user-defined retention time determines how long the volume must be retained after the Write window has closed (or after the first tape write event, if no Write window is defined). If no Write window is defined, the volume can be appended to until it is full.
• The expiration time is the Write window time plus the retention time.
In short, time-managed volumes for file-system backups have a user-defined expiration period associated with the volume, not with the content of the volume. (This policy is not for RMAN backups.)
Database backup storage selector:
• RMAN passes: database name (* = all), database ID, copy number, and content (archivelog, full, incremental, or autobackup).
• OSB selector attributes: host (* = all), media family, restricted devices, resource wait time, and encryption.
Database backup storage selectors are for backup and restore operations of Oracle
databases. They are maintained as an object type on the administrative server.
When RMAN performs an Oracle database backup to devices and media managed by Oracle
Secure Backup, RMAN passes the database name, content type, and copy number to Oracle
Secure Backup. With this information, Oracle Secure Backup determines the corresponding
database backup storage selector. The selector informs Oracle Secure Backup to which
devices, if any, to restrict this backup, and which media family (if any) to use.
Database backup storage selectors enable you to specify which resources should be used by
SBT backups. A database backup storage selector object contains the information listed in
the slide.
• An asterisk character (*) for the database name or ID indicates that the storage selector
applies to all databases.
• An asterisk character (*) for host indicates that the storage selector applies to databases
residing on all available hosts.
If you use Oracle Secure Backup database storage selectors, you are not required to set
media management parameters in RMAN. In some circumstances, however, you might want
to override the database storage selectors by setting RMAN parameters.
You can specify media management parameters with:
• Environment variables, which are specified with the ENV parameter of the PARMS option
on the CONFIGURE or ALLOCATE CHANNEL commands
• The RMAN SEND command
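As a sketch, a persistent channel configuration that overrides the media family might look like this (the media family name is hypothetical):

```
RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE'
      PARMS 'ENV=(OB_MEDIA_FAMILY=my_mf)';
```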
You can use the OSB parameters listed in the slide in RMAN backup and restore jobs.
The OB_IGNORE_NUMA parameter (new to OSB 10.4) controls NUMA-awareness. Its default
value is 1, thereby enabling NUMA.
1. Configure RMAN access to the Oracle Secure Backup SBT. If you are using Enterprise
Manager Database Control, then this step involves registering the administrative server
with Enterprise Manager. (You need to specify only the Oracle Secure Backup home
directory. RMAN locates the SBT library automatically.)
2. Create a preauthorized Oracle Secure Backup user.
3. Oracle Corporation recommends that you explicitly create media families for RMAN backups.
When you create a media family, you specify a unique volume ID and a volume
expiration policy that determines when a volume in that media family is eligible to be
overwritten and recycled.
If you do not create dedicated media families, then Oracle Secure Backup uses a default
media family (which is sufficient for this training course).
4. Create a database backup storage selector in Enterprise Manager or with the obtool
utility.
5. If you use Oracle Secure Backup database storage selectors, then you are not required
to set media management parameters in RMAN.
After you have configured RMAN to use the Oracle Secure Backup SBT, the procedures for performing RMAN backup and restore operations are identical to the procedures for disk backups and restores.
To back up the fast recovery area to tape with Oracle Secure Backup, you issue one RMAN
command: BACKUP DEVICE TYPE SBT RECOVERY AREA. Using this disk-to-tape backup
method (instead of performing a separate backup of the production database to tape)
provides a few distinct advantages:
• Saves tape resources with optimized backups of the fast recovery area. It eliminates
unnecessary backup of files, which are already on tape.
• Enables RMAN to use better restore intelligence, first from disk, then from tape, as
needed. Otherwise, RMAN uses the most recent backup regardless of the storage
media.
• Reduces I/O (important for production databases) because the fast recovery area uses
a separate disk group
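A minimal disk-then-tape strategy might therefore be sketched as:

```
RMAN> BACKUP DEVICE TYPE DISK DATABASE;      # regular backups into the fast recovery area
RMAN> BACKUP DEVICE TYPE SBT RECOVERY AREA;  # sweep recovery area contents to tape
```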
Each backup and restore operation creates a corresponding job. Each job has a unique ID,
log, and transcript (as shown in the graphic).
• Job logs describe high-level events, such as:
- Job creation
- Job dispatch
- Completion times
• Job transcripts describe the job details. A transcript is created at the time of dispatch, is updated as the job progresses, and records input requests, such as "operator assistance required".
There are two different job types:
• Dataset jobs for file-system backup or restore operations
• Oracle backup jobs for database backup or restore operations
A job summary is a text file report produced by Oracle Secure Backup that describes the
status of selected file-system backup and restore jobs. Job summaries may be generated on
a regular, repeating basis and sent via email to users.
If an error occurs during an SBT session, Oracle Secure Backup tries to send the error
description to the administrative server to be saved in the job transcript. RMAN records the
error in the trace file named sbtio.log, unless the user has configured a different file to be
used by RMAN. The initialization parameter DIAGNOSTIC_DEST specifies the location of the
ADR base, which is the directory that contains one or more ADR homes. By default, the
sbtio.log file is in the trace subdirectory.
All SBT errors contain the following information:
• The location (function) where the failure occurred (for example, sbtbackup)
• The operation that was being performed (for example, “creating a backup piece”)
• A brief description of the problem (for example, “unable to contact admin server”)
• If applicable, a brief description of the remedy that the user may apply
• If applicable, the name of the trace or debug file where additional information about the
problem can be found
You can get more trace information by using the TRACE option of the ALLOCATE CHANNEL
command. For example: ALLOCATE CHANNEL c1 TYPE sbt TRACE 5 …
Trace levels range from 0 (errors only) to 6 (verbose debugging).
Hosts: lshost -l
Devices: lsdev
The slide lists some of the common obtool commands that you can use to query the Oracle
Secure Backup administrative and catalog data. Depending on the information you want to
retrieve, you may use additional options to specify the amount of information returned, such
as listing all the volumes for a particular media family or listing only completed jobs.
For details of all command options, see the Oracle Secure Backup Reference.
These commands can assist you with troubleshooting your OSB installation and
configuration. For example, the lshost command shows the current roles of a host. If you
want to add a device to your OSB domain, the host must have the mediaserver role, which
is not installed by default.
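For example, adding the mediaserver role to a host and verifying it might look like this (the host name is hypothetical):

```
ob> chhost --addrole mediaserver brhost2
ob> lshost --long brhost2
```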
Quiz
Answer: a, b, d
Quiz
Answer: a, d, e
Quiz
Answer: a
Quiz
Where can RMAN record the errors returned from OSB over the SBT interface?
a. To $ORACLE_HOME/trace/sbtio.log by default
b. To /usr/local/oracle/sbtio.log by default
Answer: c, d
Summary
Practice Overview:
Performing RMAN Tape Backup and Restore
This practice covers the following topics:
• Performing an RMAN backup to tape
• Restoring a data file from a tape backup
Objectives
Flashback Technologies
Error Detection and Correction
• Flashback makes error recovery much easier by:
– Enabling you to view data as of a past point in time
– “Rewinding” unwanted data changes
– Minimizing the time it takes to correct an error
Oracle Database Flashback technologies are a set of data recovery solutions that provide
capabilities to correct human errors by selectively and efficiently undoing the effects of a
mistake. Flashback technologies support recovery at all levels including row, transaction,
table, and the entire database.
When a transaction starts, it is assigned to an undo segment. Throughout the life of the
transaction, when data is changed, the original “old” values are copied into the undo segment.
You can see which transactions are assigned to which undo segments by checking the
V$TRANSACTION view.
Undo segments are specialized segments that are automatically created by the instance as
needed to support transactions. Like all segments, undo segments are made up of extents,
which, in turn, consist of data blocks. Undo segments automatically grow and shrink as
needed, acting as a circular storage buffer for their assigned transactions.
When transactions fill the blocks in their current undo segment extent, they are assigned
another block in the same extent. If no free blocks remain in that extent, the transaction
acquires a block from the next extent in the segment. If all extents are in use, the transaction
either wraps around back into the first extent or requests that a new extent be allocated to the
undo segment.
The diagram in the slide shows on the left a table icon with original data arriving from a DML
operation. The original data is kept in the buffer cache (if not aged out) and then written to the
undo tablespace (shown in circular form on the right).
Note: Parallel DML operations can actually cause a transaction to use more than one undo
segment. To learn more about parallel DML execution, see the Oracle Database VLDB and
Partitioning Guide.
Flashback Technology
Oracle Flashback technology is a group of features that support viewing past states of data, and winding data back and forth in time, without requiring you to restore the database from backup. With this technology, you help users analyze and recover from errors.
• Flashback Query: View committed data as it existed at some point in the past. The
SELECT command with the AS OF clause references a time in the past through a time
stamp or system change number (SCN).
• Flashback Version Query: View committed historical data for a specific time interval.
Use the VERSIONS BETWEEN clause of the SELECT command (for performance reasons
with existing indexes).
• Flashback Transaction Query: View changes made at the transaction level.
• Flashback Transaction Backout: Roll back a specific transaction and dependent
transactions.
• Flashback Table: Rewind one or more tables to their contents at a previous time
without affecting other database objects.
• Flashback Drop: Reverse the effects of dropping a table by returning the dropped table
from the recycle bin to the database along with dependent objects such as indexes and
triggers.
• Flashback Database: Return the database to a past time or SCN.
Flashback Technology
Object Level   Scenario Examples             Flashback Technology   Depends On       Affects Data
Database       Truncate table; undesired     Flashback Database     Flashback logs   TRUE
               multitable changes made
Table          Drop table                    Flashback Drop         Recycle bin      TRUE
You can use Flashback technology when a logical corruption occurs in the Oracle database
and you need to recover data quickly and easily. As with human errors, it is difficult to identify
the objects and rows that are affected by an erroneous transaction. With Flashback
technology, you can diagnose how errors were introduced into the database, and then repair
the damage. You can view the transactions that have contributed to specific row
modifications, view the entire set of versions of a given row during a specific time period, or
just view data as it appeared at a specific time in the past. The table in the slide shows typical
uses of Flashback technology. Flashback Database depends on the flashback logs to perform
the Flashback Database operation. Flashback Drop uses the recycle bin. All other techniques
use undo data.
Not all flashback features modify the database. Some are simply methods to query other
versions of data. You can use these features to investigate a problem and aid in recovery:
• Determine the type of flashback operation to use to correct the problem.
• Use the result of the query in an INSERT, UPDATE, or DELETE that enables you to repair
the erroneous data.
Retention guarantee:
15 minutes
With guaranteed retention, undo retention settings are enforced even if they cause
transactions to fail. (So in case of conflict, queries have precedence over transactions, as
shown in the graphic of this slide.)
Specify the RETENTION GUARANTEE clause for the undo tablespace to ensure that
unexpired undo data is not discarded. Note that by configuring retention guarantee, ongoing
operations that need undo space in the segments of the tablespace may fail as the result of a
lack of space. RETENTION GUARANTEE is a tablespace attribute rather than an initialization
parameter. An example is shown in the slide. To return a guaranteed undo tablespace to its
normal setting:
SQL> ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;
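The example referred to in the slide is presumably the complementary statement that enables the guarantee:

```
SQL> ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;
```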
To satisfy long retention requirements, create a Temporal History.
Quiz
Answer: c, d, e, f
Flashback technology provides the capability to query past versions of schema objects, query
historical data, and perform change analysis. Every transaction logically generates a new
version of the database. With Flashback technology, you can navigate through these versions
to find an error and its cause:
• Flashback Query: Query all data as it existed at a specific point in time.
• Flashback Version Query: See all versions of rows between two times and the
transactions that changed the row.
• Flashback Transaction Query: See all changes made by a transaction and, if needed,
roll back a transaction with “undo” SQL commands.
Flashback Query
With the Flashback Query feature, you can perform queries as of a certain time. By using the
AS OF clause of the SELECT statement, you can specify the time stamp for which to view the
data. This is useful for analyzing a data discrepancy.
Note: TIMESTAMP and SCN are valid options for the AS OF clause.
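For example, to view a row as it existed ten minutes ago (the schema, table, and column names are illustrative):

```
SQL> SELECT salary FROM hr.employees
     AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE)
     WHERE employee_id = 100;
```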
The Flashback Version Query feature enables you to use the VERSIONS clause to retrieve all
the versions of the rows that exist between two points in time or two SCNs. The rows returned
represent a history of changes for the rows across transactions.
Flashback Version Query retrieves only committed data. Uncommitted row versions within a
transaction are not shown. The rows returned also include deleted and subsequently
reinserted versions of the rows.
Use row history to audit the rows of a table and retrieve information about the transactions
that affected the rows. You can then use the returned transaction identifier (the
VERSIONS_XID pseudocolumn) to perform transaction mining by using LogMiner or to
perform a Flashback Transaction Query.
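A Flashback Version Query might be sketched as follows (the table and column names are illustrative):

```
SQL> SELECT versions_xid, versions_starttime, salary
     FROM hr.employees
     VERSIONS BETWEEN TIMESTAMP
       (SYSTIMESTAMP - INTERVAL '1' HOUR) AND SYSTIMESTAMP
     WHERE employee_id = 100;
```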
• The VERSIONS clause cannot be used to query external tables, temporary tables, fixed
tables, or views. But you can create a view with the VERSIONS clause.
• The VERSIONS clause in a SELECT statement cannot produce versions of rows across
the DDL statements that change the structure of the corresponding tables. This means
that the query stops producing rows after it reaches a time in the past when the table
structure was changed.
• Certain maintenance operations, such as a segment shrink, may move table rows
across blocks. In this case, the version query filters out such phantom versions because
the row data remains the same.
Flashback Table: Overview
• Flashback Table recovers tables to a specific
point in time.
• Flashback Table is an in-place operation.
• The database stays online.
With Flashback Table, you can recover a set of tables to a specific point in time without
having to perform traditional point-in-time recovery operations.
A Flashback Table operation is done in place, while the database is online, by rolling back
only the changes that are made to the given tables and their dependent objects.
A Flashback Table statement is executed as a single transaction. All tables must be flashed
back successfully, or the entire transaction is rolled back.
Note: You can use Flashback Version Query and Flashback Transaction Query to determine
the appropriate flashback time.
Flashback Table
With Flashback Table, you can recover a table or tables to a specific point in time without
restoring a backup. When you use this feature, the data in tables and their associated objects
(indexes, constraints, triggers, and so on) is restored. The data used to satisfy a Flashback
Table request is retrieved from the undo tablespace. You can use Flashback Version Query
and Flashback Transaction Query to determine the appropriate flashback time.
Flashback Table provides a way for users to easily and quickly recover from accidental
modifications without a database administrator’s involvement. You must grant the
FLASHBACK TABLE or FLASHBACK ANY TABLE system privilege to any user that uses the
Flashback Table feature. In addition, you must grant the SELECT, INSERT, DELETE, and
ALTER object privileges to the user.
You can use Enterprise Manager or SQL*Plus to flash back a table. The wizard guides you
through the process.
You must enable row movement on a table to be able to flash back the table. When you
enable row movement, the Oracle server can move a row in the table. You can enable row
movement by using Enterprise Manager or by using the ALTER TABLE command.
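A minimal sketch of the two steps described above (the table name and target time are illustrative):

SQL> ALTER TABLE hr.employees ENABLE ROW MOVEMENT;
SQL> FLASHBACK TABLE hr.employees TO TIMESTAMP
         SYSTIMESTAMP - INTERVAL '15' MINUTE;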
• The entire FLASHBACK TABLE statement is executed within a single transaction. All or
none of the specified tables are flashed back.
• Flashback Table acquires exclusive data manipulation language (DML) locks on all
tables that are specified in the statement over the period of time the operation is in
progress.
• Statistics of impacted objects are not flashed back.
• All existing indexes are maintained. Dropped indexes are not re-created. Dependent on-
commit materialized views are also maintained automatically.
• Tables specified in the FLASHBACK TABLE statement are flashed back, provided that
none of the table constraints are violated. If any constraints are violated during flashback
execution, the operation is aborted and the tables are left in the same state as they were
just before the FLASHBACK TABLE statement invocation.
• You cannot perform Flashback Table to a particular time that is older than the time of
the execution of a data definition language (DDL) operation that altered the structure of
or shrunk a table that would be involved in the flashback operation. This restriction does
not apply to DDL statements that only change storage attributes of the tables.
• Flashback Table cannot be performed on system tables, remote tables, and fixed tables.
Flashback Transaction Query
Flashback Transaction Query is a diagnostic tool that you can use to view changes made to
the database at the transaction level. This enables you to diagnose problems in your
database and perform analysis and audits of transactions (as shown in the graphic).
You can use the FLASHBACK_TRANSACTION_QUERY view to determine all the necessary
SQL statements that can be used to undo the changes made either by a specific transaction
or during a specific period of time.
This feature is used in conjunction with the Flashback Version Query feature with the help of
the Perform Recovery Wizard.
Within the database, DDL operations are nothing but a series of space management
operations and changes to the data dictionary. Flashback Transaction Query on a transaction
underlying a DDL command displays the changes made to the data dictionary.
When Flashback Transaction Query involves tables that have been dropped from the
database, the table names are not reflected. Instead, object numbers are used.
If the user who executed a transaction is dropped, Flashback Transaction Query of that
transaction displays the corresponding user ID only, and not the username.
Note: When there is not enough undo data for a specific transaction, a row with a value of
UNKNOWN in the OPERATION column of FLASHBACK_TRANSACTION_QUERY is returned.
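For example, a query such as the following (the XID value is illustrative) returns the operations performed by one transaction and the undo SQL that reverses them:

SQL> SELECT operation, undo_sql
     FROM flashback_transaction_query
     WHERE xid = HEXTORAW('000200030000002D');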
With Flashback Transaction, you can reverse a transaction and dependent transactions.
Oracle Database determines the dependencies between transactions and, in effect, creates a
compensating transaction that reverses the unwanted changes. The database rewinds to a
state as if the transaction, and any transactions that could be dependent on it, never occurred.
You can use the Flashback Transaction functionality from within Enterprise Manager or by
using PL/SQL packages.
To flash back or back out a transaction—that is, to create a compensating transaction—you
must have the SELECT, FLASHBACK, and DML privileges on all affected tables.
Some DDL commands that alter the structure of a table invalidate any existing undo data for
the table. Commands that drop and modify columns, move tables, drop partitions, and
truncate table/partition fall into this category. It is not possible to retrieve data from a point
before the execution of these DDL commands. An attempt to do so results in an ORA-1466
error. Note that this restriction does not apply to DDL operations that alter the storage
attributes of a table, such as PCTFREE, INITRANS, and MAXTRANS.
Using the FLASHBACK TABLE command, you can undo the effects of a DROP TABLE
statement without having to use point-in-time recovery (as shown in the graphic of this slide).
1. The RECYCLEBIN initialization parameter is used to control whether the Flashback Drop
capability is turned ON or OFF. If the parameter is set to OFF, then dropped tables do not
go into the recycle bin.
2. If this parameter is set to ON (default), the dropped tables go into the recycle bin.
3. Dropped tables can be recovered with the FLASHBACK TABLE …TO BEFORE DROP
command.
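For example, to recover a dropped EMPLOYEES table under its original name (an optional RENAME TO clause restores it under a new name instead):

SQL> FLASHBACK TABLE employees TO BEFORE DROP;
SQL> FLASHBACK TABLE employees TO BEFORE DROP RENAME TO employees_old;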
Recycle Bin

In the diagram, DROP TABLE employees; causes the EMPLOYEES table and its EMPLOYEES_PK index to be recorded in the recycle bin under system-generated names such as BIN$zbjrBdpw==$0 and BIN$zbjra9wy==$0. Objects are:
– Renamed
– Not moved
If the recycle bin is not enabled and you drop a table, the space can immediately be used for
other objects. If the recycle bin is enabled and you drop a table, the space associated with the
table and its dependent objects is not immediately reclaimable, even though it does appear in
DBA_FREE_SPACE. Instead, the dropped objects are referenced in the recycle bin and still
belong to their owner. The space used by recycle bin objects is never automatically reclaimed
unless there is space pressure. This enables you to recover recycle bin objects for the
maximum possible duration.
When a dropped table is “moved” to the recycle bin, the table and its associated objects and
constraints are renamed using system-generated names. The renaming convention is:
BIN$unique_id$version, where unique_id is a 26-character globally unique identifier for the object (which makes the recycle bin name unique across all databases) and version is a version number assigned by the database.
The recycle bin itself is a data dictionary table that maintains the relationships between the
original names of dropped objects and their system-generated names. You can query the
recycle bin by using the DBA_RECYCLEBIN view. The diagram illustrates this behavior:
1. You have created a table called EMPLOYEES in your tablespace.
2. You drop the EMPLOYEES table.
3. The extents occupied by EMPLOYEES are now considered as free space.
4. EMPLOYEES is renamed and the new name is recorded into the recycle bin.
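You can inspect the recycle bin with SHOW RECYCLEBIN in SQL*Plus, or query the dictionary view directly, for example:

SQL> SELECT owner, object_name, original_name, type, droptime
     FROM dba_recyclebin;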
Oracle Database 12c: Backup and Recovery Workshop 14 - 22
• You can use the DROP TABLE PURGE command to permanently drop a table and its
dependent objects from the database. When you use this command, the corresponding
objects are not moved to the recycle bin.
• When you issue the DROP TABLESPACE ... INCLUDING CONTENTS command, the
objects in the tablespace are not placed in the recycle bin. Moreover, objects in the
recycle bin belonging to the tablespace are purged. When you issue the same
command without the INCLUDING CONTENTS clause, the tablespace must be empty for
the command to succeed. However, there can be objects belonging to the tablespace in
the recycle bin. In this case, these objects are purged.
• When you issue the DROP USER ... CASCADE command, the user and all the objects
owned by the user are permanently dropped from the database. Any objects in the
recycle bin belonging to the dropped user are purged.
For increased security, you may decide to not allow the use of the recycle bin (for example, if
the current object is encrypted, but the dropped object is in clear text, potentially showing
sensitive data). Connected as SYSDBA, you can:
• View the recycle bin status with: SHOW PARAMETER RECYCLEBIN
• Disable the use of the recycle bin with: ALTER SYSTEM SET RECYCLEBIN=OFF
SCOPE=SPFILE; after issuing this command, you need to restart the database.
Flashback data archives provide a mechanism for tracking changes to production databases
that is secure, efficient, easy to use, and application transparent.
With this technology, you can automatically track and store the data in tables enabled for
flashback data archive (FDA). This ensures that flashback queries obtain SQL-level access to
the versions of database objects without getting a snapshot-too-old error.
A flashback data archive provides the ability to track and store all transactional changes to a
“tracked” table over its lifetime. It is no longer necessary to build this intelligence into your
application. You can use this technology for compliance, audit reports, data analysis, and
decision-support systems.
Diagram: a base table and its temporal history table in the flashback data archive.
A flashback data archive consists of one or more tablespaces. You can have multiple
flashback data archives. They are configured with retention duration. Based on your retention
duration requirements, you should create different archives—for example, one for all records
that must be kept for two years, another for all records that must be kept for five years. The
database server will automatically purge all historical information on the day after the retention
period expires.
1. Create a tablespace for your FDA. The size depends on the base table and the
expected DML and DDL activity.
2. Create a flashback data archive with retention time, by default with duplication and
without compression. Data archived in FDA is retained for the retention time. This task
requires the FLASHBACK ARCHIVE ADMINISTER system privilege. If different retention
periods are needed, different archives must be created. With the OPTIMIZE DATA
clause, the flashback data archive is created with compression and deduplication.
3. Enable flashback archiving (and then disable it again) for a (whole) table. This task
requires the FLASHBACK ARCHIVE object privilege. Although flashback archiving is
enabled for a table, some DDL statements are not allowed on that table. By default,
flashback archiving is off for any table.
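The three steps above can be sketched as follows (the tablespace, archive, and table names, the sizes, and the retention are illustrative; OPTIMIZE DATA could be added after the TABLESPACE clause):

SQL> CREATE TABLESPACE fda_ts DATAFILE 'fda_ts01.dbf' SIZE 100M;
SQL> CREATE FLASHBACK ARCHIVE fla1 TABLESPACE fda_ts
         QUOTA 10G RETENTION 2 YEAR;
SQL> ALTER TABLE hr.employees FLASHBACK ARCHIVE fla1;

Flashback archiving is disabled again with:

SQL> ALTER TABLE hr.employees NO FLASHBACK ARCHIVE;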
History data:
• Row versions are captured asynchronously by background processes at self-tuned intervals (default: 5 min).
• History data is automatically purged per the retention policy.
• Partitions are automatically created based on time and volume.
• Unrelated partitions are skipped by queries.
History data is captured from undo (and the buffer cache) by the fbda background process at self-tuned intervals. The default is every five minutes. The entire base table row that is updated is stored, no matter how many columns are updated.
• With the OPTIMIZE DATA clause, table and LOB compression and LOB deduplication are automatically turned on, using any of the following features: Advanced Row Compression, SecureFiles Intelligent Compression, SecureFiles Intelligent Deduplication, and segment-level and row-level ILM compression. ILM is enabled so that new data is archived uncompressed and is compressed in the background over time.
Note: If the base table is compressed with Hybrid Columnar compression, the table
cannot be enabled for flashback data archiving.
• FDA history tables already compressed and deduplicated in releases prior to 12.1 are
not changed. Their storage continues compressed and deduplicated.
• To stop optimization on FDA history tables, execute the following statement:
SQL> ALTER FLASHBACK ARCHIVE fla1 NO OPTIMIZE DATA;
• Each flashback archive partition is at least 1 day and 1 MB of data, partitioned on
ENDSCN. Flashback queries to the archives avoid unrelated partitions.
• Up to ten flashback archiver slaves can be called upon by the fbda process.
• If the flashback archive process and slaves are too busy, archiving may be performed
inline, which significantly affects the user’s response time.
• The user context of a transaction executed on a table with Temporal History is collected
and retrievable. The parameters of the namespace USERENV describe the current
session. The user context is obtained from
DBMS_FLASHBACK_ARCHIVE.GET_SYS_CONTEXT.
• The user context collection is controlled by the context level, which is set by DBMS_FLASHBACK_ARCHIVE.SET_CONTEXT_LEVEL and whose values can be NONE, TYPICAL, or ALL. By default, no user context is collected.
Examples of the information collected at the TYPICAL level are database user ID, global user ID, client identifier, service name, module name, and host name.
• Rows in these tables are purged when the commit time is older than the retention of the
flashback archive with the longest retention.
• Each row of the user context can be read only by the DBA or the owner of the transaction.
Diagram: over time, a column is added to the tracked table.
The most common DDL commands are possible with flashback data archives. When a
schema has evolved in any of the ways listed in the slide, Temporal History technology
automatically keeps track of the changes. Flashback query appropriately returns the row or
rows with the corresponding schema (as shown in the diagram).
All DDL changes that are not automatically supported can be executed through the
DBMS_FLASHBACK_ARCHIVE package.
Note: Disassociation should be used with care, and with the understanding that the archive can no longer be guaranteed to be immutable, because the history could have been altered during the time of disassociation. The system catalog records when the disassociation occurred.
The diagram in the slide shows the following workflow:
1. If you have the FLASHBACK ARCHIVE ADMINISTER privilege, you can disassociate
the archive from the base table with the DISASSOCIATE_FBA procedure.
2. Make the necessary changes to the base table.
3. Make the necessary changes to the corresponding archive.
4. Then, associate the table with the archive within the same schema with the REASSOCIATE_FBA procedure. Temporal History validates that the schemas are the same upon association.
There is no transportability of history tables. Some DDL statements cause error ORA-55610
when used on a table enabled for FDA, for example:
• ALTER TABLE … with the UPGRADE TABLE clause
• ALTER TABLE statement that moves or exchanges a partition or subpartition operation
• DROP TABLE statement
Temporal Validity and History

Distinguish active from nonactive rows:
• Temporal validity: user-managed effective date in the same table

Emp ID   Job         Hire Date
100      Clerk       22-Apr-11
200      Developer   12-Dec-11
Applications often annotate the validity of a fact recorded in a table with dates or time stamps
that are relevant to the underlying business; for example, the hire date of an employee in HR
applications and the effective date of coverage in the insurance industry. This temporal
attribute is called temporal validity, and is usually controlled by the user who defines the valid-
time dimension at table creation.
Temporal validity dates or time stamps are different from the dates or time stamps recorded when the fact was entered in the database. The date or time stamp when the fact was recorded in the database is an attribute of Temporal History (also known as flashback data archive) and is system-managed.
In the slide, employee 400 was hired on March 22, but the row was entered in the HR.EMP
table on March 23. 22-MAR-12 is the valid time temporal date and 23-MAR-12 is the
transaction time temporal date.
By using the valid time temporal implicit filter on the valid-time dimension, queries can show
rows that are currently valid or that will be valid in the future. The query is able to hide rows
whose facts are not currently valid.
Bi-temporal queries can use both valid time temporal and transaction time temporal date.
A valid-time dimension, represented by the new PERIOD FOR clause, consists of two datetime columns that can be named explicitly in the table definition or that are created automatically.
To hide the valid-time dimension columns, specify a PERIOD FOR clause name without any date columns. Oracle creates two hidden columns, using the name of the valid-time dimension as a prefix for the names of the two columns. The valid-time dimension name is used to drop the dimension if required. For example, if you define user_time as the name of the valid-time dimension, user_time is used as the prefix for the two automatically created date columns: USER_TIME_START and USER_TIME_END.
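The two variants might be sketched as follows (table and column names are illustrative; the explicit columns are assumed to be datetime columns):

SQL> CREATE TABLE emp1
     (empno NUMBER, salary NUMBER, deptid NUMBER, name VARCHAR2(30),
      start_date TIMESTAMP, end_date TIMESTAMP,
      PERIOD FOR user_time (start_date, end_date));

SQL> CREATE TABLE emp2
     (empno NUMBER, salary NUMBER, deptid NUMBER, name VARCHAR2(30),
      PERIOD FOR user_time);

In the second case, the hidden columns USER_TIME_START and USER_TIME_END are created automatically.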
To insert rows into a table with a valid-time dimension, you would name the two valid-time
date columns as follows:
SQL> INSERT INTO emp2 (empno, salary, deptid, name, user_time_start,
user_time_end) VALUES (1,1000,20, 'John', SYSDATE, NULL);
SQL> select EMPNO, user_time_start, user_time_end from emp2;
EMPNO USER_TIME_START USER_TIME_END
---------- ----------------------------------- ------------------
1 17-AUG-12 09.58.03.000000 AM +00:00
How do you filter on valid-time columns? Use the SELECT statement with the PERIOD FOR clause or use the DBMS_FLASHBACK_ARCHIVE package.
• There is one set of data that is “valid” based on its valid-start and valid-end times and
the query time (AS OF or undecorated).
• On the other hand, there is the other set of rows where the query time falls outside the
valid-start and valid-end times.
Both sets of rows reside in the same table. However, by controlling visibility so that only the valid rows are seen, you can limit what queries and DML statements affect. Until now, you could perform as-of and versions queries only for transaction time. Now you can also perform as-of and versions queries for valid time.
For each new employee that you inserted into the table, you included the hire date, the valid-time start date, and the valid-time end date. These dates indicate whether each row is active; they are entered by the application and correspond to valid dates. The time at which the rows were inserted and committed in the table corresponds to the transaction date.
You can filter the active employees by using the PERIOD FOR clause. The query displays all active employees who were valid at the explicit date of '01-DEC-1992'; that is, a date that falls within the valid period, between USER_TIME_START and USER_TIME_END.
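Such a filter might be written as follows (a sketch matching the description above; the table name follows the earlier examples):

SQL> SELECT * FROM hr.emp
     AS OF PERIOD FOR user_time TO_DATE('01-DEC-1992','DD-MON-YYYY');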
You can filter the active employees by using the VERSIONS PERIOD FOR BETWEEN clause:
select * from hr.emp VERSIONS PERIOD FOR user_time
BETWEEN to_date('31-DEC-2011', 'dd-mon-yyyy')
AND to_date('31-DEC-2012', 'dd-mon-yyyy') ;
The query displays all employees whose USER_TIME_START is less than or equal to '31-DEC-2011' and whose USER_TIME_END is greater than or equal to '31-DEC-2012'.
Queries that mix the valid-time and transaction-time dimensions are called “bi-temporal queries.” The example shows rows, as of the specified transaction time, that were valid at the specified valid time.
select * from hr.emp
as of period for user_time to_date('31-DEC-1992', 'dd-mon-yyyy')
as of timestamp to_date ('30-mar-2012','dd-mon-yyyy');
Using DBMS_FLASHBACK_ARCHIVE
• Visibility control applies to queries and DML.
• Full visibility applies to DDL.
• Visibility is set with DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME:
– Set the visibility to data valid as of a given time.
DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('ASOF',
    TO_TIMESTAMP('29-SEP-2010','DD-MON-YYYY'))
– Set the visibility to data currently valid within the valid time period, at the session level.
DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('CURRENT')
– Set the visibility to the full table (the default).
DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('ALL')
Users can modify visibility within a session via the new DBMS_FLASHBACK_ARCHIVE
package. Visibility control applies to all SQL SELECT and DML statements.
DDL statements default to full visibility of the table data. For example, CTAS, online redefinition, and ALTER TABLE MOVE operations have full visibility of the table data. Hidden columns are also visible to DDL operations, so those columns and their data are preserved.
The first example sets the valid time visibility to ‘29-SEP-2010’, showing only rows
overlapping the given date.
The second example sets the visibility to data currently valid within the valid time period, at the session level.
The third example sets the visibility to the full table, which is the default temporal table
visibility.
Summary
Practice Overview:
Using Flashback Technologies
This practice covers the following topics:
1. Preparing to use Flashback technologies
– Configuring undo retention
– Verifying the value of the RECYCLEBIN parameter
Additional learning: There are several Flashback videos on YouTube and OLL.
Flashback Database
Objectives
Flashback Database is a unique database point-in-time recovery capability, which enables the
database to be quickly “rewound” to a previous point in time. Flashback Database restores
the database more quickly than traditional restore and recovery methods, because only the
affected data blocks are restored and recovered.
Flashback Database uses flashback logs, which record old block versions. The diagram in the
slide illustrates how the old block version is written to the flashback log and the new block
version is written to the data file when writes are issued to disk and Flashback Database is
enabled. When the FLASHBACK DATABASE command is issued, only the changed blocks are
retrieved from the flashback logs. The blocks are then recovered with the appropriate
archived logs to the required point in time.
Flashback Database
With Flashback Database, you can quickly bring your database to an earlier point in time by
undoing all the changes that have taken place since that time. This operation is fast because
you do not need to restore backups. You can use this feature to undo changes that have
resulted in logical data corruptions.
When you use Flashback Database, the Oracle Database server uses past block images to
back out changes to the database. During normal database operation, the Oracle Database
server occasionally logs these block images in flashback logs. Flashback logs are written
sequentially and are not archived. The Oracle Database server automatically creates, deletes,
and resizes flashback logs in the fast recovery area. You need to be aware of flashback logs
only for monitoring performance and deciding how much disk space to allocate for them in the
fast recovery area.
The time it takes to rewind a database with Flashback Database is proportional to how far
back in time you need to go and the amount of database activity after the target time. The
time it would take to restore and recover the whole database could be much longer. The
before images in the flashback logs are used only to restore the database to a point in the
past, and forward recovery is used to bring the database to a consistent state at some time in
the past.
Diagram: In the SGA, the buffer cache feeds the flashback buffer (not every change) and the redo log buffer (every change). RVWR writes the flashback buffer to the flashback logs; LGWR writes the redo log buffer to the redo logs. To flash back: (1) back out changes to the database by using the before images, and (2) do forward media recovery.
When you enable Flashback Database, the RVWR (Flashback Writer) background process is
started. This background process sequentially writes Flashback Database data from the
flashback buffer to the Flashback Database logs, which are circularly reused. Subsequently,
when a FLASHBACK DATABASE command is issued, the flashback logs are used to restore the blocks’ before images, and then redo data is used to roll forward to the desired flashback time.
The overhead of enabling Flashback Database depends on the read/write mix of the
database workload. Because queries do not need to log any flashback data, the more write-
intensive the workload, the higher the overhead of turning on Flashback Database.
You can use the RMAN FLASHBACK DATABASE command to execute the Flashback
Database operation. You can use SEQUENCE and THREAD to specify a redo log sequence
number and thread as a lower limit. RMAN selects only files that can be used to flash back to,
but not including, the specified sequence number.
Alternatively, you can use the SQL FLASHBACK DATABASE command to return the database
to a past time or SCN. If you use the TO SCN clause, you must provide a number. If you
specify TO TIMESTAMP, you must provide a time stamp value. You can also specify a restore
point name.
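The three forms might look like the following sketch (the SCN, time stamp, and restore point name are illustrative):

RMAN> FLASHBACK DATABASE TO SCN 861150;
SQL> FLASHBACK DATABASE TO TIMESTAMP
         TO_TIMESTAMP('2013-05-01 09:00:00','YYYY-MM-DD HH24:MI:SS');
SQL> FLASHBACK DATABASE TO RESTORE POINT before_change;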
You can monitor the Flashback Database progress with the V$SESSION_LONGOPS view.
Note: The database must be mounted in exclusive mode to issue the FLASHBACK DATABASE
command and opened read-only to review changes. The database must be opened read/write
with the RESETLOGS option when finished.
In situations where you cannot use the Flashback Database feature, you should use an
incomplete recovery operation to return the database to a specific time. After the Flashback
Database operation is complete, you can open the database in read-only mode to verify that
the correct target time or SCN was used. If not, you can flash back the database again, or
perform a recovery to roll forward the database.
You cannot use Flashback Database to recover a data file that was dropped during the span
of time you are flashing back. The dropped data file is added to the control file and marked
offline, but it is not flashed back. Flashback Database cannot flash back a data file to a time
after its creation and before the resize operation. If a file was resized during the span of time
to which you are going to flash back the database, then you should take the file offline before
beginning the Flashback Database operation. This is applicable for files that are shrunk rather
than expanded. You can use Flashback Database with data files that you have configured for
automatic extension. You can flash back to just before the last RESETLOGS operation by
supplying the TO BEFORE RESETLOGS clause in the FLASHBACK DATABASE command.
Note: The flashback retention target is not an absolute guarantee that flashback will be
available. If space is needed for required files in the fast recovery area, flashback logs may be
deleted automatically.
It is important for you to monitor space usage of the fast recovery area so that you know how
well you are meeting your retention target. Use the V$FLASHBACK_DATABASE_LOG view to
monitor the Flashback Database retention target:
• ESTIMATED_FLASHBACK_SIZE uses previously logged flashback data to provide an
estimate of how much disk space is needed in the fast recovery area for flashback logs
to meet the current flashback retention target. The estimate is based on the workload
since the instance was started, or during the most recent time interval equal to the
flashback retention target, whichever is shorter.
• FLASHBACK_SIZE gives you the current size, in bytes, of the flashback data.
• OLDEST_FLASHBACK_SCN and OLDEST_FLASHBACK_TIME display the approximate
lowest SCN and time to which you can flash back your database. CURRENT_SCN in
V$DATABASE gives you the current database SCN.
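For example, a monitoring query over these columns might be:

SQL> SELECT oldest_flashback_scn, oldest_flashback_time,
            retention_target, flashback_size, estimated_flashback_size
     FROM v$flashback_database_log;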
Use the V$FLASHBACK_DATABASE_STAT view to monitor the overhead of logging flashback
data in the Flashback Database logs. This view contains 24 hours of information, with each
row representing a one-hour time interval. You can use this view to determine rate changes in
the flashback data generation.
Like normal restore points, guaranteed restore points can be used as aliases for SCNs in
recovery operations. A principal difference is that guaranteed restore points never age out of
the control file and must be explicitly dropped. However, they also provide specific
functionality related to the use of the Flashback Database feature.
Creating a guaranteed restore point at a particular SCN enforces the requirement that you
can perform a Flashback Database operation to return your database to its state at that SCN,
even if flashback logging is not enabled for your database. If flashback logging is enabled,
creating a guaranteed restore point enforces the retention of flashback logs required for
Flashback Database back to any point in time after the creation of the earliest guaranteed
restore point.
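For example (the restore point name is illustrative), you can create, list, and drop a guaranteed restore point as follows:

SQL> CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;
SQL> SELECT name, scn, time, guarantee_flashback_database
     FROM v$restore_point;
SQL> DROP RESTORE POINT before_upgrade;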
A guaranteed restore point can be used to revert a whole database to a known good state
days or weeks ago, as long as there is enough disk space in the fast recovery area to store
the needed logs. As with normal restore points, guaranteed restore points can be used to
specify a point in time for RECOVER DATABASE operations.
Note: Limitations that apply to Flashback Database also apply to guaranteed restore points.
For example, shrinking a data file or dropping a tablespace can prevent flashing back the
affected data files to the guaranteed restore point.
To support the use of guaranteed restore points, the database must satisfy the following
prerequisites:
• The COMPATIBLE initialization parameter must be set to 10.2 or greater.
• The database must be running in ARCHIVELOG mode.
• To rewind the database to a guaranteed restore point, the FLASHBACK DATABASE
command needs the archived redo logs starting from around the time of the restore
point.
• A fast recovery area must be configured because the Oracle Database server stores the
required logs in the fast recovery area.
• If Flashback Database is not enabled, the database must be mounted, not open, when
creating the first guaranteed restore point (or if all previously created guaranteed restore
points have been dropped).
To achieve good performance for large production databases by using Flashback Database,
Oracle recommends the following:
• Use a fast file system for your fast recovery area, preferably without operating system
file caching. Files that are stored in the fast recovery area, including flashback logs, are
typically large. Operating system file caching is typically not effective for these files, and
may actually add CPU overhead due to reading from and writing to these files. Use a file
system such as ASM.
• Configure enough disk spindles for the file system that will hold the fast recovery area.
Multiple disk spindles may be needed to support the disk throughput required for the
Oracle Database server to write the flashback logs effectively.
• If the storage system used to hold the fast recovery area does not have non-volatile
RAM, try to configure the file system on top of striped storage volumes with a relatively
small stripe size, such as 128 KB. This enables each write to the flashback logs to be
spread across multiple spindles, thereby improving performance.
• Set the LOG_BUFFER initialization parameter to at least 8 MB. This ensures that the
Oracle Database server allocates the maximum amount of memory (typically 16 MB) for
writing flashback database logs.
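Because LOG_BUFFER is a static parameter, the recommended value can be checked and then set in the server parameter file, taking effect at the next restart (8388608 bytes is 8 MB):
SQL> SHOW PARAMETER log_buffer
SQL> ALTER SYSTEM SET LOG_BUFFER = 8388608 SCOPE=SPFILE;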
Quiz
Guaranteed restore points do not age out of the control file and
must be explicitly dropped.
a. True
b. False
Answer: a
Summary
Practice Overview:
Flashback Database
This practice covers the following topics:
1. Configuring your database for Flashback Database
2. Using Flashback Database
Additional learning: There are several flashback videos on YouTube and OLL.
Transporting Data
Objectives
RMAN enables you to transport databases, data files, and tablespaces across platforms. This
includes transporting tablespaces across platforms with different endian formats (byte
ordering). You can convert a database on the destination host or source host. For platforms
that have the same endian format, no conversion is needed.
• Cross-platform data transport can be based on image copies or on backup sets.
• You can also create cross-platform inconsistent tablespace backups by using image
copies and backup sets. An inconsistent tablespace backup is one that is created when
the tablespace is not in read-only mode.
With the use of backup sets, you can choose compression and multisection options, which
reduce the overall transport time.
Note: RMAN does not catalog backup sets created for cross-platform transport in the control
file. This ensures that backup sets created for cross-platform transportation are not used
during regular restore operations.
When you develop a database transport strategy, you need to consider the endian format of
the platforms and the database open mode.
• Transport at the database level requires the same endian format (on source and
destination) and READ ONLY mode of the source database (which is not desirable for a
database that users need to update frequently).
• Tablespaces and backup sets can be transported across platforms of different endian
format, while the source database remains online (in READ WRITE mode).
The slide shows a sample workflow that considers these requirements. It is a strategy to
minimize downtime by performing most of the work while the database is open in READ
WRITE mode. Only for the final step (a small incremental backup) is the database in READ
ONLY mode, which is required so that both databases can be opened in a completely
consistent state.
Note: My Oracle Support note 1389592.1 provides information about how this use case can be
implemented for Oracle Database release 11.2.
To transport a tablespace from one platform to another (source to target), the data files
belonging to the tablespace set must be converted to a format that the target (destination)
database can understand. Although Oracle Database disk structures conform to a common
format across platforms, the source and target platforms may use different endian formats
(byte ordering). When moving to a platform with a different endian format, you can use the
CONVERT command of the RMAN utility to convert the byte ordering. This operation can be
performed on either the source or the target platform. For platforms that have the same
endian format, no conversion is needed.
The slide graphic depicts the possible steps to transport tablespaces from a source platform
to a target platform. However, it is possible to perform the conversion after shipping the files to
the target platform. The last two steps must be executed on the target platform.
Basically, the procedure is the same as when using previous releases of the Oracle database
server except when both platforms use different endian formats. It is assumed that both
platforms are cross-transportable compliant.
Using transportable tablespaces, Oracle data files (containing table data, indexes, and almost
every other Oracle database object) can be directly transported from one database to another.
Transportable tablespaces also provide a mechanism for transporting metadata.
You can use the transportable tablespace feature to move data across platform boundaries
(with the same character set). This simplifies the distribution of data from a data warehouse
environment to data marts, which often run on smaller platforms. It also allows a database to
be migrated from one platform to another by rebuilding the dictionary and transporting the
user tablespaces.
Moving data by using transportable tablespaces is much faster than performing either an
export/import or unload/load of the same data. This is because the data files containing all of
the actual data are just copied to the destination location, and you use Data Pump to transfer
only the metadata of the tablespace objects to the new database.
To be able to transport data files from one platform to another, you must ensure that both the
source system and the target system are running on one of the supported platforms. Query
V$TRANSPORTABLE_PLATFORM to determine whether the endian ordering is the same on
both platforms. V$DATABASE has two columns that can be used to determine your own
(source) platform name and platform identifier.
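For example, running the following queries on both systems lets you compare platform names and endian formats:
SQL> SELECT PLATFORM_ID, PLATFORM_NAME FROM V$DATABASE;
SQL> SELECT PLATFORM_NAME, ENDIAN_FORMAT
     FROM V$TRANSPORTABLE_PLATFORM
     ORDER BY PLATFORM_NAME;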
RMAN:
• Converts tablespaces, data files, or databases to the
format of a destination platform
• Does not change input files
You use the RMAN CONVERT command to convert a tablespace, data file, or database to the
format of a destination platform in preparation for transport across different platforms:
• CONVERT DATAFILE
• CONVERT TABLESPACE
• CONVERT DATABASE
Input files are not altered by CONVERT because the conversion is not performed in place.
Instead, RMAN writes converted files to a specified output destination.
When you use the RMAN CONVERT command to convert data, you can either convert the data
on the source platform after running Data Pump export, or you can convert it on the target
platform before running Data Pump import. In either case, you must transfer the data files
from the source system to the target system.
Restrictions: The CONVERT command does not process user data types that require endian
conversions. To transport objects between databases that are built on underlying types that
store data in a platform-specific format, use the Data Pump Import and Export utilities.
For detailed prerequisites, usage, restrictions, and syntax, see the Oracle Database Backup
and Recovery Reference and the Oracle Database Administrator’s Guide.
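As an illustration (the platform name and output path are assumptions, not values from this course environment), converting a tablespace on the source platform for a different-endian destination might look like this:
RMAN> CONVERT TABLESPACE users
      TO PLATFORM 'Microsoft Windows IA (32-bit)'
      FORMAT '/tmp/transport/%U';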
Quiz
Answer: b, c
The graphic in the slide illustrates transporting data with backup sets:
1. Before you create a backup set that can be used for cross-platform data transportation,
the following prerequisites must be met:
- COMPATIBLE parameter must be set to 12.0 or greater.
- To transport an entire database, the source database must be open in read-only
mode, because the SYS and SYSAUX tablespaces participate in the transport.
- When you use the DATAPUMP clause, the database must be open in read/write
mode, so that Data Pump can access the metadata.
2. There are two alternatives that affect where the endian conversion (if needed) takes
place. The FOR TRANSPORT clause indicates that the backup set can be transported to any
destination database; if the destination database uses an endian format that is different
from that of the source database, the endian format conversion is performed on the
destination database. The TO PLATFORM clause indicates that the conversion is performed
on the source database. The DATAPUMP clause indicates that an export dump file for the
tablespaces must be created. In this case, the database must be open in read/write
mode. The export can be performed after the last incremental backup.
Process Steps: 1
1. Verify the prerequisites.
2. Start an RMAN session in the source database.
3. Query the exact name of the destination platform.
4. Change the tablespace to read-only.
RMAN> ALTER TABLESPACE test READ ONLY;
1. Verify the prerequisites: The source database must be opened in read/write mode.
2. Start an RMAN session and connect to the target instance.
3. To transport tablespaces across platforms, query the exact name of the destination
platform.
4. Change the tablespace to read-only.
5. Back up the source tablespace by using the BACKUP command with the TO PLATFORM
or FOR TRANSPORT clause to indicate where the conversion takes place. Use the
DATAPUMP clause to indicate the need for an export dump file of the tablespace
metadata.
Note: The ALLOW INCONSISTENT clause of the BACKUP command enables you to back up
tablespaces that are not in read-only mode. Although the backup is created, you cannot plug
these tablespaces directly into the target database because they are inconsistent. You must
later create an incremental backup of the tablespaces while they are in read-only mode. This
incremental backup must contain the DATAPUMP clause, which creates an export dump file of
the tablespace metadata.
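Steps 4 and 5 can be sketched as follows (the destination platform name and backup file paths are assumptions):
RMAN> ALTER TABLESPACE test READ ONLY;
RMAN> BACKUP TO PLATFORM 'Linux x86 64-bit'
      FORMAT '/u01/backup/test_ts.bck'
      DATAPUMP FORMAT '/u01/backup/test_ts_dmp.bck'
      TABLESPACE test;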
Process Steps: 2
6. Disconnect from the source database and move the backup sets and the Data Pump
export dump file to the destination host. You can use operating system utilities for this
task.
7. Connect to the destination host, to which the tablespace is transported, as TARGET.
Ensure that the destination database is opened in read-write mode.
8. Use the RESTORE command in the destination database as shown in the slide. The
FOREIGN TABLESPACE clause points to the HP source data file. The FORMAT clause
indicates the destination location. The DUMP FILE FROM BACKUPSET clause restores
the required metadata from the dump file.
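A minimal sketch of step 8 follows (tablespace, backup set, and destination file names are assumptions):
RMAN> RESTORE FOREIGN TABLESPACE test
      FORMAT '/u01/app/oracle/oradata/dest/test01.dbf'
      FROM BACKUPSET '/u01/backup/test_ts.bck'
      DUMP FILE FROM BACKUPSET '/u01/backup/test_ts_dmp.bck';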
An additional example:
1. Create cross-platform, inconsistent, incremental backups with the ALLOW
INCONSISTENT clause:
BACKUP INCREMENTAL FROM SCN=2720649 FOR TRANSPORT
ALLOW INCONSISTENT FORMAT '/home/u_inc1.bkp' TABLESPACE
users;
2. Restore the inconsistent cross-platform tablespace backup with the RESTORE FOREIGN
TABLESPACE command.
3. Recover the restored data file copies by applying cross-platform incremental backups
with the RECOVER FOREIGN DATAFILECOPY command.
Oracle Database 12c: Backup and Recovery Workshop 16 - 12
THESE eKIT MATERIALS ARE FOR YOUR USE IN THIS CLASSROOM ONLY. COPYING eKIT MATERIALS FROM THIS COMPUTER IS STRICTLY PROHIBITED
With the transportable tablespace feature, moving data across different platforms becomes
much faster. However, metadata still needs to be unloaded, because the system tablespace
cannot be transported.
The purpose of the database transport feature is to provide a fast and easy way to transport a
database across different platforms with the same endian format. However, the source
platform and the target platform can have different disk alignments. For example, HP-UX and
Solaris both have big endian, but the disk alignment is eight on HP-UX and four on Solaris.
To transport databases from one platform to another, you must ensure that both the source
system and the target system are running on one of the platforms that are listed in
V$TRANSPORTABLE_PLATFORM and that both of them have the same endian format.
For example, you can transport a database running on Linux IA (32-bit) to one of the Windows
platforms.
Unlike transportable tablespaces, where there is a target database to plug data into, this
feature creates a new database on the target platform. The newly created database contains
the same data as the source database and, apart from items such as the database name,
instance name, and file locations, has the same settings as the source database.
Note: Transporting a database is faster than using Data Pump to move the data.
Before you can transport your database, you must open it in READ ONLY mode. Then use
RMAN to convert the necessary data files of the database.
When you do the conversion on the source platform, the RMAN command CONVERT
DATABASE generates a script containing the correct CREATE CONTROLFILE RESETLOGS
command that is used on the target system to create the new database. The CONVERT
DATABASE command then converts all identified data files so that they can be used on the
target system. You then ship the converted data files and the generated script to the target
platform. By executing the generated script on the target platform, you create a new copy of
your database.
For conversion on the target platform, the CONVERT DATABASE command (which is executed
on the source system) generates two scripts for the target system: one to convert the data
files and one to re-create the control file for the new database. You then ship the identified
data files and both scripts to the target platform. The first script uses the existing CONVERT
DATAFILE RMAN command to perform the conversion. The second script issues the CREATE
CONTROLFILE RESETLOGS SQL command with the converted data files to create the new
database.
Note: The source database must be running with the COMPATIBLE initialization parameter set
to 10.0.0 or higher. All identified tablespaces must have been READ WRITE at least once
since COMPATIBLE was set to 10.0.0 or higher.
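This prerequisite can be verified with a simple query:
SQL> SELECT VALUE FROM V$PARAMETER WHERE NAME = 'compatible';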
Slide diagram components: SYSTEM tablespace, tempfiles, BFILEs, directories, external tables
The diagram shows how different parts of a database are transported to the target platform or
re-created on the target platform.
Redo and undo information of the source database are discarded. Undo segments contain
change vectors, which currently cannot be converted. This is the reason why you must cleanly
shut down the instance before you open the database in read-only mode.
To ensure that undo records cannot be accessed (for consistent reads or flashback) after the
database is transported to its target platform, undo segment headers are converted so that
any attempt to access the undo records receives the “ORA-01555 Snapshot too old” error.
Slide commands (source platform):
$ sqlplus / as sysdba
SQL> startup mount;
SQL> alter database open read only;
DBMS_TDB.CHECK_DB('Microsoft Windows IA (32 bits)')
SQL> host rman target=/
Slide commands (target platform):
$ sqlplus / as sysdba
SQL> @crdb.sql
To transport a database on Linux IA (32-bit) to Windows platform and rename the new
database to newdb, perform the following steps if the conversion of the data files is done on
the source platform:
1. Open the source database in read-only mode.
2. Run the PL/SQL function:
DBMS_TDB.CHECK_DB('Microsoft Windows IA (32 bits)').
This procedure checks whether the database can be transported to the target platform.
3. Issue the following RMAN command:
convert database transport script 'crdb.sql' new database
'newdb' to platform 'Microsoft Windows IA (32 bits)' format
'/tmp/%U';
RMAN generates the data files and a pfile for the new database, and a script called
crdb.sql in the /tmp directory. crdb.sql creates a database called newdb.
4. Ship all the data files, the pfile, and the script to the target platform.
5. Run crdb.sql to create the database on the target platform.
Slide commands (source platform):
$ sqlplus / as sysdba
SQL> startup mount;
SQL> alter database open read only;
DBMS_TDB.CHECK_DB('Microsoft Windows IA (32 bits)')
SQL> host rman target=/
RMAN> CONVERT DATABASE ON TARGET PLATFORM CONVERT SCRIPT 'cnvt.sql'
      TRANSPORT SCRIPT 'crdb.sql' NEW DATABASE 'newdb' FORMAT '/tmp/%U';
Slide commands (target platform):
$ sqlplus / as sysdba
SQL> host rman target=/
RMAN> @cnvt.sql
RMAN> exit;
SQL> @crdb.sql
Perform the following steps if the conversion of the data files is done on the target platform:
1. Open the source database in read-only mode.
2. Run the following PL/SQL function:
DBMS_TDB.CHECK_DB('Microsoft Windows IA (32 bits)')
This procedure checks whether the database can be transported to the target platform.
3. Issue the following RMAN command:
convert database on target platform convert script 'cnvt.sql'
transport script 'crdb.sql' new database 'newdb' format
'/tmp/%U';
RMAN generates the data files and a pfile for the new database, a script called
cnvt.sql to convert the data files on the target platform, and a script called crdb.sql
to create the new database called newdb on the target platform. The name of the target
platform is not needed because the command puts the name of the source platform in
the cnvt.sql script. This script contains the RMAN command:
convert datafile <list of data files> from platform 'Linux IA
(32-bit)';
4. Ship all the data files, the pfile, and the scripts to the target platform.
5. Run the cnvt.sql script in RMAN to convert the data files on the target platform.
6. Run crdb.sql to create a database on the target platform.
Redo logs, control files, and tempfiles are not transported. They are re-created for the new
database on the target platform. As a result, the new database on the target platform must be
opened with the RESETLOGS option.
If a password file is used, it is not transported and you need to create it on the target platform.
This is because the types of file names allowed for the password file are OS specific.
However, the output of the CONVERT DATABASE command lists all the usernames and their
system privileges, and advises to re-create the password file and add entries for these users
on the target platform.
The CONVERT DATABASE command lists all the directory objects and objects that use BFILE
data types or external tables in the source database. You may need to update these objects
with new directory and file names. If BFILEs are used in the database, you have to transport
the BFILEs.
The generated PFILE and transport script use Oracle Managed Files (OMF) for database
files. If you do not want to use OMF, you must modify the PFILE and transport script.
The transported database has the same DBID as the source database. You can use the
DBNEWID utility to change the DBID. In the transport script as well as the output of the
CONVERT DATABASE command, you are prompted to use the DBNEWID utility to change the
database ID.
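For reference, the DBID change mentioned here uses the DBNEWID command-line utility; the database must be mounted when it runs, and it must be opened with RESETLOGS afterward (the database name below is a placeholder):
$ nid TARGET=SYS DBNAME=newdb
The utility prompts for the SYS password and changes the DBID (and, with DBNAME, the database name).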
Or:
RMAN> BACKUP FOR TRANSPORT FORMAT '/bkp_dir/trans_%U'
DATABASE;
1. Verify the prerequisites (the source database must be open in read-only mode if the
whole database is transported).
2. Start an RMAN session and connect to the target instance.
3. To perform cross-platform database transport, you may need the exact name of the
destination platform to which you are transporting data.
SELECT PLATFORM_ID, PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM
WHERE UPPER(PLATFORM_NAME) LIKE '%LINUX%';
4. Back up the source database using the BACKUP command with TO PLATFORM or FOR
TRANSPORT. The FORMAT clause indicates the directory where the backup sets
containing the data required for cross-platform database transportation are stored on the
source host.
In the first example, the conversion takes place on the source host, and the files
stored in the /bk directory are already converted for the HP Tru64 UNIX platform.
In the second example, the conversion takes place on the destination host during the
restore command and the files stored in the /bk directory on the source host are not
converted yet.
To use inconsistent tablespaces in a workflow like the one described on the previous pages:
• Create a cross-platform inconsistent incremental backup with the ALLOW
INCONSISTENT clause:
BACKUP INCREMENTAL FROM SCN=2720649 FOR TRANSPORT
ALLOW INCONSISTENT FORMAT '/u01/backup/inc1.bkp' TABLESPACE
users;
• Restore the inconsistent cross-platform tablespace backup with the RESTORE FOREIGN
TABLESPACE command.
• Recover restored data file copies with cross-platform incremental backups by using the
RECOVER FOREIGN DATAFILECOPY command.
Note: The ALLOW INCONSISTENT clause enables you to back up tablespaces that are not in
read-only mode. Although the backup is created, you cannot plug these tablespaces directly
into the target database because they are inconsistent. You must later create an incremental
backup of the tablespaces while they are in read-only mode. This incremental backup must
contain the DATAPUMP clause, which creates an export dump file of the tablespace metadata.
Quiz
Answer: a, c, d
Quiz
Answer: b
Summary
Practice Overview
This practice shows how to perform a cross-platform tablespace transport (although the
practice environment has only one host and platform).
Product demonstrations show the transfer across platforms.
Objectives
Point-in-Time Recovery
A benefit of PITR is that you can recover one or more objects—for example, tablespaces—to
an earlier time, without affecting the state of the other tablespaces and objects in the
database.
With Oracle Database 12c (and higher), point-in-time recovery has three different levels or
recovery scopes:
• Table Point-in-Time Recovery (TPITR) to recover one or more tables or table partitions
to an earlier point in time
• Tablespace Point-in-Time Recovery (TSPITR) to recover one or more self-contained
tablespaces to an earlier point in time
• Database Point-in-Time Recovery (DBPITR) to recover the entire database to an earlier
point in time
PITR Terminology
Slide diagram: RMAN restores data file backups and archived redo log files into an auxiliary
instance, recovers the tablespace there, exports the recovered tablespace metadata, and
points the target database to the recovered tablespace.
Before performing PITR, you need to determine the correct target time for your recovery. You
need to determine whether you need additional tablespaces in your recovery set. You should
evaluate what objects will be lost as a result of the PITR operation and determine how you
want to preserve those objects.
Each of these steps is discussed in more detail in this lesson.
It is extremely important that you choose the right target time or SCN for TSPITR. After you
perform TSPITR and bring a tablespace online, you cannot use any backup from a time
earlier than the moment you brought the tablespace online. In practice, this means that you
cannot make a second attempt at TSPITR if you choose the wrong target time the first time,
unless you are using a recovery catalog. However, if you have a recovery catalog, you can
perform repeated TSPITR operations to different target times.
The current control file does not contain a record of an older incarnation of the recovered
tablespace if you do not use a recovery catalog. Recovery with a current control file that
involves the tablespace cannot use a backup taken prior to the time when you brought the
tablespace online. However, you can perform incomplete recovery of the whole database to
any time prior to or equal to the time when you brought the tablespace online if you can
restore a backup control file from before that time.
You can use Oracle Flashback Query, Oracle Flashback Transaction Query, and Oracle
Flashback Version Query to investigate changes to your database and to help determine the
correct target time for TSPITR.
Note: With the Flashback tools and the data still available as undo data, it is usually much
simpler to use the Flashback tools for undoing unwanted changes (rather than TSPITR).
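For example, Flashback Version Query can help pinpoint when an unwanted change occurred (the table, column, and predicate below are illustrative only):
SQL> SELECT versions_starttime, versions_operation, salary
     FROM hr.employees
     VERSIONS BETWEEN TIMESTAMP SYSTIMESTAMP - INTERVAL '2' HOUR
              AND SYSTIMESTAMP
     WHERE employee_id = 100;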
SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('USERS,EXAMPLE');
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
To identify relationships between objects that span the recovery set boundaries, use the
DBMS_TTS.TRANSPORT_SET_CHECK procedure and query the
TRANSPORT_SET_VIOLATIONS view.
Note: RMAN TSPITR automatically executes the DBMS_TTS.TRANSPORT_SET_CHECK
procedure for the recovery set tablespaces and verifies that the query against
TRANSPORT_SET_VIOLATIONS returns no rows. If the query returns rows, RMAN stops
TSPITR processing and any tablespace containment violations must be resolved before
TSPITR can proceed. You can execute the procedure and query the view as described in the
slide as a precautionary measure.
In addition to the preparation requirements discussed earlier in the lesson, when you perform
fully automated TSPITR, you must:
• Configure any channels required for TSPITR on the target instance
• Specify a destination for RMAN to use for the auxiliary set of data files and other
auxiliary instance files
After TSPITR has completed, back up the recovered tablespaces and bring them online. After
you perform TSPITR, you can no longer use backups of the participating tablespaces that
were taken before the TSPITR operation.
Note: This time format assumes that NLS_DATE_FORMAT is set to 'yyyy-mm-dd:hh24:mi:ss'
and NLS_LANG is set to AMERICAN_AMERICA.WE8MSWIN1252.
RUN
{
  SET NEWNAME FOR DATAFILE
    '$ORACLE_BASE/oradata/orcl/users01.dbf'
    TO '/u01/backup/users01.dbf';
  ...
}
You can improve TSPITR performance by directing RMAN to use the existing image copies of
the recovery set and auxiliary set data files. This technique enables RMAN to skip restoring
the data files from a backup.
The CONFIGURE AUXNAME command sets a persistent alternative location for an auxiliary set
data file image copy, whereas the SET NEWNAME command sets an alternative location for
the duration of a RUN command.
If you want to customize RMAN TSPITR, you can use an RMAN-managed auxiliary instance
and make the following changes:
• Rename the recovery set data files by using SET NEWNAME so that they are not restored
and recovered in their original locations.
• Control the location of your auxiliary set data files by specifying new names for
individual files with SET NEWNAME and using DB_FILE_NAME_CONVERT to provide rules
for converting data file names in the target database to data file names for the auxiliary
database.
• Use existing image copies of the recovery set and auxiliary set data files on disk rather
than restoring them from backup for faster RMAN TSPITR performance.
Note: Refer to the Oracle Database Backup and Recovery User’s Guide for additional
information.
Oracle recommends that you allow RMAN to manage the creation and destruction of the
auxiliary instance used during RMAN TSPITR. However, creating and using your own
auxiliary instance is supported.
To create an Oracle instance suitable for use as an auxiliary instance, perform the following
steps:
1. Create an Oracle password file for the auxiliary instance by using the orapwd utility.
2. Create a text initialization parameter file for the auxiliary instance.
3. Verify Oracle Net connectivity to the auxiliary instance by using a valid net service
name.
To perform TSPITR, execute the following steps:
4. Start the auxiliary instance in NOMOUNT mode.
5. Connect the RMAN client to target and auxiliary instances.
6. Execute the RECOVER TABLESPACE command.
Refer to the Oracle Database Backup and Recovery User’s Guide for a detailed example.
File name conflicts: If your use of SET NEWNAME, CONFIGURE AUXNAME, and
DB_FILE_NAME_CONVERT causes multiple files in the auxiliary or recovery sets to have the
same name, you receive an error during TSPITR. To correct the problem, specify different
values for these parameters to eliminate the duplicate name.
RMAN cannot identify tablespaces with undo segments: During TSPITR, RMAN needs
information about which tablespaces had undo segments at the TSPITR target time. This
information is usually available in the recovery catalog, if one is used. If there is no recovery
catalog, or if the information is not found in the recovery catalog, RMAN proceeds assuming
that the set of tablespaces with undo segments at the target time is the same as the set of
tablespaces with undo segments at the present time. If this assumption is not correct, the
TSPITR operation fails and an error is reported. To prevent this from happening, provide a list
of tablespaces with undo segments at the target time in the UNDO TABLESPACE clause.
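A hedged sketch of supplying the undo tablespace list with the UNDO TABLESPACE clause (the tablespace names, SCN, and path are assumptions):
RMAN> RECOVER TABLESPACE users
      UNTIL SCN 2720649
      AUXILIARY DESTINATION '/u01/aux'
      UNDO TABLESPACE undotbs1;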
Restarting manual auxiliary instance after TSPITR failure: If you are managing your own
auxiliary instance and there is a failure in TSPITR, then before you can retry TSPITR, you
must shut down the auxiliary instance, correct the problem, and put the auxiliary instance
back in NOMOUNT mode.
Quiz
Answer: a, c, d
In Oracle Database 12c and higher, RMAN enables you to recover one or more tables or
table partitions to a specified point in time without affecting the remaining database objects.
Recovering tables and table partitions is useful in the following situations:
• You need to recover a very small number of tables to a particular point in time. In this
situation, TSPITR is not the most effective solution because it moves all the objects in
the tablespace to a specified point in time.
• You need to recover a tablespace that is not self-contained to a particular point in time.
TSPITR can be used only if the tablespace is self-contained.
• You need to recover tables that have either been corrupted or deleted with the PURGE
option, so you cannot use the Flashback Drop functionality.
• You enabled logging for a Flashback Table, but the flashback target time or SCN is
beyond the available undo.
• You want to recover data that is lost after a data definition language (DDL) operation
has changed the structure of tables. You cannot use Flashback Table to rewind a table
to before the point of a structural change, such as a truncate table operation.
Slide diagram: table recovery inputs (object names, point in time, import choice), prerequisite
check, and process steps 1 through 6 producing a dump file.
1. RMAN uses previously created backups to recover tables and table partitions to a
specified point in time. After the prerequisites are fulfilled, you provide the following
input with the RECOVER command:
- Names of the tables or table partitions to be recovered
- Point in time to which the tables or table partitions are to be recovered
- Whether the recovered tables or table partitions must be imported into the target
database
RMAN uses your input to automate the process of recovering the specified tables or
table partitions.
2. RMAN determines the backup based on your specification.
3. RMAN creates an auxiliary instance.
4. RMAN recovers your tables or table partitions, up to the specified point in time, into this
auxiliary instance.
5. RMAN creates a Data Pump export dump file that contains the recovered objects.
6. RMAN imports the recovered objects into the target database.
You can customize this process as you will see in the following slides.
The slide lists the prerequisites and limitations for table recovery from backups.
The result of table recovery is of course that the table exists as it was at an earlier point in
time. This could potentially cause consistency issues.
• RMAN recovers tables or table partitions to the state that they were in at the time
specified by the SCN. The SCN is an upper, noninclusive limit.
• RMAN recovers tables or table partitions to the state they were in at the specified time.
Use the date format specified in the NLS_LANG and NLS_DATE_FORMAT environment
variables. You can also use date expressions such as SYSDATE to specify the time
(SYSDATE - 5 means 5 days earlier than the system date).
• RMAN recovers tables or table partitions to the state they were in at the time specified
by the log sequence number and thread number. RMAN selects only files that it can use
to restore or recover up to, but not including, the specified sequence number.
1. After verifying the prerequisites, start an RMAN session and connect to the target
instance.
2. Enter the RECOVER TABLE command with your required clauses.
3. RMAN determines the backup that contains the data that needs to be recovered, based
on the point in time specified for recovery.
4. RMAN creates an auxiliary instance. Optionally, you can specify the location of the
auxiliary instance files with the AUXILIARY DESTINATION or SET NEWNAME clauses
of the RECOVER command. AUXILIARY DESTINATION is the recommended clause
because, if you use SET NEWNAME and omit even one data file name, the recovery
fails.
5. RMAN recovers the specified tables or table partitions, up to the specified point in time,
into this auxiliary instance.
6. RMAN creates a Data Pump export dump file that contains the recovered objects. You
can optionally specify the name of the export dump file (with the DUMP FILE clause,
default OS-specific name) that is used to store the metadata from the source database.
You can also specify the location in which the export dump file is created with the
DATAPUMP DESTINATION clause. The location is typically the path of the OS directory
that stores the dump file. If omitted, the dump file is stored in the AUXILIARY
DESTINATION location. If that is not specified, then the dump file is stored in a default
OS-specific location.
7. By default, RMAN imports the recovered objects that are stored in the export dump file
into the target database. However, you can choose not to import the recovered objects
by using the NOTABLEIMPORT clause of the RECOVER command.
If you choose not to import the recovered objects, RMAN recovers the tables or table
partitions to the specified point and then creates the export dump file. However, this
dump file is not imported into the target database. You must manually import this dump
file into your target database, when required, by using the Data Pump Import utility.
8. RMAN optionally renames the recovered tables or table partitions in the target database
with the REMAP TABLE option.
- If a table already exists and the REMAP option is not specified, then the table
recovery generates an error.
- If the REMAP option is specified, then the indexes and constraints are not imported.
You must create dependent objects yourself.
To import the recovered objects into a tablespace that is different from the one in which
the objects originally existed, use the REMAP TABLESPACE clause of the RECOVER
command. Only the tables or table partitions that are being recovered are remapped; the
existing objects are not changed.
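As an end-to-end sketch combining the clauses described in these steps (all object names, the dump file name, and the paths are hypothetical):

```
RECOVER TABLE hr.regions
  UNTIL TIME "SYSDATE - 1"
  AUXILIARY DESTINATION '/u01/aux'
  DATAPUMP DESTINATION '/u01/dumps'
  DUMP FILE 'regions_recov.dmp'
  REMAP TABLE hr.regions:regions_recovered
  REMAP TABLESPACE users:example;
```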
Quiz
Answer: a
Summary
Practice Overview
This practice demonstrates functionality introduced with Oracle Database 12c: recovering a
table from a backup by using an automatically created auxiliary instance.
Optionally, see OLL and YouTube videos on this topic.
Duplicating a Database
Objectives
A duplicate database is a copy of your target database. With the FOR STANDBY clause, it
keeps the same unique database identifier (DBID); if FOR STANDBY is not specified, it
creates a new DBID. You can operate it independently of the target database to:
• Test backup and recovery procedures
• Recover objects that were inadvertently dropped from the target database by creating
an export containing the objects in the duplicate database and importing them into the
production database. However, you will probably find that Flashback Query, Flashback
Drop, Flashback Table, and table recovery from backup are much easier and faster
solutions for recovering objects.
To create a duplicate database, you can use the RMAN DUPLICATE command.
• The duplicate database can include the same content or only a subset from the source
database. It can be in the same host or separate hosts.
• The principal work of the duplication is performed by the auxiliary channels. These
channels correspond to a server session on the auxiliary instance on the destination
host for backup-based duplication.
• For active database duplication, the target channels perform the work of pushing data
file copies to the auxiliary instance (if the number of allocated target channels is greater
than the number of allocated auxiliary channels).
You can duplicate a source database to a destination database, which can be on the same or
different computers. The database instance associated with the duplicate database is called
the auxiliary instance. All duplication techniques require a connection to the auxiliary instance.
The diagram shows you the following techniques for database duplication:
• From an active database, connected to the target and auxiliary instances:
- With Oracle Database 12c (and higher), a “pull” (or restore) process is based on
backup sets.
- Before Oracle Database 12c, a “push” process is based on image copies.
• From backup, connected to the target and auxiliary instances
• From backup, connected to the auxiliary instance, not connected to the target, but with
recovery catalog connection
• From backup, connected to the auxiliary instance, not connected to the target and the
recovery catalog
You can duplicate databases with the RMAN command line or with Cloud Control.
Diagram: Active database duplication. The RMAN client connects as TARGET to the source
database and as AUXILIARY to the duplicate database.
You can instruct the source database to send a “clone” of itself directly to the auxiliary
instance by using Enterprise Manager or the FROM ACTIVE DATABASE clause of the RMAN
DUPLICATE command.
Pre-existing backups are neither needed nor used for this operation. The online image copies
are created by the source database and directly transmitted via Oracle Net (they are not
written to disk) when using the FROM ACTIVE DATABASE clause of the RMAN DUPLICATE
command. The source database can be open or mounted.
RMAN connects as TARGET to the source database instance and as AUXILIARY to the
auxiliary instance (as shown in the slide).
The required files (data files, control files, SPFILE, and archive log copies) are copied from
the source to an auxiliary instance via an inter-instance network connection. RMAN then uses
a “memory script” (one that is contained only in memory) to complete recovery and open the
database.
This method of active database duplication is referred to as the “push”-based method (and
was the only method for Oracle Database 11g).
The “pull” (or restore) process: A connection is first established with the source database. The
auxiliary instance then retrieves the required database files from the source database as
backup sets. A restore operation is performed from the auxiliary instance. Therefore, fewer
resources are used on the source database. Oracle Net (TNS) connections are required to
both the target and auxiliary instances.
Based on the DUPLICATE clauses, RMAN dynamically determines which process to use
(push or pull). This ensures that existing customized scripts continue to function.
• When you specify USING BACKUPSET, RMAN uses the pull method.
• When you specify SET ENCRYPTION before the DUPLICATE command, RMAN
automatically uses the pull method and creates backup sets. The backups sent to the
destination are encrypted.
• The SECTION SIZE clause divides data files into subsections that are restored in
parallel across multiple channels on the auxiliary database. For an effective use of
parallelization, allocate more AUXILIARY channels.
• With the USING COMPRESSED BACKUPSET clause, the files are transferred as
compressed backup sets. RMAN uses unused block compression while creating
backups, thus reducing the size of backups that are transported over the network.
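Assuming hypothetical net service names orcl and dupldb, a pull-based active duplication that uses these clauses might be sketched as:

```
CONNECT TARGET sys@orcl
CONNECT AUXILIARY sys@dupldb

DUPLICATE TARGET DATABASE TO dupldb
  FROM ACTIVE DATABASE
  USING COMPRESSED BACKUPSET
  SECTION SIZE 500M;
```

Specifying USING COMPRESSED BACKUPSET and SECTION SIZE forces the pull method; allocate additional AUXILIARY channels to benefit from the parallel section restores.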
Diagram: Backup-based duplication with a target connection. The RMAN client connects as
TARGET to the source database and as AUXILIARY to the duplicate database.
When you duplicate a database with a target database connection, RMAN can obtain
metadata about backups either from the target database control file or from the recovery
catalog.
The diagram illustrates backup-based duplication with a target connection. RMAN connects to
the source database instance and the auxiliary instance. Optionally, RMAN can connect to a
recovery catalog database (not shown in the graphic). The destination host must have access
to the RMAN backups required to create the duplicate database.
Diagram: Backup-based duplication without a target connection. The RMAN client connects
to the recovery catalog database and as AUXILIARY to the duplicate database on the
destination host.
When you duplicate a database without a target database connection, but with a recovery
catalog, RMAN uses the recovery catalog to obtain metadata about the backups.
The diagram illustrates backup-based duplication without a target connection. RMAN
connects to a recovery catalog database instance and the auxiliary instance. The destination
host must have access to the RMAN backups required to create the duplicate database.
Diagram: Backup-based duplication without target or recovery catalog connections. The
RMAN client connects as AUXILIARY to the duplicate database on the destination host,
which has access to a backup location.
When you duplicate a database without a target database connection and without a recovery
catalog, RMAN uses a BACKUP LOCATION where all necessary backups and copies reside.
The diagram illustrates backup-based duplication without connections to the target or to the
recovery catalog database instance. A disk backup location containing all the backups or
copies for duplication must be available to the destination host.
It is important to understand these basic steps and the RMAN database duplication process.
If you are using the Enterprise Manager interface, wizards can perform most steps for you. If
you are creating a duplicate database with the command-line interface, you need to perform
the steps manually. You can also use the EM interface as a test or sample, and use the
output log as a basis for scripting your own database duplication.
The basic steps for creating a duplicate database are outlined in the slide. More details are
provided in this lesson for some of the steps.
You must create a text initialization parameter file for the auxiliary instance. The text
initialization parameter file must reside on the same host as the RMAN client that you use to
execute the DUPLICATE command.
Take note of the requirements for each of the following parameters:
• DB_NAME: If the target database and the duplicate database are in the same Oracle
home, you must set DB_NAME to a different name. If they are in different Oracle homes,
you must ensure that the name of the duplicate database differs from the other names in
its Oracle home. Be sure to use the same database name that you set for this parameter
when you execute the DUPLICATE command.
• CONTROL_FILES: This parameter is required when you are not using the SET
NEWNAME option and Oracle Managed Files (OMF).
Note: Be sure to verify the settings of all initialization parameters that specify path names.
Verify that all specified paths are accessible on the duplicate database host.
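A minimal text initialization parameter file for the auxiliary instance might look like the following sketch. The database name dupldb and all paths are hypothetical; adjust them for your environment:

```
# initdupldb.ora: hypothetical minimal auxiliary parameter file
db_name=dupldb
memory_target=800M
# Required here because neither SET NEWNAME nor OMF is being used
control_files='/u01/app/oracle/oradata/dupldb/control01.ctl'
```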
Available techniques:
• SET NEWNAME command
• CONFIGURE AUXNAME command (deprecated for recovery set data files)
You can use the following techniques to specify new names for data files:
• Include the SET NEWNAME FOR DATAFILE command within a RUN block to specify new
names for the data files.
• Use the CONFIGURE AUXNAME command.
CONFIGURE AUXNAME is an alternative to SET NEWNAME. The difference is that after
you configure the auxiliary name the first time, additional DUPLICATE commands reuse
the configured settings. In contrast, you must reissue the SET NEWNAME command
every time you execute the DUPLICATE command.
Note: SET NEWNAME replaces CONFIGURE AUXNAME for recovery set data files.
• Specify the DB_FILE_NAME_CONVERT parameter with the DUPLICATE command.
You can use SET NEWNAME to specify a default name format for all data files in a named
tablespace or for all data files in the database.
The order of precedence for the SET NEWNAME command is as follows:
1. SET NEWNAME FOR DATAFILE and SET NEWNAME FOR TEMPFILE
2. SET NEWNAME FOR TABLESPACE
3. SET NEWNAME FOR DATABASE
Example:
RUN
{
SET NEWNAME FOR DATABASE TO '/u01/app/oracle/oradata/dupldb/%b';
DUPLICATE TARGET DATABASE TO dupldb
LOGFILE
GROUP 1 ('/u01/app/oracle/oradata/dupldb/redo01a.log',
'/u01/app/oracle/oradata/dupldb/redo01b.log') SIZE 50M REUSE,
GROUP 2 ('/u01/app/oracle/oradata/dupldb/redo02a.log',
'/u01/app/oracle/oradata/dupldb/redo02b.log') SIZE 50M REUSE,
GROUP 3 ('/u01/app/oracle/oradata/dupldb/redo03a.log',
'/u01/app/oracle/oradata/dupldb/redo03b.log') SIZE 50M REUSE;
}
Substitution variables:
• %b: Specifies the file name without the directory path
• %f: Specifies the absolute file number of the data file for which the new name is
generated
When issuing SET NEWNAME FOR DATABASE or SET NEWNAME FOR TABLESPACE, you must
specify substitution variables in the TO <filename> clause to avoid name collisions. Specify
at least one of the following substitution variables: %b, %f, and %U. %I and %N are optional
variables.
RMAN generates names for the required database files when you execute the DUPLICATE
command. You can control the naming of the files by specifying the following initialization
parameters in the auxiliary instance initialization parameter file:
• CONTROL_FILES: Specify the names of the control files in this parameter. If you do not
set the names via this parameter, the Oracle server creates an Oracle-managed control
file in a default control destination. Refer to the SQL CREATE CONTROLFILE command
in the SQL Reference manual for specific information.
• DB_FILE_NAME_CONVERT: This parameter is used to specify the names of data files for
the auxiliary database. It has the format DB_FILE_NAME_CONVERT = 'string1',
'string2', where string1 is the pattern of the target database file name and
string2 is the pattern of the auxiliary database file name. You can also specify the
DB_FILE_NAME_CONVERT parameter as an option to the DUPLICATE DATABASE
command.
• LOG_FILE_NAME_CONVERT: This parameter is used to specify the names of the redo
log files for the auxiliary database. It has the format LOG_FILE_NAME_CONVERT =
'string1', 'string2', where string1 is the pattern of the target database file
name and string2 is the pattern of the auxiliary database file name. You can also use
the LOGFILE clause of the DUPLICATE DATABASE command to specify redo log file
names.
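Assuming hypothetical source and duplicate directory paths, the two convert parameters might be set as follows in the auxiliary instance parameter file:

```
db_file_name_convert='/u01/app/oracle/oradata/orcl','/u01/app/oracle/oradata/dupldb'
log_file_name_convert='/u01/app/oracle/oradata/orcl','/u01/app/oracle/oradata/dupldb'
```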
After you have created the text initialization parameter file, invoke SQL*Plus to start the
auxiliary instance in NOMOUNT mode.
RMAN creates a default server parameter file for the auxiliary instance if the following
conditions are true:
• Duplication does not involve a standby database.
• Server parameter files are not being duplicated.
• The auxiliary instance was not started with a server parameter file.
If these conditions are not met, create a server parameter file (SPFILE) from your text
initialization parameter file. You can execute CREATE SPFILE before or after you have started
the instance.
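The sequence can be sketched as follows (the SID dupldb and the pfile path are hypothetical):

```
$ export ORACLE_SID=dupldb
$ sqlplus / as sysdba

SQL> CREATE SPFILE FROM PFILE='/u01/app/oracle/dbs/initdupldb.ora';
SQL> STARTUP NOMOUNT;
```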
The backups needed to restore the data files must be accessible on the duplicate host. You
do not need a whole database backup. RMAN can use a combination of full and incremental
backups of individual data files during the duplication process.
Archived redo logs required to recover the duplicate database to the desired point in time
must also be accessible. The archived redo log files can be backups, image copies, or the
actual archived redo logs. The backups or copies can be transferred to the local disk of the
duplicate database node or mounted across a network by some means such as network file
system (NFS).
If you do not have automatic channels configured, manually allocate at least one auxiliary
channel before issuing the DUPLICATE command. The ALLOCATE AUXILIARY CHANNEL
command must be within the same RUN block as the DUPLICATE command.
The channel type specified on the ALLOCATE AUXILIARY CHANNEL command must match
the media where the backups of the target database are located.
• If the backups reside on disk, you can allocate more than one channel to reduce the
time it takes for the duplication process.
• For tape backups, you can specify the number of channels that correspond to the
number of devices available.
The auxiliary instance must be started with the NOMOUNT option and the target database must
be mounted or open.
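A manual channel allocation can be sketched as follows. The channel names, database name, and channel count are hypothetical; note that the ALLOCATE AUXILIARY CHANNEL commands and the DUPLICATE command appear in the same RUN block, as required:

```
RUN
{
  ALLOCATE AUXILIARY CHANNEL aux1 DEVICE TYPE DISK;
  ALLOCATE AUXILIARY CHANNEL aux2 DEVICE TYPE DISK;
  DUPLICATE TARGET DATABASE TO dupldb;
}
```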
When you execute the DUPLICATE command, RMAN performs the operations listed in the
slide.
1A. RMAN creates a default server parameter file for the auxiliary instance if the following
conditions are true:
- Duplication does not involve a standby database.
- Server parameter files are not being duplicated.
- The auxiliary instance was not started with a server parameter file.
1B. RMAN restores the control file from backup. This always occurs for the standby
database, and for backup-based duplication without a target connection.
2. RMAN mounts the restored or the copied backup control file from the active database.
3. For backup-based duplication: RMAN uses the RMAN repository to select the backups
for restoring the data files to the auxiliary instance.
4. RMAN restores and copies the duplicate data files.
5. RMAN recovers the data files with incremental backups and archived redo log files to a
noncurrent point in time. RMAN must perform database point-in-time recovery, even
when no explicit point in time is provided for duplication. Point-in-time recovery is
required because the online redo log files in the source database are not backed up and
cannot be applied to the duplicate database. The farthest point of recovery of the
duplicate database is the most recent redo log file archived by the source database.
Oracle Database 12c: Backup and Recovery Workshop 18 - 20
THESE eKIT MATERIALS ARE FOR YOUR USE IN THIS CLASSROOM ONLY. COPYING eKIT MATERIALS FROM THIS COMPUTER IS STRICTLY PROHIBITED
6. RMAN shuts down and restarts the database instance in NOMOUNT mode.
7. RMAN creates a new control file, and a new, unique database identifier (DBID) is
created and stored in the data files of the duplicated database.
8. RMAN opens the duplicate database with the RESETLOGS option.
9. RMAN creates the online redo log files for the duplicate database.
Note: If the DUPLICATE DATABASE command fails, you can re-execute the DUPLICATE
DATABASE command and the duplication process attempts to resume from the point of
failure.
You can specify the following options with the DUPLICATE command:
• SKIP READONLY: Excludes read-only tablespaces
• UNDO TABLESPACE: Must be specified when the target database is not open and
there is no recovery catalog connection, so that RMAN does not check the
tablespace for SYS-owned objects
The following additional options for the DUPLICATE command are introduced with Oracle
Database 11g Release 2:
• NOREDO: The NOREDO option is used to signal RMAN that redo logs should not be
applied during the recovery phase of the duplication operation. This option should be
specified when the database was in NOARCHIVELOG mode at the time of the backup or
when the archived redo log files are not available for use during the duplication
operation. This option is appropriate if a database that is currently in ARCHIVELOG
mode is being duplicated to a point in time when it was in NOARCHIVELOG mode.
If you are planning a targetless DUPLICATE operation and the database is in
NOARCHIVELOG mode, you must use the NOREDO option to inform RMAN of the
database mode. Without a connection to the target database, RMAN cannot determine
the mode.
• UNDO TABLESPACE: RMAN checks that there are no objects belonging to the SYS user
in any of the duplicated tablespaces during non-whole database duplication. The
SYSTEM, SYSAUX, and undo segment tablespaces are excluded from this check.
However, if the target database is not open and a recovery catalog is not being used
during the duplication, RMAN cannot obtain the undo tablespace names. So you must
use the UNDO TABLESPACE option to provide the names of undo segment tablespaces.
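A targetless, catalogless duplication of a NOARCHIVELOG-mode backup might therefore be sketched as follows (the database name, backup location, and undo tablespace name are hypothetical):

```
CONNECT AUXILIARY sys@dupldb

DUPLICATE DATABASE TO dupldb
  BACKUP LOCATION '/backups/orcl'
  NOREDO
  UNDO TABLESPACE undotbs1;
```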
Substitution variable:
• %f: Specifies the absolute file number of the data file for which the new name is
generated
To avoid possible name collisions when restoring to another location, use the substitution
variables of the SET NEWNAME command. Specify at least one of the following substitution
variables: %b, %f, and %U. %I and %N are optional variables.
The example shows the SET NEWNAME FOR TABLESPACE command to set default names with
a substitution variable, together with explicit SET NEWNAME clauses.
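Because the slide example itself is not reproduced here, the following is only a sketch of such a combination; the tablespace name, file number, and paths are hypothetical:

```
RUN
{
  SET NEWNAME FOR TABLESPACE users TO '/u01/app/oracle/oradata/dupldb/%b';
  SET NEWNAME FOR DATAFILE 1 TO '/u01/app/oracle/oradata/dupldb/system01.dbf';
  DUPLICATE TARGET DATABASE TO dupldb;
}
```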
Quiz
Answer: b, c
Summary
Practice Overview:
Duplicating a Database
Practice 18-1 covers duplicating a database.
This practice covers cloning a database and using utilities to complete the setup of a
functioning duplicated database.
Objectives
The RMAN command output contains actions that are relevant to the RMAN job as well as
error messages that are generated by RMAN, the server, and the media vendor. RMAN error
messages have an RMAN-nnnn prefix. The output is displayed to the terminal (standard
output) but can be written to a file by defining the LOG option or by shell redirection.
The RMAN trace file contains the DEBUG output and is used only when the TRACE command
option is used.
The alert log contains a chronological log of errors, nondefault initialization parameter
settings, and administration operations. Because it records values for overwritten control file
records, it can be useful for RMAN maintenance when operating without a recovery catalog.
In Cloud Control, navigate from the database home page > Oracle Database > Logs > Alert
Logs Content > Switch to Text Alert Log Contents, and optionally, enter search criteria. Click
Go to view the content of the alert log.
The Oracle trace file contains detailed output that is generated by Oracle server processes.
This file is created when an ORA-600 or ORA-3113 (following an ORA-7445) error message
occurs, whenever RMAN cannot allocate a channel, and when the Media Management
Library fails to load. It can be found in USER_DUMP_DEST.
The sbtio.log file contains vendor-specific information that is written by the media
management software and can be found in USER_DUMP_DEST. Note that this log does not
contain Oracle server or RMAN errors.
The DEBUG option displays all SQL statements that are executed during RMAN compilations
and the results of these executions. Any information that is generated by the recovery catalog
PL/SQL packages is also displayed. In the following example, the DEBUG output is written
during the backup of data file 3, but not data file 4:
RMAN> run {
debug on;
allocate channel c1 type disk;
backup datafile 3;
debug off;
backup datafile 4; }
Remember that the DEBUG output can be voluminous, so make sure that you have adequate
disk space for the trace file. This simple backup session that does not generate any errors
creates a trace file that is almost half a megabyte in size:
$ rman target / catalog rman/rman debug trace sample.log
RMAN> backup database;
RMAN> host "ls -l sample.log";
-rw-r--r-- 1 user02 dba 576270 Apr 6 10:38 sample.log
host command complete
Because of the amount of data that RMAN logs, you may find it difficult to identify the useful
messages in the RMAN error stack. Note the following tips and suggestions:
• Because many of the messages in the error stack are not meaningful for
troubleshooting, try to identify the one or two errors that are most important.
• Check for a line that says Additional information followed by an integer. This line
indicates a media management error. The integer that follows refers to code that is
explained in the text of the error message.
• Read the messages from bottom to top because this is the order in which RMAN issues
the messages. The last one or two errors that are displayed in the stack are often
informative.
• Look for the RMAN-03002 or RMAN-03009 message immediately following the banner.
RMAN-03009 is the same as RMAN-03002 but includes the channel ID. If the failure is
related to an RMAN command, then these messages indicate which command failed.
The syntax errors generate an RMAN-00558 error.
1. During the compilation phase, RMAN identifies the files that will participate in the
command by querying the backup repository (which is either the recovery catalog or the
target database control file).
2. With this information, RMAN constructs one or more job steps. Each job step consists of
a set of instructions from the RMAN client to the target instance to perform specific data
movement operations on one or more files.
3. During the execution phase, RMAN submits each job step to an available channel.
4. RMAN monitors the channels that process work in parallel. After the completion of each
job step, RMAN submits the next job step to that channel. When all job steps have
completed, the command is complete.
• Each RMAN channel contains a separate OCI connection to the target database. So
each channel also represents one Oracle foreground process that performs the data
movement tasks requested by the RMAN client.
• The RMAN client does not perform I/O operations; those tasks are performed by the
Oracle processes which:
R) Read (A channel reads data from disk or tape into I/O input buffers.)
P) Process using CPU (A channel copies blocks from input buffers to output buffers
and performs additional processing on the blocks: block validation, encryption,
and/or compression.)
W) Write (A channel writes the blocks from output buffers to storage media, that is to
disk or tape.)
Diagram: A backup to tape is validated against the control file, the recovery catalog, and the
backup pieces.
On some platforms, Oracle provides a diagnostic tool called sbttest. This utility performs a
simple test of the media management software by acting as the Oracle database server and
attempting to communicate with the media manager.
Use sbttest to perform a quick test of the media manager.
$ORACLE_HOME/bin/sbttest backup_file_name
When you type sbttest without the mandatory backup_file_name, the online
documentation is displayed.
If sbttest returns 0, then the test ran without error, which means that the media manager is
correctly installed and can accept a data stream and return the same data when requested. If
sbttest returns a nonzero value, then either the media manager is not installed or it is not
configured correctly.
Is There a Problem?
The first step in tuning is analysis. Do you actually have a problem that can be addressed by
tuning RMAN operations? To find out, you need to:
• Know the performance of each of your components. For example, to evaluate a tape
drive, compare the actual data rate of a backup with the documented maximum speed
of that tape drive.
• For backups, analyze the read and process steps with the BACKUP VALIDATE
command. It causes RMAN to perform all steps of the backup up to the point of writing
the data to the output device; the data is then discarded and nothing is written. Note that
VALIDATE does perform compression if specified on the backup command, but not
encryption. If compression was specified on the backup command, then BACKUP
VALIDATE should be first run without compression, to measure the speed at which
RMAN can read the input files, then again with compression to see the effects of the
compression itself.
• The VALIDATE option for restoration causes RMAN to perform all the steps of a restore,
up to the point where it is about to write the restored data to disk; the data is then
discarded and nothing is written. This is often used to validate the integrity of backup
media, but is also useful to diagnose performance problems with the read and
processing operations of a RESTORE command. RESTORE VALIDATE performs both
decryption and decompression if the backup was created with those options.
1. You can use the BACKUP VALIDATE command to help you determine whether your
bottleneck is in the read or write phase. Start by querying the
EFFECTIVE_BYTES_PER_SECOND column of V$BACKUP_ASYNC_IO or
V$BACKUP_SYNC_IO.
2. If the value in EFFECTIVE_BYTES_PER_SECOND is less than the expected throughput
from your storage media, execute the BACKUP VALIDATE command. BACKUP
VALIDATE performs the same disk reads as a backup, but does not perform I/O to an
output device. So by comparing the time of your backup operations with the time taken
by the BACKUP VALIDATE command, you should be able to determine whether the
bottleneck is due to reads or writes.
If the execution time of the BACKUP VALIDATE command approximates the actual backup
time, the read phase is most likely the bottleneck.
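The measurement described above can be sketched as follows; the first query runs in SQL*Plus, the second command in RMAN, and the query is only one way to inspect recent rows:

```
-- SQL*Plus: check the achieved throughput of recent backup I/O
SELECT filename, type, effective_bytes_per_second
FROM   v$backup_async_io
ORDER  BY close_time DESC;

# RMAN: time a read-only pass for comparison with the real backup
BACKUP VALIDATE DATABASE;
```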
You can improve backup performance by adjusting the multiplexing level.
Note: Refer to My Oracle Support note 1072545.1 for additional information about tuning by
using buffer memory parameters.
To analyze a write process to disk (for both backup and restore), create a new tablespace
with a data file on the disk and time the operation. If the same issues occur, general Oracle
performance to this device should be tuned, not RMAN-specific parameters.
If this technique is not suitable, a write I/O driver can be used. The write driver is invoked by
calling the DBMS_BACKUP_RESTORE.SETPARMS function with parameters:
• p0 => 6
• p1 => buffer size in bytes (specify NULL or 0 to use default)
• p2 => number of buffers (specify NULL or 0 to use default)
• p3 => number of blocks to write
• p4 => block size in bytes (must be specified, 8192 is a good choice)
• p5 => file name to write
• p6 => 1
The write I/O driver is also helpful when you are tuning disk channel output by executing
commands with varying buffer sizes and counts.
If the execution time for the BACKUP VALIDATE command is significantly less than the actual
backup time, writing to the output device is most likely the bottleneck.
If you are using compression, set the compression to LOW or MEDIUM.
Use the AES128 encryption algorithm because it is the least CPU-intensive.
If you experience performance issues with backup and restore jobs, you may begin
diagnosing your issues by using the views listed in the slide. Additional information about
each view follows in the lesson.
Monitor the progress of backups, copies, and restores by querying the V$SESSION_LONGOPS
view. RMAN uses detail and aggregate rows in V$SESSION_LONGOPS. Detail rows describe
the files that are being processed by one job step. Aggregate rows describe the files that are
processed by all job steps in an RMAN command. A job step is the creation or restoration of
one backup set or data file copy. The detail rows are updated with every buffer that is read or
written during the backup step, so their granularity of update is small. The aggregate rows are
updated when each job step is completed, so their granularity of update is large.
Note: Set the STATISTICS_LEVEL parameter to TYPICAL (the default value) or ALL to
populate the V$SESSION_LONGOPS view.
The relevant columns in V$SESSION_LONGOPS for RMAN include:
• OPNAME: A text description of the row. Detail rows include RMAN:datafile copy,
RMAN:full datafile backup, and RMAN:full datafile restore.
• CONTEXT: For backup output rows, the value of this column is 2. For all the other rows
except proxy copy (which does not update this column), the value is 1.
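A typical progress query against V$SESSION_LONGOPS (a standard monitoring pattern, not specific to this course environment) is:

```
SELECT sid, serial#, opname,
       ROUND(sofar / totalwork * 100, 1) AS pct_complete
FROM   v$session_longops
WHERE  opname LIKE 'RMAN%'
AND    totalwork > 0
AND    sofar <> totalwork;   -- still-running steps only
```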
The maximum backup speed is limited by the available hardware. It is not possible to back up
any faster than the aggregate tape bandwidth. One exception to this is if there are many
empty blocks in the data files that need not be backed up.
One of the components of the backup system will be a bottleneck—which one depends on the
relative speeds of the disk, tape drive, and any other transport components such as the
network. As an example, if the bottleneck is the tape drive, and the tape is streaming, then the
backup cannot possibly proceed any faster.
You can use V$BACKUP_ASYNC_IO to monitor asynchronous I/O. The LONG_WAITS column
shows the number of times the backup or restore process directed the operating system to
wait until an I/O was complete. The SHORT_WAITS column shows the number of times the
backup/restore process made an operating system call to poll for I/O completion in
nonblocking mode. On some platforms, the asynchronous I/O implementation may cause the
calling process to wait for the I/O to complete while performing a nonblocking poll for I/O.
The simplest way to identify the bottleneck is to query V$BACKUP_ASYNC_IO for the data file
that has the largest ratio for LONG_WAITS divided by IO_COUNT.
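That ratio can be computed with a query such as the following sketch:

```
SELECT filename, long_waits, io_count,
       ROUND(long_waits / io_count, 3) AS wait_ratio
FROM   v$backup_async_io
WHERE  io_count > 0
ORDER  BY wait_ratio DESC;
```

The file at the top of the result is the most likely read-side bottleneck.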
With synchronous I/O, it is easy to determine how much time the backup jobs require
because devices perform only one I/O task at a time. Oracle I/O uses a polling
mechanism rather than an interrupt mechanism to determine when each I/O request
completes. Because the backup or restore process is not immediately notified of I/O
completion by the operating system, you cannot determine the duration of each individual I/O.
Use V$BACKUP_SYNC_IO to determine the source of backup or restore bottlenecks and to
determine the progress of backup jobs. V$BACKUP_SYNC_IO contains rows when the I/O is
synchronous to the process (or thread, on some platforms) that is performing the backup.
To tune RMAN backup and restore performance, follow these steps:
1. Remove RATE settings from configured and allocated channels. The RATE parameter is
used to set the maximum number of bytes (default), kilobytes (K), megabytes (M), or
gigabytes (G) that RMAN reads each second on the channel. It sets an upper limit for
bytes read so that RMAN does not consume too much disk bandwidth and degrade
performance. If your backup is not streaming to tape, ensure that the RATE parameter is
not set on the ALLOCATE CHANNEL or CONFIGURE CHANNEL command.
2. Set DBWR_IO_SLAVES if you use synchronous disk I/O. If your disk does not support
asynchronous I/O, try setting the DBWR_IO_SLAVES initialization parameter to a
nonzero value. Any nonzero value for DBWR_IO_SLAVES causes a fixed number (four)
of disk I/O slaves to be used for backup and restore, simulating asynchronous I/O. If I/O
slaves are used, I/O buffers are obtained from the SGA. The large pool is used if
configured. Otherwise, the shared pool is used.
Note: By setting DBWR_IO_SLAVES, the database writer processes will use slaves as
well. You may need to increase the value of the PROCESSES initialization parameter.
3. If there is failure in shared memory allocation, set the LARGE_POOL_SIZE initialization
parameter as described on the next page.
4. Tune RMAN read, write, and copy phases.
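The first three steps can be sketched as follows. The parameter values shown are illustrative, not recommendations for a particular system, and the initialization-parameter changes take effect only after the instance is restarted where noted:

```
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK CLEAR;          # removes any RATE setting

SQL> ALTER SYSTEM SET dbwr_io_slaves = 4 SCOPE=SPFILE;   -- static parameter: restart required
SQL> ALTER SYSTEM SET large_pool_size = 16M SCOPE=BOTH;
```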
Diagram: parallel backup. Each channel reads its assigned data files and writes through the media management library (MML) to its own backup piece; for example, data files 6, 7, and 8 are read by one channel and written to backup piece 3.
You can configure parallel backups by setting the PARALLELISM option of the CONFIGURE
command to greater than 1 or by manually allocating multiple channels. RMAN parallelizes its
operation and writes multiple backup sets in parallel. The server sessions divide the work of
backing up the specified files.
Example
RMAN> RUN {
2> ALLOCATE CHANNEL c1 DEVICE TYPE sbt;
3> ALLOCATE CHANNEL c2 DEVICE TYPE sbt;
4> ALLOCATE CHANNEL c3 DEVICE TYPE sbt;
5> BACKUP
6> INCREMENTAL LEVEL = 0
7> (DATAFILE 1,4,5 CHANNEL c1)
8> (DATAFILE 2,3,9 CHANNEL c2)
9> (DATAFILE 6,7,8 CHANNEL c3);
10> SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
11> }
Setting LARGE_POOL_SIZE
The requests for contiguous memory allocations from the shared pool are small, usually under
5 KB in size. It is possible that a request for a large contiguous memory allocation can fail or
require significant memory housekeeping to release the required contiguous memory. The
large pool may be able to satisfy the memory request. The large pool does not have a least
recently used list, so Oracle does not attempt to age memory out of the large pool.
Use the LARGE_POOL_SIZE initialization parameter to configure the large pool. Query
V$SGASTAT.POOL to see in which pool (shared pool or large pool) the memory for an object
resides. The suggested value for LARGE_POOL_SIZE is calculated as:
#_of_allocated_channels * (16 MB + (4*size_of_tape_buffer ))
For backups to disk, the tape buffer is obviously 0, so set LARGE_POOL_SIZE to 16 MB. For
tape backups, the size of a single tape buffer is defined by the RMAN channel parameter
BLKSIZE, which defaults to 256 KB. Assume a case in which you are backing up to two tape
drives. If the tape buffer size is 256 KB, set LARGE_POOL_SIZE to 34 MB. If you increase
BLKSIZE to 512 KB, then increase LARGE_POOL_SIZE to 36 MB.
Note: The large pool is used only for disk buffers when DBWR_IO_SLAVES > 0 and for tape
buffers when BACKUP_TAPE_IO_SLAVES = TRUE. If you are using Automatic Shared
Memory Management, the large pool is sized automatically in response to system workload.
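To check where buffer memory is being allocated, you can query V$SGASTAT as mentioned above. A minimal sketch:

```sql
-- Show large pool allocations; RMAN I/O buffers appear here
-- when the large pool is configured and I/O slaves are in use.
SELECT pool, name, bytes
FROM   v$sgastat
WHERE  pool = 'large pool'
ORDER BY bytes DESC;
```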
RMAN Multiplexing
RMAN uses two different types of buffers for I/O: disk and tape. RMAN multiplexing
determines how RMAN allocates disk buffers. RMAN multiplexing is the number of files in a
backup read simultaneously and then written to the same backup piece.
The degree of multiplexing depends on the FILESPERSET parameter of the BACKUP
command as well as the MAXOPENFILES parameter of the CONFIGURE CHANNEL or
ALLOCATE CHANNEL command.
Note: RMAN multiplexing is set at the channel level. For ASM or RAID1, set MAXOPENFILES
to 1 or 2.
RMAN Multiplexing
• For reads:
Multiplexing Level Allocation Rule
Level <= 4: 1 MB buffers are allocated so that the total buffer size for all input files is 16 MB.
Level 5-8: 512 KB buffers are allocated so that the total buffer size for all files is less than 16 MB.
Level > 8: Four disk buffers of 128 KB (512 KB in total) are allocated for each file.
For example, assume that you back up two data files with one channel. You set
FILESPERSET to 3 and MAXOPENFILES to 8. In this case, the number of files in each backup
set is 2 (the lesser of FILESPERSET and the files read by each channel) and the level of
multiplexing is 2 (the lesser of MAXOPENFILES and the number of files in each backup set).
When RMAN backs up from disk, it uses the algorithm that is described in the table shown in
the slide.
For writing, each channel allocates four output buffers of size 1 MB each.
These buffers are allocated from the PGA unless DBWR_IO_SLAVES is set to a nonzero
value.
Note: For best recovery performance, do not set FILESPERSET to a value greater than 8.
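The two parameters discussed above can be set as follows. The values mirror the worked example and are illustrative only:

```
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK MAXOPENFILES 8;
RMAN> BACKUP DATABASE FILESPERSET 3;
```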
The slide contains a list of best practices that will help to improve restore and recovery
performance.
Quiz
Answer: a, b
Summary
No Practice
Diagram: the course environment consists of the orcl database (containing the sample schema), the rcat database, the emrep database hosting Enterprise Manager Cloud Control, and Oracle Secure Backup with a virtual tape library, connected through Oracle Net.
There is no practice for this lesson, but your instructor will explain what needs to be done in
preparation for the workshop.
Workshop Overview
Objectives
To assist you with these tasks, use this course material, videos, your own notes, as well as
the documentation, especially:
• Oracle Database Backup and Recovery User’s Guide
• Oracle Database Backup and Recovery Reference
Business Requirements
for the Workshop Database
Business Requirement Configuration Options
Consider the requirements listed in the slide when you configure your database for the
workshop portion of the course.
The steps in the slide provide you with a general methodology to use when diagnosing the
failures in your database. After you have determined what may have gone wrong in your
database, you may refer to the failure scenario solution outlines provided in the Activity
Guides for more detailed information.
Summary
Overview
Oracle Database Cloud Service:
Objectives
ssh access
Oracle Database Cloud Service provides you the ability to deploy Oracle databases in the Cloud,
with each database deployment containing a single Oracle database. You have full access to the
features and operations available with Oracle Database, but with Oracle providing the computing
power, physical storage and (optionally) tooling to simplify routine database maintenance and
management operations.
When you create database deployments, Database Cloud Service creates compute nodes to host the
database, using computing and storage resources provided by Oracle Compute Cloud Service.
Additionally, it provides access to the compute nodes (and thus to the database) using networking
resources provided by Oracle Compute Cloud Service.
The Oracle Database Cloud Service includes Oracle Database and supporting software. An Oracle
database is created using values you provide when creating the database deployment, and the
database is started. Additionally, you can direct Database Cloud Service to set up automatic
backups. Finally, the deployment includes cloud tooling that simplifies backup, recovery, patching
and upgrade operations.
Simple Automated Recovery: restore from backups and recover the database with the orec subcommand of the dbaascli utility.
Database as a Service offers the following tools on the compute nodes associated with the Oracle
Database Cloud Service level:
• Simple Automated Backups: Use the bkup_api utility on single-instance databases or the
raccli utility on deployments that use Oracle Real Application Clusters (RAC) to perform on-
demand backups and to change how automatic backups are configured.
• Simple Automated Recovery: Use the orec subcommand of the dbaascli utility on single-
instance databases or the raccli utility on deployments that use Oracle RAC to restore from
backups and recover the database.
• Simple Automated Patching: Use the dbpatchm subcommand of the dbaascli utility for
single-instance databases, raccli on deployments that use Oracle RAC, and the
dbpatchmdg utility on Oracle Data Guard configurations to apply patches.
Enterprise Edition - High Performance: includes the options Advanced Analytics, Advanced Compression, Advanced Security, Database Vault, Label Security, Multitenant, OLAP, Partitioning, and Real Application Testing, plus the packs Cloud Management for Oracle Database, Database Lifecycle Management, Data Masking and Subsetting, Diagnostics, and Tuning.
Enterprise Edition - Extreme Performance: includes the options Active Data Guard, Advanced Analytics, Advanced Compression, Advanced Security, Database In-Memory, Database Vault, Label Security, Multitenant, OLAP, Partitioning, Real Application Clusters, Real Application Testing, and Spatial and Graph, plus the packs Cloud Management for Oracle Database, Database Lifecycle Management, Data Masking and Subsetting, Diagnostics, and Tuning.
When you create a database deployment, you can choose from four software editions. The High
Performance and Extreme Performance editions include the Oracle Database options and Oracle
Management packs listed in the slide.
1. Display the Sign In to Oracle Cloud page by clicking the My Services URL
link in your Welcome email or by following these instructions:
a. Open your web browser and go to the Oracle Cloud website.
b. Click Sign In.
c. In the My Services box, select the data center where your services are located.
d. Click Sign In to My Services.
You can access the Oracle Database Cloud Service console as described in the slide. From the
console you can perform certain tasks to administer and manage your database deployment.
Note: The navigation steps may vary, depending on the version of Database Cloud Service that you
are using.
Diagram: after you request a database service, the following steps happen automatically: Allocate Compute, Allocate Storage, Provision OS & Privileges, Set Keys, Install & Configure Database, Configure Backups, Configure Tools, and Configure Access; the database is then ready for use.
The Create Oracle Database Cloud Service Instance wizard is used to create a new database
deployment. After requesting the creation of the database deployment, the steps shown in the gray
box happen automatically. When the creation and configuration is complete, the database
deployment is ready for use.
Expand the action menu on the Database Cloud Service console to access tools: a web application to monitor the Oracle database, application development using a web browser, and Enterprise Manager.
The Oracle Database Cloud Service console enables you to manage your database deployment
through the menu choices shown on the slide.
• Database deployments on Database as a Service include the Oracle DBaaS Monitor Console, a
built-in monitor that provides a wide spectrum of information about Oracle Database status and
resource usage.
• Oracle Application Express enables you to design, develop and deploy responsive, database-
driven applications using only your web browser.
• To monitor and manage the Oracle database deployed on Database as a Service, you can use
the standard management tool provided with the version of the database:
- For Oracle Database 12c, use Enterprise Manager Database Express 12c.
- For Oracle Database 11g, use Enterprise Manager 11g Database Control.
• Should the need arise, you can add SSH public keys for the opc and oracle users to the
compute nodes associated with a database deployment on Oracle Database Enterprise Cloud
Service.
• When you no longer require a database deployment on Database as a Service, you can delete
it. Once deleted, the entry is removed from the list of database deployments displayed on the
Database as a Service Console.
Note: Your choices from this menu may vary from what is shown based on the version of Database
Cloud Service you are using.
From the Oracle Database Cloud Service console, you can stop, start and restart the compute nodes
associated with a database deployment on Database as a Service.
Note: Your choices from this menu may vary from what is shown based on the version of Database
Cloud Service you are using.
By default, network access to the compute nodes associated with Database as a Service is provided
by Secure Shell (SSH) connections on port 22.
To access network protocols and services on a compute node by using a port other than port 22, you
must either:
• Enable network access to the port: You can use the Oracle Compute Cloud Service console
or the Oracle Database Cloud Service console to enable access to a port on a compute node.
• Create an SSH tunnel to the port: Creating an SSH tunnel enables you to access a specific
compute node port by using an SSH connection as the transport mechanism. To create the
tunnel, you must have the SSH private key file that matches the public key specified during the
database deployment creation process.
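A typical tunnel invocation looks like the following sketch; the key path, forwarded port, and node address are placeholders, and port 1521 is used only as an example for the database listener:

```
$ ssh -i /path/to/private_key -L 1521:localhost:1521 opc@<compute_node_IP_address>
```

Local connections to port 1521 are then carried over the SSH connection to port 1521 on the compute node.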
Database as a Service relies on Oracle Compute Cloud Service to provide secure network access to
database deployments. You can use the Oracle Compute Cloud Service console or the Oracle
Database Cloud Service console to perform network access operations such as enabling access to a
port on a compute node.
When a database deployment is created, the Oracle Compute Cloud Service security rules listed on
the slide are created, but not enabled.
To enable access to a compute node port, you enable the appropriate security rule. When you enable
one of the predefined security rules, the given port on the compute node is opened to the public
internet.
If you wish to enable access to a compute node port that is not associated with an existing security
rule, you must create a new security rule that defines the protocol associated with the port number.
If you wish to restrict access to a compute node port to only permit connections from specific IP
addresses, you must create a Security IP List and associate it with the security rule.
Scale the compute shape or storage from the Action menu in the Oracle
Database Cloud Service console:
Scale Up: Select a new compute shape.
By backing up your Database as a Service database deployments, you can protect the software,
configuration, and database against loss if a failure occurs. You can restore the deployment's
software, configuration, and database to their state at the time of the backup.
Database as a Service provides a backup feature that backs up:
• The database
• Database configuration files
• Grid Infrastructure configuration files (on deployments hosting an Oracle RAC database)
• System and cloud tooling files
To provide this backup feature, Database as a Service relies on system utilities, Oracle Database
utilities, and Oracle Database Backup Cloud Service, all of which are installed in the database
deployment.
When you create a database deployment, you choose one of the following backup destinations:
• Both Cloud Storage and Local Storage. Backups are configured to be created automatically and
stored both on local compute node storage and on an Oracle Storage Cloud Service container.
The container must have been created before creating the Database as a Service database
deployment.
• Cloud Storage Only. Backups are configured to be created automatically and stored only on an
Oracle Storage Cloud Service container.
• None. No backups are created.
Backup Configuration
The backup configuration created when you choose a destination other than None follows a set of
Oracle best-practice guidelines as listed in the slide.
You can create an on-demand backup of a Database as a Service database deployment from the
Oracle Database Cloud Service Instance Administration page.
You can also create on-demand backups by using command-line utilities. Use the bkup_api utility to
create an on-demand backup on database deployments hosting single-instance databases. Use the
raccli utility to create an on-demand backup on database deployments hosting Oracle RAC
databases.
Frequency: the time that bkup_api is run is defined in the /etc/crontab file.
You can use the Oracle Database Cloud Service console to:
• Restore from the most recent backup and perform complete recovery
• Restore from a backup and recover to a specific date and time
• Restore from a backup and recover to a specific system change number (SCN)
Note: Your choices from this menu may vary from what is shown based on the version of Database
Cloud Service you are using.
Use the orec subcommand of the dbaascli utility to restore and recover the
database.
• Restoring from the most recent backup and performing complete recovery:
# dbaascli orec --args -latest
• Restoring from a specific backup and performing point-in-time recovery:
To restore from a backup and perform recovery on a Database as a Service deployment hosting a
single-instance database, you use the orec subcommand of the dbaascli utility, which is available
on the compute node.
The orec subcommand must be run with root access. Therefore, you need to connect to the
compute node as the opc user to perform recovery operations.
You can restore from the most recent backup, a specific backup, or a specific long-term backup as
shown in the slide.
Use the following command to obtain a list of backups, including the backup-tag for each:
# dbaascli orec --args -list
Use this command to obtain a list of long-term backups, including the backup-tag for each:
# dbaascli orec --args -keep -list
Summary
Your Learning
Overview
This appendix provides a few “cheat sheets” that might be helpful during the course.
It also adds details about further training options.
Your instructor might suggest additional resources for your learning.
See also Appendix B for more information about Using Enterprise Manager Cloud Control.
Vi Commands
Command Description
http://<hostname>:<port>/em
Diagram: EM Express runs inside the database using shared servers; the EM Express servlet authenticates and validates the request, and the Oracle Web Server serves the request by executing queries.
Oracle SQL Developer is a tool that allows stand-alone graphical browsing and development
of database schema objects, as well as execution of database administrative tasks.
SQL Developer enables users with database administrator privileges to view and edit certain
information relevant to DBAs and perform DBA operations. To perform DBA operations, use
the DBA navigator, which is similar to the Connections navigator in that it has nodes for all
defined database connections. If the DBA navigator is not visible, select View, then DBA. You
should add only connections for which the associated database user has DBA privileges or at
least privileges for the desired DBA navigator operations on the specified database.
• Documentation
• Oracle Technology Network (OTN)
– Oracle Database > High Availability
– Oracle communities on Social Applications
Documentation:
• Oracle Database Backup and Recovery User’s Guide
• Oracle Database Backup and Recovery Reference
Oracle Technology Network (OTN): http://www.oracle.com/technetwork/index.html
Oracle University training courses:
• Oracle Database 12c: Managing Multitenant Architecture
• Oracle Database 12c: RAC Administration
• Oracle Database 12c: Data Guard Administration
• Oracle Database 12c: Security
• Oracle Enterprise Manager Cloud Control 12c: Advanced Configuration Workshop
• Oracle Database 12c: Performance Tuning
Oracle University self studies: Oracle Database 12c New Features series
Oracle Learning Library (OLL): https://www.oracle.com/goto/oll
Search with “backup” or any other topic that interests you, such as Maximum Availability
Architecture (MAA).
Further Information
For more information about topics that are not covered in this
course, refer to the following:
• Other Oracle University Oracle Database 12c ILT courses
• Oracle Database 12c: New Features Self Studies
– A comprehensive series of self-paced online courses covering
For more information about topics that are not covered in this course, refer to the following:
• Oracle University Oracle Database 12c instructor-led courses
• Oracle Database 12c: New Features self-paced online courses
• Oracle By Example series: Oracle Database 12c
• Oracle OpenWorld events
Objectives
Note: For a complete understanding of Oracle Enterprise Manager Cloud Control and
Database Express installation and usage, refer to the following guides in the Oracle
documentation:
• Oracle Enterprise Manager Cloud Control Basic Installation Guide 12c Release 1
• Oracle Enterprise Manager Cloud Control Advanced Installation and Configuration
Guide 12c Release 1
• Oracle Enterprise Manager Cloud Control Administrator’s Guide 12c Release 1
• Oracle Enterprise Manager Licensing Information 12c Release 1
As the data center grows with the growth in business, so do the challenges. An administrator
is faced with challenges that include:
• Monitoring high levels of performance and availability of applications
• Identifying and resolving problems quickly and effectively
• Enabling IT professionals to use resources effectively, thereby reducing costs
• Aligning IT with business priorities to ensure that businesses are agile enough to meet
the changing needs
Diagram: Cloud Control architecture. Cloud Control console users connect to the Oracle Management Service (running on WebLogic Server) via HTTP (7788) or HTTPS (7801). Oracle Management Agents upload to the OMS via HTTP (4889) or HTTPS (4900), and agents are pushed to managed hosts via SSH. The OMS connects to the Oracle Management Repository (OMR) via JDBC (1521). The framework supports database management as well as Exadata and Exalogic management.
The communication flow between the Cloud Control components is illustrated using
directional arrows. Communication between the Agent and the OMS, and the OMS and the
console is bi-directional. All the ports shown and listed in the slide are default values that can
be changed during installation, either by the installer as it searches for available ports, or
explicitly by you. You can also change ports after installation.
• The Agent uploads data to the OMS via HTTP on port 4889 or via HTTPS on port 4900.
(Designed to be able to work with WAN.)
• The OMS communicates with the Agent via HTTP or HTTPS on port 3872.
• The reason for the separate ports for OMS-to-OMA communication is that the two can
communicate asynchronously and simultaneously with one another.
• The OMS communicates with the OMR via JDBC on port 1521. Although the OMR will
return data to the OMS, this is not considered to be a separate communication between
the two; hence, the flow is shown to be unidirectional from OMS to OMR.
• Cloud Control console users access the Cloud Control webpages via HTTPS on port
7801 or via HTTP on port 7788.
Knowing the ports used in your Cloud Control installation is important, especially if you are
managing hosts behind firewalls or where other network restrictions apply, because
communication will need to be allowed on these ports and in the directions shown.
Diagram: control utilities per component. Repository database: sqlplus or srvctl; listener: lsnrctl; OMS (with WebLogic and EM): emctl; Agent: emctl.
Each component of the Enterprise Manager Cloud Control framework has its own utility or
utilities that can be used to monitor, start, and stop the component. In many cases, these
utilities also provide some capability to configure the component beyond the simple start-and-
stop functionality.
RAC databases require the use of the Server Control commands; for single instances, there is
a choice between SQL*Plus and Server Control. Server Control is usable when Oracle
Restart is installed and the database is registered with the OLR.
To start and stop the listener, either use the Server Control utility or the lsnrctl command.
Examples:
srvctl stop database -d orcl -o immediate
srvctl start database -d orcl -o open
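The corresponding listener commands, shown here as a sketch for the default listener, are:

```
srvctl start listener     # when Oracle Restart manages the listener
lsnrctl start             # stand-alone listener control
```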
Diagram: startup order. The repository (database and listener) is started first, then the OMS (with OHS, WebLogic, and EM), and finally the agents.
To start the whole Enterprise Manager Cloud Control framework, perform the following steps:
1. Start the repository listener:
$ORACLE_HOME/bin/lsnrctl start
2. Start the repository database instance:
$ORACLE_HOME/bin/sqlplus / as sysdba
SQL> startup
3. Start the OMS (including OHS and WebLogic Managed Server):
$OMS_HOME/bin/emctl start oms
4. Start the agent (on the OMS/repository host):
$AGENT_HOME/bin/emctl start agent
5. Start the agent on the managed servers:
$AGENT_HOME/bin/emctl start agent
Note: Use the SRVCTL command if you have a RAC instance for the repository.
Diagram: shutdown order. The agents are stopped first, then the OMS (with OHS, WebLogic, and EM), and finally the repository.
To stop the whole Enterprise Manager Cloud Control framework, perform the following steps:
1. Stop the agent on the managed servers:
$AGENT_HOME/bin/emctl stop agent
2. Stop the agent (on the OMS/repository host):
$AGENT_HOME/bin/emctl stop agent
3. Stop the OMS (including OHS and WebLogic Managed Server):
$OMS_HOME/bin/emctl stop oms
4. Stop the repository database instance:
$ORACLE_HOME/bin/sqlplus / as sysdba
SQL> shutdown immediate
Note: Use the SRVCTL command if you have a RAC instance for the repository.
Targets are the entities that Enterprise Manager Cloud Control manages. To do so, it uses
target-type specific plug-ins and host-specific agents.
Enterprise Manager Cloud Control can monitor, administer, maintain, and manage different
types of targets as listed in the slide. As your environment changes, you can add and remove
targets from Enterprise Manager Cloud Control as needed. The commonly used Oracle
targets (including Enterprise Manager Cloud Control components, such as the OMR and
OMS) are predefined as part of the base Enterprise Manager Cloud Control product, but
Enterprise Manager Cloud Control has an open API that enables you to create custom
targets.
Target Discovery
Guided discovery searches agents for targets such as application servers, listeners, and databases.
After the Agent has been installed on a host, it needs to look for targets that it can manage.
As an Enterprise Manager Cloud Control administrator, you can guide that process from the
Enterprise Manager Cloud Control console pages. Guided discovery allows you to nominate a
family of target types that you want to search for, such as databases and listeners, and then
the agents where you want that search to be executed. If any new targets are discovered, the
appropriate plug-in will be pushed from the OMS if it is not already installed on the agent, the
target will be recorded in the OMR, and monitoring will begin.
You can also configure auto discovery to run at regular intervals and get an agent to search
for known targets unattended, allowing you to review the results at a later stage and promote
discovered targets to become managed targets.
The image in the slide is of the Enterprise Summary page of Oracle Enterprise Manager
Cloud Control. The user interface (UI) functionality includes:
• Information displayed in graphs and tables
• Summary information with drilldown capability to relevant details
• User-selected home page from a predefined set, or based on any page in the console
• Menu-driven navigation
• Global target search
• History and favorites
• Customizable target home pages (per-user basis)
User Interface
Security: Overview
Distinguishing Credentials
• Named credentials
• Preferred credentials
• Default credentials
• Access level:
Types of Credentials
As the Enterprise Manager administrator, you can also store credentials
(username/password, a public key-private key pair, or an X509v3 certificate) as named
credentials in Enterprise Manager to use when performing operations like running jobs,
patching, and other system management tasks. Objects refer or point to named credentials.
They are “placeholders” to facilitate, for example, the changing of passwords.
You can store, access, and modify a fixed number of username/password–based credentials
as preferred credentials to simplify access to managed targets by storing target login
credentials in the Management Repository.
Default credentials can be set for a particular target type and will be available for all the
targets of the target type.
The three levels of access that can be granted are:
• View access: To use the credentials
• Edit access: To change the credentials, including changing its name and password
• Full access: For complete access, including the ability to delete the named credential
Credentials can also be classified by their usage, such as job credentials (used by the job
system), collection credentials, and monitoring credentials (used by OMA).
Quiz
Answer: f
Practice Overview:
Using Enterprise Manager Cloud Control
These videos cover the following topics:
• Accessing Enterprise Manager Cloud Control
• Setting the Summary page as the home page
• Adding a database instance as a new target monitored by
View the “Oracle Enterprise Manager 12c: Console Overview and Customization”
demonstration (unless your instructor just demonstrated those topics) (8 minutes).
Optional demonstrations about related topics are available:
• Oracle Enterprise Manager 12c: Create an Enterprise Manager Administrator (6
minutes)
• Oracle Enterprise Manager 12c: Create and Use Named Credentials (6 minutes)
• Oracle Enterprise Manager 12c: Create SSH Key Named Credentials (3 minutes)
• Oracle Enterprise Manager 12c: Discover and Promote Unmanaged Hosts and Targets
Optional Oracle By Example (OBEs) about related topics are available:
• Oracle Enterprise Manager 12c Enterprise Ready Framework: Create and Use
Credentials (60 minutes)
• Oracle Enterprise Manager 12c Enterprise Ready Framework: Create a Super
Administrator Account (5 minutes)
Cloud Computing
Cloud Based?
The US National Institute of Standards and Technology (NIST) defines cloud computing as “a
model for enabling ubiquitous, convenient, on-demand network access to a shared pool of
configurable computing resources (for example, networks, servers, storage, applications, and
services) that can be rapidly provisioned and released with minimal management effort or
service provider interaction.”
Every Cloud Has a Silver Lining
Cloud computing takes on a different aspect depending upon your perspective. However, all
aspects present views of benefits that can be derived from cloud computing.
From the perspective of a consumer of cloud-based resources, the cloud is simply a capability
or service that is used without having knowledge of how or where it is implemented. In fact,
knowledge of how the consumable product is provided is obscured by the very nature of it
being accessed via “the cloud.” Because implementation details are of no concern to the
consumer, their primary interest is availability and usability.
SaaS
Application
PaaS
Platform
IaaS
Infrastructure
• Private
– For exclusive use by a single organization
• Community
– A common environment for use by a group of related
organizations
• Standardization
• Consolidation
• Centralization
• Optimization
Oracle Clouds
Oracle offers a number of cloud solutions by using combinations of different technologies.
You can learn more by visiting the following sites:
• http://cloud.oracle.com
• http://www.oracle.com/goto/cloud
• http://www.oracle.com/technetwork/topics/cloud/index.html
• IaaS
– Built on Oracle VM for x86 virtualization platform
– User requests create virtual machines.
• PaaS
Diagram: cloud lifecycle. Plan, Build, Test, Deploy, Monitor, Manage, and Meter & Charge.
Quiz
Answer: a, c, d, f, h
Although power conditioning, hot swappable storage devices, and reliability are desirable
characteristics of any data center, according to the National Institute of Standards and
Technology, five essential characteristics of cloud computing are focused on services rather
than physical devices.
Quiz
Answer: a, f
Enterprise Manager Cloud Control 12c does not provide the tools required to implement
SaaS, where the infrastructure, platform, and application are made available to self-service
users. One could argue that IaaS could be used to allow end users to request a complete
stack of infrastructure, platform, and application. However, strictly speaking, SaaS is an
application service rather than the ability to request provisioning of an application service.