RMAN Incremental Backups - Compress
RMAN incremental backups back up only those datafile blocks that have changed since a specified
previous backup. We can make incremental backups of databases, individual tablespaces, or
datafiles.
The primary reasons for making incremental backups part of your strategy are:
For use in a strategy based on incrementally updated backups, where these incremental
backups are used to periodically roll forward an image copy of the database
To reduce the amount of time needed for daily backups
To save network bandwidth when backing up over a network
To get adequate backup performance when the aggregate tape bandwidth available for
tape write I/Os is much less than the aggregate disk bandwidth for disk read I/Os
To be able to recover changes to objects created with the NOLOGGING option. For
example, direct load inserts do not create redo log entries and their changes cannot be reproduced
with media recovery. They do, however, change data blocks and so are captured by incremental
backups.
To reduce backup sizes for NOARCHIVELOG databases. Instead of making a whole
database backup every time, you can make incremental backups.
As with full backups, if you are in ARCHIVELOG mode, you can make incremental backups if the
database is open; if the database is in NOARCHIVELOG mode, then you can only make incremental
backups after a consistent shutdown.
One effective strategy is to make incremental backups to disk, and then back up the resulting
backup sets to a media manager with BACKUP AS BACKUPSET. The incremental backups are
generally smaller than full backups, which limits the space required to store them until they are
moved to tape. Then, when the incremental backups on disk are backed up to tape, it is more
likely that tape streaming can be sustained because all blocks of the incremental backup are
copied to tape. There is no possibility of delay due to time required for RMAN to locate changed
blocks in the datafiles.
Note that these backup classifications apply only to datafile backups. Backups of other files, such
as archivelogs and control files, always include the complete file and are never inconsistent.
Backup Type: Definition

Full: A backup of a datafile that includes every allocated block in the file being backed up. A full backup of a datafile can be an image copy, in which case every data block is backed up. It can also be stored in a backup set, in which case datafile blocks not in use may be skipped. A full backup cannot be part of an incremental backup strategy; that is, it cannot be the parent for a subsequent incremental backup.

Incremental: An incremental backup is either a level 0 backup, which includes every block in the file except blocks compressed out because they have never been used, or a level 1 backup, which includes only those blocks that have been changed since the parent backup was taken. A level 0 incremental backup is physically identical to a full backup. The only difference is that the level 0 backup is recorded as an incremental backup in the RMAN repository, so it can be used as the parent for a level 1 backup.

Open: A backup of online, read/write datafiles when the database is open.

Closed: A backup of any part of the target database when it is mounted but not open. Closed backups can be consistent or inconsistent.

Consistent: A backup taken when the database is mounted (but not open) after a normal shutdown. The checkpoint SCNs in the datafile headers match the header information in the control file. None of the datafiles has changes beyond its checkpoint. Consistent backups can be restored without recovery. Note: If you restore a consistent backup and open the database in read/write mode without recovery, transactions after the backup are lost. You still need to perform an OPEN RESETLOGS.

Inconsistent: A backup of any part of the target database when it is open, or when a crash occurred or SHUTDOWN ABORT was run prior to mounting. An inconsistent backup requires recovery to become consistent.
The goal of an incremental backup is to back up only those data blocks that have changed since a
previous backup. You can use RMAN to create incremental backups of datafiles, tablespaces, or the
whole database.
During media recovery, RMAN examines the restored files to determine whether it can recover
them with an incremental backup. If it has a choice, then RMAN always chooses incremental
backups over archived logs, as applying changes at a block level is faster than reapplying
individual changes.
RMAN does not need to restore a base incremental backup of a datafile in order to apply
incremental backups to the datafile during recovery. For example, you can restore non-
incremental image copies of the datafiles in the database, and RMAN can recover them with
incremental backups.
Incremental backups allow faster daily backups, use less network bandwidth when backing up over
a network, and provide better performance when tape I/O bandwidth limits backup performance.
They also allow recovery of database changes not reflected in the redo logs, such as direct load
inserts. Finally, incremental backups can be used to back up NOARCHIVELOG databases, and are
smaller than complete copies of the database (though they still require a clean database
shutdown).
One effective strategy is to make incremental backups to disk (as image copies), and then back up
these image copies to a media manager with BACKUP AS BACKUPSET. Then, you do not have the
problem of keeping the tape streaming that sometimes occurs when making incremental backups
directly to tape. Because incremental backups are not as big as full backups, you can create them
on disk more easily.
Note that if you enable the block change tracking feature, RMAN can refer to the change tracking
file to identify changed blocks in datafiles without scanning the full contents of the datafile. Once
enabled, block change tracking does not alter how you take or use incremental backups, other
than offering increased performance.
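For example, block change tracking can be enabled and verified as follows (the file path is illustrative):
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/oradata/trgt/chg_trk.f';
SQL> SELECT STATUS, FILENAME FROM V$BLOCK_CHANGE_TRACKING;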
Note:
Cumulative backups are preferable to differential backups when recovery time is more important
than disk space, because during recovery each differential backup must be applied in succession.
Use cumulative incremental backups instead of differential, if enough disk space is available to
store cumulative incremental backups.
The size of the backup file depends solely upon the number of blocks modified and the incremental
backup level.
The following command performs a level 1 differential incremental backup of the database:
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
If no level 0 backup is available, then the behavior depends upon the compatibility mode setting. If
compatibility is >=10.0.0, RMAN copies all blocks changed since the file was created, and stores
the results as a level 1 backup. In other words, the SCN at the time the incremental backup is
taken is the file creation SCN. If compatibility <10.0.0, RMAN generates a level 0 backup of the file
contents at the time of the backup, to be consistent with the behavior in previous releases.
In a typical weekly schedule using differential incremental backups, the following occurs:
Sunday
An incremental level 0 backup backs up all blocks that have ever been in use in this database.
Monday - Saturday
On each day from Monday through Saturday, a differential incremental level 1 backup backs up all
blocks that have changed since the most recent incremental backup at level 1 or 0. So, the
Monday backup copies blocks changed since the Sunday level 0 backup, the Tuesday backup copies
blocks changed since the Monday level 1 backup, and so forth.
The cycle is repeated for the next week.
The following command performs a cumulative level 1 incremental backup of the database:
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE; # blocks changed since level 0
In a typical weekly schedule using cumulative incremental backups, the following occurs:
Sunday
An incremental level 0 backup backs up all blocks that have ever been in use in this database.
Monday - Saturday
A cumulative incremental level 1 backup copies all blocks changed since the most recent level 0
backup. Because the most recent level 0 backup was created on Sunday, the level 1 backup on
each day Monday through Saturday backs up all blocks changed since the Sunday backup.
The cycle is repeated for the next week.
When deciding how often to take full or level 0 backups, a good rule of thumb is to take a new
level 0 whenever 50% or more of the data has changed. If the rate of change to your database is
predictable, then you can observe the size of your incremental backups to determine when a new
level 0 is appropriate. The following query displays the number of blocks written to a backup set
for each datafile with at least 50% of its blocks backed up:
SELECT FILE#, INCREMENTAL_LEVEL, COMPLETION_TIME, BLOCKS, DATAFILE_BLOCKS FROM
V$BACKUP_DATAFILE WHERE INCREMENTAL_LEVEL > 0 AND BLOCKS / DATAFILE_BLOCKS > .5
ORDER BY COMPLETION_TIME;
Compare the number of blocks in differential or cumulative backups to a base level 0 backup. For
example, if you only create level 1 cumulative backups, then when the most recent level 1 backup
is about half of the size of the base level 0 backup, take a new level 0.
This example makes a differential level 1 backup of the SYSTEM tablespace and
datafile tools01.dbf. It will only back up those data blocks changed since the most recent level 1 or
level 0 backup:
BACKUP INCREMENTAL LEVEL 1 TABLESPACE SYSTEM DATAFILE
'ora_home/oradata/trgt/tools01.dbf';
This example makes a cumulative level 1 backup of the tablespace users, backing up all blocks
changed since the most recent level 0 backup.
BACKUP INCREMENTAL LEVEL = 1 CUMULATIVE TABLESPACE users;
During restore and recovery of the database, RMAN can restore from this incrementally updated
copy and then apply changes from the redo log, with the same results as restoring the database
from a full backup taken at the SCN of the most recently applied incremental level 1 backup.
A backup strategy based on incrementally updated backups can help minimize time required for
media recovery of your database. For example, if you run scripts to implement this strategy daily,
then at recovery time, you never have more than one day of redo to apply.
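A minimal sketch of such a daily script (the tag name incr_update is illustrative); the RECOVER COPY command rolls the image copy forward using the previous run's level 1 backup, and the BACKUP command takes a new level 1 for the next run:
RMAN> RUN {
  RECOVER COPY OF DATABASE WITH TAG 'incr_update';
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_update' DATABASE;
}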
Oracle Database 11g Release 2 (11.2) - New Features
The chopt tool, a command-line utility, can be used to configure database options. Oracle Universal
Installer no longer provides the custom installation option of individual components.
Unusable indexes and index partitions no longer consume space in the database because
they become segmentless.
From this release, Oracle ASM supports 4 KB sector disk drives.
The FORCE option can now be used in conjunction with the CREATE OR REPLACE TYPE
command.
New SQL*Plus command SET EXITCOMMIT specifies whether the default EXIT behavior is
COMMIT or ROLLBACK.
SET EXITC[OMMIT] {ON|OFF}
Oracle Database 11g Release 2 provides the new PRECEDES keyword in trigger
definitions, which allows trigger-upon-trigger dependencies.
Audit file names are prefixed with the instance name and end with a sequence number.
For example:
SID_ora_pid_seqNumber.aud or SID_ora_pid_seqNumber.xml
An existing audit file is never appended to.
From Oracle 11g R2, we can change the audit tables' (SYS.AUD$ and SYS.FGA_LOG$)
tablespace, and we can periodically delete audit trail records using DBMS_AUDIT_MGMT.
The initial segment creation for non partitioned tables and indexes can be delayed until
data is first inserted into an object. Depending on the module usage, only a subset of the objects
is really being used. With delayed segment creation, empty database objects do not consume any
space, reducing the installation footprint and speeding up the installation.
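For example (table names are illustrative); the behavior is controlled by the DEFERRED_SEGMENT_CREATION initialization parameter, which defaults to TRUE in 11.2:
SQL> CREATE TABLE orders (id NUMBER, descr VARCHAR2(100)) SEGMENT CREATION DEFERRED;
SQL> CREATE TABLE orders_now (id NUMBER) SEGMENT CREATION IMMEDIATE;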
In Oracle Database 11g Release 2 (11.2), support for the LZO compression
algorithm on SecureFiles has been added. The new compression option is designated as
COMPRESS LOW.
Fast compression: LZO compression is about 3 times faster than ZLIB. Fast decompression: LZO
decompression is about 2 times faster than ZLIB.
IGNORE_ROW_ON_DUPKEY_INDEX hint for the INSERT statement. With INSERT INTO
TARGET ... SELECT ... FROM SOURCE, a unique key for some to-be-inserted rows may collide with
existing rows. The IGNORE_ROW_ON_DUPKEY_INDEX hint allows the collisions to be silently ignored
and the non-colliding rows to be inserted.
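A sketch, assuming a table target with a unique index target_pk (both names hypothetical):
INSERT /*+ IGNORE_ROW_ON_DUPKEY_INDEX(target, target_pk) */ INTO target
SELECT * FROM source;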
Oracle Database Smart Flash Cache is a new feature, for Oracle Linux and Oracle Solaris,
which increases the size of the database buffer cache without having to add RAM to the host.
The concurrent statistics gathering feature, introduced in Oracle 11g Release 2, enables
users to gather statistics on multiple tables in a schema, and on multiple (sub)partitions within
a table, concurrently.
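Concurrency is controlled through a DBMS_STATS preference; a sketch:
SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'TRUE');
SQL> SELECT DBMS_STATS.GET_PREFS('CONCURRENT') FROM DUAL;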
ASM
ASM Configuration Assistant (ASMCA) is a new tool to install and configure ASM.
ASM Cluster File System (ACFS) provides support for files such as Oracle binaries,
Clusterware binaries, report files, trace files, alert logs, external files, and other application
datafiles. ACFS can be managed by ACFSUTIL, ASMCMD, OEM, ASMCA, SQL command interface.
ASM Dynamic Volume Manager (ADVM) provides volume management services and a
standard device driver interface to its clients (ACFS, ext3, OCFS2 and third party files systems).
ACFS Snapshots are read-only on-line, space efficient, point in time copy of an ACFS file
system. ACFS snapshots can be used to recover from inadvertent modification or deletion of files
from a file system.
ASM can hold and manage OCR (Oracle Cluster Registry) file and voting file.
Data Guard
Automatic Block Repair - Automatic block repair allows corrupt blocks on the primary
database or physical standby database to be automatically repaired, as soon as they are detected,
by transferring good blocks from the other destination.
The number of standby databases that a primary database can support is increased from 9
to 30 in this release.
RMAN duplicate standby from active database:
RMAN> duplicate target database for standby from active database;
Compressed table support in logical standby databases and Oracle LogMiner.
Archived log deletion policy enhancements - we can CONFIGURE an archived redo log
deletion policy so that logs are eligible for deletion only after being applied on or transferred to
(all) standby database destinations.
Increase in redo apply performance.
Heterogeneous Data Guard Configuration.
Oracle Scheduler
E-mail Notification - Oracle Database 11g Release 2 (11.2) users can now get e-mail
notifications on any job activity.
File Watcher - File watcher enables jobs to be triggered when a file arrives on a given
machine.
RMAN
The following are new clauses and format options for the SET NEWNAME command:
A single SET NEWNAME command can be applied to all files in a database or tablespace.
SET NEWNAME FOR DATABASE TO format;
SET NEWNAME FOR TABLESPACE tsname TO format;
New format identifiers are as follows:
%U - Unique identifier. data_D-%d_I-%I_TS-%N_FNO-%f
%b - UNIX base name of the original datafile name. For example, if the original datafile name was
$ORACLE_HOME/data/tbs_01.f, then %b is tbs_01.f.
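For example, %b makes it easy to restore all datafiles into a new directory while keeping their original base names (the path is illustrative):
RMAN> RUN {
  SET NEWNAME FOR DATABASE TO '/u02/oradata/trgt/%b';
  RESTORE DATABASE;
  SWITCH DATAFILE ALL;
}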
Oracle Database 11g Release 1 - New Features
Oracle added about 482 new features in Oracle Database 11g Release 1.
New Datatypes
The new datatypes brought in Oracle 11g are:
Binary XML type - up to 15 times faster than XML LOBs.
DICOM (Digital Imaging and Communications in Medicine) medical images.
3D spatial support.
RFID tag datatypes.
New background processes
ACMS - Atomic Controlfile to Memory Server
DBRM - Database Resource Manager
DIA0 - Diagnosability process 0
DIAG - Diagnosability process
FBDA - Flashback Data Archiver
GTX0 - Global Transaction Process 0
KATE - Konductor (Conductor) of ASM Temporary Errands
MARK - Mark Allocation unit for Resync Koordinator (coordinator)
SMCO - Space Management Coordinator
VKTM - Virtual Keeper of TiMe process
W000 - Space Management Worker Processes
ABP - Autotask Background Process
SQL*Plus
SQL*Plus can show BLOB/BFILE columns in a SELECT query.
The errors while executing a script/SQL can be logged on to a table (SPERRORLOG, by
default).
SQL> set errorlogging on --->> errors will be logged onto SPERRORLOG.
SQL> set errorlogging on table scott.error_log --->> errors will be logged onto user defined table.
SQL> set errorlogging on truncate --->> will truncate all the rows in the table.
SQL> set errorlogging on identifier identifier-name --->> useful to query the logging table
Datapump
New options in Datapump export.
DATA_OPTIONS, ENCRYPTION, ENCRYPTION_ALGORITHM, ENCRYPTION_MODE, REMAP_DATA,
REUSE_DUMPFILES, TRANSPORTABLE
New options in Datapump import.
DATA_OPTIONS, PARTITION_OPTIONS, REMAP_DATA, REMAP_TABLE, TRANSPORTABLE
New option in Datapump export interactive mode - REUSE_DUMPFILES.
In datapump import, we can specify how the partitions should transform by using
PARTITION_OPTIONS.
Dumpfiles can be compressed. In Oracle 10g, only metadata could be compressed; from
11g, both data and metadata can be compressed. The dumpfile will be uncompressed automatically
during import.
Encryption: the dumpfile can be encrypted while being created. The encryption applies to
the entire dumpfile, not just to the encrypted columns as it was in Oracle Database 10g.
Masking: when we import data from production to test or development instances, we
have to make sure sensitive data such as credit card details are obfuscated/remapped
(altered in such a way that they are not identifiable). From 11g, Data Pump enables us to do that by
creating a masking function and then using it during import.
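A sketch, assuming a masking function scott.mask_pkg.mask_cc and a column scott.customers.card_no (all names hypothetical):
impdp system DIRECTORY=dp_dir DUMPFILE=prod.dmp REMAP_DATA=scott.customers.card_no:scott.mask_pkg.mask_cc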
RMAN
Multisection backups of the same file - RMAN can back up or restore a single file in parallel by
dividing the work among multiple channels. Each channel backs up one file section, which is a
contiguous range of blocks. This speeds up overall backup and restore performance, particularly
for bigfile tablespaces, in which a datafile can be sized upwards of several hundred GB
to TBs.
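For example, to back up datafile 5 in 500 MB sections spread across the allocated channels:
RMAN> BACKUP SECTION SIZE 500M DATAFILE 5;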
Recovery will make use of flashback logs in FRA (Flash/Fast Recovery Area).
Fast Backup Compression - in addition to the Oracle Database 10g backup compression
algorithm (BZIP2), RMAN now supports the ZLIB algorithm, which offers 40% better performance,
with a trade-off of no more than 20% lower compression ratio, versus BZIP2.
RMAN> configure compression algorithm 'ZLIB';
Backup of undo data is optimized: RMAN backs up only uncommitted undo, not committed undo.
Data Recovery Advisor (DRA) - quickly identify the root cause of failures; auto fix or
present recovery options to the DBA.
Archived redo log failover - this feature enables RMAN to complete backups even when
some archiving destinations are missing logs or contain logs with corrupt blocks, where a local
archive log destination is configured along with the FRA.
Virtual Private Catalog - a recovery catalog administrator can grant visibility of a subset
of registered databases in the catalog to specific RMAN users.
RMAN> grant catalog for database db-name to user-name;
Catalogs can be merged/moved/imported from one database to another.
New commands in RMAN
o RMAN> list failure;
o RMAN> list failure errnumber detail;
o RMAN> advise failure;
o RMAN> repair failure;
o RMAN> repair failure preview;
o RMAN> validate database; -- checks for corrupted blocks
o RMAN> create virtual catalog;
Partitioning
Support compression on INSERT, UPDATE and DELETE operations. 10g only supported
compression for bulk data-loading operations.
Advanced compression allows for a 2-3x compression ratio on structured and unstructured
data.
From Oracle 11g, we can compress individual partitions also.
Performance improvements
RAC - 70% faster (ADDM has a better global view of the RAC cluster).
Streams - 30-50% faster.
Optimizer stats collection - 10x faster.
OLAP (Online Analytic Processing) based materialized views for fast OLAP cube building.
Cube-organized MView supports automatic query rewrite and automatic refresh of the cube.
SQL Result Cache - new memory area in SGA for storing SQL query results, PL/SQL
function results and OCI call results. When we execute a query with the hint result_cache, the
results are stored in the SQL Result Cache. Query results caching is 25% faster. The size of the
cache is determined by result_cache_max_size, result_cache_max_result, result_cache_mode,
result_cache_remote_expiration.
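For example (the employees table is from the sample HR schema):
SQL> SELECT /*+ RESULT_CACHE */ department_id, AVG(salary) FROM employees GROUP BY department_id;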
Invisible indexes - indexes that are ignored by the optimizer; handy for testing the effect of
dropping an index without actually dropping it. The index is still maintained by DML while
invisible, and can be altered back rather than recreated.
SQL> alter index index-name invisible;
SQL> alter index index-name visible;
Oracle secure files - 5x faster than normal file systems.
Availability improvements
Ability to apply many patches on-line without downtime (RAC and single instance
databases).
XA transactions spanning multiple servers.
Improved runtime connection load balancing.
Flashback Transaction/Oracle Total Recall.
Data Guard improvements
Oracle Active Data Guard - Standby databases can now simultaneously be in read and
recovery mode - so use it for running reports 24x7.
Online upgrades: Test on standby and roll to primary.
Snapshot standby database - physical standby database can be temporarily converted into
an updateable one called snapshot standby database.
Creation of a physical standby has become easier.
From Oracle 11g, we can control archive log deletion by setting the log_auto_delete
parameter to TRUE. The log_auto_delete parameter must be coupled with the
log_auto_del_retention_target parameter to specify the number of minutes an archivelog is
maintained until it is purged. The default is 24 hours (1440 minutes).
Incremental backups can be taken on a readable physical standby.
Offload: Complete database and fast incremental backups.
Logical standby databases now support XML and CLOB datatypes as well as transparent
data encryption.
We can compress the redo data that goes to the standby server, by
setting compression=enable.
From Oracle 11g, logical standby provides support for DBMS_SCHEDULER.
When transferring redo data to a standby, if the standby does not respond in time, the log
transport service will wait for the specified timeout value (set by net_timeout=n) and then give up.
New package and procedure, DBMS_DG.INITIATE_FS_FAILOVER, introduced to
programmatically initiate a failover.
SecureFiles
SecureFiles provide faster access to unstructured data than normal file systems, provides the
benefits of LOBs and external files. For example, write access to SecureFiles is faster than a
standard Linux file system, while read access is about the same. SecureFiles can be encrypted for
security, de-duplicated and compressed for more efficient storage, cached (or not) for faster
access (or save the buffer cache space), and logged at several levels to reduce the mean time to
recover (MTTR) after a crash.
create table table-name ( ... lob-column lob-type [deduplicate] [compress high/low] [encrypt using
'encryption-algorithm'] [cache/nocache] [logging/nologging] ...) lob (lob-column) store as
securefile ...;
To create SecureFiles:
(i) The initialization parameter db_securefile should be set to PERMITTED (the default value).
(ii) The tablespace where we are creating the securefile should be Automatic Segment Space
Management (ASSM) enabled (default mode in Oracle Database 11g).
Deprecated in Oracle 11g:
Oracle export utility (exp); imp is still supported for backwards compatibility.
Windows SQL*Plus GUI & iSQLPlus will not be shipped anymore. Use SQL Developer
instead.
Oracle Enterprise Manager Java console.
copy command is deprecated.
Oracle 10g Release 1 (10.1.0) - New Features
Datapump - faster data movement with expdp and impdp, the successor to the normal exp/imp.
NID utility has been introduced to change the database name and id.
Oracle Enterprise Manager (OEM) became browser based. Through any browser we can
access data of a database in Oracle Enterprise Manager Database Control. Grid Control is used for
accessing/managing multiple instances.
Automated Storage Management (ASM). ASMB, RBAL, ARBx are the new background
processes related to ASM.
Ability to transport tablespaces across platforms (e.g. Windows to Linux, Solaris to HP-
UX) which have the same endian format. If the endian formats are different, we have to use RMAN.
In Oracle 10g, undo tablespace can guarantee the retention of unexpired undo extents.
SQL> CREATE UNDO TABLESPACE ... RETENTION GUARANTEE;
SQL> ALTER TABLESPACE UNDO_TS RETENTION GUARANTEE;
New 'drop database' statement deletes the datafiles and redolog files mentioned in the control
file, and deletes the SP file also.
SQL> STARTUP RESTRICT MOUNT EXCLUSIVE;
SQL> DROP DATABASE;
New memory structure in the SGA, i.e. the Streams pool (streams_pool_size parameter), useful
for Datapump activities and Streams replication.
Introduced new init parameter, sga_target, to change the value of the SGA dynamically. This
is called Automatic Shared Memory Management (ASMM). It includes the buffer cache, shared pool,
Java pool and large pool. It doesn't include the log buffer, the Streams pool, the buffer pools for
nonstandard block sizes, or the non-default KEEP and RECYCLE buffer pools.
SGA_TARGET = DB_CACHE_SIZE + SHARED_POOL_SIZE + JAVA_POOL_SIZE +
LARGE_POOL_SIZE
Temporary tablespace groups to group multiple temporary tablespaces into a single group.
From Oracle Database 10g, the ability to prepare the primary database and logical standby
for a switchover, thus reducing the time to complete the switchover.
On primary,
ALTER DATABASE PREPARE TO SWITCHOVER TO LOGICAL STANDBY;
On logical standby,
ALTER DATABASE PREPARE TO SWITCHOVER TO PRIMARY;
New packages
o DBMS_SCHEDULER, which can call OS utilities and programs, not just PL/SQL
program units like DBMS_JOB package. By using this package we can create jobs, programs,
schedules and job classes.
o DBMS_FILE_TRANSFER package to transfer files.
o DBMS_MONITOR, to enable end-to-end tracing (tracing is not done only by
session, but by client identifier).
o DBMS_ADVISOR, will help in working with several advisors.
o DBMS_WORKLOAD_REPOSITORY, to aid AWR, ADDM, ASH.
Auditing: FGA (Fine-grained auditing) now supports DML statements in addition to selects.
New features in RMAN
o Managing recovery related files with flash/fast recovery area.
o Optimized incremental backups using block change tracking (Faster incremental
backups) using a file (named block change tracking file). CTWR (Change Tracking Writer) is
the background process responsible for tracking the blocks.
o Reducing the time and overhead of full backups with incrementally updated
backups.
o Comprehensive backup job tracking and administration with Enterprise Manager.
o Backup set binary compression.
o New compression algorithm BZIP2 brought in.
o Automated Tablespace Point-in-Time Recovery.
o Automatic channel failover on backup & restore.
o Cross-platform tablespace conversion.
o Ability to preview the backups required to perform a restore operation.
RMAN> restore database preview [summary];
RMAN> restore tablespace tbs1 preview;
SQL*Plus enhancements
1. The default SQL> prompt can be changed by setting the below parameters in
$ORACLE_HOME/sqlplus/admin/glogin.sql
_connect_identifier (will prompt DBNAME>)
_date (will prompt DATE>)
_editor
_o_version
_o_release
_privilege (will prompt AS SYSDBA> or AS SYSOPER> or AS SYSASM>)
_sqlplus_release
_user (will prompt USERNAME>)
2. From 10g, the login.sql file is not only executed at SQL*Plus startup time, but also
at connect time as well. So SQL prompt will be changed after connect command.
3. Now we can login as SYSDBA without the quotation marks.
sqlplus / as sysdba
(as well as old sqlplus "/ as sysdba" or sqlplus '/ as sysdba'). This enhancement not only
means we have two fewer characters to type, but provides some additional benefits such
as not requiring escape characters in operating systems such as Unix.
4. From Oracle 10g, the spool command can append to an existing one.
SQL> spool result.log append
Virtual Private Database (VPD) has grown into a very powerful feature with the ability to
support a variety of requirements, such as masking columns selectively based on the policy and
applying the policy only when certain columns are accessed. Policy performance can also be
improved by choosing among multiple policy types that exploit the nature of the application,
making the feature applicable to more situations.
We can now shrink segments, tables and indexes to reclaim free blocks, provided that
Automatic Segment Space Management (ASSM) is enabled in the tablespace.
SQL> alter table table-name shrink space;
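Note that shrinking a table requires row movement to be enabled first; the CASCADE option also shrinks dependent indexes:
SQL> alter table table-name enable row movement;
SQL> alter table table-name shrink space cascade;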
Statistics can be collected for SYS schema, data dictionary objects and fixed objects (x$
tables).
Introduced Advisors
o SQL Access Advisor
o SQL Tuning Advisor
o Memory Advisor
o Undo Advisor
o Segment Advisor
o MTTR (Mean Time To Recover) Advisor
Oracle 10g Release 2 (10.2.0) - September 2005
New asmcmd utility for managing ASM storage.
Async COMMITs.
Passwords for DB Links are encrypted.
The CONNECT role can now only connect (CREATE privileges are removed).
Before 10g,
SQL> select PRIVILEGE from role_sys_privs where ROLE='CONNECT';
PRIVILEGE
----------------------------------------
CREATE VIEW
CREATE TABLE
ALTER SESSION
CREATE CLUSTER
CREATE SESSION
CREATE SYNONYM
CREATE SEQUENCE
CREATE DATABASE LINK
From 10g,
SYS> select PRIVILEGE from role_sys_privs where ROLE='CONNECT';
PRIVILEGE
----------------------------------------
CREATE SESSION
Oracle Database has a method of maintaining information that is used to roll back or undo
changes to the database. Oracle Database keeps records of the actions of transactions before they
are committed, and Oracle needs this information to roll back or undo changes to the database.
These records are called rollback or undo records.
Space for undo segments is dynamically allocated, consumed, freed, and reused — all under the
control of Oracle Database, rather than by DBA.
From Oracle 9i, the rollback segments method is referred to as "Manual Undo Management Mode"
and the new undo tablespaces method as "Automatic Undo Management Mode".
Notes:
Although both rollback segments and undo tablespaces are supported, the two modes cannot
be used in the same database instance, although for migration purposes it is possible, for
example, to create undo tablespaces in a database that is using rollback segments, or to drop
rollback segments in a database that is using undo tablespaces. However, you must bounce the
database in order to effect the switch to the other method of managing undo.
System rollback segment exists in both the modes.
When operating in automatic undo management mode, any manual undo management
SQL statements and initialization parameters are ignored and no error message is issued, e.g.
ALTER ROLLBACK SEGMENT statements are ignored.
UNDO_MANAGEMENT
The UNDO_MANAGEMENT initialization parameter determines whether undo is managed
automatically or manually. The default value for this parameter is MANUAL, i.e. manual undo
management mode.
UNDO_TABLESPACE
UNDO_TABLESPACE, an optional dynamic parameter that can be changed online, specifies the name
of the undo tablespace to use. An undo tablespace must be available, into which the database will
store undo records. The default undo tablespace is created at database creation, or an undo
tablespace can be created explicitly.
When the instance starts up, the database automatically selects for use the first available undo
tablespace. If there is no undo tablespace available, the instance starts, but uses the SYSTEM
rollback segment for undo. This is not recommended, and an alert message is written to the alert
log file to warn that the system is running without an undo tablespace. ORA-01552 error is issued
for any attempts to write non-SYSTEM related undo to the SYSTEM rollback segment.
If the database contains multiple undo tablespaces, you can optionally specify at startup that you
want an Oracle Database instance to use a specific undo tablespace. This is done by setting the
UNDO_TABLESPACE initialization parameter.
UNDO_TABLESPACE = undotbs
In this case, if you have not already created the undo tablespace, the STARTUP command will fail.
The UNDO_TABLESPACE parameter can be used to assign a specific undo tablespace to an
instance in an Oracle Real Application Clusters (RAC) environment.
UNDO_RETENTION
Committed undo information normally is lost when its undo space is overwritten by a newer
transaction. However, for consistent read purposes, long-running queries sometimes require old
undo information for undoing changes and producing older images of data blocks. The success of
several Flashback features can also depend upon older undo information.
The default value for the UNDO_RETENTION parameter is 900. Retention is specified in units of
seconds, so this value specifies the amount of time undo is kept in the tablespace. The system
retains undo for at least the time specified in this parameter.
The effect of the UNDO_RETENTION parameter is immediate, but it can only be honored if the
current undo tablespace has enough space. If an active transaction requires undo space and the
undo tablespace does not have available space, then the system starts reusing unexpired undo
space (if retention is not guaranteed). This action can potentially cause some queries to fail with
the ORA-01555 "snapshot too old" error message.
UNDO_RETENTION applies to both committed and uncommitted transactions, since the flashback
query feature introduced in Oracle needs this information to create a read-consistent copy of the
data in the past.
Oracle Database 10g automatically tunes undo retention by collecting database use statistics and
estimating undo capacity needs for the successful completion of the queries. You can set a low
threshold value for the UNDO_RETENTION parameter so that the system retains the undo for at
least the time specified in the parameter, provided that the current undo tablespace has enough
space. Under space constraint conditions, the system may retain undo for a shorter duration than
that specified by the low threshold value in order to allow DML operations to succeed.
The amount of time for which undo is retained for Oracle Database for the current undo tablespace
can be obtained by querying the TUNED_UNDORETENTION column of the V$UNDOSTAT dynamic
performance view.
SQL> select tuned_undoretention from v$undostat;
Automatic tuning of undo retention is not supported for LOBs. The RETENTION value for LOB
columns is set to the value of the UNDO_RETENTION parameter.
UNDO_SUPPRESS_ERRORS
If your code has ALTER TRANSACTION commands that perform manual undo management
operations, set this parameter to TRUE to suppress the errors generated when manual undo
management SQL operations are issued in automatic undo management mode.
UNDO_SUPPRESS_ERRORS = false
Retention Guarantee
Oracle Database 10g lets you guarantee undo retention. When you enable this option, the
database never overwrites unexpired undo data i.e. undo data whose age is less than the undo
retention period. This option is disabled by default, which means that the database can overwrite
the unexpired undo data in order to avoid failure of DML operations if there is not enough free
space left in the undo tablespace.
You enable the guarantee option by specifying the RETENTION GUARANTEE clause for the undo
tablespace when it is created by either the CREATE DATABASE or CREATE UNDO TABLESPACE
statement or you can later specify this clause in an ALTER TABLESPACE statement. You do not
guarantee that unexpired undo is preserved if you specify the RETENTION NOGUARANTEE clause.
In order to guarantee the success of queries even at the price of compromising the success of DML
operations, you can enable retention guarantee. This option must be used with caution, because it
can cause DML operations to fail if the undo tablespace is not big enough. However, with proper
settings, long-running queries can complete without risk of receiving the ORA-01555 "snapshot too
old" error message, and you can guarantee a time window in which the execution of Flashback
features will succeed.
From 10g, you can use the DBA_TABLESPACES view to determine the RETENTION setting for the
undo tablespace. A column named RETENTION will contain a value of GUARANTEE,
NOGUARANTEE, or NOT APPLY (used for tablespaces other than the undo tablespace).
A typical use of the guarantee option is when you want to ensure deterministic and predictable
behavior of Flashback Query by guaranteeing the availability of the required undo data.
Oracle Database supports automatic extension of the undo tablespace to facilitate capacity
planning of the undo tablespace in the production environment. When the system is first running
in the production environment, you may be unsure of the space requirements of the undo
tablespace. In this case, you can enable automatic extension for datafiles of the undo tablespace
so that they automatically increase in size when more space is needed. By combining automatic
extension of the undo tablespace with automatically tuned undo retention, you can ensure that
long-running queries will succeed by guaranteeing the undo required for such queries.
After the system has stabilized and you are more familiar with undo space requirements, Oracle
recommends that you set the maximum size of the tablespace to be slightly (10%) more than the
current size of the undo tablespace.
If you have decided on a fixed-size undo tablespace, the Undo Advisor can help you estimate the
needed capacity, and you can then calculate the amount of retention your system will need. You
can access the Undo Advisor through Enterprise Manager or through the DBMS_ADVISOR package.
The Undo Advisor relies for its analysis on data collected in the Automatic Workload Repository
(AWR). An adjustment to the collection interval and retention period for AWR statistics can affect
the precision and the type of recommendations the advisor produces.
Undo Advisor
Oracle Database provides an Undo Advisor that provides advice on and helps automate the
establishment of your undo environment. You activate the Undo Advisor by creating an undo
advisor task through the advisor framework. The following example creates an undo advisor task
to evaluate the undo tablespace. The name of the advisor is 'Undo Advisor'. The analysis is based
on AWR snapshots, which you must specify by setting parameters START_SNAPSHOT and
END_SNAPSHOT.
DECLARE
tid NUMBER;
tname VARCHAR2(30);
oid NUMBER;
BEGIN
DBMS_ADVISOR.CREATE_TASK('Undo Advisor', tid, tname, 'Undo Advisor Task');
DBMS_ADVISOR.CREATE_OBJECT(tname,'UNDO_TBS',null, null, null, 'null', oid);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'TARGET_OBJECTS', oid);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'START_SNAPSHOT', 1);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'END_SNAPSHOT', 2);
DBMS_ADVISOR.execute_task(tname);
end;
/
Once you have created the advisor task, you can view the output and recommendations in the
Automatic Database Diagnostic Monitor (ADDM) in Enterprise Manager. This information is also
available in the DBA_ADVISOR_* data dictionary views.
Undo Space = UNDO_RETENTION (in seconds) * undo blocks per second + overhead
where:
* Undo Space is the number of undo blocks
* overhead is the small overhead for metadata, based on extent and file size
(DB_BLOCK_SIZE)
As an example, if UNDO_RETENTION is set to 2 hours and the transaction rate (UPS) is 200 undo
blocks per second, with a 4K block size, the required undo space is computed as follows:
(2 * 3600 * 200 * 4K) = 5.8 GB
Such computation can be performed by using information in the V$UNDOSTAT view. In the steady
state, you can query the view to obtain the transaction rate. The overhead figure can also be
obtained from the view.
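For example, the peak undo blocks generated per second (UPS) can be obtained with a query like this:
SQL> SELECT MAX(undoblks / ((end_time - begin_time) * 86400)) AS undo_blocks_per_sec FROM v$undostat;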
You cannot create database objects in an undo tablespace. It is reserved for system-managed
undo data.
The following statement illustrates using the UNDO TABLESPACE clause in a CREATE DATABASE
statement. The undo tablespace is named undotbs_01 and one datafile is allocated for it.
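A sketch of such a statement (database name, file path, and size are illustrative):
SQL> CREATE DATABASE mydb
     ...
     UNDO TABLESPACE undotbs_01 DATAFILE '/u01/oradata/mydb/undotbs_01.dbf' SIZE 500M;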
If the undo tablespace cannot be created successfully during CREATE DATABASE, the entire
operation fails.
The CREATE UNDO TABLESPACE statement is the same as the CREATE TABLESPACE statement,
but the UNDO keyword is specified. The database determines most of the attributes of the undo
tablespace, but you can specify the DATAFILE clause.
You can create more than one undo tablespace, but only one of them can be active at any one
time.
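For example (path and size are illustrative):
SQL> CREATE UNDO TABLESPACE undotbs_02 DATAFILE '/u01/oradata/mydb/undotbs_02.dbf' SIZE 500M AUTOEXTEND ON;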
Resizing the undo tablespace
When resizing the undo tablespace you may encounter the ORA-03297 error: file contains used data
beyond the requested RESIZE value. This means that some undo information is still stored above
the datafile size we want to set. We can check the highest used block, to find the minimum size
to which we can resize a particular datafile, by querying the dba_free_space view.
Another way to set the undo tablespace to the size we want is to create another undo tablespace,
set it as the default one, take the old one offline, and then just drop the big old tablespace.
Dropping an undo tablespace
An undo tablespace can only be dropped if it is not currently used by any instance. If the undo
tablespace contains any outstanding transactions (e.g. a transaction died but has not yet been
recovered), the DROP TABLESPACE statement fails. However, since DROP TABLESPACE drops an
undo tablespace even if it contains unexpired undo information (within retention period), you must
be careful not to drop an undo tablespace if undo information is needed by some existing queries.
DROP TABLESPACE for undo tablespaces behaves like DROP TABLESPACE ... INCLUDING
CONTENTS. All contents of the undo tablespace are removed.
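Switching undo tablespaces
The switch is performed with a dynamic ALTER SYSTEM command, for example:
SQL> ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02;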
Assuming undotbs_01 is the current undo tablespace, after this command successfully executes,
the instance uses undotbs_02 in place of undotbs_01 as its undo tablespace.
If any of the following conditions exist for the tablespace being switched to, an error is reported
and no switching occurs:
The tablespace does not exist
The tablespace is not an undo tablespace
The tablespace is already being used by another instance (in RAC environment)
The database is online while the switch operation is performed, and user transactions can be
executed while this command is being executed. When the switch operation completes
successfully, all transactions started after the switch operation began are assigned to transaction
tables in the new undo tablespace.
The switch operation does not wait for transactions in the old undo tablespace to commit. If there
are any pending transactions in the old undo tablespace, the old undo tablespace enters into a
PENDING OFFLINE mode. In this mode, existing transactions can continue to execute, but undo
records for new user transactions cannot be stored in this undo tablespace.
An undo tablespace can exist in this PENDING OFFLINE mode, even after the switch operation
completes successfully. A PENDING OFFLINE undo tablespace cannot be used by another instance,
nor can it be dropped. Eventually, after all active transactions have committed, the undo
tablespace automatically goes from the PENDING OFFLINE mode to the OFFLINE mode. From then
on, the undo tablespace is available for other instances (in a RAC environment).
If the parameter value for UNDO_TABLESPACE is set to '' (two single quotes), then the current
undo tablespace is switched out and the next available undo tablespace is switched in. Use this
statement with care, because if there is no undo tablespace available, the SYSTEM rollback
segment is used. This causes ORA-01552 error to be issued for any attempts to write non-SYSTEM
related undo to the SYSTEM rollback segment.
You can specify an undo pool for each consumer group. An undo pool controls the amount of total
undo that can be generated by a consumer group. When the total undo generated by a consumer
group exceeds its undo limit, the current UPDATE transaction generating the undo is terminated.
No other members of the consumer group can perform further updates until undo space is freed
from the pool.
When no UNDO_POOL directive is explicitly defined, users are allowed unlimited undo space.
In addition to the proactive undo space alerts, Oracle Database also provides alerts if your system
has long-running queries that cause SNAPSHOT TOO OLD errors. To prevent excessive alerts, the
long query alert is issued at most once every 24 hours. When the alert is generated, you can
check the Undo Advisor Page of Enterprise Manager to get more information about the undo
tablespace.
The following dynamic performance views are useful for obtaining space information about the
undo tablespace:
View: Description

V$UNDOSTAT: Contains statistics for monitoring and tuning undo space. Use this view to help estimate the amount of undo space required for the current workload. Oracle also uses this view's information to tune undo usage in the system.
V$ROLLSTAT: For automatic undo management mode, information reflects the behavior of the undo segments in the undo tablespace.
V$TRANSACTION: Contains undo segment information.
DBA_UNDO_EXTENTS: Shows the status and size of each extent in the undo tablespace.
WRH$_UNDOSTAT: Contains statistical snapshots of V$UNDOSTAT information.
WRH$_ROLLSTAT: Contains statistical snapshots of V$ROLLSTAT information.
The V$UNDOSTAT view is useful for monitoring the effects of transaction execution on undo space
in the current instance. Statistics are available for undo space consumption, transaction
concurrency, the tuning of undo retention, and the length and SQL ID of long-running queries in
the instance. Each row in the view contains statistics collected in the instance for a 10-minute
interval, marked by the BEGIN_TIME and END_TIME columns; the rows are in descending order by
the BEGIN_TIME column value. The first row of the view contains statistics for the (partial) current
time period. The view contains a total of 1008 rows, spanning a 7 day cycle.
Flashback Features
Oracle Database includes several features that are based upon undo information and that allow
administrators and users to access database information from a previous point in time. These
features are part of the overall flashback strategy incorporated into the database and include:
Flashback Query
Flashback Versions Query
Flashback Transaction Query
Flashback Table
Flashback Database
The retention period for undo information is an important factor for the successful execution of
flashback features. It determines how far back in time a database version can be established.
We must choose an undo retention interval that is long enough to enable users to construct a
snapshot of the database for the oldest version of the database that they are interested in. For
example, if an application requires that a version of the database be available reflecting its content
12 hours previously, then UNDO_RETENTION must be set to 43200 (seconds).
You might also want to guarantee that unexpired undo is not overwritten by specifying the
RETENTION GUARANTEE clause for the undo tablespace.
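The DBMS_UNDO_ADV.RBU_MIGRATION function can be used to estimate the size (in MB) of the undo tablespace needed when migrating from rollback segments to automatic undo management: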
set serveroutput on
DECLARE
utbsize_in_MB NUMBER;
BEGIN
utbsize_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION;
dbms_output.put_line(utbsize_in_MB||'MB');
END;
/
With Oracle 8i, Oracle introduced transportable tablespace (TTS) technology that moves
tablespaces between databases. Oracle 8i supports tablespace transportation between databases
that run on the same OS platform and use the same database block size.
With Oracle 9i, TTS (Transportable Tablespaces) technology was enhanced to support tablespace
transportation between databases on platforms of the same type, but using different block sizes.
With Oracle 10g, TTS (Transportable Tablespaces) technology was further enhanced to support
transportation of tablespaces between databases running on different OS platforms (e.g. Windows
to Linux, Solaris to HP-UX) that have the same endian format. If the endian formats are different,
you have to use RMAN to convert the datafiles (e.g. Windows to Solaris, Tru64 to AIX). From this
version we can also transport a whole database; this is called Transportable Database.
From Oracle 11g, we can transport single partition of a tablespace between databases.
You can also query the V$TRANSPORTABLE_PLATFORM view to see all the platforms that are
supported, and to determine their platform names and IDs and their endian format.
Moving data using transportable tablespaces can be much faster than performing either
an export/import or unload/load of the same data, because transporting a tablespace only requires
copying of datafiles and integrating the tablespace structural information. You can also use
transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds
you would have to perform when importing or loading table data.
In Oracle 8i, there were three restrictions with TTS. First, both databases must have the same
block size. Second, both platforms must run the same OS. Third, you cannot rename the
tablespace. Oracle 9i removes the first restriction, and Oracle 10g removes the second.
Oracle 10g also makes available a command to rename tablespaces.
Limitations/Restrictions
Following are limitations/restrictions of transportable tablespace:
The source and target database must use the same character set and national character set.
If Automatic Storage Management (ASM) is used with either the source or destination database,
you must use RMAN to transport/convert the tablespace.
You cannot transport a tablespace to a target database in which a tablespace with the same name
already exists. However, you can rename either the tablespace to be transported or the
destination tablespace before the transport operation.
Binary_Float and Binary_Double datatypes (new in Oracle 10g) are not supported.
At Source:
Validating the self-contained property
TTS requires that all the tablespaces being moved are self-contained. This means that
segments within the migration tablespace set cannot have dependencies on segments in
tablespaces outside the transportable tablespace set. This can be checked using the
DBMS_TTS.TRANSPORT_SET_CHECK procedure.
SQL> exec DBMS_TTS.TRANSPORT_SET_CHECK('tbs', TRUE);
SQL> exec DBMS_TTS.TRANSPORT_SET_CHECK('tbs1, tbs2, tbs3', FALSE, TRUE);
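Any violations found by the check can then be listed from the TRANSPORT_SET_VIOLATIONS view:
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;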
If the set is not self-contained, you should either remove the dependencies by dropping/moving
the offending segments, or include in the TTS set the tablespaces containing the segments on
which the migration set depends.
If the tablespace set being transported is not self-contained, then the export will fail.
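A sketch of the remaining source-side steps (directory object, dump file, and tablespace names are illustrative): make the tablespaces read only, export the metadata with Data Pump, then copy the datafiles and the dump file to the target:
SQL> ALTER TABLESPACE tbs1 READ ONLY;
$ expdp system DIRECTORY=dp_dir DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=tbs1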
You can drop the tablespaces at the source, if you don't want them:
SQL> drop tablespace tbs-name including contents;
At Target:
Import the export file.
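A sketch of the import (paths and names are illustrative):
$ impdp system DIRECTORY=dp_dir DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/u01/oradata/trgt/tbs1_01.dbf'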
Finally we have to switch the new tablespaces into read write mode:
SQL> alter tablespace tbs-name read write;
Create transportable tablespace sets from backup for one or more tablespaces.
RMAN> TRANSPORT TABLESPACE example, tools TABLESPACE DESTINATION '/disk1/trans'
AUXILIARY DESTINATION '/disk1/aux' UNTIL TIME 'SYSDATE-15/1440';
1. Generate a transportable tablespace set that consists of the datafiles for the set of tablespaces
being transported and an export file containing structural information for the set of tablespaces.
2. Copy the datafiles and the export file to the primary database.
3. Invoke the Data Pump utility to plug the set of tablespaces into the primary database. Redo data
will be generated and applied at the standby site to plug the tablespace into the standby database.
Related Packages
DBMS_TTS
DBMS_EXTENDED_TTS_CHECKS
A tablespace group lets you assign multiple temporary tablespaces to a single user and increases
the addressability of temporary tablespaces.
The following statement creates temporary tablespace temp as a member of the temp_grp
tablespace group. If the tablespace group does not already exist, then Oracle Database creates it
during execution of this statement.
SQL> CREATE TEMPORARY TABLESPACE temp TEMPFILE 'temp01.dbf' SIZE 5M AUTOEXTEND
ON TABLESPACE GROUP temp_grp;
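The group can then be assigned wherever a temporary tablespace name is expected, for example:
SQL> ALTER USER scott TEMPORARY TABLESPACE temp_grp;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_grp;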
Related Views:
DBA_TABLESPACE_GROUPS
DBA_TEMP_FILES
V$TEMPFILE
V$TEMPSTAT
V$TEMP_SPACE_HEADER
V$TEMPSEG_USAGE
Temporary tablespaces are used to manage space for database sort and join operations and for
storing global temporary tables. Joining two large tables or sorting a bigger result set cannot
always be done in memory using SORT_AREA_SIZE in the PGA (Program Global Area), so space is
allocated in a temporary tablespace for these types of operations. Other SQL operations
that might require disk sorting are: CREATE INDEX, ANALYZE, SELECT DISTINCT, ORDER BY,
GROUP BY, UNION, INTERSECT, MINUS, sort-merge joins, etc.
Note that a temporary tablespace cannot contain permanent objects and therefore doesn't need to
be backed up. A temporary tablespace contains schema objects only for the duration of a session.
From Oracle 9i, we can specify a default temporary tablespace when you create a database, using
the DEFAULT TEMPORARY TABLESPACE extension to the CREATE DATABASE statement.
e.g.
SQL> CREATE DATABASE oracular .....
Example:
SQL> CREATE TABLESPACE TEMPTBS DATAFILE '/path/temp.dbf' SIZE 2048M AUTOEXTEND ON
NEXT 1M MAXSIZE UNLIMITED LOGGING DEFAULT NOCOMPRESS ONLINE EXTENT MANAGEMENT
DICTIONARY;
Oracle 7.3 & 8.0 - CREATE TABLESPACE temp DATAFILE ... TEMPORARY;
Example:
SQL> CREATE TABLESPACE TEMPTBS DATAFILE '/path/temp.dbf' SIZE 2048M AUTOEXTEND ON
NEXT 1M MAXSIZE UNLIMITED LOGGING DEFAULT NOCOMPRESS ONLINE TEMPORARY EXTENT
MANAGEMENT DICTIONARY;
Examples:
SQL> CREATE TEMPORARY TABLESPACE TEMPTBS TEMPFILE '/path/temp.dbf' SIZE 1000M
AUTOEXTEND ON NEXT 8K MAXSIZE 1500M EXTENT MANAGEMENT LOCAL UNIFORM SIZE
1M BLOCKSIZE 8K;
Restrictions:
(1) We cannot specify nonstandard block sizes for a temporary tablespace, or for any tablespace
that we intend to assign as the temporary tablespace for users.
(2) We cannot specify FORCE LOGGING for an undo or temporary tablespace.
(3) We cannot specify AUTOALLOCATE for a temporary tablespace.
Unlike normal datafiles, tempfiles are not fully allocated. When you create a tempfile, Oracle only
writes to the header and last block of the file. This is why it is much quicker to create a tempfile
than to create a normal datafile.
Tempfiles are not recorded in the database's control file. This implies that you can just recreate
them whenever you restore the database, or after deleting them by accident. You can have
different tempfile configurations between primary and standby databases in a Data Guard
environment, or configure tempfiles to be local instead of shared in a RAC environment.
You cannot remove datafiles from a tablespace until you drop the entire tablespace. However, you
can remove a tempfile from a database; see the example below.
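A sketch (the tempfile path is illustrative):
SQL> ALTER DATABASE TEMPFILE '/u01/oradata/mydb/temp01.dbf' DROP INCLUDING DATAFILES;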
If you remove all tempfiles from a temporary tablespace, you may encounter error:
ORA-25153: Temporary Tablespace is Empty.
Except for adding a tempfile, you cannot use the ALTER TABLESPACE statement for a locally
managed temporary tablespace (operations like rename, set to read only, recover, etc. will fail).
Locally managed temporary tablespaces have temporary datafiles (tempfiles), which are similar to
ordinary datafiles except:
You cannot create a tempfile with the ALTER DATABASE statement.
You cannot rename a tempfile or set it to read-only.
Tempfiles are always set to NOLOGGING mode.
When you create or resize tempfiles, they are not always guaranteed allocation of disk
space for the file size specified. On certain file systems (like UNIX), disk blocks are allocated not at
file creation or resizing, but when the blocks are first accessed.
Note: This arrangement enables fast tempfile creation and resizing; however, the disk could run
out of space later when the tempfiles are accessed.
Tempfile information is shown in the dictionary view DBA_TEMP_FILES and the dynamic
performance view V$TEMPFILE.
From Oracle 9i, we can define a default temporary tablespace at database creation time, or by
issuing an "ALTER DATABASE" statement:
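A sketch (the tablespace name is illustrative):
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;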
By default, the default temporary tablespace is SYSTEM. Each database can be assigned one and
only one default temporary tablespace. Using this feature, a temporary tablespace is automatically
assigned to users.
The DEFAULT TEMPORARY TABLESPACE cannot be dropped until you create another one.
To see the default temporary tablespace for a database, execute the following query:
SQL> select PROPERTY_NAME,PROPERTY_VALUE from database_properties where property_name
like '%TEMP%';
The DBA should assign a temporary tablespace to each user in the database to prevent them from
allocating sort space in the SYSTEM tablespace. This can be done with one of the following
commands:
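For example (user and tablespace names are illustrative):
SQL> CREATE USER ravi IDENTIFIED BY password DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;
SQL> ALTER USER scott TEMPORARY TABLESPACE temp;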
All new users that are not explicitly assigned a TEMPORARY TABLESPACE will get the default
temporary tablespace as their TEMPORARY TABLESPACE. Also, when you assign a TEMPORARY
tablespace to a user, Oracle will not change this value the next time you change the default
temporary tablespace for the database.
Performance Considerations
Unlike datafiles, tempfiles are not listed in V$DATAFILE and DBA_DATA_FILES. Use V$TEMPFILE
and DBA_TEMP_FILES instead.
DBA_FREE_SPACE does not record free space for temporary tablespaces. Use
DBA_TEMP_FREE_SPACE or V$TEMP_SPACE_HEADER instead.
From 11g, we can check free temp space in new view DBA_TEMP_FREE_SPACE.
SQL> select * from DBA_TEMP_FREE_SPACE;
Resizing tempfile
In Oracle 11g, a temporary tablespace or its tempfiles can be shrunk, down to a specified size.
Shrinking frees as much space as possible while maintaining the other attributes of the tablespace
or tempfiles. The optional KEEP clause defines a minimum size for the tablespace or tempfile.
SQL> alter tablespace temp-tbs shrink space;
SQL> alter tablespace temp-tbs shrink space keep n{K|M|G|T|P|E};
SQL> alter tablespace temp-tbs shrink tempfile 'tempfile-name' ;
SQL> alter tablespace temp-tbs shrink tempfile 'tempfile-name' keep n{K|M|G|T|P|E};
A script can be used to report temporary tablespace usage (the original script here was created
for Oracle9i Database). With such a script we can monitor the actual space used in a temporary
tablespace and see the HWM (High Water Mark) of the temporary tablespace. The script is
designed to run when there is only one temporary tablespace in the database.
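A minimal sketch using V$SORT_SEGMENT (assuming default sort-segment behavior), showing
current usage and the HWM in blocks:
SQL> SELECT tablespace_name, total_blocks, used_blocks, max_used_blocks AS hwm_blocks
     FROM v$sort_segment;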
Several methods exist to reclaim the space used by a larger than normal temporary tablespace:
(1) Restart the database, if possible.
(2) The method that works for all releases of Oracle: simply drop and recreate the temporary
tablespace back to its original (or another reasonable) size.
(3) If you are using Oracle9i or higher, drop the large tempfile (which will drop the tempfile from
the data dictionary and from the OS file system).
From 11g, while creating global temporary tables, we can specify TEMPORARY tablespaces.
Related Views:
DBA_TEMP_FILES
DBA_DATA_FILES
DBA_TABLESPACES
DBA_TEMP_FREE_SPACE (Oracle 11g)
V$TEMPFILE
V$TEMP_SPACE_HEADER
V$TEMPORARY_LOBS
V$TEMPSTAT
V$TEMPSEG_USAGE
Statspack in Oracle
Statspack was introduced in Oracle8i.
Statspack is a set of performance monitoring, diagnosis and reporting utilities provided by Oracle.
Statspack provides improved UTLBSTAT/UTLESTAT functionality, as their successor, though the old
BSTAT/ESTAT scripts are still available.
The statspack package is a set of SQL, PL/SQL, and SQL*Plus scripts that allow the collection,
automation, storage, and viewing of performance data. Statspack stores the performance statistics
permanently in Oracle tables, which can later be used for reporting and analysis. The data
collected can be analyzed using Statspack reports, which include an instance health and load
summary page, high resource SQL statements, the traditional wait events and initialization
parameters.
Statspack is a diagnosis tool for instance-wide performance problems; it also supports application
tuning activities by providing data which identifies high-load SQL statements. Statspack can be
used both proactively to monitor the changing load on a system, and also reactively to investigate
a performance problem.
Although AWR and ADDM (introduced in Oracle 10g) provide better statistics than Statspack,
users who are not licensed to use the Enterprise Manager Diagnostic Pack should continue to use
Statspack.
Running the SPCREATE.SQL script (located in $ORACLE_HOME/rdbms/admin) as SYSDBA will
create the PERFSTAT user, the statspack objects in it, and the STATSPACK package.
NOTE: The default tablespace or temporary tablespace must not be SYSTEM for the PERFSTAT user.
To install statspack in batch mode, you must assign values to the SQL*Plus variables that specify
the default and temporary tablespaces before running SPCREATE.SQL.
DEFAULT_TABLESPACE: For the default tablespace
TEMPORARY_TABLESPACE: For the temporary tablespace
PERFSTAT_PASSWORD: For the PERFSTAT user password
$ sqlplus "/as sysdba"
SQL> define default_tablespace='STATS'
SQL> define temporary_tablespace='TEMP_TBS'
SQL> define perfstat_password='perfstat'
SQL> @?/rdbms/admin/spcreate
When SPCREATE.SQL is run, it does not prompt for the information provided by the variables.
When a snapshot is executed, the STATSPACK software will sample from the RAM in-memory
structures inside the SGA and transfer the values into the corresponding STATSPACK tables.
Taking such a snapshot stores the current values for the performance statistics in the statspack
tables. This snapshot can be used as a baseline for comparison with another snapshot taken at a
later time.
$ sqlplus perfstat/perfstat
SQL> exec statspack.snap;
or
SQL> exec statspack.snap(i_snap_level=>10);
to instruct statspack to gather more details in the snapshot.
Note that in most cases, there is a direct correspondence between the v$view in the SGA and the
corresponding STATSPACK table.
e.g. the stats$sysstat table is similar to the v$sysstat view.
Remember to set timed_statistics to true for the instance. Statspack will then include important
timing information in the data it collects.
Note: In a RAC environment, you must connect to the instance for which you want to collect data.
Snapshot collection can be automated with DBMS_JOB; for example, to take a snapshot every hour:
BEGIN
SYS.DBMS_JOB.ISUBMIT -- ISUBMIT takes the job number as an IN parameter; SUBMIT's first parameter is OUT
(job => 999,
what => 'statspack.snap;',
next_date => to_date('17/08/2009 18:00:00','dd/mm/yyyy hh24:mi:ss'),
interval => 'trunc(SYSDATE+1/24,''HH'')',
no_parse => FALSE
);
COMMIT;
END;
/
Alternatively, use an OS scheduling utility, such as cron, to take snapshots at regular intervals.
Statspack reporting
The information captured by a STATSPACK snapshot is cumulative. The v$views accumulate
database information from instance startup until the instance is shut down. In order to get a
meaningful elapsed-time report, you must run a STATSPACK report that compares two snapshots.
When the report is run, you are prompted for the following:
The beginning snapshot ID
The ending snapshot ID
The name of the report text file to be created
It is not correct to specify begin and end snapshots that were taken during different instance
startups. In other words, the instance must not have been shut down between the times that the
begin and end snapshots were taken.
This is necessary because the database's dynamic performance tables, which statspack queries to
gather the data, reside in memory. Shutting down the Oracle database resets the values in the
performance tables to 0. Because statspack subtracts the begin-snapshot statistics from the
end-snapshot statistics, the end snapshot would have smaller values than the begin snapshot, the
resulting output would be invalid, and the report would show an appropriate error to indicate this.
To run the report without being prompted, assign values to the SQL*Plus variables that specify the
begin snap ID, the end snap ID, and the report name before running SPREPORT. The variables
are:
BEGIN_SNAP: Specifies the begin snapshot ID
END_SNAP: Specifies the end snapshot ID
REPORT_NAME: Specifies the report output name
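For example, running SPREPORT.SQL in batch mode (with hypothetical snapshot IDs):
SQL> connect perfstat
SQL> define begin_snap=1
SQL> define end_snap=2
SQL> define report_name=inst_report
SQL> @?/rdbms/admin/spreport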
Statspack has two types of collection options, level and threshold. The level parameter controls the
type of data collected from Oracle, while the threshold parameter acts as a filter for the collection
of SQL statements into the stats$sql_summary table.
Level 0 - Captures general statistics, including rollback segment, row cache, buffer pool
statistics, SGA, system events, background events, session events, system statistics,
wait statistics, lock statistics, and latch information.
Level 5 (default) - Includes capturing high resource usage SQL statements, along with all data
captured by lower levels.
Level 6 - Includes capturing SQL plan and SQL plan usage information for high resource usage
SQL statements, along with all data captured by lower levels.
Level 7 - Captures segment level statistics, including logical and physical reads, row lock, ITL
and buffer busy waits, along with all data captured by lower levels.
Level 10 - Includes capturing parent & child latch statistics, along with all data captured by
lower levels.
You can change the default parameters used for taking snapshots so that they are tailored to the
instance's workload.
To temporarily use a snapshot level or threshold that is different from the instance's default
snapshot values, you specify the required threshold or snapshot level when taking the snapshot.
This value is used only for the immediate snapshot taken; the new value is not saved as the
default.
You can save the new value as the instance's default in either of two ways, using the statspack
SNAP or MODIFY_STATSPACK_PARAMETER procedure with the appropriate parameter and the
new value:
1) You can change the default level of a snapshot with the STATSPACK.SNAP procedure. The
i_modify_parameter=>'true' argument changes the level permanently for all future snapshots.
SQL> EXEC STATSPACK.SNAP(i_snap_level=>8, i_modify_parameter=>'true');
Setting the I_MODIFY_PARAMETER value to TRUE saves the new thresholds in the
STATS$STATSPACK_PARAMETER table. These thresholds are used for all subsequent snapshots.
If the I_MODIFY_PARAMETER was set to FALSE or omitted, then the new parameter values are not
saved. Only the snapshot taken at that point uses the specified values. Any subsequent snapshots
use the preexisting values in the STATS$STATSPACK_PARAMETER table.
2) You can change the defaults with the STATSPACK.MODIFY_STATSPACK_PARAMETER
procedure. This procedure changes the values permanently, but does not take a snapshot.
Snapshot level and threshold information used by the package is stored in the
STATS$STATSPACK_PARAMETER table.
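For example, to permanently set the default snapshot level to 6 (a minimal sketch):
SQL> EXEC STATSPACK.MODIFY_STATSPACK_PARAMETER(i_snap_level=>6);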
To report on a single SQL statement (identified by its hash value), run the SPREPSQL.SQL script:
$ sqlplus perfstat/perfstat
SQL> @?/rdbms/admin/sprepsql.sql
The SPREPSQL.SQL script can run in batch mode. To run the report without being prompted,
assign values to the SQL*Plus variables that specify the begin snap ID, the end snap ID, the hash
value, and the report name before running the SPREPSQL.SQL script. The variables are:
BEGIN_SNAP: specifies the begin snapshot ID
END_SNAP: specifies the end snapshot ID
HASH_VALUE: specifies the hash value
REPORT_NAME: specifies the report output name
SQL> connect perfstat
SQL> define begin_snap=66
SQL> define end_snap=68
SQL> define hash_value=2342342385
SQL> define report_name=sql_report
SQL> @?/rdbms/admin/sprepsql
When SPREPSQL.SQL is run, it does not prompt for the information provided by the variables.
If you want to gather session statistics and wait events for a particular session (in addition to the
instance statistics and wait events), specify the session ID in the call to statspack. The statistics
gathered for the session include session statistics, session events, and lock activity. The default
behavior is to not gather session level statistics.
e.g.: SQL> exec statspack.snap(i_session_id=>333);
Old snapshots can be purged with the SPPURGE.SQL script. Purging can require the use of a large
rollback segment, because all data relating to each purged snapshot ID will be deleted. You can
avoid rollback segment extension errors in one of two ways:
Specify a smaller range of snapshot IDs to purge.
Explicitly use a large rollback segment, by executing the SET TRANSACTION USE
ROLLBACK SEGMENT statement before running the SPPURGE.SQL script.
Note: It is better to export the schema as a backup before running this script, either using your
own export parameters or those provided in SPUEXP.PAR.
To uninstall statspack, run the SPDROP.SQL script. This script will drop the statspack objects and
the PERFSTAT user.
The SPDROP.SQL script calls the following scripts:
SPDTAB.SQL - drops tables and public synonyms
SPDUSR.SQL - drops the user
Check output files produced, in present directory, SPDTAB.LIS & SPDUSR.LIS, to ensure that the
package was completely uninstalled.
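A typical invocation (assuming a default installation):
$ sqlplus "/as sysdba"
SQL> @?/rdbms/admin/spdrop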
Note: If queries are aged out of the shared pool, the stats from V$SQL are reset. This can throw
off the delta calculations and even make them negative. For example, query A has 10,000
buffer_gets at snapshot 1, but at snapshot 2 it has been aged out of the pool and reloaded and
now shows only 1,000 buffer_gets. So, when you run spreport.sql from snapshot 1 to 2, you'll get
1,000 - 10,000 = -9,000 for this query.
Upgrading Statspack
The statspack upgrade scripts must be run as a user with the SYSDBA privilege.
SPUP102.SQL: upgrades the statspack schema to the 11g schema (from 10.2).
SPUP10.SQL: upgrades the statspack schema to the 10.2 schema (from 10.1).
SPUP92.SQL: upgrades the statspack schema to the 10.1 schema (from 9.2).
SPUP90.SQL: upgrades the statspack schema to the 9.2 schema (from 9.0).
SPUP817.SQL: upgrades the statspack schema from 8.1.7 to 9.0.
SPUP816.SQL: upgrades the statspack schema from 8.1.6 to 8.1.7.
NOTE:
1. Backup the existing schema before running the upgrade scripts.
2. Downgrade scripts are not provided.
3. Upgrade scripts should only be run once.
Statspack Documentation
The SPDOC.TXT file in the $ORACLE_HOME/rdbms/admin directory contains instructions and
documentation on the statspack package.
The statspack tables (owned by PERFSTAT) include:
STATS$BG_EVENT_SUMMARY
STATS$BUFFER_POOL_STATISTICS
STATS$BUFFERED_QUEUES
STATS$BUFFERED_SUBSCRIBERS
STATS$CR_BLOCK_SERVER
STATS$CURRENT_BLOCK_SERVER
STATS$DATABASE_INSTANCE
STATS$DB_CACHE_ADVICE
STATS$DLM_MISC
STATS$DYNAMIC_REMASTER_STATS
STATS$ENQUEUE_STATISTICS
STATS$EVENT_HISTOGRAM
STATS$FILE_HISTOGRAM
STATS$FILESTATXS
STATS$IDLE_EVENT
STATS$INSTANCE_CACHE_TRANSFER
STATS$INSTANCE_RECOVERY
STATS$INTERCONNECT_PINGS
STATS$IOSTAT_FUNCTION
STATS$IOSTAT_FUNCTION_NAME
STATS$JAVA_POOL_ADVICE
STATS$LATCH
STATS$LATCH_CHILDREN
STATS$LATCH_MISSES_SUMMARY
STATS$LATCH_PARENT
STATS$LEVEL_DESCRIPTION
STATS$LIBRARYCACHE
STATS$MEMORY_DYNAMIC_COMPS
STATS$MEMORY_RESIZE_OPS
STATS$MEMORY_TARGET_ADVICE
STATS$MUTEX_SLEEP
STATS$OSSTAT
STATS$OSSTATNAME
STATS$PARAMETER
STATS$PGA_TARGET_ADVICE
STATS$PGASTAT
STATS$PROCESS_MEMORY_ROLLUP
STATS$PROCESS_ROLLUP
STATS$PROPAGATION_RECEIVER
STATS$PROPAGATION_SENDER
STATS$RESOURCE_LIMIT
STATS$ROLLSTAT
STATS$ROWCACHE_SUMMARY
STATS$RULE_SET
STATS$SEG_STAT
STATS$SEG_STAT_OBJ
STATS$SESS_TIME_MODEL
STATS$SESSION_EVENT
STATS$SESSTAT
STATS$SGA
STATS$SGA_TARGET_ADVICE
STATS$SGASTAT
STATS$SHARED_POOL_ADVICE
STATS$SNAPSHOT
STATS$SQL_PLAN
STATS$SQL_PLAN_USAGE
STATS$SQL_STATISTICS
STATS$SQL_SUMMARY
STATS$SQL_WORKAREA_HISTOGRAM
STATS$SQLTEXT
STATS$STATSPACK_PARAMETER
STATS$STREAMS_APPLY_SUM
STATS$STREAMS_CAPTURE
STATS$STREAMS_POOL_ADVICE
STATS$SYS_TIME_MODEL
STATS$SYSSTAT
STATS$SYSTEM_EVENT
STATS$TEMP_SQLSTATS
STATS$TEMPSTATXS
STATS$THREAD
STATS$TIME_MODEL_STATNAME
STATS$UNDOSTAT
STATS$WAITSTAT
Oracle Statistics
Whenever a valid SQL statement is processed, Oracle has to decide how to retrieve the necessary
data. This decision can be made using one of two methods:
Rule Based Optimizer (RBO) - This method is used if the server has no internal statistics
relating to the objects referenced by the statement. This method is no longer favoured by Oracle
and was desupported from Oracle 10g.
Cost Based Optimizer (CBO) - This method is used if internal statistics are present. The
CBO checks several possible execution plans and selects the one with the lowest cost, where cost
relates to system resources. Since Oracle 8i, the Cost Based Optimizer (CBO) has been the
preferred optimizer for Oracle.
Oracle statistics tell us the size of the tables, the distribution of values within columns, and other
important information, so that SQL statements can be given efficient execution plans. If
new objects are created, or the amount of data in the database changes, the statistics will no
longer represent the real state of the database, and the CBO decision process may be seriously
impaired.
Oracle can do things in several different ways; e.g. a select might be done by a table scan or by
using indexes. It uses statistics (a variety of counts, averages, and other numbers) to figure out
the best way to do things. It does the figuring automatically, using the Cost Based Optimizer. The
DBA's job is to make sure the numbers are good enough for that optimizer to work properly.
The term Oracle statistics may refer to the historical performance statistics that are kept in
STATSPACK or AWR, but the more common use of the term is for the optimizer (metadata)
statistics that provide the cost-based SQL optimizer with information about the nature of the tables.
The statistics mentioned here are optimizer statistics, which are created for the purposes of query
optimization and are stored in the data dictionary. These statistics should not be confused with
performance statistics visible through V$ views.
If we provide Oracle with good statistics about the schema the CBO will almost always generate an
optimal execution plan. The areas of schema analysis include:
Object statistics - Statistics for all tables, partitions, IOTs, etc should be sampled with a
deep and statistically valid sample size.
Critical columns - Those columns that are regularly referenced in SQL statements, such as:
o Heavily skewed columns - This helps the CBO properly choose between an index
range scan and a full table scan.
o Foreign key columns - For n-way table joins, the CBO needs to determine the
optimal table join order, and knowing the cardinality of the intermediate result sets is critical.
External statistics - Oracle will sample the CPU cost and I/O cost during statistics collection
and use this information to determine the optimal execution plan, based on optimizer_mode.
External statistics are most useful for SQL running in the all_rows optimizer mode.
Optimizer statistics are a collection of data that describe more details about the database and the
objects in the database. These statistics are used by the query optimizer to choose the best
execution plan for each SQL statement. Optimizer statistics include the following:
Table statistics
o Number of rows
o Number of blocks
o Average row length
Column statistics
o Number of distinct values (NDV) in column
o Number of nulls in column
o Data distribution (histogram)
Index statistics
o Number of leaf blocks
o Levels
o Clustering factor
System statistics
o I/O performance and utilization
o CPU performance and utilization
The optimizer statistics are stored in the data dictionary. They can be viewed using data dictionary
views. Only statistics stored in the dictionary itself have an impact on the cost-based optimizer.
When statistics are updated for a database object, Oracle invalidates any currently parsed SQL
statements that access the object. The next time such a statement executes, the statement is re-
parsed and the optimizer automatically chooses a new execution plan based on the new statistics.
Distributed statements accessing objects with new statistics on remote databases are not
invalidated. The new statistics take effect the next time the SQL statement is parsed.
Because the objects in a database can be constantly changing, statistics must be regularly updated
so that they accurately describe these database objects. Statistics are maintained automatically by
Oracle or we can maintain the optimizer statistics manually using the DBMS_STATS package.
DBMS_STATS package provides procedures for managing statistics. We can save and restore
copies of statistics. You can export statistics from one system and import those statistics into
another system. For example, you could export statistics from a production system to a test
system. We can lock statistics to prevent those statistics from changing.
For data warehouses and databases using the all_rows optimizer_mode, from Oracle9i Release 2
we can collect the external cpu_cost and io_cost metrics. The ability to save and re-use schema
statistics is important in cases such as:
Bi-Modal databases - Many databases get huge benefits from using two sets of stats, one
for OLTP (daytime), and another for batch (evening jobs).
Test databases - Many Oracle professionals will export their production statistics into the
development instances so that the test execution plans more closely resemble the production
database.
Creating statistics
In order to make good use of the CBO, we need to create statistics for the data in the database.
There are several options to create statistics.
GATHER_STATS_JOB
Optimizer statistics are automatically gathered with the job GATHER_STATS_JOB. This job gathers
statistics on all objects in the database which have missing statistics and stale statistics.
This job is created automatically at database creation time and is managed by the Scheduler. The
Scheduler runs this job when the maintenance window is opened. By default, the maintenance
window opens every night from 10 P.M. to 6 A.M. and all day on weekends. The
stop_on_window_close attribute controls whether the GATHER_STATS_JOB continues when the
maintenance window closes. The default setting for the stop_on_window_close attribute is TRUE,
causing Scheduler to terminate GATHER_STATS_JOB when the maintenance window closes. The
remaining objects are then processed in the next maintenance window.
If you want to disable automatic statistics gathering, disable the
GATHER_STATS_JOB as follows:
BEGIN
DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB');
END;
/
Automatic statistics gathering relies on the modification monitoring feature. If this feature is
disabled, then the automatic statistics gathering job is not able to detect stale statistics. This
feature is enabled when the STATISTICS_LEVEL parameter is set to TYPICAL (default) or ALL.
Automatic statistics gathering may not be adequate for:
Volatile tables that are being deleted or truncated and rebuilt during the course of the day.
Objects which are the target of large bulk loads which add 10% or more to the object's
total size.
For highly volatile tables, there are two approaches:
The statistics on these tables can be set to NULL. When Oracle encounters a table with no
statistics, Oracle dynamically gathers the necessary statistics as part of query optimization. This
dynamic sampling feature is controlled by the OPTIMIZER_DYNAMIC_SAMPLING parameter, and
this parameter should be set to a value of 2 or higher. The default value is 2. The statistics can be
set to NULL by deleting and then locking the statistics:
BEGIN
DBMS_STATS.DELETE_TABLE_STATS('SCOTT','EMP');
DBMS_STATS.LOCK_TABLE_STATS('SCOTT','EMP');
END;
/
The statistics on these tables can be set to values that represent the typical state of the
table. We should gather statistics on the table when the tables have a representative number of
rows, and then lock the statistics.
This is more effective than the GATHER_STATS_JOB, because any statistics generated on the table
during the overnight batch window may not be the most appropriate statistics for the daytime
workload. For tables which are being bulk-loaded, the statistics-gathering procedures should be run
on those tables immediately following the load process, preferably as part of the same script or job
that is running the bulk load.
Statistics on fixed objects, such as the dynamic performance tables, need to be manually collected
using GATHER_FIXED_OBJECTS_STATS procedure. Fixed objects record current database activity;
statistics gathering should be done when database has representative activity.
Whenever statistics in the dictionary are modified, old versions of the statistics are saved
automatically for future restoring. Statistics can be restored using the RESTORE procedures of
the DBMS_STATS package.
In some cases, we may want to prevent any new statistics from being gathered on a table or
schema by the GATHER_STATS_JOB process, such as for highly volatile tables. In those cases, the
DBMS_STATS package provides procedures for locking the statistics for a table or schema.
Scheduling Stats
Scheduling the gathering of statistics using DBMS_JOB is the easiest way to make sure they are
always up to date:
SET SERVEROUTPUT ON
DECLARE
l_job NUMBER;
BEGIN
DBMS_JOB.submit(l_job, 'BEGIN DBMS_STATS.gather_schema_stats(''SCOTT''); END;',
SYSDATE,'SYSDATE + 1');
COMMIT;
DBMS_OUTPUT.put_line('Job: ' || l_job);
END;
/
The above code sets up a job to gather statistics for the SCOTT schema at the current time every
day. We can list the current jobs on the server using the DBA_JOBS and DBA_JOBS_RUNNING views.
The preferred tool for collecting statistics used to be the ANALYZE command. Over the past few
releases, the DBMS_STATS package in the PL/SQL Packages and Types reference has taken over
the statistics functions, and left the ANALYZE command with more mundane 'health check' work
like analyzing chained rows.
Analyze Statement
The ANALYZE statement can be used to gather statistics for a specific table, index or cluster. The
statistics can be computed exactly, or estimated based on a specific number of rows, or a
percentage of rows.
The ANALYZE command is available for all versions of Oracle, however to obtain faster and better
statistics use the procedures supplied - in 7.3.4 and 8.0 DBMS_UTILITY.ANALYZE_SCHEMA, and in
8i and above - DBMS_STATS.GATHER_SCHEMA_STATS.
The ANALYZE statement can be used to create statistics for a single table, index or cluster.
Syntax:
ANALYZE table tableName {compute|estimate|delete} statistics options
ANALYZE index indexName {compute|estimate|delete} statistics options
ANALYZE cluster clusterName {compute|estimate|delete} statistics options
Note: Do not use the COMPUTE and ESTIMATE clauses of ANALYZE statement to collect optimizer
statistics. These clauses are supported solely for backward compatibility and may be removed in a
future release. The DBMS_STATS package collects a broader, more accurate set of statistics, and
gathers statistics more efficiently.
We may continue to use the ANALYZE statement for purposes not related to optimizer statistics
collection, such as listing chained rows or validating structure. The older DBMS_UTILITY
ANALYZE_SCHEMA and ANALYZE_DATABASE procedures can also gather or delete statistics:
EXEC DBMS_UTILITY.ANALYZE_SCHEMA('SCOTT','COMPUTE');
EXEC DBMS_UTILITY.ANALYZE_SCHEMA('SCOTT','ESTIMATE',ESTIMATE_ROWS=>100);
EXEC DBMS_UTILITY.ANALYZE_SCHEMA('SCOTT','ESTIMATE',ESTIMATE_PERCENT=>25);
EXEC DBMS_UTILITY.ANALYZE_SCHEMA('SCOTT','DELETE');
EXEC DBMS_UTILITY.ANALYZE_DATABASE('COMPUTE');
EXEC DBMS_UTILITY.ANALYZE_DATABASE('ESTIMATE',ESTIMATE_ROWS=>100);
EXEC DBMS_UTILITY.ANALYZE_DATABASE('ESTIMATE',ESTIMATE_PERCENT=>15);
DBMS_STATS
The DBMS_STATS package was introduced in Oracle 8i and is Oracle's preferred method of
gathering object statistics. Oracle lists a number of benefits to using it, including parallel execution,
long term storage of statistics and transfer of statistics between servers. This PL/SQL package is
also used to modify, view, export, import, and delete statistics. It follows a similar format to the
other methods.
The DBMS_STATS package can gather statistics on tables and indexes, as well as individual
columns and partitions of tables. It does not gather cluster statistics; however, we can use
DBMS_STATS to gather statistics on the individual tables instead of the whole cluster.
When we generate statistics for a table, column, or index, if the data dictionary already contains
statistics for the object, then Oracle updates the existing statistics. The older statistics are saved
and can be restored later if necessary.
Examples:
EXEC DBMS_STATS.GATHER_DATABASE_STATS;
EXEC DBMS_STATS.GATHER_DATABASE_STATS(ESTIMATE_PERCENT=>20);
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT','EMP');
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT','EMP',ESTIMATE_PERCENT=>15);
EXEC DBMS_STATS.GATHER_INDEX_STATS('SCOTT','EMP_PK');
EXEC DBMS_STATS.GATHER_INDEX_STATS('SCOTT','EMP_PK',ESTIMATE_PERCENT=>15);
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME=>'"DWH"',OPTIONS=>'GATHER
AUTO');
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME=>'PERFSTAT',CASCADE=>TRUE);
Note that certain types of index statistics are not gathered in parallel, including cluster indexes,
domain indexes, and bitmap join indexes.
Depending on the SQL statement being optimized, the optimizer can choose to use either the
partition (or subpartition) statistics or the global statistics. Both types of statistics are important
for most applications, and Oracle recommends setting the GRANULARITY parameter to AUTO to
gather both types of partition statistics.
Histograms are specified using the METHOD_OPT argument of the DBMS_STATS gathering
procedures. Oracle recommends setting the METHOD_OPT to FOR ALL COLUMNS SIZE AUTO. With
this setting, Oracle automatically determines which columns require histograms and the number of
buckets (size) of each histogram. You can also manually specify which columns should have
histograms and the size of each histogram.
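For example, both recommendations can be combined in one call (a sketch; SCOTT.SALES is a
hypothetical partitioned table):
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT','SALES',GRANULARITY=>'AUTO',METHOD_OPT=>'FOR ALL COLUMNS SIZE AUTO');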
Note: If you need to remove all rows from a table when using DBMS_STATS, use TRUNCATE
instead of dropping and re-creating the same table. When a table is dropped, workload information
used by the auto-histogram gathering feature and saved statistics history used by the
RESTORE_*_STATS procedures will be lost. Without this data, these features will not function
properly.
DML monitoring, which tracks approximate row-modification counts for stale-statistics detection,
can be switched on or off at several levels:
-- Table level
ALTER TABLE emp NOMONITORING;
ALTER TABLE emp MONITORING;
-- Schema level
EXEC DBMS_STATS.alter_schema_tab_monitoring('SCOTT', TRUE);
EXEC DBMS_STATS.alter_schema_tab_monitoring('SCOTT', FALSE);
-- Database level
EXEC DBMS_STATS.alter_database_tab_monitoring(TRUE);
EXEC DBMS_STATS.alter_database_tab_monitoring(FALSE);
User-defined Statistics
You can create user-defined optimizer statistics to support user-defined indexes and functions.
When you associate a statistics type with a column or domain index, Oracle calls the statistics
collection method in the statistics type whenever statistics are gathered for database objects.
You should gather new column statistics on a table after creating a function-based index, to allow
Oracle to collect column statistics equivalent information for the expression. This is done by calling
the statistics-gathering procedure with the METHOD_OPT argument set to FOR ALL HIDDEN
COLUMNS.
For an application in which tables are being incrementally modified, we may only need to gather
new statistics every week or every month. The simplest way to gather statistics in these
environments is to use a script or job scheduling tool to regularly run the
GATHER_SCHEMA_STATS and GATHER_DATABASE_STATS procedures. The frequency of collection
intervals should balance the task of providing accurate statistics for the optimizer against the
processing overhead incurred by the statistics collection process.
For tables which are being substantially modified in batch operations, such as with bulk loads,
statistics should be gathered on those tables as part of the batch operation. The DBMS_STATS
procedure should be called as soon as the load operation completes.
For partitioned tables, there are often cases in which only a single partition is modified. In those
cases, statistics can be gathered only on those partitions rather than gathering statistics for the
entire table. However, gathering global statistics for the partitioned table may still be necessary.
Statistics can be exported and imported from the data dictionary to user-owned tables. This
enables you to create multiple versions of statistics for the same schema. It also enables you to
copy statistics from one database to another database. You may want to do this to copy the
statistics from a production database to a scaled-down test database.
Note: Exporting and importing statistics is a distinct concept from the EXP and IMP utilities of the
database. The DBMS_STATS export and import procedures do not use EXP and IMP dump files;
they read and write a user-owned statistics table.
Before exporting statistics, you first need to create a table for holding the statistics. This statistics
table is created using the procedure DBMS_STATS.CREATE_STAT_TABLE. After this table is
created, then you can export statistics from the data dictionary into your statistics table using the
DBMS_STATS.EXPORT_*_STATS procedures. The statistics can then be imported using the
DBMS_STATS.IMPORT_*_STATS procedures.
Note that the optimizer does not use statistics stored in a user-owned table. The only statistics
used by the optimizer are the statistics stored in the data dictionary. In order to have the
optimizer use the statistics in user-owned tables, you must import those statistics into the data
dictionary using the statistics import procedures.
In order to move statistics from one database to another, you must first export the statistics on
the first database, then copy the statistics table to the second database, using the EXP and IMP
utilities or other mechanisms, and finally import the statistics into the second database.
Note: The EXP and IMP utilities export and import optimizer statistics from the database along with
the table. One exception is that statistics are not exported with the data if a table has columns
with system-generated names.
In the following example the statistics for the APPSCHEMA user are collected into a new table,
STATS_TAB, which is owned by DBASCHEMA:
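Steps 1 and 2 would look something like this (a minimal sketch using the names above):
1. Create the statistics table.
SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE('DBASCHEMA','STATS_TAB');
2. Export the schema statistics into the statistics table.
SQL> EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(OWNNAME=>'APPSCHEMA',STATTAB=>'STATS_TAB',STATOWN=>'DBASCHEMA');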
3. This table can be transferred to another server using any one of the methods below.
SQLPlus Copy:
SQL> insert into dbaschema.stats_tab select * from dbaschema.stats_tab@source;
Export/Import:
exp file=stats.dmp log=stats_exp.log tables=dbaschema.stats_tab
imp file=stats.dmp log=stats_imp.log
Datapump:
expdp directory=dpump_dir dumpfile=stats.dmp logfile=stats_exp.log tables=
dbaschema.stats_tab
impdp directory=dpump_dir dumpfile=stats.dmp logfile=stats_imp.log
4. Import statistics into the data dictionary.
EXEC DBMS_STATS.IMPORT_SCHEMA_STATS('APPSCHEMA', 'STATS_TAB', NULL, 'DBASCHEMA');
Optimizer Hints
ALL_ROWS
FIRST_ROWS
FIRST_ROWS(n)
APPEND
FULL
INDEX
DYNAMIC_SAMPLING
BYPASS_RECURSIVE_CHECK
BYPASS_RECURSIVE_CHECK APPEND
Examples:
SELECT /*+ ALL_ROWS */ empid, last_name, sal FROM emp;
SELECT /*+ FIRST_ROWS */ * FROM emp;
SELECT /*+ FIRST_ROWS(20) */ * FROM emp;
SELECT /*+ FIRST_ROWS(100) */ empid, last_name, sal FROM emp;
System Statistics
System statistics describe the system's hardware characteristics, such as I/O and CPU
performance and utilization, to the query optimizer. When choosing an execution plan, the
optimizer estimates the I/O and CPU resources required for each query. System statistics enable
the query optimizer to more accurately estimate I/O and CPU costs, enabling the query optimizer
to choose a better execution plan.
When Oracle gathers system statistics, it analyzes system activity in a specified time period
(workload statistics) or simulates a workload (noworkload statistics). The statistics are collected
using the DBMS_STATS.GATHER_SYSTEM_STATS procedure. Oracle highly recommends that you
gather system statistics.
Note: You must have DBA privileges or GATHER_SYSTEM_STATISTICS role to update dictionary
system statistics.
Oracle can gather two kinds of system statistics, workload and noworkload; these better fit the
gathering process to the physical database and workload. When workload system statistics are
gathered, noworkload system statistics will be ignored. Noworkload system statistics are
initialized to default values at the first database startup.
Workload Statistics
Workload statistics, introduced in Oracle 9i, gather single and multiblock read times, mbrc, CPU
speed (cpuspeed), maximum system throughput, and average slave throughput. The sreadtim,
mreadtim, and mbrc are computed by comparing the number of physical sequential and random
reads between two points in time from the beginning to the end of a workload. These values are
implemented through counters that change when the buffer cache completes synchronous read
requests. Since the counters are in the buffer cache, they include not only I/O delays, but also
waits related to latch contention and task switching. Workload statistics thus depend on the
activity the system had during the workload window. If the system is I/O bound (in both latch
contention and I/O throughput), this will be reflected in the statistics and will therefore promote a
less I/O intensive plan after the statistics are used. Furthermore, workload statistics gathering
does not generate additional overhead.
In Oracle release 9.2, maximum I/O throughput and average slave throughput were added to set a
lower limit for a full table scan (FTS).
To gather workload statistics, either:
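A sketch of the two approaches, using DBMS_STATS.GATHER_SYSTEM_STATS:
SQL> EXEC DBMS_STATS.GATHER_SYSTEM_STATS('start');
-- ... run a representative workload for a while ...
SQL> EXEC DBMS_STATS.GATHER_SYSTEM_STATS('stop');
or, to gather over a fixed window (here 30 minutes):
SQL> EXEC DBMS_STATS.GATHER_SYSTEM_STATS('interval', interval=>30);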
Noworkload Statistics
Noworkload statistics consist of I/O transfer speed, I/O seek time, and CPU speed (cpuspeednw).
The major difference between workload statistics and noworkload statistics lies in the gathering
method.
Noworkload statistics gather data by submitting random reads against all data files, while workload
statistics use counters updated when database activity occurs. ioseektim represents the time it
takes to position the disk head to read data. Its value usually varies from 5 ms to 15 ms,
depending on disk rotation speed and the disk or RAID specification. The I/O transfer speed
represents the speed at which one operating system process can read data from the I/O
subsystem. Its value varies greatly, from a few MBs per second to hundreds of MBs per second.
Oracle uses relatively conservative default settings for I/O transfer speed.
In Oracle 10g, Oracle uses noworkload statistics and the CPU cost model by default. The values of
noworkload statistics are
initialized to defaults at the first instance startup:
ioseektim = 10ms
iotrfspeed = 4096 bytes/ms
cpuspeednw = gathered value, varies based on system
If workload statistics are gathered, noworkload statistics will be ignored and Oracle will use
workload statistics instead. To gather noworkload statistics, run
dbms_stats.gather_system_stats() with no arguments. There will be an overhead on the I/O
system during the gathering process of noworkload statistics. The gathering process may take
from a few seconds to several minutes, depending on I/O performance and database size.
The information is analyzed and verified for consistency. In some cases, the value of a noworkload
statistic may remain at its default value. In such cases, repeat the statistics gathering process, or
set the values manually to match the I/O system's specifications by using the
dbms_stats.set_system_stats procedure.
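For example, to set the I/O seek time manually (a sketch; ioseektim is one of the system statistic
names listed above):
SQL> EXEC DBMS_STATS.SET_SYSTEM_STATS('ioseektim', 10);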
Managing Statistics
Restoring Previous Versions of Statistics
Whenever statistics in the dictionary are modified, old versions of the statistics are saved
automatically for future restoring. Statistics can be restored using the RESTORE procedures of
the DBMS_STATS package.
These procedures use a time stamp as an argument and restore statistics as of that time stamp.
This is useful in case newly collected statistics leads to some sub-optimal execution plans and the
administrator wants to revert to the previous set of statistics.There are dictionary views that
display the time of statistics modifications. These views are useful in determining the time stamp
to be used for statistics restoration.
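For example, to restore the SCOTT schema statistics to their state of 24 hours ago (a sketch;
views such as DBA_TAB_STATS_HISTORY show suitable timestamps):
SQL> EXEC DBMS_STATS.RESTORE_SCHEMA_STATS('SCOTT', SYSDATE-1);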
The other DBMS_STATS procedures related to restoring and purging statistics include:
PURGE_STATS: This procedure can be used to manually purge old versions beyond a time
stamp.
GET_STATS_HISTORY_RETENTION: This function can be used to get the current statistics
history retention value.
GET_STATS_HISTORY_AVAILABILITY: This function gets the oldest time stamp where
statistics history is available. Users cannot restore statistics to a time stamp older than the oldest
time stamp.
You should use the RESTORE procedures when:
You want to recover older versions of the statistics. For example, to restore the optimizer
behaviour to an earlier date.
You want the database to manage the retention and purging of statistics histories.
You should use EXPORT/IMPORT_*_STATS procedures when:
You want to experiment with multiple sets of statistics and change the values back and
forth.
You want to move the statistics from one database to another database. For example,
moving statistics from a production system to a test system.
You want to preserve a known set of statistics for a longer period of time than the desired
retention date for restoring statistics.
The DBMS_STATS package provides two procedures for locking and two procedures for unlocking
statistics:
LOCK_SCHEMA_STATS
LOCK_TABLE_STATS
UNLOCK_SCHEMA_STATS
UNLOCK_TABLE_STATS
EXEC DBMS_STATS.LOCK_SCHEMA_STATS('AP');
EXEC DBMS_STATS.UNLOCK_SCHEMA_STATS('AP');
Setting Statistics
We can set table, column, index, and system statistics using the SET_*_STATISTICS procedures.
Setting statistics in this manner is not recommended, because inaccurate or inconsistent statistics
can lead to poor performance.
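If needed anyway, a minimal sketch (with hypothetical values for SCOTT.EMP):
SQL> EXEC DBMS_STATS.SET_TABLE_STATS('SCOTT','EMP',NUMROWS=>1000,NUMBLKS=>100);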
Dynamic Sampling
The purpose of dynamic sampling is to improve server performance by determining more accurate
estimates for predicate selectivity and statistics for tables and indexes. The statistics for tables and
indexes include table block counts, applicable index block counts, table cardinalities, and relevant
join column statistics. These more accurate estimates allow the optimizer to produce better
performing plans.
You can use dynamic sampling to:
Estimate single-table predicate selectivities when collected statistics cannot be used or are
likely to lead to significant errors in estimation.
Estimate statistics for tables and relevant indexes without statistics.
Estimate statistics for tables and relevant indexes whose statistics are too out of date to
trust.
This dynamic sampling feature is controlled by the OPTIMIZER_DYNAMIC_SAMPLING parameter.
For dynamic sampling to automatically gather the necessary statistics, this parameter should be
set to a value of 2 (the default) or higher.
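For example, to raise the sampling level for the current session (a sketch):
SQL> ALTER SESSION SET OPTIMIZER_DYNAMIC_SAMPLING = 4;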
The primary performance attribute is compile time. Oracle determines at compile time whether a
query would benefit from dynamic sampling. If so, a recursive SQL statement is issued to scan a
small random sample of the table's blocks, and to apply the relevant single table predicates to
estimate predicate selectivities. The sample cardinality can also be used, in some cases, to
estimate table cardinality. Any relevant column and index statistics are also collected. Depending
on the value of the OPTIMIZER_DYNAMIC_SAMPLING initialization parameter, a certain number of
blocks are read by the dynamic sampling query.
For a query that normally completes quickly (in less than a few seconds), we will not want to incur
the cost of dynamic sampling. However, dynamic sampling can be beneficial when a better plan
can be found using it, when the sampling time is a small fraction of the total execution time for
the query, or when the query will be executed many times.
Viewing Statistics
Statistics on Tables, Indexes and Columns
Statistics on tables, indexes, and columns are stored in the data dictionary. To view statistics in
the data dictionary, query the appropriate data dictionary view (USER, ALL, or DBA). These DBA_*
views include the following:
· DBA_TAB_STATISTICS
· ALL_TAB_STATISTICS
· USER_TAB_STATISTICS
· DBA_TAB_COL_STATISTICS
· ALL_TAB_COL_STATISTICS
· USER_TAB_COL_STATISTICS
· DBA_TAB_HISTOGRAMS
· ALL_TAB_HISTOGRAMS
· USER_TAB_HISTOGRAMS
· DBA_TABLES
· DBA_OBJECT_TABLES
· DBA_TAB_HISTOGRAMS
· DBA_INDEXES
· DBA_IND_STATISTICS
· DBA_CLUSTERS
· DBA_TAB_PARTITIONS
· DBA_TAB_SUBPARTITIONS
· DBA_IND_PARTITIONS
· DBA_IND_SUBPARTITIONS
· DBA_PART_COL_STATISTICS
· DBA_PART_HISTOGRAMS
· DBA_SUBPART_COL_STATISTICS
· DBA_SUBPART_HISTOGRAMS
Viewing Histograms
Column statistics may be stored as histograms. These histograms provide accurate estimates of
the distribution of column data. Histograms provide improved selectivity estimates in the presence
of data skew, resulting in better execution plans for nonuniform data distributions.
Oracle uses two types of histograms for column statistics: height-balanced histograms and
frequency histograms. The type of histogram is stored in the HISTOGRAM column of the
*TAB_COL_STATISTICS views (USER and DBA). This column can have values of HEIGHT
BALANCED, FREQUENCY, or NONE.
Height-Balanced Histograms
In a height-balanced histogram, the column values are divided into bands so that each band
contains approximately the same number of rows. The useful information that the histogram
provides is where in the range of values the endpoints fall. Height-balanced histograms can be
viewed using the *TAB_HISTOGRAMS views.
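Output like the following could be produced with a query such as (hypothetical table and column):
SQL> SELECT endpoint_number, endpoint_value FROM user_tab_histograms
     WHERE table_name = 'EMP' AND column_name = 'SAL' ORDER BY endpoint_number;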
ENDPOINT_NUMBER ENDPOINT_VALUE
--------------- --------------
0 0
1 27
2 42
3 57
4 74
5 98
6 123
7 149
8 175
9 202
10 353
In the query output, one row corresponds to one bucket in the histogram.
Frequency Histograms
In a frequency histogram, each value of the column corresponds to a single bucket of the
histogram. Each bucket contains the number of occurrences of that single value. Frequency
histograms are automatically created instead of height-balanced histograms when the number of
distinct values is less than or equal to the number of histogram buckets specified. Frequency
histograms can be viewed using the *TAB_HISTOGRAMS views.
ENDPOINT_NUMBER ENDPOINT_VALUE
--------------- --------------
36 1
213 2
261 3
370 4
484 5
692 6
798 7
984 8
1112 9
Issues
Exclude dataload tables from your regular stats gathering, unless you know they will be
full at the time that stats are gathered.
Gathering stats for the SYS schema can make the system run slower, not faster.
Gathering statistics can be very resource intensive for the server so avoid peak workload
times or gather stale stats only.
Even if scheduled, it may be necessary to gather fresh statistics after database
maintenance or large data loads.
If a table goes from 1 row to 200 rows, that's a significant change. When a table goes
from 100,000 rows to 150,000 rows, that's not a terribly significant change. When a table goes
from 1000 rows all with identical values in commonly-queried column X to 1000 rows with nearly
unique values in column X, that's a significant change.
Statistics store information about item counts and relative frequencies: things that let the
optimizer "guess" how many rows will match a given criterion. When it guesses wrong, the
optimizer can pick a very suboptimal query plan.
Startup options
STARTUP [FORCE][RESTRICT][NOMOUNT][MIGRATE][QUIET]
[PFILE=file_name | SPFILE=file_name]
[MOUNT [EXCLUSIVE] database_name | OPEN READ {ONLY | WRITE [RECOVER]}
| RECOVER database_name]
STARTUP
STARTUP OPEN
STARTUP OPEN READ ONLY
STARTUP OPEN READ WRITE
STARTUP OPEN WRITE RECOVER
STARTUP OPEN RECOVER;
STARTUP OPEN database_name PFILE='/path/' PARALLEL
STARTUP NOMOUNT
STARTUP MOUNT (or STARTUP MOUNT EXCLUSIVE or STARTUP MOUNT SHARED)
STARTUP RESTRICT
STARTUP RESTRICT MOUNT
STARTUP [PFILE='/path/'] {UPGRADE | DOWNGRADE} [QUIET]
STARTUP UPGRADE
STARTUP DOWNGRADE
STARTUP MIGRATE
STARTUP FORCE (= SHUTDOWN ABORT + STARTUP)
STARTUP FORCE pfile='/path/'
STARTUP FORCE RESTRICT PFILE='/path/' OPEN [database_name]
STARTUP pfile
STARTUP pfile = '/path/'
STARTUP spfile
STARTUP spfile = '/path/'
NOMOUNT -- the parameter file (initSID.ora) or server parameter file (spfileSID.ora) is read from
$ORACLE_HOME/dbs, the background processes are started, and the shared memory and
semaphores are allocated.
MOUNT -- the control files are read and opened.
OPEN -- the datafiles and redolog files are opened.
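The three phases can also be traversed one step at a time, for example:
SQL> STARTUP NOMOUNT
SQL> ALTER DATABASE MOUNT;
SQL> ALTER DATABASE OPEN;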
Shutdown options
SHUTDOWN [NORMAL | TRANSACTIONAL | IMMEDIATE | ABORT]
SHUTDOWN NORMAL -- waits for all connected users to disconnect (the default).
SHUTDOWN TRANSACTIONAL -- waits for active transactions to complete, then disconnects sessions.
SHUTDOWN IMMEDIATE -- rolls back active transactions, disconnects sessions, and shuts down.
SHUTDOWN ABORT -- shuts the instance down immediately; instance recovery is performed at the
next startup.
Misc
SQL*Loader in Oracle
SQL*Loader
SQL*Loader (sqlldr) is the utility to use for high performance data loads. It has a powerful
data parsing engine which puts little limitation on the format of the data in the datafile. The data
can be loaded from any flat file and inserted into the Oracle database.
SQL*Loader is a bulk loader utility used for moving data from external files into the Oracle
database. SQL*Loader supports various load formats, selective loading, and multi-table loads.
SQL*Loader reads data file(s) and description of the data which is defined in the control file. Using
this information and any additional specified parameters (either on the command line or in the
PARFILE), SQL*Loader loads the data into the database.
During processing, SQL*Loader writes messages to the log file, bad rows to the bad file, and
discarded rows to the discard file.
The Control File
The SQL*Loader control file is a flat text file that contains information describing how the
data will be loaded. It contains the table name, column data types, field delimiters, bad file name,
discard file name, conditions to load, SQL functions to be applied, and may contain the data or
the infile name.
This sample control file will load an external data file containing delimited data:
load data
infile 'c:\data\emp.csv'
into table emp -- INSERT is the default
fields terminated by "," optionally enclosed by '"'
(empno, empname, sal, deptno)
Another sample control file, with in-line data formatted as fixed-length records:
load data
infile *
replace
into table departments
(dept position (02:05) char(4),
deptname position (08:27) char(20)
)
begindata
COSC COMPUTER SCIENCE
ENGL ENGLISH LITERATURE
MATH MATHEMATICS
POLY POLITICAL SCIENCE
"infile *" means, the data is within the control file; otherwise we‘ve to specify the file name and
location. The trick is to specify "*" as the name of the data file, and use BEGINDATA to start the
data section in the control file:
In the first example we will see how delimited (variable length) data can be loaded into Oracle.
Example 1:
LOAD DATA
INFILE *
CONTINUEIF THIS (1) = '*'
INTO TABLE delimited_data
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(data1 "UPPER(:data1)",
data2 "TRIM(:data2)"
data3 "DECODE(:data2, 'hello', 'goodbye', :data1)"
)
BEGINDATA
*11111,AAAAAAAAAA,hello
*22222,"A,B,C,D,",Testttt
NOTE: The default data type in SQL*Loader is CHAR(255). To load character fields longer than
255 characters, code the type and length in your control file. By doing this, Oracle will allocate a
bigger buffer to hold the entire column, thus eliminating potential "Field in data file exceeds
maximum length" errors.
e.g.:
...
resume char(4000),
...
Example 2:
LOAD DATA
INFILE 'table.dat'
INTO TABLE table_name
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(
COL1 DECIMAL EXTERNAL NULLIF (COL1=BLANKS),
COL2 DECIMAL EXTERNAL NULLIF (COL2=BLANKS),
COL3 CHAR NULLIF (COL3=BLANKS),
COL4 CHAR NULLIF (COL4=BLANKS),
COL5 CHAR NULLIF (COL5=BLANKS),
COL6 DATE "MM-DD-YYYY" NULLIF (COL6=BLANKS)
)
Numeric data should be specified as type 'external'; otherwise, it is read as characters rather than
as binary data.
Decimal numbers need not contain a decimal point; they are assumed to be integers if not
specified.
The standard format for date fields is DD-MON-YY.
The control file can also specify that records are in fixed format. A file is in fixed record format
when all records in a datafile are the same length. The control file specifies the specific starting
and ending byte location of each field. This format is harder to create and less flexible but can
yield performance benefits. A control file specifying a fixed format could look like the following.
Example 1:
LOAD DATA
INFILE *
INTO TABLE positional_data
(data1 POSITION(1:5),
data2 POSITION(6:15)
)
BEGINDATA
11111AAAAAAAAAA
22222BBBBBBBBBB
In the above example, position(01:05) will give the 1st to the 5th character (11111 and 22222).
Example 2:
LOAD DATA
INFILE 'table.dat'
INTO TABLE table_name
(
COL1 POSITION(1:4) INTEGER EXTERNAL,
COL2 POSITION(6:9) INTEGER EXTERNAL,
COL3 POSITION(11:46) CHAR,
COL4 POSITION(48:83) CHAR,
COL5 POSITION(85:120) CHAR,
COL6 POSITION(122:130) DATE "MMDDYYYY"
)
SQL*Loader Options
To check which options are available in any release of SQL*Loader, use this command:
$ sqlldr help=y
SQL*Loader provides the following options, which can be specified either on the command line or
within a parameter file:
control – The name of the control file. This file specifies the format of the data to be loaded.
log – The name of the file used by SQL*Loader to log results. The log file contains information
about the SQL*Loader execution. It should be viewed after each SQL*Loader job is completed.
Especially interesting is the summary information at the bottom of the log, including CPU time and
elapsed time. It has details like the number of lines read, the number of lines loaded, the number
of rejected lines (the full data will be in the discard file), the number of bad lines, and the actual
time taken to load the data.
bad – A file that is created when at least one record from the input file is rejected. The rejected
data records are placed in this file. A record could be rejected for many reasons, including a non-
unique key or a required column being null.
data – The name of the file that contains the data to load.
discard – The name of the file that contains the discarded rows. Discarded rows are those that fail
the WHEN clause condition when selectively loading records.
skip – [0] Allows the skipping of the specified number of logical records.
errors – [50] The number of errors to allow on the load. SQL*Loader tolerates this many
errors (50 by default). After this limit, it aborts the load and rolls back the uncommitted
inserted records.
rows – [64] The number of rows to load before a commit is issued (in conventional path).
[ALL] For direct path loads, rows is the number of rows to read from the data file before saving
the data in the datafiles. Committing less frequently (a higher value of ROWS) will improve the
performance of SQL*Loader.
bindsize – [256000] The size of the conventional path bind array in bytes. Larger bindsize will
improve the performance of SQL*Loader.
silent – Suppress messages/errors during data load. A value of ALL will suppress all load
messages. Other options include DISCARDS, ERRORS, FEEDBACK, HEADER, and PARTITIONS.
direct – [FALSE] Specifies whether or not to use a direct path load or conventional. Direct path
load (DIRECT=TRUE) will load faster than conventional.
parfile – The name of the file that contains the parameter options for SQL*Loader.
parallel – [FALSE] do parallel load. Available with direct path data loads only, this option allows
multiple SQL*Loader jobs to execute concurrently and will improve the performance.
file – Used only with parallel loads, this parameter specifies the file to allocate extents from.
skip_index_maintenance – [FALSE] Stops index maintenance for direct path loads only. Do not
maintain indexes, mark affected indexes as unusable.
commit_discontinued – [FALSE] commit loaded rows when load is discontinued. This is from
10g.
readsize – [1048576] The size of the read buffer used by SQL*Loader when reading data from
the input file. This value should match that of bindsize.
external_table – [NOT_USED] Determines whether or not any data will be loaded using external
tables. The other valid options include GENERATE_ONLY and EXECUTE.
columnarrayrows – [5000] Specifies the number of rows to allocate for direct path column
arrays.
streamsize – [256000] Specifies the size of direct path stream buffer size in bytes.
multithreading – use multithreading in direct path. The default is TRUE on multiple CPU systems
and FALSE on single CPU systems.
resumable – [FALSE] Enables and disables resumable space allocation. When TRUE, the
parameters resumable_name and resumable_timeout are utilized.
resumable_name – User defined string that helps identify a resumable statement that has been
suspended. This parameter is ignored unless resumable = TRUE.
resumable_timeout – [7200 seconds] The time period in which an error must be fixed. This
parameter is ignored unless resumable = TRUE.
no_index_errors - [FALSE] abort load on any index errors (This is from Oracle 11g release2).
_testing_ncs_to_clob – test non character scalar to character lob conversion. This is from
Oracle 10g.
_parallel_lob_load – allow direct path parallel load of lobs. This is from Oracle 10g.
_testing_server_slot_size – test with non default direct path server slot buffer size. This is
from Oracle 11g.
_testing_server_ca_rows – test with non default direct path server column array rows. This is
from Oracle 11g.
_testing_server_max_rp_ccnt – test with non default direct path max row piece columns. This
is from Oracle 11g.
Note: values within square brackets are the default values.
PLEASE NOTE: Command-line parameters may be specified either by position or by keywords. An
example of the former case is 'sqlldr scott/tiger foo'; an example of the latter is 'sqlldr control=foo
userid=scott/tiger'. One may specify parameters by position before but not after parameters
specified by keywords.
For example, 'sqlldr scott/tiger control=foo logfile=log' is allowed, but 'sqlldr scott/tiger
control=foo log' is not, even though the position of the parameter 'log' is correct.
Miscellaneous
1. To load MS-Excel data into Oracle, Open the MS-Excel spreadsheet and save it as a CSV
(Comma Separated Values) file. This file can now be copied to the Oracle machine and loaded
using the SQL*Loader utility.
2. Oracle does not supply any data unload utility (like an "SQL*Unloader") to get the data out of
the database. You can use SQL*Plus to select and format your data and then spool it to a file, or
you have to use a third party tool.
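For example, a simple comma-delimited unload with SQL*Plus (a sketch against the hypothetical
EMP table):
SQL> SET HEADING OFF
SQL> SET FEEDBACK OFF
SQL> SET PAGESIZE 0
SQL> SPOOL emp.csv
SQL> SELECT empno||','||ename||','||sal FROM emp;
SQL> SPOOL OFF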
If you are continuing a multiple table direct path load, you may need to use the CONTINUE_LOAD
clause instead of the SKIP parameter. CONTINUE_LOAD allows you to specify a different number
of rows to skip for each of the tables you are loading.
LOAD DATA
INFILE 'mail_orders.txt'
BADFILE 'bad_orders.txt'
APPEND
INTO TABLE mailing_list
FIELDS TERMINATED BY ","
(addr,
city,
state,
zipcode,
mailing_addr "decode(:mailing_addr,null, :addr, :mailing_addr)",
mailing_city "decode(:mailing_city,null, :city, :mailing_city)",
mailing_state,
move_date "substr(:move_date, 3, 2) || substr(:move_date, 7, 2)"
)
Another example:
LOAD DATA
INFILE 'mydata.dat'
REPLACE
INTO TABLE emp WHEN empno != ' '
(empno POSITION(1:4) INTEGER EXTERNAL,
ename POSITION(6:15) CHAR,
deptno POSITION(17:18) CHAR,
mgr POSITION(20:23) INTEGER EXTERNAL
)
INTO TABLE proj WHEN projno != ' '
(projno POSITION(25:27) INTEGER EXTERNAL,
empno POSITION(1:4) INTEGER EXTERNAL
)
In SQL*Loader, one cannot COMMIT only at the end of the load file, but by setting the ROWS
parameter to a large value, the number of commits can be reduced. Make sure you have big
rollback segments ready when you use a high value for ROWS.
NOTE: SQL*Loader does not allow the use of OR in the WHEN clause. You can only use AND as in
the example above! To workaround this problem, code multiple "INTO TABLE ... WHEN" clauses.
Here is an example:
LOAD DATA
INFILE 'mydata.dat' BADFILE 'mydata.bad' DISCARDFILE 'mydata.dis'
APPEND
INTO TABLE my_selective_table
WHEN (01) <> 'H' and (01) <> 'T'
(
region CONSTANT '31',
service_key POSITION(01:11) INTEGER EXTERNAL,
call_b_no POSITION(12:29) CHAR
)
INTO TABLE my_selective_table
WHEN (30:37) = '20031217'
(
region CONSTANT '31',
service_key POSITION(01:11) INTEGER EXTERNAL,
call_b_no POSITION(12:29) CHAR
)
BOUNDFILLER (available with Oracle 9i and above) can be used if the skipped column's value will
be required again later. Here is an example:
LOAD DATA
INFILE *
TRUNCATE INTO TABLE sometable
FIELDS TERMINATED BY "," trailing nullcols
(
c1,
field2 BOUNDFILLER,
field3 BOUNDFILLER,
field4 BOUNDFILLER,
field5 BOUNDFILLER,
c2 ":field2 || :field3",
c3 ":field4 + :field5"
)
LOBFILE can be used to load LOB data, such as image files, from separate files. Control file:
LOAD DATA
INFILE *
INTO TABLE image_table
REPLACE
FIELDS TERMINATED BY ','
(
image_id INTEGER(5),
file_name CHAR(30),
image_data LOBFILE (file_name) TERMINATED BY EOF
)
BEGINDATA
001,image1.gif
002,image2.jpg
003,image3.bmp
CONCATENATE 3
reads 3 rows of data for every record. The data rows are literally concatenated together so that
positions 81 to 160 are used to specify column positions for data in the second row (assuming that
the record length of the file is 80). You should also specify the record length (240 in this case) with
a reclen clause when concatenating data, that is:
reclen 240
continueif this (1:4) = 'two'
specifies that if the first four characters of the current line are 'two', the next line is a continuation
of this line. In this case the first four columns of each record are assumed to contain only a
continuation indicator and are not read as data.
When using fixed format data, the continuation character may be in the last column of the data
record. For example:
continueif last = '+'
specifies that if the last non-blank character in the line is '+', the next line is a continuation of the
current line. This method does not work with free format data because the continuation character
is read as the value of the next field.
SQL*Loader is flexible and offers many options that should be considered to maximize the speed
of data loads.
1. Use Direct Path Loads - The conventional path loader essentially loads the data by using
standard insert statements. Direct path load builds blocks of data in memory and saves these
blocks directly into the extents allocated for the table being loaded. The direct path loader
(DIRECT=TRUE) loads directly into the Oracle datafiles and creates blocks in Oracle database block
format. This will effectively bypass most of the RDBMS processing. The fact that SQL is not being
issued makes the entire process much less taxing on the database. There are certain cases in
which direct path loads cannot be used (clustered tables, for example).
To prepare the database for direct path loads, the script $ORACLE_HOME/rdbms/admin/catldr.sql
must be executed (no need to run this, if you ran catalog.sql at the time of database creation).
2. External Table Load - An External table load creates an external table for data in a datafile
and executes INSERT statements to insert the data from datafile into target table.
3. Disable Indexes and Constraints - For conventional data loads only, disabling indexes
and constraints can greatly enhance the performance of SQL*Loader; maintaining them during the
load will significantly slow down load times, even with ROWS set to a high value.
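For example, a sketch for one table (the table, constraint and index names are illustrative; with an
unusable index you may also need to run SQL*Loader with skip_unusable_indexes=true):
SQL> ALTER TABLE emp DISABLE CONSTRAINT emp_pk;
SQL> ALTER INDEX emp_ename_idx UNUSABLE;
After the SQL*Loader run completes:
SQL> ALTER INDEX emp_ename_idx REBUILD;
SQL> ALTER TABLE emp ENABLE CONSTRAINT emp_pk;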
4. Use a Larger Bind Array. For conventional data loads only, larger bind arrays limit the
number of calls to the database and increase performance. The size of the bind array is specified
using the bindsize parameter. The bind array's size is equivalent to the number of rows it contains
(rows=) times the maximum length of each row.
5. Use ROWS=n to commit less frequently. For conventional data loads only, the rows parameter
specifies the number of rows per commit. Issuing fewer commits will enhance performance.
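For example, combining tips 4 and 5 (values as used in the benchmark below):
$ sqlldr userid=scott/tiger control=emp.ctl bindsize=512000 rows=10000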
6. Use Parallel Loads. Available with direct path data loads only, this option allows multiple
SQL*Loader jobs to execute concurrently.
$ sqlldr control=first.ctl parallel=true direct=true
$ sqlldr control=second.ctl parallel=true direct=true
7. Use Fixed Width Data. Fixed width data format saves Oracle some processing when parsing
the data. The savings can be tremendous, depending on the type of data and number of rows.
8. Disable Archiving During Load. While this may not be feasible in certain environments,
disabling database archiving can increase performance considerably.
9. Use unrecoverable. The UNRECOVERABLE option (unrecoverable load data) disables the
writing of the data to the redo logs. This option is available for direct path loads only.
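In the control file, the keyword precedes LOAD DATA; a sketch (file and table names are
illustrative):
UNRECOVERABLE
LOAD DATA
INFILE 'mydata.dat'
APPEND
INTO TABLE emp
(empno POSITION(1:4) INTEGER EXTERNAL)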
Benchmarking
The following benchmark tests were performed with the various SQL*Loader options. The table
was truncated after each test.
SQL*Loader Option                              Elapsed Time (Seconds)   Time Reduction
direct=false rows=64                                    135                   -
direct=false bindsize=512000 rows=10000                  92                  32%
direct=false bindsize=512000 rows=10000
(database in noarchivelog mode)                          85                  37%
direct=true                                              47                  65%
direct=true unrecoverable                                41                  70%
direct=true unrecoverable fixed width data               41                  70%
The results above indicate that conventional path loads take the longest. However, the bindsize
and rows parameters can aid the performance under these loads. The test involving the
conventional load didn't come close to the performance of the direct path load with the
unrecoverable option specified.
It is also worth noting that the fastest import time achieved (earlier) was 67 seconds, compared to
41 for SQL*Loader direct path – a 39% reduction in execution time. This proves that SQL*Loader
can load the same data faster than import.
These tests did not compensate for indexes. All database load operations will execute faster when
indexes are disabled.
Rollback Segments
A rollback segment is just like table segments and index segments: it consists of extents, demands
space, and gets created in a tablespace. In order to perform any DML operation against a table
which is in a non-SYSTEM tablespace ('emp' in the 'users' tablespace, for example), Oracle
requires a rollback segment from a non-SYSTEM tablespace.
When a transaction is going on against a segment in a non-SYSTEM tablespace, Oracle needs a
rollback segment which is also in a non-SYSTEM tablespace. This is the reason we create a
separate tablespace just for rollback segments.
Why rollback segments?
At the time of database creation, Oracle by default creates a rollback segment named SYSTEM in
the SYSTEM tablespace, and it is ONLINE. This rollback segment can't be brought OFFLINE, since
Oracle needs it as long as the DB is up and running, and it can't be dropped either.
Only the DBA can create rollback segments (SYS is the owner); they are not accessible to
ordinary users.
SQL> CREATE ROLLBACK SEGMENT rbs-name [TABLESPACE tbs-name];
A rollback segment also has its own storage parameters, and the rules in creating an RBS are:
1. We can't define PCTINCREASE for an RBS (not even 0).
2. We have to have at least 2 as MINEXTENTS.
Apart from the regular storage parameters, rollback segments can also be defined with OPTIMAL.
It is better to create these rollback segments in a separate tablespace where no tables or indexes
exist, and to spread different rollback segments across different tablespaces, as in the example
below.
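A sketch of creating a rollback segment with storage parameters (names and sizes are
illustrative; note MINEXTENTS 2, no PCTINCREASE, and the OPTIMAL clause):
SQL> CREATE ROLLBACK SEGMENT r1 TABLESPACE rbs_ts
STORAGE (INITIAL 256K NEXT 256K MINEXTENTS 2 MAXEXTENTS 121 OPTIMAL 2M);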
Though we have created rollback segments, we have to bring them ONLINE, either by listing them
in init.ora (the ROLLBACK_SEGMENTS parameter) or by using a SQL statement:
SQL> ALTER ROLLBACK SEGMENT rbs-name ONLINE;
The number of rollback segments needed in the database is decided by the concurrent DML
activity (the number of transactions). The maximum number of rollback segments can be
defined in init.ora by the MAX_ROLLBACK_SEGMENTS parameter (until 9i).
Points to ponder:
1. Oracle strongly recommends having smaller rollback segments.
2. It's good to have not more than 4 transactions per rollback segment.
3. One transaction can only take place in one rollback segment. If there is any space problem, the
transaction fails; it cannot switch over to another rollback segment, and Oracle rolls back the
transaction.
4. One rollback segment can have multiple transactions. We can limit the maximum transactions a
rollback segment can support; for example, limit it to 10 with the init.ora parameter
TRANSACTIONS_PER_ROLLBACK_SEGMENT=10.
5. If we are having problems like "Read Inconsistencies" or "Snapshot Too Old", we can do these
things:
Increase the size of the rollback segment (so that the "wrapping" issue may not occur so
frequently).
Decrease the commit frequency, so that blocks can't be overwritten while they still belong to an
open DML.
6. The DBA should constantly observe the HWM (High Water Mark) for each rollback segment.
7. If you bounce the database, the rollback segments will be offline, unless you added the rollback
segment names to the parameter file:
ROLLBACK_SEGMENTS = r1, r2, r3, r4
8. The DBA should define the OPTIMAL value for each rollback segment. Otherwise, if the rollback
segment becomes big, it'll stay at that size, which is unwanted (as Oracle recommends smaller
RBSs); it's nice to come back to some reasonable size after growing while helping a transaction.
Even with OPTIMAL defined, the rollback segment will not shrink back to this size right after the
transaction finishes; rather, it'll wait until another transaction wants to use it. Then it becomes
smaller and again starts growing if necessary.
The biggest issue for a DBA is maintaining the rollback segments especially in a high-activity
environment.
The reasons for a transaction to fail in Oracle are:
1. The RBS is too small to carry the entire transaction - nothing but a tablespace limitation.
2. The RBS has already reached MAXEXTENTS, i.e. although the tablespace has some free space
to offer, the rollback segment can't grow anymore because it has already grabbed its
MAXEXTENTS. Or you have defined the NEXT extent size wrongly, so it has reached its
MAXEXTENTS quickly.
3. Our transaction grabbed one rollback segment and some other transaction also grabbed the
same rollback segment. In this case, our transaction couldn't find sufficient space to hold all the
before-image blocks.
V$ROLLNAME --> shows the USN and name of each RBS that is ONLINE.
V$ROLLSTAT --> shows, for each ONLINE RBS (by USN), complete details like:
1. Size of the rollback segment.
2. Whether it is carrying any transactions or not.
3. What the High-Water-Mark size is.
4. Optimal size of the RBS.
5. Wrap information and much more info about each RBS.
From Oracle 10g, you should use UNDO Segments, instead of rollback segments.
RMAN in Oracle
Oracle Recovery Manager (RMAN)
RMAN was introduced in Oracle 8 and has been enhanced in every subsequent release (Oracle 9i,
10g and 11g).
Recovery Manager (RMAN) is an Oracle-provided (free) utility for backing up, restoring and
recovering Oracle databases. RMAN ships with the Oracle database and doesn't require a separate
installation. The RMAN executable is located in the $ORACLE_HOME/bin directory.
RMAN is a Pro*C application that translates commands to a PL/SQL interface through RPC (Remote
Procedure Call). The PL/SQL calls are statically linked into the Oracle kernel and do not require
the database to be opened (they are mapped from the ?/rdbms/admin/recover.bsq file).
The RMAN environment consists of the utilities and databases that play a role in backing up our
data. At a minimum, the environment for RMAN must include the following:
The target database to be backed up.
The RMAN client (rman executable and recover.bsq), which interprets backup and recovery
commands, directs server sessions to execute those commands, and records our backup and
recovery activity in the target database control file.
Benefits of RMAN
Some of the benefits provided by RMAN include:
Backups are faster and use less tape (RMAN will skip empty blocks)
Less database archiving while the database is being backed up
RMAN checks the database for block corruption
Automated restores from the catalog
Files are written out in parallel instead of sequentially
RMAN can be operated from Oracle Enterprise Manager, or from the command line. Here are the
command-line arguments:
Argument Value Description
target quoted-string connect-string for target database
catalog quoted-string connect-string for recovery catalog
nocatalog none if specified, then no recovery catalog
cmdfile quoted-string name of input command file
log quoted-string name of output message log file
trace quoted-string name of output debugging message log file
append none if specified, log is opened in append mode
debug optional-args activate debugging
msgno none show RMAN-nnnn prefix for all messages
send quoted-string send a command to the media manager
pipe string building block for pipe names
timeout integer number of seconds to wait for pipe input
checksyntax none check the command file for syntax errors
$ rman
$ rman TARGET SYS/pwd@target
$ rman TARGET SYS/pwd@target NOCATALOG
$ rman TARGET SYS/pwd@target CATALOG rman/pwd@cat
$ rman TARGET=SYS/pwd@target CATALOG=rman/pwd@cat
$ rman TARGET SYS/pwd@target LOG $ORACLE_HOME/dbs/log/rman_log.log APPEND
$ rman TARGET / CATALOG rman/pwd@cat
$ rman TARGET / CATALOG rman/pwd@cat CMDFILE cmdfile.rcv LOG outfile.txt
$ rman CATALOG rman/pwd@cat
$ rman @/my_dir/my_commands.txt
Start by creating a database schema (usually named rman) in the catalog database. Assign an
appropriate tablespace to it and grant it the recovery_catalog_owner role.
$ sqlplus "/as sysdba"
SQL> create user rman identified by rman default tablespace rmants quota unlimited on rmants;
SQL> grant resource, recovery_catalog_owner to rman;
No need to grant connect role explicitly, because recovery_catalog_owner role has it.
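Then connect to the catalog database as that user and create the base catalog (the connect
string is illustrative):
$ rman CATALOG rman/rman@catdb
RMAN> CREATE CATALOG;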
Log in to the catalog database as the virtual private catalog user (here, vpc) and create the
virtual private catalog.
$ rman catalog vpc/vpc
RMAN> CREATE VIRTUAL CATALOG;
RMAN> exit;
$ sqlplus vpc/vpc
SQL>exec rman.DBMS_RCVCAT.CREATE_VIRTUAL_CATALOG;
RMAN commands:
ADVISE FAILURE - Display repair options for the specified failures. 11g R1 command.
ALLOCATE - Establish a channel, which is a connection between RMAN and a database instance.
ALTER DATABASE - Mount or open a database.
BACKUP - Back up database, tablespaces, datafiles, control files, spfile, archive logs.
BLOCKRECOVER - Recover the corrupted blocks.
CATALOG - Add information about file copies and user-managed backups to the catalog repository.
CHANGE - Update the status of a backup in the RMAN repository.
CONFIGURE - Change RMAN settings.
CONNECT - Establish a connection between RMAN and a target, auxiliary, or recovery catalog
database.
CONVERT - Convert datafile formats for transporting tablespaces and databases across platforms.
CREATE CATALOG - Create the base/virtual recovery catalog.
CREATE SCRIPT - Create a stored script and store it in the recovery catalog.
CROSSCHECK - Check whether backup items still exist or not.
DELETE - Delete backups from disk or tape.
DELETE SCRIPT - Delete a stored script from the recovery catalog.
DROP CATALOG - Remove the base/virtual recovery catalog.
DROP DATABASE - Delete the target database from disk and unregister it.
DUPLICATE - Use backups of the target database to create a duplicate database that we can use
for testing purposes or to create a standby database.
EXECUTE SCRIPT - Run an RMAN stored script.
EXIT or QUIT - Exit/quit the RMAN console.
FLASHBACK DATABASE - Return the database to its state at a previous time or SCN.
GRANT - Grant privileges to a recovery catalog user.
HOST - Invoke an operating system command-line subshell from within RMAN or run a specific
operating system command.
IMPORT CATALOG - Import the metadata from one recovery catalog into another recovery catalog.
LIST - List backups and copies.
PRINT SCRIPT - Display a stored script.
RECOVER - Apply redo logs or incremental backups to a restored backup set in order to recover it
to a specified time.
REGISTER - Register the target database in the recovery catalog.
RELEASE CHANNEL - Release a channel that was allocated.
REPAIR FAILURE - Repair database failures identified by the Data Recovery Advisor. 11g R1
command.
REPLACE SCRIPT - Replace an existing script stored in the recovery catalog. If the script does not
exist, then REPLACE SCRIPT creates it.
REPORT - Report backup status - database, files, backups.
RESET DATABASE - Inform RMAN that the SQL statement ALTER DATABASE OPEN RESETLOGS has
been executed and that a new incarnation of the target database has been created, or reset the
target database to a prior incarnation.
RESTORE - Restore files from RMAN backup.
RESYNC CATALOG - Perform a full resynchronization, which creates a snapshot control file and
then copies any new or changed information from that snapshot control file to the recovery
catalog.
REVOKE - Revoke privileges from a recovery catalog user.
RUN - Run a set of RMAN commands; only some RMAN commands are valid inside a RUN block.
SEND - Send a vendor-specific quoted string to one or more specific channels.
SET - Settings for the current RMAN session.
SHOW - Display the current configuration.
SHUTDOWN - Shut down the database.
SPOOL - Direct RMAN output to a log file.
SQL - Execute a PL/SQL procedure or SQL statement (not SELECT).
STARTUP - Start up the database.
SWITCH - Specify that a datafile copy is now the current datafile, that is, the datafile pointed to by
the control file.
TRANSPORT TABLESPACE - Create transportable tablespace sets from backup for one or more
tablespaces.
UNREGISTER - Unregister a database from the recovery catalog.
UPGRADE CATALOG - Upgrade the recovery catalog schema from an older version to the version
required by the RMAN executable.
VALIDATE - Validate backups/the database. 11g R1 command.
All RMAN commands are executed through channels. A channel is a connection (session) from
RMAN to the target database. These connections, or channels, are used to perform the desired
operations.
rman
[ TARGET [=] ['] [userid][/[password]][@net_service_name] [']
| CATALOG [=] ['] [userid][/[password]][@net_service_name] [']
| LOG [=] [']filename['] [APPEND]
...
]...
$ rman
$ rman NOCATALOG
$ rman TARGET SYS/pwd@target
$ rman TARGET SYS/pwd@target NOCATALOG
$ rman TARGET SYS/pwd@target LOG $ORACLE_HOME/dbs/my_log.log APPEND
$ rman CATALOG rman/pwd@catdb
$ rman TARGET=SYS/pwd@target CATALOG=rman/pwd@cat
$ rman TARGET / CATALOG rman/rman@cat
$ rman TARGET / SCRIPT dwh LOG /tmp/dwh.log
$ rman PIPE newpipe TARGET / TIMEOUT 90
$ rman @/my_dir/my_commands.txt
$ rman @backup_ts_generic.rman "/tmp" USERS
$ rman CMDFILE=backup_ts_users.rman
$ rman TARGET / @backup_db.rman
$ rman TARGET / CATALOG rman/pwd@cat CMDFILE cmdfile.rcv LOG outfile.txt
$ rman TARGET / CATALOG rman/pwd@cat DEBUG TRACE trace.log
$ rman TARGET SYS/pwd@prod CATALOG rman/rman@rcat @'/oracle/dbs/whole.rcv'
$ rman TARGET user/pwd CMDFILE=takefulldb.cmd @@takefulldb.cmd
$ rman CHECKSYNTAX @'/tmp/backup_db.cmd'
$ rman MSGNO
$ rman | tee rman.log
$ rman help=yes
CONNECT command
Establish a connection between RMAN and a target, auxiliary, or recovery catalog database.
RMAN> CONNECT TARGET;
RMAN> CONNECT TARGET /
RMAN> CONNECT TARGET sys@tgt;
RMAN> CONNECT TARGET sys/pwd@tgt;
RMAN> CONNECT CATALOG rman@catdb;
RMAN> CONNECT CATALOG rman/pwd@catdb;
RMAN> CONNECT AUXILIARY /
RMAN> CONNECT AUXILIARY rman@auxdb;
RMAN> CONNECT AUXILIARY rman/pwd@auxdb;
CREATE CATALOG command
Create Oracle schema for the recovery catalog.
RMAN> CREATE CATALOG;
RMAN> CREATE CATALOG TABLESPACE rmants;
RMAN> CREATE VIRTUAL CATALOG; -- Oracle 11g R1
SQL> EXEC rman.DBMS_RCVCAT.CREATE_VIRTUAL_CATALOG; -- Oracle 11g R1
RMAN> SQL "EXEC catown.DBMS_RCVCAT.CREATE_VIRTUAL_CATALOG"; -- Oracle 11g R1
REGISTER command
Register the target database in the recovery catalog.
RMAN> REGISTER DATABASE;
RMAN> REGISTER CATALOG;
RMAN> REGISTER CATALOG TABLESPACE tbs-name;
UNREGISTER command
Unregister an Oracle database from the recovery catalog.
RMAN> UNREGISTER DATABASE;
RMAN> UNREGISTER DATABASE NOPROMPT;
RMAN> UNREGISTER DATABASE prod1;
RMAN> UNREGISTER DATABASE prod2 NOPROMPT;
RMAN> UNREGISTER DB_UNIQUE_NAME prod2;
RMAN> UNREGISTER DB_UNIQUE_NAME prod1 NOPROMPT;
RMAN> UNREGISTER DB_UNIQUE_NAME prod2 INCLUDING BACKUPS;
RMAN> UNREGISTER DB_UNIQUE_NAME prod3 INCLUDING BACKUPS NOPROMPT;
GRANT command
Grant privileges to a recovery catalog user.
RMAN> GRANT CATALOG FOR DATABASE prod1 TO vpc1; -- Oracle 11g R1
RMAN> GRANT REGISTER DATABASE TO bckop2;
RMAN> GRANT RECOVERY_CATALOG_OWNER TO rmanop1, rmanop3;
REVOKE command
Revoke privileges from a recovery catalog user.
RMAN> REVOKE CATALOG FOR DATABASE prod1 FROM vpc1; -- Oracle 11g R1
RMAN> REVOKE REGISTER DATABASE FROM bckop2;
RMAN> REVOKE RECOVERY_CATALOG_OWNER FROM bckop;
STARTUP command
Startup the target database. This command is equivalent to the SQL*Plus STARTUP command.
RMAN> STARTUP;
RMAN> STARTUP PFILE='/u01/app/oracle/admin/pfile/initsid.ora';
RMAN> STARTUP NOMOUNT;
RMAN> STARTUP MOUNT;
RMAN> STARTUP FORCE;
RMAN> STARTUP FORCE DBA;
RMAN> STARTUP FORCE DBA PFILE=c:\Oracle\Admin\pfile\init.ora;
RMAN> STARTUP FORCE NOMOUNT;
RMAN> STARTUP FORCE MOUNT DBA PFILE=/tmp/inittrgt.ora;
RMAN> STARTUP AUXILIARY nomount;
SHUTDOWN command
Shutdown the target database. This command is equivalent to the SQL*Plus
SHUTDOWN command.
RMAN> SHUTDOWN;
RMAN> SHUTDOWN NORMAL;
RMAN> SHUTDOWN TRANSACTIONAL;
RMAN> SHUTDOWN IMMEDIATE;
RMAN> SHUTDOWN ABORT;
SHOW command
Display the current CONFIGURE settings.
SHOW
{ RETENTION POLICY
| BACKUP OPTIMIZATION
| [DEFAULT] DEVICE TYPE
| CONTROLFILE AUTOBACKUP [FORMAT]
| [AUXILIARY] CHANNEL [FOR DEVICE TYPE deviceSpecifier]
| MAXSETSIZE
| DATAFILE BACKUP COPIES
| ARCHIVELOG [BACKUP COPIES|DELETION POLICY]
| AUXNAME
| EXCLUDE
| ENCRYPTION {ALGORITHM | FOR [DATABASE|TABLESPACE]}
| COMPRESSION ALGORITHM
| SNAPSHOT CONTROLFILE NAME
| DB_UNIQUE_NAME
| ALL
} FOR [DB_UNIQUE_NAME ['db_unique_name' | ALL]];
CONFIGURE command
To configure persistent RMAN settings. These settings apply to all RMAN sessions until explicitly
changed or disabled.
CONFIGURE deviceConf;
CONFIGURE backupConf;
CONFIGURE AUXNAME FOR DATAFILE datafileSpec {TO 'filename' | CLEAR};
CONFIGURE SNAPSHOT CONTROLFILE NAME {TO 'filename' | CLEAR};
CONFIGURE cfauConf;
CONFIGURE ARCHIVELOG DELETION POLICY
{CLEAR | TO {APPLIED ON [ALL] STANDBY | BACKED UP integer TIMES TO DEVICE TYPE
deviceSpecifier | NONE | SHIPPED TO [ALL] STANDBY}
[{APPLIED ON [ALL] STANDBY | BACKED UP integer TIMES TO DEVICE TYPE deviceSpecifier |
NONE | SHIPPED TO [ALL] STANDBY}] …
}
deviceConf::=
{ DEFAULT DEVICE TYPE { TO deviceSpec | CLEAR }
| DEVICE TYPE deviceSpec { PARALLELISM integer | CLEAR }
| [AUXILIARY] CHANNEL [integer] DEVICE TYPE deviceSpec {allocOperandList|CLEAR}
}
allocOperandList::=
{ PARMS [=] 'channel_parms'
| FORMAT [=] 'format_string' [, 'format_string']...
| { MAXPIECESIZE [=] integer | RATE [=] integer } [K | M | G]
...
}...
connectStringSpec::=
['] [userid] [/[password]] [@net_service_name] [']
backupConf::=
{RETENTION POLICY {TO {RECOVERY WINDOW OF integer DAYS
| REDUNDANCY [=] integer | NONE
}
| CLEAR
}
| MAXSETSIZE {TO {integer [K | M | G]| UNLIMITED}
| CLEAR
}
| {ARCHIVELOG | DATAFILE}
BACKUP COPIES FOR DEVICE TYPE deviceSpec {TO integer | CLEAR}
| BACKUP OPTIMIZATION {ON | OFF | CLEAR}
| EXCLUDE FOR TABLESPACE tablespace_name [CLEAR]
}
cfauConf::==
CONTROLFILE AUTOBACKUP {ON | OFF | CLEAR | FORMAT FOR DEVICE TYPE deviceSpec {TO
'format string'|CLEAR}}
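A few representative examples (values are illustrative):
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/orabak/%U';
RMAN> CONFIGURE BACKUP OPTIMIZATION ON;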
SET command
Set the value of various attributes that affect RMAN behaviour for the duration of a RUN block or a
session.
set_rman_option::=
{ ECHO {ON|OFF} | DBID [=] integer
| CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE deviceSpec TO 'frmt_string'
}
set_run_option::=
{ NEWNAME FOR DATAFILE datafileSpec TO {'filename' | NEW}
| ARCHIVELOG DESTINATION TO 'log_archive_dest'
| untilClause
| COMMAND ID TO 'string'
| CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE deviceSpec TO 'frmt_string'
...
}
ECHO - Controls whether RMAN commands are displayed in the message log.
DBID - A unique 32-bit identification number computed when the database is created. RMAN
displays the DBID upon connection to the target database. We can obtain the DBID by querying
V$DATABASE or RC_DATABASE.
NEWNAME FOR DATAFILE - The default name for all subsequent RESTORE or SWITCH commands
that affect the specified datafile.
MAXCORRUPT FOR DATAFILE - A limit on the number of previously undetected physical block
corruptions that Oracle will allow in the datafile(s).
AUTOLOCATE - Force RMAN to automatically discover which nodes of an Oracle Real Application
Clusters configuration contain the backups that you want to restore.
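For example (the DBID and file name are illustrative):
RMAN> SET ECHO ON;
RMAN> SET DBID 1090770270;
RMAN> run {
SET NEWNAME FOR DATAFILE 1 TO '/newdisk/system01.dbf';
RESTORE DATAFILE 1;
SWITCH DATAFILE ALL;
}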
BACKUP command
Backs up Oracle database files, copies of database files, archived logs, or backup sets.
Options::=
[backupOperand [backupOperand]...] backupSpec [backupSpec]...
[PLUS ARCHIVELOG [backupSpecOperand [backupSpecOperand]...]];
backupOperand::=
{ FORMAT [=] 'format_string' [, 'format_string']...
| CHANNEL ['] channel_id [']
| CUMULATIVE
| MAXSETSIZE [=] integer [K | M | G]
| TAG [=] ['] tag_name [']
| keepOption
| SKIP {OFFLINE | READONLY | INACCESSIBLE}
| VALIDATE
| NOT BACKED UP [SINCE TIME [=] 'date_string']
| COPIES [=] integer
| DEVICE TYPE deviceSpecifier
...
}
backupSpec::=
[(]
{ BACKUPSET
{ ALL | completedTimeSpec | primary_key [, primary_key]... }
| COPY OF { DATABASE
| TABLESPACE ['] tablespace_name ['] [, ['] tablespace_name [']]...
| DATAFILE datafileSpec [, datafileSpec]...
}
| DATAFILE datafileSpec [, datafileSpec]...
| DATAFILECOPY 'filename' [, 'filename']...
| DATAFILECOPY FROM TAG [=] ['] tag_name ['] [, ['] tag_name [']]...
| DATAFILECOPY { ALL | LIKE 'string_pattern' }
| TABLESPACE ['] tablespace_name ['] [, ['] tablespace_name [']]...
| DATABASE
| archivelogRecordSpecifier
| CURRENT CONTROLFILE [FOR STANDBY]
| CONTROLFILECOPY 'filename'
| SPFILE
}
[backupSpecOperand [backupSpecOperand]...]
backupSpecOperand::=
{ FORMAT [=] 'format_string' [, 'format_string']...
| CHANNEL ['] channel_id [']
| CUMULATIVE
| MAXSETSIZE [=] integer [K | M | G]
| TAG [=] ['] tag_name [']
| keepOption
| SKIP {OFFLINE | READONLY | INACCESSIBLE}
| NOT BACKED UP [SINCE TIME [=] 'date_string' | integer TIMES]
| DELETE [ALL] INPUT
...
}
BACKUP set
RMAN> BACKUP BACKUPSET ALL;
RMAN> BACKUP BACKUPSET ALL FORMAT = '/u01/.../backup_%u.bak';
RMAN> BACKUP BACKUPSET COMPLETED BEFORE 'SYSDATE-3' DELETE INPUT;
RMAN> BACKUP DEVICE TYPE sbt BACKUPSET COMPLETED BEFORE 'SYSDATE-14' DELETE INPUT;
RMAN> BACKUP COPIES 2 DEVICE TYPE sbt BACKUPSET ALL;
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;
RMAN> BACKUP AS COMPRESSED BACKUPSET DEVICE TYPE DISK COPIES 2 DATABASE FORMAT
'/disk1/db_%U', '/disk2/db_%U';
RMAN> BACKUP AS COMPRESSED BACKUPSET INCREMENTAL FROM SCN 4111140000000
DATABASE TAG 'RMAN_RECOVERY';
RMAN> BACKUP AS BACKUPSET DATAFILE
'$ORACLE_HOME/oradata/users01.dbf','$ORACLE_HOME/oradata/tools01.dbf';
RMAN> BACKUP AS BACKUPSET DATAFILECOPY ALL;
RMAN> BACKUP AS BACKUPSET DATAFILECOPY ALL NODUPLICATES;
IMAGE copy
RMAN> BACKUP AS COPY DATABASE;
RMAN> BACKUP AS COPY COPY OF DATABASE FROM TAG 'test' CHECK LOGICAL TAG 'duptest';
RMAN> BACKUP AS COPY TABLESPACE 8;
RMAN> BACKUP AS COPY TABLESPACE test;
RMAN> BACKUP AS COPY TABLESPACE system, tools, users, undotbs;
RMAN> BACKUP AS COPY DATAFILE 1;
RMAN> BACKUP AS COPY DATAFILE 2 FORMAT '/disk2/df2.cpy' TAG my_tag;
RMAN> BACKUP AS COPY CURRENT CONTROLFILE;
RMAN> BACKUP AS COPY CURRENT CONTROLFILE FORMAT '/...';
RMAN> BACKUP AS COPY ARCHIVELOG ALL;
RMAN> BACKUP AS COPY KEEP FOREVER NOLOGS CURRENT CONTROLFILE FORMAT
'?/oradata/cf_longterm.cpy',DATAFILE 1 FORMAT '?/oradata/df1_longterm.cpy', DATAFILE 2
FORMAT '?/oradata/df2_longterm.cpy';
RMAN> BACKUP AS COPY DATAFILECOPY 'bar' FORMAT 'foobar';
RMAN> BACKUP AS COPY DATAFILECOPY '/disk2/df2.cpy' FORMAT '/disk1/df2.cpy';
RMAN> BACKUP AS COPY REUSE TARGETFILE '/u01/oracle/11.2.0.2/dbs/orapwcrd' AUXILIARY
FORMAT '/u01/oracle/11.2.0.2/dbs/orapwcrd';
RMAN> BACKUP AS COPY CURRENT CONTROLFILE FOR STANDBY AUXILIARY format
'+DATA/crd/data1/control01.ctl';
Incremental backups
RMAN> BACKUP INCREMENTAL LEVEL=0 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL=1 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL=2 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 2 CUMULATIVE DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 2 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL=0 DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE SKIP INACCESSIBLE DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr' DATABASE;
RMAN> BACKUP DEVICE TYPE DISK INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG
'oltp' DATABASE;
RMAN> BACKUP DEVICE TYPE DISK INCREMENTAL FROM SCN 351986 DATABASE FORMAT
'/tmp/incr_standby_%U';
RMAN> BACKUP INCREMENTAL FROM SCN 629184 DATAFILE 5 FORMAT '/tmp/ForStandby_%U'
TAG 'FORSTANDBY';
LIST command
Produce a detailed listing of backup sets or copies.
LIST
{ INCARNATION [OF DATABASE [[']database_name[']]]
| [EXPIRED] {listObjectSpec
[ maintQualifier | RECOVERABLE [untilClause] ]... | recordSpec}
};
listObjectSpec::=
{BACKUP [OF listObjectList] [listBackupOption] | COPY [OF listObjectList] |
archivelogRecordSpecifier}
listObjectList::=
[ DATAFILE datafileSpec [, datafileSpec]...
| TABLESPACE [']tablespace_name['] [, [']tablespace_name[']]...
| archivelogRecordSpecifier
| DATABASE [SKIP TABLESPACE [']tablespace_name['] [, [']tablespace_name[']] ...]
| CONTROLFILE
| SPFILE
]...
listBackupOption::=
[[BY BACKUP] [VERBOSE] | SUMMARY | BY {BACKUP SUMMARY|FILE}]
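For example:
RMAN> LIST BACKUP;
RMAN> LIST BACKUP SUMMARY;
RMAN> LIST BACKUP OF DATABASE;
RMAN> LIST COPY OF DATAFILE 2;
RMAN> LIST ARCHIVELOG ALL;
RMAN> LIST INCARNATION;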
REPORT command
Report backup status: database, files, and backups. Perform detailed analyses of the content of
the recovery catalog.
REPORT
{{NEED BACKUP [{INCREMENTAL | DAYS} [=] integer | REDUNDANCY [=] integer | RECOVERY
WINDOW OF integer DAYS]
| UNRECOVERABLE
}
reportObject
| SCHEMA [atClause]
| OBSOLETE [obsOperandList]
}
[DEVICE TYPE deviceSpecifier [,deviceSpecifier]... ]
reportObject::=
[ DATAFILE datafileSpec [, datafileSpec]...
| TABLESPACE [']tablespace_name['] [, [']tablespace_name[']] ...
| DATABASE [SKIP TABLESPACE [']tablespace_name['] [, [']tablespace_name[']] ...]
]
atClause::=
{AT TIME [=] 'date_string' | AT SCN [=] integer|AT SEQUENCE [=] integer THREAD [=] integer
}
obsOperandList::=
[REDUNDANCY [=] integer | RECOVERY WINDOW OF integer DAYS | ORPHAN]...
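For example:
RMAN> REPORT NEED BACKUP;
RMAN> REPORT NEED BACKUP DAYS 3;
RMAN> REPORT OBSOLETE;
RMAN> REPORT SCHEMA;
RMAN> REPORT UNRECOVERABLE;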
CHANGE command
Update the status of a backup in the RMAN repository. Mark a backup piece, image copy, or
archived redo log as having the status UNAVAILABLE or AVAILABLE; remove the repository record
for a backup or copy; override the retention policy for a backup or copy; update the recovery
catalog with the DB_UNIQUE_NAME for the target database.
listObjList::=
[DATAFILE datafileSpec [, datafileSpec]...
| TABLESPACE ['] tablespace_name ['] [, ['] tablespace_name [']]...
| archivelogRecordSpecifier
| DATABASE [SKIP TABLESPACE [']tablespace_name['] [, [']tablespace_name[']] ...]
| CONTROLFILE
| SPFILE
]...
recordSpec::=
{{BACKUPPIECE | PROXY}
{'media_handle' [, 'media_handle']... | primary_key [, primary_key]... | TAG [=] ['] tag_name [']
}
| BACKUPSET primary_key [, primary_key]...
| {CONTROLFILECOPY | DATAFILECOPY}
{{primary_key [, primary_key]... | 'filename' [, 'filename']...}
| TAG [=] ['] tag_name ['] [, ['] tag_name [']]...
}
| ARCHIVELOG {primary_key [, primary_key]... | 'filename' [, 'filename']...}
}
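For example (the backup set key, tag and file name are illustrative):
RMAN> CHANGE BACKUPSET 121 UNAVAILABLE;
RMAN> CHANGE BACKUPSET 121 AVAILABLE;
RMAN> CHANGE BACKUP TAG 'weekly' UNAVAILABLE;
RMAN> CHANGE DATAFILECOPY '/disk2/df2.cpy' UNCATALOG;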
CROSSCHECK command
Check whether files managed by RMAN, such as archived logs, datafile copies, and backup pieces,
still exist on disk or tape.
CROSSCHECK
{{BACKUP [OF listObjList] | COPY [OF listObjList] | archivelogRecordSpecifier} [maintQualifier
[maintQualifier]...]
| recordSpec [DEVICE TYPE deviceSpecifier [, deviceSpecifier]...]
};
listObjList::=
[ DATAFILE datafileSpec [, datafileSpec]...
| TABLESPACE ['] tablespace_name ['] [, ['] tablespace_name [']]...
| archivelogRecordSpecifier
| DATABASE [SKIP TABLESPACE [']tablespace_name['] [, [']tablespace_name[']] ...]
| CONTROLFILE
| SPFILE
]...
recordSpec::=
{{ BACKUPPIECE | PROXY }
{ 'media_handle' [, 'media_handle']...| primary_key [, primary_key]... | TAG [=] ['] tag_name
['] }
| BACKUPSET primary_key [, primary_key]...
| { CONTROLFILECOPY | DATAFILECOPY }
{ {primary_key [, primary_key]... | 'filename' [, 'filename']...}
| TAG [=] ['] tag_name ['] [, ['] tag_name [']]...
}
| ARCHIVELOG { primary_key [, primary_key]... | 'filename' [, 'filename']... }
}
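For example:
RMAN> CROSSCHECK BACKUP;
RMAN> CROSSCHECK COPY;
RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> CROSSCHECK BACKUPSET 1338, 1339;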
SQL command
Execute a SQL statement from within Recovery Manager.
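For example:
RMAN> SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
RMAN> SQL "ALTER TABLESPACE users OFFLINE IMMEDIATE";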
RESTORE command
Restore files from backup sets or from disk copies to the default or a new location.
RESTORE
[(] restoreObject [(restoreSpecOperand [restoreSpecOperand]...] [)]...
[ CHANNEL ['] channel_id [']
| PARMS [=] 'channel_parms'
| FROM { BACKUPSET | DATAFILECOPY }
| untilClause
| FROM TAG [=] ['] tag_name [']
| VALIDATE
| DEVICE TYPE deviceSpecifier [, deviceSpecifier]...
]...;
restoreObject::=
{ CONTROLFILE [TO 'filename']
| DATABASE [SKIP [FOREVER] TABLESPACE [']tablespace_name['] [, [']tablespace_name[']] ...]
| DATAFILE datafileSpec [, datafileSpec]...
| TABLESPACE ['] tablespace_name ['] [, ['] tablespace_name [']]...
| archivelogRecordSpecifier
| SPFILE [TO [PFILE] 'filename']
}
restoreSpecOperand::=
{ CHANNEL ['] channel_id ['] | FROM TAG [=] ['] tag_name ['] | PARMS [=] 'channel_parms'
| FROM {AUTOBACKUP [{MAXSEQ | MAXDAYS} [=] integer)]... | 'media_handle'}
}
RMAN> RESTORE DATABASE;
RMAN> RESTORE DATABASE VALIDATE;
RMAN> RESTORE DATABASE PREVIEW;
RMAN> RESTORE DATABASE PREVIEW SUMMARY;
RMAN> RESTORE DATABASE SKIP TABLESPACE temp, history;
RMAN> RESTORE DATABASE UNTIL SCN 154876;
Restore the control file, (to all locations specified in the parameter file) then restore the database,
using that control file:
STARTUP NOMOUNT;
RUN
{
ALLOCATE CHANNEL c1 DEVICE TYPE sbt;
RESTORE CONTROLFILE;
ALTER DATABASE MOUNT;
RESTORE DATABASE;
}
RECOVER command
Perform media recovery from RMAN backups and copies. Apply redo log files and incremental
backups to datafiles or data blocks restored from backup or datafile copies, to update them to a
specified time.
recoverObject::=
{ DATABASE
[ untilClause
| [untilClause] SKIP [FOREVER] TABLESPACE [']tablespace_name['] [, [']tablespace_name[']] ...]
| TABLESPACE [']tablespace_name['] [, [']tablespace_name[']]...
| DATAFILE datafileSpec [, datafileSpec]...
}
recoverOptionList::=
{ DELETE ARCHIVELOG [MAXSIZE {integer [K | M | G]}]
| CHECK READONLY
| NOREDO
| {FROM TAG | ARCHIVELOG TAG} [=] ['] tag_name [']
...
}
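For example (the tag is illustrative):
RMAN> RECOVER DATABASE;
RMAN> RECOVER DATABASE NOREDO;
RMAN> RECOVER TABLESPACE users;
RMAN> RECOVER DATAFILE 4;
RMAN> RECOVER COPY OF DATABASE WITH TAG 'incr';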
DELETE command
Delete backups and copies, remove references to them from the recovery catalog, and update
their control file records to status DELETED.
recordSpec::=
{ { BACKUPPIECE | PROXY }
{ 'media_handle' [, 'media_handle']...| primary_key [, primary_key]...| TAG [=] ['] tag_name
['] }
| BACKUPSET primary_key [, primary_key]...
| { CONTROLFILECOPY | DATAFILECOPY }
{ {primary_key [, primary_key]... | 'filename' [, 'filename']...}
| TAG [=] ['] tag_name ['] [, ['] tag_name [']]...
}
| ARCHIVELOG { primary_key [, primary_key]... | 'filename' [, 'filename']... }
listObjectList::=
[ DATAFILE datafileSpec [, datafileSpec]...
| TABLESPACE ['] tablespace_name ['] [, ['] tablespace_name [']]...
| archivelogRecordSpecifier
| DATABASE [SKIP TABLESPACE [']tablespace_name['] [, [']tablespace_name[']] ...]
| CONTROLFILE
| SPFILE
]...
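For example (the backup set key is illustrative):
RMAN> DELETE OBSOLETE;
RMAN> DELETE EXPIRED BACKUP;
RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';
RMAN> DELETE BACKUPSET 101;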
DUPLICATE command
Use backups of the target database to create a duplicate database that we can use for testing
purposes or to create a standby database.
RMAN> DUPLICATE TARGET DATABASE;
RMAN> DUPLICATE TARGET DATABASE TO dwhdb;
RMAN> DUPLICATE TARGET DATABASE TO test PFILE=/u01/apps/db/inittest.ora;
RMAN> DUPLICATE TARGET DATABASE TO devdb NOFILENAMECHECK;
RMAN> DUPLICATE DATABASE 'prod' DBID 139525561 TO 'dupdb' NOFILENAMECHECK;
RMAN> DUPLICATE DATABASE TO "cscp" NOFILENAMECHECK BACKUP LOCATION
'/apps/oracle/backup';
RMAN> DUPLICATE TARGET DATABASE TO dup FROM ACTIVE DATABASE NOFILENAMECHECK
PASSWORD FILE SPFILE;
SWITCH command
Specify that a datafile copy is now the current datafile, i.e. the datafile pointed to by the control
file. This command is equivalent to the SQL statement ALTER DATABASE RENAME FILE as it
applies to datafiles.
RMAN> SWITCH DATABASE TO COPY;
RMAN> SWITCH TABLESPACE users TO COPY;
RMAN> SWITCH DATAFILE ALL;
RMAN> SWITCH DATAFILE '/disk1/tols.dbf' TO DATAFILECOPY '/disk2/tols.copy';
RMAN> SWITCH DATAFILE "+DG_OLD/db/datafile/sysaux.260.699468081" TO COPY;
RMAN> SWITCH TEMPFILE 1;
RMAN> SWITCH TEMPFILE 1 TO '/newdisk/dbs/temp1.f';
RMAN> SWITCH TEMPFILE ALL;
RMAN> SWITCH CLONE DATAFILE ALL;
CATALOG command
Add information about file copies and user-managed backups to the catalog repository.
RMAN> CATALOG DATAFILECOPY '/disk1/old_datafiles/01_10_2009/users01.dbf';
RMAN> CATALOG DATAFILECOPY '/disk2/backup/users01.bkp' LEVEL 0;
RMAN> CATALOG CONTROLFILECOPY '/disk3/backup/cf_copy.bkp';
RMAN> CATALOG ARCHIVELOG '/disk1/arch1_731.dbf', '/disk1/arch1_732.dbf';
RMAN> CATALOG BACKUPPIECE '/disk1/c-874220581-20090428-01';
RMAN> CATALOG LIKE '/backup';
RMAN> CATALOG START WITH '/fs2/arch';
RMAN> CATALOG START WITH '/disk2/archlog' NOPROMPT;
RMAN> CATALOG START WITH '+dg2';
RMAN> CATALOG RECOVERY AREA;
ALLOCATE command
Establish a channel, which is a connection between RMAN and a database instance.
RMAN> ALLOCATE CHANNEL c1 DEVICE TYPE sbt;
RMAN> ALLOCATE CHANNEL ch DEVICE TYPE DISK FORMAT 'C:\ORACLEBKP\DB_%U';
RMAN> ALLOCATE CHANNEL t1 DEVICE TYPE DISK CONNECT 'sys/pwd@bkp1';
RMAN> ALLOCATE CHANNEL c1 DEVICE TYPE sbt PARMS
'ENV=(OB_MEDIA_FAMILY=wholedb_mf)';
RMAN> ALLOCATE CHANNEL t1 DEVICE TYPE sbt PARMS 'ENV=(OB_DEVICE_1=tape1,
OB_DEVICE_2=tape3)';
RMAN> ALLOCATE CHANNEL t1 TYPE 'sbt_tape'
PARMS='SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so.1';
RMAN> ALLOCATE CHANNEL t1 TYPE 'sbt_tape' SEND
"NB_ORA_CLIENT=CLIENT_MACHINE_NAME";
RMAN> ALLOCATE CHANNEL 'dev1' TYPE 'sbt_tape' PARMS 'ENV=(OB2BARTYPE=ORACLE8,
OB2APPNAME=ORCL, OB2BARLIST=MACHINENAME_ORCL_ARCHLOGS)';
RMAN> ALLOCATE CHANNEL y1 TYPE DISK RATE 70M;
RMAN> ALLOCATE AUXILIARY CHANNEL ac1 TYPE DISK;
RMAN> ALLOCATE AUXILIARY CHANNEL ac2 DEVICE TYPE sbt;
BLOCKRECOVER command
Will recover the corrupted blocks.
RMAN> BLOCKRECOVER CORRUPTION LIST;
RMAN> BLOCKRECOVER DATAFILE 8 BLOCK 22;
RMAN> BLOCKRECOVER DATAFILE 7 BLOCK 233,235 DATAFILE 4 BLOCK 101;
RMAN> BLOCKRECOVER DATAFILE 2 BLOCK 12,13 DATAFILE 3 BLOCK 5,98,99 DATAFILE 4 BLOCK
19;
RMAN> BLOCKRECOVER DATAFILE 3 BLOCK 2,4,5 TABLESPACE sales DBA 4194405,4194412
FROM DATAFILECOPY;
RMAN> BLOCKRECOVER TABLESPACE dwh DBA 4194404,4194405 FROM TAG "weekly";
RMAN> BLOCKRECOVER TABLESPACE dwh DBA 94404 RESTORE UNTIL TIME 'SYSDATE-2';
VALIDATE command
Examine a backup set and report whether its data is intact. RMAN scans all of the backup pieces in
the specified backup sets and looks at the checksums to verify that the contents can be
successfully restored.
RMAN> VALIDATE BACKUPSET 218;
RMAN> VALIDATE BACKUPSET 3871, 3890;
RMAN> VALIDATE DATABASE; -- 11g R1
RMAN> VALIDATE CHECK LOGICAL DATABASE;
RMAN> VALIDATE SKIP INACCESSIBLE DATABASE;
RMAN> VALIDATE COPY OF DATABASE;
RMAN> VALIDATE TABLESPACE dwh;
RMAN> VALIDATE COPY OF TABLESPACE dwh;
RMAN> VALIDATE DATAFILE 2;
RMAN> VALIDATE DATAFILE 4,8;
RMAN> VALIDATE DATAFILE 4 BLOCK 56;
RMAN> VALIDATE DATAFILE 8 SECTION SIZE = 200M;
RMAN> VALIDATE CURRENT CONTROLFILE;
RMAN> VALIDATE SPFILE;
RMAN> VALIDATE RECOVERY FILES;
RMAN> VALIDATE RECOVERY AREA;
RMAN> VALIDATE DB_RECOVERY_FILE_DEST;
SPOOL command
Write RMAN output to a log file.
RMAN> SPOOL LOG TO '/tmp/spool.log';
RMAN> SPOOL LOG TO '/tmp/backup.log' APPEND;
RMAN> SPOOL LOG OFF;
RUN command
Execute a sequence of one or more RMAN commands, which are one or more statements executed
within the braces of RUN.
RMAN> run {
ALLOCATE CHANNEL c1 TYPE DISK FORMAT '/orabak/%U';
BACKUP TABLESPACE users;
}
RMAN> run {
ALLOCATE CHANNEL c1 TYPE DISK FORMAT '&1/%U';
BACKUP TABLESPACE &2;
}
RMAN> run
{
ALLOCATE CHANNEL dev1 DEVICE TYPE DISK FORMAT '/fs1/%U';
ALLOCATE CHANNEL dev2 DEVICE TYPE DISK FORMAT '/fs2/%U';
BACKUP(TABLESPACE system,fin,mark FILESPERSET 20) (DATAFILE 2,3,6);
}
CREATE SCRIPT command
Create a stored script and store it in the recovery catalog.
RMAN> CREATE SCRIPT df {BACKUP DATAFILE &1 TAG &2.1 FORMAT '/disk1/&3_%U';}
RMAN> CREATE SCRIPT backup_ts_users FROM FILE 'backup_ts_users.rman';
RMAN> CREATE GLOBAL SCRIPT gl_backup_db {BACKUP DATABASE PLUS ARCHIVELOG;}
RMAN> CREATE GLOBAL SCRIPT backup_db
COMMENT "back up any database from the recovery catalog, with logs"
{BACKUP DATABASE;}
CONVERT command
Convert datafile formats for transporting tablespaces and databases across platforms.
RMAN> CONVERT DATABASE NEW DATABASE 'prodwin' TRANSPORT SCRIPT
'/tmp/convertdb/transportscript' TO PLATFORM 'Microsoft Windows IA (32-bit)'
DB_FILE_NAME_CONVERT '/disk1/oracle/dbs','/tmp/convertdb';
RMAN> CONVERT DATABASE ON DESTINATION PLATFORM CONVERT SCRIPT
'/tmp/convertdb/convertscript.rman' TRANSPORT SCRIPT '/tmp/convertdb/transportscript.sql' NEW
DATABASE 'prodwin' FORMAT '/tmp/convertdb/%U';
RMAN> CONVERT DATABASE ON DESTINATION PLATFORM CONVERT SCRIPT
'/tmp/convert_newdb.rman' TRANSPORT SCRIPT '/tmp/transport_newdb.sql' NEW DATABASE
'prodaix' DB_FILE_NAME_CONVERT '/u01/oradata/datafile','+DATA';
EXIT or QUIT command
Exit/quit the RMAN console.
RMAN> exit;
RMAN> quit;
SEND command
Send a vendor-specific quoted string to one or more specific channels.
RMAN> SEND 'OB_DEVICE tape1';
HOST command
Invoke an operating system command-line subshell from within RMAN or run a specific operating
system command.
RMAN> HOST;
RMAN> HOST 'ls -lt /disk2/*';
RMAN> HOST '/bin/mv $ORACLE_HOME/dbs/*.arc /disk2/archlog/';
RDA (Remote Diagnostic Agent)
RDA is a utility focused on collecting information that will aid in problem diagnosis. When logging a
call, Oracle Support will often request that you install the RDA utility, run it and upload the output
to Metalink for analysis.
It's not only a great tool for troubleshooting but also very helpful for documenting an Oracle
environment.
RDA offers lots of reporting options, is relatively unobtrusive and provides easy-to-read results.
You can run it on just about any version of the Database or Oracle Applications or Operating
System and it is smart enough to figure out where to go and what to gather.
Once you have installed RDA and run rda.sh or rda.pl, you have to answer some questions and
send it off to gather information about your environment. As a result you will get a lot of TXT and
HTML files. The simplest way of reviewing the output files is to launch a web browser on the same
machine where rda.sh has run and open the file RDA__START.htm located in the RDA_Output
directory. If you pull up the RDA__START.htm, you can browse through information about your
database, server, Java, applications tier, forms and just about anything else you ever wanted to
know. And it's all nicely formatted in HTML with drill-down links.
Download the patch from Metalink, FTP it to the database box and unzip it.
To run RDA
$ ./rda.sh -vdt or $ perl rda.pl
[[Answer bundle of questions]]
For more options, read the README_UNIX.txt or README_WINDOWS.txt in the installation
directory.
Examples:
./rda.sh -f -y -e RPT_GROUP='XD',ORACLE_SID=$ORACLE_SID,ORACLE_HOME=`grep -v -e
^[#,*,+] /etc/oratab | grep $ORACLE_SID |cut -f2 -
d:`,SQL_LOGIN='/',SQL_SYSDBA=1,ASM_ORACLE_SID=`grep ASM /etc/oratab |cut -f1 -
d:`,ASM_ORACLE_HOME=`grep ASM /etc/oratab |cut -f2 -d:` -p Exadata_Assessment
./rda.sh -p Exadata_Assessment
./rda.sh -vT ora600:`grep -i ora-00600 $B_D_D |cut -f1 -d:`
./rda.sh -p advanced DBM
./rda.sh -p Exadata_FailedDrives
./rda.sh ONET
./rda.sh -f -y -e RPT_GROUP='XD2',ORACLE_SID=$ORACLE_SID,ORACLE_HOME=`grep -v -e
^[#,*,+] /etc/oratab | grep $ORACLE_SID |cut -f2 -
d:`,SQL_LOGIN='/',SQL_SYSDBA=1,ASM_ORACLE_SID=`grep ASM /etc/oratab |cut -f1 -
d:`,ASM_ORACLE_HOME=`grep ASM /etc/oratab |cut -f2 -d:` ONET
./rda.sh -T oraddc
./rda.sh -f -y -e RPT_GROUP='XD2',ORACLE_SID=$ORACLE_SID,ORACLE_HOME=`grep -v -e
^[#,*,+] /etc/oratab | grep $ORACLE_SID |cut -f2 -
d:`,SQL_LOGIN='/',SQL_SYSDBA=1,ASM_ORACLE_SID=`grep ASM /etc/oratab |cut -f1 -
d:`,ASM_ORACLE_HOME=`grep ASM /etc/oratab |cut -f2 -d:` -T oraddc
./rda.sh EXA
./rda.sh -f -y -e RPT_GROUP='XD2',ORACLE_SID=$ORACLE_SID,ORACLE_HOME=`grep -v -e
^[#,*,+] /etc/oratab | grep $ORACLE_SID |cut -f2 -
d:`,SQL_LOGIN='/',SQL_SYSDBA=1,ASM_ORACLE_SID=`grep ASM /etc/oratab |cut -f1 -
d:`,ASM_ORACLE_HOME=`grep ASM /etc/oratab |cut -f2 -d:` EXA
./rda.sh -p Exadata_SickCell
./rda.sh -p Exadata_IbSwitch
./rda.sh -p Exadata_NetworkCabling
Recycle bin
Recycle bin in Oracle
Recycle bin is actually a data dictionary table containing information about dropped objects. When
an object has been dropped from a locally managed tablespace (LMTS), which is not the SYSTEM
tablespace, the database does not immediately delete the object & reclaim the space associated
with the object. Instead, it places the object and any dependent objects in the recycle bin, which is
similar to deleting a file/folder from Windows/Macintosh. You can then restore the object, its data
and its dependent objects from the recycle bin.
The FLASHBACK DROP feature relies on the recycle bin: a dropped object is placed in the recycle
bin and can later be restored from it with FLASHBACK TABLE ... TO BEFORE DROP. This eliminates
the need to perform a point-in-time recovery operation.
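A minimal example (the table name matches the recycle bin listing shown below):
SQL> DROP TABLE attachment;
SQL> FLASHBACK TABLE attachment TO BEFORE DROP;
or, restoring it under a different name:
SQL> FLASHBACK TABLE attachment TO BEFORE DROP RENAME TO attachment_old;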
When objects are dropped, the objects are not moved from the tablespace they were in earlier;
they still occupy the space there. The recycle bin is merely a logical structure that catalogs the
dropped objects.
When the recycle bin is enabled, dropped tables and their dependent objects are placed in the
recycle bin, renamed using the convention BIN$unique_id$version, where:
unique_id is a 26-character globally unique identifier for this object, which makes the
recycle bin name unique across all databases
version is a version number assigned by the database
Use the following command to see recycle bin contents:
SQL> SELECT * FROM RECYCLEBIN;
or
SQL> SHOW RECYCLEBIN
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
------------- ------------------------ ---------- -----------------
ATTACHMENT BIN$Wk/N7nbuC2DgRAAAd7F0UA==$0 TABLE 2008-10-28:11:46:55
This shows the original name of the table, as well as the new name in the bin.
Note that users can see only objects that they own in the recycle bin.
Remember, placing tables in the recycle bin does not free up space in the original tablespace. To
free the space, you need to purge the bin using:
SQL> PURGE RECYCLEBIN;
But what if you want to drop the table completely, without needing the flashback feature? In that
case, you can drop it permanently using:
SQL> DROP TABLE table-name PURGE;
This is similar to SHIFT+DELETE in Windows. This command will not rename the table to the
recycle bin name; rather, it will be deleted permanently, as it would have been before Oracle 10g.
You can query objects that are in the recycle bin, just as you can query other objects. However,
you must specify the name of the object as it is identified in the recycle bin.
SQL> SELECT * FROM "BIN$W1PPyhVRSbuv6g+V69OgRQ==$0";
If the tables are not really dropped in this process, and therefore the space is not released, what
happens when the dropped objects take up all of that space?
When a tablespace is completely filled up with recycle bin data such that the datafiles have to
extend to make room for more data, the tablespace is said to be under "space pressure." In that
scenario, objects are automatically purged from the recycle bin in a first-in-first-out manner. The
dependent objects (such as indexes) are removed before a table is removed.
Similarly, space pressure can occur with user quotas as defined for a particular tablespace. The
tablespace may have enough free space, but the user may be running out of his or her allotted
portion of it. In such situations, Oracle automatically purges objects belonging to that user in that
tablespace.
In addition, there are several ways you can manually control the recycle bin. If you want to purge
a specific table from the recycle bin after its drop, you can issue:
SQL> PURGE TABLE table-name;
This command will remove the table and all dependent objects such as indexes, constraints, and
so on from the recycle bin, saving some space.
If you want to permanently drop an index from the recycle bin, you can do so using:
SQL> PURGE INDEX index-name;
This will remove the index only, leaving the copy of the table in the recycle bin. Sometimes it
might be useful to purge at a higher level. For instance, you may want to purge all the objects in
recycle bin in a tablespace.
You would issue:
SQL> PURGE TABLESPACE tablespace-name;
You may want to purge only the recycle bin for a particular user in that tablespace. This approach
could come in handy in data warehouse environments where users create and drop many transient
tables. You can modify the command above to limit the purge to a specific user only:
SQL> PURGE TABLESPACE tablespace-name USER user-name;
The PURGE TABLESPACE command only removes recyclebin segments belonging to the currently
connected user. Therefore, it may not remove all the recyclebin segments in the tablespace. You
can determine which users have recyclebin segments in a target tablespace using the following
query:
SQL> SELECT DISTINCT owner FROM dba_recyclebin WHERE ts_name = 'tablespace-name';
You can then use the above PURGE TABLESPACE command to purge the segments for each of the
users.
A normal user, such as SCOTT, could clear his own recycle bin with
SQL> PURGE RECYCLEBIN;
The PURGE DBA_RECYCLEBIN command can be used only if you have SYSDBA system privileges.
It removes all objects from the recycle bin, regardless of user.
Note: When a table is retrieved from the recycle bin, all the dependent objects for the table that
are in the recycle bin are retrieved with it. They cannot be retrieved separately.
The un-drop feature brings the table back with its original name, but not the associated objects
like indexes and triggers, which are left with their recycle bin names. Objects such as views and
procedures defined on the table are not recompiled and remain in the invalid state. These old
names must be retrieved manually, and the objects renamed and reapplied to the flashed-back
table.
A few types of dependent objects are not handled like the simple index above.
o Bitmap join indexes are not put in the recyclebin when their base table is DROPped, and
not retrieved when the table is restored with FLASHBACK DROP.
o The same goes for materialized view logs; when you drop a table, all mview logs defined
on that table are permanently dropped, not put in the recyclebin.
o Referential integrity constraints that reference another table are lost when the table is put
in the recyclebin and then restored.
If space limitations force Oracle to start purging objects from the recyclebin, it purges indexes
first.
The constraint names are also not retrievable from the view. They have to be renamed from other
sources.
When you drop a tablespace including its contents, the objects in the tablespace are not placed in
the recycle bin and the database purges any entries in the recycle bin for objects located in the
tablespace.
The database also purges any recycle bin entries for objects in a tablespace when you drop the
tablespace, not including contents, and the tablespace is otherwise empty. Likewise:
When you drop a user, any objects belonging to the user are not placed in the recycle bin
and any objects in the recycle bin are purged.
When you drop a cluster, its member tables are not placed in the recycle bin and any
former member tables in the recycle bin are purged.
When you drop a type, any dependent objects such as subtypes are not placed in the
recycle bin and any former dependent objects in the recycle bin are purged.
When the recycle bin is disabled, dropped tables and their dependent objects are not placed in the
recycle bin; they are just dropped, and you must use other means to recover them (such as
recovering from backup). Disabling the recycle bin does not purge or otherwise affect objects
already in the recycle bin.
Related Views
RECYCLEBIN$ (base table)
DBA_RECYCLEBIN
USER_RECYCLEBIN
RECYCLEBIN (synonym for USER_RECYCLEBIN)
Profiles
Profiles in Oracle
Profiles were introduced in Oracle 8.
Profiles are named sets of resource limits, used to limit the system resources a user can use. They
allow us to regulate the amount of resources used by each database user by creating and
assigning profiles to them.
Whenever you create a database, one default profile will be created and assigned to all the users
you create. The name of this default profile is DEFAULT.
Kernel Resources
sessions_per_user -- Maximum concurrent sessions allowed for a user.
cpu_per_session -- Maximum CPU time limit per session, in hundredths of a second.
cpu_per_call -- Maximum CPU time limit per call (a call being a parse, execute, or fetch), in
hundredths of a second.
connect_time -- Maximum connect time per session, in minutes.
idle_time -- Maximum idle time before user is disconnected, in minutes.
logical_reads_per_session -- Maximum blocks read per session.
logical_reads_per_call -- Maximum blocks read per call.
private_sga -- Maximum amount of private space in SGA.
composite_limit -- Sum of cpu_per_session, connect_time, logical_reads_per_session and
private_sga.
In order to enforce above kernel resource limits, init parameter resource_limit must be set to true.
SQL> ALTER SYSTEM SET resource_limit=TRUE SCOPE=BOTH;
Password limiting functionality is not affected by this parameter.
Password Resources
failed_login_attempts -- Maximum failed login attempts before the account is locked.
password_life_time -- Maximum lifetime of a password, in days.
password_grace_time -- Grace period, in days, for changing an expired password.
password_reuse_time -- Number of days before a password can be reused.
password_reuse_max -- Number of password changes required before a password can be reused.
password_lock_time -- Number of days the account is locked after the maximum failed login
attempts is reached.
password_verify_function -- PL/SQL function that checks new passwords for sufficient complexity.
Notes:
If a session exceeds one of these limits, Oracle will terminate the session. If there is a logoff
trigger, it won't be executed.
In order to track password limits, Oracle stores the history of passwords for a user in
USER_HISTORY$.
Creating Profiles
In Oracle, the default cost assigned to a resource is unlimited. By setting resource limits, you can
prevent users from performing operations that will tie up the system and prevent other users from
performing operations. You can use resource limits for security to ensure that users log off the
system and do not leave the sessions connected for long periods of time.
e.g.:
SQL> CREATE PROFILE onsite LIMIT
PASSWORD_LIFE_TIME 45
PASSWORD_GRACE_TIME 12
PASSWORD_REUSE_TIME 3
PASSWORD_REUSE_MAX 5
FAILED_LOGIN_ATTEMPTS 4
PASSWORD_LOCK_TIME 2
CPU_PER_CALL 5000
PRIVATE_SGA 250K
LOGICAL_READS_PER_CALL 2000;
Following is the create profile statement for DEFAULT profile (with all default values):
SQL> CREATE PROFILE "DEFAULT" LIMIT
CPU_PER_SESSION UNLIMITED
CPU_PER_CALL UNLIMITED
CONNECT_TIME UNLIMITED
IDLE_TIME UNLIMITED
SESSIONS_PER_USER UNLIMITED
LOGICAL_READS_PER_SESSION UNLIMITED
LOGICAL_READS_PER_CALL UNLIMITED
PRIVATE_SGA UNLIMITED
COMPOSITE_LIMIT UNLIMITED
PASSWORD_LIFE_TIME 180
PASSWORD_GRACE_TIME 7
PASSWORD_REUSE_MAX UNLIMITED
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_LOCK_TIME 1
FAILED_LOGIN_ATTEMPTS 10
PASSWORD_VERIFY_FUNCTION NULL;
Assigning Profiles
By default, when you create a user, they are assigned the DEFAULT profile. If you don't want
that, specify the profile in the CREATE USER statement:
CREATE USER user-name IDENTIFIED BY password PROFILE profile-name;
e.g. SQL> CREATE USER satya IDENTIFIED BY satya PROFILE onsite;
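An existing user can be moved to another profile with ALTER USER, e.g.:
SQL> ALTER USER satya PROFILE DEFAULT;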
Dropping Profiles
Syntax for dropping a profile, without dropping users:
DROP PROFILE profile-name [CASCADE];
e.g. SQL> DROP PROFILE onsite CASCADE;
If you want to remove the password verify function from a profile, assign the NULL value to
PASSWORD_VERIFY_FUNCTION:
ALTER PROFILE profile-name LIMIT PASSWORD_VERIFY_FUNCTION NULL;
Related Views
profile$
profname$
DBA_PROFILES
RESOURCE_COST (shows the unit cost associated with each resource)
USER_RESOURCE_LIMITS (each user can find information on his resources and limits)
Password file (orapwd utility) in Oracle
Oracle password file stores passwords for users with administrative privileges.
If the DBA wants to start up an Oracle instance there must be a way for Oracle to authenticate the
DBA. Obviously, DBA password cannot be stored in the database, because Oracle cannot access
the database before the instance is started up. Therefore, the authentication of the DBA must
happen outside of the database. There are two distinct mechanisms to authenticate the DBA:
(i) Using the password file or
(ii) Through the operating system (groups). Any OS user in the dba group can log in as SYSDBA.
REMOTE_LOGIN_PASSWORDFILE
The init parameter REMOTE_LOGIN_PASSWORDFILE specifies whether a password file is used to
authenticate the Oracle DBA or not. If it is set either to SHARED or EXCLUSIVE, the password file
will be used.
NONE - Oracle ignores the password file if it exists i.e. no privileged connections are allowed over
non secure connections. If REMOTE_LOGIN_PASSWORDFILE is set to EXCLUSIVE or SHARED and
the password file is missing, this is equivalent to setting REMOTE_LOGIN_PASSWORDFILE to
NONE.
EXCLUSIVE (default) - Password file is exclusively used by only one (instance of the) database.
Any user can be added to the password file. Only an EXCLUSIVE file can be modified. EXCLUSIVE
password file enables you to add, modify, and delete users. It also enables you to change the SYS
password with the ALTER USER command.
SHARED - The password file is shared among databases. A SHARED password file can be used by
multiple databases running on the same server, or multiple instances of an Oracle Real Application
Clusters (RAC) database. However, the only user that can be added/authenticated is SYS.
A SHARED password file cannot be modified i.e. you cannot add users to a SHARED password file.
Any attempt to do so or to change the password of SYS or other users with the SYSDBA or
SYSOPER or SYSASM (this is from Oracle 11g) privileges generates an error. All users needing
SYSDBA or SYSOPER or SYSASM system privileges must be added to the password file when
REMOTE_LOGIN_PASSWORDFILE is set to EXCLUSIVE. After all users are added, you can change
REMOTE_LOGIN_PASSWORDFILE to SHARED.
This option is useful if you are administering multiple databases or a RAC database.
Whether a password file is SHARED or EXCLUSIVE is also stored in the password file itself. After its
creation, the state is SHARED. The state can be changed by setting REMOTE_LOGIN_PASSWORDFILE and
starting the database, i.e. the database overwrites the state in the password file when it is started
up.
ORAPWD
You can create a password file using the orapwd utility. For some operating systems, you can create
this file as part of the standard installation.
Users are added to the password file when they are granted the SYSDBA or SYSOPER or SYSASM
privilege.
The Oracle orapwd utility assists the DBA while granting SYSDBA, SYSOPER and SYSASM privileges
to other users. By default, SYS is the only user that has SYSDBA and SYSOPER privileges. Creating
a password file, via orapwd, enables remote users to connect with administrative privileges.
Examples:
$ orapwd file=orapwSID password=sys_password force=y nosysdba=y
$ orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=secret
$ orapwd file=orapwprod entries=30 force=y
C:\> orapwd file=%ORACLE_HOME%\database\PWD%ORACLE_SID%.ora password=2012 entries=20
C:\> orapwd file=D:\oracle11g\product\11.1.0\db_1\database\pwdsfs.ora password=id entries=6 force=y
$ orapwd file=orapwPRODB3 password=abc123 entries=10 ignorecase=n
$ orapwd file=orapwprodb password=oracle1 ignorecase=y
FILE
Name to assign to the password file, which will hold the password information. You must supply the
complete path; if you supply only a filename, the file is written to the current directory. The contents
are encrypted and unreadable. This argument is mandatory.
The filenames allowed for the password file are OS specific. Some operating systems require the
password file to adhere to a specific format and be located in a specific directory. Other operating
systems allow the use of environment variables to specify the name and location of the password
file.
If you are running multiple instances of Oracle Database using Oracle Real Application Clusters
(RAC), the environment variable for each instance should point to the same password file.
It is critically important to secure the password file.
PASSWORD
This is the password the privileged users should enter while connecting as SYSDBA or SYSOPER or
SYSASM.
ENTRIES
Entries specify the maximum number of distinct SYSDBA, SYSOPER and SYSASM users that can be
stored in the password file.
This argument specifies the number of entries that you require the password file to accept. The
actual number of allowable entries can be higher than the number of users, because
the orapwd utility continues to assign password entries until an OS block is filled. For example, if
your OS block size is 512 bytes, it holds four password entries. The number of password entries
allocated is always a multiple of four.
Entries can be reused as users are added to and removed from the password file. When you
exceed the allocated number of password entries, you must create a new password file. To avoid
this necessity, allocate a number of entries that is larger than you think you will ever need.
FORCE
(Optional) If Y, permits overwriting an existing password file. An error will be returned if password
file of the same name already exists and this argument is omitted or set to N.
IGNORECASE
(Optional) If Y, passwords are treated as case-insensitive i.e. case is ignored when comparing the
password that the user supplies during login with the password in the password file.
NOSYSDBA
(Optional) For Oracle Data Vault installations.
Use the V$PWFILE_USERS view to see the users who have been granted SYSDBA or SYSOPER or
SYSASM system privileges for a database.
Column Description
USERNAME - This column contains the name of the user that is recognized by the password file.
SYSDBA - If the value of this column is TRUE, then the user can log on with the SYSDBA system privilege.
SYSOPER - If the value of this column is TRUE, then the user can log on with the SYSOPER system privilege.
SYSASM - If the value of this column is TRUE, then the user can log on with the SYSASM system privilege.
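For example (a sketch; the rows returned depend on the grants recorded in your password file):
SQL> SELECT * FROM V$PWFILE_USERS;

USERNAME                       SYSDBA SYSOPER SYSASM
------------------------------ ------ ------- ------
SYS                            TRUE   TRUE    FALSE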
If orapwd has not yet been executed or password file is not available, attempting to grant SYSDBA
or SYSOPER or SYSASM privileges will result in the following error:
SQL> grant sysdba to satya;
ORA-01994: GRANT failed: cannot add users to public password file
If your server is using an EXCLUSIVE password file, use the GRANT statement to grant the
SYSDBA or SYSOPER or SYSASM system privilege to a user, as shown in the following example:
SQL> grant sysdba to satya;
When you grant SYSDBA or SYSOPER or SYSASM privileges to a user, that user's name and
privilege information are added to the password file. If the server does not have an EXCLUSIVE
password file (i.e. if the initialization parameter REMOTE_LOGIN_PASSWORDFILE is NONE or
SHARED, or the password file is missing), Oracle issues an error if you attempt to grant these
privileges.
Use the REVOKE statement to revoke the SYSDBA or SYSOPER or SYSASM system privilege from a
user, as shown in the following example:
SQL> revoke sysoper from satya;
A user's name remains in the password file only as long as that user has at least one of these
three privileges. If you revoke all three privileges, Oracle removes the user from the password file.
Because SYSDBA, SYSOPER and SYSASM are the most powerful database privileges, the WITH
ADMIN OPTION is not used in the GRANT statement. That is, the grantee cannot in turn grant the
SYSDBA or SYSOPER or SYSASM privilege to another user. Only a user currently connected as
SYSDBA can grant or revoke another user's SYSDBA or SYSOPER or SYSASM system privileges.
These privileges cannot be granted to roles, because roles are available only after database
startup.
If you receive the file full error (ORA-01996) when you try to grant SYSDBA or SYSOPER or
SYSASM system privileges to a user, you must create a larger password file and regrant the
privileges to the users.
ADRCI (Automatic Diagnostic Repository Command Interpreter) in Oracle
$ adrci
adrci> HELP [COMMAND]
adrci> help
HELP [topic]
Available Topics:
CREATE REPORT
ECHO
EXIT
HELP
HOST
IPS
PURGE
RUN
SET BASE
SET BROWSER
SET CONTROL
SET ECHO
SET EDITOR
SET HOMES | HOME | HOMEPATH
SET TERMOUT
SHOW ALERT
SHOW BASE
SHOW CONTROL
SHOW HM_RUN
SHOW HOMES | HOME | HOMEPATH
SHOW INCDIR
SHOW INCIDENT
SHOW PROBLEM
SHOW REPORT
SHOW TRACEFILE
SPOOL
adrci> help set browser
adrci> help ips get remote keys
adrci> help merge file
adrci> help register incident file
adrci> EXIT
adrci> exit
adrci> QUIT
adrci> quit
adrci> HOST ["host_command_string"]
adrci> host
adrci> host "ls -l *.sh"
adrci> host "vi tailalert.adrci"
adrci> IPS ADD [INCIDENT inc_id | PROBLEM prob_id | PROBLEMKEY problem_key | SECONDS
secs | TIME start_time TO end_time] PACKAGE pkg_id
adrci> ips add incident 22 package 33
adrci> ips add problem 3 package 1
adrci> ips add seconds 60 package 9
adrci> ips add time '2011-10-01 10:00:00.00 -07:00' to '2011-10-01 23:00:00.00 -
07:00' package 7
adrci> IPS COPY IN FILE filename [TO new_name][OVERWRITE] PACKAGE pkgid [INCIDENT incid]
adrci> ips copy in file /home/sona/trace/orcl_ora_63175.trc to
ADR_HOME/trace/orcl_ora_63175.trc package 11 incident 6
adrci> ips copy in file /tmp/key_file.txt to new_file.txt overwrite package 12
adrci> IPS PACK [INCIDENT inc_id | PROBLEM prob_id | PROBLEMKEY prob_key | SECONDS secs |
TIME start_time TO end_time] [CORRELATE {BASIC|TYPICAL|ALL}] [IN path]
adrci> ips pack
adrci> ips pack incident 41
adrci> ips pack incident 24001 in /tmp
adrci> ips pack problem 5
adrci> ips pack problemkey ORA 4031
adrci> ips pack seconds 60 correlate all
adrci> ips pack time '2011-12-31 23:59:59.00 -07:00' to '2012-01-01 01:01:01.00 -07:00';
adrci> IPS REMOVE [INCIDENT inc_id | PROBLEM prob_id | PROBLEMKEY prob_key] PACKAGE
pkg_id
adrci> ips remove incident 2 package 7
adrci> ips remove problem 4 package 8
adrci> SHOW ALERT [-p predicate_string] [-term] [[-tail [num] [-f]] | [-file alert_file_name]]
The fields in the predicate are the fields:
ORIGINATING_TIMESTAMP timestamp
NORMALIZED_TIMESTAMP timestamp
ORGANIZATION_ID text(65)
COMPONENT_ID text(65)
HOST_ID text(65)
HOST_ADDRESS text(17)
MESSAGE_TYPE number
MESSAGE_LEVEL number
MESSAGE_ID text(65)
MESSAGE_GROUP text(65)
CLIENT_ID text(65)
MODULE_ID text(65)
PROCESS_ID text(33)
THREAD_ID text(65)
USER_ID text(65)
INSTANCE_ID text(65)
DETAILED_LOCATION text(161)
UPSTREAM_COMP_ID text(101)
DOWNSTREAM_COMP_ID text(101)
EXECUTION_CONTEXT_ID text(101)
EXECUTION_CONTEXT_SEQUENCE number
ERROR_INSTANCE_ID number
ERROR_INSTANCE_SEQUENCE number
MESSAGE_TEXT text(2049)
MESSAGE_ARGUMENTS text(129)
SUPPLEMENTAL_ATTRIBUTES text(129)
SUPPLEMENTAL_DETAILS text(129)
PROBLEM_KEY text(65)
adrci> show alert -- it will open alert in vi editor
adrci> show alert -tail -- like Unix command tail
adrci> show alert -tail 200 -- like Unix command tail -200
adrci> show alert -tail -f -- like Unix command tail -f
adrci> show alert -tail 20 -f
adrci> show alert -term
adrci> show alert -P "MESSAGE_TEXT LIKE '%ORA-%'"
-- To list all the "ORA-" errors
adrci> SHOW PROBLEM [-p predicate_string] [-last 50|num|-all] [-orderby (field1, field2, ...)
[ASC|DSC]]
The field names that users can specify in the predicate are:
PROBLEM_ID number
PROBLEM_KEY text(550)
FIRST_INCIDENT number
FIRSTINC_TIME timestamp
LAST_INCIDENT number
LASTINC_TIME timestamp
IMPACT1 number
IMPACT2 number
IMPACT3 number
IMPACT4 number
SERVICE_REQUEST text(64)
BUG_NUMBER text(64)
adrci> show problem
adrci> show problem -all
adrci> show problem -p "problem_id=44"
adrci> show problem -p "problem_key='ORA 600 [krfw_switch_4]'"
adrci> SHOW TRACEFILE [file1 file2 ...] [-rt | -t] [-i inc1 inc2 ...] [-path path1 path2 ...]
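For example (patterns are illustrative; %pmon% matches tracefile names containing pmon, and -rt lists files by timestamp with the most recent first):
adrci> show tracefile
adrci> show tracefile %pmon%
adrci> show tracefile alert% -rt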
adrci> CREATE [OR REPLACE] [PUBLIC | PRIVATE] VIEW viewname [(alias)] AS select_stmt
adrci> create view my_incident as select incident_id from incident
Diagnostic Data Extractor (DDE)
adrci> help DDE
HELP DDE [topic]
Available Topics:
CREATE INCIDENT
EXECUTE ACTION
SET INCIDENT
SET PARAMETER
SHOW ACTIONS
SHOW AVAILABLE ACTIONS
adrci> MERGE ALERT [(projection_list)] [-t ref_time_str beg_sec end_sec] [-tdiff|-tfull] [-term] [-plain]
adrci> merge alert
adrci> merge alert (fct ti)
adrci> merge alert -tfull -term
adrci> merge alert -t "2012-05-25 09:50:20.442132 -05:30" 10 10 -term
adrci> MERGE FILE [(projection_list)] [file1 file2 ...] [-t ref_time_str beg_sec end_sec | -i incid
[beg_sec end_sec]] [-tdiff|-tfull] [-alert] [-insert_incident] [-term] [-plain]
adrci> merge file (pid fct) orcl_m000_8544.trc -alert -term
adrci> merge file -i 1 -alert -tfull
adrci> merge file orcl_m000_8544.trc -i 1 600 10 -alert
adrci> merge file -t "2012-04-25 22:32:40.442132 -05:30" 10 10 -term
adrci> QUERY [(field1, field2, ...)] relation_name [-p predicate_string] [-orderby (field1, field2,
...) [ASC|DSC]]
adrci> query (incident_id, create_time) incident -p "incident_id > 111"
adrci> query (problem_key) problem -p "PROBLEM_KEY like '%700%'"
adrci> SELECT [* | field1, ...] FROM relation_name [WHERE predicate_string] [ORDER BY field1,
... [ASC|DSC|DESC]] [GROUP BY field1, ... ] [HAVING having_predicate]
adrci> select incident_id, create_time from incident where "incident_id > 90"
adrci> select * from problem where "PROBLEM_KEY like '%7445%'"
adrci> SHOW SECTION section_name [(projection_list)] <[file1 file2 ...] | [[file1 file2 ...] -i inc1
inc2 ...] | [[file1 file2 ...] -path path1 path2 ...]> [-plain] [-term]
adrci> show section sql_exec orcl_ora_27483.trc
adrci> SHOW TRACE [(projection_list)] <[file1 file2 ...] | [[file1 file2 ...] -i inc1 inc2 ...] | [[file1
file2 ...] -path path1 path2 ...]> [-plain] [-xp "path_pred_string"] [-xr display_path] [-term]
adrci> show trace (co, fi, li) orcl_ora_27483.trc
adrci> show trace (id, co) -i 145 152
adrci> show trace orcl_ora_27483.trc -xp "/dump[nm='Data block']" -xr /*
adrci> show trace alert_dwh.log
adrci> show trace ~alertLog
adrci> SHOW TRACEMAP [(projection_list)] <[file1 file2 ...] | [[file1 file2 ...] -i inc1 inc2 ...] |
[[file1 file2 ...] -path path1 path2 ...]> [-level level_num] [-xp "path_pred_string"] [-term]
adrci> show tracemap (nm)
adrci> show tracemap (co, nm) orcl_ora_27483.trc
adrci> show tracemap -level 3 -i 145 152
adrci> show tracemap orcl_ora_27483.trc -xp "/dump[nm='Data block']"
Automatic Storage Management (ASM) is a new type of file system. ASM provides a foundation for
highly efficient storage management with kernelized asynchronous I/O, direct I/O, redundancy,
striping, and an easy way to manage storage. ASM is the recommended file system for RAC and
single-instance databases for storing database files. It provides direct I/O to the files, and
performance is comparable with that provided by raw devices. Oracle creates a separate instance for this purpose.
ASM includes volume management functionality similar to that of a generic logical volume
manager. ASM takes physical disk partitions and manages their contents in a way that efficiently
supports the files needed to create an Oracle database.
Automatic Storage Management (ASM) simplifies administration of Oracle related files by allowing
the administrator to reference diskgroups rather than hundreds of individual disks and files, which
are managed by ASM. The ASM functionality is an extension of the Oracle Managed Files (OMF)
functionality that also includes striping and mirroring to provide balanced and secure storage. The
ASM functionality can be used in combination with existing raw and cooked file systems, along
with OMF and manually managed files.
Before ASM, there were only two choices: file system storage and raw disk storage. File system
storage is flexible, allowing the DBA to see the individual files and to move them, copy them, and
back them up easily, but it also incurs overhead. Raw disk storage has no file directories on it, and
Oracle manages its blocks directly, which makes it more efficient. Raw disk storage is such a
manageability nightmare that few DBAs use it.
ASM is the middle ground. It's raw disk storage managed by Oracle, and it is very efficient. Oracle
uses a scaled-down Oracle instance to simulate a file structure where none exists, by
recording all the metadata. The metadata enables Recovery Manager (RMAN) to back up and
restore Oracle files easily within it.
Setting up storage takes a significant amount of time during most database installations. Zeroing
in on a specific disk configuration from among the multiple possibilities requires careful planning and
analysis, and most important, intimate knowledge of storage technology, volume managers, and
file systems. The design tasks at this stage can be loosely described as follows:
1. Confirm that storage is recognized at the OS level and determine the level of
redundancy protection that might already be provided (hardware RAID, called external
redundancy in ASM).
2. Assemble and build logical volume groups and determine if striping or mirroring is
also necessary.
3. Build a file system on the logical volumes created by the logical volume manager.
4. Set the ownership and privileges so that the Oracle process can open, read, and
write to the devices.
5. Create a database on that file system while taking care to create special files such
as redo logs, temporary tablespaces, and undo tablespaces in non-RAID locations, if
possible.
All the above tasks (striping, mirroring, logical file system building) are done to serve the Oracle
database, and Oracle offers some techniques of its own to simplify or enhance the process. ASM
lets DBAs execute many of the above tasks completely within the Oracle framework. Using ASM,
you can transform a bunch of disks into a highly scalable, high-performance file system/volume
manager using nothing more than what comes with the Oracle database software, at no extra cost,
and you don't need to be an expert in disk, volume manager, or file system management.
ASM Instance
The ASM functionality is controlled by an ASM instance. This is a special instance, not a database
where users can create objects; it consists of just the memory structures and as such is very small
and lightweight.
With ASM, you don't have to create anything on the OS side; the feature will group a set of
physical disks to a logical entity known as a diskgroup. A diskgroup is analogous to a striped and
optionally mirrored, file system, with important differences: it's not a general-purpose file system
for storing user files and it's not buffered. Diskgroup offers the advantage of direct access to this
space as a raw device, yet provides the convenience and flexibility of a file system. All the
metadata about the disks are stored in the diskgroups themselves, making them as self-describing
as possible.
This special ASM instance is similar to other file systems in that it must be running for ASM to
work, and it can't be modified by the user. One ASM instance can serve a number of Oracle
databases, but the ASM instance and its database instances have to be present on the same
server; otherwise it will not work.
Logical volume managers typically use a function, such as hashing, to map the logical address of
the blocks to the physical blocks. This computation uses CPU cycles. When a new disk is added,
this typical striping function requires each bit of the entire data set to be relocated. In contrast,
ASM uses this special instance to address the mapping of the file extents to the physical disk
blocks. This design, in addition to being fast in locating the file extents, helps while adding or
removing disks because the locations of file extents need not be coordinated.
You should start the ASM instance when the server is booted, i.e. it should be started before the
database instances, and it should be one of the last things stopped when the server is shut down.
From 11.2.0, we can use ASMCMD to start and stop ASM instances.
To create an ASM instance, first create a pfile, init+ASM.ora, in the /tmp directory, containing the
following parameter.
INSTANCE_TYPE = ASM
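A minimal sketch of starting the instance with that pfile (assuming OS authentication; on 11g connect AS SYSASM, on earlier releases AS SYSDBA):
$ export ORACLE_SID=+ASM
$ sqlplus / as sysasm
SQL> STARTUP NOMOUNT PFILE='/tmp/init+ASM.ora';
NOMOUNT is used here because a newly created instance has no diskgroups to mount yet.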
The ASM instance is now ready to use for creating and mounting diskgroups.
Once an ASM instance is present, diskgroups can be used for the following parameters in database
instances (INSTANCE_TYPE=RDBMS) to allow ASM file creation:
CONTROL_FILES
DB_CREATE_FILE_DEST
DB_CREATE_ONLINE_LOG_DEST_n
DB_RECOVERY_FILE_DEST
LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST
STANDBY_ARCHIVE_DEST
ASM Diskgroups
The main components of ASM are diskgroups, each of which comprises several physical disks
that are controlled as a single unit. The physical disks are known as ASM disks, while the files that
reside on the disks are known as ASM files. The locations and names for the files are controlled by
ASM, but user-friendly aliases and directory structures can be defined for ease of reference.
Diskgroup is the term used for the logical structure which holds the database files. Each
diskgroup consists of disks/raw devices where the files are actually stored. Any ASM file (and its
redundant copy) is completely contained within a single diskgroup. A diskgroup might contain files
belonging to several databases, and a single database can use files from multiple diskgroups.
In the initial release of Oracle 10g, ASM diskgroups were a black box. We had to manipulate ASM
diskgroups with SQL statements while logged in to the special ASM instance that manages the
diskgroups.
In Oracle 10g Release 2, Oracle introduced a new command line tool called ASMCMD that lets you
look inside ASM volumes (which are called diskgroups). Now you can do many tasks from the
command line.
While creating a diskgroup, we have to specify an ASM diskgroup type based on one of the
following three redundancy levels:
Normal redundancy - for 2-way mirroring, requiring two failure groups. When ASM
allocates an extent for a normal redundancy file, it allocates a primary copy and a secondary
copy, and chooses the disk for the secondary copy in a different failure group than the
primary copy.
High redundancy - for 3-way mirroring, requiring three failure groups. In this case each
extent is mirrored across three disks.
External redundancy - to not use ASM mirroring. This is used if you are using hardware
mirroring or a third-party redundancy mechanism such as RAID or storage arrays.
ASM is supposed to stripe the data and also mirror the data (if using Normal, High redundancy).
So this can be used as an alternative for RAID (Redundant Array of Inexpensive Disks) 0+1
solutions.
Can we change the redundancy of a diskgroup after creation? No, we cannot modify the
redundancy of a diskgroup once it has been created. To alter it, we would have to create a new
diskgroup with the required redundancy and then move the files to it. This can also be done by
restoring a full backup on the new diskgroup.
Failure groups are defined within a diskgroup to support the required level of redundancy (normal
or high). They contain the mirrored ASM extents and must contain different disks, preferably on
separate disk controllers.
In addition failure groups and preferred names for disks can be defined in CREATE DISKGROUP
statement. If the NAME clause is omitted the disks are given a system generated name like
"disk_group_1_0001". The FORCE option can be used to move a disk from another diskgroup into
this one.
Creating diskgroups
SQL> CREATE DISKGROUP dg_asm_data NORMAL REDUNDANCY
FAILGROUP failure_group_1 DISK
'/devices/diska1' NAME diska1, '/devices/diska2' NAME diska2,
FAILGROUP failure_group_2 DISK
'/devices/diskb1' NAME diskb1, '/devices/diskb2' NAME diskb2;
For two-way mirroring we would expect a diskgroup to contain two failure groups, so individual
files are written to two locations.
For three-way mirroring we would expect a diskgroup to contain three failure groups, so individual
files are written to three locations.
A diskgroup can also be created over physical disks named /dev/d1, /dev/d2, and so on. Instead
of listing disks separately, we can specify disk names with wildcards in the DISK clause, as
DISK '/dev/d*'. An EXTERNAL REDUNDANCY clause indicates that the failure of a disk will bring
down the diskgroup; this is usually acceptable when the redundancy is provided by the hardware,
such as mirroring. If there is no hardware-based redundancy, ASM can be set up with a special set
of disks called failure groups (failgroups) in the diskgroup to provide that redundancy, as in the
sketch below.
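A sketch of both forms (the /dev/d1 through /dev/d4 disk paths are illustrative):
SQL> CREATE DISKGROUP dg_grp1 EXTERNAL REDUNDANCY
DISK '/dev/d1', '/dev/d2', '/dev/d3', '/dev/d4';
SQL> CREATE DISKGROUP dskgrp1 NORMAL REDUNDANCY
FAILGROUP fg1 DISK '/dev/d1', '/dev/d2'
FAILGROUP fg2 DISK '/dev/d3', '/dev/d4';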
Although it may appear as such, d3 and d4 are not mirrors of d1 and d2. Rather, ASM uses all the
disks to create a fault-tolerant system. For example, a file on the diskgroup might be created in d1
with a copy maintained on d4. A second file may be created on d3 with copy on d2, and so on.
That is, primary copy will be on one failure group and secondary copy will be another (third copy
will be another, for high redundancy).
Failure of a specific disk allows a copy on another disk so that the operation can continue. For
example, you could lose the controller for both disks d1 and d2 and ASM would mirror copies of
the extents across the failure group to maintain data integrity.
Listing diskgroups
To find out all the diskgroups:
SQL> SELECT * FROM V$ASM_DISKGROUP;
Dropping diskgroups
Diskgroups can be deleted using the DROP DISKGROUP statement.
SQL> DROP DISKGROUP disk_group_1 INCLUDING CONTENTS;
SQL> DROP DISKGROUP disk_group_1 FORCE; (11g R1 command)
SQL> DROP DISKGROUP disk_group_1 FORCE INCLUDING CONTENTS; (11gR1 command)
Altering diskgroups
Disks can be added or removed from diskgroups using the ALTER DISKGROUP statement.
Remember that the wildcard "*" can be used to reference disks so long as the resulting string does
not match a disk already used by an existing diskgroup.
Adding disks
We may have to add additional disks into the diskgroup to accommodate growing demand.
SQL> ALTER DISKGROUP dskgrp1 ADD DISK '/dev/d5';
SQL> ALTER DISKGROUP dg1 ADD DISK '/devices/disk*3', '/devices/disk*4';
Listing disks
The following command shows all the disks managed by the ASM instance for all the client
databases.
SQL> SELECT * FROM V$ASM_DISK;
Dropping disks
We can remove a disk from diskgroup.
SQL> ALTER DISKGROUP dg4 DROP DISK diska4;
Resizing disks
Disks can be resized using the RESIZE clause of the ALTER DISKGROUP statement. The statement
can be used to resize individual disks, all disks in a failure group or all disks in the diskgroup. If
the SIZE clause is omitted the disks are resized to the size of the disk returned by the OS.
SQL> ALTER DISKGROUP dg_data_1 RESIZE DISK diska1 SIZE 150G;
Undropping disks
The UNDROP DISKS clause of the ALTER DISKGROUP statement allows pending disk drops to be
undone. It will not revert drops that have completed, or disk drops associated with the dropping of
a diskgroup.
SQL> ALTER DISKGROUP disk_group_1 UNDROP DISKS;
Online disks
SQL> ALTER DISKGROUP data ONLINE DISK 'disk_0000', 'disk_0001';
SQL> ALTER DISKGROUP data ONLINE DISKS IN FAILGROUP 'fg_99';
SQL> ALTER DISKGROUP data ONLINE ALL;
Offline disks
SQL> ALTER DISKGROUP data OFFLINE DISK 'disk_0000', 'disk_0001';
SQL> ALTER DISKGROUP data OFFLINE DISKS IN FAILGROUP 'fg_99';
SQL> ALTER DISKGROUP data OFFLINE DISK d1_0001 DROP AFTER 30m;
Mounting diskgroups
Diskgroups are mounted at ASM instance startup and unmounted at ASM instance shutdown.
Manual mounting and dismounting can be accomplished using the ALTER DISKGROUP statement
as below.
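For example (using the diskgroup name from the earlier examples):
SQL> ALTER DISKGROUP disk_group_1 MOUNT;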
Dismounting diskgroups
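Likewise, to dismount manually:
SQL> ALTER DISKGROUP disk_group_1 DISMOUNT;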
Changing attributes
SQL> ALTER DISKGROUP data3 SET ATTRIBUTE 'compatible.rdbms' = '11.1'; (11gR1 command)
SQL> ALTER DISKGROUP data3 SET ATTRIBUTE 'compatible.asm' = '11.2'; (11gR1 command)
SQL> ALTER DISKGROUP data3 SET ATTRIBUTE 'disk_repair_time' = '4.5h'; (11gR1 command)
Listing attributes
SQL> SELECT * FROM V$ASM_ATTRIBUTE;
Rebalancing
Diskgroups can be rebalanced manually using the REBALANCE clause of the ALTER DISKGROUP
statement. If the POWER clause is omitted the ASM_POWER_LIMIT parameter value is used.
Rebalancing is only needed when the speed of the automatic rebalancing is not appropriate.
SQL> ALTER DISKGROUP disk_group_1 REBALANCE POWER 6;
IO statistics of a diskgroup
SQL> SELECT * FROM V$ASM_DISK_IOSTAT;
Directories
As in other file systems, an ASM directory is a container for files, and an ASM directory can be part
of a tree structure of other directories. The fully qualified filename represents a hierarchy of
directories in which the plus sign (+) represent the root directory. In each diskgroup, ASM
automatically creates a directory hierarchy that corresponds to the structure of the fully qualified
filenames in the diskgroup. The directories in this hierarchy are known as system-generated
directories.
An absolute path refers to the full path of a file or directory. An absolute path begins with a plus
sign (+) followed by a diskgroup name, followed by subsequent directories in the directory tree.
The absolute path includes directories until the file or directory is reached. A fully qualified
filename is an example of an absolute path to a file. A relative path includes only the part of the
filename or directory name that is not part of the current directory. That is, the path to the file or
directory is relative to the current directory.
A directory hierarchy can be defined using the ALTER DISKGROUP statement to support ASM file
aliasing.
Creating a directory
SQL> ALTER DISKGROUP dg_1 ADD DIRECTORY '+dg_1/my_dir';
Renaming a directory
SQL> ALTER DISKGROUP dg_1 RENAME DIRECTORY '+dg_1/my_dir' TO '+dg_1/my_dir_2';
Deleting a directory
SQL> ALTER DISKGROUP dg_1 DROP DIRECTORY '+dg_1/my_dir_2' FORCE;
Files
There are several ways to reference ASM files. Some forms are used during creation and some for
referencing existing ASM files. Every file created in ASM gets a system-generated filename, known
as the fully qualified filename; this is the same as a complete path name in a local file system.
e.g: +dgroup2/crm/CONTROLFILE/Current.256.541956473
+dg_fra/hrms/DATAFILE/users.309.621906475
ASM does not place system-generated files into user-created directories; it places them only in
system-generated directories. We can add aliases or other directories to a user-created directory.
Dropping Files
Files are not deleted automatically if they are created using aliases (as they are not Oracle
Managed Files), or if a recovery is done to a point in time before the file was created. In these
circumstances it is necessary to manually delete the files, as shown below.
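For example (a sketch; the alias used here is the one created in the next section):
SQL> ALTER DISKGROUP dg_3 DROP FILE '+dg_3/my_dir/users.dbf';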
Creating an alias
Creating an alias, using the fully qualified filename
SQL> ALTER DISKGROUP dg_3 ADD ALIAS '+dg_3/my_dir/users.dbf' FOR
'+dg_3/mydb/datafile/users.392.333222555';
Renaming an alias
SQL> ALTER DISKGROUP dg_3 RENAME ALIAS '+dg_3/my_dir/my_file.dbf' TO
'+dg_3/my_dir/my_file2.dbf';
Deleting an alias
SQL> ALTER DISKGROUP dg_3 DELETE ALIAS '+dg_3/my_dir/my_file.dbf';
Templates
Templates are named groups of attributes that can be applied to the files within a diskgroup. The
level of redundancy and the granularity of the striping can be controlled using templates. Default
templates are provided for each file type stored by ASM, but additional templates can be defined
as needed.
The MIRROR, COARSE, and FINE attributes cannot be set for external redundancy.
Creating a template
SQL> ALTER DISKGROUP dg_4 ADD TEMPLATE mf_template ATTRIBUTES (MIRROR FINE);
Modifying a template
SQL> ALTER DISKGROUP dg_4 ALTER TEMPLATE c_template ATTRIBUTES (COARSE);
ASMCMD equivalent for this command is chtmpl (11gR2 command).
Listing templates
SQL> SELECT * FROM V$ASM_TEMPLATE;
ASMCMD equivalent for this command is lstmpl (11gR2 command).
Dropping a template
SQL> ALTER DISKGROUP dg_4 DROP TEMPLATE u_template;
Checking Metadata
The internal consistency of diskgroup metadata can be checked in a number of ways using the
CHECK clause of the ALTER DISKGROUP statement.
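For example (a minimal sketch; CHECK ALL is the 10g form, which verifies all metadata and logs findings to the alert log; from 11g a plain CHECK can be used):
SQL> ALTER DISKGROUP disk_group_1 CHECK ALL;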
User Management
From Oracle 11g release 2, we can create ASM users and usergroups and manipulate the
permissions and ownership of files.
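A minimal sketch, assuming 11.2 diskgroup compatibility attributes and that the OS user oracle1 exists (the diskgroup, user, and group names are illustrative):
SQL> ALTER DISKGROUP data_dg ADD USER 'oracle1';
SQL> ALTER DISKGROUP data_dg ADD USERGROUP 'asm_grp1' WITH MEMBER 'oracle1';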
Volume Management
From 11g release 2, we can create Oracle ASM Dynamic Volume Manager (Oracle ADVM) volumes
in a diskgroup. The volume device associated with the dynamic volume can then be used to host
an Oracle ACFS file system.
Creating a volume
SQL> ALTER DISKGROUP data_dg ADD VOLUME volume1 SIZE 20G;
ASMCMD equivalent for this command is volcreate (11gR2 command).
Listing volume information
To find out the volumes information.
SQL> SELECT * FROM V$ASM_VOLUME;
ASMCMD equivalent for this command is volinfo (11gR2 command).
Dropping a volume
SQL> ALTER DISKGROUP data_dg DROP VOLUME volume1;
ASMCMD equivalent for this command is voldelete (11gR2 command).
Resizing a volume
SQL> ALTER DISKGROUP fra_dg RESIZE VOLUME volume1 SIZE 25G;
ASMCMD equivalent for this command is volresize (11gR2 command).
Disabling a volume
SQL> ALTER DISKGROUP redo_dg DISABLE VOLUME volume1;
SQL> ALTER DISKGROUP ALL DISABLE VOLUME ALL;
ASMCMD equivalent for this command is voldisable (11gR2 command).
Enabling a volume
SQL> ALTER DISKGROUP arch_dg ENABLE VOLUME volume1;
ASMCMD equivalent for this command is volenable (11gR2 command).
Setting a volume
SQL> ALTER DISKGROUP asm_dg_data MODIFY VOLUME volume1 USAGE 'acfs';
ASMCMD equivalent for this command is volset (11gR2 command).
Misc
Listing the current operations
SQL> SELECT * FROM V$ASM_OPERATION;
ASMCMD equivalent for this command is lsop (11gR2 command).
Creating Tablespaces
Now create a tablespace in the main database using a datafile in the ASM-enabled storage.
SQL> CREATE TABLESPACE user_data DATAFILE '+dskgrp1/user_data_01'
SIZE 1024M;
ASM filenames can be used in place of conventional filenames for most Oracle file types, including
controlfiles, datafiles, logfiles etc. For example, the following command creates a new tablespace
with a datafile in the disk_group_1 diskgroup.
SQL> CREATE TABLESPACE my_ts DATAFILE '+disk_group_1' SIZE 100M AUTOEXTEND ON;
Note how the diskgroup is used as a virtual file system. This approach is useful not only in
datafiles, but in other types of Oracle files as well. For instance, we can create online redo log files
as
...
LOGFILE GROUP 1 (
'+dskgrp1/redo/group_1.258.659723485',
'+dskgrp2/redo/group_1.258.659723485'
) SIZE 50M,
...
Archived log destinations can also be set to a diskgroup. Everything related to Oracle database can
be created in an ASM diskgroup. Backup is another great use of ASM. You can set up a bunch of
inexpensive disks to create the recovery area of a database, which can be used by RMAN to create
backup datafiles and archived log files.
ASM supports files created by and read by the Oracle database only; it is not a replacement for a
general-purpose file system.
Until Oracle 11g release 1, we could not store binaries or flat files in ASM, and we could not use
ASM for storing the voting disk and OCR. This is due to the fact that Clusterware starts before the
ASM instance and must be able to access these files, which is not possible if they are stored on
ASM; we had to use raw devices, OCFS, or other shared storage. But from 11g release 2, we can
store ALL files on ASM.
Can we see the files stored in the ASM instance using standard Unix commands?
No, you cannot see the files using standard Unix commands like ls. You need to use utility called
asmcmd to do this. Oracle 10g release2 introduces asmcmd which makes administration very
easy.
$ asmcmd
ASMCMD>
ASMLIB is the support library for ASM. ASMLIB gives an Oracle database using ASM more
efficient and capable access to diskgroups. The purpose of ASMLIB is to provide an alternative
interface to identify and access block devices. The ASMLIB API enables storage and OS vendors to
supply extended storage-related features.
ASM Views
The ASM configuration can be viewed using the V$ASM_% views, which contain information
depending on whether they are queried from the ASM instance, or a dependant database instance.
ASM backup can be taken by spooling the output of the ASM views to text file.
SPOOL asm_views.log
SET ECHO ON
SELECT * FROM V$ASM_ALIAS;
SELECT * FROM V$ASM_ATTRIBUTE;
SELECT * FROM V$ASM_CLIENT;
SELECT * FROM V$ASM_DISK;
SELECT * FROM V$ASM_DISK_IOSTAT;
SELECT * FROM V$ASM_DISK_STAT;
SELECT * FROM V$ASM_DISKGROUP;
SELECT * FROM V$ASM_DISKGROUP_STAT;
SELECT * FROM V$ASM_FILE;
SELECT * FROM V$ASM_FILESYSTEM;
SELECT * FROM V$ASM_OPERATION;
SELECT * FROM V$ASM_TEMPLATE;
SELECT * FROM V$ASM_USER;
SELECT * FROM V$ASM_USERGROUP;
SELECT * FROM V$ASM_USERGROUP_MEMBER;
SELECT * FROM V$ASM_VOLUME;
SELECT * FROM V$ASM_VOLUME_STAT;
SPOOL OFF
Managing ASM through SQL interfaces, in Oracle Database 10g Release 1, posed a challenge for
administrators who were not very familiar with SQL and preferred a more conventional command
line interface. From Oracle Database 10g Release 2, we have an option to manage the ASM files by
using ASMCMD, a powerful and easy to use command line tool.
In Oracle Database 10g Release 1, ASM diskgroups are not visible outside the database for regular
file system administration tasks such as copying and creating directories. From Oracle Database
10g Release 2, we can transfer the files from ASM to locations outside of the diskgroups via FTP
and through a web browser using HTTP.
ASMCMD has equivalent commands for all the SQL commands that can be performed through
SQL*Plus.
ASMCMD is included in the installation of the Oracle Database software (from 10g Release 2), no
separate setup is required.
We can't see the files stored in the ASM instance using standard UNIX commands like ls. We need
to use asmcmd. The asmcmd command line interface is very similar to standard UNIX/Linux
commands, but it only manages files at the OS level. The asmcmd utility supports ASM equivalents
of many common Linux commands (such as ls, cd, du, find, mkdir, pwd and rm). The idea of this
tool is to make administering the ASM files similar to administering standard OS files.
Invoking asmcmd
To start using ASMCMD, you must log in as a user that has SYSASM or SYSDBA privileges through
OS authentication. The environment variables ORACLE_HOME and ORACLE_SID must be set to
the ASM instance, and $ORACLE_HOME/bin must be in the PATH environment variable. You must
have ASM configured on the machine and started, and the ASM diskgroups mounted.
The default value of the ASM SID for a single instance database is +ASM. In Real Application
Clusters (RAC) environments, the default value of the ASM SID on any node is +ASMnode#
(+ASM1, +ASM2, ...).
$ export ORACLE_SID=+ASM
To enter interactive mode, type asmcmd, which brings up the ASM command prompt.
$ asmcmd
ASMCMD>
We can specify the -p option with the asmcmd command to include the current directory in the
ASMCMD prompt.
$ asmcmd -p
ASMCMD [+] > cd dgroup1/hrms
ASMCMD [+dgroup1/hrms] >
We can specify the -a option to choose the type of connection, either SYSASM or SYSDBA.
From 11g, the SYSASM privilege is preferred and is the default.
$ asmcmd -a sysasm or asmcmd -a sysdba
We can specify the -V option when starting asmcmd to display the asmcmd version number. This
is from 11g.
$ asmcmd -V
asmcmd version 11.1.0.6.0
We can specify the -v option when starting asmcmd to display additional (verbose) information.
This is from 11g.
$ asmcmd -v
Filenames are not case sensitive, but are case retentive, that is, ASMCMD retains the case of the
directory that you entered.
The fully qualified filename represents a hierarchy of directories in which the plus sign (+)
represent the root directory. We can also create our own directories as subdirectories of the
system-generated directories using the ALTER DISKGROUP command or with the ASMCMD mkdir
command. Those directories can have subdirectories, and we can navigate the hierarchy of both
system-generated directories and user-created directories with the cd command.
When we run an ASMCMD command that accepts a filename or directory name as an argument,
we can use the name as either an absolute path or a relative path.
An absolute path refers to the full path of a file or directory. An absolute path begins with a plus
sign (+) followed by a diskgroup name, followed by subsequent directories in the directory tree.
The absolute path includes directories until the file or directory is reached. A fully qualified
filename is an example of an absolute path to a file.
Using an absolute path enables the command to access the file or directory regardless of where
the current directory is set. The following rm command uses an absolute path for the filename:
ASMCMD [+] > rm +dgroup1/hrms/datafile/users.280.555341999
A relative path includes only the part of the filename or directory name that is not part of the
current directory. That is, the path to the file or directory is relative to the current directory.
ASMCMD [+dgroup1/hrms/DATAFILE] > ls -l undotbs1.267.557429239
Paths to directories can also be relative and we can use the pseudo-directories "." and ".." in place
of a directory name. The wildcard characters * and % match zero or more characters anywhere
within an absolute or relative path, which saves typing of the full directory or file name. These two
wildcard characters behave identically.
Alias
As in UNIX, we can create alias names for files listed in the diskgroup. Aliases are user-friendly
filenames that are references or pointers to system-generated filenames. Aliases are similar to
symbolic links in UNIX flavors. ASM's auto generated names can be a bit strange, so creating
aliases makes working with ASM files with ASMCMD easier. Aliases simplify ASM filename
administration. We can create aliases with an ALTER DISKGROUP command or with the mkalias
ASMCMD command.
An alias has at a minimum the diskgroup name as part of its complete path. We can create aliases
at the diskgroup level or in any system-generated or user-created subdirectory. The following are
examples of aliases:
+dgroup1/ctl1.f
+dgroup1/crm/ctl1.f
+dgroup1/mydir/ctl1.f
If you run the ASMCMD ls (list directory) with the -l flag, each alias is listed with the system-
generated file to which the alias refers.
ctl1.f => +dgroup2/hrms/CONTROLFILE/Current.256.541956473
We can run the ASMCMD utility in either interactive or non-interactive mode. In interactive mode:
1. At the OS command prompt, enter asmcmd to start the utility.
2. Enter an ASMCMD command and press Enter. The command runs and displays its output, and
then ASMCMD prompts for the next command.
3. Continue entering ASMCMD commands. Enter the command exit to exit ASMCMD.
In non-interactive mode, a single command is specified on the asmcmd command line, e.g:
$ asmcmd ls -l
State Type Rebal Unbal Name
MOUNTED NORMAL N N DG_GROUP1/
MOUNTED NORMAL N N DG_GROUP2/
cd command
Changes the current directory to the specified directory.
cd dir_name
dir_name may be specified as either an absolute path or a relative path, including the . and ..
pseudo-directories and wildcards.
If a wildcard pattern matches only one directory when using wildcard characters with cd, then cd
changes the directory to that destination. If the wildcard pattern matches multiple directories, then
ASMCMD does not change the directory but instead returns an error.
pwd command
Displays the absolute path of the current directory.
pwd
ASMCMD> pwd
help command
Displays the syntax of a command and description of the command parameters.
help [command] or ? [command]
If you do not specify a value for command, then the help command lists all of the ASMCMD
commands and general information about using the ASMCMD utility.
ASMCMD> help
ASMCMD> help lsct
ASMCMD> ?
ASMCMD> ? mkgrp
du command
Displays the total space used for files in the specified directory and in the entire directory tree
under the directory.
du [-H] [dir_name]
This command is similar to the du -s command on UNIX flavors. If you do not specify dir_name,
then information about the current directory is displayed. dir_name can contain wildcard
characters.
Used_MB - The total space used in the directory, this value does not include mirroring.
Mirror_used_MB - This value includes mirroring.
For example, if a normal redundancy diskgroup contains 100 MB of data, then assuming that each
file in the diskgroup is 2-way mirrored, Used_MB is 100 MB and Mirror_used_MB is 200 MB.
ASMCMD> du TEMPFILE
Used_MB Mirror_used_MB
24582 24582
In this case, Used_MB & Mirror_used_MB are the same, because the diskgroups are not mirrored.
find command
Displays the absolute paths of all occurrences of the specified name pattern (can have wildcards)
in a specified directory and its subdirectories.
find [-t type] dir_name name_pattern
find [--type type] dir_name name_pattern (Oracle 11g R2 syntax)
type can be one of the following (these are the type values from the TYPE column of V$ASM_FILE):
CONTROLFILE, DATAFILE, ONLINELOG, ARCHIVELOG, TEMPFILE, BACKUPSET, PARAMETERFILE,
DATAGUARDCONFIG, FLASHBACK, CHANGETRACKING, DUMPSET, AUTOBACKUP, XTRANSPORT
This command searches the specified directory and all subdirectories under it in the directory tree
for the supplied name_pattern. The value that you use for name_pattern can be a directory name
or a filename.
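For example (the patterns are illustrative; the second form restricts matches to controlfiles):
ASMCMD> find +dgroup1 undo*
ASMCMD> find -t CONTROLFILE +dgroup1/sample *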
ls command
Lists the contents of an ASM directory, the attributes of the specified file, or the names and
attributes of all diskgroups from the V$ASM_DISKGROUP_STAT (default) or V$ASM_DISKGROUP.
ls [-lsdrtLacgH] [name]
If name is a directory name, then ASMCMD lists the contents of the directory and depending on
flag settings, ASMCMD also lists information about each directory member. Directories are listed
with a trailing forward slash (/) to distinguish them from files.
If name is a filename, then ASMCMD lists the file and depending on the flag settings, ASMCMD also
lists information about the file.
Flag Description
(none) - Displays only filenames and directory names.
-l - Displays extended file information, including striping and redundancy information and whether the file was system-generated (indicated by Y under the SYS column) or user-created (as in the case of an alias, indicated by N under the SYS column). Note that not all possible file attributes or diskgroup attributes are included.
-s - Displays file space information.
-d - If the value for the name argument is a directory, then ASMCMD displays information about that directory, rather than the directory contents. Typically used with another flag, such as the -l flag.
-r or --reverse - Reverses the sort order of the listing.
-t - Sorts the listing by timestamp.
-L - If the value for the name argument is an alias, then ASMCMD displays information about the file that it references. Typically used with another flag, such as the -l flag.
-a - For each listed file, displays the absolute path of the alias that references it.
-c - Selects from V$ASM_DISKGROUP, or GV$ASM_DISKGROUP if the -g flag is also specified.
-g - Selects from GV$ASM_DISKGROUP_STAT, or GV$ASM_DISKGROUP if the -c flag is also specified. GV$ASM_DISKGROUP.INST_ID is included in the output.
-H - Suppresses column headings.
--permission - Shows the permissions of a file (V$ASM_FILE.permission, V$ASM_FILE.owner, V$ASM_FILE.usergroup, V$ASM_ALIAS.name).
pattern - Name of a file, directory, or pattern.
If you specify all of the flags, then the command shows a union of their attributes, with duplicates
removed. To see the complete set of column values for a file or a diskgroup, query the
V$ASM_FILE and V$ASM_DISKGROUP.
ASMCMD [+dgroup1/sample/DATAFILE] > ls
EXAMPLE.269.555342243
SYSAUX.257.555341961
SYSTEM.256.555341961
UNDOTBS1.258.555341963
UNDOTBS1.272.557429239
USERS.259.555341963
If you enter ls +, then the command returns information about all diskgroups, including
information about whether the diskgroups are mounted.
ASMCMD [+dgroup1] > ls -l +
State Type Rebal Unbal Name
MOUNTED NORMAL N N DGROUP1/
MOUNTED HIGH N N DGROUP2/
MOUNTED EXTERN N N DGROUP3/
The Sys column, immediately to the left of the Name column, shows if the file or directory was
created by the ASM system. Because the CONTROLFILE directory is not a real file but an alias, the
attributes of the alias, such as size, free space, and redundancy, shown in the first few columns of
the output are null.
ASMCMD> ls +dgroup1/sample/*
+dgroup1/sample/CONTROLFILE/:
Current.260.555342185
Current.261.555342183
+dgroup1/sample/DATAFILE/:
EXAMPLE.269.555342243
SYSAUX.257.555341961
SYSTEM.256.555341961
UNDOTBS1.272.557429239
USERS.259.555341963
+dgroup1/sample/ONLINELOG/:
group_1.262.555342191
group_1.263.555342195
group_2.264.555342197
group_2.265.555342201
+dgroup1/sample/PARAMETERFILE/:
spfile.270.555342443
+dgroup1/sample/TEMPFILE/:
TEMP.268.555342229
lsct command
Lists information about current ASM clients (from V$ASM_CLIENT). A client is a database or
Oracle ASM Dynamic Volume Manager (Oracle ADVM) that uses diskgroups managed by the
ASM instance to which ASMCMD is currently connected.
lsct [-gH] [disk_group]
Flag Description
(none) Displays information about current ASM clients from V$ASM_CLIENT.
-g Selects from GV$ASM_CLIENT. GV$ASM_CLIENT.INST_ID is included in the output.
-H Suppresses column headings.
An ASM instance serves as a storage container; it's not a database by itself. Other databases use
the space in the ASM instance for datafiles, control files, and so on.
How do you know how many databases are using an ASM instance?
ASMCMD [+DG1_FRA] > lsct
DB_Name Status Software_Version Compatible_version Instance_Name
PROD CONNECTED 10.2.0.1.0 10.2.0.1.0 PROD
REP CONNECTED 10.2.0.1.0 10.2.0.1.0 REP
lsdg command
Lists all diskgroups and their attributes from V$ASM_DISKGROUP_STAT (default) or
V$ASM_DISKGROUP. The output also includes notification of any current rebalance operation for
a diskgroup. If a diskgroup is specified, then lsdg returns only information about that diskgroup.
lsdg [-gcH] [disk_group]
lsdg [-gH][--discovery][pattern] (11.2.0 syntax)
Flag Description
(none) - Displays all the diskgroup attributes.
-c or --discovery - Selects from V$ASM_DISKGROUP, or GV$ASM_DISKGROUP if the -g flag is also specified. This option is ignored if the ASM instance is version 10.1 or earlier.
-g - Selects from GV$ASM_DISKGROUP_STAT, or GV$ASM_DISKGROUP if the -c or --discovery flag is also specified. GV$ASM_DISKGROUP.INST_ID is included in the output. The REBAL column of the GV$ASM_OPERATION view is also included in the output.
-H - Suppresses column headings.
pattern - Returns only information about the specified diskgroup or diskgroups that match the supplied pattern.
To see the complete set of attributes for a diskgroup, use the V$ASM_DISKGROUP_STAT or
V$ASM_DISKGROUP.
The following example lists the attributes of the dg_data diskgroup (in 11.2.0).
ASMCMD [+] > lsdg dg_data
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB
Usable_file_MB Offline_disks Voting_files Name
MOUNTED NORMAL N 512 4096 4194304 12288 8835 1117 3859 0 N DG_DATA
mkalias command
Creates an alias for specified system-generated filename.
mkalias file alias
alias must be in the same diskgroup as the system-generated file. Only one alias is permitted for
each ASM file.
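For example (the system-generated filename is from the ls listing earlier; the alias is created relative to the current directory):
ASMCMD [+dgroup1/sample/DATAFILE] > mkalias SYSAUX.257.555341961 sysaux.f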
mkdir command
Creates ASM directories under the current directory.
mkdir dir_name [dir_name]...
The current directory can be created by the system or by the user. You cannot create a directory
at the root (+) level, which is a diskgroup.
rm command
Deletes specified ASM files and directories.
rm [-rf] name [name]...
Flag Description
-f Force, remove it without user interaction.
-r Recursive, remove sub-directories also.
If name is a file or alias (can contain wildcard characters), then the rm command can delete the
file or alias, only if it is not currently in use by a client database.
If name is a directory, then the rm command can delete it only if it is empty (unless the -r flag is
used) and it is not a system-generated directory.
If name is an alias, then the rm command deletes both the alias and the file to which the alias
refers.
For example, if we have an alias,
+dg1/dir1/file.alias => +dg/orcl/DATAFILE/System.256.146589651
then running the
rm -r +dg1/dir1
command removes the +dg1/dir1/file.alias as well as +dg/orcl/DATAFILE/System.256.146589651.
To delete only an alias and retain the file that the alias references, use the rmalias command.
If you use a wildcard, the rm command deletes all of the matches except nonempty directories,
unless you use the -r flag. To recursively delete, use the -r flag. This enables you to delete a
nonempty directory, including all files and directories in it and in the entire directory tree
underneath it. If you use the -r flag or a wildcard character, then the rm command prompts you to
confirm the deletion before proceeding, unless you specify the -f flag. When using the -r flag,
either the system-generated file or the alias must be present in the directory in which you run the
rm command.
rmalias command
Deletes the specified aliases, retaining the files that the aliases reference.
rmalias [-r] alias [alias]...
To recursively delete, use the -r flag. This enables you to delete all of the aliases in the current
directory and in the entire directory tree beneath the current directory. If any user-created
directories become empty as a result of deleting aliases, they are also deleted. Files and
directories created by the system are not deleted.
exit command
Exits ASMCMD and returns control to the OS command prompt.
exit
ASMCMD> exit
cp command
Copies files between ASM diskgroups, both on the local instance and to or from remote instances.
The file copy cannot be between two remote instances; the local ASM instance must be either the
source or the target. We can also use this command to copy files from ASM diskgroups to the OS.
cp [-ifr] [\@connect_identifier:]src_fname [\@connect_identifier:]tgt_fname
cp [-ifr] [\@connect_identifier:]src_fnameN[, src_fnameN+1…]
[\@connect_identifier:]tgt_directory
Flag Description
-i Interactive, prompt before copy file or overwrite.
-f Force, if an existing destination file, remove it and try again without user interaction.
-r Recursive, copy forwarding sub-directories recursively.
The connect_identifier parameter is not required for a local instance copy, which is default. In case
of a remote instance copy, we need to specify the connect_identifier and ASM prompts for a
password in a non-echoing prompt. The connect_identifier is in the form of:
user_name@host_name[.port_number].SID
The user_name, host_name, and SID are required. The default port number is 1521.
src_fname(s) - Source file name to copy from. Enter either the fully qualified file name, or the ASM
alias.
tgt_fname - A user alias for the created target file name or alias directory name.
tgt_directory - A target alias directory within an ASM diskgroup. The target directory must exist,
otherwise the file copy returns an error.
The format of copied files is portable between Little-Endian and Big-Endian systems, if the files
exist in an ASM diskgroup. ASM automatically converts the format when it writes the files. For
copying non-ASM files from or to an ASM diskgroup, you can copy the files to a different endian
platform and then use one of the commonly used utilities to convert the file.
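For example (a sketch; the source file is from the ls listing earlier and the OS target path is illustrative):
ASMCMD [+] > cp +dgroup1/sample/DATAFILE/USERS.259.555341963 /backup/users.dbf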
md_backup command
Creates a backup file containing metadata for one or more diskgroups. By default, all the mounted
diskgroups are included in the backup file which is saved in the current working directory. If the
name of the backup file is not specified, ASM names the file AMBR_BACKUP_INTERMEDIATE_FILE.
Here AMBR stands for ASM Managed Backup Recovery.
md_backup [-b location_of_backup] [-g dgname [-g dgname...]]
md_backup [-b location_of_backup] [-g dgname[, dgname...]]
Flag Description
-b Specifies the location in which you want to store the intermediate backup file.
-g Specifies the diskgroup name that needs to be backed up.
This example backs up all of the mounted diskgroups and creates the backup in the current
working directory.
ASMCMD > md_backup
The following example creates a backup of diskgroup asmdsk1 and asmdsk2. The backup will be
saved in the /tmp/dgbackup100221 file.
ASMCMD > md_backup -b /tmp/dgbackup100221 -g asmdsk1 -g asmdsk2
md_restore command
This command restores a diskgroup metadata backup.
md_restore -b backup_file [-li] [-t (full)|nodg|newdg] [-f sql_script_file] [-g
'dg_name,dg_name,...'] [-o 'old_dg_name:new_dg_name,...']
Example restores the diskgroup asmdsk1 from the backup script and creates a copy.
ASMCMD> md_restore -t full -g asmdsk1 -i backup_file
Example restores diskgroup asmdsk1 completely but the new diskgroup that is created is called
asmdsk2.
ASMCMD> md_restore -t newdg -o 'asmdsk1:asmdsk2' -i backup_file
Example restores from the backup file after applying the overrides defined in the file override.txt.
ASMCMD> md_restore -t newdg -of override.txt -i backup_file
Example restores the diskgroup data from the backup script and creates a copy.
ASMCMD [+] > md_restore --full -G data --silent /tmp/dgbackup20090714
Example restores diskgroup data completely but the new diskgroup that is created is called data2.
ASMCMD [+] > md_restore --newdg -o 'data:data2' --silent /tmp/dgbackup20090714
Example restores from the backup file after applying the overrides defined in the override.sql
script file.
ASMCMD [+] > md_restore -S override.sql --silent /tmp/dgbackup20090714
lsdsk command
List the disks that are visible to ASM, using V$ASM_DISK_STAT (default) or V$ASM_DISK.
lsdsk [-ksptcgHI] [-d diskgroup_name] [pattern]
lsdsk [-kptgMHI] [-G diskgroup] [--member|--candidate]
[--discovery] [--statistics] [pattern] (11g R2 syntax)
Flag Description
(none) - Displays the PATH column of V$ASM_DISK.
-k - Displays the TOTAL_MB, FREE_MB, OS_MB, NAME, FAILGROUP, LIBRARY, LABEL, UDID, PRODUCT, REDUNDANCY, and PATH columns of V$ASM_DISK.
-s or --statistics - Displays the READS, WRITES, READ_ERRS, WRITE_ERRS, READ_TIME, WRITE_TIME, BYTES_READ, BYTES_WRITTEN, and PATH columns of V$ASM_DISK.
-p - Displays the GROUP_NUMBER, DISK_NUMBER, INCARNATION, MOUNT_STATUS, HEADER_STATUS, MODE_STATUS, STATE, and PATH columns of V$ASM_DISK.
-t - Displays the CREATE_DATE, MOUNT_DATE, REPAIR_TIMER, and PATH columns of V$ASM_DISK.
-g - Selects from GV$ASM_DISK_STAT, or GV$ASM_DISK if the -c flag is also specified. GV$ASM_DISK.INST_ID is included in the output.
-c - Selects from V$ASM_DISK, or GV$ASM_DISK if the -g flag is also specified. This option is ignored if the ASM instance is version 10.1 or earlier.
-H - Suppresses column headings.
-I - Scans disk headers for information rather than extracting the information from an ASM instance. This option forces the non-connected mode.
-d or -G - Restricts results to only those disks that belong to the group specified by diskgroup_name.
--discovery - Selects from V$ASM_DISK, or from GV$ASM_DISK if the -g flag is also specified. This option is always enabled if the Oracle ASM instance is version 10.1 or earlier. This flag is disregarded if lsdsk is running in non-connected mode.
-M - Displays the disks that are visible to some but not all active instances. These are disks that, if included in a diskgroup, cause the mount of that diskgroup to fail on the instances where the disks are not visible.
--candidate - Restricts results to only disks having membership status equal to CANDIDATE.
--member - Restricts results to only disks having membership status equal to MEMBER.
pattern - Returns only information about the specified disks that match the supplied pattern.
The k, s, p, and t flags modify how much information is displayed for each disk. If any
combinations of the flags are specified, then the output shows the union of the attributes
associated with each flag.
pattern restricts the output to only those disks that match the specified pattern.
ASMCMD> lsdsk -d DG_DATA -k
ASMCMD> lsdsk -g -t -d DATA1 *_001
This command can run in connected or non-connected mode. The connected mode is always
attempted first. The -I option forces the non-connected mode.
In connected mode, ASMCMD uses dynamic views to retrieve disk information.
In non-connected mode, ASMCMD scans disk headers to retrieve disk information, using an ASM
disk string to restrict the discovery set. This is not supported on Windows.
The following example lists information about disks in the data diskgroup.
ASMCMD [+] > lsdsk -t -G data
Create_Date Mount_Date Repair_Timer Path
13-JUL-09 13-JUL-09 0 /devices/diska1
13-JUL-09 13-JUL-09 0 /devices/diska2
13-JUL-09 13-JUL-09 0 /devices/diskb1
13-JUL-09 13-JUL-09 0 /devices/diskb2
remap command
Repairs a range of physical blocks on a disk. The remap command only repairs blocks that have read
disk I/O errors. It does not repair blocks that contain corrupted contents, whether those blocks can
be read or not. The command assumes a physical block size of 512 bytes and supports all
allocation unit sizes (1MB to 64MB).
It reads the blocks from a good copy of an ASM mirror and rewrites them to an alternate location
on disk if the blocks on the original location cannot be read properly.
remap diskgroup_name disk_name block_range
Flag Description
diskgroup_name Name of the diskgroup in which a disk must be repaired.
disk_name Name of the disk that must be repaired.
block_range Range of physical blocks to repair, in the format: starting_number-ending_number
The following example repairs blocks 4500 through 5599 for disk DATA_001 in diskgroup
DISK_GRP_DATA.
ASMCMD> remap DISK_GRP_DATA DATA_001 4500-5599
The following example repairs blocks 7200 through 8899 for disk largedisk_2 in diskgroup
DISK_GRP_FRA.
ASMCMD> remap DISK_GRP_FRA largedisk_2 7200-8899
startup command
Starts up an Oracle ASM instance.
startup [--nomount] [--restrict] [--pfile pfile_name]
Flag Description
(default) Mounts diskgroups and enables Oracle ADVM volumes.
--nomount Specifies no mount operation.
--restrict Specifies restricted mode.
--pfile Oracle ASM initialization parameter file.
The following is an example of the startup command that starts the Oracle ASM instance without
mounting diskgroups and uses the asm_init.ora initialization parameter file.
ASMCMD> startup --nomount --pfile asm_init.ora
shutdown command
Shuts down an Oracle ASM instance.
shutdown [--abort|--immediate]
Flag Description
(default) Normal shutdown.
--abort Shut down aborting all existing operations.
--immediate Shut down immediately.
Oracle strongly recommends that you shut down all database instances that use the Oracle ASM
instance and dismount all file systems mounted on Oracle ASM Dynamic Volume Manager (Oracle
ADVM) volumes before attempting to shut down the Oracle ASM instance with the abort (--abort)
option.
The first example performs a shutdown of the Oracle ASM instance with normal action.
ASMCMD [+] > shutdown
The second example performs a shutdown with immediate action.
ASMCMD [+] > shutdown --immediate
The third example performs a shut down that aborts all existing operations.
ASMCMD [+] > shutdown --abort
dsset command
Sets the discovery diskstring value that is used by the Oracle ASM instance and its clients. The
specified diskstring must be valid for existing mounted diskgroups. The updated value takes effect
immediately.
dsset [--normal] [--parameter] [--profile [--force]] diskstring
Flag Description
--normal Sets the discovery string in the Grid Plug and Play (GPnP) profile and in the Oracle ASM instance. The update occurs after the Oracle ASM instance has successfully validated that the specified discovery string has discovered all the necessary diskgroups and voting files. This command fails if the instance is not using a server parameter file (SPFILE). This is the default setting.
--profile [--force] Specifies the discovery diskstring that is pushed to the GPnP profile without any validation by the Oracle ASM instance, ensuring that the instance can discover all the required diskgroups. The update is guaranteed to be propagated to all the nodes that are part of the cluster. If --force is specified, the specified diskstring is pushed to the local GPnP profile without any synchronization with other nodes in the cluster. This command option updates only the local profile file. This option should only be used for recovery. The command fails if the Oracle Clusterware stack is running.
--parameter Specifies that the diskstring is updated in memory after validating that the discovery diskstring discovers all the current mounted diskgroups and voting files. The diskstring is not persistently recorded in either the SPFILE or the GPnP profile.
diskstring Specifies the value for the discovery diskstring.
The following example uses dsset to set the current value of the discovery diskstring in the GPnP
profile.
ASMCMD [+] > dsset /devices/disk*
dsget command
Retrieves the discovery diskstring value that is used by the Oracle ASM instance and its clients.
dsget [[--normal] [--profile [--force]] [--parameter]]
Flag Description
--normal Retrieves the discovery string from the Grid Plug and Play (GPnP) profile and the one that is set in the Oracle ASM instance. It returns one row each for the profile and parameter setting. This is the default setting.
--profile [--force] Retrieves the discovery string from the GPnP profile. If --force is specified, retrieves the discovery string from the local GPnP profile.
--parameter Retrieves the ASM_DISKSTRING parameter setting of the Oracle ASM instance.
The following example uses dsget to retrieve the current discovery diskstring value from the GPnP
profile and the ASM_DISKSTRING parameter.
ASMCMD [+] > dsget
profile: /devices/disk*
parameter: /devices/disk*
lspwusr command
List the current users from the local Oracle ASM password file.
lspwusr [-H]
orapwusr command
Add, drop, or modify an Oracle ASM password file user. The command requires the SYSASM
privilege to run. A user logged in as SYSDBA cannot change its password using this command.
orapwusr {{ {--add | --modify [--password]}
[--privilege {sysasm|sysdba|sysoper}] } | --delete} user
Flag Description
--add Adds a user to the password file. Also prompts for a password.
--delete Drops a user from the password file.
--modify Changes a user in the password file.
--privilege Sets the role for the user. The options are sysasm, sysdba, and sysoper.
--password Prompts for and then changes the password of a user.
user Name of the user to add, drop, or modify.
This example adds the user Satya to the Oracle ASM password file with the role set to
SYSASM.
ASMCMD [+] > orapwusr --add --privilege sysasm Satya
ASMCMD [+] > lspwusr
Username sysdba sysoper sysasm
SYS TRUE TRUE TRUE
Satya TRUE TRUE TRUE
spset command
Sets the location of the Oracle ASM SPFILE in the Grid Plug and Play (GPnP) profile.
spset location
The following is an example of the spset command that sets the location of the Oracle ASM SPFILE
command in the dg_data diskgroup.
ASMCMD> spset +DG_DATA/asm/asmparameterfile/asmspfile.ora
spget command
Retrieves the location of the Oracle ASM SPFILE from the Grid Plug and Play (GPnP) profile.
spget
The location retrieved by spget is the location in the GPnP profile, which is not always the location of
the SPFILE currently in use. For example, the location could have been recently updated by spset or
spcopy with the -u option on an Oracle ASM instance that has not been restarted. After the next
restart of the Oracle ASM instance, this location points to the SPFILE that is being used.
The following is an example of the spget command that retrieves and displays the location of the
SPFILE from the GPnP profile.
ASMCMD [+] > spget
+DATA/asm/asmparameterfile/registry.253.691575633
spbackup command
Backs up an Oracle ASM SPFILE. spbackup does not affect the GPnP profile.
spbackup source destination
The backup file that is created is not a special file type and is not identified as an SPFILE. This file
cannot be copied with spcopy. To copy this backup file, use the ASMCMD cp command.
The first example backs up the Oracle ASM SPFILE from one operating system location to another.
ASMCMD> spbackup /u01/oracle/dbs/spfile+ASM.ora /u01/oracle/dbs/bakspfileASM.ora
The second example backs up the SPFILE from an operating system location to the
data/bakspfileASM.ora diskgroup.
ASMCMD> spbackup /u01/oracle/dbs/spfile+ASM.ora +DG_DATA/bakspfileASM.ora
spcopy command
Copies an Oracle ASM SPFILE from source to destination. To use spcopy to copy an Oracle ASM
SPFILE into a diskgroup, the diskgroup attribute COMPATIBLE.ASM must be set to 11.2 or greater.
spcopy [-u] source destination
-u updates the Grid Plug and Play (GPnP) profile. We can also use spset to update the GPnP
profile.
The first example copies the Oracle ASM SPFILE from one operating system location to another.
ASMCMD> spcopy /u01/oracle/dbs/spfile+ASM.ora /u01/oracle/dbs/testspfileASM.ora
The second example copies the SPFILE from an operating system location to the data diskgroup
and updates the GPnP profile.
ASMCMD> spcopy -u /u01/oracle/dbs/spfile+ASM.ora +DATA/testspfileASM.ora
spmove command
Moves an Oracle ASM SPFILE from source to destination and automatically updates the GPnP
profile. To use spmove to move an Oracle ASM SPFILE into a diskgroup, the diskgroup attribute
COMPATIBLE.ASM must be set to 11.2 or greater.
spmove source destination
The following example moves the SPFILE from an operating system location to the data diskgroup.
ASMCMD> spmove /u01/oracle/dbs/spfile+ASM.ora +DATA/testspfileASM.ora
lsop command
Lists the current operations on diskgroups or Oracle ASM instance, from V$ASM_OPERATION.
lsop
mkusr command
Adds a valid OS user to a diskgroup. Only a user authenticated as SYSASM can run this command.
mkusr diskgroup user
The following example adds the asmdba2 user to the dg_fra diskgroup.
ASMCMD [+] > mkusr dg_fra asmdba2
lsusr command
Lists Oracle ASM users in a diskgroup.
lsusr [-Ha] [-G diskgroup] [pattern]
Flag Description
-H Suppresses column headings.
-a List all users and the diskgroups to which the users belong.
-G Limits the results to the specified diskgroup name.
pattern Displays the users that match the pattern expression.
The example lists users in the asm_dg_data diskgroup and also shows the OS user Id assigned to
the user.
ASMCMD [+] > lsusr -G asm_dg_data
User_Num OS_ID OS_Name
3 1001 oradba
1 1021 asmdba1
2 1022 asmdba2
passwd command
Changes the password of a user. The command requires the SYSASM privilege to run.
passwd user
The user is first prompted for the current password, then the new password. An error will be raised
if the user does not exist in the Oracle ASM password file.
rmusr command
Deletes an OS user from a diskgroup. Only a user authenticated as SYSASM can run this
command.
rmusr [-r] diskgroup user
-r removes all files in the diskgroup that the user owns at the same time that the user is removed.
The following is an example to remove the asmdba2 user from the dg_data2 diskgroup.
ASMCMD [+] > rmusr dg_data2 asmdba2
mkgrp command
Creates a new Oracle ASM user group. We can optionally specify a list of users to be included as
members of the new user group. A user group name can have a maximum of 30 characters.
mkgrp diskgroup usergroup [user] [user...]
This example creates the asm_data user group in the dg_data diskgroup and adds the asmdba1
and asmdba2 users to the user group.
ASMCMD [+] > mkgrp dg_data asm_data asmdba1 asmdba2
lsgrp command
Lists all Oracle ASM user groups.
lsgrp [-Ha] [-G diskgroup] [pattern]
Flag Description
-H Suppresses column headings.
-a Lists all columns.
-G Limits the results to the specified diskgroup name.
pattern Displays the user groups that match the pattern expression.
The following example displays a subset of information about the user groups whose name
matches the asm% pattern.
ASMCMD [+] > lsgrp asm%
DG_Name Grp_Name Owner
DG_FRA asm_fra grid
DG_DATA asm_data grid
The second example displays all information about all the user groups.
ASMCMD [+] > lsgrp -a -G DG_DATA
DG_Name Grp_Name Owner Members
DG_DATA asm_data grid asmdba1 asmdba2
rmgrp command
Removes a user group from a diskgroup. The command must be run by the owner of the group
and also requires the SYSASM privilege to run.
rmgrp diskgroup usergroup
Removing a group might leave some files without a valid group. To ensure that those files have a
valid group, explicitly update those files to a valid group.
The following is an example of the rmgrp command that removes the asm_data user group from
the dg_data diskgroup.
ASMCMD [+] > rmgrp dg_data asm_data
grpmod command
Adds or removes OS users to and from an existing Oracle ASM user group. Only the owner of the
user group can use this command. The command requires the SYSASM privilege to run. This
command accepts an OS user name or multiple user names separated by spaces. The OS users
are typically owners of a database instance home.
grpmod {--add | --delete} diskgroup usergroup user [user...]
Flag Description
--add Specifies to add users to the user group.
--delete Specifies to delete users from the user group.
usergroup Name of the user group.
user Name of the user to add or remove from the user group.
The following example adds the asmdba1 and asmdba2 users to the asm_fra user group of the
dg_fra diskgroup.
ASMCMD [+] > grpmod --add dg_fra asm_fra asmdba1 asmdba2
The second example removes the asmdba2 user from the asm_data user group of the dg_data
diskgroup.
ASMCMD [+] > grpmod --delete dg_data asm_data asmdba2
groups command
Lists all the user groups to which the specified user belongs.
groups diskgroup user
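For illustration, reusing the diskgroup, user, and user group names from the earlier examples, the command would list the user's groups:
ASMCMD [+] > groups dg_data asmdba1
asm_data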
chmod command
Changes permissions of a file or list of files. This command accepts a file name or multiple file
names separated by spaces. The specified files must be closed.
chmod mode file [file ...]
We can only set file permissions to read-write, read-only, and no permissions. We cannot set file
permissions to write-only.
ASMCMD [+fra/orcl/archivelog/flashback] > chmod ug+rw log_7.264.684968167
log_8.265.684972027
To view the permissions on a file, use the ASMCMD ls command with the --permission option.
ASMCMD [+] > ls --permission +fra/orcl/archivelog/flashback
User Group Permission Name
grid asm_fra rw-r----- log_7.264.684968167
grid asm_fra rw-r----- log_8.265.684972027
chown command
Changes the owner of a file or list of files. This command accepts a file name or multiple file names
separated by spaces. The specified files must be closed. Only the Oracle ASM administrator can
use this command.
chown user[:usergroup] file [file ...]
user typically refers to the user that owns the database instance home. Oracle ASM File Access
Control uses the OS name to identify a database.
ASMCMD [+fra/orcl/archivelog/flashback] > chown asmdba1 log_7.264.684968167
log_8.265.684972027
chgrp command
Changes the Oracle ASM user group of a file or list of files. This command accepts a file name or
multiple file names separated by spaces. Only the file owner or the Oracle ASM administrator can
use this command. If the user is the file owner, then he must also be either the owner or a
member of the group for this command to succeed.
chgrp usergroup file [file ...]
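A brief sketch, reusing the illustrative file and group names from the chown example above:
ASMCMD [+fra/orcl/archivelog/flashback] > chgrp asm_fra log_7.264.684968167 log_8.265.684972027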
mkdg command
Creates a diskgroup based on an XML configuration file which specifies the name of the diskgroup,
redundancy, attributes, and paths of the disks that form the diskgroup.
mkdg {config_file.xml | 'contents_of_xml_file'}
Flag Description
config_file Name of the XML file that contains the configuration for the new diskgroup. mkdg searches for the XML file in the directory where ASMCMD was started unless a path is specified.
contents_of_xml_file The XML script enclosed in single quotations.
Redundancy is an optional parameter; the default is normal redundancy. For some types of
redundancy, disks are required to be gathered into failure groups. In the case that failure groups
are not specified for a diskgroup, each disk in the diskgroup belongs to its own failure group.
It is possible to set some diskgroup attribute values during diskgroup creation. Some attributes,
such as AU_SIZE and SECTOR_SIZE, can be set only during diskgroup creation.
The default diskgroup compatibility settings are 10.1 for Oracle ASM compatibility, 10.1 for
database compatibility, and no value for Oracle ADVM compatibility.
The XML configuration file uses the following tags:
<dg> diskgroup
name diskgroup name
redundancy normal, external, or high
</dg>
<fg> failure group
name failure group name
</fg>
<dsk> disk
name disk name
path disk path
size size of the disk to add
</dsk>
<a> attribute
name attribute name
value attribute value
</a>
The following is an example of an XML configuration file for mkdg. The configuration file creates a
diskgroup named dg_data with normal redundancy. Two failure groups, fg1 and fg2, are created,
each with two disks identified by associated disk strings. The diskgroup compatibility attributes are
all set to 11.2.
<dg name="dg_data" redundancy="normal">
<fg name="fg1">
<dsk string="/dev/disk1"/>
<dsk string="/dev/disk2"/>
</fg>
<fg name="fg2">
<dsk string="/dev/disk3"/>
<dsk string="/dev/disk4"/>
</fg>
<a name="compatible.asm" value="11.2"/>
<a name="compatible.rdbms" value="11.2"/>
<a name="compatible.advm" value="11.2"/>
</dg>
The first example executes mkdg with an XML configuration file in the directory where ASMCMD
was started.
ASMCMD [+] > mkdg data_config.xml
The second example executes mkdg using information on the command line.
ASMCMD [+] > mkdg '<dg name="data"><dsk path="/dev/disk*"/></dg>'
chdg command
Changes a diskgroup (adds disks, drops disks, or rebalances) based on an XML configuration file.
chdg {config_file.xml | 'contents_of_xml_file'}
Flag Description
config_file Name of the XML file that contains the changes for the diskgroup. chdg searches for the XML file in the directory where ASMCMD was started unless a path is specified.
contents_of_xml_file The XML script enclosed in single quotations.
The modification includes adding or deleting disks from an existing diskgroup, and setting the
rebalance power level. The power level can be set from 0 to a maximum of 11, the same values as
the ASM_POWER_LIMIT initialization parameter.
When adding disks to a diskgroup, the diskstring must be specified in a format similar to the
ASM_DISKSTRING initialization parameter.
The failure groups are optional parameters. The default causes every disk to belong to its own
failure group.
Dropping disks from a diskgroup can be performed through this operation. An individual disk can
be referenced by its Oracle ASM disk name. A set of disks that belong to a failure group can be
specified by the failure group name.
We can resize a disk inside a diskgroup with chdg. The resize operation fails if there is not enough
space for storing data after the resize.
The XML configuration file for chdg uses the following tags:
<chdg> diskgroup to change
name diskgroup name
power power level for the rebalance
</chdg>
<add> disks or failure groups to add
</add>
<drop> disks or failure groups to drop
</drop>
<fg> failure group
name failure group name
</fg>
<dsk> disk
name disk name
path disk path
size size of the disk to add
</dsk>
The following is an example of an XML configuration file for chdg. This XML file alters the diskgroup
named data. The failure group fg1 is dropped and the disk data_0001 is also dropped. The
/dev/disk8 disk is added to failure group fg2. The rebalance power level is set to 4.
<chdg name="data" power="4">
<drop>
<fg name="fg1"></fg>
<dsk name="data_0001"/>
</drop>
<add>
<fg name="fg2">
<dsk string="/dev/disk8"/>
</fg>
</add>
</chdg>
The following are examples of the chdg command with the configuration file or configuration
information on the command line.
ASMCMD [+] > chdg data_config.xml
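An inline-XML invocation might look like the following sketch (the disk string is illustrative):
ASMCMD [+] > chdg '<chdg name="data" power="3"><add><dsk string="/dev/disk5"/></add></chdg>'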
dropdg command
Drops an existing diskgroup. The diskgroup cannot be mounted on multiple nodes.
dropdg [-r] [-f] diskgroup
Flag Description
-f Force the operation. Only applicable if the diskgroup cannot be mounted.
-r Recursive, include contents.
diskgroup Name of diskgroup to drop.
The first example forces the drop of the diskgroup dg_data, including any data in the diskgroup.
ASMCMD [+] > dropdg -r -f dg_data
The second example drops the diskgroup dg_fra, including any data in the diskgroup.
ASMCMD [+] > dropdg -r dg_fra
chkdg command
Checks or repairs the metadata of a diskgroup. chkdg checks the metadata of a diskgroup for
errors and optionally repairs the errors.
chkdg [--repair] diskgroup
The following is an example of the chkdg command used to check and repair the dg_data
diskgroup.
ASMCMD [+] > chkdg --repair dg_data
mount command
Mounts one or more diskgroups. A diskgroup can be mounted with or without the force or
restricted options.
mount [--restrict] {[-a] | [-f] diskgroup[,diskgroup,...]}
Flag Description
--restrict Mounts in restricted mode.
-a Mounts all diskgroups.
-f Forces the mount operation.
diskgroup Name of the diskgroup.
The following are examples of the mount command showing the use of the force, restrict, and all
options.
ASMCMD [+] > mount -f data
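The restricted and all-diskgroup variants would look like this (the diskgroup name is illustrative):
ASMCMD [+] > mount --restrict data
ASMCMD [+] > mount -a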
umount command
Dismounts the specified diskgroups.
umount {-a | [-f] diskgroup}
Flag Description
-a Dismounts all mounted diskgroups. These diskgroups are listed in the output of the V$ASM_DISKGROUP view.
-f Forces the dismount operation.
diskgroup Name of the diskgroup.
The first example dismounts all diskgroups mounted on the Oracle ASM instance.
ASMCMD [+] > umount -a
The second example forces the dismount of the data disk group.
ASMCMD [+] > umount -f data
offline command
Offlines disks or failure groups that belong to a diskgroup.
offline -G diskgroup {-F failgroup|-D disk} [-t {minutes|hours}]
Flag Description
-G Diskgroup name.
-F Failure group name.
-D Specifies a single disk name.
-t Specifies the time before the specified disk is dropped as nm or nh, where m specifies minutes and h specifies hours. The default unit is hours.
When a failure group is specified, this implies all the disks that belong to it should be offlined.
The first example offlines the failgroup1 failure group of the dg_data diskgroup.
ASMCMD [+] > offline -G dg_data -F failgroup1
The second example offlines the data_0001 disk of the dg_data diskgroup with a time of 1.5 hours
before the disk is dropped.
ASMCMD [+] > offline -G dg_data -D data_0001 -t 1.5h
online command
Onlines all disks, a single disk, or a failure group that belongs to a diskgroup.
online {[-a] -G diskgroup|-F failgroup|-D disk} [-w]
Flag Description
-a Online all offline disks in the diskgroup.
-G Diskgroup name.
-F Failure group name.
-D Disk name.
-w Wait option. Causes ASMCMD to wait for the diskgroup to be rebalanced before returning control to the user. The default is not waiting.
When a failure group is specified, this implies all the disks that belong to it should be onlined.
The first example onlines all disks in the failgroup1 failure group of the dg_data diskgroup with the
wait option enabled.
ASMCMD [+] > online -G dg_data -F failgroup1 -w
The second example onlines the data_0001 disk in the dg_data diskgroup.
ASMCMD [+] > online -G dg_data -D data_0001
rebal command
Rebalances a diskgroup.
rebal [--power power_value] [-w] diskgroup
Flag Description
--power Power setting (0 to 11).
-w Wait option. Causes ASMCMD to wait for the diskgroup to be rebalanced before returning control to the user. The default is not waiting.
diskgroup Diskgroup name.
The following example rebalances the dg_fra diskgroup with a power level set to 6.
ASMCMD [+] > rebal --power 6 dg_fra
We can determine if a rebalance operation is occurring with the ASMCMD lsop command.
ASMCMD [+] > lsop
Group_Name Dsk_Num State Power
FRA REBAL RUN 6
iostat command
Displays I/O statistics of disks in mounted ASM diskgroups, using V$ASM_DISK_IOSTAT.
iostat [-etH] [--io] [--region] [-G diskgroup] [interval]
Flag Description
-e Displays error statistics (Read_Err, Write_Err).
-t Displays time statistics (Read_Time, Write_Time).
-H Suppresses column headings.
--io Displays information in number of I/Os, instead of bytes.
--region Displays information for cold and hot disk regions (Cold_Reads, Cold_Writes, Hot_Reads, Hot_Writes).
-G Displays statistics for the diskgroup name.
interval Refreshes the statistics display based on the interval value (seconds). Use Ctrl-C to stop the interval display.
To see the complete set of statistics for a diskgroup, use the V$ASM_DISK_IOSTAT view.
Attribute Name Description
Group_Name Name of the diskgroup.
Dsk_Name Name of the disk.
Reads Number of bytes read from the disk. If the --io option is entered, then the value is displayed as number of I/Os.
Writes Number of bytes written to the disk. If the --io option is entered, then the value is displayed as number of I/Os.
Cold_Reads Number of bytes read from the cold disk region. If the --io option is entered, then the value is displayed as number of I/Os.
Cold_Writes Number of bytes written to the cold disk region. If the --io option is entered, then the value is displayed as number of I/Os.
Hot_Reads Number of bytes read from the hot disk region. If the --io option is entered, then the value is displayed as number of I/Os.
Hot_Writes Number of bytes written to the hot disk region. If the --io option is entered, then the value is displayed as number of I/Os.
Read_Err Number of failed I/O read requests for the disk.
Write_Err Number of failed I/O write requests for the disk.
Read_Time I/O time (in hundredths of a second) for read requests for the disk if the TIMED_STATISTICS initialization parameter is set to TRUE (0 if set to FALSE).
Write_Time I/O time (in hundredths of a second) for write requests for the disk if the TIMED_STATISTICS initialization parameter is set to TRUE (0 if set to FALSE).
If a refresh interval is not specified, the number displayed represents the total number of bytes or
I/Os. If a refresh interval is specified, then the value displayed (bytes or I/Os) is the difference
between the previous and current values, not the total value.
The first example displays disk I/O statistics for the data diskgroup in total number of bytes.
ASMCMD [+] > iostat -G data
The second example displays disk I/O statistics for the data diskgroup in total number of I/O
operations.
ASMCMD [+] > iostat --io -G data
Group_Name Dsk_Name Reads Writes
DATA DATA_0000 2801 34918
DATA DATA_0001 58301 35700
DATA DATA_0002 3320 36345
ASMCMD> iostat -t
Group_Name Disk_Name Reads Writes Read_Time Write_Time
FRA DATA_0099 54601 38411 441.234546 672.694266
setattr command
Changes an attribute of a diskgroup.
setattr -G disk_group attribute_name attribute_value
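For example, a sketch of raising a compatibility attribute (the diskgroup name and value are illustrative):
ASMCMD [+] > setattr -G dg_data compatible.rdbms 11.2.0.0.0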
lsattr command
Lists the attributes of a diskgroup.
lsattr [-G diskgroup] [-Hlm] [pattern]
Flag Description
-G Diskgroup name.
-H Suppresses column headings.
-l Display names with values.
-m Displays additional information, such as the RO and Sys columns.
pattern Displays the attributes that contain the pattern expression.
The RO (read-only) column identifies those attributes that can only be set when a diskgroup is
created. The Sys column identifies those attributes that are system-created.
lsod command
Lists the open ASM disks.
lsod [-H] [-G diskgroup] [--process process_name] [pattern]
Flag Description
-H Suppresses column header information from the output.
-G Specifies the diskgroup that contains the open disks.
--process Specifies a pattern to filter the list of processes.
pattern Specifies a pattern to filter the list of disks.
The rebalance operation (RBAL) opens a disk both globally and locally so the same disk may be
listed twice in the output for the RBAL process.
The first example lists the open devices associated with the data diskgroup and the LGWR process.
ASMCMD [+] > lsod -G data --process LGWR
Instance Process OSPID Path
1 oracle@dadvmn0652 (LGWR) 26593 /devices/diska1
1 oracle@dadvmn0652 (LGWR) 26593 /devices/diska2
1 oracle@dadvmn0652 (LGWR) 26593 /devices/diska3
1 oracle@dadvmn0652 (LGWR) 26593 /devices/diskb1
1 oracle@dadvmn0652 (LGWR) 26593 /devices/diskb2
1 oracle@dadvmn0652 (LGWR) 26593 /devices/diskb3
1 oracle@dadvmn0652 (LGWR) 26593 /devices/diskd1
The second example lists the open devices associated with the LGWR process for disks that match
the diska pattern.
ASMCMD [+] > lsod --process LGWR diska
Instance Process OSPID Path
1 oracle@dadvmn0652 (LGWR) 26593 /devices/diska1
1 oracle@dadvmn0652 (LGWR) 26593 /devices/diska2
1 oracle@dadvmn0652 (LGWR) 26593 /devices/diska3
mktmpl command
Adds a template to a diskgroup.
mktmpl -G diskgroup [--striping {coarse|fine}]
[--redundancy {high|mirror|unprotected}] [--primary {hot|cold}]
[--secondary {hot|cold}] template
Flag Description
-G Name of the diskgroup.
--striping Striping specification, either coarse or fine.
--redundancy Redundancy specification, either high, mirror, or unprotected.
--primary Intelligent Data Placement specification for primary extents, either hot or cold region.
--secondary Intelligent Data Placement specification for secondary extents, either hot or cold region.
template Name of the template to create.
The following example adds temp_mc template to the dg_data diskgroup. The new template has
the redundancy set to mirror and the striping set to coarse.
ASMCMD [+] > mktmpl -G dg_data --redundancy mirror --striping coarse temp_mc
SQL equivalent for mktmpl command is:
SQL> ALTER DISKGROUP disk_group ADD TEMPLATE template_name ...;
lstmpl command
Lists all templates or the templates for a specified diskgroup.
lstmpl [-Hl] [-G diskgroup] [pattern]
Flag Description
-H Suppresses column headings.
-l Displays all details.
-G Specifies diskgroup name.
pattern Displays the templates that match pattern expression.
The example lists all details of the templates in the dg_data diskgroup.
ASMCMD [+] > lstmpl -l -G dg_data
Group_Name Group_Num Name Stripe Sys Redund PriReg MirrReg
DG_DATA 1 ARCHIVELOG COARSE Y MIRROR COLD COLD
DG_DATA 1 ASMPARAMETERFILE COARSE Y MIRROR COLD COLD
DG_DATA 1 AUTOBACKUP COARSE Y MIRROR COLD COLD
DG_DATA 1 BACKUPSET COARSE Y MIRROR COLD COLD
DG_DATA 1 CHANGETRACKING COARSE Y MIRROR COLD COLD
DG_DATA 1 CONTROLFILE FINE Y HIGH COLD COLD
DG_DATA 1 DATAFILE COARSE Y MIRROR COLD COLD
DG_DATA 1 DATAGUARDCONFIG COARSE Y MIRROR COLD COLD
DG_DATA 1 DUMPSET COARSE Y MIRROR COLD COLD
DG_DATA 1 FLASHBACK COARSE Y MIRROR COLD COLD
DG_DATA 1 MYTEMPLATE FINE N HIGH COLD COLD
DG_DATA 1 OCRFILE COARSE Y MIRROR COLD COLD
DG_DATA 1 ONLINELOG COARSE Y MIRROR COLD COLD
DG_DATA 1 PARAMETERFILE COARSE Y MIRROR COLD COLD
DG_DATA 1 TEMPFILE COARSE Y MIRROR COLD COLD
DG_DATA 1 XTRANSPORT COARSE Y MIRROR COLD COLD
chtmpl command
Changes the attributes of a template.
chtmpl -G diskgroup { [--striping {coarse|fine}]
[--redundancy {high|mirror|unprotected}] [--primary {hot|cold}]
[--secondary {hot|cold}]} template
Flag Description
-G Name of the diskgroup.
--striping Striping specification, either coarse or fine.
--redundancy Redundancy specification, either high, mirror, or unprotected.
--primary Intelligent Data Placement specification for primary extents, either hot or cold region.
--secondary Intelligent Data Placement specification for secondary extents, either hot or cold region.
template Name of the template to change.
The following example updates temp_hf template of the dg_fra diskgroup. The redundancy
attribute is set to high and the striping attribute is set to fine.
ASMCMD [+] > chtmpl -G dg_fra --redundancy high --striping fine temp_hf
rmtmpl command
Removes a template from a diskgroup.
rmtmpl -G diskgroup template
The following example removes temp_uf template from the dg_data diskgroup.
ASMCMD [+] > rmtmpl -G dg_data temp_uf
volcreate command
Creates an Oracle ADVM volume in the specified diskgroup.
volcreate -G diskgroup -s size [--column number] [--width stripe_width]
[--redundancy {high|mirror|unprotected}] [--primary {hot|cold}]
[--secondary {hot|cold}] volume
Flag Description
-G Name of the diskgroup containing the volume.
-s size Size of the volume to be created in units of K, M, G, T, P, or E. The unit designation must be appended to the number specified. No space is allowed. For example: 20G
--column Number of columns in a stripe set. Values range from 1 to 8. The default value is 4.
--width Stripe width of a volume. The value can range from 4 KB to 1 MB, at power-of-two intervals, with a default of 128 KB.
--redundancy Redundancy of the Oracle ADVM volume, which can be specified for normal redundancy diskgroups. The values are: unprotected for non-mirrored redundancy, mirror for double-mirrored redundancy, or high for triple-mirrored redundancy. If redundancy is not specified, the setting defaults to the redundancy level of the diskgroup.
--primary Intelligent Data Placement specification for primary extents, either hot or cold region.
--secondary Intelligent Data Placement specification for secondary extents, either hot or cold region.
volume Name of the volume to be created. Can be a maximum of 11 alphanumeric characters; dashes are not allowed. The first character must be alphabetic.
When creating an Oracle ADVM volume, a volume device name is created with a unique Oracle
ADVM persistent diskgroup number that is concatenated to the end of the volume name. The
unique number can be one to three digits.
On Linux, the volume device name is in the format volume_name-nnn, such as volume1-123. On
Windows the volume device name is in the format asm-volume_name-nnn, such as asm-volume1-
123.
The volume device file functions as any other disk or logical volume to mount file systems or for
applications to use directly.
The following example creates volume1 in the dg_data diskgroup with the size set to 10 gigabytes.
ASMCMD [+] > volcreate -G dg_data -s 10G --width 64K --column 8 volume1
You can determine the volume device name with the volinfo command.
ASMCMD [+] > volinfo -G dg_data volume1
Diskgroup Name: DG_DATA
Volume Name: VOLUME1
Volume Device: /dev/asm/volume1-123
State: ENABLED
Size (MB): 10240
Resize Unit (MB): 512
Redundancy: MIRROR
Stripe Columns: 8
Stripe Width (K): 64
Usage:
Mountpath:
volinfo command
Displays information about Oracle ADVM volumes.
volinfo {-a | -G diskgroup -a | -G diskgroup volume}
volinfo {--show_diskgroup|--show_volume} volumedevice
Flag Description
-a When used without a diskgroup name, specifies all volumes within all diskgroups. When used with a diskgroup name (-G diskgroup -a), specifies all volumes within that diskgroup.
-G Name of the diskgroup containing the volume.
volume Name of the volume.
--show_diskgroup Returns only the diskgroup name. A volume device name is required.
--show_volume Returns only the volume name. A volume device name is required.
volumedevice Name of the volume device.
The first example displays information about the volume1 volume in the dg_data diskgroup and
was produced in a Linux environment. The mount path field displays the last mount path for the
volume.
ASMCMD [+] > volinfo -G dg_data volume1
Diskgroup Name: DG_DATA
Volume Name: VOLUME1
Volume Device: /dev/asm/volume1-123
State: ENABLED
Size (MB): 10240
Resize Unit (MB): 512
Redundancy: MIRROR
Stripe Columns: 8
Stripe Width (K): 64
Usage: ACFS
Mountpath: /u01/app/acfsmounts/acfs1
The second example displays information about all volumes in the dg_data diskgroup and was
produced in a Windows environment.
ASMCMD [+] > volinfo -G dg_data -a
Diskgroup Name: DG_DATA
Volume Name: VOLUME1
Volume Device: \\.\asm-volume1-311
State: ENABLED
Size (MB): 1024
Resize Unit (MB): 256
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage: ACFS
Mountpath: C:\oracle\acfsmounts\acfs1
voldelete command
Deletes an Oracle ADVM volume.
voldelete -G diskgroup volume
To successfully execute this command, the local Oracle ASM instance must be running and the
diskgroup required by this command must be mounted in the Oracle ASM instance. Before deleting
a volume, you must ensure that there are no active file systems associated with the volume.
voldisable command
Disables Oracle ADVM volumes in mounted diskgroups and removes the volume device on the local
node.
voldisable {-a | -G diskgroup -a | -G diskgroup volume}
Flag Description
-a When used without a diskgroup name, specifies all volumes within all diskgroups. When used with a diskgroup name (-G diskgroup -a), specifies all volumes within that diskgroup.
-G Name of the diskgroup containing the volume.
volume Name of the volume to be operated on. Can be a maximum of 11 alphanumeric characters; the first character must be alphabetic.
You can disable volumes before shutting down an Oracle ASM instance or dismounting a diskgroup
to verify that the operations can be accomplished normally without including a force option due to
open volume files. Disabling a volume also prevents any subsequent opens on the volume or
device file because it no longer exists.
Before disabling a volume, you must ensure that there are no active file systems associated with
the volume. You must first dismount the Oracle ACFS file system before disabling the volume. You
can delete a volume without first disabling the volume.
volenable command
Enables Oracle ADVM volumes in mounted diskgroups. A volume is enabled when it is created.
volenable {-a | -G diskgroup -a | -G diskgroup volume}
Flag Description
-a When used without a diskgroup name, specifies all volumes within all diskgroups. When used with a diskgroup name (-G diskgroup -a), specifies all volumes within that diskgroup.
-G Name of the diskgroup containing the volume.
volume Name of the volume to be operated on.
volresize command
Resizes an Oracle ADVM volume.
volresize -G diskgroup -s size [-f] volume
Flag Description
-G Name of the diskgroup containing the volume.
-s New size of the volume in units of K, M, G, or T.
-f Force the shrinking of a volume that is not an Oracle ACFS volume, to suppress the warning message.
volume Name of the volume to be operated on.
If the volume is mounted on a non-Oracle ACFS file system, then dismount the file system first
before resizing. If the new size is smaller than current, you are warned of possible data corruption.
Unless the -f (force) option is specified, you are prompted whether to continue with the operation.
If there is an Oracle ACFS file system on the volume, then you cannot resize the volume with the
volresize command. You must use the acfsutil size command, which also resizes the volume and
file system.
The following is an example of the volresize command that resizes volume1 in the dg_data
diskgroup to 20 gigabytes.
ASMCMD [+] > volresize -G dg_data -s 20G volume1
volset command
Sets attributes of an Oracle ADVM volume in mounted diskgroups.
volset -G diskgroup [--usagestring string] [--mountpath mount_path]
[--primary {hot|cold}] [--secondary {hot|cold}] volume
Flag Description
-G Name of the diskgroup containing the volume.
--usagestring Optional usage string to tag a volume, which can be up to 30 characters. This string is set to ACFS when the volume is attached to an Oracle ACFS file system and should not be changed.
--mountpath Optional string to tag a volume with its mount path string, which can be up to 1024 characters. This string is set when the file system is mounted and should not be changed.
--primary Intelligent Data Placement specification for primary extents, either hot or cold region.
--secondary Intelligent Data Placement specification for secondary extents, either hot or cold region.
volume Name of the volume to be operated on.
When running the mkfs command to create a file system, the usage field is set to ACFS and
mountpath field is reset to an empty string if it has been set. The usage field should remain at
ACFS.
When running the mount command to mount a file system, the mountpath field is set to the
mount path value to identify the mount point for the file system. After the value is set by the
mount command, the mountpath field should not be updated.
The following is an example of a volset command that sets the usage string for a volume that is
not associated with a file system.
ASMCMD [+] > volset -G dg_arch --usagestring 'no file system created' volume1
ASMCMD [+] > volset -G dg_data --usagestring 'acfs' volume1
volstat command
Reports I/O statistics for Oracle ADVM volumes.
volstat [-G diskgroup] [volume]
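For illustration (the diskgroup and volume names are assumptions):
ASMCMD [+] > volstat -G dg_data volume1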
lsof command
Lists the open files of the local clients.
lsof [-H] {-G diskgroup|--dbname db|-C instance}
Flag Description
-H Suppresses column headings.
-G List files only from this specified disk group.
--dbname List files only from this specified database.
-C List files only from this specified instance.
ASMCMD [+] > lsof -G dg_data
DB_Name Instance_Name Path
orcl orcl +dg_data/orcl/controlfile/current.260.691577263
orcl orcl +dg_data/orcl/datafile/example.265.691577295
orcl orcl +dg_data/orcl/datafile/sysaux.257.691577149
orcl orcl +dg_data/orcl/datafile/system.256.691577149
orcl orcl +dg_data/orcl/datafile/undotbs1.258.691577151
orcl orcl +dg_data/orcl/datafile/users.259.691577151
orcl orcl +dg_data/orcl/onlinelog/group_1.261.691577267
orcl orcl +dg_data/orcl/onlinelog/group_2.262.691577271
orcl orcl +dg_data/orcl/onlinelog/group_3.263.691577275
orcl orcl +dg_data/orcl/tempfile/temp.264.691577287
Auditing in Oracle
The auditing mechanism for Oracle is extremely flexible. Oracle stores
information that is relevant to auditing in its data dictionary.
Every time a user attempts anything in the database where audit is enabled,
the Oracle kernel checks to see if an audit record should be created or
updated (in the case of a session record) and generates the record in a
table owned by the SYS user called AUD$. This table is, by default, located
in the SYSTEM tablespace. This in itself can cause problems with potential
denial of service attacks: if the SYSTEM tablespace fills up, the database
will hang.
init parameters
Up to Oracle 10g, auditing is disabled by default, but can be enabled by
setting the AUDIT_TRAIL static parameter in the init.ora file.
From Oracle 11g, auditing is enabled by default for some system-level privileges.
audit_trail string DB
The AUDIT_FILE_DEST parameter specifies the OS directory used for the audit
trail when the OS, XML, and XML,EXTENDED options are used. It is also the
location for all mandatory auditing specified by the AUDIT_SYS_OPERATIONS
parameter.
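A minimal sketch of enabling standard auditing; because AUDIT_TRAIL is static, the
change goes to the SPFILE and a restart is required:
SQL> ALTER SYSTEM SET audit_trail=DB SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP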
Start Auditing
Syntax of audit command:
audit {statement_option|privilege_option} [by user] [by {session|access}]
[whenever {successful|not successful}]
Only the statement_option or privilege_option part is mandatory. The other
clauses are optional; enabling them allows auditing to be more specific.
Statement level
Auditing will be done at statement level.
Statements that can be audited are found in STMT_AUDIT_OPTION_MAP.
SQL> audit table by scott;
Object level
Auditing will be done at object level.
These objects can be audited: tables, views, sequences, packages, stored
procedures and stored functions.
SQL> audit insert, update, delete on scott.emp by hr;
Privilege level
Auditing will be done at privilege level.
All system privileges that are found in SYSTEM_PRIVILEGE_MAP can be
audited.
SQL> audit create tablespace, alter tablespace by all;
Audit options
BY SESSION
Specify BY SESSION if you want Oracle to write a single record for all SQL
statements of the same type issued and operations of the same type executed
on the same schema objects in the same session.
Oracle database can write to an operating system audit file but cannot read
it to detect whether an entry has already been written for a particular
operation. Therefore, if you are using an operating system file for the
audit trail (that is, the AUDIT_TRAIL initialization parameter is set to
OS), then the database may write multiple records to the audit trail file
even if you specify BY SESSION.
SQL> audit create, alter, drop on currency by xe by session;
SQL> audit alter materialized view by session;
BY ACCESS
Specify BY ACCESS if you want Oracle database to write one record for each
audited statement and operation.
For statement options and system privileges that audit SQL statements other
than DDL, you can specify either BY SESSION or BY ACCESS. BY SESSION is the
default.
SQL> audit update on health by access;
SQL> audit alter sequence by tester by access;
WHENEVER [NOT] SUCCESSFUL
Specify WHENEVER SUCCESSFUL to audit only successful statements, or WHENEVER
NOT SUCCESSFUL to audit only failed ones. If you omit this clause, then Oracle
Database performs the audit regardless of success or failure.
SQL> audit insert, update, delete on hr.emp by hr by session whenever not
successful;
SQL> audit materialized view by pingme by access whenever successful;
Examples
Auditing for every SQL statement related to roles (create, alter, drop or
set a role).
SQL> AUDIT ROLE;
Auditing for every statement that reads files from a database directory.
SQL> AUDIT READ ON DIRECTORY ext_dir;
Auditing for every statement that performs any operation on the sequence.
SQL> AUDIT ALL ON hr.emp_seq;
The audit trail contains lots of data, but the following are most likely to
be of interest:
Username - Oracle Username.
Terminal - Machine that the user performed the action from.
Timestamp - When the action occurred.
Object Owner - The owner of the object that was interacted with.
Object Name - name of the object that was interacted with.
Action Name - The action that occurred against the object (INSERT, UPDATE,
DELETE, SELECT, EXECUTE)
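These columns can be read from the DBA_AUDIT_TRAIL view; a quick sketch:
SQL> SELECT username, terminal, timestamp, owner, obj_name, action_name
     FROM dba_audit_trail ORDER BY timestamp;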
Several fields have been added to both the standard and fine-grained audit
trails in recent releases, notably the SQL text and bind values of the
audited statement.
Only users who have been granted specific access to SYS.AUD$ can access the
table to select, alter or delete from it. This is usually just the user SYS
or any user who has been granted the permissions. There are two specific
roles that allow access to SYS.AUD$ for select and delete:
DELETE_CATALOG_ROLE and SELECT_CATALOG_ROLE. These roles should not be
granted to general users.
From Oracle 11g R2, we can change the audit tables' (SYS.AUD$ and SYS.FGA_LOG$)
tablespace and periodically delete the audit trail records using the
DBMS_AUDIT_MGMT package.
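A hedged sketch of both operations; the tablespace name is illustrative, and
CLEAN_AUDIT_TRAIL assumes DBMS_AUDIT_MGMT.INIT_CLEANUP has already been run:
BEGIN
  -- move the standard audit trail to a dedicated tablespace
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    audit_trail_location_value => 'AUDIT_TBS');
  -- purge all records in the standard audit trail
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    use_last_arch_timestamp => FALSE);
END;
/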
Disabling Auditing
The NOAUDIT statement turns off the various audit options of Oracle. Use it
to reset statement, privilege and object audit options. A NOAUDIT statement
that sets statement and privilege audit options can include the BY USER
option to specify a list of users to limit the scope of the statement and
privilege audit options.
SQL> NOAUDIT;
SQL> NOAUDIT session;
SQL> NOAUDIT session BY scott, hr;
SQL> NOAUDIT DELETE ON emp;
SQL> NOAUDIT SELECT TABLE, INSERT TABLE, DELETE TABLE, EXECUTE PROCEDURE;
SQL> NOAUDIT ALL;
SQL> NOAUDIT ALL PRIVILEGES;
SQL> NOAUDIT ALL ON DEFAULT;
Background Processes in Oracle
Not all background processes are mandatory for an instance. Some are
mandatory and some are optional. Mandatory background processes are DBWn,
LGWR, CKPT, SMON, PMON, and RECO. All other processes are optional and will
be invoked only if that particular feature is activated.
DBWn
Database Writer (or Dirty Buffer Writer) process does multi-block writing
to disk asynchronously. One DBWn process is adequate for most systems.
Multiple database writers can be configured with the initialization parameter
DB_WRITER_PROCESSES, depending on the number of CPUs allocated to the
instance. Having more than one DBWn only makes sense if each DBWn has been
allocated its own list of blocks to write to disk. This is done through the
initialization parameter DB_BLOCK_LRU_LATCHES. If this parameter is not set
correctly, multiple DB writers can end up contending for the same block
list.
The possible multiple DBWR processes in RAC must be coordinated through the
locking and global cache processes to ensure efficient processing is
accomplished.
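For example, a sketch of configuring four writers; DB_WRITER_PROCESSES is
static, so SCOPE=SPFILE and a restart are assumed:
SQL> ALTER SYSTEM SET db_writer_processes=4 SCOPE=SPFILE;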
LGWR
LGWR writes redo data from the redolog buffers to the (online) redolog files,
sequentially.
A redolog file contains changes to datafiles; each redo record holds the
file id, block id, and the new content.
LGWR is invoked more often than DBWn because log files are really small
when compared to datafiles (KB vs GB). For every small update we don't want
to open huge gigabytes of datafiles; instead we write to the log file.
A redolog file cycles through three states, CURRENT, ACTIVE, and INACTIVE.
A newly created redolog file is in the UNUSED state.
When LGWR is writing to a particular redolog file, that file is said to
be in CURRENT status. If the file fills up completely, a log switch
takes place and LGWR starts writing to the next file (this is the
reason every database requires a minimum of 2 redolog groups). The file
that has just filled up moves from CURRENT to ACTIVE.
In RAC, each RAC instance has its own LGWR process that maintains that
instance's thread of redo logs.
LGWR will be invoked in the following scenarios: when a transaction commits,
when the redo log buffer is one-third full, every three seconds, and before
DBWn writes modified buffers to disk.
When a checkpoint occurs, the CKPT process signals DBWn and updates the SCN
blocks of all the datafiles and the control file with the current SCN. This
SCN is called the checkpoint SCN.
PMON
PMON is responsible for cleaning up the database buffer cache and freeing
resources that were allocated to a failed process.
PMON also registers information about the instance and dispatcher processes
with the Oracle (network) listener.
PMON also checks the dispatcher and server processes and restarts them if
they have failed.
ARCn
ARCH processes, running on the primary database, select archived redo logs and
send them to the standby database. Archivelog files are used for media
recovery (in case of a hard disk failure and for maintaining an Oracle
standby database via log shipping). The ARCH process also archives the
standby redo logs applied by the managed recovery process (MRP).
In RAC, the various ARCH processes can be utilized to ensure that copies of
the archived redo logs for each instance are available to the other
instances in the RAC setup should they be needed for recovery.
CJQ0 - Job queue controller process wakes up periodically and checks the
job log. If a job is due, it spawns Jnnn processes to handle jobs.
From Oracle 11g release2, DBMS_JOB and DBMS_SCHEDULER work without setting
JOB_QUEUE_PROCESSES. Prior to 11gR2 the default value is 0, and from
11gR2 the default value is 1000.
Dedicated Server
Dedicated server processes are used when MTS is not used. Each user process
gets a dedicated connection to the database. These user processes also
handle disk reads from database datafiles into the database block buffers.
LISTENER
The LISTENER process listens for connection requests on a specified port
and passes these requests to either a distributor process if MTS is
configured, or to a dedicated process if MTS is not used. The LISTENER
process is responsible for load balancing and failover in case a RAC instance
fails or is overloaded.
CALLOUT Listener
Used by internal processes to make calls to externally stored procedures.
LNS
In Data Guard, the LNS process performs the actual network I/O and waits for
each network I/O to complete. Each LNS has a user-configurable buffer that is
used to accept outbound redo data from the LGWR process. The NET_TIMEOUT
attribute is used only when the LGWR process transmits redo data using an
LGWR Network Server (LNS) process.
ASMB
This ASMB process is used to provide information to and from the cluster
synchronization services used by ASM to manage the disk resources. It is also
used to update statistics and provide a heartbeat mechanism.
Re-Balance RBAL
RBAL is the ASM related process that performs rebalancing of disk resources
controlled by ASM.
Data Dictionary views Vs V$ views
Data dictionary views                        V$ views
Data will not be lost even after the         Data will be lost if the instance is
instance is shut down                        shut down
Will be accessible only if the instance      (some are) Will be accessible even if the
is OPENED                                    instance is in mount or nomount stage (STARTED)
View names are plural                        View names are singular
Datapump
From Oracle Database 10g, new Datapump Export (expdp) and Import
(impdp) clients that use the DBMS_DATAPUMP interface have been provided. Oracle
recommends that customers use these new Datapump Export and Import clients
rather than the original Export and original Import clients, since the new
utilities have vastly improved performance and greatly enhanced
functionality.
Oracle Datapump provides high speed, parallel, bulk data and metadata
movement of Oracle database contents. It's a server-side replacement for
the original Export and Import utilities. A new public interface package,
DBMS_DATAPUMP, provides a server-side infrastructure for fast data and
metadata movement.
The Datapump system requirements are the same as the standard Oracle
Database 10g requirements. Datapump doesn't need a lot of additional system
or database resources, but the time to extract and treat the information
will be dependent on the CPU and memory available on each machine. If
system resource consumption becomes an issue while a Datapump job is
executing, the job can be dynamically throttled to reduce the number of
execution threads.
Using the Direct Path method of unloading, a single stream of data unload
is about 2 times faster than normal Export because the Direct Path API has
been modified to be even more efficient. Depending on the level of
parallelism, the level of improvement can be much more.
A single stream of data load is 15-45 times faster than normal Import. The
reason it is so much faster is that Conventional Import uses only
conventional mode inserts, whereas Datapump Import uses the Direct Path
method of loading. As with Export, the job can be parallelized for even
more improvement.
Datapump Features
1.Writes either Direct Path unloads or external tables (used when the table
is part of a cluster, has a LOB, etc.)
2.Command line interface
3.Writing to external tables
4.DBMS_DATAPUMP - Datapump API
5.DBMS_METADATA - Metadata API
6.Checkpoint / Job Restart
Job progress is recorded in the Master Table - All stopped Datapump jobs can
be restarted without loss of data as long as the master table and dump file
set remain undisturbed while the job is stopped. It doesn't matter if the
job was stopped voluntarily by a user or if the stoppage was involuntary
due to a crash, power outage, etc.
May be stopped and restarted later
An abnormally terminated job is also restartable
Current objects can be skipped on restart if problematic
7.Better Job Monitoring and Control
Can detach from and attach to running jobs from any location -
Multiple clients can attach to a job to see what is going on. Clients may
also detach from an executing job without affecting it.
Initial job space estimate and overall percent done - At Export time,
the approximate size of the job is estimated before it gets underway. The
default method for determining this is to estimate the size of a partition
by counting the number of blocks currently allocated to it. If tables have
been analyzed, statistics can also be used which should provide a more
accurate estimate. The user gets an estimate of how much dump file space
will be consumed for the operation.
Job state and description - Once the Export begins, the user can get
the status of the job as a percentage of completion and can then
extrapolate the time required to complete the job.
Per-worker status showing current object and percent done
Enterprise Manager interface available - The jobs can be monitored
from any location
8.Interactive Mode for expdp and impdp clients
9.Network Export
– Unload a remote database to a local dump file set
– Allows export of read-only databases for archiving
10.Network Import
– Overlap execution of extract and load
– No intermediate dump files
Because Datapump maintains a Master Control Table and must perform database
writes, Datapump can't directly Export a read-only database. Network mode
allows the user to export read-only databases: the Datapump Export job runs
locally on a read/write instance and extracts the data and metadata from
the remote read-only instance. Both network export and import use database
links to communicate with the remote source.
First level parallelism is supported for both network export and import.
I/O servers do not operate remotely, so second level, intra-partition
parallelism is not supported in network operations.
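A sketch of a network-mode export; the database link, directory object, and
schema names are illustrative assumptions:
expdp system DIRECTORY=dpump_dir DUMPFILE=remote_hr.dmp NETWORK_LINK=src_db SCHEMAS=hr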
11.Fine-grained object selection - With the new EXCLUDE and INCLUDE
parameters, a Datapump job can include or exclude any type of object and
any subset of objects within a type.
Exclude parameter: specified object types are excluded from the
operation
Include parameter: only the specified object types are included
Both take an optional name filter for even finer granularity:
INCLUDE=PACKAGE:"LIKE 'PAYROLL%'"
EXCLUDE=TABLE:"IN ('FOO','BAR',...)"
e.g.:
EXCLUDE=function
EXCLUDE=procedure
EXCLUDE=package:"LIKE 'PAYROLL%'"
would exclude all functions, procedures, and packages with names starting
with 'PAYROLL' from the job.
Using INCLUDE instead of EXCLUDE above would include the functions,
procedures, and packages with names starting with 'PAYROLL'.
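On the command line the same filter might look like this sketch (the directory
and file names are illustrative); note the operating system may require the
quotes to be escaped:
expdp hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp EXCLUDE=PACKAGE:\"LIKE 'PAYROLL%'\"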
12.DDL Transformations
Because object metadata is stored as XML in the dump file set, it is easy
to apply transformations (via XSL-T) when the DDL is being formed during
import.
o Restartable
o Improved control
o Files will be created on the server, not on the client side
o Parallel execution
o Automated performance tuning
o Simplified monitoring
o Improved object filtering
o Dump files can be compressed
o Data can be encrypted (in Oracle 11g or later)
o Remap of data during export or import (in 11g or later)
o We can export one or more partitions of a table without having to
move the entire table (in 11g or later)
o XML schemas and XMLType columns are supported for both export and
import (in 11g or later)
o Using the Direct Path method of unloading or loading data, a single
stream of Datapump export (unload) is approximately 2 times faster
than original Export, because the Direct Path API has been modified
to be even more efficient. Depending on the level of parallelism, the
level of improvement can be much more.
o Original Import uses only conventional mode inserts, so a single
stream of Datapump Import is 10-45 times faster than normal Import.
As with Export, the job's single stream can be changed to parallel
streams for even more improvement.
o With Datapump, it is much easier for the DBA to manage and monitor
jobs. During a long-running job, the DBA can monitor a job from
multiple locations and know how far along it is, how much there is
left to go, what objects are being worked on, etc. The DBA can also
affect the job's operation, e.g. abort it, adjust its resource
consumption, or stop it for later restart.
o Since the jobs are completed much more quickly than before,
production systems have less downtime.
o Datapump is publicly available as a PL/SQL package (DBMS_DATAPUMP),
so customers can write their own data movement utilities if so
desired. The metadata capabilities of the Datapump are also available
as a separate PL/SQL package, DBMS_METADATA.
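As a sketch of the DBMS_DATAPUMP API (the dump file name, the directory
object EXTDIR and the schema SCOTT are illustrative), a schema-mode export
can be driven entirely from PL/SQL:
declare
  handle number;
begin
  -- create a schema-mode export job
  handle := dbms_datapump.open('EXPORT', 'SCHEMA');
  -- write the dump file to the EXTDIR directory object
  dbms_datapump.add_file(handle, 'scott_exp.dmp', 'EXTDIR');
  -- restrict the job to the SCOTT schema
  dbms_datapump.metadata_filter(handle, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
  dbms_datapump.start_job(handle);
  dbms_datapump.detach(handle);
end;
/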
All the Oracle database data types are supported via Datapump's two data
movement mechanisms, Direct Path and External Tables.
We can either use the Command line interface or the Oracle Enterprise
Manager web-based GUI interface.
We can use the ESTIMATE_ONLY parameter to see how much disk space is
required for the job's dump file set before we start the operation.
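e.g. (the schema name is illustrative):
$ expdp system/manager SCHEMAS=hr ESTIMATE_ONLY=y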
Jobs can be monitored from any location to see what is going on. Clients
may also detach from an executing job without affecting it.
Every Datapump job creates a Master Table in which the entire record of the
job is maintained. The Master Table is the directory to the job, so if a
job is stopped for any reason, it can be restarted at a later point in
time, without losing any data. Whenever a Datapump export or import is
running, Oracle creates a table named after the JOB_NAME, which is deleted
once the job is done. From this table, Oracle determines how much of the
job has been completed and where to continue from.
Datapump Vs SQL*Loader
We can use SQL*Loader to load data from external files into tables of an
Oracle database. Many customers use SQL*Loader on a daily basis to load
files (e.g. financial feeds) into their databases. Datapump Export and
Import are typically used less frequently, for important tasks such as
migrating between platforms, moving data between development, test, and
production databases, logical database backup, and application deployment
throughout a corporation.
Datapump Disadvantages
•Can't use UNIX pipes
•Can't run as SYS (/ as sysdba)
Related Views
DBA_DATAPUMP_JOBS
USER_DATAPUMP_JOBS
DBA_DIRECTORIES
DATABASE_EXPORT_OBJECTS
SCHEMA_EXPORT_OBJECTS
TABLE_EXPORT_OBJECTS
DBA_DATAPUMP_SESSIONS
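For example, to see the Datapump jobs currently known to the database:
SQL> SELECT owner_name, job_name, operation, job_mode, state
     FROM dba_datapump_jobs;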
USERID must be the first parameter on the command line. This user must have
read & write permissions on DIRECTORY.
$ expdp help=y
QUERY
Predicate clause used to export a subset of a table.
e.g. QUERY=emp:"WHERE dept_id > 10".
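A complete command using QUERY might look like this (the names are
illustrative; the quotes may need escaping on the shell command line, or a
parameter file can be used):
$ expdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=emp_sub.dmp TABLES=emp
QUERY=emp:"WHERE dept_id > 10"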
$ expdp ATTACH=EXAMPLE1
==> attaching to the running job named EXAMPLE1 to monitor or control it.
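e.g. (the schema, directory and file names are illustrative):
$ expdp system/manager SCHEMAS=scott,hr COMPRESSION=ALL
DIRECTORY=dpump_dir DUMPFILE=two_schemas.dmp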
==> exporting two schemas and compressing the data (valid in 11g or later).
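e.g. (the file names are illustrative):
$ expdp system/manager FULL=y DIRECTORY=dpump_dir DUMPFILE=full.dmp
LOGFILE=full.log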
==> exporting an entire database to a dump file with all GRANTS, INDEXES
and data
e.g.:
$ expdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=tabs.dmp
INCLUDE=TABLE:"LIKE 'TAB%'"
==> exporting only those tables whose names start with TAB.
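e.g. (the target version is illustrative):
$ expdp system/manager SCHEMAS=hr VERSION=10.2 DIRECTORY=dpump_dir
DUMPFILE=hr_v102.dmp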
==> exporting data with a specific VERSION so that older releases can
import it. Datapump Import can always read dump file sets created by older
versions of Data Pump Export.
e.g.:
$ expdp hr TABLES=activity REMAP_DATA=activity.cardno:hidedata.newcc
DIRECTORY=dpump_dir DUMPFILE=hremp2.dmp
==> exporting with the cardno column remapped through the user-defined
function hidedata.newcc (in 11g or later).
impdp utility
The Data Pump Import utility provides a mechanism for transferring
data objects between Oracle databases.
$ impdp help=y
Objects are imported in the following order:
Users
Roles
Database links
Sequences
Directories
Synonyms
Types
Tables/Partitions
Views
Comments
Packages/Procedures/Functions
Materialized views
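e.g. (the file name is illustrative):
$ impdp system/manager FULL=y PARALLEL=4 DIRECTORY=dpump_dir
DUMPFILE=full.dmp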
==> importing all the exported data, with the help of 4 processes.
REMAP_DATAFILE="'C:\DB1\HR\PAYROLL\tbs6.dbf':'/db1/hr/payroll/tbs6.dbf'"
==> remaps datafile references in the DDL (such as CREATE TABLESPACE)
during import.
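e.g. (the file names are illustrative):
$ impdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=scott.dmp
SQLFILE=scott_ddl.sql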
==> will create a SQL file with the DDL that could be executed in another
database/schema to create the tables and indexes. The objects themselves
are not imported.
declare
  handle number;
begin
  -- create a schema-mode import job
  handle := dbms_datapump.open('IMPORT', 'SCHEMA');
  -- read the dump file from the EXTDIR directory object
  dbms_datapump.add_file(handle, 'scott.dmp', 'EXTDIR');
  -- replace any tables that already exist in the target
  dbms_datapump.set_parameter(handle, 'TABLE_EXISTS_ACTION', 'REPLACE');
  -- use up to 4 parallel worker processes
  dbms_datapump.set_parallel(handle, 4);
  dbms_datapump.start_job(handle);
  dbms_datapump.detach(handle);
exception
  when others then
    dbms_output.put_line(substr(sqlerrm, 1, 254));
end;
/
Export Import
Export (exp) and import (imp) utilities are used to perform logical
database backup and recovery. When exporting, database objects are dumped
to a binary file which can then be imported into another Oracle database.
Before using these commands, you should set ORACLE_HOME, ORACLE_SID and
PATH environment variables.
exp utility
Objects owned by SYS cannot be exported.
If you want to export objects of another schema, you need EXP_FULL_DATABASE
role.
$ exp help=y
USERID username/password
FLASHBACK_TIME time used to get the SCN closest to the specified time
Examples:
$ exp system/manager file=emp.dmp log=emp_exp.log full=y
==> exporting full database.
$ exp system/manager file=owner.dmp log=owner.log owner=owner
direct=y STATISTICS=none
==> exporting all the objects of a schema.
imp utility
imp provides backward compatibility, i.e. it allows you to import objects
that were exported with lower Oracle versions as well.
imp doesn't recreate already existing objects. It either aborts the import
process (the default) or ignores the errors (if you specify IGNORE=Y).
$ imp help=y
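e.g. (the file names are illustrative):
$ imp system/manager file=scott.dmp fromuser=scott show=y
log=scott_ddl.log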
==> will write the DDL of the objects in the exported dumpfile (scott
schema) into the specified file. This command won't import the objects.
imp:
1. Place the file to be imported on a separate disk from the datafiles.
2. Increase the DB_CACHE_SIZE.
3. Set LOG_BUFFER to a large size.
4. Stop redo log archiving, if possible.
5. Use COMMIT=n, if possible.
6. Set the BUFFER parameter to a high value. Default is 256KB.
7. It's advisable to drop indexes before importing to speed up the import
process, or set INDEXES=N and build the indexes after the import. Indexes
can easily be recreated after the data has been successfully imported.
8. Use STATISTICS=NONE.
9. Disable INSERT triggers, as they fire during import.
10. Set the parameter COMMIT_WRITE=NOWAIT (in 10g) or COMMIT_WAIT=NOWAIT
(in 11g) during import.
Related Views
DBA_EXP_VERSION
DBA_EXP_FILES
DBA_EXP_OBJECTS
Unified Backup Files Storage: all backup components can be stored in one
consolidated spot. The flash recovery area is managed via Oracle Managed
Files (OMF), and it can utilize disk resources managed by Automatic
Storage Management (ASM). The flash recovery area can be configured for
use by multiple database instances.
Automated Disk-Based Backup and Recovery: once the flash recovery area is
configured, all backup components are managed automatically by Oracle.
Disk Cache for Tape Copies: if your disaster recovery (DR) plan involves
backing up to alternate media, the flash recovery area can act as a disk
cache area for those backup components that are eventually copied to tape.
Flashback Logs: the FRA is also used to store and manage flashback logs,
which are used during Flashback Database operations to quickly restore a
database to a prior desired state.
You can designate the FRA as the location for one of the control files and
redo log members to limit the exposure in case of disk failure.
The flash recovery area can be located on:
File System:
1. A single directory
2. An entire file system
Raw Devices:
1. Automatic storage management (ASM)
FRA Components
The flash/fast recovery area can contain the following: control files,
online redo log files, archived redo log files, flashback logs, control
file autobackups, datafile copies, and RMAN backup pieces.
Configuring FRA
Following are the three initialization parameters that should be defined in
order to set up the flash recovery area:
o DB_RECOVERY_FILE_DEST_SIZE
o DB_RECOVERY_FILE_DEST
o DB_FLASHBACK_RETENTION_TARGET
DB_RECOVERY_FILE_DEST_SIZE specifies the total size of all files that can
be stored in the Flash Recovery Area. The size of the flash recovery area
should be large enough to hold a copy of all datafiles, all incremental
backups, online redo logs, archived redo logs not yet backed up to tape,
control files, and control file autobackups.
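For example (the size, location and retention values are illustrative):
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G SCOPE=BOTH;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='/u01/app/oracle/fra'
SCOPE=BOTH;
SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=1440 SCOPE=BOTH;
DB_FLASHBACK_RETENTION_TARGET is expressed in minutes (1440 = one day).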
Enabling Flashback
You can always define a different location for archived redo logs. Note
that the FRA cannot be used together with the older LOG_ARCHIVE_DEST and
LOG_ARCHIVE_DUPLEX_DEST parameters: you must erase the values of those
parameters in order to specify the location of the FRA.
To place your log files somewhere other than the FRA, use a different
parameter to specify the archived redo log locations: use
LOG_ARCHIVE_DEST_1 instead of LOG_ARCHIVE_DEST.
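For example (the path is illustrative):
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=/u02/arch';
or, to direct archived logs back into the FRA:
SQL> ALTER SYSTEM SET
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';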
There are various other circumstances in which messages are written to
the alert log:
1. When none of the files can be deleted.
2. When the used space in the FRA reaches 85 percent (a warning).
3. When the used space in the FRA reaches 97 percent (a critical warning).
The warning messages issued can be viewed in the DBA_OUTSTANDING_ALERTS
data dictionary view and are also available in the OEM Database Control
main window.
· BACKUP
If you do not specify a FORMAT option to the BACKUP command, and do not
configure a FORMAT option for disk backups, RMAN creates backup pieces and
image copies in the flash recovery area, with names in Oracle Managed
Files name format.
· CONTROLFILE AUTOBACKUP
RMAN can create control file autobackups in the flash recovery area. Use
the RMAN command CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE
DISK CLEAR to clear any configured format option for the control file
autobackup location on disk. Control file autobackups will be placed in the
flash recovery area when no other destination is configured.
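For example, at the RMAN prompt:
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;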
· RESTORE ARCHIVELOG
To direct restored archived redo logs to the flash recovery area, set one
of the LOG_ARCHIVE_DEST_n parameters to
'LOCATION=USE_DB_RECOVERY_FILE_DEST', and make sure you are not using SET
ARCHIVELOG DESTINATION to direct restored archived logs to some other
destination. If you do not specify SET ARCHIVELOG DESTINATION to override
this behavior, then restored archived redo log files will be stored in the
flash recovery area.
Note:
Flashback logs cannot be backed up outside the flash recovery area.
Therefore, in a BACKUP RECOVERY AREA operation the flashback logs are not
backed up to tape.
Flashback logs are deleted automatically to satisfy the need for
space for other files in the flash recovery area. However, a guaranteed
restore point can force the retention of flashback logs required to perform
Flashback Database to the restore point SCN.
Delete unnecessary files from the flash recovery area using the RMAN
DELETE command. (Note that if you use host operating system commands to
delete files, then the database will not be aware of the resulting free
space. You can run the RMAN CROSSCHECK command to have RMAN re-check the
contents of the flash recovery area and identify expired files, and then
use the DELETE EXPIRED command to remove missing files from the RMAN
repository.)
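For example, a minimal cleanup sequence at the RMAN prompt:
RMAN> CROSSCHECK BACKUP;
RMAN> DELETE EXPIRED BACKUP;
RMAN> DELETE OBSOLETE;
DELETE OBSOLETE additionally removes backups that are no longer needed
under the configured retention policy.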
You may also need to consider changing your backup retention policy and, if
using Data Guard, consider changing your archivelog deletion policy.
If you need to move the flash recovery area of your database to a new
location, you can follow this procedure:
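A minimal sketch of the move (the new path is illustrative):
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='/u02/app/oracle/fra'
SCOPE=BOTH;
New FRA files are created in the new location from that point on.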
Oracle will clean up transient files remaining in the old flash recovery
area location as they become eligible for deletion.
Related views
V$RECOVERY_FILE_DEST
V$FLASH_RECOVERY_AREA_USAGE
DBA_OUTSTANDING_ALERTS
V$FLASHBACK_DATABASE_LOGFILE
Flashback
Flashback Technology in Oracle
Oracle has a number of products and features that provide high availability
in cases of unplanned or planned downtime. These include Fast-Start Fault
Recovery, Real Application Clusters (RAC), Recovery Manager (RMAN), backup
and recovery solutions, Oracle Flashback, partitioning, Oracle Data Guard,
LogMiner, multiplexed redo log files and online reorganization.
Flashback Version Query uses undo data stored in the database to view the
changes to one or more rows along with all the metadata of the changes.
Flashback Data Archive - from Oracle 11g, flashback can use a flashback
data archive explicitly created for a table, instead of undo. Flashback
data archives can be defined on any table/tablespace. Flashback data
archives are written by a dedicated background process called FBDA, so
there is less impact on performance. They can be purged at regular
intervals automatically.
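A minimal sketch (the archive, tablespace and table names are
illustrative; requires 11g):
SQL> CREATE FLASHBACK ARCHIVE fla1 TABLESPACE fda_ts RETENTION 1 YEAR;
SQL> ALTER TABLE emp FLASHBACK ARCHIVE fla1;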
Flashback Query
Flashback Query in Oracle
Flashback Query was introduced in Oracle 9i
Oracle Flashback Query allows us to view and repair historical data. We can
perform queries on the database as of a certain time or specified system
change number (SCN).
You set the date and time you want to view. Then, any SQL query you run
operates on data as it existed at that time. If you are an authorized user,
then you can correct errors and back out the restored data without needing
the intervention of an administrator.
DML and DDL operations can use table decoration to choose snapshots
within subqueries. Operations such as CREATE TABLE ... AS SELECT and
INSERT ... SELECT can be used with table decoration in the subqueries to
repair tables from which rows have been mistakenly deleted. The decoration
can be any arbitrary expression: a bind variable, a constant, a string,
date operations, and so on. You can open a cursor and dynamically bind a
snapshot value (a timestamp or an SCN) to decorate a table with.
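For example, decorated queries against an assumed emp table (the SCN is
illustrative):
SQL> SELECT * FROM emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15'
MINUTE);
SQL> SELECT * FROM emp AS OF SCN 123456;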
■ Application Transparency
Packaged applications, like report generation tools that only do queries,
can run in Flashback Query mode by using logon triggers. Applications can
run transparently without requiring changes to code. All the constraints
that the application needs are guaranteed to be satisfied, because there
is a consistent version of the database as of the Flashback Query time.
■ Application Performance
If an application requires recovery actions, it can do so by saving SCNs
and flashing back to those SCNs. This is a lot easier and faster than saving
data sets and restoring them later, which would be required if the
application were to do explicit versioning. Using Flashback Query, there
are no costs for logging that would be incurred by explicit versioning.
■ Online Operation
Flashback Query is an online operation. Concurrent DMLs and queries from
other sessions are allowed while an object is queried inside Flashback
Query. The speed of these operations is unaffected. Moreover, different
sessions can flash back to different Flashback times or SCNs on the same
object concurrently. The speed of the Flashback Query itself depends on the
amount of undo that needs to be applied, which is proportional to how far
back in time the query goes.
■ Easy Manageability
There is no additional management on the part of the user, except setting
the appropriate retention interval, having the right privileges, and so on.
No additional logging has to be turned on, because past versions are
constructed automatically, as needed.
Notes:
■ Self-Service Repair
Perhaps you accidentally deleted some important rows from a table and
want to recover them. To do the repair, you can move backward in time, see
the missing rows, and re-insert the deleted rows into the current table.
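A minimal sketch of such a repair, assuming an emp table keyed on empno:
SQL> INSERT INTO emp
     SELECT * FROM emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR)
     WHERE empno NOT IN (SELECT empno FROM emp);
SQL> COMMIT;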
■ Account Balances
You can view prior account balances as of a certain day in the
month.
■ Packaged Applications
Packaged applications (like report generation tools) can make use of
Flashback Query without any changes to application logic. Any constraints
that the application expects are guaranteed to be satisfied, because users
see a consistent version of the Database as of the given time or SCN.