3 Oracle Architecture Notes
User and server processes: The processes shown in the figure are
called user and server processes. These processes are used to manage
the execution of SQL statements.
• A Shared Server Process can share memory and variable
processing for multiple user processes.
• A Dedicated Server Process manages memory and variables for a
single user process.
A user can typically have more than one concurrent session, e.g., the user
may connect using SQLPlus and also connect using Internet Developer
Suite tools at the same time. The limit of concurrent session connections is
controlled by the DBA.
If a system user attempts to connect and the Oracle Server is not running,
the system user receives the Oracle Not Available error message.
Oracle 9i, 10g, and 11g use a Dynamic SGA. Memory configurations for
the system global area can be made without shutting down the database
instance. The advantage is obvious. This allows the DBA to resize the
Database Buffer Cache and Shared Pool dynamically.
Several initialization parameters affect the amount of random access
memory dedicated to the SGA of an Oracle Instance.
The size of the SGA cannot exceed the parameter SGA_MAX_SIZE minus
the combined sizes of the additional parameters DB_CACHE_SIZE,
LOG_BUFFER, SHARED_POOL_SIZE, LARGE_POOL_SIZE, and
JAVA_POOL_SIZE.
Granules are assigned to the Database Buffer Cache, Shared Pool, Java
Pool, and other memory structures, and these memory components can
dynamically grow and shrink. Using contiguous memory improves system
performance. The actual number of granules assigned to one of these
memory components can be determined by querying the database view
named V$BUFFER_POOL.
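For example, a query of the following kind lists each buffer pool along with
its block size and buffer allocation (the column selection is illustrative):

SQL> SELECT name, block_size, current_size, buffers FROM V$BUFFER_POOL;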
The content of the PGA varies, but as shown in the figure above, generally
includes the following:
• SQL Work Areas: Memory allocated for sort, hash-join, bitmap merge,
and bitmap create types of operations.
o Oracle 9i and later versions enable automatic sizing of the SQL
Work Areas by setting the WORKAREA_SIZE_POLICY = AUTO
parameter (this is the default!)
and PGA_AGGREGATE_TARGET = n (where n is some amount
of memory established by the DBA). Alternatively, the DBA can let
the Oracle DBMS determine the appropriate amount of memory.
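Both parameters can be set dynamically; for example (the 256M value is
illustrative):

SQL> ALTER SYSTEM SET WORKAREA_SIZE_POLICY = AUTO;
SQL> ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 256M;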
A session that loads a PL/SQL package into memory has the package
state stored in the UGA. The package state is the set of values stored in
all the package variables at a specific time. The state changes as program
code modifies the variables. By default, package variables are unique to
and persist for the life of the session.
The OLAP page pool is also stored in the UGA. This pool
manages OLAP data pages, which are equivalent to data blocks. The page
pool is allocated at the start of an OLAP session and released at the end of
the session. An OLAP session opens automatically whenever a user
queries a dimensional object such as a cube.
The UGA must be available to a database session for the life of the
session. For this reason, the UGA cannot be stored in the PGA when
using a shared server connection because the PGA is specific to a single
process. Therefore, the UGA is stored in the SGA when using shared
server connections, enabling any shared server process access to it. When
using a dedicated server connection, the UGA is stored in the PGA.
For example, automatic SGA memory management is enabled by setting a
single parameter in the parameter file:
sga_target=1610612736   (1.5 GB)
Setting a single parameter simplifies the administration task – the DBA only
specifies the amount of SGA memory available to an instance – the DBA
can forget about the sizes of individual components. No out of memory
errors are generated unless the system has actually run out of
memory. No manual tuning effort is needed.
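For example, assuming an spfile is in use, the total SGA size can be
changed without shutting down the instance (the value is illustrative):

SQL> ALTER SYSTEM SET SGA_TARGET = 1536M SCOPE = BOTH;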
There are a few SGA components whose sizes are not automatically
adjusted. The DBA must specify the sizes of these components explicitly, if
they are needed by an application. Such components are:
• Keep/Recycle buffer caches (controlled
by DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE)
• Additional buffer caches for non-standard block sizes (controlled
by DB_nK_CACHE_SIZE, n = {2, 4, 8, 16, 32})
• Streams Pool (controlled by the new
parameter STREAMS_POOL_SIZE)
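Each of these components is sized explicitly with its own parameter; for
example (the sizes are illustrative):

SQL> ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 32M;
SQL> ALTER SYSTEM SET DB_RECYCLE_CACHE_SIZE = 16M;
SQL> ALTER SYSTEM SET STREAMS_POOL_SIZE = 48M;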
The granule size currently being used for the SGA can be viewed in the
view V$SGAINFO. The size of each
component and the time and type of the last resize operation performed on
each component can be viewed in the
view V$SGA_DYNAMIC_COMPONENTS.
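For example, these views can be queried as follows (the column selection
is illustrative):

SQL> SELECT name, bytes, resizeable FROM V$SGAINFO;
SQL> SELECT component, current_size, last_oper_type, last_oper_time FROM V$SGA_DYNAMIC_COMPONENTS;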
Shared Pool
The Shared Pool is a memory structure that is shared by all system users.
• It caches various types of program data. For example, the shared
pool stores parsed SQL, PL/SQL code, system parameters, and data
dictionary information.
• The shared pool is involved in almost every operation that occurs in
the database. For example, if a user executes a SQL statement, then
Oracle Database accesses the shared pool.
• It consists of both fixed and variable structures.
• The variable component grows and shrinks depending on the
demands placed on memory size by system users and application
programs.
You can alter the size of the shared pool dynamically with the ALTER
SYSTEM SET command. An example command is shown in the figure
below. You must keep in mind that the total memory allocated to the SGA
is set by the SGA_TARGET parameter (and may also be limited by
the SGA_MAX_SIZE if it is set), and since the Shared Pool is part of the
SGA, you cannot exceed the maximum size of the SGA. It is
recommended to let Oracle optimize the Shared Pool size.
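The figure with the example command is not reproduced here; a resize
command of that kind might look like this (the 64M value is illustrative):

SQL> ALTER SYSTEM SET SHARED_POOL_SIZE = 64M;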
The Shared Pool stores the most recently executed SQL statements and
used data definitions. This is because some system users and application
programs will tend to execute the same SQL statements often. Saving this
information in memory can improve system performance.
Library Cache
If the Library Cache is too small, it must purge statement
definitions in order to have space to load new SQL and PL/SQL
statements. Actual management of this memory structure is through
a Least-Recently-Used (LRU) algorithm. This means that the SQL and
PL/SQL statements that are oldest and least recently used are purged
when more storage space is needed.
Data Dictionary Cache
The database server manages the size of the Data Dictionary Cache
internally and the size depends on the size of the Shared Pool in which the
Data Dictionary Cache resides. If the size is too small, then the data
dictionary tables that reside on disk must be queried often for information
and this will slow down performance.
Server Result Cache
The Server Result Cache holds result sets and not data blocks. The server
result cache contains the SQL query result cache and PL/SQL function
result cache, which share the same infrastructure.
Database Buffer Cache
The Database Buffer Cache is a fairly large memory object that stores the
actual data blocks that are retrieved from datafiles by system queries and
other data manipulation language commands.
Keep in mind that information read from disk is read a block at a time, not a
row at a time, because a database block is the smallest addressable
storage space on disk.
The write list holds dirty buffers – these are buffers that hold data that
has been modified, but the blocks have not yet been written back to disk.
The LRU list holds free buffers, pinned buffers, and dirty buffers that have
not yet been moved to the write list. Free buffers do not contain any
useful data and are available for use. Pinned buffers are currently being
accessed.
When an Oracle process accesses a buffer, the process moves the buffer
to the most recently used (MRU) end of the LRU list – this causes dirty
buffers to age toward the LRU end of the LRU list.
When an Oracle user process needs a data row, it searches for the data in
the database buffer cache because memory can be searched more quickly
than hard disk can be accessed. If the data row is already in the cache
(a cache hit), the process reads the data from memory; otherwise a cache
miss occurs and data must be read from hard disk into the database buffer
cache.
Before reading a data block into the cache, the process must first find a
free buffer. The process searches the LRU list, starting at the LRU end of
the list. The search continues until a free buffer is found or until the search
reaches the threshold limit of buffers.
Each time a user process finds a dirty buffer as it searches the LRU, that
buffer is moved to the write list and the search for a free buffer continues.
When a user process finds a free buffer, it reads the data block from disk
into the buffer and moves the buffer to the MRU end of the LRU list.
The block size for a database is set when a database is created and is
determined by the init.ora parameter file parameter
named DB_BLOCK_SIZE.
• Typical block sizes are 2KB, 4KB, 8KB, 16KB, and 32KB.
• The size of blocks in the Database Buffer Cache matches the block
size for the database.
• The DBORCL database uses an 8KB block size.
• This figure shows that the use of non-standard block sizes results in
multiple database buffer cache memory allocations.
Because tablespaces that store Oracle tables can use different (non-
standard) block sizes, there can be more than one Database Buffer Cache
allocated to match the block sizes in the cache with the block sizes in the
non-standard tablespaces.
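For example, a tablespace created with a 16KB block size requires a
matching cache to be configured (the size is illustrative):

SQL> ALTER SYSTEM SET DB_16K_CACHE_SIZE = 16M;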
You can dynamically change the size of the Database Buffer Cache with
the ALTER SYSTEM command like the one shown here:
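The command referenced here is not shown; it might look like this (the
96M value is illustrative):

SQL> ALTER SYSTEM SET DB_CACHE_SIZE = 96M;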
You can have the Oracle Server gather statistics about the Database Buffer
Cache to help you size it to achieve an optimal workload for the memory
allocation. This information is displayed from
the V$DB_CACHE_ADVICE view. In order for statistics to be gathered,
you can dynamically alter the system by using the ALTER SYSTEM SET
DB_CACHE_ADVICE = {OFF | ON | READY} command. However, gathering
statistics on system performance always incurs some overhead that will
slow down system performance.
SQL> ALTER SYSTEM SET db_cache_advice = ON;
System altered.
KEEP Buffer Pool
This pool retains blocks in memory (data from tables) that are likely to be
reused throughout daily processing. An example might be a table
containing user names and passwords or a validation table of some type.
RECYCLE Buffer Pool
This pool is used to store table data that is unlikely to be reused throughout
daily processing – thus the data blocks are quickly removed from memory
when not needed.
Redo Log Buffer
The Redo Log Buffer memory object stores images of all changes made
to database blocks.
• Database blocks typically store several table rows of organizational
data. This means that if a single column value from one row in a
block is changed, the block image is stored. Changes include
INSERT, UPDATE, DELETE, CREATE, ALTER, or DROP.
• LGWR writes redo sequentially to disk while DBWn performs
scattered writes of data blocks to disk.
o Scattered writes tend to be much slower than sequential writes.
o Because LGWR enables users to avoid waiting for DBWn to
complete its slow writes, the database delivers better
performance.
The Redo Log Buffer is a circular buffer that is reused over and over. As
the buffer fills up, copies of the images are stored to the Redo Log
Files that are covered in more detail in a later module.
Large Pool
The Large Pool is an optional memory structure that primarily relieves the
memory burden placed on the Shared Pool. The Large Pool is used for the
following tasks if it is allocated:
• Allocating space for session memory requirements from the User
Global Area where a Shared Server is in use.
• Transactions that interact with more than one database, e.g., a
distributed database scenario.
• Backup and restore operations by the Recovery Manager (RMAN)
process.
o RMAN uses this only if the BACKUP_DISK_IO =
n and BACKUP_TAPE_IO_SLAVE = TRUE parameters are
set.
o If the Large Pool is too small, memory allocation for backup will
fail in the Large Pool and memory will instead be allocated from
the Shared Pool.
• Parallel execution message buffers for parallel server
operations. The PARALLEL_AUTOMATIC_TUNING =
TRUE parameter must be set.
The Large Pool size is set with the LARGE_POOL_SIZE parameter – this
is not a dynamic parameter. It does not use an LRU list to manage
memory.
Java Pool
The Java Pool is an optional memory object, but is required if the
database has Oracle Java installed and in use for Oracle JVM (Java Virtual
Machine).
• The size is set with the JAVA_POOL_SIZE parameter that defaults
to 24MB.
• The Java Pool is used for memory allocation to parse Java
commands and to store data associated with Java commands.
• Storing Java code and data in the Java Pool is analogous to SQL
and PL/SQL code cached in the Shared Pool.
Streams Pool
This pool stores data and control structures to support the Oracle Streams
feature of Oracle Enterprise Edition.
• Oracle Streams manages sharing of data and events in a distributed
environment.
• It is sized with the parameter STREAMS_POOL_SIZE.
• If STREAMS_POOL_SIZE is not set or is zero, the size of the pool
grows dynamically.
Processes
You need to understand three different types of Processes:
• User Process: Starts when a database user requests to connect to
an Oracle Server.
• Server Process: Establishes the Connection to an Oracle Instance
when a User Process requests connection – makes the connection
for the User Process.
• Background Processes: These start when an Oracle Instance is
started up.
Client Process
In order to use Oracle, you must obviously connect to the database. This
must occur whether you're using SQLPlus, an Oracle tool such as Designer
or Forms, or an application program. The client process is also termed the
user process in some Oracle documentation.
Connecting generates a User Process (a memory object) that issues
programmatic calls through your user interface (SQLPlus, Integrated
Developer Suite, or an application program); this creates a session and
causes the generation of a Server Process that is either dedicated or
shared.
Server Process
A Server Process is the go-between for a Client Process and the Oracle
Instance.
• Dedicated Server environment – there is a single Server Process to
serve each Client Process.
• Shared Server environment – a Server Process can serve several
User Processes, although with some performance reduction.
Background Processes
As is shown here, there are both mandatory and optional background
processes that are started whenever an Oracle Instance starts up. These
background processes serve all system users. We will cover the
mandatory processes in detail.
Optional Processes
• Archiver Process (ARCn)
• Coordinator Job Queue (CJQ0)
• Dispatcher (number “nnn”) (Dnnn)
• Others
Of these, you will most often use ARCn (archiver) when you automatically
archive redo log file information (covered in a later module).
PMON
The Process Monitor (PMON) is a cleanup type of process that cleans up
after failed processes such as the dropping of a user connection due to a
network failure or the abnormal termination (ABEND) of a user application
program. It does the tasks shown in the figure below.
SMON
The System Monitor (SMON) is responsible for instance recovery by
applying entries in the online redo log files to the datafiles. It also performs
other activities as outlined in the figure shown below.
DBWn
The Database Writer (DBWn) writes modified (dirty) buffers from the
Database Buffer Cache to datafiles when one of the events illustrated in
the figure below occurs.
LGWR
The Log Writer (LGWR) writes contents from the Redo Log Buffer to the
Redo Log File that is in use. These are sequential writes since the Redo
Log Files record database modifications based on the actual time that the
modification takes place. LGWR actually writes before the DBWn writes
and only confirms that a COMMIT operation has succeeded when the Redo
Log Buffer contents are successfully written to disk. LGWR can also call
the DBWn to write contents of the Database Buffer Cache to disk. The
LGWR writes according to the events illustrated in the figure shown below.
CKPT
The Checkpoint (CKPT) process writes information to update the database
control files and headers of datafiles to identify the point in time with regard
to the Redo Log Files where instance recovery is to begin should it be
necessary. This is done, at a minimum, once every three seconds.
Think of a checkpoint record as a starting point for recovery. DBWn will
have completed writing all buffers from the Database Buffer Cache to disk
prior to the checkpoint, thus those records will not require recovery. The
checkpoint process does the following:
• Ensures modified data blocks in memory are regularly written to disk
– CKPT can call the DBWn process in order to ensure this and does
so when writing a checkpoint record.
• Reduces Instance Recovery time by minimizing the amount of work
needed for recovery since only Redo Log File entries processed
since the last checkpoint require recovery.
• Causes all committed data to be written to datafiles during database
shutdown.
If a Redo Log File fills up and a switch is made to a new Redo Log File (this
is covered in more detail in a later module), the CKPT process also writes
checkpoint information into the headers of the datafiles.
Checkpoint information written to control files includes the system change
number (the SCN is a number stored in the control file and in the headers
of the database files that are used to ensure that all files in the system are
synchronized), location of which Redo Log File is to be used for recovery,
and other information.
CKPT does not write data blocks or redo blocks to disk – it calls DBWn and
LGWR as necessary.
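The checkpoint SCN recorded in the control file can be inspected directly;
a query such as the following (run as a privileged user) displays it:

SQL> SELECT checkpoint_change# FROM V$DATABASE;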
MMNL
The Manageability Monitor Lite Process (MMNL) writes statistics from the
Active Session History (ASH) buffer in the SGA to disk. MMNL writes to
disk when the ASH buffer is full.
The information stored by these processes is used for performance tuning
– we survey performance tuning in a later module.
RECO
The Recoverer Process (RECO) is used to resolve failures of distributed
transactions in a distributed database.
• Consider a database that is distributed on two servers – one in St.
Louis and one in Chicago.
• Further, the database may be distributed on servers of two different
operating systems, e.g. LINUX and Windows.
• The RECO process of a node automatically connects to other
databases involved in an in-doubt distributed transaction.
• When RECO reestablishes a connection between the databases, it
automatically resolves all in-doubt transactions, removing from each
database's pending transaction table any rows that correspond to the
resolved transactions.
ARCn
While the Archiver (ARCn) is an optional background process, we cover it
in more detail because it is almost always used for production systems
storing mission critical information. The ARCn process must be used to
recover from loss of a physical disk drive for systems that are "busy" with
lots of transactions being completed.
When a Redo Log File fills up, Oracle switches to the next Redo Log
File. The DBA creates several of these and the details of creating them are
covered in a later module. If all Redo Log Files fill up, then Oracle switches
back to the first one and uses them in a round-robin fashion by overwriting
ones that have already been used – it should be obvious that the
information stored on the files, once overwritten, is lost forever.
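A log switch can also be forced manually, which is often done before
enabling archiving or during testing; for example:

SQL> ALTER SYSTEM SWITCH LOGFILE;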
Logical Structure
It is helpful to understand how an Oracle database is organized in terms of
a logical structure that is used to organize physical objects.
Tablespace: An Oracle database must always consist of at least
two tablespaces (SYSTEM and SYSAUX), although a typical Oracle
database will have multiple tablespaces.
• A tablespace is a logical storage facility (a logical container) for
storing objects such as tables, indexes, sequences, clusters, and
other database objects.
• Each tablespace has at least one physical datafile that actually
stores the tablespace at the operating system level. A large
tablespace may have more than one datafile allocated for storing
objects assigned to that tablespace.
• A tablespace belongs to only one database.
• Tablespaces can be brought online and taken offline for purposes of
backup and management, except for the SYSTEM tablespace that
must always be online.
• Tablespaces can be in either read-only or read-write status.
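These operations use ALTER TABLESPACE commands; for example,
assuming a tablespace named USERS (the name is illustrative):

SQL> ALTER TABLESPACE users OFFLINE;
SQL> ALTER TABLESPACE users ONLINE;
SQL> ALTER TABLESPACE users READ ONLY;
SQL> ALTER TABLESPACE users READ WRITE;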
Block: The Oracle Server manages data at its smallest unit of storage,
termed a block or data block. Data are actually stored in blocks.
The data block size is set at the time the database is created and cannot
be changed. It is set with the DB_BLOCK_SIZE parameter. The
maximum data block size depends on the operating system.
Thus, the Oracle database architecture includes both logical and physical
structures as follows:
• Physical: Control files; Redo Log Files; Datafiles; Operating System
Blocks.
• Logical: Tablespaces; Segments; Extents; Data Blocks.
Processing a query:
• Parse:
o Search for identical statement in the Shared SQL Area.
o Check syntax, object names, and privileges.
o Lock objects used during parse.
o Create and store execution plan.
• Bind: Obtains values for variables.
• Execute: Process statement.
• Fetch: Return rows to user process.
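The bind phase can be seen explicitly in SQLPlus by using a bind variable;
in this sketch the emp table and deptno column are illustrative names, not
objects assumed to exist in your database:

SQL> VARIABLE deptno NUMBER
SQL> EXECUTE :deptno := 10
SQL> SELECT ename FROM emp WHERE deptno = :deptno;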
Processing a DML statement:
• Parse: Same as the parse phase used for processing a query.
• Bind: Same as the bind phase used for processing a query.
• Execute:
o If the data and undo blocks are not already in the Database
Buffer Cache, the server process reads them from the datafiles
into the Database Buffer Cache.
o The server process places locks on the rows that are to be
modified. The undo block is used to store the before image of
the data, so that the DML statements can be rolled back if
necessary.
o The data blocks record the new values of the data.
o The server process records the before image to the undo block
and updates the data block. Both of these changes are made
in the Database Buffer Cache. Any changed blocks in the
Database Buffer Cache are marked as dirty buffers. That is,
buffers that are not the same as the corresponding blocks on
the disk.
o The processing of a DELETE or INSERT command uses similar
steps. The before image for a DELETE contains the column
values in the deleted row, and the before image of an INSERT
contains the row location information.