Rename/Purge listener logfile

The listener log file grows over time. To archive it, you cannot simply move the file and create a new one with touch: the listener holds an open file handle to the original file, so it keeps writing to the moved file and never updates the new one. To work around this, rename the file using the following steps.

# change directory to location where the listener log file resides
$ cd $ORACLE_HOME/network/log
$ lsnrctl set log_status off
$ mv listener.log listener.log.old
$ lsnrctl set log_status on

To purge the listener log file without restarting the listener
$ cd $ORACLE_HOME/network/log
$ echo "" > listener.log
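This works because the shell's `>` truncates the file in place rather than replacing the inode, so the listener's open file handle stays valid. A minimal illustration with an ordinary file (file name arbitrary):

```shell
# Simulate a growing log file (12 bytes)
printf 'line1\nline2\n' > demo.log
wc -c < demo.log

# Truncate in place; the file keeps the same inode, so open handles survive
: > demo.log
wc -c < demo.log
```

Note that `echo "" > file` leaves a single newline in the file, whereas `: > file` (or `cat /dev/null > file`) empties it completely.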

Redirection on unix

Redirecting stdout and stderr
ls > file – redirect stdout to file
ls 2> filename – redirect stderr to file named filename
ls > file 2>&1 – redirect stderr to wherever stdout is going (file named file)
ls 1>&2 – redirect stdout to stderr
ls > file 2> err – redirect stdout to file and stderr to file named err
exec 2> std.err – redirect all subsequent error messages in the script to file named std.err
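The redirections above can be exercised with a throwaway script (file names here are arbitrary):

```shell
# A command that writes to both stdout and stderr
echo 'echo OUT; echo ERR >&2' > both.sh

# Separate files for each stream
sh both.sh > out.txt 2> err.txt

# Both streams into one file: redirect stdout first, then dup stderr onto it
sh both.sh > all.txt 2>&1

# Order matters: here 2>&1 dups stderr onto the OLD stdout (the terminal),
# so only OUT lands in the file and ERR still appears on screen
sh both.sh 2>&1 > only_out.txt
```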

Determining your Database DBID – RMAN

In situations requiring the recovery of your SPFILE or control file from autobackup, such as disaster recovery when you have lost all database files, you will need to use your DBID. Your DBID should be recorded along with other basic information about your database, as recommended in “Deciding Between ARCHIVELOG and NOARCHIVELOG Mode”.

If you do not have a record of the DBID of your database, there are two places you can find it without opening your database.

* The DBID is used in forming the filename for the control file autobackup. Locate that file, and then refer to “Configuring the Control File Autobackup Format” to see where the DBID appears in the filename.
* If you have any text files that preserve the output from an RMAN session, the DBID is displayed by the RMAN client when it starts up and connects to your database. Typical output follows:

% rman TARGET /
Recovery Manager: Release – Production on Sun Jun 12 02:41:03 2005

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: RDBMS (DBID=774627068)


Note that the DBID remains the same if you upgrade your database from a lower version to a higher one.
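If the database can at least be mounted, the DBID can also be queried directly from V$DATABASE (a sketch; the view is readable in the MOUNT state, no OPEN required):

```
$ sqlplus "/ as sysdba"
SQL> startup mount
SQL> select dbid, name from v$database;
```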


SGA_MAX_SIZE sets the overall amount of memory the SGA can consume but is not dynamic.

The SGA_MAX_SIZE parameter is the maximum allowable size to which the SGA memory parameters can be resized. When SGA_TARGET is set to a nonzero value, Automatic Shared Memory Management (ASMM) is enabled; the SGA_TARGET value can then be adjusted upward, but never beyond the SGA_MAX_SIZE value.

For example, if SGA_MAX_SIZE=4GB and SGA_TARGET=2GB, you can later resize SGA_TARGET up to the value of SGA_MAX_SIZE (4GB), but you cannot resize SGA_TARGET to more than 4GB.

It is significant that SGA_TARGET includes the entire memory for the SGA, in contrast to earlier releases, in which memory for the internal and fixed SGA was added to the sum of the configured SGA memory parameters. Thus, SGA_TARGET gives you precise control over the size of the shared memory region allocated by the database. If SGA_TARGET is set to a value greater than SGA_MAX_SIZE at startup, the latter is bumped up to accommodate SGA_TARGET.

Do not dynamically set or unset the SGA_TARGET parameter (that is, switch it between zero and a nonzero value on a running instance); it should be set at startup.
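A quick sketch of inspecting and resizing these parameters from SQL*Plus (the size shown is illustrative, not a recommendation):

```
$ sqlplus "/ as sysdba"
SQL> show parameter sga_max_size
SQL> show parameter sga_target
SQL> alter system set sga_target=3G scope=both;
```

Attempting to set SGA_TARGET above SGA_MAX_SIZE with scope=both is rejected; to raise the ceiling itself, change SGA_MAX_SIZE with scope=spfile and restart the instance.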

Basic Tablespace Recovery Option

The Recover Tablespace option is used to recover all datafiles needing recovery in a tablespace up to the point of failure, synchronizing them with the other datafiles (complete recovery only). Recover Tablespace is performed from the OPEN state after taking the tablespace OFFLINE (which brings its datafiles offline). Since the SYSTEM tablespace CANNOT be taken offline, you cannot use Recover Tablespace on SYSTEM. The basic steps while the database is open are: offline the tablespace, restore the datafiles, recover tablespace, and online the tablespace, as shown below:

$ sqlplus "/ as sysdba"
SQL> alter tablespace development offline immediate;
Using Immediate rolls back currently pending transactions.
SQL> !cp -p /u03/oradata/prod1/devl_PROD* /u03/oradata/PROD
SQL> recover automatic tablespace development;
SQL> alter tablespace development online;

Performing a Recover Tablespace after opening the database lets you bring the database up for use before doing the recovery. The basic steps are: mount, offline the bad datafiles, open, offline the tablespace, restore the datafiles, recover the tablespace, and online the tablespace, as shown below:

$ sqlplus "/ as sysdba"
SQL> startup mount
SQL> alter database datafile '/u03/oradata/PROD/devl_PROD_01.dbf' offline;
SQL> alter database open;
SQL> alter tablespace development offline;
SQL> !cp -p /u03/oradata/prod1/devl_PROD* /u03/oradata/PROD
SQL> recover automatic tablespace development;
SQL> alter tablespace development online;

If a write error occurs on one of the datafiles when you offline a tablespace, use "alter tablespace tsname offline temporary;"; you can then run recovery on that datafile. If all datafiles in an offlined tablespace have write errors, use "alter tablespace tsname offline immediate;"; you can then run recovery on the tablespace.

Basic Datafile Recovery Option

The Recover Datafile option is used to recover the specified datafile up to the point of failure, synchronizing it with the other datafiles (complete recovery only). Recover Datafile is performed either from the MOUNT state (after a shutdown), with the datafile ONLINE or OFFLINE, or from the OPEN state, with the datafile OFFLINE. A bad datafile must be taken offline before the database can be opened. Since the SYSTEM tablespace CANNOT be taken offline, you cannot use Recover Datafile from the OPEN state on SYSTEM.

The basic steps for the MOUNT state are: restore the datafile, mount, recover datafile (using automatic here instead of autorecovery), online the datafile if needed, and open, as shown below:

$ cp -p /u03/oradata/prod1/devl_PROD_01.dbf /u03/oradata/PROD
$ sqlplus "/ as sysdba"
SQL> startup mount
SQL> recover automatic datafile '/u03/oradata/PROD/devl_PROD_01.dbf';
SQL> select * from v$datafile;
SQL> alter database datafile '/u03/oradata/PROD/devl_PROD_01.dbf' online;
SQL> alter database open;

The basic steps for Recover Datafile from the OPEN state (except for SYSTEM) while the rest of the database is still up and running, are: offline the datafile, restore the datafile, recover datafile, and online the datafile, as shown below:

$ sqlplus "/ as sysdba"
SQL> alter database datafile '/u03/oradata/PROD/devl_PROD_01.dbf' offline;
SQL> !cp -p /u03/oradata/prod1/devl_PROD_01.dbf /u03/oradata/PROD
SQL> recover automatic datafile '/u03/oradata/PROD/devl_PROD_01.dbf';
SQL> alter database datafile '/u03/oradata/PROD/devl_PROD_01.dbf' online;

Doing a Recover Datafile from the OPEN state (except for SYSTEM) can also be used to bring up your database for use before doing a recovery. The basic steps are: mount, offline the datafile, open, restore the datafile, recover datafile, and online the datafile, as shown below:

$ sqlplus "/ as sysdba"
SQL> startup mount
SQL> alter database datafile '/u03/oradata/PROD/devl_PROD_01.dbf' offline;
SQL> alter database open;
The database is now available to the users, except for that datafile.
SQL> !cp -p /u03/oradata/prod1/devl_PROD_01.dbf /u03/oradata/PROD
SQL> recover automatic datafile '/u03/oradata/PROD/devl_PROD_01.dbf';
SQL> alter database datafile '/u03/oradata/PROD/devl_PROD_01.dbf' online;

You can still access data in tables in a tablespace that has an offlined datafile, as long as the data is in one of the other datafiles of the tablespace and the table header is not in the offlined datafile.

Moving/Renaming datafile to another location

You can move a datafile to another disk, or change its name (for example, when restoring a copy from your backup tape to another location after a disk failure, or when moving it to a disk with more free space). Shut down the database, make the move, and use the "alter database rename file" command to tell Oracle where the file has been moved. The following example moves one of the development datafiles from disk u03 to disk u04:

$ sqlplus "/ as sysdba"
SQL> shutdown immediate
SQL> !mv /u03/oradata/PROD/devl_PROD_01.dbf /u04/oradata/PROD
SQL> startup mount
SQL> alter database rename file '/u03/oradata/PROD/devl_PROD_01.dbf' to '/u04/oradata/PROD/devl_PROD_01.dbf';
SQL> alter database open;
SQL> select * from v$datafile;
Lists the current datafiles, showing your changes.

This can also be done without shutting down the database, by first taking the associated tablespace offline (which prevents others from accessing that tablespace's tables, indexes, and other data):

$ sqlplus "/ as sysdba"
SQL> alter tablespace development offline;
SQL> !mv /u03/oradata/PROD/devl_PROD_01.dbf /u04/oradata/PROD
SQL> alter database rename file '/u03/oradata/PROD/devl_PROD_01.dbf' to '/u04/oradata/PROD/devl_PROD_01.dbf';
SQL> alter tablespace development online;
SQL> select * from v$datafile;

Control file:

The control files of a database store the status of the physical structure of the database. The control file is absolutely crucial to database operation. It contains (but is not limited to) the following types of information:

* Database information (RESETLOGS SCN and its time stamp)
* Archive log history
* Tablespace and datafile records (filenames, datafile checkpoints, read/write status, offline or not)
* Redo threads (current online redo log)
* Database creation date
* Database name
* Current archive log mode
* Log records (sequence numbers, SCN range in each log)
* RMAN backup and recovery records
* Database block corruption information
* Database ID, which is unique to each database

The location of the control files is specified through the control_files init param.

Backing up controlfiles: When a controlfile (multiplexed or not) becomes unavailable (for example, because of a media failure), the database cannot continue to operate. Because control files keep track of datafiles, the controlfile should be backed up whenever a structural change is made to the database (alter database).
The command to back up the controlfile is: alter database backup controlfile to 'filename';
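Both binary and trace-script backups are possible, for example (file names illustrative):

```
SQL> alter database backup controlfile to '/u03/backup/control.bkp';
SQL> alter database backup controlfile to trace;
SQL> alter database backup controlfile to trace as '/u03/backup/recreate_cf.sql';
```

The trace form writes a SQL script containing a create controlfile statement, which is useful when the controlfile must be recreated.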

Recovering the database when the controlfile is lost:
If the controlfile is lost, recovery is somewhat difficult because the database cannot be mounted. The controlfile must be recreated with create controlfile (whose syntax is a bit awkward).
In order to recover from a lost spfile, the DBID is required.

Size of control files:
The size of the control files is governed by the following parameters:

* maxlogfiles
* maxlogmembers
* maxloghistory
* maxinstances
* control_file_record_keep_time

Sections: The controlfile contains the following sections:

* ARCHIVED LOG (reusable)
* BACKUP DATAFILE (reusable)
* BACKUP PIECE (reusable)
* BACKUP REDOLOG (reusable)
* BACKUP SET (reusable)
* COPY CORRUPTION (reusable)
* DATAFILE COPY (reusable)
* DELETED OBJECT (reusable)
* LOG HISTORY (reusable)
* OFFLINE RANGE (reusable)

The minimum number of days that a reusable record is kept in the controlfile is controlled by the control_file_record_keep_time parameter. These sections consist of records. The size, total number, and number of used records are exposed through v$controlfile_record_section.
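For example, per-section record usage can be inspected with:

```
SQL> select type, record_size, records_total, records_used
     from v$controlfile_record_section;
```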

Oracle processes

In addition to the SGA region in RAM memory, an Oracle instance also consists of numerous background processes. It is important to remember that an Oracle database is really a very large program running on the server. When the Oracle program needs to perform a specific task, it will spawn a factotum (slave) task to service the task.

Advanced queuing: Aq_tnxx:
These are the Oracle8 advanced queuing processes that are used to thread processes through the Oracle8 instance.

Archive Monitor:ARCHMON:
This is a process on UNIX that monitors the archive process and writes the redo logs to the archives.

Archiver Process:ARCH:
This process is active only if archive logging is in effect: if the database is in archivelog mode and automatic archiving is enabled. It copies filled online redo log files to the archive log destination that you specify; the archived logs are required when recovering the database. LGWR is responsible for starting additional ARCH processes if the workload increases; there can be up to 10 archiver processes.

Callout queues:EXTPROC
There will be one callout queue for each session performing callouts.

Checkpoint processes:CKPxx:
These checkpoint processes can be started to optimize the checkpoint operation for Oracle logging. The checkpoint process regularly initiates a checkpoint: it has LGWR flush the redo entries, has DBWR write all dirty buffers to the datafiles, and updates the controlfile and datafile headers.

Database Writer:DBWR
This process handles data transfer from the buffers in the SGA to the database files. The database writer process writes dirty blocks from the database buffer cache to the datafiles. Dirty blocks are blocks that have been modified by server processes and must be flushed out to make room for new blocks in the cache. DBWR never modifies the database buffer cache itself; that is done by the server processes. DBWR writes to the datafiles when space is needed in the cache and when a checkpoint occurs. You can configure more than one DBWR process, up to 10.

Dispatcher:Dnnn
This process allows multiple user processes to share a finite number of Oracle servers. It queues and routes process requests to the next available server.

Distributed Recoverer:RECO
This is an Oracle process that resolves failures involving distributed transactions.

Listener (SQL*Net v1): ORASRV
If you are running SQL*Net version 1, this process will be running to service TWO_TASK requests. SQL*Net v1 was obsoleted in Oracle8.

Listener (Net8):TNSLSNR
If you are running TCP/IP, this process, known as the TNS listener process, will be running.

Lock Processes:LCKn
This process is used for interinstance locking in an Oracle Parallel Server environment.

Log Writer:LGWR
The log writer process writes redo log buffer entries (redo entries) into online redo log files on disk. This is done by LGWR when transaction is committed, every 3 seconds and when DBWR signals LGWR to do so.

Parallel Query:Pnnn
These background processes are started when Oracle performs a full table scan on a table that is defined as PARALLEL. There will be one background process for each Parallel Query slave, as defined by DBA_TABLES.DEGREE

Process Monitor:PMON
This process recovers user processes that have failed and cleans up the cache, recovering the resources the failed process had acquired: it rolls back uncommitted transactions and releases locks. In a shared server environment, PMON also restarts any failed dispatcher or server process.

Server process:Snnn
This process makes all the required calls to the database to resolve user requests. It returns results to the Dnnn process that calls it.

Snapshot queues:Snpxx
These are snapshot process queues.

System Monitor:SMON
This process performs instance recovery at instance startup and is responsible for cleaning up temporary segments. It also merges contiguous areas of free space in datafiles, a process called coalescing. In a RAC environment, SMON in a surviving (non-failed) instance performs instance recovery for a failed instance.

Memory Manager:MMAN
The MMAN process dynamically adjusts the sizes of the SGA components. It is a new process added in Oracle 10g as part of Automatic Shared Memory Management.

Pre-Install checks for 10gR2 RDBMS (10.2.x) – Verify.tar

This rule collection is designed for an Oracle Service Engineer, or any Web Server Administrator who needs to perform Pre-Install checks for 10gR2 RDBMS (10.2.x) – HPUX PA-RISC and Itanium Platforms.
The easiest way to run this set of rules is to download the file [save it as validate.tar] and execute the validation engine. The download is a small "tar" file, which consists of the following:

* validate – Shell script that executes the engine – Unix platforms
* – Actual rule engine
* 10gr2_rdbms_hpux_hcve_022506.txt – Rule text file. This file will be different for each rule set

To run the rules:

1. Download the file to your system.
2. Save the file as "validate.tar".
3. Untar the files to a local directory, e.g. tar xvf validate.tar
4. Set your environment to the one you want to validate.
5. Execute perl 10gr2_rdbms_hpux_hcve_022506.txt from the command line, as in the following examples:

# perl 10gr2_rdbms_hpux_hcve_022506.txt

or

# validate 10gr2_rdbms_hpux_hcve_022506.txt

6. When prompted, confirm the environment.

NOTE: “Problems with not finding Perl”

Perl 5.6.1 is usually installed by default with most UNIX operating systems. Please consult your system administrator on where to find the Perl installation. The Perl software is usually installed in the /opt/perl directory; please verify this. If it exists, do the following to set your environment:
export PATH=/opt/perl/bin:$PATH
export PERL5LIB=/opt/perl/lib

Execute the following commands to verify that the Perl program is in your PATH:
which perl -> should return /opt/perl/bin/perl

perl -v -> should return "This is perl, v5.6.1 ..."

If the version is correct, or at least v5.x.x, try running:

# validate 10gr2_rdbms_hpux_hcve_022506.txt

If it does not work, you can download Perl 5.6.1 for free. Then set your PATH and PERL5LIB environment variables to where you installed Perl.

The engine executes all rules and prints a summary page. An example of the run session is below, output from your run session will be different:

“Health Check Rule Engine” will be run in following environment:

ORACLE_HOME: /oracle/9.0.1

If this is not correct environment
Please set correct env parameters and rerun program
Would you like to continue [Y]es/[N]o (Default Yes) :

Executing Rules

Executing Rule: sysdate – completed successfully.
Executing Rule: ulimit – completed successfully.
Executing Rule: free_oracle_home – completed successfully.
Executing Rule: free_shared_pool – completed successfully.

Tests executed at Mon Jan 20 13:07:18 2003

Test Results

===== ==================== ======= === ==============================
1 sysdate RECORD NA Mon Jan 20 13:07:17 EST 2003
2 ulimit PASSED = unlimited
3 free_oracle_home FAILED B 5087768
4 free_shared_pool PASSED > 33531748

After the execution is complete, you can see detailed output and advice on the outcome in the log file. The log file has the same name as the rules file, but with a .log extension. The engine will print the correct log file name.

Note: You can also run each individual rule manually. Although this is the less preferable choice, if you would like to execute the rules manually, follow the instructions below. It is important to understand that %#% in a particular rule represents the value of the output from a previous test; e.g., (if %40% eq True then) in a rule is interpreted as: if the value from the output of rule 40 is true, then further process the current rule. The rule number is on the second line after the rule description and can be seen in 10gr2_rdbms_hpux_hcve_022506.txt.