Display database SGA statistics

DECLARE
  libcac NUMBER(10,2);
  rowcac NUMBER(10,2);
  bufcac NUMBER(10,2);
  redlog NUMBER(10,2);
  spsize NUMBER;
  blkbuf NUMBER;
  logbuf NUMBER;
BEGIN
  SELECT value INTO redlog FROM v$sysstat
   WHERE name = 'redo log space requests';
  SELECT 100*(SUM(pins)-SUM(reloads))/SUM(pins) INTO libcac FROM v$librarycache;
  SELECT 100*(SUM(gets)-SUM(getmisses))/SUM(gets) INTO rowcac FROM v$rowcache;
  SELECT 100*(cur.value + con.value - phys.value)/(cur.value + con.value) INTO bufcac
    FROM v$sysstat cur, v$sysstat con, v$sysstat phys,
         v$statname ncu, v$statname nco, v$statname nph
   WHERE cur.statistic#  = ncu.statistic#
     AND ncu.name = 'db block gets'
     AND con.statistic#  = nco.statistic#
     AND nco.name = 'consistent gets'
     AND phys.statistic# = nph.statistic#
     AND nph.name = 'physical reads';
  SELECT value INTO spsize FROM v$parameter WHERE name = 'shared_pool_size';
  SELECT value INTO blkbuf FROM v$parameter WHERE name = 'db_block_buffers';
  SELECT value INTO logbuf FROM v$parameter WHERE name = 'log_buffer';
  DBMS_OUTPUT.put_line('> SGA CACHE STATISTICS');
  DBMS_OUTPUT.put_line('> ********************');
  DBMS_OUTPUT.put_line('> SQL Cache Hit rate = '||libcac);
  DBMS_OUTPUT.put_line('> Dict Cache Hit rate = '||rowcac);
  DBMS_OUTPUT.put_line('> Buffer Cache Hit rate = '||bufcac);
  DBMS_OUTPUT.put_line('> Redo Log space requests = '||redlog);
  DBMS_OUTPUT.put_line('> ');
  DBMS_OUTPUT.put_line('> INIT.ORA SETTING');
  DBMS_OUTPUT.put_line('> ****************');
  DBMS_OUTPUT.put_line('> Shared Pool Size = '||spsize||' Bytes');
  DBMS_OUTPUT.put_line('> DB Block Buffer = '||blkbuf||' Blocks');
  DBMS_OUTPUT.put_line('> Log Buffer = '||logbuf||' Bytes');
  DBMS_OUTPUT.put_line('> ');
  IF libcac < 99 THEN
    DBMS_OUTPUT.put_line('*** HINT: Library Cache too low! Increase the Shared Pool Size.');
  END IF;
  IF rowcac < 85 THEN
    DBMS_OUTPUT.put_line('*** HINT: Row Cache too low! Increase the Shared Pool Size.');
  END IF;
  -- Thresholds for the next two checks are assumed, commonly used values
  -- (buffer cache hit ratio under 90%, more than 100 redo log space requests);
  -- adjust them to your own standards.
  IF bufcac < 90 THEN
    DBMS_OUTPUT.put_line('*** HINT: Buffer Cache too low! Increase the DB Block Buffer value.');
  END IF;
  IF redlog > 100 THEN
    DBMS_OUTPUT.put_line('*** HINT: Log Buffer value is rather low!');
  END IF;
END;
/
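
For a quick spot check without running the whole block, the same buffer cache hit calculation can be issued as a standalone query. This is only a sketch; it relies on the same classic v$sysstat counters used above.

-- Buffer cache hit ratio as a single query (v$sysstat has a NAME column,
-- so the joins to v$statname are not strictly needed).
SELECT ROUND(100 * (cur.value + con.value - phys.value)
                 / (cur.value + con.value), 2) AS buffer_cache_hit_pct
  FROM v$sysstat cur, v$sysstat con, v$sysstat phys
 WHERE cur.name  = 'db block gets'
   AND con.name  = 'consistent gets'
   AND phys.name = 'physical reads';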


UNDO INFORMATION: dbms_undo_adv

Table 1: Function calls within DBMS_UNDO_ADV

undo_info
  Description: Provides basic information about your undo.
  Outputs: Tablespace name, maximum possible size, current retention value, whether undo is auto-extensible, and whether guaranteed undo retention is in effect.

longest_query
  Description: Shows the longest-running query, so you can tune undo retention in relation to that time.
  Outputs: Length of the longest query (seconds).

required_retention
  Description: Helps determine what undo_retention should be set to in order to prevent "snapshot too old" (ORA-01555) errors. The value is based on the longest-running query.
  Outputs: Suggested value for the init.ora parameter undo_retention.

best_possible_retention
  Description: Provides the undo_retention value that best fits your current undo tablespace size and usage.
  Outputs: Suggested value for the init.ora parameter undo_retention.

required_undo_size
  Description: Provides the size of undo tablespace to create in order to support the current value of the init.ora parameter undo_retention.
  Outputs: Undo tablespace size.

undo_health
  Description: Gives descriptive output if any problems are found with your current undo tablespace size or the setting of the init.ora parameter undo_retention, and provides recommendations to fix them.
  Outputs: Problem / resolution descriptions.

undo_advisor
  Description: Uses the advisor framework to give descriptive output if any problems are found with your current undo configuration, along with possible resolutions.
  Outputs: Problem / resolution descriptions.

undo_autotune
  Description: Tells you whether automatic tuning of undo retention is enabled.
  Outputs: TRUE / FALSE.

rbu_migration
  Description: Provides the undo tablespace size required if you wish to switch to automatic undo management.
  Outputs: Size of undo tablespace.

set serveroutput on
DECLARE
  tsn VARCHAR2(40);
  tss NUMBER(10);
  aex BOOLEAN;
  unr NUMBER(5);
  rgt BOOLEAN;
  retval BOOLEAN;
BEGIN
  retval := dbms_undo_adv.undo_info(tsn, tss, aex, unr, rgt);
  dbms_output.put_line('Undo Tablespace Is: ' || tsn);
  dbms_output.put_line('Undo Tablespace Size Is: ' || TO_CHAR(tss));
  IF aex THEN
    dbms_output.put_line('Undo Autoextend Is Set To: TRUE');
  ELSE
    dbms_output.put_line('Undo Autoextend Is Set To: FALSE');
  END IF;
  dbms_output.put_line('Undo Retention Is: ' || TO_CHAR(unr));
  IF rgt THEN
    dbms_output.put_line('Undo Guarantee Is Set To: TRUE');
  ELSE
    dbms_output.put_line('Undo Guarantee Is Set To: FALSE');
  END IF;
END;
/

Undo Tablespace Is: UNDOTBS
Undo Tablespace Size Is: 28076
Undo Autoextend Is Set To: FALSE
Undo Retention Is: 43200
Undo Guarantee Is Set To: FALSE

PL/SQL procedure successfully completed.

SQL> SELECT dbms_undo_adv.longest_query(SYSDATE-1, SYSDATE) FROM dual;

DBMS_UNDO_ADV.LONGEST_QUERY(SYSDATE-1,SYSDATE)
----------------------------------------------
45689

SQL> SELECT dbms_undo_adv.required_retention(SYSDATE-1, SYSDATE) FROM dual;

DBMS_UNDO_ADV.REQUIRED_RETENTION(SYSDATE-1,SYSDATE)
---------------------------------------------------
45689

SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     43200
undo_tablespace                      string      UNDOTBS

SQL> SELECT dbms_undo_adv.required_undo_size(43200, SYSDATE-1, SYSDATE) FROM dual;

DBMS_UNDO_ADV.REQUIRED_UNDO_SIZE(43200,SYSDATE-1,SYSDATE)
----------------------------------------------------------
6426
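
The figure returned by required_undo_size is the undo tablespace size (in megabytes, as I understand the function) needed to satisfy the given retention over that workload window. A rough sketch of checking the current undo datafiles and, if needed, resizing one toward a recommended figure follows; the datafile path and target size are placeholders only, and in the example above the existing tablespace already exceeds the recommendation.

-- Current undo datafiles and sizes (MB).
SELECT file_name, bytes/1024/1024 AS mb
  FROM dba_data_files
 WHERE tablespace_name = 'UNDOTBS';

-- Placeholder path and size; substitute values from the query above.
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/undotbs01.dbf' RESIZE 6500M;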

SQL> set serveroutput on
DECLARE
v VARCHAR2(300);
BEGIN
v := dbms_undo_adv.undo_advisor(SYSDATE-1, SYSDATE, 1);
dbms_output.put_line(v);
END;
/
Finding 1:The undo tablespace is OK.

PL/SQL procedure successfully completed.

SQL> set serveroutput on

DECLARE
  prob VARCHAR2(100);
  reco VARCHAR2(100);
  rtnl VARCHAR2(100);
  retn PLS_INTEGER;
  utbs PLS_INTEGER;
  retv PLS_INTEGER;
BEGIN
  retv := dbms_undo_adv.undo_health(prob, reco, rtnl, retn, utbs);
  dbms_output.put_line('Problem: ' || prob);
  dbms_output.put_line('Recmmnd: ' || reco);
  dbms_output.put_line('Rationl: ' || rtnl);
  dbms_output.put_line('Retentn: ' || TO_CHAR(retn));
  dbms_output.put_line('UTBSize: ' || TO_CHAR(utbs));
END;
/

Problem: No problem found
Recmmnd:
Rationl:
Retentn: 0
UTBSize: 0

Undo size based on retention:

SQL> SELECT dbms_undo_adv.required_undo_size(43200) FROM dual;

DBMS_UNDO_ADV.REQUIRED_UNDO_SIZE(43200)
---------------------------------------
8403
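
The best_possible_retention call from Table 1 can be queried the same way. The sketch below assumes the no-argument form of the function is available in your release; it reports the best undo_retention (in seconds) the current undo tablespace can support.

SELECT dbms_undo_adv.best_possible_retention FROM dual;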

SGA INFO

SQL> desc v$sgainfo
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 NAME                                               VARCHAR2(32)
 BYTES                                              NUMBER
 RESIZEABLE                                         VARCHAR2(3)

SQL> select * from v$sgainfo;

NAME                                  BYTES RESIZEABLE
-------------------------------- ---------- ----------
Fixed SGA Size                      2073128 No
Redo Buffers                       31477760 No
Buffer Cache Size                1962934272 Yes
Shared Pool Size                  788529152 Yes
Large Pool Size                    33554432 Yes
Java Pool Size                     16777216 Yes
Streams Pool Size                  50331648 Yes
Granule Size                       16777216 No
Maximum SGA Size                 3456106496 No
Startup overhead in Shared Pool   117440512 No
Free SGA Memory Available         570425344

11 rows selected.

PGA Memory Management

The Program Global Area (PGA) is a private memory region containing data and control information for a server process. Access to it is exclusive to that server process and is read and written only by the Oracle code acting on behalf of it. An example of such information is the runtime area of a cursor. Each time a cursor is executed, a new runtime area is created for that cursor in the PGA memory region of the server process executing that cursor.

For complex queries (for example, decision support queries), a big portion of the runtime area is dedicated to work areas allocated by memory intensive operators, such as the following:
* Sort-based operators, such as ORDER BY, GROUP BY, ROLLUP, and window functions
* Hash-join
* Bitmap merge
* Bitmap create
* Write buffers used by bulk load operations

A sort operator uses a work area (the sort area) to perform the in-memory sort of a set of rows. Similarly, a hash-join operator uses a work area (the hash area) to build a hash table from its left input.

The size of a work area can be controlled and tuned. Generally, bigger work areas can significantly improve the performance of a particular operator at the cost of higher memory consumption. Ideally, the size of a work area is big enough that it can accommodate the input data and auxiliary memory structures allocated by its associated SQL operator. This is known as the optimal size of a work area. When the size of the work area is smaller than optimal, the response time increases, because an extra pass is performed over part of the input data. This is known as the one-pass size of the work area. Under the one-pass threshold, when the size of a work area is far too small compared to the input data size, multiple passes over the input data are needed. This could dramatically increase the response time of the operator. This is known as the multi-pass size of the work area. For example, a serial sort operation that needs to sort 10 GB of data needs a little more than 10 GB to run optimally and at least 40 MB to run in one-pass mode. If this sort gets less than 40 MB, then it must perform several passes over the input data.

The goal is to have most work areas running with an optimal size (for example, more than 90% or even 100% for pure OLTP systems), while a smaller fraction of them are running with a one-pass size (for example, less than 10%). Multi-pass execution should be avoided. Even for DSS systems running large sorts and hash-joins, the memory requirement for the one-pass executions is relatively small. A system configured with a reasonable amount of PGA memory should not need to perform multiple passes over the input data.
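
One way to see how work areas have actually been executing is the V$SQL_WORKAREA_HISTOGRAM view. The following sketch, adapted from the usual PGA tuning approach, shows the optimal, one-pass, and multi-pass execution counts per work-area size bucket; a healthy system shows almost everything in the optimal column.

select low_optimal_size/1024        as low_kb,
       (high_optimal_size + 1)/1024 as high_kb,
       optimal_executions,
       onepass_executions,
       multipasses_executions
from   v$sql_workarea_histogram
where  total_executions <> 0;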

Automatic PGA memory management simplifies and improves the way PGA memory is allocated. By default, PGA memory management is enabled. In this mode, Oracle dynamically adjusts the size of the portion of the PGA memory dedicated to work areas, based on 20% of the SGA memory size. The minimum value is 10MB.

Note:
For backward compatibility, automatic PGA memory management can be disabled by setting the value of the PGA_AGGREGATE_TARGET initialization parameter to 0. When automatic PGA memory management is disabled, the maximum size of a work area can be sized with the associated _AREA_SIZE parameter, such as the SORT_AREA_SIZE initialization parameter.

When running under the automatic PGA memory management mode, sizing of work areas for all sessions becomes automatic and the *_AREA_SIZE parameters are ignored by all sessions running in that mode. At any given time, the total amount of PGA memory available to active work areas in the instance is automatically derived from the PGA_AGGREGATE_TARGET initialization parameter. This amount is set to the value of PGA_AGGREGATE_TARGET minus the amount of PGA memory allocated by other components of the system (for example, PGA memory allocated by sessions). The resulting PGA memory is then assigned to individual active work areas, based on their specific memory requirements.

Under automatic PGA memory management mode, the main goal of Oracle is to honor the PGA_AGGREGATE_TARGET limit set by the DBA by dynamically controlling the amount of PGA memory allotted to SQL work areas. At the same time, Oracle tries to maximize the performance of all memory-intensive SQL operations by maximizing the number of work areas that use an optimal amount of PGA memory (cache memory). The remaining work areas are executed in one-pass mode, unless the PGA memory limit set by the DBA with the parameter PGA_AGGREGATE_TARGET is so low that multi-pass execution is required to further reduce PGA memory consumption and honor the PGA target limit.

When configuring a brand new instance, it is hard to know precisely the appropriate setting for PGA_AGGREGATE_TARGET. You can determine this setting in three stages:

1. Make a first estimate for PGA_AGGREGATE_TARGET, based on a rule of thumb. By default, Oracle uses 20% of the SGA size. However, this initial setting may be too low for a large DSS system.
2. Run a representative workload on the instance and monitor performance, using PGA statistics collected by Oracle, to see whether the maximum PGA size is under-configured or over-configured.
3. Tune PGA_AGGREGATE_TARGET, using Oracle PGA advice statistics (sample monitoring queries for steps 2 and 3 are sketched after this list).
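
A sketch of the monitoring queries behind steps 2 and 3, using the V$PGASTAT and V$PGA_TARGET_ADVICE views:

-- Overall PGA statistics; watch "cache hit percentage" and "over allocation count".
select name, value, unit from v$pgastat;

-- Estimated cache hit percentage and over-allocation count for candidate target sizes.
select round(pga_target_for_estimate/1024/1024) as target_mb,
       estd_pga_cache_hit_percentage            as estd_hit_pct,
       estd_overalloc_count
from   v$pga_target_advice;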

Setting PGA_AGGREGATE_TARGET Initially
The value of the PGA_AGGREGATE_TARGET initialization parameter (for example 100000 KB, 2500 MB, or 50 GB) should be set based on the total amount of memory available for the Oracle instance. This value can then be tuned and dynamically modified at the instance level.

Assume that an Oracle instance is configured to run on a system with 4 GB of physical memory. Part of that memory should be left for the operating system and other non-Oracle applications running on the same hardware system. You might decide to dedicate only 80% (3.2 GB) of the available memory to the Oracle instance.

You must then divide the resulting memory between the SGA and the PGA.

* For OLTP systems, the PGA memory typically accounts for a small fraction of the total memory available (for example, 20%), leaving 80% for the SGA.
* For DSS systems running large, memory-intensive queries, PGA memory can typically use up to 70% of that total (up to 2.2 GB in this example).

Good initial values for the parameter PGA_AGGREGATE_TARGET might be:
* For OLTP: PGA_AGGREGATE_TARGET = (total_mem * 80%) * 20%
* For DSS: PGA_AGGREGATE_TARGET = (total_mem * 80%) * 50%
where total_mem is the total amount of physical memory available on the system.
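
A rough sketch of that arithmetic for the 4 GB example above, applied with ALTER SYSTEM; the numbers are illustrative starting points, not recommendations.

-- 4096 MB total * 80% for Oracle = roughly 3276 MB for the instance
-- OLTP starting point: 3276 MB * 20% = about 655 MB
-- DSS  starting point: 3276 MB * 50% = about 1638 MB
ALTER SYSTEM SET pga_aggregate_target = 655M;
-- ALTER SYSTEM SET pga_aggregate_target = 1638M;  -- DSS alternative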

vmstat command

The first tool to use is the vmstat command, which quickly provides compact information about various system resources and their related performance problems.
The vmstat command reports statistics about kernel threads in the run and wait queue, memory, paging, disks, interrupts, system calls, context switches, and CPU activity. The reported CPU activity is a percentage breakdown of user mode, system mode, idle time, and waits for disk I/O.
Note: If the vmstat command is used without any options, or only with the interval and, optionally, the count parameter (for example, vmstat 2 10), then the first line of numbers is an average since system reboot.
As a CPU monitor, the vmstat command is superior to the iostat command in that its one-line-per-report output is easier to scan as it scrolls and there is less overhead involved if there are a lot of disks attached to the system. The following example can help you identify situations in which a program has run away or is too CPU-intensive to run in a multiuser environment.

# vmstat 2
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------
r b avm fre re pi po fr sr cy in sy cs us sy id wa
1 0 22478 1677 0 0 0 0 0 0 188 1380 157 57 32 0 10
1 0 22506 1609 0 0 0 0 0 0 214 1476 186 48 37 0 16
0 0 22498 1582 0 0 0 0 0 0 248 1470 226 55 36 0 9

2 0 22534 1465 0 0 0 0 0 0 238 903 239 77 23 0 0
2 0 22534 1445 0 0 0 0 0 0 209 1142 205 72 28 0 0
2 0 22534 1426 0 0 0 0 0 0 189 1220 212 74 26 0 0
3 0 22534 1410 0 0 0 0 0 0 255 1704 268 70 30 0 0
2 1 22557 1365 0 0 0 0 0 0 383 977 216 72 28 0 0

2 0 22541 1356 0 0 0 0 0 0 237 1418 209 63 33 0 4
1 0 22524 1350 0 0 0 0 0 0 241 1348 179 52 32 0 16
1 0 22546 1293 0 0 0 0 0 0 217 1473 180 51 35 0 14

This output shows the effect of introducing a program in a tight loop to a busy multiuser system. The first three reports (the summary has been removed) show the system balanced at 50-55 percent user, 30-35 percent system, and 10-15 percent I/O wait. When the looping program begins, all available CPU cycles are consumed. Because the looping program does no I/O, it can absorb all of the cycles previously unused because of I/O wait. Worse, it represents a process that is always ready to take over the CPU when a useful process relinquishes it. Because the looping program has a priority equal to that of all other foreground processes, it will not necessarily have to give up the CPU when another process becomes dispatchable. The program runs for about 10 seconds (five reports), and then the activity reported by the vmstat command returns to a more normal pattern.

Optimum use would have the CPU working 100 percent of the time. This holds true in the case of a single-user system with no need to share the CPU. Generally, if us + sy time is below 90 percent, a single-user system is not considered CPU constrained. However, if us + sy time on a multiuser system exceeds 80 percent, the processes may spend time waiting in the run queue. Response time and throughput might suffer.

To check if the CPU is the bottleneck, consider the four cpu columns and the two kthr (kernel threads) columns in the vmstat report. It may also be worthwhile looking at the faults column:

* cpu

Percentage breakdown of CPU time usage during the interval. The cpu columns are as follows:
o us

The us column shows the percent of CPU time spent in user mode. A UNIX® process can execute in either user mode or system (kernel) mode. When in user mode, a process executes within its application code and does not require kernel resources to perform computations, manage memory, or set variables.
o sy

The sy column details the percentage of time the CPU was executing a process in system mode. This includes CPU resource consumed by kernel processes (kprocs) and others that need access to kernel resources. If a process needs kernel resources, it must execute a system call and is thereby switched to system mode to make that resource available. For example, reading or writing of a file requires kernel resources to open the file, seek a specific location, and read or write data, unless memory mapped files are used.
o id

The id column shows the percentage of time which the CPU is idle, or waiting, without pending local disk I/O. If there are no threads available for execution (the run queue is empty), the system dispatches a thread called wait, which is also known as the idle kproc. On an SMP system, one wait thread per processor can be dispatched. The report generated by the ps command (with the -k or -g 0 option) identifies this as kproc or wait. If the ps report shows a high aggregate time for this thread, it means there were significant periods of time when no other thread was ready to run or waiting to be executed on the CPU. The system was therefore mostly idle and waiting for new tasks.
o wa

The wa column details the percentage of time the CPU was idle with pending local disk I/O and NFS-mounted disks. If there is at least one outstanding I/O to a disk when wait is running, the time is classified as waiting for I/O. Unless asynchronous I/O is being used by the process, an I/O request to disk causes the calling process to block (or sleep) until the request has been completed. Once an I/O request for a process completes, it is placed on the run queue. If the I/Os were completing faster, more CPU time could be used.

A wa value over 25 percent could indicate that the disk subsystem might not be balanced properly, or it might be the result of a disk-intensive workload.

For information on the change made to wa, see Wait I/O time reporting.
* kthr

Number of kernel threads in various queues averaged per second over the sampling interval. The kthr columns are as follows:
o r

Average number of kernel threads that are runnable, which includes threads that are running and threads that are waiting for the CPU. If this number is greater than the number of CPUs, there is at least one thread waiting for a CPU and the more threads there are waiting for CPUs, the greater the likelihood of a performance impact.
o b

Average number of kernel threads in the VMM wait queue per second. This includes threads that are waiting on filesystem I/O or threads that have been suspended due to memory load control.

If processes are suspended due to memory load control, the blocked column (b) in the vmstat report indicates the increase in the number of threads rather than the run queue.
o p

For vmstat -I, the number of threads waiting on I/O to raw devices per second. Threads waiting on I/O to filesystems are not included here.
* faults

Information about process control, such as trap and interrupt rate. The faults columns are as follows:
o in

Number of device interrupts per second observed in the interval. Additional information can be found in Assessing disk performance with the vmstat command.
o sy

The number of system calls per second observed in the interval. Resources are available to user processes through well-defined system calls. These calls instruct the kernel to perform operations for the calling process and exchange data between the kernel and the process. Because workloads and applications vary widely, and different calls perform different functions, it is impossible to define how many system calls per second are too many. But typically, when the sy column rises above 10000 calls per second on a uniprocessor, further investigation is called for (on an SMP system the number is 10000 calls per second per processor). One reason could be "polling" subroutines like the select() subroutine. For this column, it is advisable to have a baseline measurement that gives a count for a normal sy value.
o cs

Number of context switches per second observed in the interval. The physical CPU resource is subdivided into logical time slices of 10 milliseconds each. Assuming a thread is scheduled for execution, it will run until its time slice expires, until it is preempted, or until it voluntarily gives up control of the CPU. When another thread is given control of the CPU, the context or working environment of the previous thread must be saved and the context of the current thread must be loaded. The operating system has a very efficient context switching procedure, so each switch is inexpensive in terms of resources. Any significant increase in context switches, such as when cs is a lot higher than the disk I/O and network packet rate, should be cause for further investigation.

Log Switch details

set lines 120;
set pages 999;
SELECT
  to_char(first_time,'YYYY-MM(MON)-DD') day,
  to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') "12a",
  to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') "1a",
  to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') "2a",
  to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') "3a",
  to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') "4a",
  to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') "5a",
  to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') "6a",
  to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') "7a",
  to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') "8a",
  to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') "9a",
  to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') "10a",
  to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') "11a",
  to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') "12p",
  to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') "1p",
  to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') "2p",
  to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') "3p",
  to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') "4p",
  to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') "5p",
  to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') "6p",
  to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') "7p",
  to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') "8p",
  to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') "9p",
  to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') "10p",
  to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') "11p"
FROM
  v$log_history
GROUP BY
  to_char(first_time,'YYYY-MM(MON)-DD')
ORDER BY
  1;
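
The switch counts above are easier to interpret alongside the size of the online redo logs; frequent switches combined with small log files usually point at undersized redo logs. A small companion query of V$LOG (a sketch) lists each group and its size:

select group#, thread#, bytes/1024/1024 as size_mb, members, archived, status
from   v$log
order  by thread#, group#;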

Naming and Identifying Oracle Databases and Instances

Identifying the database you are currently working on can sometimes be confusing.

DB_NAME

The DB_NAME parameter holds the name of the database, the name used when the database was created. It is specified within the INIT.ORA parameter file or in the CREATE DATABASE command. This is one of those parameters that is optional, but it is always best to set it. This is especially true for a standby database, where it should match the production database.

Getting DB_NAME from V$PARAMETER

SQL> select name, value from v$parameter where name = 'db_name';

NAME                 VALUE
-------------------- --------------------------------------------------
db_name              abc

DB_DOMAIN

DB_DOMAIN is the value of the domain to which the database belongs. It is the location of the database within the network hierarchy.

Getting DB_DOMAIN from V$PARAMETER

SQL> select name, value from v$parameter where name = 'db_domain';

NAME                 VALUE
-------------------- --------------------------------------------------
db_domain            abc.tellme.net

GLOBAL_NAME

This determines what database I am connected to. This is the value that everyone used to query in SQL*Plus to set the prompt, giving a unique identity to the currently active SQL*Plus session. It is a combination of the DB_NAME parameter and the DB_DOMAIN parameter.

SQL> select * from global_name;

GLOBAL_NAME
---------------------------------------
abc.red.tellme.NET

Listing 4
Setting the SQL*Plus prompt

column gname new_value dname noprint
select substr(global_name,1,instr(global_name,'.')-1) gname from global_name;
define prmpt='&&dname'
set sqlprompt "&&prmpt-SQL> "

GLOBAL_NAMES

While this variable does not directly store a value that you can identify your database with, it does dictate how you might connect to a database. It is either TRUE or FALSE; when set to TRUE, it enforces that database links have the same name as the remote database to which they link. The interesting thing is that Oracle lets you create a database link that goes against the GLOBAL_NAMES parameter setting; it is not until you actually try to use it that you get an ORA-02085 error. After setting GLOBAL_NAMES to FALSE, I am able to use the database link.
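
A quick way to check the setting, and to relax it for the current session while testing a mismatched link name, is sketched below (ALTER SESSION works here because GLOBAL_NAMES is session-modifiable):

select name, value from v$parameter where name = 'global_names';

alter session set global_names = FALSE;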

INSTANCE_NAME

The INSTANCE_NAME is the name given to the set of processes that are running on the database server. This parameter is set in the INIT.ORA file and can be seen by querying the V$INSTANCE view.

SQL for extracting the INSTANCE_NAME

SQL> select instance_number, instance_name, host_name from v$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME
--------------- ---------------- ---------------------------------------
              1 abc              JKOOPMANN1

SERVICE_NAMES

This parameter defaults to the GLOBAL_NAME. Remember, the GLOBAL_NAME is a combination of DB_NAME and DB_DOMAIN. This parameter can take multiple comma-separated names, which allows you to give different service names to the same instance, or a single service name to multiple instances that access the same database, as in Oracle's Real Application Clusters environment. The point here is that the service name will not give you a unique name for the instance to which you are connected.
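
To see which service names the instance is currently offering, the parameter can be queried directly (a simple sketch; the listener side can also be checked with lsnrctl services):

select name, value from v$parameter where name = 'service_names';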

DBID

While DBID is not a "name" of a database, it does uniquely identify a database by number. The DBID is generated at database creation and is supposed to be unique across databases. I know of one instance where it is not: if you clone a database, the DBID of the clone will be the same as that of the original. At least Oracle has given us a work-around in the form of a utility (DBNEWID) to change the DBID. There are two views you can get this information from, V$DATABASE and GV$DATABASE. If you are not aware of the 'GV' views, they are GLOBAL across clustered instances.

SQL> select dbid, name from v$database;

      DBID NAME
---------- --------------------
3060884937 ABC

SQL> select inst_id, dbid, name from gv$database;

   INST_ID       DBID NAME
---------- ---------- ---------
         1 3060884937 abc
         2 3060884937 abc