Thursday, June 21, 2012

ORACLE DBA NOTES PART 2

Row Chaining and Migration

Row Migration:
A row migrates when an update to that row would cause it to no longer fit in its block (together with all of the other data that currently exists there). A migration means that the entire row moves and we just leave behind a "forwarding address": the original block holds only the rowid of the new block, and the entire row is moved there.

Row Chaining:
A row is too large to fit into a single database block. For example, if you use a 4KB block size for your database and you need to insert an 8KB row into it, Oracle will use three blocks and store the row in pieces.

Some conditions that will cause row chaining are:
• Tables whose row size exceeds the block size.
• Tables with LONG and LONG RAW columns, which are prone to having chained rows.
• Tables with more than 255 columns, since Oracle breaks such wide rows into pieces.

So, instead of just having a forwarding address on one block and the data on another (as with migration), with chaining we have pieces of the row's data on two or more blocks.
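
Chained and migrated rows can be observed with standard dictionary views and the ANALYZE command; a minimal sketch, assuming a hypothetical table MYTAB and the CHAINED_ROWS table created by utlchain.sql:

SQL> -- system-wide indicator: how often Oracle had to follow a row piece to another block
SQL> select name, value from v$sysstat where name = 'table fetch continued row';
SQL> -- per-table count of chained/migrated rows, populated by ANALYZE
SQL> analyze table mytab compute statistics;
SQL> select table_name, chain_cnt from dba_tables where table_name = 'MYTAB';
SQL> -- list the affected rowids (requires the CHAINED_ROWS table from utlchain.sql)
SQL> analyze table mytab list chained rows into chained_rows;
SQL> select count(*) from chained_rows;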

Real Application Clusters

Oracle RAC is a cluster database with a shared cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches to provide a highly scalable and available database solution for all your business applications. Oracle RAC provides the foundation for enterprise grid computing.

Oracle’s Real Application Clusters (RAC) option supports the transparent deployment of a single database across a cluster of servers, providing fault tolerance from hardware failures or planned outages. Oracle RAC running on clusters provides Oracle’s highest level of capability in terms of availability, scalability, and low-cost computing.

One database is opened by multiple instances, so the database remains highly available if an instance crashes.

Cluster Software:
Oracle Clusterware, or third-party cluster products from vendors such as Veritas, is required to provide cluster support: it allows each node to know which nodes belong to the cluster and are available and, with Oracle Clusterware, which nodes have failed so they can be ejected from the cluster and the errors on those nodes cleared.

Oracle Clusterware has two key components: the Oracle Cluster Registry (OCR) and the voting disk.
The cluster registry holds all information about nodes, instances, services and ASM storage (if used); it also contains state information, i.e. whether they are available and up.

The voting disk is used to determine whether a node has failed, i.e. become separated from the majority. If a node is deemed to no longer belong to the majority, it is forcibly rebooted and will, after the reboot, add itself back to the surviving cluster nodes.
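
As a quick sketch (not a full procedure), both components can be inspected with the standard Clusterware command-line utilities; exact output and paths vary by version:

$ ocrcheck                      # verifies the Oracle Cluster Registry location and integrity
$ crsctl query css votedisk     # lists the configured voting disks
$ crsctl check crs              # overall health of the Clusterware stack
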
http://pocketoracle.blogspot.com/

Oracle Architecture

 An Oracle server:

a DBMS that provides an open, comprehensive, integrated approach to information management,
Consists of an Instance and a database.

An Oracle Instance:
a means to access an Oracle database,
always opens one and only one database and consists of memory structures and background processes.

An Oracle database:
a collection of data that is treated as a unit,
Consists of Datafiles, Control files, Redo log files. (optional param file, passwd file, archived log)

Parameter Files - The Oracle Parameter File (PFILE or SPFILE) holds initialization parameters which indicate where the control files are, how memory is allocated, how logging is performed, and other characteristics of the Oracle instance.
Control Files - The Control File stores information needed to verify that the database is consistent, including a list of all database files making up the instance and other important data.
Redo Log Files - The Redo Log files store a sequence of entries describing all actions taken against the database. This data is used to recover a database in the event of instance failure.
Data Files - The Data Files contain blocks of data which store database objects (e.g., tables, indexes, materialized views, etc.) in the database.
Temp Files - The Temp Files contain data used temporarily, including intermediate results, sort output and so forth.
Data and Temp files are grouped into tablespaces.
Instance Memory Structures:

System Global Area (SGA):
Allocated at instance startup, and is a fundamental component of an Oracle Instance.

SGA Memory structures:
Includes the Shared Pool, Database Buffer Cache and Redo Log Buffer, among others.

Shared Pool :
Consists of two key performance-related memory structures: the Library Cache and the Data Dictionary Cache.

Library Cache:
Stores information about the most recently used SQL and PL/SQL statements and enables the sharing of commonly used statements.

Data Dictionary Cache:
Stores a collection of the most recently used definitions in the database (database files, tables, indexes, columns, etc.), which improves performance. During the parse phase, the server process looks in the data dictionary for information to resolve object names and validate access.

Database Buffer Cache:
Stores copies of data blocks that have been retrieved from the datafiles; all work on data is done on the blocks in this cache.

Redo Log Buffer:
Records all changes made to database data blocks; its primary purpose is recovery. Redo entries contain the information needed to reconstruct (redo) those changes.
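
The current sizes of these SGA components can be checked from the instance itself; a minimal sketch:

SQL> show sga
SQL> select name, bytes from v$sgainfo order by bytes desc;
SQL> select pool, sum(bytes) from v$sgastat group by pool;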

User process:
Started when a database user requests a connection to the Oracle server; it requests interaction with the Oracle server but does not interact with it directly.

Server process:
Connects to the Oracle instance and is started when a user establishes a session.
Fulfills the calls generated and returns results.
Each server process has its own non-shared PGA, created when the process is started.
The server process parses and runs SQL statements issued through the application, reads the necessary data blocks from the datafiles on disk into the shared database buffers of the SGA (if the blocks are not already present in the SGA), and returns the results in such a way that the application can process the information.

In some situations when the application and Oracle Database operate on the same computer, it is possible to combine the user process and corresponding server process into a single process to reduce system overhead.

Program Global Area (PGA): 
The PGA (Program or Process Global Area) is a memory area (RAM) that stores data and control information for a single process. For example, it typically contains a sort area, hash area, session cursor cache, etc.

Memory area used by a single Oracle server process.
Allocated when the server process is started,
deallocated when the process is terminated and
used by only one process.
Used to process SQL statements and
to hold logon and other session information.
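
A quick way to see the PGA configuration and aggregate usage (a sketch; available statistics vary slightly by version):

SQL> show parameter pga_aggregate_target
SQL> select name, value, unit from v$pgastat
     where name in ('aggregate PGA target parameter', 'total PGA allocated', 'total PGA inuse');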

Background processes:
Started when an Oracle Instance is started.

Background processes maintain and enforce the relationships between physical and memory structures.
There are two types of background processes:

            1.         Mandatory background processes
            2.         Optional background processes

Mandatory background processes:
– DBWn, PMON, CKPT,  LGWR,  SMON

Optional background processes:
– ARCn, LMDn, RECO, CJQ0, LMON, Snnn, Dnnn, Pnnn, LCKn, QMNn
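
The background processes actually running in an instance can be listed with a simple query (a sketch; PADDR = '00' marks processes that are not started):

SQL> select name, description from v$bgprocess where paddr <> '00' order by name;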

DBWn writes when:
• Checkpoint occurs
• Dirty buffers reach threshold
• There are no free buffers
• Timeout occurs
• RAC ping request is made
• Tablespace OFFLINE
• Tablespace READ ONLY
• Table DROP or TRUNCATE
• Tablespace BEGIN BACKUP

Log Writer (LGWR) writes:
• At commit
• When 1/3rd full
• When there is 1 MB of redo
• Every 3 seconds
• Before DBWn writes

System Monitor (SMON) Responsibilities:
• Instance recovery
– Rolls forward changes in redo logs
– Opens database for user access
– Rolls back uncommitted transactions
• Coalesces free space
• Deallocates temporary segments.

Process Monitor (PMON) Cleans up after failed processes by:
• Rolling back the transaction
• Releasing locks
• Releasing other resources
• Restarting dead dispatchers

Checkpoint (CKPT) Responsible for:
• Signaling DBWn at checkpoints
• Updating datafile headers with checkpoint information
• Updating control files with checkpoint information

CKPT (Oracle Checkpoint Process) is an Oracle background process that timestamps all datafiles and control files to indicate that a checkpoint has occurred.
The "DBWR checkpoints" statistic (v$sysstat) indicates the number of checkpoint requests completed.
A checkpoint occurs when the DBWR (database writer) process writes all modified buffers in the SGA buffer cache to the database data files. Data file headers are also updated with the latest checkpoint SCN, even if the file had no changed blocks.
Checkpoints occur AFTER (not during) every redo log switch and also at intervals specified by initialization parameters.
Set parameter LOG_CHECKPOINTS_TO_ALERT=TRUE to observe checkpoint start and end times in the database alert log.
Checkpoints can be forced with the ALTER SYSTEM CHECKPOINT; command.
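
For example, the two commands mentioned above can be combined to observe a checkpoint in the alert log:

SQL> alter system set log_checkpoints_to_alert = true;
SQL> alter system checkpoint;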

Archiver (ARCn)
• Optional background process
• Automatically archives online redo logs when ARCHIVELOG mode is set
• Preserves the record of all changes made to the database
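
Whether the database is running in ARCHIVELOG mode (and where the archiver writes) can be checked as follows; a minimal sketch:

SQL> archive log list
SQL> select log_mode from v$database;
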
Database Concepts and Architecture
http://www.orafaq.com/wiki/Database_Concepts_and_Architecture

What is an Oracle Database?
A database is a collection of data. An Oracle database holds its data in a coordinated set of files stored on disk: parameter files, control files, redo log files, data files and temp files, as described above. Data and Temp files are grouped into tablespaces (see definition below).

What is an Instance?
An instance is a collection of Oracle background processes and shared memory structures.
Memory Areas
SGA - System Global Area
The SGA consists of the following four (five if MTS) parts:
§          Fixed Portion
§          Variable Portion
§          Shared Pool
§          Java Pool
PGA - Process Global Area
UGA - User Global Area
Processes
Oracle uses many small (focused) processes to manage and control the Oracle instance. This allows for optimum execution on multi-processor systems using multi-core and multi-threaded technology. Some of these processes include:
§        PMON - Process Monitor
§        SMON - System Monitor
§        ARCn - Redo Log Archiver
§        LGWR - Redo Log Writer
§        DBWn - Database Writer
§        CKPT - Checkpoint process
§        RECO - Recoverer
§        CJQn - Job Queue Coordinator
§        QMNn - Queue-monitor processes
§        Dnnn - Dispatcher Processes (multiplex server-processes on behalf of users)
§        Snnn - Shared server processes (serve client-requests)
§        MMAN - Memory Manager process (handles automatic memory management when SGA_TARGET/MEMORY_TARGET is used)
§        LSP0 - Logical standby coordinator process (controls Data Guard log-application)
§        MRP - Media-recovery process (detached recovery-server process)
§        MMON - Manageability monitor (writes AWR snapshots to the WR$ base tables)
§        MMNL - Memory monitor light (gathers and stores AWR statistics)
§        PSP0 - Process-spawner (spawns Oracle processes)
§        RFS - Remote file server process (archive to a remote site)
§        DBRM - DB resource manager (new in 11g)
§        DIAGn - Diagnosability process (new in 11g)
§        FBDA - Flashback data archiver process (new in 11g)
§        VKTM - Virtual Timekeeper (new in 11g)
§        Wnnn - Space management slave processes (new in 11g)
§        SMCO - Space management coordinator process (new in 11g)
An instance can mount and open one and only one database.
A database can normally only be mounted and opened by one instance. However, when using Real Application Clusters (RAC) a database can be mounted and opened by many instances.

The PGA (Program or Process Global Area) is a memory area (RAM) that stores data and control information for a single process. For example, it typically contains a sort area, hash area, session cursor cache, etc.
http://www.orafaq.com/wiki/PGA

The User Global Area (UGA) is a memory area (RAM) that holds session-based information.
PMON (Process MONitor) is an Oracle background process created when you start a database instance.
The PMON process will free up resources if a user process fails (eg. release database locks).
PMON normally wakes up every 3 seconds to perform its housekeeping activities.
PMON must always be running for an instance. If not, the instance will terminate.
To speed up housekeeping, one may also wake up PMON manually (here PMON is assumed to be Oracle process number 2):
SQL> oradebug setmypid
SQL> oradebug wakeup 2

SMON (System MONitor) is an Oracle background process created when you start a database instance.
The SMON process performs instance recovery, cleans up after dirty shutdowns and coalesces adjacent free extents into larger free extents.
SMON wakes up every 5 minutes to perform housekeeping activities.
SMON must always be running for an instance. If not, the instance will terminate.


Pfile parameters (a minimal sample pfile follows this list):
·           compatible
·           db_name
·           instance_name
·           control_files='__','___'
·           db_block_size
·           db_block_buffers (now db_cache_size)
·           db_files
·           undo_tablespace=undo1
·           undo_management
·           background_dump_dest
·           user_dump_dest
·           core_dump_dest
·           shared_pool_size
·           log_buffer
·           remote_login_passwordfile
·           sql_trace
·           timed_statistics
·           max_dump_file_size
·           processes (maximum number of OS user processes that can connect to the instance)
·           ifile (name of another pfile embedded within the current pfile)
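
A minimal illustrative pfile putting some of these parameters together (all values and paths below are made-up placeholders, not recommendations):

# init.ora - illustrative sample only
db_name=orcl
instance_name=orcl
compatible=11.2.0
control_files=('/u01/oradata/orcl/control01.ctl','/u02/oradata/orcl/control02.ctl')
db_block_size=8192
db_cache_size=512M
shared_pool_size=256M
log_buffer=8388608
undo_management=AUTO
undo_tablespace=UNDOTBS1
processes=300
remote_login_passwordfile=EXCLUSIVE
background_dump_dest=/u01/app/oracle/admin/orcl/bdump
user_dump_dest=/u01/app/oracle/admin/orcl/udump
core_dump_dest=/u01/app/oracle/admin/orcl/cdump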





AWR Reports - Top five events
###########################
Please find documentation for a few of the top five events; I hope it will help. I will keep you updated about the other events as well, as I need more time to analyze them. Please correct me if you find any discrepancy.

Wait events are statistics that are incremented by a server process or thread to indicate that it had to wait for an event to complete before being able to continue processing.
Db File Scattered Read (Oracle metric event):
Definition: The user process is reading buffers into the SGA buffer cache and is waiting for a physical I/O call to return.
Explanation: The db file scattered read wait event indicates that a full table scan is occurring. When performing a full table scan into the buffer cache, the blocks are read into memory locations that are not physically adjacent to each other. Such reads are called scattered read calls, because the blocks are scattered throughout memory. Multiblock reads (up to DB_FILE_MULTIBLOCK_READ_COUNT blocks) due to full table scans into the buffer cache show up as waits for 'db file scattered read'.
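
To see which sessions are currently waiting on this event, and which file and block they are reading, something like the following can be used (a sketch; for this event P1 = file#, P2 = block#, P3 = number of blocks):

SQL> select sid, p1 file#, p2 block#, p3 blocks
     from v$session_wait
     where event = 'db file scattered read';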

Db File Sequential Read (Oracle event):
Definition: A db file sequential read is an event that shows a foreground process waiting on a sequential read from the database.
Explanation: This is an OS operation, most commonly a single-block read. Single-block reads are most commonly seen for index block access or table block access by rowid (e.g. to access a table block after an index entry has been found).

NOTE:
A db file scattered read is the same type of event as a db file sequential read, except that Oracle reads multiple data blocks. Multi-block reads are typically used on full table scans. The name "scattered read" may seem misleading, but it refers to the fact that multiple blocks are read into DB block buffers that are 'scattered' throughout memory.

Log File Sync:
Definition: Oracle foreground processes are waiting for a COMMIT or ROLLBACK to complete.
Explanation: The log file sync Oracle metric indicates the process is waiting for LGWR to finish flushing the log buffer to disk. This occurs when a user commits a transaction (a transaction is not considered committed until all of the redo to recover the transaction has been successfully written to disk).
Ways to tune it:
a) Tune LGWR, e.g. do not put redo logs on RAID 5.
b) Reduce the overall number of commits by batching transactions so that there are fewer distinct COMMIT operations.
c) Metalink notes
Reference Note# 34592.1
High Waits on log file sync Note# 125269.1
Tuning the Redo log Buffer Cache and Resolving Redo Latch Contention Note# 147471.1
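
The overall impact of log file sync (and of the other waits above) can be quantified from V$SYSTEM_EVENT; a minimal sketch:

SQL> select event, total_waits, time_waited, average_wait
     from v$system_event
     where event in ('log file sync', 'db file sequential read', 'db file scattered read')
     order by time_waited desc;
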
######################
SQL> show parameter pga

SQL>SELECT PGA_TARGET_FOR_ESTIMATE,PGA_TARGET_FACTOR,BYTES_PROCESSED,ESTD_EXTRA_BYTES_RW,ESTD_PGA_CACHE_HIT_PERCENTAGE,ESTD_OVERALLOC_COUNT FROM V$PGA_TARGET_ADVICE;
##########################

Resolving Your Wait Events
The following are 10 of the most common causes for wait events, along with explanations and potential solutions:

1. DB File Scattered Read
This generally indicates waits related to full table scans. As full table scans are pulled into memory, they rarely fall into contiguous buffers but instead are scattered throughout the buffer cache. A large number here indicates that your table may have missing or suppressed indexes. Although it may be more efficient in your situation to perform a full table scan than an index scan, check to ensure that full table scans are necessary when you see these waits. Try to cache small tables to avoid reading them in over and over again, since a full table scan is put at the cold end of the LRU (Least Recently Used) list.
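
For example, a small, frequently scanned lookup table can be kept at the warm end of the LRU list or placed in the KEEP pool (test_lookup is a hypothetical table name; the KEEP pool requires db_keep_cache_size to be configured):

SQL> alter table test_lookup cache;
SQL> alter table test_lookup storage (buffer_pool keep);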

2. DB File Sequential Read
This event generally indicates a single block read (an index read, for example). A large number of waits here could indicate poor joining orders of tables, or unselective indexing. It is normal for this number to be large for a high-transaction, well-tuned system, but it can indicate problems in some circumstances. You should correlate this wait statistic with other known issues within the Statspack report, such as inefficient SQL. Check to ensure that index scans are necessary, and check join orders for multiple table joins. The DB_CACHE_SIZE will also be a determining factor in how often these waits show up. Problematic hash-area joins should show up in the PGA memory, but they're also memory hogs that could cause high wait numbers for sequential reads. They can also show up as direct path read/write waits.

3. Free Buffer
This indicates your system is waiting for a buffer in memory, because none is currently available. Waits in this category may indicate that you need to increase the DB_BUFFER_CACHE, if all your SQL is tuned. Free buffer waits could also indicate that unselective SQL is causing data to flood the buffer cache with index blocks, leaving none for this particular statement that is waiting for the system to process. This normally indicates that there is a substantial amount of DML (insert/update/delete) being done and that the Database Writer (DBWR) is not writing quickly enough; the buffer cache could be full of multiple versions of the same buffer, causing great inefficiency. To address this, you may want to consider accelerating incremental checkpointing, using more DBWR processes, or increasing the number of physical disks.

4. Buffer Busy
This is a wait for a buffer that is being used in an unshareable way or is being read into the buffer cache. Buffer busy waits should not be greater than 1 percent. Check the Buffer Wait Statistics section (or V$WAITSTAT) to find out if the wait is on a segment header. If this is the case, increase the freelist groups or increase the pctused to pctfree gap. If the wait is on an undo header, you can address this by adding rollback segments; if it's on an undo block, you need to reduce the data density on the table driving this consistent read or increase the DB_CACHE_SIZE. If the wait is on a data block, you can move data to another block to avoid this hot block, increase the freelists on the table, or use Locally Managed Tablespaces (LMTs). If it's on an index block, you should rebuild the index, partition the index, or use a reverse key index. To prevent buffer busy waits related to data blocks, you can also use a smaller block size: fewer records fall within a single block in this case, so it's not as "hot." When a DML (insert/update/delete) occurs, Oracle Database writes information into the block, including all users who are "interested" in the state of the block (Interested Transaction List, ITL). To decrease waits in this area, you can increase the initrans, which will create the space in the block to allow multiple ITL slots. You can also increase the pctfree on the table where this block exists (this writes the ITL information up to the number specified by maxtrans, when there are not enough slots built with the initrans that is specified).
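
The Buffer Wait Statistics mentioned above come straight from V$WAITSTAT, which breaks buffer busy waits down by block class (data block, segment header, undo header, undo block, and so on):

SQL> select * from v$waitstat;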

5. Latch Free
Latches are low-level queuing mechanisms (they're more accurately referred to as mutual exclusion mechanisms) used to protect shared memory structures in the system global area (SGA). Latches are like locks on memory that are very quickly obtained and released. Latches are used to prevent concurrent access to a shared memory structure. If the latch is not available, a latch free miss is recorded. Most latch problems are related to the failure to use bind variables (library cache latch), redo generation issues (redo allocation latch), buffer cache contention issues (cache buffers LRU chain), and hot blocks in the buffer cache (cache buffers chain). There are also latch waits related to bugs; check MetaLink for bug reports if you suspect this is the case. When latch miss ratios are greater than 0.5 percent, you should investigate the issue.
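
The latches with the most misses and sleeps can be identified from V$LATCH; a minimal sketch:

SQL> select name, gets, misses, sleeps
     from v$latch
     where misses > 0
     order by sleeps desc;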

6. Enqueue
An enqueue is a lock that protects a shared resource. Locks protect shared resources, such as data in a record, to prevent two people from updating the same data at the same time. An enqueue includes a queuing mechanism, which is FIFO (first in, first out). Note that Oracle's latching mechanism is not FIFO. Enqueue waits usually point to the ST enqueue, the HW enqueue, the TX4 enqueue, and the TM enqueue. The ST enqueue is used for space management and allocation for dictionary-managed tablespaces. Use LMTs, or try to preallocate extents or at least make the next extent larger for problematic dictionary-managed tablespaces. HW enqueues are used with the high-water mark of a segment; manually allocating the extents can circumvent this wait. TX4s are the most common enqueue waits. TX4 enqueue waits are usually the result of one of three issues. The first issue is duplicates in a unique index; you need to commit/rollback to free the enqueue. The second is multiple updates to the same bitmap index fragment. Since a single bitmap fragment may contain multiple rowids, you need to issue a commit or rollback to free the enqueue when multiple users are trying to update the same fragment. The third and most likely issue is when multiple users are updating the same block. If there are no free ITL slots, a block-level lock could occur. You can easily avoid this scenario by increasing the initrans and/or maxtrans to allow multiple ITL slots and/or by increasing the pctfree on the table. Finally, TM enqueues occur during DML to prevent DDL to the affected object. If you have foreign keys, be sure to index them to avoid this general locking issue.

7. Log Buffer Space
This wait occurs because you are writing the log buffer faster than LGWR can write it to the redo logs, or because log switches are too slow. To address this problem, increase the size of the log files, or increase the size of the log buffer, or get faster disks to write to. You might even consider using solid-state disks, for their high speed.

8. Log File Switch
All commit requests are waiting for "log file switch (archiving needed)" or "log file switch (checkpoint incomplete)." Ensure that the archive disk is not full or slow. DBWR may be too slow because of I/O. You may need to add more or larger redo logs, and you may potentially need to add database writers if the DBWR is the problem.

9. Log File Sync
When a user commits or rolls back data, the LGWR flushes the session's redo from the log buffer to the redo logs. The log file sync process must wait for this to successfully complete. To reduce wait events here, try to commit more records (try to commit a batch of 50 instead of one at a time, for example). Put redo logs on a faster disk, or alternate redo logs on different physical disks, to reduce the archiving effect on LGWR. Don't use RAID 5, since it is very slow for applications that write a lot; potentially consider using file system direct I/O or raw devices, which are very fast at writing information.

10. Idle Event.
There are several idle wait events listed after the output; you can ignore them. Idle events are generally listed at the bottom of each section and include such things as SQL*Net message to/from client and other background-related timings. Idle events are listed in the stats$idle_event table.

Index unusable: Skip_unusable_indexes oracle10g/oracle11g

http://www.orafaq.com/node/2384

www.dbarajabaskar.blogspot.com
Why do indexes go to UNUSABLE status? When do indexes go into the UNUSABLE state?
1. Any maintenance activity on a partitioned table (ALTER command):
    1. Truncate the table partition
    2. Drop the table partition
    3. Split a partition/sub-partition
2. Any maintenance activity on a table (table reorganization):
    1. Move the table
    2. Online redefinition (more details: http://dbarajabaskar.blogspot.com/2010/04/online-redefintion-in-oracle-table.html)
3. Oracle Import - while importing data with skip_unusable_indexes=y. To improve import performance, we can mark the indexes unusable manually and rebuild them after the import.
4. SQL*Loader - using sqlldr to load data with a direct path load.
5. A user can manually mark indexes unusable for a performance improvement.
How does skip_unusable_indexes work?
In the Oracle 9i release we did not have the skip_unusable_indexes parameter, and I faced several issues because of this.
Sometimes index status went to UNUSABLE for the reasons above, and batch cycles failed due to the unusable indexes. The application team notified the DBA group, the DBA team rebuilt the indexes and asked the application team to rerun the batch cycle.
Oracle 10g / Oracle 11g provide the skip_unusable_indexes=TRUE (default) feature.
If any index in the database is in UNUSABLE status, the database skips the unusable index and still supports DML operations.
But an unusable index does affect database performance: a query that runs with an index scan takes less time than the full table scan it falls back to.
I have tested the scenario below on version 11.1.0.7.
SQL> create user raja identified by raja;
User created.
SQL> grant connect,resource,dba to raja;
Grant succeeded.
Default parameter value is TRUE.
SQL> show parameter skip_unusable_indexes
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------
skip_unusable_indexes boolean TRUE
SQL> create table raja.test as select * from scott.emp;
Table created.
SQL> create index raja.test_idx on raja.test(empno);
Index created.
SQL> select owner,index_name,status from dba_indexes where owner='RAJA';
OWNER INDEX_NAME STATUS
------------------------------ ------------------------------ --------
RAJA TEST_IDX VALID
SQL> select ename,job from raja.test where empno=7844;
ENAME JOB
---------- ---------
TURNER SALESMAN
Gather the statistics
SQL> exec DBMS_STATS.GATHER_TABLE_STATS (ownname => 'RAJA' , tabname => 'TEST',cascade => true, estimate_percent => 100,method_opt=>'for all indexed columns size 1', granularity => 'ALL', degree => 4);
PL/SQL procedure successfully completed.
SQL> explain plan for select ename,job from raja.test where empno=7844;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------
Plan hash value: 1389866015
-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 17 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TEST | 1 | 17 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | TEST_IDX | 1 | | 1 (0)| 00:00:01 |
-----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("EMPNO"=7844)
14 rows selected.
The query uses an INDEX RANGE SCAN and the cost is 2.
Now move the table (into the same tablespace).
SQL> alter table raja.test move;
Table altered.
Now the index status is UNUSABLE.
SQL> select owner,index_name,status from dba_indexes where owner='RAJA';
OWNER INDEX_NAME STATUS
------------------------------ ------------------------------ --------
RAJA TEST_IDX UNUSABLE
While the index is in UNUSABLE status, we are still able to do SELECT/DML operations, because skip_unusable_indexes=true.
SQL> select ename,job from raja.test where empno=7844;
ENAME JOB
---------- ---------
TURNER SALESMAN
After the index became unusable, the explain plan changed: the query now uses a full table scan and the cost is 3. Unusable indexes impact query performance.
SQL> explain plan for select ename,job from raja.test where empno=7844;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------
Plan hash value: 1357081020
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 17 | 3 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| TEST | 1 | 17 | 3 (0)| 00:00:01 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("EMPNO"=7844)
13 rows selected.
Now we changed the parameter to skip_unusable_indexes=FALSE and tested: SELECT/DML operations failed.
SQL> alter system set skip_unusable_indexes=false;
System altered.
SQL> show parameter skip_
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
skip_unusable_indexes boolean FALSE
SQL> select ename,job from raja.test where empno=7844;
select ename,job from raja.test where empno=7844
*
ERROR at line 1:
ORA-01502: index 'RAJA.TEST_IDX' or partition of such index is in unusable state
SQL> insert into raja.test ( select * from raja.test);
insert into raja.test ( select * from raja.test)
*
ERROR at line 1:
ORA-01502: index 'RAJA.TEST_IDX' or partition of such index is in unusable state
SQL> delete from raja.test where empno=7844;
delete from raja.test where empno=7844
*
ERROR at line 1:
ORA-01502: index 'RAJA.TEST_IDX' or partition of such index is in unusable state
Now we changed the parameter back to skip_unusable_indexes=TRUE and tested: SELECT/DML operations ran fine.
SQL> alter system set skip_unusable_indexes=true;
System altered.
SQL> select ename,job from raja.test where empno=7844;
ENAME JOB
---------- ---------
TURNER SALESMAN
SQL> insert into raja.test (select * from raja.test);
14 rows created.
SQL> commit;
Commit complete.
SQL> delete from raja.test where empno=7844;
2 rows deleted.
SQL> commit;
Commit complete.
*************************************************
Now we changed the parameter to skip_unusable_indexes=FALSE.
SQL> alter system set skip_unusable_indexes=false;
System altered.
SQL> select owner,index_name,status from dba_indexes where owner='RAJA';
OWNER INDEX_NAME STATUS
------------------------------ ------------------------------ --------
RAJA TEST_IDX UNUSABLE
Rebuild the index
SQL> alter index raja.test_idx rebuild online;
Index altered.
SQL> select owner,index_name,status from dba_indexes where owner='RAJA';
OWNER INDEX_NAME STATUS
------------------------------ ------------------------------ --------
RAJA TEST_IDX VALID
SQL> insert into raja.test ( select * from raja.test);
SQL> commit;
Commit complete.
SQL> delete from raja.test where empno=7844;
2 rows deleted.
SQL> commit;
Commit complete.
*************************************************

Note:
If an index is used to enforce a UNIQUE constraint on a table, then allowing insert and update operations on the table might violate the constraint. Therefore, this setting does not disable error reporting for unusable indexes that are unique.
We can only SELECT from the table while the index status is UNUSABLE (this applies to unique indexes only).
Scenario:
We created the table with a unique key. The current parameter value is skip_unusable_indexes=true.
SQL> drop table raja.test;
Table dropped.
SQL> create table raja.test as select * from scott.emp;
Table created.
SQL> alter table raja.test add constraint empno_pk unique (empno);
Table altered.
SQL> select owner,index_name,status from dba_indexes where owner='RAJA';
OWNER INDEX_NAME STATUS
------------------------------ ------------------------------ --------
RAJA EMPNO_PK VALID
SQL> alter table raja.test move;
Table altered.
SQL> select owner, index_name, status from dba_indexes where owner='RAJA';
OWNER INDEX_NAME STATUS
------------------------------ ------------------------------ --------
RAJA EMPNO_PK UNUSABLE
SQL> select ename,job from raja.test where empno=7844;
ENAME JOB
---------- ---------
TURNER SALESMAN
SQL> delete from raja.test where empno=7844;
delete from raja.test where empno=7844
*
ERROR at line 1:
ORA-01502: index 'RAJA.EMPNO_PK' or partition of such index is in unusable state
SQL> insert into raja.test ( select * from raja.test);
insert into raja.test ( select * from raja.test)
*
ERROR at line 1:
ORA-01502: index 'RAJA.EMPNO_PK' or partition of such index is in unusable state
SQL> alter index raja.empno_pk rebuild;
Index altered.
SQL> select owner,index_name,status from dba_indexes where owner='RAJA';
OWNER INDEX_NAME STATUS
------------------------------ ------------------------------ --------
RAJA EMPNO_PK VALID
SQL> select ename,job from raja.test where empno=7844;
ENAME JOB
---------- ---------
TURNER SALESMAN
SQL> delete from raja.test where empno=7844;
1 row deleted.
SQL> commit;
Commit complete.
SQL> insert into raja.test ( select * from raja.test);
insert into raja.test ( select * from raja.test)
*
ERROR at line 1:
ORA-00001: unique constraint (RAJA.EMPNO_PK) violated
In which situation does a user manually mark an index unusable?
Scenario:
Sometimes a user loads bulk data into a table. To improve performance, they mark the indexes unusable first.
SQL> drop table raja.test;
Table dropped.
SQL> show parameter skip
NAME TYPE VALUE
------------------------------------ ----------- ---------------
skip_unusable_indexes boolean TRUE
SQL> create table raja.test as select * from dba_objects;
Table created.
SQL> create index raja.test_idx on raja.test(object_id);
Index created.
SQL> select owner, segment_name,bytes from dba_segments where owner='RAJA';
OWNER SEGMENT_NAME BYTES
---------- -------------------- ----------
RAJA TEST_IDX 2097152
RAJA TEST 9437184
Truncate the table
SQL> truncate table raja.test;
Table truncated.
SQL> show parameter skip
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------
skip_unusable_indexes boolean TRUE
SQL> alter system set skip_unusable_indexes=false;
System altered.
Mark the index unusable
SQL> alter index raja.test_idx unusable;
Index altered.
SQL> select owner, segment_name,bytes from dba_segments where owner='RAJA';
OWNER SEGMENT_NAME BYTES
---------- -------------------- ----------
RAJA TEST_IDX 65536
RAJA TEST 65536
SQL> select owner, index_name,status from dba_indexes where owner='RAJA';
OWNER INDEX_NAME STATUS
---------- ------------------------------ --------
RAJA TEST_IDX UNUSABLE
SQL> show parameter skip
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------
skip_unusable_indexes boolean TRUE
Load the data after the index is marked unusable
SQL> insert into raja.test (select * from dba_objects);
69326 rows created.
SQL> commit;
Commit complete.
SQL> select owner, segment_name,bytes from dba_segments where owner='RAJA';
OWNER SEGMENT_NAME BYTES
---------- -------------------- ----------
RAJA TEST_IDX 65536  <-- the index segment has not grown
RAJA TEST 9437184
SQL> select owner,index_name,status from dba_indexes where owner='RAJA';
OWNER INDEX_NAME STATUS
---------- ------------------------------ --------
RAJA TEST_IDX UNUSABLE
SQL> alter index raja.test_idx rebuild;
Index altered.
SQL> select owner,index_name,status from dba_indexes where owner='RAJA';
OWNER INDEX_NAME STATUS
---------- ------------------------------ --------
RAJA TEST_IDX VALID
SQL> select owner, segment_name,bytes from dba_segments where owner='RAJA';
OWNER SEGMENT_NAME BYTES
---------- -------------------- ----------
RAJA TEST 9437184
RAJA TEST_IDX 2097152  <-- after the rebuild the index size has grown
We may not know when an index status goes to UNUSABLE, and it affects database performance. In Oracle 10g/11g, I suggest setting up a script that monitors for unusable indexes.
