Sunday 12 July 2015

Why do we need an odd number of voting disks?


The reason for the odd number of voting disks is to solve the "split-brain" problem. Split-brain occurs when the cluster interconnect goes down and each node can no longer obtain a heartbeat from the other nodes. At that point, each node in the cluster thinks it is the only one alive and that it should become the "master node". There can be only one master node; we cannot allow each half of the cluster to believe it is the brains of the operation.
So how do we resolve this? 
We set up a race. Each candidate node wants to be the master, so we put it to a vote: whichever contestant gets the most votes wins. Node 1 contacts the first voting disk and says "I am here first, so I am the master!". Simultaneously, node 2 contacts the second voting disk and says the same thing. They can't both be right. If there were an even number of voting disks, each candidate node could end up with exactly half the total votes and we would have a tie, which must be resolved. Now the real race begins: the two nodes race to the third and final voting disk. The node that gets there first wins the election 2 votes to 1 and becomes the master node in the cluster.

Saturday 4 July 2015

Turning Flashback ON is possible while the database is open (starting from 11gR2)

Starting from 11gR2, Oracle allows us to enable Flashback even while the database is open.
This was not possible in earlier versions: up to 11gR1, Flashback could be enabled only when the database was in MOUNT mode.


The scenario below is simulated with detailed steps:

11gR2 :

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Mon Jul 18 04:56:02 2011

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select flashback_on, log_mode, open_mode from v$database;

FLASHBACK_ON       LOG_MODE     OPEN_MODE
------------------ ------------ --------------------
NO                 ARCHIVELOG   READ WRITE

SQL> alter database flashback on;

Database altered.

SQL> select flashback_on, log_mode, open_mode from v$database;

FLASHBACK_ON       LOG_MODE     OPEN_MODE
------------------ ------------ --------------------
YES                ARCHIVELOG   READ WRITE

SQL> alter database flashback off;

Database altered.

SQL> select flashback_on, log_mode, open_mode from v$database;

FLASHBACK_ON       LOG_MODE     OPEN_MODE
------------------ ------------ --------------------
NO                 ARCHIVELOG   READ WRITE


Friday 19 June 2015

Determining Which Columns to Index

We recommend that you place indexes on primary key, unique key, and foreign key columns. This is a good starting point. Furthermore, analyze slowly performing SQL statements to see which columns are used; this will provide additional candidate columns for indexing. These indexing recommendations lay the foundation for maximizing SQL query performance.

Now we turn our attention to figuring out which columns should be indexed.
For most applications you create indexes in the following situations:
• Define a primary key constraint for each table: this results in an index automatically being created on the columns specified in the primary key.
• Create unique key constraints on columns that are required to be unique and are different from the primary key columns: each unique key constraint results in an index automatically being created on the columns specified in the constraint.
• Manually create indexes on foreign key columns: this is done for better performance and to avoid certain locking issues.
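As a minimal sketch of the last point (table and column names are illustrative; ADDR_FK1 matches the index name used later in this post), a foreign key column might be indexed like this:

```sql
-- Hypothetical child table with a foreign key back to CUST
create table address (
  address_id number primary key,
  cust_id    number references cust(cust_id),
  street     varchar2(50)
);

-- Index the foreign key column to speed up joins and to avoid
-- child-table locking when the parent key is updated or deleted
create index addr_fk1 on address(cust_id);
```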

Other Suitable Columns
When choosing an index, keep in mind this basic rule: the indexing strategy should be based on the
columns you use when querying the table. You can create more than one index on a table and have an
index contain multiple columns. You will make better decisions if you first consider what types of
queries you execute on a table. If you’ve identified a poorly performing SQL query, also consider creating
indexes for the following columns:
• Create indexes on columns used often as predicates in the WHERE clause; when
multiple columns from a table are used in the WHERE clause, consider using a
concatenated (multi-column) index.
• Create a covering index on columns used in the SELECT clause.
• Consider creating indexes on columns used in the ORDER BY, GROUP BY, UNION, or
DISTINCT clauses.
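For example (column names are illustrative), a concatenated index matching a common WHERE clause, plus a covering index that also includes the selected column:

```sql
-- Concatenated (multi-column) index for queries filtering on both columns:
--   select ... from cust where last_name = :1 and first_name = :2
create index cust_idx1 on cust(last_name, first_name);

-- "Covering" index: including the selected column lets the query be
-- answered from the index alone, without visiting the table:
--   select cust_id from cust where last_name = :1
create index cust_idx2 on cust(last_name, cust_id);
```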


-> Create primary key constraints for all tables. This will automatically create a B-tree index (if
the columns in the primary key aren’t already
indexed).
-> Create unique key constraints where appropriate. This will automatically create a B-tree index (if
the columns in the unique key aren’t already
indexed).
-> Create indexes on foreign-key columns. Foreign-key columns are usually included in the
WHERE clause when joining tables and thus
improve performance of SQL SELECT statements.
-> Creating a B-tree index on foreign key columns also reduces locking issues when updating and
inserting into child tables.

B-tree indexes:
B-tree indexes are suitable for most applications where you have high-cardinality column values.

Bitmap indexes:
Consider using bitmap indexes in data warehouse environments. These indexes are ideal for low-cardinality columns whose values aren't updated often.
Bitmap indexes work well on foreign key columns of star schema fact tables, where you often run queries that use AND and OR conditions.
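A minimal sketch of bitmap indexes on low-cardinality fact table columns (table and column names are illustrative):

```sql
-- GENDER has very few distinct values, so its bitmap index is compact
create bitmap index f_sales_bm1 on f_sales(gender);

-- Bitmap indexes on fact-table foreign keys combine efficiently for
-- queries such as:
--   select ... from f_sales where gender = 'M' or region_id = 10
create bitmap index f_sales_bm2 on f_sales(region_id);
```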



The B-tree index is the default index type in Oracle. This index type is known as B-tree because the table row identifier (ROWID) and associated column values are stored within index blocks in a balanced, tree-like structure. Oracle B-tree indexes are used for the following reasons:

  •  Improving SQL statement performance.
  •  Enforcing uniqueness of primary key and unique key constraints.
  •  Reducing potential locking issues between parent and child tables associated via primary and foreign key constraints

Query to find out the indexes on tables:


select index_name, index_type, table_name, tablespace_name, status
from user_indexes
where table_name in ('CUST','ADDRESS');
Here is some sample output:
INDEX_NAME           INDEX_TYPE TABLE_NAME TABLESPACE_NAME STATUS
-------------------- ---------- ---------- --------------- ----------
CUST_PK              NORMAL     CUST       REPORTING_INDEX VALID
CUST_UK1             NORMAL     CUST       REPORTING_INDEX VALID
ADDR_FK1             NORMAL     ADDRESS    REPORTING_INDEX VALID

Run the following query to verify the columns on which the indexes are created:

select index_name, column_name, column_position
from user_ind_columns
where table_name in ('CUST','ADDRESS')
order by index_name, column_position;
Here is some sample output:
INDEX_NAME           COLUMN_NAME          COLUMN_POSITION
-------------------- -------------------- ---------------
ADDR_FK1             CUST_ID                            1
CUST_PK              CUST_ID                            1
CUST_UK1             LAST_NAME                          1
CUST_UK1             FIRST_NAME                         2

To display the number of extents and space used, first insert some sample data:

insert into cust values(1,'STARK','JIM');
insert into address values(100,1,'Vacuum Ave','Portland','OR');

Then run a query against USER_SEGMENTS that reports on segment usage, for example:

select segment_name, segment_type, extents, bytes
from user_segments
where segment_name in ('CUST_PK','CUST_UK1','ADDR_FK1');

This yields the following output:
SEGMENT_NAME         SEGMENT_TYPE            EXTENTS      BYTES
-------------------- -------------------- ---------- ----------
CUST_PK              INDEX                         1    1048576
CUST_UK1             INDEX                         1    1048576
ADDR_FK1             INDEX                         1    1048576


Use the DBMS_METADATA.GET_DDL function to display an object’s DDL. Make sure you set the LONG variable to an appropriate value so that the returned CLOB value is displayed in its entirety.
 For example,

SQL> set long 1000000
SQL> select dbms_metadata.get_ddl('INDEX','ADDR_FK1') from dual;

Here is the output:

DBMS_METADATA.GET_DDL('INDEX','ADDR_FK1')
--------------------------------------------------------------------------------
CREATE INDEX "MV_MAINT"."ADDR_FK1" ON
"MV_MAINT"."ADDRESS" ("CUST_ID") PCTFREE 10 INITRANS 2 MAXTRANS 255
COMPUTE STATISTICS STORAGE(INITIAL 1048576
NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
FREELISTS 1 FREELIST GROUPS 1

BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "REPORTING_INDEX"


  • Here’s an example of making an index invisible:

SQL> alter index addr_fk1 invisible;
This code makes the index invisible to the query optimizer so that it can’t be used to retrieve rows for a query. However, the index structure is still maintained by Oracle as records are modified in the table.  If you determine that the index was critical for performance, you can easily make it visible to the optimizer again via

SQL> alter index addr_fk1 visible;


  • Your other option before dropping an index is to make it unusable.


SQL> alter index addr_fk1 unusable;

This code renders the index unusable, but doesn’t drop it. Unusable means that the optimizer won’t use
the index and Oracle won’t maintain the index as DML statements operate on its table. Furthermore, an
unusable index can’t be used internally to enforce constraints or prevent locking issues.

If you need to re-enable an unusable index, then you’ll have to rebuild it. Be aware that rebuilding a
large index can take a considerable amount of time and resources.

SQL> alter index addr_fk1 rebuild;

After you’re sure an index isn’t required, use the DROP INDEX statement to remove it.
This permanently drops the index. The only way to get the index back is to re-create it.

SQL> drop index addr_fk1;


Viewing Primary Key Constraint and Index Details
You can confirm the details of the index as follows:
select index_name, index_type, uniqueness
from user_indexes
where table_name = 'CUST';

Here is the output for this example:
INDEX_NAME           INDEX_TYPE      UNIQUENES
-------------------- --------------- ---------
CUST_PK              NORMAL          UNIQUE
Here is a query to verify the constraint information:
select constraint_name, constraint_type
from user_constraints
where table_name = 'CUST';
Here is the corresponding output:
CONSTRAINT_NAME      CONSTRAINT_TYPE
-------------------- ---------------
CUST_PK              P

Dropping Primary Key Constraint and Index
An index that was automatically created (when the primary key constraint was created) can’t be directly
dropped. In this scenario, if you attempt to drop the index
SQL> drop index cust_pk;
you’ll receive this error
ORA-02429: cannot drop index used for enforcement of unique/primary key
To remove the index, you must do so by dropping or disabling the primary key constraint. For
example, any of the following statements will drop an index that was automatically created when the
constraint was created:
SQL> alter table cust disable constraint cust_pk;
SQL> alter table cust drop constraint cust_pk;
SQL> alter table cust drop primary key;
When you drop or disable a primary key constraint, you have the option of not dropping the
associated index. Use the KEEP INDEX clause of the DROP/DISABLE CONSTRAINT clause to retain the index.
For example,
SQL> alter table cust drop constraint cust_pk keep index;
This code instructs Oracle to drop the constraint but keep the index. If you’re working with large
tables, you may want to disable or drop the constraint for performance reasons while loading or
manipulating the data. Dropping the index associated with a large table may take considerable time and
resources to re-create.
One other aspect to be aware of is that if a primary key or unique key is referenced by an enabled
foreign key, and you attempt to drop the constraint on the parent table

SQL> alter table cust drop primary key;
you’ll receive this error
ORA-02273: this unique/primary key is referenced by some foreign keys
In this situation you’ll need to first drop or disable the referenced foreign key, or use the CASCADE
clause to automatically drop the foreign key constraint when the primary key constraint is dropped or
disabled. For example,
SQL> alter table cust drop constraint cust_pk cascade;
SQL> alter table cust disable constraint cust_pk cascade;
SQL> alter table cust drop primary key cascade;


Saturday 23 May 2015

Oracle dba interview questions and answers

What is main purpose of CHECKPOINT?
Answer:
A Checkpoint is a database event, which synchronizes the data blocks in memory with the datafiles on disk.
A checkpoint has two purposes:
    1. To establish data consistency
    2. To enable faster database recovery

The following parameters can be used by the DBA to adjust how frequently checkpoints occur in the database.
LOG_CHECKPOINT_TIMEOUT = 3600; # Every one hour
LOG_CHECKPOINT_INTERVAL = 1000; # number of OS blocks.


How do you backup and restore using Transportable Tablespaces
Answer:
   
1. Run DBMS_TTS.TRANSPORT_SET_CHECK against the tablespace to verify it is self-contained.
2. Make the tablespace read only.
3. Export the metadata (EXP TRANSPORT_TABLESPACE=Y TABLESPACES=ts1)
4. Copy the datafiles to the target host.
5. Copy the export dump to the target.
6. Import the metadata (IMP TRANSPORT_TABLESPACE=Y DATAFILES=(file1,file2))
7. Bring the tablespace online and set the source tablespace back to READ WRITE.
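Step 1 and 2 can be sketched as follows (the tablespace name TS1 is illustrative); any containment violations are reported in the TRANSPORT_SET_VIOLATIONS view:

```sql
-- Check whether tablespace TS1 is self-contained
exec dbms_tts.transport_set_check('TS1', TRUE);

-- Any violations (e.g., objects spanning tablespaces) show up here;
-- no rows means the tablespace can be transported
select * from transport_set_violations;

-- Make the tablespace read only before copying its datafiles
alter tablespace ts1 read only;
```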

What does RESETLOGS option do?

1. Creates a new incarnation of the database, putting a new SCN in all datafile headers.
2. Resets the log sequence number to 1.
3. Reformats the online redo log files if they exist.

What are the options available to refresh snapshots?
Answer:
COMPLETE - The snapshot table is completely regenerated using the snapshot's query and the master tables every time the snapshot is refreshed.
FAST - If a simple snapshot is used, a snapshot log can be used to send only the changes to the snapshot table.
FORCE - Default value. If possible it performs a FAST refresh; otherwise it performs a COMPLETE refresh.

What is snapshot log?
Answer:
It is a table that maintains a record of modifications to the master table of a snapshot. It is stored in the same database as the master table and is only available for simple snapshots. It should be created before creating the snapshot.

Explain the difference between $ORACLE_HOME and $ORACLE_BASE.
Answer:
ORACLE_BASE is the root directory for Oracle. ORACLE_HOME, located beneath ORACLE_BASE, is where the Oracle products reside.

Explain the different values supported by the CURSOR_SHARING parameter
Answer:
Below values are supported by CURSOR_SHARING init.ora parameter:
FORCE - Literals will be replaced by system generated bind variables where possible
SIMILAR - Oracle determines which literals are "safe" for substitution with bind variables. This will result
in some SQL not being shared in an attempt to provide a more efficient execution plan.
EXACT - Only allows statements with identical text to share the same cursor
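The parameter can be changed dynamically; a minimal sketch:

```sql
-- Show the current setting
show parameter cursor_sharing

-- Replace literals with system-generated bind variables where possible
alter system set cursor_sharing = FORCE;

-- Revert to the default behavior
alter system set cursor_sharing = EXACT;
```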


Background processes:
Background processes are started when an Oracle instance is started.
They maintain and enforce relationships between physical and memory structures.

There are two types of database processes:
1. Mandatory background processes
2. Optional background processes
Mandatory background processes:
– DBWn, PMON, CKPT, LGWR, SMON
Optional background processes:
– ARCn, LMDn, RECO, CJQ0, LMON, Snnn, Dnnn, Pnnn, LCKn, QMNn

DBWn writes when:
• Checkpoint occurs
• Dirty buffers reach threshold
• There are no free buffers
• Timeout occurs
• RAC ping request is made
• Tablespace OFFLINE
• Tablespace READ ONLY
• Table DROP or TRUNCATE
• Tablespace BEGIN BACKUP

Log Writer (LGWR) writes:
• At commit
• When the redo log buffer is one-third full
• When there is 1 MB of redo
• Every 3 seconds
• Before DBWn writes

System Monitor (SMON) Responsibilities:
• Instance recovery
– Rolls forward changes in redo logs
– Opens database for user access
– Rolls back uncommitted transactions
• Coalesces free space
• Deallocates temporary segments.
Process Monitor (PMON) Cleans up after failed processes by:
• Rolling back the transaction
• Releasing locks
• Releasing other resources
• Restarting dead dispatchers

Checkpoint (CKPT) Responsible for:
• Signaling DBWn at checkpoints
• Updating datafile headers with checkpoint information
• Updating control files with checkpoint information
Archiver (ARCn)
• Optional background process
• Automatically archives online redo logs when ARCHIVELOG mode is set
• Preserves the record of all changes made to the database


What is Global Inventory ?

The global inventory holds information about the Oracle products on a machine. These products can be various Oracle components such as the database, Oracle Application Server, Collaboration Suite, SOA Suite, Forms & Reports, or Discoverer Server. The global inventory location is determined by the file oraInst.loc in /etc (on Linux) or /var/opt/oracle (on Solaris). If you want to see the list of Oracle products on a machine, check the file inventory.xml under the ContentsXML directory in oraInventory. (Please note: if you have multiple global inventories on the machine, check all oraInventory directories.)
You will see entry like
<HOME NAME="ORA10g_HOME" LOC="/u01/oracle/10.2.0/db" TYPE="O" IDX="1"/>

What is Local Inventory ?

The inventory inside each Oracle home is called the local inventory or Oracle home inventory. This inventory holds information relevant to that Oracle home only.

What is Oracle Home Inventory?

Oracle home inventory or local inventory is present inside each Oracle home. It only contains information relevant to a particular Oracle home. This file is located in the following location:
$ORACLE_HOME/inventory
It contains the following files and folders:
· Components File
· Home Properties File
· Other Folders


What to do if my Global Inventory is corrupted?
No need to worry if your global inventory is corrupted; you can recreate the global inventory on the machine using the Universal Installer and attach the already installed Oracle homes with the -attachHome option:
./runInstaller -silent -attachHome -invPtrLoc $location_to_oraInst.loc
ORACLE_HOME="Oracle_Home_Location" ORACLE_HOME_NAME="Oracle_Home_Name"
CLUSTER_NODES="{}"


If any one of the 6 mandatory background processes is killed/not running, will the instance be aborted?

Background processes are started automatically when the instance is started.
Mandatory background processes are DBWn, LGWR, CKPT, SMON, PMON, and RECO. All other processes are optional and will be invoked only if the corresponding feature is activated.
If any one of these 6 mandatory background processes is killed/not running, the instance will be aborted.
Any issues related to background processes should be monitored and analyzed from the generated trace files and the alert log.


In what scenarios open resetlogs required ?

An ALTER DATABASE OPEN RESETLOGS statement is required:
1. after incomplete recovery (point-in-time recovery), or
2. after recovery with a backup control file, or
3. after recovery with a control file recreated with the RESETLOGS option.

If both CPU and PSU are available for given version which one, you will prefer to apply?

Once you apply a PSU, the recommended way is to apply only subsequent PSUs. In fact, there is no need to apply a CPU on top of a PSU, as the PSU contains the CPU (applying a CPU over a PSU is effectively treated as an attempt to roll back the PSU and will require more effort). So if you have not yet decided on or applied either patch, the suggestion is to use PSU patches. For more details refer to Oracle Products [ID 1430923.1] and ID 1446582.1.

A PSU is a superset of a CPU: the CPU mostly deals with security issues, whereas the PSU fixes security issues as well as functionality issues.
If you are only concerned about security issues, then the CPU may be the better approach.
Oracle releases both patch sets every quarter.




  •  Starting with the 11.2.0.2 patch set, Oracle Database patch sets are full installations of the Oracle Database software. This means that you do not need to install Oracle Database 11g Release 2 (11.2.0.1) before installing Oracle Database 11g Release 2 (11.2.0.2).



  •  Direct upgrade to Oracle 10g is only supported if your database is running one of the following releases: 8.0.6, 8.1.7, 9.0.1, or 9.2.0. If not, you will have to upgrade the database to one of these releases first or use a different upgrade option (such as export/import).



  • Direct upgrades to 11g are possible from existing databases with versions 9.2.0.4+, 10.1.0.2+ or 10.2.0.1+. Upgrades from other versions are supported only via intermediate upgrades to a supported upgrade version.



Oracle version 11.2.0.4.0 - what does each number refer to?
The Oracle version numbers refer to:
11 - Major database release number
2 - Database maintenance release number
0 - Application server release number
4 - Component-specific release number
0 - Platform-specific release number



How to get the latest version of OPatch?

You can download the latest version of OPatch from My Oracle Support (formerly MetaLink) using the following URL:

http://updates.oracle.com/download/6880880.html



Explain the difference between a FUNCTION, PROCEDURE and PACKAGE ?

A function and a procedure are the same in that they are both collections of PL/SQL code that carry out a single task. While a procedure does not have to return any value to the calling application, a function returns a single value. A package, on the other hand, is a collection of functions and procedures grouped together based on their commonality to a business function or application.
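A minimal PL/SQL sketch of the difference (all names, including the SALES_LOG table, are illustrative):

```sql
-- A function returns a single value
create or replace function get_tax(p_amount number) return number is
begin
  return p_amount * 0.08;
end;
/

-- A procedure performs a task without returning a value
create or replace procedure log_sale(p_amount number) is
begin
  insert into sales_log(amount, tax) values (p_amount, get_tax(p_amount));
end;
/

-- A package groups related functions and procedures together
create or replace package sales_pkg is
  function get_tax(p_amount number) return number;
  procedure log_sale(p_amount number);
end sales_pkg;
/
```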

What is different between TRUNCATE and DELETE?

The DELETE command logs the individual row changes (generating undo), whereas TRUNCATE simply removes the data without doing so. Hence data removed by DELETE can be rolled back, but data removed by TRUNCATE cannot. TRUNCATE is a DDL statement, whereas DELETE is a DML statement.
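A quick sketch of the difference (the EMP table is illustrative):

```sql
delete from emp;      -- generates undo; can be rolled back
rollback;             -- the rows are back

truncate table emp;   -- DDL: implicit commit, no rollback possible,
                      -- and the table's high-water mark is reset
```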


Thursday 21 May 2015

What is Row chaining and how can you find it? and how you will resolve the issue ?

When you are inserting a record, if the row does not fit into a single block, it is chained across multiple blocks, with a pointer between those blocks.

analyze table owner.tablename compute statistics;

select table_name, chain_cnt from dba_tables where table_name='TABLENAME';

Solution:
Move the table to a tablespace with a bigger block size:
1) create tablespace tbs1 datafile '/u01/oradata/test/data01.dbf' size 100m blocksize 16k;
2) alter table tablename move tablespace tbs1;
Here tbs1 is the tablespace with the larger block size; before creating it, it is assumed that you have configured a db buffer cache for that block size (e.g., DB_16K_CACHE_SIZE).
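The chained rows can also be listed individually with ANALYZE ... LIST CHAINED ROWS; a sketch (the CHAINED_ROWS table is created by the utlchain.sql script shipped with Oracle):

```sql
-- Create the CHAINED_ROWS table
@?/rdbms/admin/utlchain.sql

-- Record the ROWIDs of chained/migrated rows
analyze table owner.tablename list chained rows into chained_rows;

-- Inspect which rows are affected
select owner_name, table_name, head_rowid from chained_rows;
```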

What is row migration? When does it occur? Where can you find this information?
Row migration happens when an update makes a row too large to fit in its current block; the entire row is then moved to a new block, leaving a pointer behind in the original block.

select table_name, chain_cnt from dba_tables where table_name='TABLENAME';
Solution:
Set the PCTFREE storage parameter for the table to an adequate value.
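A sketch of the fix (the table and index names are illustrative): raise PCTFREE so future updates find room in each block, then rebuild the table so existing migrated rows are rewritten:

```sql
-- Leave 30% of each block free for row growth (value is illustrative)
alter table tablename pctfree 30;

-- Rebuild the table so already-migrated rows are relocated
alter table tablename move;

-- Moving the table invalidates its indexes; rebuild them
alter index tablename_pk rebuild;
```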

Do you know about statistics? What is the use of it? What kinds of statistics exist in the database?
Statistics are a collection of information about the data in the database. Oracle maintains different types of statistics:
1) System statistics: statistics about the hardware, such as CPU speed and I/O read/write times: select * from aux_stats$;
2) Object statistics: for a table, Oracle collects the number of rows, number of blocks, average row length, etc. We can view them with:
SQL> select table_name, num_rows, blocks, avg_row_len from dba_tables;
For an index, Oracle collects statistics on the index columns: number of rows, number of root/branch/leaf blocks, number of distinct values, etc.
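Statistics are normally gathered with the DBMS_STATS package; a sketch (schema and table names are illustrative):

```sql
-- Gather object statistics for one table, including its indexes
exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP', cascade => TRUE);

-- Gather system (hardware) statistics over a workload window of 60 minutes
exec dbms_stats.gather_system_stats('INTERVAL', interval => 60);
```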

Monday 18 May 2015

Bitmap indexes
Bitmap indexes are not recommended for OLTP environments; there are concurrency issues when bitmap indexes are built on tables incurring DML. Bitmap indexes are recommended for read-only and read-mostly applications, such as OLAP.


Use indexes to improve SQL performance.
• Use B*Tree indexes:
– For join columns
– For columns frequently appearing in WHERE clauses
• Use composite indexes:
– For columns frequently used together in WHERE clauses
– Consider using compressed key indexes for large indexes
• Use bitmap indexes:
– For columns with few distinct values and a large number of rows
– For read-only or read-mostly tables
– For WHERE clause with multiple predicates

Create B*Tree indexes on the columns most used in the WHERE clause. Indexing a column frequently used as a join column can improve join performance. Do not use B*Tree indexes on columns with only a few distinct values; bitmap indexes are a better choice for these.
Composite B*Tree indexes: Consider composite indexes over columns that frequently appear together in the same WHERE clause. A composite index of several low-selectivity columns may have a higher combined selectivity. Place the most frequently used column first in the index order. For large composite indexes, consider using a compressed index. Index key compression works well on indexes whose leading columns have few distinct values: the leading, repeating values are eliminated, which reduces the size of the index and the number of I/Os.
Bitmap indexes can help performance where the tables have a large number of rows, the columns have few distinct values, and the WHERE clause contains multiple predicates using these columns.
Do not use bitmap indexes on tables or columns that are frequently updated.
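A sketch of a compressed composite index (table and column names are illustrative):

```sql
-- Most frequently used column first; COMPRESS 1 stores each distinct
-- value of the leading column only once per leaf block
create index emp_idx1 on emp(deptno, hire_date) compress 1;
```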


Table Reorganization Methods
Online reorganization (DBMS_REDEFINITION)
• Online; data is available
• Could use considerable resources
Shrink segments
• Online; data is available
• In place, no additional space required
Export, recreate, and import
• Offline; data is not available
• Time consuming for large tables
CTAS, ALTER TABLE … MOVE
• Mostly offline; data is not available
• Out of place, requires additional space
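The shrink-segments method, for example, can be sketched as follows (the EMP table is illustrative; the segment must reside in an ASSM tablespace):

```sql
-- Row movement must be enabled before a shrink
alter table emp enable row movement;

-- Compact the segment and release space back (online, in place);
-- CASCADE also shrinks the dependent index segments
alter table emp shrink space cascade;
```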


Monday 11 May 2015

OCR and Voting disk backup and recovery strategies

To determine the path defined for the OCR and its mirror location, use the following command:
more /etc/oracle/ocr.loc
ocrconfig_loc=+DATA
ocrmirrorconfig_loc=+VOTEOCR

To modify the default backup path, run the following command; you need to be the root user to run it successfully:
./$ORA_GRID_HOME/bin/ocrconfig -backuploc <new_location_details>

After changing the default path, all the subsequent OCR automatic and manual
backups will be generated in the new location.


Run the following command as the root user to perform an on-demand OCR binary backup:
./ocrconfig -manualbackup <directory_name>

====================================================

The following steps demonstrate the procedure to restore the OCR using the latest
binary backup copy on a shared storage device, applicable for 11g R1 and 11g R2.
=====================================================

./ocrconfig -showbackup

./crsctl stop crs (repeat the command on all nodes in 11g R1)


In 11g R2:
./crsctl stop cluster -all (just from the first node)
./crsctl stop crs (repeat the command on all nodes in 11g R2)


./ocrconfig -restore <backup_copy_location_filename>


./crsctl start crs


./ocrcheck
./cluvfy comp ocr -n all -verbose

=================================================

Now, we will look at the procedure to restore an OCR and Voting disk file that
are stored together in the same ASM diskgroup. The steps demonstrate how to
reconstruct the corrupted or lost diskgroup followed by restoring/recovering the
OCR and Voting disk file:

./ocrconfig -showbackup

./crsctl stop crs (use the -f flag if you have any issues
stopping the cluster stack)


Start up the cluster stack in exclusive mode on a node that holds the latest
binary backup file for OCR. Run the following command as root user:
./crsctl start crs -excl

Connect to the local ASM instance on the node to recreate the diskgroup first,
ensure the diskgroup COMPATIBLE.ASM attribute is set to 11.2, and run the
following commands as GRID software owner:
export ORACLE_SID=+ASM1               # assuming it is on the first node
export ORACLE_HOME=$ORA_GRID_HOME     # set grid home
sqlplus / as sysasm
SQL> CREATE DISKGROUP data EXTERNAL REDUNDANCY
DISK '/dev/diskname1'
ATTRIBUTE 'COMPATIBLE.ASM'='11.2';

As a root user:

./ocrconfig -restore /u00/grid/oracle/product/11.2.0/cdata/


./crsctl start res ora.crsd -init
./crsctl replace votedisk +DATA


./crsctl stop crs
./crsctl start crs (repeat this command on all nodes)

./ocrcheck
./cluvfy comp ocr -n all -verbose

===================================================
Restoring the Voting disk
===================================================

Start up the Clusterware stack on the first node of the cluster in exclusive
mode, using the following command:
./crsctl start crs -excl

./crsctl replace votedisk +DISKGROUP_NAME

If the Voting disk was previously stored on shared storage and you want
to restore it on shared storage, use the following command, applicable only
in 11g R2:
./crsctl query css votedisk
Note down the File Universal Id (FUID) of the voting disk
./crsctl delete css votedisk <FUID>
./crsctl add css votedisk <destination_for_votedisk>

./crsctl stop crs -f
./crsctl start crs

Sunday 10 May 2015

Adding a Mirror location for the OCR and Voting disk

With 11g R1, you can add redundant copies of the OCR and the Voting disk
while the Clusterware is running across the nodes.
First, you need to log in as root to run the commands.

To add a redundant copy of the OCR on a shared storage location, run the following command:
./$ORA_GRID_HOME/bin/ocrconfig -add /dev/sdf1 --(new location path)

To place the redundant copy in an ASM disk group, use the following command. Ensure the diskgroup is created with the COMPATIBLE.ASM attribute set to 11.2 and is mounted on all ASM instances of the cluster.
./ocrconfig -add +OCRVOTE

To add a redundant copy of a Voting disk file on a shared storage location, use the following command as the root user. Note, however, that you cannot add a redundant copy of a Voting disk when it is placed on ASM storage.
./$ORA_GRID_HOME/bin/crsctl add css votedisk /dev/sdg1 --(new location path)

Thursday 7 May 2015

Why and how the deadlock situation arises

A deadlock situation arises when two or more sessions wait for resources locked by one another, i.e., each process waits to update rows that are locked by the other. If Oracle detects a deadlock, it rolls back the current statement of the session chosen as the deadlock victim.

What is VIP used for?

It returns a dead connection IMMEDIATELY when its primary node fails. Without the VIP, clients would have to wait up to around 10 minutes (a TCP timeout) to receive ORA-3113: "end of file on communications channel". Alternatively, using Transparent Application Failover (TAF) can avoid ORA-3113.

Wednesday 6 May 2015

Reconfiguring a failed Grid Infrastructure configuration


If you would like to resume a failed configuration attempt, you first need to deconfigure the failed configuration and then proceed again with the configuration, using the following actions.
Firstly, identify the failures that occurred during the root.sh script execution by referring to the installation logs; for example, use the crsconfig log file under the $GRID_HOME/cfgtoollogs directory. Then, as the root user, proceed with the following deconfiguration step:
./$GRID_HOME/install/rootcrs.pl -verbose -deconfig -force --(except on the last node)
Once you run the script successfully on the first node of the cluster, proceed with
subsequent nodes, repeating the same command.
However, when you reach the last node of the cluster, you need to run the following (slightly modified) command:
./$GRID_HOME/install/rootcrs.pl -verbose -deconfig -force -lastnode
--(just on the last node only)
Assuming that the problems that occurred during the root.sh execution time have
been addressed properly and the failed configuration has been removed successfully
on the node, using the preceding procedure, you need to rerun the root.sh script as
root user to reconfigure the Grid Infrastructure installation again.
./$GRID_HOME/root.sh
Once the root.sh script has successfully completed on the first node, then proceed
with the other nodes and repeat the same command, one node at a time.

Removing a successful Grid Infrastructure configuration

Keep in mind that to remove a successful configuration, you are required to perform a two-step action:
• You need to deconfigure the Clusterware services
• You need to deinstall the software binaries

Firstly, stop any active databases running on the node, followed by the ASM instance.
You can use the srvctl stop database / srvctl stop asm commands, or the SQL*Plus
command SHUTDOWN IMMEDIATE, to stop the databases and the ASM instance.

Now, log in as root user, navigate to the $GRID_HOME/install directory, and initiate
the following command:
./rootcrs.pl -verbose -deconfig -force --(on every node except the last)
Once the command has completed successfully on the node, move to the next node
and repeat the command.
Once you reach the last node of the cluster, you need to run the following (slightly
modified) command:
./rootcrs.pl -verbose -deconfig -force -lastnode --(on the last node only)

When you run the command on the last node, indicated by the -lastnode flag,
Oracle completes the deconfiguration process and also zeroes out the OCR and
voting disks. However, the -deconfig option of rootcrs.pl only deconfigures
the Clusterware; it doesn't remove the software binaries from the GRID HOME.
In order to remove the binaries from the home, you need to run the following
command on each node of the cluster as the grid software owner, for example,
as the oracle user:
$GRID_HOME/deinstall/deinstall

What happens when root.sh is run after the grid software installation?


It is only during root.sh script execution that the majority of the Clusterware
configuration actions are performed. Hence, error-free completion of the script on
all nodes of a cluster is the key factor for a successful cluster configuration.
Typically, the script goes through the following list of actions when root.sh is
executed on the first node:
• Sets the Oracle base and home environments
• Creates the /etc/oratab file
• Verifies the super user privileges
• Creates a trace directory
• Generates OCR keys for the root user
• Adds daemon information to the inittab file
• Starts up the Oracle High Availability Service Daemon (OHASD) process
• Creates and configures an ASM instance and starts up the instance
• Creates the required ASM disk groups, if ASM is used to store the OCR and
voting files
• Accumulates OCR keys for the root user
• Starts up the Cluster Ready Service Daemon (CRSD) process
• Creates the voting disk file
• Puts the voting file into the ASM disk group, if ASM storage is selected
• Displays voting disk details
• Stops and restarts a cluster stack and other cluster resources on the local node
• Backs up the OCR to a default location
• Installs the cvuqdisk-1.0.7-1 package
• Updates the Oracle inventory file
• Completes with the UpdateNodeList success operation

When root.sh is executed on the last node of the cluster, the following set of actions
is likely to be performed by the script:

• Sets Oracle base and home environmental variables
• Creates the /etc/oratab file
• Verifies the super user privileges
• Adds trace directories
• Generates OCR keys for the root user
• Adds a daemon to inittab
• Starts the Oracle High Availability Service Daemon (OHASD) process
• Stops/starts a cluster stack and other cluster resources on the local node
• Performs a backup of the OCR file
• Installs the cvuqdisk-1.0.7-1 package
• Updates the Oracle inventory file
• Completes with UpdateNodeList success operation

Tuesday 5 May 2015

ASM

/etc/init.d/oracleasm listdisks     # list ASM disks currently labelled
/etc/init.d/oracleasm scandisks     # rescan for newly added or changed disks
/etc/init.d/oracleasm listdisks     # list again to pick up the new disks
/etc/init.d/oracleasm listdisks | xargs /etc/init.d/oracleasm querydisk -d   # show device details for every disk
/etc/init.d/oracleasm status        # check whether the ASMLib driver is loaded and mounted
/usr/sbin/oracleasm configure       # display the current ASMLib configuration
ls -l /dev/oracleasm/disks/*        # verify the device files, ownership, and permissions
rpm -qa | grep oracleasm            # check which ASMLib packages are installed
uname -a                            # kernel version (must match the oracleasm kernel module)

Monday 4 May 2015

Nologging and force logging

NOLOGGING can be used to prevent bulk operations from logging too much information to Oracle's Redo log files.
On the other hand, FORCE LOGGING can be used on tablespace or database level to force logging of changes to the redo. This may be required for sites that are mining log data, using Oracle Streams or using Data Guard (standby databases).

Nologging


NOLOGGING can be used to minimize the amount of redo generated by Oracle. Only the following operations can make use of nologging:
  • SQL*Loader in direct mode
  • INSERT /*+APPEND*/ ...
  • CTAS
  • ALTER TABLE statements (move/add/split/merge partitions)
  • CREATE INDEX
  • ALTER INDEX statements (move/add/split/merge partitions)
To create a table in NOLOGGING mode:
SQL> CREATE TABLE t1 (c1 NUMBER) NOLOGGING;
Table created.
To enable NOLOGGING for a table:
SQL> ALTER TABLE t1 NOLOGGING;
Table altered.

Force logging

A tablespace or the entire database is either in force logging or no force logging mode. To see which it is, run:
SQL> SELECT force_logging FROM v$database;
FOR
---
NO
or
SQL> SELECT tablespace_name, force_logging FROM dba_tablespaces;
TABLESPACE_NAME                FOR
------------------------------ ---
SYSTEM                         NO
...
To enable force logging:
SQL> ALTER DATABASE force logging;
Database altered.
SQL> ALTER TABLESPACE users FORCE LOGGING;
Tablespace altered.
To disable:
SQL> ALTER DATABASE no force logging;
Database altered.
SQL> ALTER TABLESPACE users NO FORCE LOGGING;
Tablespace altered.



Temporary tablespaces and temporary segments are not affected by FORCE LOGGING mode because these objects do not generate any redo. Undo tablespaces always log their changes, so they cannot be explicitly placed into FORCE LOGGING mode. Oracle will generate an error if an attempt is made to put a temporary tablespace or an undo tablespace into FORCE LOGGING mode.

What is the difference between restoring and recovering a database?


Restore is the process of copying physical files from a backup to the required location.
Recovery is the process of bringing the restored datafiles up to date by applying archived (and online) redo log files.

To protect against fractured block corruption during BEGIN BACKUP mode, Oracle copies the whole block image into the redo record the first time a block is changed, so the size of the redo increases. In this context, more redo is generated while the tablespace is in backup mode.
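A fractured block and its repair can be modelled in a few lines of Python. This is only a toy illustration of the idea, not Oracle's on-disk format: the OS backup copies the first half of a block while the database rewrites it, so the backup holds half old and half new content, and the full block image logged to redo makes the copy repairable:

```python
# Toy model of a "fractured" block during hot backup.
BLOCK_SIZE = 8

old_block = b"AAAAAAAA"     # block contents before the update
new_block = b"BBBBBBBB"     # block contents after the update

# Full block image captured in redo at the first change in backup mode.
redo_block_image = new_block

# Backup copy torn mid-write: first half old, second half new.
fractured = old_block[:BLOCK_SIZE // 2] + new_block[BLOCK_SIZE // 2:]
print(fractured)                 # b'AAAABBBB' -- inconsistent block

# Recovery overwrites the fractured copy with the logged block image.
recovered = redo_block_image
print(recovered == new_block)    # True
```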