Friday 2 September 2016

Local and remote connections to the database - beginner level


Connecting to the Database with SQL*Plus


About SQL*Plus
SQL*Plus is the primary command-line interface to your Oracle database. You use SQL*Plus to start up and shut down the database, set database initialization parameters, create and manage users, create and alter database objects (such as tables and indexes), insert and update data, run SQL queries, and more. Before you can submit SQL statements and commands, you must connect to the database.
With SQL*Plus, you can connect locally or remotely. Connecting locally means connecting to an Oracle database running on the same computer on which you are running SQL*Plus.
 Connecting remotely means connecting over a network to an Oracle database that is running on a remote computer. Such a database is referred to as a remote database.
The SQL*Plus executable on the local computer is provided by a full Oracle Database installation, an Oracle Client installation, or an Instant Client installation.
Connecting locally to the database:
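As a minimal sketch (the instance name orcl is an assumption), a local connection uses the ORACLE_SID environment variable and does not go through the listener at all:

$ export ORACLE_SID=orcl
$ sqlplus / as sysdba

SQL> select instance_name, status from v$instance;

This is called a bequeath connection: SQL*Plus talks directly to the local instance using OS authentication.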

Connecting remotely to the database:
Connecting remotely to the database requires the listener to be up and running, because a remote connection always goes to the listener first, and the listener then routes the connection to the database.
Here I am trying to connect to the database remotely, but the connection cannot be established because the listener has not been started yet.
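A representative attempt (assuming a net service name orcl is defined in tnsnames.ora) looks like this:

$ sqlplus system@orcl

ERROR:
ORA-12541: TNS:no listener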


The connection failed with ORA-12541: TNS:no listener.

Now I am starting the listener, as shown below.
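Assuming the default listener name LISTENER, this is simply:

$ lsnrctl start
$ lsnrctl status

The status command confirms that the listener is up and shows the services it knows about.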



Now I am trying to connect to the database remotely with the same command, and this time the connection is established successfully. So remote database connections always need the listener to be up in order to reach the database.
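A representative session with the same assumed service name:

$ sqlplus system@orcl

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

SQL>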

Bear in mind that restarting the listener will not affect existing connections; only users who try to connect to the database while the listener is down will be rejected.

Tuesday 8 March 2016

Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)


Traditional Cross Platform Transportable Tablespaces

The high-level steps in a typical XTTS scenario are the following:


  1. Make tablespaces in source database READ ONLY
  2. Transfer datafiles to destination system
  3. Convert datafiles to destination system endian format
  4. Export metadata of objects in the tablespaces from source database using Data Pump
  5. Import metadata of objects in the tablespaces into destination database using Data Pump
  6. Make tablespaces in destination database READ WRITE
Because the data transported must be made read only at the very start of the procedure, the application that owns the data is effectively unavailable to users for the entire duration of the procedure.  Due to the serial nature of the steps, the downtime required for this procedure is proportional to the amount of data.  If data size is large, datafile transfer and convert times can be long, thus downtime can be long.
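As a minimal sketch of the traditional flow (the tablespace name TS1, directory object DP_DIR, platform name, and file paths are all assumptions for illustration):

SQL> alter tablespace ts1 read only;
$ expdp system directory=dp_dir dumpfile=tts.dmp transport_tablespaces=TS1
RMAN> convert tablespace ts1 to platform 'Linux x86 64-bit' format '/stage/%U';
-- copy the converted datafiles and the dump file to the destination, then:
$ impdp system directory=dp_dir dumpfile=tts.dmp transport_datafiles='/u01/oradata/ts1_01.dbf'
SQL> alter tablespace ts1 read write;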

Reduce Downtime using Cross Platform Incremental Backup

To reduce the amount of downtime required for XTTS, Oracle has enhanced RMAN's ability to roll forward datafile copies using incremental backups, to work in a cross-platform scenario.  By using a series of incremental backups, each smaller than the last, the data at the destination system can be brought almost current with the source system, before any downtime is required.  The downtime required for datafile transfer and convert when combining XTTS with Cross Platform Incremental Backup is now proportional to the rate of data block changes in the source system.

The high-level steps using the cross platform incremental backup capability are the following:
  • Prepare phase (source data remains online)
  1. Transfer datafiles to destination system
  2. Convert datafiles, if necessary, to destination system endian format
  • Roll Forward phase (source data remains online - Repeat this phase as many times as necessary to catch destination datafile copies up to source database)
  1. Create incremental backup on source system
  2. Transfer incremental backup to destination system
  3. Convert incremental backup to destination system endian format and apply the backup to the destination datafile copies
  • Transport phase (source data is READ ONLY)
  1. Make tablespaces in source database READ ONLY
  2. Repeat the Roll Forward phase one final time
      This step makes the destination datafile copies consistent with the source database.
      Time for this step is significantly shorter than the traditional XTTS method when dealing with large data, because the incremental backup size is smaller.
  3. Export metadata of objects in the tablespaces from source database using Data Pump
  4. Import metadata of objects in the tablespaces into destination database using Data Pump
  5. Make tablespaces in destination database READ WRITE
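The MOS note automates these phases with its supplied scripts (xttdriver.pl). Purely to illustrate the underlying RMAN capability, and not the note's exact procedure, the source side of one Roll Forward cycle is conceptually similar to the following (the SCN and path are assumptions):

RMAN> backup incremental from scn 1439730 tablespace ts1 format '/stage/xtts_%U';

The resulting incremental backup is then transferred to the destination system, where the note's scripts convert its endian format and apply it to the destination datafile copies.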

The purpose of this document is to provide an example of how to use this enhanced RMAN cross platform incremental backup capability to reduce downtime when transporting tablespaces across platforms.

Monday 7 March 2016

Oracle Architecture


An Oracle database server consists of an Oracle database and one or more database instances.
An Oracle instance consists of memory structures and background processes. Every time an instance is started, a shared memory area called the System Global Area (SGA) is allocated and the background processes are started.

The database consists of both physical structures and logical structures. Because the physical and logical structures are separate, the physical storage of data can be managed without affecting access
to the logical storage structures.
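For example, you can see which background processes are currently running in an instance with a query like this (V$BGPROCESS lists all possible background processes; a non-null process address indicates a running one):

SQL> select name, description from v$bgprocess where paddr <> '00' order by name;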

Sunday 12 July 2015

Why do we need odd number of voting disks ?


The reason for the odd number of voting disks is to solve the "split-brain" problem. The split-brain problem occurs when the cluster interconnect goes down and each node cannot obtain a heartbeat from the other nodes. At this point, each node in the cluster thinks it is the only one alive and that it should become the "master node". There can be only one master node. Each half thinks it is the brains of the operation, which we cannot allow.
So how do we resolve this? 
We set up a race. Each candidate node wants to be the master, so we put it up to a vote: whichever contestant gets the most votes wins. Node 1 contacts the first voting disk and says "I am here first so I am the master!". Simultaneously, node 2 contacts the second voting disk and says "I am here first so I am the master!". They can't both be right. If there were an even number of voting disks, each candidate node could come up with exactly half of the total votes and we would have a tie, which must be resolved. Now the real race begins. The two nodes run to the third and final voting disk. The node that gets there first now has 2 votes to 1, wins the election, and becomes the master node in the cluster.

Saturday 4 July 2015

Turning Flashback ON is possible while the database is open (starting from 11gR2)

Starting from 11gR2, Oracle allows us to enable Flashback Database even while the database is open.
This was not possible in earlier versions; up to 11gR1, Flashback could be enabled only while the database was in mount mode.


The situation is simulated below with detailed steps:

11gR2 :

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Mon Jul 18 04:56:02 2011

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select flashback_on, log_mode, open_mode from v$database;

FLASHBACK_ON       LOG_MODE     OPEN_MODE
------------------ ------------ --------------------
NO                 ARCHIVELOG   READ WRITE

SQL> alter database flashback on;

Database altered.

SQL> select flashback_on, log_mode, open_mode from v$database;

FLASHBACK_ON       LOG_MODE     OPEN_MODE
------------------ ------------ --------------------
YES                ARCHIVELOG   READ WRITE

SQL> alter database flashback off;

Database altered.

SQL> select flashback_on, log_mode, open_mode from v$database;

FLASHBACK_ON       LOG_MODE     OPEN_MODE
------------------ ------------ --------------------
NO                 ARCHIVELOG   READ WRITE
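For contrast, on 11gR1 and earlier the same statement fails while the database is open; the attempt looks roughly like this:

SQL> alter database flashback on;
alter database flashback on
*
ERROR at line 1:
ORA-38759: Database must be mounted by only one instance and not open.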


Friday 19 June 2015

Determining Which Columns to Index

We recommend that you place indexes on primary key, unique key, and foreign key columns. This is a good starting point. Furthermore, analyze slowly performing SQL statements to see which columns are used; this will give you additional candidate columns for indexing. These indexing recommendations lay the foundation for maximizing SQL query performance.

Now we turn our attention to figuring out which columns should be indexed.
For most applications, you create indexes in the following situations:
• Define a primary key constraint for each table: this results in an index automatically being created on the columns specified in the primary key.
• Create unique key constraints on columns that are required to be unique and are different from the primary key columns: each unique key constraint results in an index automatically being created on the columns specified in the constraint.
• Manually create indexes on foreign key columns: this is done for better performance and to avoid certain locking issues.

Other Suitable Columns
When choosing an index, keep in mind this basic rule: the indexing strategy should be based on the
columns you use when querying the table. You can create more than one index on a table and have an
index contain multiple columns. You will make better decisions if you first consider what types of
queries you execute on a table. If you’ve identified a poorly performing SQL query, also consider creating
indexes for the following columns:
• Create indexes on columns used often as predicates in the WHERE clause; when
multiple columns from a table are used in the WHERE clause, consider using a
concatenated (multi-column) index.
• Create a covering index on columns used in the SELECT clause.
• Consider creating indexes on columns used in the ORDER BY, GROUP BY, UNION, or
DISTINCT clauses.
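For example, if queries frequently filter on two columns of the ADDRESS table used later in this post (the CITY and STATE column names are assumptions), a concatenated index covering both predicates might look like this:

SQL> create index addr_idx1 on address(city, state);

A query such as select cust_id from address where city = 'Portland' and state = 'OR' could then be resolved through this one index.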


-> Create primary key constraints for all tables. This automatically creates a B-tree index (if the columns in the primary key aren't already indexed).
-> Create unique key constraints where appropriate. This automatically creates a B-tree index (if the columns in the unique key aren't already indexed).
-> Create indexes on foreign-key columns. Foreign-key columns are usually included in the WHERE clause when joining tables, so indexing them improves the performance of SQL SELECT statements.
-> Creating a B-tree index on foreign key columns also reduces locking issues when updating and inserting into child tables.

B-tree indexes:
B-tree indexes are suitable for most applications where you have high-cardinality column values.

Bitmap indexes:
Consider using bitmap indexes in data warehouse environments. These indexes are ideal for low-cardinality columns whose values aren't updated often.
Bitmap indexes work well on the foreign key columns of star schema fact tables, where you often run queries that use AND and OR conditions.
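For example (F_SALES is a hypothetical star schema fact table):

SQL> create bitmap index f_sales_fk1 on f_sales(cust_id);

Note that bitmap indexes require the Enterprise Edition.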

  • Dropping indexes that are no longer needed frees up physical space and improves the performance of data manipulation language (DML) statements.


The B-tree index is the default index type in Oracle. This index type is known as B-tree because the table row identifier (ROWID) and associated column values are stored within index blocks in a balanced tree-like structure. Oracle B-tree indexes are used for the following reasons:

  •  Improving SQL statement performance.
  •  Enforcing uniqueness of primary key and unique key constraints.
  •  Reducing potential locking issues with parent and child tables associated via primary and foreign key constraints.

Query to find out the indexes on tables:


select index_name, index_type, table_name, tablespace_name, status
from user_indexes
where table_name in ('CUST','ADDRESS');
Here is some sample output:
INDEX_NAME           INDEX_TYPE TABLE_NAME TABLESPACE_NAME STATUS
-------------------- ---------- ---------- --------------- ----------
CUST_PK              NORMAL     CUST       REPORTING_INDEX VALID
CUST_UK1             NORMAL     CUST       REPORTING_INDEX VALID
ADDR_FK1             NORMAL     ADDRESS    REPORTING_INDEX VALID

Run the following query to verify the columns on which the indexes are created:

select index_name, column_name, column_position
from user_ind_columns
where table_name in ('CUST','ADDRESS')
order by index_name, column_position;
Here is some sample output:
INDEX_NAME           COLUMN_NAME          COLUMN_POSITION
-------------------- -------------------- ---------------
ADDR_FK1             CUST_ID                            1
CUST_PK              CUST_ID                            1
CUST_UK1             LAST_NAME                          1
CUST_UK1             FIRST_NAME                         2

To display the number of extents and space used, first insert a couple of rows so that the indexes allocate segments:

insert into cust values(1,'STARK','JIM');
insert into address values(100,1,'Vacuum Ave','Portland','OR');
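Then run a segment-usage query against USER_SEGMENTS; a minimal version matching the output columns below is:

select segment_name, segment_type, extents, bytes
from user_segments
where segment_name in ('CUST_PK','CUST_UK1','ADDR_FK1');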
Running this query (which reports on segment usage) yields the following output:
SEGMENT_NAME         SEGMENT_TYPE            EXTENTS      BYTES
-------------------- -------------------- ---------- ----------
CUST_PK              INDEX                         1    1048576
CUST_UK1             INDEX                         1    1048576
ADDR_FK1             INDEX                         1    1048576


Use the DBMS_METADATA.GET_DDL function to display an object’s DDL. Make sure you set the LONG variable to an appropriate value so that the returned CLOB value is displayed in its entirety.
 For example,

SQL> set long 1000000
SQL> select dbms_metadata.get_ddl('INDEX','ADDR_FK1') from dual;

Here is the output:

DBMS_METADATA.GET_DDL('INDEX','ADDR_FK1')
--------------------------------------------------------------------------------
CREATE INDEX "MV_MAINT"."ADDR_FK1" ON
"MV_MAINT"."ADDRESS" ("CUST_ID") PCTFREE 10 INITRANS 2 MAXTRANS 255
COMPUTE STATISTICS STORAGE(INITIAL 1048576
NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "REPORTING_INDEX"


  • Here’s an example of making an index invisible:

SQL> alter index addr_fk1 invisible;
This code makes the index invisible to the query optimizer so that it can’t be used to retrieve rows for a query. However, the index structure is still maintained by Oracle as records are modified in the table.  If you determine that the index was critical for performance, you can easily make it visible to the optimizer again via

SQL> alter index addr_fk1 visible;


  • Your other option before dropping an index is to make it unusable.


SQL> alter index addr_fk1 unusable;

This code renders the index unusable, but doesn’t drop it. Unusable means that the optimizer won’t use
the index and Oracle won’t maintain the index as DML statements operate on its table. Furthermore, an
unusable index can’t be used internally to enforce constraints or prevent locking issues.

If you need to re-enable an unusable index, then you’ll have to rebuild it. Be aware that rebuilding a
large index can take a considerable amount of time and resources.

SQL> alter index addr_fk1 rebuild;

After you’re sure an index isn’t required, use the DROP INDEX statement to remove it.
This permanently drops the index. The only way to get the index back is to re-create it.

SQL> drop index addr_fk1;


Viewing Primary Key Constraint and Index Details
You can confirm the details of the index as follows:
select index_name, index_type, uniqueness
from user_indexes
where table_name = 'CUST';

Here is the output for this example:
INDEX_NAME           INDEX_TYPE      UNIQUENES
-------------------- --------------- ---------
CUST_PK              NORMAL          UNIQUE
Here is a query to verify the constraint information:
select constraint_name, constraint_type
from user_constraints
where table_name = 'CUST';
Here is the corresponding output:
CONSTRAINT_NAME      CONSTRAINT_TYPE
-------------------- ---------------
CUST_PK              P

Dropping Primary Key Constraint and Index
An index that was automatically created (when the primary key constraint was created) can’t be directly
dropped. In this scenario, if you attempt to drop the index
SQL> drop index cust_pk;
you’ll receive this error
ORA-02429: cannot drop index used for enforcement of unique/primary key
To remove the index, you must do so by dropping or disabling the primary key constraint. For
example, any of the following statements will drop an index that was automatically created when the
constraint was created:
SQL> alter table cust disable constraint cust_pk;
SQL> alter table cust drop constraint cust_pk;
SQL> alter table cust drop primary key;
When you drop or disable a primary key constraint, you have the option of not dropping the
associated index. Use the KEEP INDEX clause of the DROP/DISABLE CONSTRAINT clause to retain the index.
For example,
SQL> alter table cust drop constraint cust_pk keep index;
This code instructs Oracle to drop the constraint but keep the index. If you're working with large tables, you may want to disable or drop the constraint for performance reasons while loading or manipulating the data; re-creating the index associated with a large table can take considerable time and resources.
One other aspect to be aware of is that if a primary key or unique key is referenced by an enabled
foreign key, and you attempt to drop the constraint on the parent table

SQL> alter table cust drop primary key;
you’ll receive this error
ORA-02273: this unique/primary key is referenced by some foreign keys
In this situation you’ll need to first drop or disable the referenced foreign key, or use the CASCADE
clause to automatically drop the foreign key constraint when the primary key constraint is dropped or
disabled. For example,
SQL> alter table cust drop constraint cust_pk cascade;
SQL> alter table cust disable constraint cust_pk cascade;
SQL> alter table cust drop primary key cascade;


Saturday 23 May 2015

Oracle dba interview questions and answers

What is the main purpose of a CHECKPOINT?
Answer:
A checkpoint is a database event that synchronizes the data blocks in memory with the datafiles on disk.
A checkpoint has two purposes:
    1. to establish data consistency
    2. to enable faster database recovery

The following parameters can be used by the DBA to adjust the time or interval of how frequently checkpoints occur in the database:
LOG_CHECKPOINT_TIMEOUT = 3600; # Every one hour
LOG_CHECKPOINT_INTERVAL = 1000; # number of OS blocks.
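You can check the current settings from SQL*Plus; for example (output values are illustrative):

SQL> show parameter log_checkpoint

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----
log_checkpoint_interval              integer     1000
log_checkpoint_timeout               integer     3600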


How do you backup and restore using Transportable Tablespaces
Answer:
   
1. Run DBMS_TTS against the tablespace to see whether the tablespace is self-contained (see the example after this list).
2. Make the tablespace READ ONLY.
3. Export the metadata (EXP TRANSPORT_TABLESPACE=y TABLESPACES=ts1)
4. Copy the datafiles to the target host
5. Copy the export dump to the target
6. Import the metadata (IMP TRANSPORT_TABLESPACE=y DATAFILES=(file1,file2))
7. Bring the tablespace ONLINE on the target and set the SOURCE tablespace back to READ WRITE
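For step 1, the self-contained check looks like this (the tablespace name TS1 is assumed):

SQL> exec dbms_tts.transport_set_check('TS1', true);
SQL> select * from transport_set_violations;

no rows selected

An empty TRANSPORT_SET_VIOLATIONS view means the tablespace set is self-contained and safe to transport.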

What does RESETLOGS option do?

1. Creates a new incarnation of the database, putting a new SCN in all data file headers.
2. Reset Log Sequence number to 1
3. Reformats ONLINE REDO LOGFILES if they exist

What are the options available to refresh snapshots?
Answer:
COMPLETE - Tables are completely regenerated using the snapshot's query and the master tables every time the snapshot is refreshed.
FAST - If a simple snapshot is used, a snapshot log can be used to send the changes to the snapshot tables.
FORCE - Default value. If possible, it performs a FAST refresh; otherwise it performs a COMPLETE refresh.

What is snapshot log?
Answer:
It is a table that maintains a record of modifications to the master table in a snapshot. It is stored in the same database as the master table and is only available for simple snapshots. It should be created before creating snapshots.

Explain the difference between $ORACLE_HOME and $ORACLE_BASE.
Answer:
ORACLE_BASE is the root directory for Oracle. ORACLE_HOME, located beneath ORACLE_BASE, is where the Oracle products reside.

Explain Different values supported by CURSOR_SHARING parameter and its
explanation
Answer:
The following values are supported by the CURSOR_SHARING init.ora parameter:
FORCE - Literals will be replaced by system generated bind variables where possible
SIMILAR - Oracle determines which literals are "safe" for substitution with bind variables. This will result
in some SQL not being shared in an attempt to provide a more efficient execution plan.
EXACT - Only allows statements with identical text to share the same cursor
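The parameter can be inspected and changed dynamically; for example:

SQL> show parameter cursor_sharing
SQL> alter system set cursor_sharing = 'FORCE';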


Background processes:
Background processes are started when an Oracle instance is started.
They maintain and enforce relationships between physical and memory structures.

There are two types of background processes:
1. Mandatory background processes
2. Optional background processes
Mandatory background processes:
– DBWn, PMON, CKPT, LGWR, SMON
Optional background processes:
– ARCn, LMDn, RECO, CJQ0, LMON, Snnn, Dnnn, Pnnn, LCKn, QMNn

DBWn writes when:
• Checkpoint occurs
• Dirty buffers reach threshold
• There are no free buffers
• Timeout occurs
• RAC ping request is made
• Tablespace OFFLINE
• Tablespace READ ONLY
• Table DROP or TRUNCATE
• Tablespace BEGIN BACKUP

Log Writer (LGWR) writes:
• At commit
• When 1/3rd full
• When there is 1 MB of redo
• Every 3 seconds
• Before DBWn writes

System Monitor (SMON) Responsibilities:
• Instance recovery
– Rolls forward changes in redo logs
– Opens database for user access
– Rolls back uncommitted transactions
• Coalesces free space
• Deallocates temporary segments.
Process Monitor (PMON) Cleans up after failed processes by:
• Rolling back the transaction
• Releasing locks
• Releasing other resources
• Restarting dead dispatchers

Checkpoint (CKPT) Responsible for:
• Signaling DBWn at checkpoints
• Updating datafile headers with checkpoint information
• Updating control files with checkpoint information
Archiver (ARCn)
• Optional background process
• Automatically archives online redo logs when ARCHIVELOG mode is set
• Preserves the record of all changes made to the database


What is Global Inventory ?

Global Inventory holds information about Oracle products on a machine. These products can be various Oracle components like database, Oracle Application Server, Collaboration Suite, SOA Suite, Forms & Reports, or Discoverer Server. The global inventory location is determined by the file oraInst.loc in /etc (on Linux) or /var/opt/oracle (on Solaris). If you want to see the list of Oracle products on a machine, check the file inventory.xml under ContentsXML in oraInventory. (Please note: if you have multiple global inventories on the machine, check all oraInventory directories.)
You will see an entry like:
<HOME NAME="ORA10g_HOME" LOC="/u01/oracle/10.2.0/db" TYPE="O" IDX="1"/>

What is Local Inventory ?

The inventory inside each Oracle home is called the local inventory, or oracle_home inventory. This inventory holds information for that Oracle home only.

What is Oracle Home Inventory?

The Oracle home inventory, or local inventory, is present inside each Oracle home. It contains information relevant to that Oracle home only, and it is located in the following location:
$ORACLE_HOME/inventory
It contains the following files and folders:
· Components File
· Home Properties File
· Other Folders


What to do if my Global Inventory is corrupted ?
No need to worry if your global inventory is corrupted; you can recreate the global inventory on the machine using the Universal Installer and attach the already installed Oracle homes with the -attachHome option:

./runInstaller -silent -attachHome -invPtrLoc $location_to_oraInst.loc
ORACLE_HOME="Oracle_Home_Location" ORACLE_HOME_NAME="Oracle_Home_Name"
CLUSTER_NODES="{}"


Will the instance abort if any one of the 6 mandatory background processes is killed or not running?

Background processes are started automatically when the instance is started.
The mandatory background processes are DBWn, LGWR, CKPT, SMON, PMON, and RECO. All other processes are optional and are invoked only if the particular feature is activated.
If any one of these 6 mandatory background processes is killed or not running, the instance will be aborted.
Any issues related to background processes should be monitored and analyzed from the generated trace files and the alert log.


In what scenarios open resetlogs required ?

An ALTER DATABASE OPEN RESETLOGS statement is required:
1. after incomplete recovery (point-in-time recovery),
2. after recovery with a backup control file, or
3. after recovery with a control file recreated with the RESETLOGS option.

If both CPU and PSU are available for given version which one, you will prefer to apply?

Once you apply a PSU, the recommended way is to apply only subsequent PSUs. In fact, there is no need to apply a CPU on top of a PSU, because the PSU contains the CPU (applying a CPU over a PSU is treated as trying to roll back the PSU and in fact requires more effort). So if you have not yet decided on or applied either type of patch, I suggest you use PSU patches. For more details refer to: Oracle Products [ID 1430923.1], ID 1446582.1

A PSU is a superset of a CPU: the CPU mostly deals with security issues, whereas the PSU fixes security issues as well as functionality bugs.
If you are only concerned about security issues, then the CPU may be the better approach.
Oracle releases both patch sets every quarter.




  •  Starting with the 11.2.0.2 patch set, Oracle Database patch sets are full installations of the Oracle Database software. This means that you do not need to install Oracle Database 11g Release 2 (11.2.0.1) before installing Oracle Database 11g Release 2 (11.2.0.2).



  •  Direct upgrade to Oracle 10g is only supported if your database is running one of the following releases: 8.0.6, 8.1.7, 9.0.1, or 9.2.0. If not, you will have to upgrade the database to one of these releases or use a different upgrade option (like export/ import).



  • Direct upgrades to 11g are possible from existing databases with versions 9.2.0.4+, 10.1.0.2+ or 10.2.0.1+. Upgrades from other versions are supported only via intermediate upgrades to a supported upgrade version.



Oracle version 11.2.0.4.0 - what does each number refer to?
In the Oracle version number, each digit refers to:
11 – Major database release number
2 – Database Maintenance release number
0 – Application server release number
4 – Component Specific release number
0 – Platform specific release number



How to get the latest version of OPatch?

You can download the latest version of OPatch from My Oracle Support (formerly MetaLink) using the following URL.

http://updates.oracle.com/download/6880880.html



Explain the difference between a FUNCTION, PROCEDURE and PACKAGE ?

A function and a procedure are the same in that they are both collections of PL/SQL code that carry out a single task. While a procedure does not have to return any value to the calling application, a function returns a single value. A package, on the other hand, is a collection of functions and procedures that are grouped together based on their commonality to a business function or application.
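A minimal sketch (hypothetical names) showing the difference:

create or replace function get_area(p_len number, p_wid number)
return number is
begin
  return p_len * p_wid;  -- a function must return a value
end;
/

create or replace procedure print_area(p_len number, p_wid number) is
begin
  -- a procedure need not return anything; it just performs an action
  dbms_output.put_line('Area: ' || get_area(p_len, p_wid));
end;
/

In a package, both would be declared together in the package specification and implemented in the package body.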

What is different between TRUNCATE and DELETE?

The DELETE command generates undo for the rows it removes, whereas TRUNCATE simply deallocates the data without row-level undo. Hence, data removed by DELETE can be rolled back, but data removed by TRUNCATE cannot. TRUNCATE is a DDL statement, whereas DELETE is a DML statement.
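A minimal sketch using a hypothetical table T:

SQL> delete from t;      -- DML: row-level undo is generated
SQL> rollback;           -- the deleted rows come back
SQL> truncate table t;   -- DDL: implicit commit, cannot be rolled back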