01 Oracle DBA Interview Questions
******************************************************************************
INITIALIZATION PARAMETERS
******************************************************************************
PARAMETERS
2.SESSIONS:-SESSIONS specifies the maximum number of sessions that can be created in the
system. Because every login requires a session,
this parameter effectively determines the maximum number of concurrent users in the
system.
3.DB_BLOCK_CHECKING
4.DB_CREATE_ONLINE_LOG_DEST_n
5.DB_FILE_MULTIBLOCK_READ_COUNT
DB_FILE_NAME_CONVERT:-If you add a datafile to the primary database, you must add a
corresponding file to the standby database. When the standby database is updated, this
parameter converts the datafile name on the primary database to the datafile name on the
standby database. The file on the standby database must exist and be writable, or the recovery
process will halt with an error.
8.LOG_ARCHIVE_DEST_n:-sets the destinations for multiplexing the archived log files
(range 1-31). Each destination must specify either the LOCATION attribute or the SERVICE
attribute to indicate where to archive the redo data; all other attributes are optional.
9.SHARED_SERVERS:-specifies the number of server processes that you want to create when an
instance is started. If system load decreases, then this minimum number of servers is
maintained. Therefore, you should take care not to set SHARED_SERVERS too high at system
startup.
NLS_LANGUAGE also determines the default symbols for AD, BC, a.m., and p.m., and the default
sorting mechanism.
The ALTER SYSTEM statement without the DEFERRED keyword modifies the global value of the
parameter for all sessions in the instance, for the duration of the instance (until the database is
shut down). The value of the following initialization parameters can be changed with ALTER
SYSTEM:
17.ASM_PREFERRED_READ_FAILURE_GROUPS
19.CONTROL_FILES:-Every database has a control file, which contains entries that describe the
structure of the database (such as its name, the timestamp of its creation, and the names and
locations of its datafiles and redo log files). CONTROL_FILES specifies one or more control file
names, separated by commas.
20.CORE_DUMP_DEST
21.DB_nK_CACHE_SIZE:-size of the buffer cache for blocks of size nK.
22.DB_BLOCK_SIZE:-specifies (in bytes) the size of Oracle database blocks. Typical values are
4096 and 8192. The value of DB_BLOCK_SIZE in effect at the time you create the database
determines the size of the blocks; it cannot be changed afterwards (NON-MODIFIABLE).
27.DB_CREATE_FILE_DEST:-specifies the default location for Oracle-managed datafiles. (The
flash recovery area, set by DB_RECOVERY_FILE_DEST, is what contains multiplexed copies of
current control files and online redo logs, as well as archived redo logs, flashback logs, and
RMAN backups.)
33.DB_RECOVERY_FILE_DEST:-specifies the location of the flash recovery area (FRA), which
stores backups, backup sets, archived-log backups, and control file and SPFILE backups under a
SID directory.
36.DISPATCHERS:-configures the number and attributes of dispatcher processes for the shared
server architecture.
37.FAL_SERVER:-specifies the FAL (fetch archive log) server for a standby database.
The value is an Oracle Net service name, which is assumed to be configured properly on the
standby database system to point to the desired FAL server.
38.FAL_CLIENT:-specifies the FAL (fetch archive log) client name that is used by the FAL service,
configured through the FAL_SERVER parameter, to refer to the FAL client. The value is an Oracle
Net service name, which is assumed to be configured properly on the FAL server system to
point to the FAL client (standby database). Given the dependency of FAL_CLIENT on
FAL_SERVER, the two parameters should be configured or changed at the same time.
FAST_START_MTTR_TARGET:-specifies the number of seconds the database should take to
perform crash recovery of a single instance. When specified, FAST_START_MTTR_TARGET is
overridden by LOG_CHECKPOINT_INTERVAL.
MAX_DUMP_FILE_SIZE:-limits the size of trace files. The special value string UNLIMITED means
that there is no upper limit on trace file size; dump files can then be as large as the operating
system permits.
47.MAX_SHARED_SERVERS
49.MEMORY_MAX_TARGET:-Specifies the maximum value to which the DBA can increase the
memory_target parameter
50.OPEN_CURSORS:-OPEN_CURSORS specifies the maximum number of open cursors (handles
to private SQL areas) a session can have at once.
You can use this parameter to prevent a session from opening an excessive number of
cursors
51.PGA_AGGREGATE_TARGET:-specifies the target aggregate PGA memory available to all server
processes of the users that will be connected.
53.RESOURCE_LIMIT
58.SHARED_SERVER_SESSIONS
60.STANDBY_FILE_MANAGEMENT
61.UNDO_RETENTION:-specifies (in seconds) how long committed undo data is retained in the
UNDO tablespace, which supports recovering that data through Flashback features.
62.UNDO_TABLESPACE:-specifies the undo tablespace for the instance. If this parameter is
specified when the instance is in manual undo management mode, then an error will occur and
startup will fail.
63.USER_DUMP_DEST
The ALTER SYSTEM ... DEFERRED statement does not modify the global value of the parameter
for existing sessions, but the value will be modified for future sessions that connect to the
database. The value of the following initialization parameters can be changed with ALTER
SYSTEM ... DEFERRED:
64.AUDIT_FILE_DEST
To see the current settings of initialization parameters, use the SQL*Plus command SHOW
PARAMETERS. This command displays all parameters in alphabetical order, along with their
current values. To display all parameters having BLOCK in their names, enter SHOW
PARAMETERS BLOCK. You can use the SPOOL command to write the output to a file.
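The commands mentioned above can be sketched as follows (the spool file name is illustrative):

```sql
SQL> SHOW PARAMETERS
SQL> SHOW PARAMETERS BLOCK
SQL> SPOOL parameters.lst
SQL> SHOW PARAMETERS
SQL> SPOOL OFF
```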
You should not specify the following two types of parameters in your parameter files:
* Parameters that you never alter except when instructed to do so by Oracle to resolve a
problem
* Derived parameters, which normally do not need altering because their values are
calculated automatically by the Oracle database server
Some parameters have a minimum setting below which an Oracle instance will not start. For
other parameters, setting the value too low or too high may cause Oracle to perform badly, but
it will still run. Also, Oracle may convert some values outside the acceptable range to usable
levels.
If a parameter value is too low or too high, or you have reached the maximum for some
resource, then Oracle returns an error. Frequently, you can wait a short while and retry the
operation when the system is not as busy. If a message occurs repeatedly, then you should shut
down the instance, adjust the relevant parameter, and restart the instance.
******************************************************************************
DATAGUARD QUESTIONS
******************************************************************************
Oracle Data Guard standby databases are classified into two types, based on the way the
standby is created and the method used for redo apply: physical standby databases (Redo
Apply) and logical standby databases (SQL Apply).
Following are the different benefits of using the Oracle Data Guard feature in your environment.
1. High Availability.
2. Data Protection.
3. Disaster Recovery.
Following are the different services available in Oracle Data Guard.
1. Redo Transport Services.
2. Apply Services.
3. Role Transitions.
What are the different Protection modes available in Oracle Data Guard?
Following are the different protection modes available in Data Guard of Oracle database you
can use any one based on your application requirement.
1. Maximum Protection
2. Maximum Availability
3. Maximum Performance
How to check what protection mode of primary database in your Oracle Data Guard?
By using following query you can check protection mode of primary database in your Oracle
Data Guard setup.
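The query referred to above can be sketched as follows (run on the primary database):

```sql
SQL> SELECT protection_mode FROM v$database;
```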
For Example:
PROTECTION_MODE
——————————–
MAXIMUM PERFORMANCE
By using the following statement you can change the protection mode in your primary
database, after setting the required value in the corresponding LOG_ARCHIVE_DEST_n
parameter on the primary database for the corresponding standby database.
Example:
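A hedged example of such a statement, switching the primary to maximum availability, is:

```sql
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
```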
What are the advantages of using Physical standby database in Oracle Data Guard?
Advantages of using Physical standby database in Oracle Data Guard are as follows.
* High Availability.
* Data Protection.
* Disaster Recovery.
Oracle standby databases are divided into physical and logical standby databases based on how
the standby is created and how redo is applied. A physical standby database is created as an
exact, block-by-block copy of the primary database. Transactions that happen on the primary
database are synchronized to the standby by the Redo Apply method, which continuously
applies redo data received from the primary database. A physical standby database can offload
backup and reporting activity from the primary database. It can be opened for read-only
transactions, but redo apply does not happen during that time. From 11g onwards, however,
the Active Data Guard option (an extra purchase) lets you simultaneously open the physical
standby database for read-only access and apply redo logs received from the primary database.
Oracle standby databases are divided into physical and logical standby databases based on how
the standby is created and how redo is applied. A logical standby database is created in a similar
way to a physical standby database, and you can later alter its structure. A logical standby
database uses the SQL Apply method to stay synchronized with the primary: the SQL Apply
technology converts the received redo logs into SQL statements and continuously applies those
statements on the logical standby to keep it consistent with the primary database. The main
advantage of a logical standby over a physical standby is that you can use it for reporting while
SQL Apply runs, i.e. the logical standby database must be open during SQL Apply. Even though a
logical standby database is opened in read/write mode, the tables that are synchronized with
the primary are available only for read-only operations such as reporting and SELECT queries,
plus adding indexes and creating materialized views on those tables. Although a logical standby
has this advantage over a physical standby, it has restrictions on data types, types of DDL, types
of DML, and types of tables.
What are the advantages of Logical standby database in Oracle Data Guard?
* Data Protection
* High Availability
* Disaster Recovery
Steps for Physical Standby
4. Enable archiving
6. Configure the listener and tnsnames to support the database on both nodes
SELECT thread#, sequence# AS "SEQ#", name, first_change# AS "FIRSTSCN"
FROM v$archived_log;
V$LOG_HISTORY
Log_Archive_Dest_n
Log_Archive_Dest_State_n
Log_Archive_Config
Log_File_Name_Convert
Standby_File_Management
DB_File_Name_Convert
DB_Unique_Name
Control_Files
FAL_Client
FAL_Server
The LOG_ARCHIVE_CONFIG parameter enables or disables the sending of redo streams to the
standby sites. The DB_UNIQUE_NAME of the primary database is dg1 and the
DB_UNIQUE_NAME of the standby database is dg2. The primary database is configured to ship
redo log stream to the standby database. In this example, the standby database service is dg2.
Next, STANDBY_FILE_MANAGEMENT is set to AUTO so that when Oracle files are added to or
dropped from the primary database, these changes are made to the standby databases
automatically. STANDBY_FILE_MANAGEMENT is applicable only to physical standby databases.
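Using the dg1/dg2 names from this example, the relevant primary-side parameters can be sketched as follows (a sketch; the exact attribute list will vary by configuration):

```sql
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dg1,dg2)';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=dg2 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dg2';
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
```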
******************************************************************************
DATA COMMANDS
******************************************************************************
In Oracle Database 10g and below, you could open the physical standby database for read-only
activities, but only after stopping the recovery process.
In Oracle 11g, you can query the physical standby database in real time while applying the
archived logs. This means the standby continues to be in sync with the primary, but you can use
the standby for reporting.
SQL> alter database recover managed standby database cancel;
Database altered.
SQL> alter database open read only;
Database altered.
While the standby database is open in read-only mode, you can resume the managed recovery
process:
SQL> alter database recover managed standby database using current logfile disconnect;
Database altered.
Snapshot Standby
In Oracle Database 11g, a physical standby database can be temporarily converted into an
updateable one called a Snapshot Standby Database.
In that mode, you can make changes to the database. Once the test is complete, you can roll
back the changes made for testing and convert the database back into a standby undergoing
normal recovery. This is accomplished by creating a restore point in the database and using the
Flashback Database feature to flash back to that point and undo all the changes.
Steps:
System altered.
System altered.
Database altered.
Database altered.
...
SQL> startup
OPEN_MODE DATABASE_ROLE
---------- ----------------
After your testing is completed, you would want to convert the snapshot standby database back
to a regular physical standby database by following the steps below
Connected.
SQL> startup mount
...
Database mounted.
Database altered.
SQL> shutdown
...
Database mounted.
Now the standby database is back in managed recovery mode. When the database was in
snapshot standby mode, the archived logs from primary were not applied to it. They will be
applied now.
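The conversion steps described above can be sketched with the standard 11g commands (a sketch, not the exact session from this document):

```sql
-- convert the physical standby to a snapshot standby
SQL> alter database recover managed standby database cancel;
SQL> shutdown immediate
SQL> startup mount
SQL> alter database convert to snapshot standby;
SQL> alter database open;
-- after testing, convert back and resume managed recovery
SQL> shutdown immediate
SQL> startup mount
SQL> alter database convert to physical standby;
SQL> shutdown immediate
SQL> startup mount
SQL> alter database recover managed standby database disconnect;
```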
In 10g, a similar result can be achieved manually using the Flashback Database feature (a DR
failover test with flashback).
Redo Compression
In Oracle Database 11g you can compress the redo that goes across to the standby server via
SQL*Net by setting the COMPRESSION attribute of the relevant LOG_ARCHIVE_DEST_n
parameter. This works only for the logs shipped during gap resolution. Here is the command
you can use to enable compression.
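A hedged example, reusing the dg2 standby service from the earlier example (the full destination string will carry your other attributes as well):

```sql
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=dg2 COMPRESSION=ENABLE' SCOPE=BOTH;
```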
******************************************************************************
Setting DIRECT=Y extracts data by reading it directly, bypassing the SGA and the SQL
command-processing layer (the evaluating buffer), so it should be faster. The default value is N.
COMPRESS=Y imports into one extent; it specifies how export will manage the initial extent for
the table data. This parameter is helpful during database re-organization. Export the objects
(especially tables and indexes) with COMPRESS=Y. If a table was spanning 20 extents of 1M
each (which is not desirable from a performance standpoint) and you export it with
COMPRESS=Y, the generated DDL will have an initial extent of 20M, and when importing, the
extents will be coalesced. Sometimes it is desirable to export with COMPRESS=N, in situations
where you do not have contiguous space on disk (tablespace) and do not want imports to fail.
4. How to improve exp performance?
3. If you are running multiple sessions, make sure they write to different disks.
7. It is advisable to drop indexes before importing to speed up the import process, or to set
INDEXES=N and build the indexes after the import. Indexes can easily be recreated after the
data has been successfully imported.
8. Use STATISTICS=NONE
Will write DDLs of the objects in the dumpfile into the specified file.
IGNORE=Y will ignore errors during import and continue the import.
8. What are the differences between expdp and exp (Data Pump or normal exp/imp)?
Data Pump has APIs, so Data Pump jobs can be run from procedures.
Data Pump import will create the user, if user doesn’t exist.
9. Why expdp is faster than exp (or) why Data Pump is faster than conventional export/import?
By using the PARALLEL option, which increases the number of worker processes. This should be
set based on the number of CPUs.
12. In Data Pump, where the jobs info will be stored (or) if you restart a job in Data Pump, how
it will know from where to resume?
Whenever a Data Pump export or import is running, Oracle creates a master table named after
the JOB_NAME, and that table is dropped once the job is done. From this table, Oracle finds out
how much of the job has completed and from where to continue.
Default export job name will be SYS_EXPORT_XXXX_01, where XXXX can be FULL or SCHEMA or
TABLE.
Default import job name will be SYS_IMPORT_XXXX_01, where XXXX can be FULL or SCHEMA or
TABLE.
Tablespaces
Users
Roles
Database links
Sequences
Directories
Synonyms
Types
Tables/Partitions
Views
Comments
Packages/Procedures/Functions
Materialized views
14. How to import only metadata?
CONTENT= METADATA_ONLY
REMAP_SCHEMA
REMAP_TABLESPACE
REMAP_DATAFILE
REMAP_TABLE
REMAP_DATA
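For example, a metadata-only import that remaps a schema can be sketched as follows (the user names, directory, and file name are illustrative):

```sql
impdp system/password directory=DATA_PUMP_DIR dumpfile=full.dmp content=metadata_only remap_schema=scott:ali
```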
******************************************************************************
Oracle Data Pump is a feature of Oracle Database 10g that enables very fast bulk data and
metadata movement between Oracle databases. Oracle Data Pump provides new high-speed,
parallel Export and Import utilities (expdp and impdp) as well as a Web-based Oracle Enterprise
Manager interface.
Data Pump Export and Import utilities are typically much faster than the original Export and
Import utilities. A single thread of Data Pump Export is about twice as fast as original Export,
while Data Pump Import is 15-45 times faster than original Import.
Data Pump jobs can be restarted without loss of data, whether the stoppage was voluntary or
involuntary.
Data Pump jobs support fine-grained object selection. Virtually any type of object can be
included or excluded in a Data Pump job.
Data Pump supports the ability to load one instance directly from another (network import) and
unload a remote instance (network export).
Before you export your database in this example, you must grant the required privilege to the
user: EXP_FULL_DATABASE if you need to export, and IMP_FULL_DATABASE if you need to
import.
Connected.
SQL> GRANT CREATE ANY DIRECTORY TO ORTHONOVC16;
Grant succeeded.
Directory created.
Grant succeeded.
Grant succeeded.
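The commands producing the outputs shown above were likely of the following shape (the directory name and path are assumptions; the user and privileges come from the surrounding text):

```sql
SQL> GRANT CREATE ANY DIRECTORY TO ORTHONOVC16;
SQL> CREATE DIRECTORY dump_dir AS '/u01/dump';
SQL> GRANT READ, WRITE ON DIRECTORY dump_dir TO ORTHONOVC16;
SQL> GRANT EXP_FULL_DATABASE, IMP_FULL_DATABASE TO ORTHONOVC16;
```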
Schema-level export:
Other exports:
ESTIMATE_ONLY = Before exporting your dump file, you can estimate its size using the
parameter below.
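Sketches of a schema-level export and an estimate-only run (the password, directory, and file names are illustrative; ESTIMATE_ONLY=Y takes no dump file):

```sql
expdp ORTHONOVC16/password directory=dump_dir dumpfile=orthonovc16_%U.dmp logfile=orthonovc16.log schemas=ORTHONOVC16
expdp ORTHONOVC16/password directory=dump_dir schemas=ORTHONOVC16 estimate_only=y estimate=blocks
```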
******************************************************************************
Difference Between exp/imp and expdp/impdp
******************************************************************************
If you have worked with databases prior to 10g, you are probably familiar with the exp/imp
utilities of the Oracle database. Oracle 10g introduces a new feature called Data Pump export
and import. Data Pump export/import differs from the original export/import; the differences
are listed below.
1)Impdp/Expdp are self-tuning utilities. Tuning parameters that were used in original Export and
Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump
Export and Import.
2)Data Pump represents metadata in the dump file set as XML documents rather than as DDL
commands.
3)Impdp/Expdp use parallel execution rather than a single stream of execution, for improved
performance.
4)In Data Pump, expdp full=y followed by impdp schemas=prod is the same as expdp
schemas=prod followed by impdp full=y, whereas original export/import does not always
exhibit this behavior.
5)Expdp/Impdp access files on the server rather than on the client.
6)Expdp/Impdp operate on a group of files called a dump file set rather than on a single
sequential dump file.
7)Sequential media, such as tapes and pipes, are not supported by Oracle Data Pump, whereas
in original export/import we could directly compress the dump by using pipes.
8)The Data Pump method for moving data between different database versions is different than
the method used by original Export/Import.
9)When you are importing data into an existing table using either APPEND or TRUNCATE, if any
row violates an active constraint, the load is discontinued and no data is loaded. This is different
from original Import, which logs any rows that are in violation and continues with the load.
10)Expdp/Impdp consume more undo tablespace than original Export and Import.
11)If a table has compression enabled, Data Pump Import attempts to compress the data being
loaded, whereas the original Import utility loaded data in such a way that even if a table had
compression enabled, the data was not compressed upon import.
12)Data Pump supports character set conversion for both direct path and external tables. Most
of the restrictions that exist for character set conversions in the original Import utility do not
apply to Data Pump. The one case in which character set conversions are not supported under
the Data Pump is when using transportable tablespaces.
13)There is no option to merge extents when you re-create tables. In original Import, this was
provided by the COMPRESS parameter. Instead, extents are reallocated according to storage
parameters for the target table.
******************************************************************************
Scenarios
4) Exporting Tablespace.
Example of Exporting Full Database
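A full-export command of the following shape is assumed here (the user credentials and file name are illustrative):

```sql
exp userid=system/manager full=y file=fullexport.dmp
```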
In the above command, the FILE option specifies the name of the dump file, the FULL option
specifies that you want to export the full database, and the USERID option specifies the user
account to connect to the database. Note that to perform a full export the user must have the
DBA or EXP_FULL_DATABASE privilege.
To export Objects stored in a particular schemas you can run export utility with the following
arguments
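A command of the following shape is assumed (the file name is illustrative):

```sql
exp userid=system/manager file=schemas.dmp owner=(scott,ali)
```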
The above command will export all the objects stored in the SCOTT and ALI schemas.
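The table-level command assumed here is (the file name is illustrative):

```sql
exp userid=scott/tiger file=emp_sales.dmp tables=(emp,sales)
```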
This will export scott’s emp and sales tables.
Exporting Tablespaces
Query Mode
From Oracle 8i one can use the QUERY= export parameter to selectively unload a subset of the
data from a table. You may need to escape special chars on the command line, for example:
query=\”where deptno=10\”. Look at these examples:
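For instance, a QUERY-mode export of the department 10 rows from EMP can be sketched as (the file name is illustrative):

```sql
exp scott/tiger tables=emp query=\"where deptno=10\" file=emp10.dmp
```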
If you include the CONSISTENT=Y option in the export command, the Export utility will export a
consistent image of the table, i.e. changes made to the table during the export operation will
not be exported.
exp userid=scott/tiger@orcl parfile=export.txt
BUFFER=10000000
FILE=account.dmp
FULL=n
OWNER=scott
GRANTS=y
COMPRESS=y
o Incremental - will back up only the tables that have changed since the last incremental,
cumulative, or complete export.
o Cumulative - will back up only the tables that have changed since the last cumulative or
complete export.
o Complete - will back up the entire database, with the exception of updates on tables where
incremental and cumulative exports are tracked; the parameter INCTYPE=COMPLETE must be
specified for those tracked tables with updates to be backed up.
o Exporting users - can back up all objects in a particular user's schema, including
tables/data/grants/indexes
o Importing table/s
o Importing user/s
Data Pump expdp/impdp utilities were first introduced in Oracle 10g (10.1). These new utilities
were considered high-performance replacements for the legacy export/import that had been in
use since Oracle 7. Some of the key advantages are: greater data/metadata filtering, better
handling of partitioned tables during import, and support for the full range of data types.
* Expdp/Impdp provide the same categories and modes as the legacy exp/imp utility, plus more:
o user/s maps to schema/s
o file maps to dumpfile
o log maps to logfile
o For the full list of the new parameter mappings, see the Data Pump legacy-mode support
documentation for 11.2, which is the latest: Oracle 11.2 DP_Legacy Parameter Mappings
# dumpfile=exp_test1_09202012_%U.dmp
o Network mode - allows import over dblink
o Estimate - provides the estimated size of dumpfiles so space allocation can be made
beforehand
- You have a backup, done for example with backup database plus archivelog;
i) Your first step is to make sure that the target database is shut down:
$ sqlplus "/ as sysdba"
ii) Next, you need to start up your target database in mount mode.
- RMAN cannot restore datafiles unless the database is at least in mount mode, because
RMAN needs to be able to access the control file to determine which backup sets are necessary
to recover the database.
- If the control file isn't available, you have to recover it first. Issue the STARTUP MOUNT
command shown in the following example to mount the database:
Database mounted.
iii) Use RMAN to restore the database and recover the database.
- When the restore command is executed, RMAN will automatically go to its last good backup
set and restore the datafiles to the state they were in when that backup set was created.
- When restoring database files, RMAN reads the datafile header and makes the
determination as to whether the file needs to be restored. The recovery is done by allocating a
channel for I/O and then issuing the RMAN restore database command.
you don't need to allocate a channel explicitly. Instead, you can use the default channel
mode:
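The default-channel form referred to above can be sketched as (a sketch; the final OPEN assumes complete recovery succeeded):

```sql
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open;
```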
2. System tablespace is missing
In this case complete recovery is performed, only the system tablespace is missing, so the
database can be opened without resetlogs option.
$ rman target /
$ rman target /
$ rman target /
4.Complete Open Database Recovery (when the database is initially closed). Non system
tablespace is missing
A user datafile is reported missing when trying to startup the database. The datafile can be
turned offline and the database started up. Restore and recovery are performed using Rman.
After recovery is performed the datafile can be turned online again.
sqlplus "/ as sysdba"
startup mount
exit;
$rman target /
Specify an accessible location to which you can restore the damaged datafile for the offline
tablespace.
Switch the restored datafile so that the control file considers it the current datafile.
run {
sql 'ALTER TABLESPACE USERS OFFLINE IMMEDIATE';
set newname for datafile '<original_file>' to '<accessible_location>';
restore tablespace users;
switch datafile all;
recover tablespace users;
sql 'ALTER TABLESPACE USERS ONLINE';
}
If a non-system datafile that was not backed up since the last backup is missing, recovery can
be performed if all archived logs since the creation of the missing datafile exist. Since the
database is up, you can check the tablespace name and take it offline. The OFFLINE IMMEDIATE
option is used to avoid updating the datafile header.
$ rman target /
RMAN> sql 'alter database datafile <file#> online';
Always multiplex your control files. If you lose only one control file, you can replace it with the
copy you have in place and start up the database. If all control files are missing, the database
will crash.
Prerequisites: a backup of your control file and all relevant archived logs. When using RMAN,
always set the CONTROLFILE AUTOBACKUP configuration parameter to ON.
rman target /
Make a new complete backup, as the database is open in a new incarnation and the previous
archived logs are not relevant.
Case-2 – Autobackup is not available but controlfile backupset is available
rman target /
RMAN> restore database; --required if datafile(s) have been added after the backup
Make a new complete backup, as the database is open in a new incarnation and the previous
archived logs are not relevant.
Case-3 - If no backup is available, create the control file manually using a script and then
recover as described above.
Note: RMAN automatically searches in specific locations for online and archived redo logs
during recovery that are not recorded in the RMAN repository, and catalogs any that it finds.
RMAN attempts to find a valid archived log in any of the current archiving destinations with the
current log format. The current format is specified in the initialization parameter file used to
start the instance (or all instances in a Real Application Clusters installation). Similarly, RMAN
attempts to find the online redo logs by using the filenames as specified in the control file.
49
8. Incomplete Recovery, Until time/sequence/scn
Incomplete recovery may be necessary when the database crashes and needs to be
recovered, and in the recovery process you find that an archived log is missing. In this case
recovery can only be made until the sequence before the one that is missing. Another scenario
for incomplete recovery occurs when an important object was dropped or incorrect data was
committed on it. In this case recovery needs to be performed until before the object was
dropped.
Pre requisites: A full closed or open database backup and archived logs, the time or sequence
that the 'until' recovery needs to be performed.
shutdown abort
startup nomount
=============================
RMAN> run {
'YYYY/MM/DD HH14:MI:SS')";
restore database;
recover database;
50
Make a new complete backup, as the database is opened in new incarnation and previous
archive logs are not relevant.
9. Recovering After the Loss of All Members of an Online Redo Log Group
If a media failure damages all members of an online redo log group, then different scenarios
can occur depending on the type of online redo log group affected by the failure and the
archiving mode of the database.
If the damaged log group is inactive, then it is not needed for crash recovery; if it is active,
then it is needed for crash recovery.
Clear the archived or unarchived group. (For archive status, check in v$log)
51
1.2 Clearing Inactive, Not-Yet-Archived Redo
OR
(If there is an offline datafile that requires the cleared log to bring it online, then the
keywords UNRECOVERABLE DATAFILE are required. The datafile and its entire tablespace have
to be dropped because the redo necessary to bring it online is being cleared, and there is no
copy of it. )
52
And open database using resetlogs
1) Restore database backup and archive log backup(if hot) to target server.
3) If you dont have a controlfile backup which was taken after the cold backup then take a
control file backup on source.
or
5) On target:
IFILE
*_DUMP_DEST
LOG_ARCHIVE_DEST
53
CONTROL_FILES
$ rman target /
RMAN> list backup ; - Note the scn number or time of the backup you want to restore
$ rman target /
OR
OR
And now…
Note: Above method can also be used where you want to restore database from old backups
instead of latest one.
54
11. Restoring backups from tape.
RMAN> list backup ; -- Note the scn or time of the backup you want to restore.
RMAN> run{
restore database until scn <scn_number>; --scn number as in list backup output
recover database ;
Notes:
1) until scn can be used with recover command as well for incomplete recovery.
Other option is to use set until within run block just before restore.
2) from tag ‘<tag_name>’ can also be used (instead of until clause) with restore command
******************************************************************************
******************************************************************************
**********
Rman Scenarios
******************************************************************************
******************************************************************************
**********
- It is assumed that your control files are still accessible.
- You have a backup, done for example with backup database plus archivelog;
i) Your first step is to make sure that the target database is shut down:
$ sqlplus "/ as sysdba"
ii) Next, you need to start up your target database in mount mode.
- RMAN cannot restore datafiles unless the database is at least in mount mode, because
RMAN needs to be able to access the control file to determine which backup sets are necessary
to recover the database.
- If the control file isn't available, you have to recover it first. Issue the STARTUP MOUNT
command shown in the following example to mount the database:
Database mounted.
iii) Use RMAN to restore the database and recover the database.
- When the restore command is executed, RMAN will automatically go to its last good backup
set and restore the datafiles to the state they were in when that backup set was created.
- When restoring database files, RMAN reads the datafile header and makes the
determination as to whether the file needs to be restored. The recovery is done by allocating a
channel for I/O and then issuing the RMAN restore database command.
If default channels are configured, you don't need to allocate a channel explicitly. Instead, you can use the default channel mode:
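The steps above can be sketched as a minimal RMAN session (assuming default channels and an intact control file):
RMAN> startup mount;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open;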
2. System tablespace is missing
In this case complete recovery is performed; only the system tablespace is missing, so the database can be opened without the resetlogs option.
$ rman target /
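The recovery described above can be sketched as follows (the database must be mounted, since the SYSTEM tablespace cannot be taken offline while the database is open):
RMAN> startup mount;
RMAN> restore tablespace system;
RMAN> recover tablespace system;
RMAN> alter database open;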
$ rman target /
3. Complete Open Database Recovery. Non-system tablespace is missing, database is up
$ rman target /
4. Complete Open Database Recovery (when the database is initially closed). Non-system tablespace is missing
A user datafile is reported missing when trying to start up the database. The datafile can be taken offline and the database started up. Restore and recovery are performed using RMAN. After recovery the datafile can be brought online again.
sqlplus "/ as sysdba"
startup mount
exit;
$ rman target /
5. Restore and recovery of a datafile to a different location (database is up)
Specify an accessible location to which you can restore the damaged datafile for the offline tablespace. Switch the restored datafile so that the control file considers it the current datafile.
run {
sql 'ALTER TABLESPACE USERS OFFLINE IMMEDIATE';
set newname for datafile <file#> to '<new_location>';
restore datafile <file#>;
switch datafile all;
recover datafile <file#>;
sql 'ALTER TABLESPACE USERS ONLINE';
}
6. Recovery of a Datafile that has no backups (database is up)
If a missing non-system datafile has no backup, recovery can still be performed provided all archived logs since the creation of the missing datafile exist. Since the database is up you can check the tablespace name and take it offline. The OFFLINE IMMEDIATE option is used to avoid the update of the datafile header.
$ rman target /
RMAN> sql 'alter tablespace <tbs_name> offline immediate';
RMAN> sql 'alter database create datafile <file#>';
RMAN> recover datafile <file#>;
RMAN> sql 'alter database datafile <file#> online';
7. Loss of control files
Always multiplex your control files. If you lose only one control file you can replace it with the copy you have in place and start up the database. If all copies of the control file are missing, the database will crash.
Prerequisites: a backup of your control file and all relevant archived logs. When using RMAN, always set the controlfile autobackup configuration to ON.
Case-1 – Controlfile autobackup is available
rman target /
RMAN> startup nomount;
RMAN> restore controlfile from autobackup;
RMAN> alter database mount;
RMAN> recover database;
RMAN> alter database open resetlogs;
Make a new complete backup, as the database is open in a new incarnation and previous archived logs are not relevant.
Case-2 – Autobackup is not available but controlfile backupset is available
rman target /
RMAN> startup nomount;
RMAN> restore controlfile from '<backup_piece>';
RMAN> alter database mount;
RMAN> restore database; --required if datafile(s) have been added after the backup
RMAN> recover database;
RMAN> alter database open resetlogs;
Make a new complete backup, as the database is open in a new incarnation and previous archived logs are not relevant.
Case-3 – If no backup is available, create the controlfile manually using script and then
recover as given above.
Note: RMAN automatically searches in specific locations for online and archived redo logs
during recovery that are not recorded in the RMAN repository, and catalogs any that it finds.
RMAN attempts to find a valid archived log in any of the current archiving destinations with the
current log format. The current format is specified in the initialization parameter file used to
start the instance (or all instances in a Real Application Clusters installation). Similarly, RMAN
attempts to find the online redo logs by using the filenames as specified in the control file.
8. Incomplete Recovery, Until time/sequence/scn
Incomplete recovery may be necessary when the database crashes and needs to be
recovered, and in the recovery process you find that an archived log is missing. In this case
recovery can only be made until the sequence before the one that is missing. Another scenario
for incomplete recovery occurs when an important object was dropped or incorrect data was
committed on it. In this case recovery needs to be performed until before the object was
dropped.
Prerequisites: a full closed or open database backup and archived logs, and the time or sequence to which the 'until' recovery needs to be performed.
shutdown abort
startup nomount
RMAN> run {
set until time "to_date('<YYYY/MM/DD HH24:MI:SS>','YYYY/MM/DD HH24:MI:SS')";
restore database;
recover database;
}
RMAN> alter database open resetlogs;
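The sequence-based variant of the same procedure can be sketched as (sequence and thread numbers are illustrative):
RMAN> run {
set until sequence <seq#> thread 1;
restore database;
recover database;
}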
Make a new complete backup, as the database is opened in a new incarnation and previous archived logs are not relevant.
9. Recovering After the Loss of All Members of an Online Redo Log Group
If a media failure damages all members of an online redo log group, then different scenarios
can occur depending on the type of online redo log group affected by the failure and the
archiving mode of the database.
If the damaged log group is inactive, then it is not needed for crash recovery; if it is active,
then it is needed for crash recovery.
Clear the archived or unarchived group. (For archive status, check in v$log)
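The archive status of each group can be checked with a query such as:
SQL> Select Group#, Status, Archived From V$Log;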
1.2 Clearing Inactive, Not-Yet-Archived Redo
SQL> Alter Database Clear Unarchived Logfile Group <group#>;
OR
SQL> Alter Database Clear Unarchived Logfile Group <group#> Unrecoverable Datafile;
(If there is an offline datafile that requires the cleared log to bring it online, then the
keywords UNRECOVERABLE DATAFILE are required. The datafile and its entire tablespace have
to be dropped because the redo necessary to bring it online is being cleared, and there is no
copy of it. )
And open database using resetlogs
1) Restore the database backup and the archive log backup (if hot) to the target server.
3) If you don't have a controlfile backup taken after the cold backup, then take a control file backup on the source:
RMAN> backup current controlfile to '<path/filename.ctl>';
or
5) On target:
IFILE
*_DUMP_DEST
LOG_ARCHIVE_DEST
CONTROL_FILES
$ rman target /
RMAN> list backup ; -- Note the scn number or time of the backup you want to restore
$ rman target /
RMAN> restore database until time '<date/time>';
OR
RMAN> restore database until scn <scn_number>;
OR
RMAN> restore database until sequence <seq#> thread 1;
And now…
Note: The above method can also be used when you want to restore the database from an old backup instead of the latest one.
11. Restoring backups from tape.
RMAN> list backup ; -- Note the scn or time of the backup you want to restore.
RMAN> run {
restore database until scn <scn_number>; -- scn number as in the list backup output
recover database until scn <scn_number>;
}
Notes:
1) UNTIL SCN can be used with the recover command as well for incomplete recovery. Another option is to use SET UNTIL within the run block just before the restore.
2) FROM TAG '<tag_name>' can also be used (instead of the until clause) with the restore command.
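The SET UNTIL alternative mentioned in the notes can be sketched as:
RMAN> run {
set until scn <scn_number>;
restore database;
recover database;
}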
******************************************************************************
******************************************************************************
**********
RMAN COMMANDS
******************************************************************************
******************************************************************************
**********
LIST Commands:
LIST commands are generic RMAN commands that show various types of information when executed within the RMAN utility.
RMAN> list backup;
The above command lists out all the (individual) files that are in the backup.
RMAN> list backup of tablespace system;
The above command lists out all the files in the backup that belong to the tablespace 'system'.
RMAN> list backup of controlfile;
The above command lists out all backups of the control file.
CONFIGURE Commands:
RMAN> configure retention policy to recovery window of <n> days;
The above command indicates how many days the backup copies need to be retained.
RMAN> configure retention policy clear;
The above command resets the retention policy to its default value (REDUNDANCY 1).
RMAN> configure backup optimization on;
The above command ensures that identical files are NOT backed up again to the device specified.
CONFIGURE BACKUP OPTIMIZATION CLEAR;
The above command resets the Optimization option to the default value.
SHOW Commands:
RMAN> show all;
The above command shows all the current configuration settings on the screen.
RMAN> show device type;
The above command shows the default device type configured for backups.
RMAN> show datafile backup copies;
The above command shows the number of backup copies configured for datafiles in the target database.
BACKUP Commands:
Backup commands are the commands which do the actual backup work.
RMAN> backup database plus archivelog;
The above command backs up the target database together with the current control file and archived log files.
REPORT Commands:
Report commands report specific information. The difference between REPORT and LIST is that the REPORT output is in a better format.
DELETE Commands:
Delete commands remove specific items from the backup media and from the repository/catalog.
RMAN> delete obsolete;
The above command deletes all the backups that are obsolete based on the retention policy setup.
RMAN> list backup summary;
The summary of backups includes the backup-set key, status, device type, completion time, etc. It provides a summary of the backups available for each datafile, control file, archived log file and spfile.
Detailed Report
If you want the detailed report on the backups, then issue the following command:
RMAN> list backup;
Expired Backups
The LIST BACKUP command shows both available and expired backups. To view only the expired backups:
RMAN> list expired backup;
RMAN> list backup of tablespace test;
The above list commands displayed information about backup sets. If you have performed image-copy backups then you must use the LIST COPY command, as shown below:
RMAN> list copy of archivelog from sequence 1000 until sequence 1010;
RMAN: Archivelogs lost
Problem: I have lost some of the archive log files without taking a backup. If I run RMAN to back up the available archive logs, it throws an error that an archivelog_seq# is not available.
Solution: Run a crosscheck so that RMAN marks the missing archive logs as expired:
RMAN> crosscheck archivelog all;
Now run the backup archivelog command. RMAN will back up the available archive logs successfully.
Thanks
RMAN incremental backups back up only the blocks that were changed since the latest base incremental backup. But RMAN has to scan the whole database to find the changed blocks. Hence an incremental backup reads the whole database and writes only the changed blocks. Thus incremental backups save space, but the reduction in time is fairly negligible.
Block Change Tracking (BCT) is a new feature in Oracle 10g. BCT enables RMAN to read only the blocks that were changed since the latest base incremental backup. Hence by enabling BCT, RMAN reads only the changed blocks and writes only the changed blocks.
Without BCT, RMAN has to read every block in the database and compare the SCN in the block with the SCN in the base backup. If the block's SCN is greater than the SCN in the base backup, then the block is a candidate for the new incremental backup. Usually only a few blocks are changed between backups, so RMAN does the unnecessary work of reading the whole database.
BCT stores the information about the blocks being changed in the block change tracking file. The background process that does this logging is the Change Tracking Writer (CTWR).
There is one BCT file per database and, in the case of RAC, it is shared among all the instances. The BCT file is created in the location defined by the parameter DB_CREATE_FILE_DEST as an OMF file.
To enable BCT:
SQL> Alter Database Enable Block Change Tracking Using File '/Backup/BCT/bct.ora';
To disable BCT:
SQL> Alter Database Disable Block Change Tracking;
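Once enabled, the tracking file and its status can be verified with a query such as:
SQL> Select Status, Filename From V$Block_Change_Tracking;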
A useful query,
From v$backup_datafile
Where,
Thanks
Posted by Vinod Dhandapani on Monday, March 01, 2010
Restore Vs Recovery
Restore
Restore means using the backup files to replace the original files after a media failure.
Recovery
Recovery means bringing the database up to date using the restored files, archive logs and
online redo logs.
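In RMAN the two phases appear as two distinct commands:
RMAN> restore database; -- copies the backup files back into place
RMAN> recover database; -- applies archived and online redo to bring them up to date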
Thanks
FRA Parameters
1. DB_CREATE_FILE_DEST - default location for OMF datafiles.
2. DB_CREATE_ONLINE_LOG_DEST_n - location for control files and online redo log files. If this parameter is not set then Oracle creates all three types of files in the DB_CREATE_FILE_DEST location.
3. DB_RECOVERY_FILE_DEST_SIZE - Size of FRA.
For eg.,
DB_CREATE_FILE_DEST = /oradata/dbfiles/
LOG_ARCHIVE_DEST_1 = 'LOCATION=/export/archive/arc_dest1'
LOG_ARCHIVE_DEST_2 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST'
DB_RECOVERY_FILE_DEST_SIZE = 350G
DB_RECOVERY_FILE_DEST = '/fra/rec_area'
one copy of current control file and online redo log files are stored in FRA also.
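FRA space usage can be monitored with a query such as:
SQL> Select Space_Limit, Space_Used, Space_Reclaimable From V$Recovery_File_Dest;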
DB_CREATE_ONLINE_LOG_DEST_n : Setting this init parameter enables Oracle to create OMF control files and online redo log files in the location specified by the parameter.
DB_RECOVERY_FILE_DEST : Setting this init parameter enables Oracle to create OMF control
files and online redolog files in the FRA.
Specifying both the above parameters enables Oracle to create OMF-based control files and redo log files in both locations.
Omitting the above parameters causes Oracle to create non-OMF-based control files and redo log files in the system-specific default location.
Thanks
To set up the FRA, you need to set two init parameters in the following order:
SQL> Alter System Set DB_RECOVERY_FILE_DEST_SIZE = <size> Scope=Both;
SQL> Alter System Set DB_RECOVERY_FILE_DEST = '<location>' Scope=Both;
Disable FRA
SQL> Alter System Set DB_RECOVERY_FILE_DEST = '';
Steps to convert the database from Archivelog mode to NoArchivelog mode:
SQL> Shutdown Immediate;
Database Closed
Database Dismounted
Instance Shutdown
SQL> Startup Mount;
.....
Database Mounted
SQL> Alter Database NoArchivelog;
Database Altered
SQL> Alter Database Open;
Database Altered.
Tuesday, July 22, 2008
Oracle databases are created in NOARCHIVELOG mode. This means that you can only take
consistent cold backups.
In order to perform online backups your database must be running in ARCHIVELOG mode.
Archive log files can also be transmitted to and applied at a standby database in a DR (Disaster Recovery) site.
To find out the mode the database is running in, log in as the sys user and issue the following statement:
SQL> Select Log_Mode From V$Database;
LOG_MODE
---------------
Archivelog
Convert NoArchive to ArchiveLog mode
3. Log_archive_format = 'prod_%t_%r_%s.arc'
4. Log_archive_max_processes= 2
5. Log_archive_min_succeed_dest = 1
Step 2:
SQL> SHUTDOWN IMMEDIATE;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> STARTUP MOUNT;
.....
Database mounted.
SQL> ALTER DATABASE ARCHIVELOG;
Database altered.
SQL> Alter Database Open;
Database Altered.
where,
Reopen - (time in secs) retries to archive when it did not succeed the first time. (default 300)
2. Log_archive_format: Format of the archived log filenames. Use any text in combination with
any of the substitution variables to uniquely name the file. The substitution variables are:
%t - Thread number
%s - Log sequence number
%r - Resetlogs ID (ensures uniqueness when you reset the log sequence number)
%d - Database ID
Alternate - Initially deferred. Available when there is a failure in the primary destination.
6. Log_archive_duplex_dest: When you use this parameter you can archive to a maximum of
two destinations. The first destination is given by the log_archive_dest parameter and the
second destination by this parameter.
7. Log_archive_start: Set this parameter to TRUE so that the archiver starts when you restart
the instance. This parameter is not dynamic, so you must restart the instance for it to take
effect. In 10g this parameter has been deprecated; by enabling archivelog mode, the archiver
starts automatically.
"@"
Run a command file.
"@@"
Run a command file in the same directory as another command file that is currently running.
The @@ command differs from the @ command only when run from within a command file.
"ALLOCATE CHANNEL"
"allocOperandList"
A subclause that specifies channel control options such as PARMS and FORMAT.
"ALTER DATABASE"
Mount or open a database.
"archivelogRecordSpecifier"
"BACKUP"
Back up database files, copies of database files, archived logs, or backup sets.
"BLOCKRECOVER"
Recover an individual data block or set of data blocks within one or more datafiles.
"CATALOG"
Add information about a datafile copy, archived redo log, or control file copy to the repository.
"CHANGE"
Mark a backup piece, image copy, or archived redo log as having the status UNAVAILABLE or
AVAILABLE; remove the repository record for a backup or copy; override the retention policy for
a backup or copy.
"completedTimeSpec"
"CONFIGURE"
Configure persistent RMAN settings. These settings apply to all RMAN sessions until explicitly
changed or disabled.
"CONNECT"
Establish a connection between RMAN and a target, auxiliary, or recovery catalog database.
"connectStringSpec"
Specify the username, password, and net service name for connecting to a target, recovery
catalog, or auxiliary database. The connection is necessary to authenticate the user and identify
the database.
"CONVERT"
"CREATE CATALOG"
"CREATE SCRIPT"
"CROSSCHECK"
Determine whether files managed by RMAN, such as archived logs, datafile copies, and backup
pieces, still exist on disk or tape.
"datafileSpec"
"DELETE"
Delete backups and copies, remove references to them from the recovery catalog, and update
their control file records to status DELETED.
"DELETE SCRIPT"
"deviceSpecifier"
"DROP CATALOG"
Remove the schema from the recovery catalog.
"DROP DATABASE"
"DUPLICATE"
Use backups of the target database to create a duplicate database that you can use for testing
purposes or to create a standby database.
"EXECUTE SCRIPT"
"EXIT"
"fileNameConversionSpec"
Specify patterns to transform source to target filenames during BACKUP AS COPY, CONVERT and
DUPLICATE.
"FLASHBACK"
"formatSpec"
"HOST"
Invoke an operating system command-line subshell from within RMAN or run a specific
operating system command.
"keepOption"
Specify that a backup or copy should or should not be exempt from the current retention policy.
"LIST"
"listObjList"
A subclause used to specify which items will be displayed by the LIST command.
"maintQualifier"
A subclause used to specify additional options for maintenance commands such as DELETE and
CHANGE.
"maintSpec"
A subclause used to specify the files operated on by maintenance commands such as CHANGE,
CROSSCHECK, and DELETE.
"obsOperandList"
A subclause used to determine which backups and copies are obsolete.
"PRINT SCRIPT"
"QUIT"
"recordSpec"
A subclause used to specify which objects the maintenance commands should operate on.
"RECOVER"
Apply redo logs and incremental backups to datafiles restored from backup or datafile copies, in
order to update them to a specified time.
"REGISTER"
Register the target database in the recovery catalog.
"RELEASE CHANNEL"
"releaseForMaint"
"REPLACE SCRIPT"
Replace an existing script stored in the recovery catalog. If the script does not exist, then
REPLACE SCRIPT creates it.
"REPORT"
"RESET DATABASE"
Inform RMAN that the SQL statement ALTER DATABASE OPEN RESETLOGS has been executed
and that a new incarnation of the target database has been created, or reset the target
database to a prior incarnation.
"RESTORE"
Restore files from backup sets or from disk copies to the default or a new location.
"RESYNC"
Perform a full resynchronization, which creates a snapshot control file and then copies any new
or changed information from that snapshot control file to the recovery catalog.
"RUN"
Execute a sequence of one or more RMAN commands, which are one or more statements
executed within the braces of RUN.
"SEND"
Send a vendor-specific quoted string to one or more specific channels.
"SET"
Sets the value of various attributes that affect RMAN behavior for the duration of a RUN block
or a session.
"SHOW"
"SHUTDOWN"
Shut down the target database. This command is equivalent to the SQL*Plus SHUTDOWN
command.
"SPOOL"
"SQL"
Execute a SQL statement from within Recovery Manager.
"STARTUP"
Start up the target database. This command is equivalent to the SQL*Plus STARTUP command.
"SWITCH"
Specify that a datafile copy is now the current datafile, that is, the datafile pointed to by the
control file. This command is equivalent to the SQL statement ALTER DATABASE RENAME FILE
as it applies to datafiles.
"UNREGISTER DATABASE"
"untilClause"
A subclause specifying an upper limit by time, SCN, or log sequence number. This clause is
usually used to specify the desired point in time for an incomplete recovery.
"UPGRADE CATALOG"
Upgrade the recovery catalog schema from an older version to the version required by the
RMAN executable.
"VALIDATE"
Examine a backup set and report whether its data is intact. RMAN scans all of the backup pieces
in the specified backup sets and looks at the checksums to verify that the contents can be
successfully restored.
******************************************************************************
******************************************************************************
**********
1. What is RMAN?
Recovery Manager (RMAN) is a utility that can manage your entire Oracle backup and recovery
activities.
INIT.ORA (manually)
2. When you take a hot backup putting Tablespace in begin backup mode, Oracle records SCN
# from header of a database file. What happens when you issue hot backup database in RMAN
at block level backup? How does RMAN mark the record that the block has been backed up ?
How does RMAN know what blocks were backed up so that it doesn't have to scan them again?
Oracle 10g introduced the Block Change Tracking feature. Once enabled, it records the blocks
modified since the last backup and stores the log of them in a block change tracking file.
During backups RMAN uses this file to identify the specific blocks that must be backed up.
This improves RMAN's performance as it does not have to scan whole datafiles to detect
changed blocks.
Logging of changed blocks is performed by the CTWR (Change Tracking Writer) process, which
is also responsible for writing data to the block change tracking file. RMAN uses SCNs at the
block level and the archived redo logs to resolve any inconsistencies in the datafiles from a hot
backup. What RMAN does not require is putting the tablespace in BACKUP mode, thus freezing
the SCN in the header. Rather, RMAN keeps this information in either your control files or in
the RMAN repository (i.e., the recovery catalog).
1.RMAN executable
2.Server processes
3.Channels
4.Target database
A channel is an RMAN server process started when there is a need to communicate with an I/O
device, such as a disk or a tape. A channel is what reads and writes RMAN backup files. It is
through the allocation of channels that you govern I/O characteristics such as:
* Type of I/O device being read or written to, either a disk or an sbt_tape
* Maximum size of files created on I/O devices
Because RMAN manages backup and recovery operations, it requires a place to store necessary
information about the database. RMAN always stores this information in the target database
control file. You can also store RMAN metadata in a recovery catalog schema contained in a
separate database. The recovery catalog is an optional secondary repository of this metadata.
A backup of all or part of your database. This results from issuing an RMAN backup command. A
backup consists of one or more backup sets.
A logical grouping of backup files -- the backup pieces -- that are created when you issue an
RMAN backup command. A backup set is RMAN's name for a collection of files associated with a
backup. A backup set is composed of one or more backup pieces.
A physical binary file created by RMAN during a backup. Backup pieces are written to your
backup medium, whether to disk or tape. They contain blocks from the target database's
datafiles, archived redo log files, and control files. When RMAN constructs a backup piece from
datafiles, there are several rules that it follows:
# A datafile can span backup pieces as long as it stays within one backup set
# Datafiles and control files can coexist in the same backup sets
# Archived redo log files are never in the same backup set as datafiles or control files
RMAN is the only tool that can operate on backup pieces. If you need to restore a file from an
RMAN backup, you must use RMAN to do it. There is no way to manually reconstruct database
files from the backup pieces.
1. Incremental backups that only copy data blocks that have changed since the last backup.
2. Tablespaces are not put in backup mode, thus there is no extra redo log generation during
online backups.
The PREVIEW option of the RESTORE command allows you to identify the backups required to
complete a specific restore operation. The output generated by the command is in the same
format as the LIST command. In addition the PREVIEW SUMMARY command can be used to
produce a summary report with the same format as the LIST SUMMARY command. The
following examples show how these commands are used:
# Spool output to a log file
# Show what files will be used to restore the SYSTEM tablespace’s datafile
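The commands behind the two comments above might look like this (a sketch; the spool path is illustrative):

```sql
RMAN> SPOOL LOG TO '/tmp/restore_preview.log';
RMAN> RESTORE DATABASE PREVIEW;            -- full-database restore preview
RMAN> RESTORE DATABASE PREVIEW SUMMARY;    -- summary form, like LIST SUMMARY
RMAN> RESTORE TABLESPACE system PREVIEW;   -- backups needed for the SYSTEM tablespace
RMAN> SPOOL LOG OFF;
```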
The recovery catalog to be used by RMAN should be created in a separate database other than
the target database, the reason being that the target database will be shut down while
datafiles are restored.
12. How many times does oracle ask before dropping a catalog?
The default is two times: once for the actual command, and once for confirmation.
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
run
Basically this error occurs because the flash recovery area is full. One way to solve it is to
increase the space available to the flash recovery area (DB_RECOVERY_FILE_DEST_SIZE).
rman>backup database;
……………….
piece handle=/u02/app/oracle/flash_recovery_area/TEST/backupset/2005_07_04/o1_mf_ncsnf_TAG20050704T205840_1dmy15cr_.bkp comment=NONE
Oracle Flashback
run {
shutdown immediate;
startup mount;
restore database;
recover database;
alter database open;
}
rman>list backup;
Backup database level=0 is a full backup of the database:
RMAN> backup incremental level=0 database;
You can also use 'backup full database;', which means the same thing as level=0.
18. What is the difference between DELETE INPUT and DELETE ALL command in backup?
Generally speaking, LOG_ARCHIVE_DEST_n points to the disk drive locations where we archive
the files. When a command is issued through RMAN to back up archivelogs, it uses one of the
locations to back up the data. When we specify DELETE INPUT, only the location that was
backed up gets deleted; if we specify DELETE ALL, the logs in all log_archive_dest_n locations
get deleted.
DELETE ALL applies only to archived logs. delete expired archivelog all;
19. How do I backup archive log?
run
run
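The run blocks above were lost from the original; a typical archived-log backup block might look like this (a sketch, channel name illustrative):

```sql
RMAN> RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  BACKUP ARCHIVELOG ALL DELETE INPUT;  -- back up all archived logs, then delete the backed-up copies
}
```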
21. In catalog database, if some of the blocks are corrupted due to system crash, How will you
recover?
22. You have taken a manual backup of a datafile using o/s. How RMAN will know about it?
restrictions:
23. Where RMAN keeps information of backups if you are using RMAN without Catalog?
CATALOG vs NOCATALOG:
The difference is only in who maintains the backup records, such as when the last successful
backup was taken, incremental, differential, etc.
In NOCATALOG mode the target database's control file stores all the information; in CATALOG
mode a separate (catalog) database stores it.
SQL> SELECT sid, totalwork, sofar FROM v$session_longops WHERE sid = 153;
RMAN backup time consumption is much lower compared to a regular online backup, as RMAN
copies only modified blocks.
Central Repository
Incremental Backup
Corruption Detection
1). Copies only the filled blocks, i.e. even if 1000 blocks are allocated to a datafile but only 500
are filled with data, RMAN will back up only those 500 filled blocks.
5). Can create and store backup and recovery scripts.
6). Increases performance through automatic parallelization (allocating channels) and less redo
generation.
RMAN offers three encryption modes: transparent mode, password mode and dual mode.
28. What are the steps required to perform in $ORACLE_HOME for enabling the RMAN backups
with netbackup or TSM tape library software?
I can explain the steps to take an RMAN backup with a TSM tape library as follows:
2. Once you install TDPO, one link is automatically created from the TDPO directory to /usr/lib.
Now we need to create a soft link between the OS and ORACLE_HOME, and edit
/usr/tivoli/tsm/client/oracle/bin/tdpo.opt as follows:
DSMI_ORC_CONFIG /usr/Tivoli/tsm/client/oracle/bin64/dsm.opt
DSMI_LOG /home/tmp/oracle
TDPO_NODE backup
TDPO_PSWDPATH /usr/tivoli/tsm/client/oracle/bin64
TCPPort 1500
passwordaccess prompt
nodename backup
enablelanfree yes
RMAN>run
29. What is the significance of incarnation and DBID in the RMAN backups?
When you have multiple databases you have to set your DBID (Database Id) which is unique to
each database. You have to set this before you do any restore operation from RMAN.
There is a possibility that the incarnation of your database may be different, so it is advised to
reset it to match the current incarnation. If you run the RMAN command ALTER DATABASE
OPEN RESETLOGS, then RMAN resets the target database automatically so that you do not
have to run RESET DATABASE. By resetting the database, RMAN considers the new incarnation
as the current incarnation of the database.
30. List at least 6 advantages of RMAN backups compare to traditional hot backups?
4. Ability to delete the older ARCHIVE REDOLOG files, with the new one's automatically.
31. How do you enable the autobackup for the controlfile using RMAN?
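A sketch of the standard commands (the format path is illustrative):

```sql
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backup/cf_%F';
```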
32. How do you identify what are the all the target databases that are being backed-up with
RMAN database?
You don't have any view to identify whether it is backed up or not. The only option is to
connect to the target database and issue LIST BACKUP; this will give you the backup
information with date and timing.
33. What is the difference between cumulative incremental and differential incremental
backups?
Differential backup: This is the default type of incremental backup which backs up all blocks
changed after the most recent backup at level n or lower.
Cumulative backup: Backup all blocks changed after the most recent backup at level n-1 or
lower.
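The difference can be illustrated with the standard backup commands:

```sql
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;             -- base (level 0) backup
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;             -- differential: changes since last level 1 or 0
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;  -- cumulative: changes since last level 0
```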
34. How do you identify the block corruption in RMAN database? How do you fix it?
First check whether the block is corrupted or not by using this command:
SQL> SELECT file#, block# FROM v$database_block_corruption;
FILE# BLOCK#
2 507
Then connect to RMAN.
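Using the file and block numbers from the output above, the repair might look like this (a sketch):

```sql
-- in RMAN, recover just the corrupted block
RMAN> BLOCKRECOVER DATAFILE 2 BLOCK 507;
```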
35. How do you clone the database using RMAN software? Give brief steps? When do you use
crosscheck command?
Check whether backup pieces, proxy copies, or disk copies still exist.
1) Duplicate
2) Restore.
36. What is the difference between obsolete RMAN backups and expired RMAN backups?
The term obsolete does not mean the same as expired. In short obsolete means "not needed "
whereas expired means "not found."
37. List some of the RMAN catalog view names which contain the catalog information?
RC_DATABASE_INCARNATION RC_BACKUP_COPY_DETAILS
RC_BACKUP_CORRUPTION
rman target /
run
format '%d_%U_%T_%t';
Backup database;
Steps to be followed:
2) At the catalog database, create one new user or use an existing user, and grant that user the
RECOVERY_CATALOG_OWNER privilege.
a) export ORACLE_SID
5) register database;
41. When do you recommend hot backup? What are the pre-reqs?
The archive destination must be set, and LOG_ARCHIVE_START=TRUE (in versions earlier than
10g).
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
'/u01/app/oracle/product/10.2.0/db_2/dbs/snapcf_dba.f'; # default
In Oracle, a logical backup is one taken using either traditional Export/Import or the newer
Data Pump, whereas a physical backup is a copy of the physical OS files of the database.
43. What is RAID? What is RAID0? What is RAID1? What is RAID 10?
RAID (Redundant Array of Independent Disks) combines multiple disks for redundancy and/or
performance. RAID0: striping. RAID1: mirroring. RAID10: striping across mirrored pairs
(combines RAID1 and RAID0).
44. What are things which play major role in designing the backup strategy?
I believe a good backup strategy is not only simply a backup but also a contingency plan. In this
case you should consider the following:
1. How long is the allowable downtime during recovery? If short, you could consider using Data
Guard.
2. How long is the backup window? If short, I would advise using RMAN instead of user-
managed backup.
3. If there is limited disk space for backup, never use user-managed backup.
4. If the database is large, you could consider doing full RMAN backups on a weekend and
incremental backups on weekdays.
5. Schedule your backup at the time of least database activity, to avoid resource contention.
6. The backup script should always be automated via scheduled jobs. This way operators will
never miss a backup period.
7. The retention period should also be considered. Try keeping at least 2 full backups (current
and previous).
Cold backup: shut down the database and copy the datafiles with the help of
O.S. commands. This is simply copying the datafiles just like any other file copy.
Hot backup: the backup is taken while the database is running. The process to take a hot
backup is:
3) after copying, end the backup mode.
The BEGIN BACKUP clause records a timestamp (the checkpoint SCN). It is used for backup
consistency: during restore, the database restores data from the backup up to that timestamp,
and the remaining changes are recovered from the archived logs.
A hot backup is taken while the database is online; a cold backup is taken during the shutdown
period.
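The hot backup steps can be sketched as follows (tablespace name and paths are illustrative):

```sql
SQL> ALTER TABLESPACE users BEGIN BACKUP;
-- copy the datafiles at the OS level, e.g. cp /oradata/users01.dbf /backup/
SQL> ALTER TABLESPACE users END BACKUP;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;  -- ensure the redo covering the copy is archived
```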
46. How do you test that your recovery was successful?
The files on disk that have not previously been backed up will be backed up. They are full and
incremental backup sets, control file auto-backups, archive logs and datafile copies.
48. How to enable Fast Incremental Backup to backup only those data blocks that have
changed?
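A sketch of enabling block change tracking (the tracking-file path is illustrative):

```sql
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/oradata/bct.chg';
SQL> SELECT status, filename FROM v$block_change_tracking;
```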
50. How can you use the CURRENT_SCN column in the V$DATABASE view to obtain the
currentSCN?
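The query is a one-liner:

```sql
SQL> SELECT current_scn FROM v$database;
```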
51. You have taken a manual backup of a datafile using o/s. How RMAN will know about it?
52. In catalog database, if some of the blocks are corrupted due to system crash, How will you
recover?
54. How do you identify the expired, active, obsolete backups? Which RMAN command you
use?
Use the commands:
RMAN> list expired backup;
RMAN> report obsolete;
55. How do you enable the autobackup for the controlfile using RMAN?
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
56. How do you identify what are the all the target databases that are being backed-up with
RMAN database?
You don't have any view to identify whether it is backed up or not. The only option is to
connect to the target database and issue LIST BACKUP; this will give you the backup
information with date and timing.
57. What is the difference between cumulative incremental and differential incremental
backups?
Differential backup: This is the default type of incremental backup which backs up all blocks
changed after the most recent backup at level n or lower.
Cumulative backup: Backup all blocks changed after the most recent backup at level n-1 or
lower
58. Explain how to setup the physical stand by database with RMAN?
Using target database controlfile instead of recovery catalog RMAN configuration parameters
are:
An auxiliary channel is a link to the auxiliary instance. If you do not have automatic channels
configured, then before issuing the DUPLICATE command, manually allocate at least one
auxiliary channel within the same RUN command.
RMAN can also store its backups in an RMAN-exclusive format which is called a backup set. A
backup set is a collection of backup pieces, each of which may contain one or more datafile
backups.
Recovery Manager (or RMAN) is an Oracle-provided utility for backing up, restoring and
recovering Oracle Databases. RMAN ships with the database server and doesn't require a
separate installation. The RMAN executable is located in your ORACLE_HOME/bin directory.
A: It is a unified storage location for all recovery-related files and activities in an Oracle
Database. It includes Control File, Archived Log Files, Flashback Logs, Control File Autobackups,
Data Files, and RMAN files.
A: To define a Flash Recovery Area set the following Oracle Initialization Parameters.
64. How do you use the V$RECOVERY_FILE_DEST view to display information regarding the
flash recovery area?
66. How to use the best practice to use Oracle Managed Files (OMF) to let the Oracle database
create and manage the underlying operating system files of a database?
67. How to enable Fast Incremental Backup to backup only those data blocks that have
changed?
It shows where the block change-tracking file is located, the status of it and the size.
69. How do you use the V$BACKUP_DATAFILE view to display how effective the block change
tracking is in minimizing the incremental backup I/O?
72. How do you backup datafiles and control files?
Use a fast recovery without restoring all backups from their backup location to the location
specified in the controlfile: RMAN will adjust the control file so that the data files point to the
backup file location and then start recovery.
73. Why use RMAN?
2. RMAN was introduced in Oracle 8; it has become simpler with each new version and easier
than user-managed backups.
3. Proper security.
6. Facility of testing validity of backups, and commands like CROSSCHECK to check the status of
backups.
7. Oracle 10g has further optimized incremental backups, which has resulted in improvement
of performance during backup.
11. No extra redo is generated when a backup is taken, compared to user-managed online
backup.
14.Maintains repository of backup metadata.
5.Binary Compression
6.Global Scripting
7.Duration Clause
8.Configure This
A.RMAN> print script full_backup to file 'my_script_file.txt'
Oracle Database 10g provides a new concept of global scripts, which you can execute against
any database registered in the recovery catalog, as long as your RMAN client is connected to
the recovery catalog and a target database simultaneously.
###################################################
If you are in NOARCHIVELOG mode and you lose any datafile, then whether it is a temporary or
permanent media failure, the database will automatically shut down. If the failure is temporary
then correct the underlying hardware and start the database. Usually crash recovery will
recover the committed transactions of the database from the online redo log files. If you have
a permanent media failure then restore the whole database from a good backup. How to
restore a database is as follows:
If a media failure damages datafiles in a NOARCHIVELOG database, then the only option for
recovery is usually to restore a consistent whole database backup. As you are in noarchivelog
mode so you have to understand that changes after taken backup is lost.
If you have a logical backup (an export file), you can import that as well.
In order to recover database in noarchivelog mode you have to follow the following procedure.
SQL>SHUTDOWN IMMEDIATE;
2) If possible, correct the media problem so that the backup database files can be restored to
their original locations.
3) Copy all of the backup control files and datafiles to the default location if you corrected the
media failure. However, you can restore to another location. Remember to restore all of the
files, not only the damaged files.
4) Because online redo logs are not backed up, you cannot restore them with the datafiles and
control files. In order to allow the database to reset the online redo logs, you must do
incomplete recovery:
CANCEL
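Step 4 (incomplete recovery with a reset of the online logs) might look like this in SQL*Plus (a sketch):

```sql
SQL> STARTUP MOUNT;
SQL> RECOVER DATABASE UNTIL CANCEL;   -- type CANCEL at the prompt when done
SQL> ALTER DATABASE OPEN RESETLOGS;
```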
In order to rename your control files or in case of media damage you can copy it to another
location and then by setting (if spfile)
STARTUP NOMOUNT
In order to rename datafiles or online redo log files, first copy them to the new location and
then point the control file to the new location by issuing:
TO '+ORQ/orq1/datafile/system02.dbf';
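A fuller sketch of the rename, reusing the target path from the text (the source path is illustrative):

```sql
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE RENAME FILE '/old_location/system02.dbf'
       TO '+ORQ/orq1/datafile/system02.dbf';
SQL> ALTER DATABASE OPEN;
```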
###################################################
If the datafile that is lost is in the SYSTEM tablespace, or it is a datafile containing active undo
segments, then the database shuts down. If the failure is temporary then correct the
underlying hardware and start the database; usually crash recovery will recover the committed
transactions from the online redo log files.
If the datafile that is lost is not in the SYSTEM tablespace and does not contain active undo
segments, then the affected datafile is taken offline and the database remains open. In order
to fix the problem, take the affected tablespace offline and then recover the tablespace.
77. Outline the steps for recovery with missing online redo logs?
If the CURRENT redo log is lost and if the DB is closed consistently, OPEN RESETLOGS can be
issued directly without any transaction loss. It is advisable to take a full backup of DB
immediately after the STARTUP.
When a current redo log is lost, the transactions in the log file are also lost before making it to
the archived logs. Since a DB startup can no longer perform a crash recovery (the now-available
online log files are not sufficient to start up the DB in a consistent state), an incomplete media
recovery is the only option. We will need to restore the DB from a previous backup and recover
to the point just before the lost redo log file. The DB will need to be opened in RESETLOGS
mode. There is some transaction loss in this scenario.
HH24:MI:SS')";
78. Outline steps for recovery with missing archived redo logs?
If a redo log file is already archived, its loss can safely be ignored. Since all the changes in the
DB are now archived, and the online log file is only waiting for its turn to be re-written by
LGWR (redo log files are written circularly), the loss of the redo log file doesn't matter much. It
may be re-created using the command:
SQL> ALTER DATABASE CLEAR LOGFILE GROUP <n>;
This will re-create all groups and no transactions are lost. The database can be opened normally
after this.
Flash recovery area where you can store not only the traditional components found in a backup
strategy such as control files, archived log files, and Recovery Manager (RMAN) datafile copies
but also a number of other file
components such as flashback logs. The flash recovery area simplifies backup operations, and it
increases the availability of the database because many backup and recovery operations using
the flash recovery area can be performed when the database is open and available to users.
Because the space in the flash recovery area is limited by the initialization parameter
DB_RECOVERY_FILE_DEST_SIZE, the Oracle database keeps track of which files are no longer
needed on disk so that they can be deleted when there is not enough free space for new files.
Each time a file is deleted from the flash recovery area, a message is written to the alert log.
A message is written to the alert log in other circumstances. If no files can be deleted, and the
recovery area used space is at 85 percent, a warning message is issued. When the space used is
at 97 percent, a critical warning is
issued. These warnings are recorded in the alert log file, are viewable in the data dictionary
view DBA_OUTSTANDING_ALERTS , and are available to you on the main page of the EM
Database Control
80. What is Channel? How do you enable the parallel backups with RMAN?
A channel is a link that RMAN requires to the target database. This link is required when
backup and recovery operations are performed and recorded. Channels can be allocated
manually or preconfigured by using the CONFIGURE command.
The number of allocated channels determines the maximum degree of parallelism that is used
during backup, restore or recovery. For example, if you allocate 4 channels for a backup
operation, 4 background processes for the operation can run concurrently.
Parallelization of backup sets allocates multiple channels and assigns files to specific channels.
You can configure parallel backups by setting the PARALLELISM option of the CONFIGURE
command to a value greater than 1, or by manually allocating multiple channels inside a RUN
block.
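Both approaches can be sketched as follows (channel names illustrative):

```sql
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4;
-- or, manually inside a RUN block:
RMAN> RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
  BACKUP DATABASE;
}
```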
RTO: Recovery Time Objective - the maximum amount of time that the database can be
unavailable and still satisfy SLAs.
If you wish to modify your existing backup environment so that all RMAN backups are
encrypted, perform the following steps:
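The configuration step might look like this (a sketch; transparent mode assumes a configured wallet, and the password is illustrative):

```sql
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;        -- transparent mode
RMAN> SET ENCRYPTION ON IDENTIFIED BY MyPwd ONLY;  -- password mode, for this session
```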
Restoring involves copying backup files from secondary storage (backup media) to disk. This can
be done to replace damaged files or to copy/move a database to a new location.
Recovery is the process of applying redo logs to the database to roll it forward. One can roll-
forward until a specific point-in-time (before the disaster occurred), or roll-forward until the last
transaction recorded in the log files.
RMAN> run {
restore database;
recover database;
}
What are the various tape backup solutions available in the market?
Outline the steps for recovering the full database from cold backup?
Explain the steps to perform the point in time recovery with a backup which is taken before the
resetlogs of the db?
Outline the steps involved in TIME based recovery from the full database from hot backup?
Can a schema be restored in Oracle 9i RMAN when the schema has numerous tablespaces?
How do you identify the expired, active, obsolete backups? Which RMAN command you use?
List the steps required to enable the RMAN backup for a target database?
How do you verify the integrity of the image copy in RMAN environment?
Outline the steps involved in SCN based recovery from the full database from hot backup?
Outline the steps involved in CANCEL based recovery from the full database from hot backup?
Is it possible to specific tables when using RMAN DUPLICATE feature? If yes, how?
******************************************************************************
******************************************************************************
**********
MISCELLANEOUS COMMANDS
******************************************************************************
******************************************************************************
**********
sessions_per_user 3
connect_time 1400
idle_time 120
password_life_time 60
password_reuse_time 60
password_reuse_max unlimited
failed_login_attempts 3
password_lock_time unlimited
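The limits listed above belong to a CREATE PROFILE statement; a sketch of the full command (the profile name app_user is a hypothetical example):

```sql
CREATE PROFILE app_user LIMIT
  sessions_per_user       3
  connect_time            1400
  idle_time               120
  password_life_time      60
  password_reuse_time     60
  password_reuse_max      UNLIMITED
  failed_login_attempts   3
  password_lock_time      UNLIMITED;
```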
#drop a profile (the CASCADE option also de-assigns the profile from existing users)
* but this command will fail because this parameter can't be set dynamically
# The old undo tablespace stays active until its last transaction finishes. Only one UNDO
tablespace can be active at a time.
guarantee/noguarantee;
#With RETENTION GUARANTEE, DML may get errors while long-running queries benefit
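A sketch of the undo commands these notes describe (the tablespace name UNDOTBS2 is a hypothetical example):

```sql
-- switch the active undo tablespace; the old one stays active
-- until its last transaction finishes
ALTER SYSTEM SET undo_tablespace = 'UNDOTBS2';

-- guarantee that unexpired undo is retained for queries
-- (DML may then fail with space errors; long-running queries benefit)
ALTER TABLESPACE undotbs2 RETENTION GUARANTEE;
```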
-------------------------
alter system checkpoint;
--------------------
identified by "Pass!"
------------------------
--------------------------
identified by Pass1234;
--------------------------
account unlock;
SQL> select username, status
from dba_users;
--------------------------
account lock;
--------------------------
password expire;
----------------------------------------------
identified externally;
$ sqlplus /
ops$peter
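The fragments above fit into complete CREATE/ALTER USER statements; a sketch with a hypothetical user peter (the external-authentication case assumes os_authent_prefix = 'ops$'):

```sql
ALTER USER peter IDENTIFIED BY Pass1234;   -- change password
ALTER USER peter ACCOUNT UNLOCK;           -- unlock the account
ALTER USER peter ACCOUNT LOCK;             -- lock the account
ALTER USER peter PASSWORD EXPIRE;          -- force a password change

-- OS-authenticated user: the OS user peter can then log in with "sqlplus /"
CREATE USER ops$peter IDENTIFIED EXTERNALLY;
GRANT create session TO ops$peter;
```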
How to assign a role to a user
------------------------------
-------------------------
on orders to salesmgr;
SQL> grant salesmgr to public;
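Putting the fragments of this section together, a sketch of the full sequence (the specific object privileges and the user name are assumptions):

```sql
CREATE ROLE salesmgr;

-- grant object privileges to the role
GRANT select, insert, update ON orders TO salesmgr;

-- assign the role to one user, or to everyone via PUBLIC
GRANT salesmgr TO peter;
GRANT salesmgr TO public;
```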
----------------------------------
-----------------------
1. connect time
2. idle time
4. Password attributes
a. length of a password
a. cpu
Note: Use resource manager instead
------------------------
Limits not specified in a profile fall back to the "default" profile.
sessions_per_user 3
connect_time 120
idle_time 10
failed_login_attempts 5
password_verify_function myfunc;
The default password verification function can be installed by running a file named:
$ORACLE_HOME/rdbms/admin/utlpwdmg.sql
Where is the info for profiles kept?
-----------------------------------
-----------------------------------------
from dba_users;
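The fragment above is presumably part of queries like the following (a sketch):

```sql
-- which profile each user is assigned
SELECT username, profile FROM dba_users;

-- the limits defined in each profile
SELECT profile, resource_name, limit FROM dba_profiles;
```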
# tablespace commands
# creating tablespace
#The following statement creates a locally managed tablespace named sumittbs and specifies
AUTOALLOCATE:
#The following example creates a tablespace with uniform 128K extents. (In a database with 2K
blocks, each extent would be equivalent to 64 database blocks). Each 128K extent is
represented by a bit in the extent bitmap for this file.
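The CREATE TABLESPACE statements these comments describe are missing from the text; hedged sketches follow (datafile paths, the second tablespace name, and sizes are illustrative assumptions):

```sql
-- locally managed tablespace with system-allocated extents
CREATE TABLESPACE sumittbs
  DATAFILE '/u01/oradata/sumittbs01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

-- uniform 128K extents (64 blocks each with a 2K block size)
CREATE TABLESPACE uniformtbs
  DATAFILE '/u01/oradata/uniformtbs01.dbf' SIZE 128M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
```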
#The following statement creates tablespace sumittbs with automatic segment space
management:
#The following statement creates a temporary tablespace in which each extent is 16M. Each
16M extent (which is the equivalent of 8000 blocks when the standard block size is 2K) is
represented by a bit in the bitmap for the file.
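Sketches of the two statements just described (datafile/tempfile paths, the temporary tablespace name, and sizes are illustrative assumptions):

```sql
-- tablespace with automatic segment space management
CREATE TABLESPACE sumittbs
  DATAFILE '/u01/oradata/sumittbs01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

-- temporary tablespace with uniform 16M extents
CREATE TEMPORARY TABLESPACE temptbs
  TEMPFILE '/u01/oradata/temptbs01.dbf' SIZE 160M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;
```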
#The following statements take offline and bring online tempfiles. They behave identically to
the last two ALTER TABLESPACE statements in the previous example.
#The following statement resizes a temporary file:
#The following statement drops a temporary file and deletes the operating system file:
INCLUDING DATAFILES;
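Sketches of the tempfile statements described above (the tempfile path is a hypothetical example):

```sql
ALTER DATABASE TEMPFILE '/u01/oradata/temptbs01.dbf' OFFLINE;
ALTER DATABASE TEMPFILE '/u01/oradata/temptbs01.dbf' ONLINE;

-- resize a temporary file
ALTER DATABASE TEMPFILE '/u01/oradata/temptbs01.dbf' RESIZE 200M;

-- drop a temporary file and delete the operating system file
ALTER DATABASE TEMPFILE '/u01/oradata/temptbs01.dbf'
  DROP INCLUDING DATAFILES;
```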
# renaming tablespace
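A sketch (the new name is a hypothetical example):

```sql
ALTER TABLESPACE users RENAME TO users_new;
```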
#dropping a tablespace
DROP TABLESPACE users INCLUDING CONTENTS AND DATAFILES;
******************************************************************************
******************************************************************************
**********
Here are a few interview questions with answers found on the internet. As I don't have time to
format these questions for the wiki, I am just posting them hoping someone will format them.
1. Explain the difference between a hot backup and a cold backup and the benefits associated
with each.
A hot backup is taken while the database is still up and running, and the database must be in
ARCHIVELOG mode. A cold backup is taken while the database is shut down and does not
require ARCHIVELOG mode. The benefit of a hot backup is that the database remains available
for use while the backup is occurring, and you can recover the database to any point in time.
The benefit of a cold backup is that it is typically easier to administer the backup and recovery
process. In addition, since cold backups do not require ARCHIVELOG mode, there is a slight
performance gain because the database is not writing archive logs to disk.
2. You have just had to restore from backup and do not have any control files. How would you
go about bringing up this database?
I would create a text based backup control file, stipulating where on disk all the data files
were and then issue the recover command with the using backup control file clause.
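A sketch of the commands this answer refers to (run from SQL*Plus after the restored files and the recreated control file are in place):

```sql
-- apply redo using the backup control file, then open with RESETLOGS
RECOVER DATABASE USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;
```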
A data block is the smallest unit of logical storage for a database object. As objects grow they
take chunks of additional storage that are composed of contiguous data blocks. These groupings
of contiguous data blocks are called extents. All the extents that an object takes when grouped
together are considered the segment of the database object.
5. Give two examples of how you might determine the structure of the table DEPT.
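Two typical answers, as a sketch:

```sql
-- SQL*Plus describe
DESCRIBE dept;

-- extract the full DDL from the data dictionary
SELECT dbms_metadata.get_ddl('TABLE', 'DEPT') FROM dual;
```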
6. Where would you look for errors from the database engine?
In the alert log.
Both the TRUNCATE and DELETE commands have the desired outcome of getting rid of all the
rows in a table. The difference between the two is that TRUNCATE is a DDL operation that
simply moves the high-water mark and produces little rollback data. DELETE, on the other
hand, is a DML operation that produces rollback data and thus takes longer to complete.
9. Give the two types of tables involved in producing a star schema and the type of data they
hold.
Fact tables and dimension tables. A fact table contains measurements while dimension tables
will contain data that will help describe the fact tables.
A Bitmap index.
11. Give some examples of the types of database constraints you may find in Oracle and indicate
their purpose.
* A Primary or Unique Key can be used to enforce uniqueness on one or more columns.
12. A table is classified as a parent table and you want to drop and re-create it. How would you
do this without affecting the children tables?
Disable the foreign key constraint to the parent, drop the table, re-create the table, enable
the foreign key constraint.
13. Explain the difference between ARCHIVELOG mode and NOARCHIVELOG mode and the
benefits and disadvantages to each.
ARCHIVELOG mode is a mode that you can put the database in for creating a backup of all
transactions that have occurred in the database so that you can recover to any point in time.
NOARCHIVELOG mode is basically the absence of ARCHIVELOG mode and has the disadvantage
of not being able to recover to any point in time. NOARCHIVELOG mode does have the
advantage of not having to write transactions to an archive log and thus increases the
performance of the database slightly.
14. What command would you use to create a backup control file?
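The usual answer, as a sketch (the binary-copy path is a hypothetical example):

```sql
-- text (trace) backup control file
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

-- or a binary copy
ALTER DATABASE BACKUP CONTROLFILE TO '/u01/backup/control.bkp';
```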
15. Give the stages of instance startup to a usable state where normal users may access it.
STARTUP NOMOUNT - instance startup
ALTER DATABASE MOUNT - the control file is read and the database is mounted
ALTER DATABASE OPEN - the database is opened for normal user access
16. What column differentiates the V$ views to the GV$ views and how?
The INST_ID column which indicates the instance in a RAC environment the information came
from.
Use EXPLAIN PLAN SET STATEMENT_ID = 'tst1' INTO plan_table FOR the SQL statement.
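A sketch of the full statement (the SELECT being explained is a hypothetical example):

```sql
EXPLAIN PLAN SET STATEMENT_ID = 'tst1' INTO plan_table
FOR SELECT * FROM emp;

-- then inspect the plan
SELECT operation, options, object_name
FROM plan_table
WHERE statement_id = 'tst1';
```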
18. How would you go about increasing the buffer cache hit ratio?
Use the buffer cache advisory over a given workload and then query the v$db_cache_advice
table. If a change was necessary then I would use the alter system set db_cache_size command.
You get this error when a query hits a "snapshot too old" condition in rollback/undo. It can
usually be solved by increasing the undo retention or increasing the size of the rollback
segments. You should also look at the logic of the application that is getting the error message.
20. Explain the difference between $ORACLE_HOME and $ORACLE_BASE.
ORACLE_BASE is the root directory for Oracle. ORACLE_HOME, located beneath ORACLE_BASE,
is where the Oracle products reside.
1. How would you determine the time zone under which a database was operating?
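A typical answer, as a sketch:

```sql
SELECT dbtimezone FROM dual;        -- database time zone
SELECT sessiontimezone FROM dual;   -- current session time zone
```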
It ensures the use of consistent naming conventions for databases and links in a networked
environment.
WRAP
A procedure may or may not return a value.
A package is a collection of functions, procedures, and variables that can be logically grouped
together.
7. Where in the Oracle directory tree structure are audit traces placed?
9. When a user process fails, what background process cleans up after it?
PMON
11. How would you determine what sessions are connected and what resources they are
waiting for?
v$session,v$session_wait
12. Describe what redo logs are.
14. Give two methods you could use to determine what DDL changes have been made.
Coalesce simply takes contiguous free extents and makes them into a single bigger free extent.
16. What is the difference between a TEMPORARY tablespace and a PERMANENT tablespace?
A TEMP tablespace gets cleared once the transaction is done, whereas a PERMANENT
tablespace retains the data.
SYSTEM
18. When creating a user, what permissions must you grant to allow them to connect to the
database?
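The minimum is the CREATE SESSION privilege; a sketch (the user name is a hypothetical example):

```sql
GRANT create session TO peter;
```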
19. How do you add a data file to a tablespace?
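A sketch (the datafile path and size are hypothetical examples):

```sql
ALTER TABLESPACE users
  ADD DATAFILE '/u01/oradata/users02.dbf' SIZE 100M;
```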
21. What view would you use to look at the size of a data file?
dba_data_files
22. What view would you use to determine free space in a tablespace?
dba_free_space
23. How would you determine who has added a row to a table?
By implementing an INSERT trigger for logging details during each INSERT operation on the
table
25. Explain what partitioning is and what its benefit is.
Partitioning divides a table or index into smaller pieces (partitions), each stored as its own
segment. Because queries can be pruned to only the relevant partitions, partitioning enhances
the performance, manageability, and availability of large tables.
26. You have just compiled a PL/SQL package but got errors, how would you view the errors?
show errors
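Besides SHOW ERRORS, the errors can be queried from the dictionary; a sketch (the package name is a hypothetical example):

```sql
SHOW ERRORS PACKAGE BODY my_pkg

SELECT line, position, text
FROM user_errors
WHERE name = 'MY_PKG';
```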
exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP') (schema and table are example values)
29. What is the difference between the SQL*Loader and IMPORT utilities?
SQL*Loader loads external data held in OS files into Oracle database tables, while the IMPORT
utility imports only data that was exported by Oracle's EXPORT utility.
30. Name two files used for network connection to a database.
1. Describe the difference between a procedure, function and anonymous pl/sql block.
Candidate should mention use of DECLARE statement, a function must return a value while a
procedure doesn't have to.
2. What is a mutating table error and how can you get around it? This happens with triggers. It
occurs because the trigger is trying to update a row it is currently using. The usual fix involves
either use of views or temporary tables so the database is selecting from one while updating
the other.
3. Describe the use of %ROWTYPE and %TYPE in PL/SQL Expected answer: %ROWTYPE allows
you to associate a variable with an entire table row. The %TYPE associates a variable with a
single column type.
4. What packages (if any) has Oracle provided for use by developers? Expected answer: Oracle
provides the DBMS_ series of packages. There are many which developers should be aware of
such as DBMS_SQL, DBMS_PIPE, DBMS_TRANSACTION, DBMS_LOCK, DBMS_ALERT,
DBMS_OUTPUT, DBMS_JOB, DBMS_UTILITY, DBMS_DDL, UTL_FILE. If they can mention a few of
these and describe how they used them, even better. If they include the SQL routines provided
by Oracle, great, but not really what was asked.
5. Describe the use of PL/SQL tables Expected answer: PL/SQL tables are scalar arrays that can
be referenced by a binary integer. They can be used to hold values for use in later queries or
calculations. In Oracle 8 they will be able to be of the %ROWTYPE designation, or RECORD.
6. When is a declare statement needed ? The DECLARE statement is used in PL/SQL anonymous
blocks such as with stand alone, non-stored PL/SQL procedures. It must come first in a PL/SQL
stand alone file if it is used.
8. What are SQLCODE and SQLERRM and why are they important for PL/SQL developers?
Expected answer: SQLCODE returns the value of the error number for the last error
encountered. The SQLERRM returns the actual error message for the last error encountered.
They can be used in exception handling to report, or, store in an error log table, the error that
occurred in the code. These are especially useful for the WHEN OTHERS exception.
9. How can you find within a PL/SQL block, if a cursor is open? Expected answer: Use the
%ISOPEN cursor status variable.
10. How can you generate debugging output from PL/SQL? Expected answer: Use the
DBMS_OUTPUT package. Another possible method is to just use the SHOW ERROR command,
but this only shows errors. The DBMS_OUTPUT package can be used to show intermediate
results from loops and the status of variables as the procedure is executed. The new package
UTL_FILE can also be used.
11. What are the types of triggers? Expected Answer: There are 12 types of triggers in PL/SQL
that consist of combinations of the BEFORE, AFTER, ROW, TABLE, INSERT, UPDATE, DELETE and
ALL key words: BEFORE ALL ROW INSERT AFTER ALL ROW INSERT BEFORE INSERT AFTER INSERT
etc.
1. A tablespace has a table with 30 extents in it. Is this bad? Why or why not.
Multiple extents in and of themselves aren't bad. However if you also have chained rows this
can hurt performance.
You should always attempt to use the Optimal Flexible Architecture (OFA) standard or another
partitioning scheme to ensure proper separation of SYSTEM, ROLLBACK, REDO LOG, DATA,
TEMPORARY and INDEX segments.
3. You see multiple fragments in the SYSTEM tablespace, what should you check first?
Ensure that users don't have the SYSTEM tablespace as their TEMPORARY or DEFAULT
tablespace assignment by checking the DBA_USERS view.
4. What are some indications that you need to increase the SHARED_POOL_SIZE parameter?
Poor data dictionary or library cache hit ratios, getting error ORA-04031. Another indication is
steadily decreasing performance with all other tuning parameters the same.
5. What is the general guideline for sizing db_block_size and db_multi_block_read for an
application that does many full table scans?
Oracle almost always reads in 64k chunks. The product of the two should equal 64 or a
multiple of 64.
Fetch by rowid
7. Explain the use of TKPROF? What initialization parameter should be turned on to get full
TKPROF output?
The tkprof tool is a tuning tool used to determine cpu and execution times for SQL statements.
You use it by first setting timed_statistics to true in the initialization file and then turning on
tracing for either the entire database via the sql_trace parameter or for the session using the
ALTER SESSION command. Once the trace file is generated you run the tkprof tool against the
trace file and then look at the output from the tkprof tool. This can also be used to generate
explain plan output.
8. When looking at v$sysstat you see that sorts (disk) is high. Is this bad or good? If bad -How do
you correct it?
If you get excessive disk sorts this is bad. This indicates you need to tune the sort area
parameters in the initialization files. The major sort area parameter is the SORT_AREA_SIZE
parameter.
9. When should you increase copy latches? What parameters control copy latches
When you get excessive contention for the copy latches as shown by the "redo copy" latch hit
ratio. You can increase copy latches via the initialization parameter
LOG_SIMULTANEOUS_COPIES to twice the number of CPUs on your system.
10. Where can you get a list of all initialization parameters for your instance? How about an
indication if they are default settings or have been changed
You can look in the init.ora file for an indication of manually set parameters. For all parameters,
their value and whether or not the current value is the default value, look in the v$parameter
view.
11. Describe hit ratio as it pertains to the database buffers. What is the difference between
instantaneous and cumulative hit ratio and which should be used for tuning
The hit ratio is a measure of how many times the database was able to read a value from the
buffers versus how many times it had to re-read a data value from the disks. A value greater
than 80-90% is good; less could indicate problems. If you simply take the ratio of existing
parameters this will be a cumulative value since the database started. If you do a comparison
between pairs of readings based on some arbitrary time span, this is the instantaneous ratio for
that time span. Generally speaking an instantaneous reading gives more valuable data since it
will tell you what your instance is doing for the time it was generated over.
12. Discuss row chaining, how does it happen? How can you reduce it? How do you correct it
Row chaining occurs when a VARCHAR2 value is updated and the length of the new value is
longer than the old value and won't fit in the remaining block space. This results in the row
chaining to another block. It can be reduced by setting the storage parameters on the table to
appropriate values. It can be corrected by export and import of the affected table.
1. Give one method for transferring a table from one schema to another:
There are several possible methods, export-import, CREATE TABLE... AS SELECT, or COPY.
2. What is the purpose of the IMPORT option IGNORE? What is its default setting
The IMPORT IGNORE option tells import to ignore "already exists" errors. If it is not specified,
the tables that already exist will be skipped. If it is specified, the error is ignored and the
table's data will be inserted. The default value is N.
3. You have a rollback segment in a version 7.2 database that has expanded beyond optimal,
how can it be restored to optimal
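The usual answer is to shrink the segment; a sketch (the segment name is a hypothetical example):

```sql
-- shrinks the rollback segment back toward its OPTIMAL size
ALTER ROLLBACK SEGMENT rbs01 SHRINK;
```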
4. If the DEFAULT and TEMPORARY tablespace clauses are left out of a CREATE USER command
what happens? Is this bad or good? Why
The user is assigned the SYSTEM tablespace as a default and temporary tablespace. This is bad
because it causes user objects and temporary segments to be placed into the SYSTEM
tablespace resulting in fragmentation and improper table placement (only data dictionary
objects and the system rollback segment should be in SYSTEM).
5. What are some of the Oracle provided packages that DBAs should be aware of
Oracle provides a number of packages in the form of the DBMS_ packages owned by the SYS
user. The packages used by DBAs may include: DBMS_SHARED_POOL, DBMS_UTILITY,
DBMS_SQL, DBMS_DDL, DBMS_SESSION, DBMS_OUTPUT and DBMS_SNAPSHOT. They may also
try to answer with the UTL*.SQL or CAT*.SQL series of SQL procedures. These can be viewed as
extra credit but aren't part of the answer.
6. What happens if the constraint name is left out of a constraint clause
The Oracle system will use the default name of SYS_Cxxxx where xxxx is a system generated
number. This is bad since it makes tracking which table the constraint belongs to or what the
constraint does harder.
7. What happens if a tablespace clause is left off of a primary key constraint clause
This results in the index that is automatically generated being placed in the user's default
tablespace. Since this will usually be the same tablespace as the table is being created in, this
can cause serious performance problems.
8. What is the proper method for disabling and re-enabling a primary key constraint
You use the ALTER TABLE command for both. However, for the enable clause you must specify
the USING INDEX and TABLESPACE clause for primary keys.
9. What happens if a primary key constraint is disabled and then enabled without fully
specifying the index clause
The index is created in the user's default tablespace and all sizing information is lost. Oracle
doesn't store this information as a part of the constraint definition, but only as part of the
index definition; when the constraint was disabled the index was dropped and the information
is gone.
10. (On UNIX) When should more than one DB writer process be used? How many should be
used
If the UNIX system being used is capable of asynchronous IO then only one is required; if the
system is not capable of asynchronous IO then up to twice the number of disks used by Oracle
should be specified via the db_writers initialization parameter.
11. You are using hot backup without being in archivelog mode, can you recover in the event of
a failure? Why or why not
You can't use hot backup without being in archivelog mode. So no, you couldn't recover.
12. What causes the "snapshot too old" error? How can this be prevented or mitigated
This is caused by large or long running transactions that have either wrapped onto their own
rollback space or have had another transaction write on part of their rollback space. This can be
prevented or mitigated by breaking the transaction into a set of smaller transactions or
increasing the size of the rollback segments and their extents.
13. How can you tell if a database object is invalid? By checking the STATUS column of the DBA_,
ALL_ or USER_OBJECTS views, depending upon whether you own the object, only have
permission on it, or are using a DBA account.
13. A user is getting an ORA-00942 error yet you know you have granted them permission on
the table, what else should you check
You need to check that the user has specified the full name of the object (select empid from
scott.emp; instead of select empid from emp;) or has a synonym that points to the object
(create synonym emp for scott.emp;).
14. A developer is trying to create a view and the database won?t let him. He has the
"DEVELOPER" role which has the "CREATE VIEW" system privilege and SELECT grants on the
tables he is using, what is the problem
You need to verify the developer has direct grants on all tables used in the view. You can't
create a stored object with grants given through roles.
15. If you have an example table, what is the best way to get sizing data for the production
table implementation
The best way is to analyze the table and then use the data provided in the DBA_TABLES view to
get the average row length and other pertinent data for the calculation. The quick and dirty way
is to look at the number of blocks the table is actually using and ratio the number of rows in the
table to its number of blocks against the number of expected rows.
16. How can you find out how many users are currently logged into the database? How can you
find their operating system id
There are several ways. One is to look at the v$session or v$process views. Another way is to
check the current_logins parameter in the v$sysstat view. Another if you are on UNIX is to do a
"ps -ef|grep oracle|wc -l" command, but this only works against a single instance installation.
17. A user selects from a sequence and gets back two values, his select is: SELECT
pk_seq.nextval FROM dual; What is the problem? Somehow two values have been inserted into
the dual table. This table is a single row, single column table that should only have one value in
it.
18. How can you determine if an index needs to be dropped and rebuilt
Run the ANALYZE INDEX command on the index to validate its structure and then calculate the
ratio of LF_BLK_LEN/(LF_BLK_LEN+BR_BLK_LEN) and if it isn't near 1.0 (i.e. greater than 0.7 or
so) then the index should be rebuilt. Or if the ratio BR_BLK_LEN/(LF_BLK_LEN+BR_BLK_LEN) is
nearing 0.3.
By use of the & symbol. For passing in variables the numbers 1-8 can be used (&1, &2,...,&8) to
pass the values after the command into the SQLPLUS session. To be prompted for a specific
variable, place the ampersanded variable in the code itself: "select * from dba_tables where
owner=&owner_name;" . Use of double ampersands tells SQLPLUS to resubstitute the value for
each subsequent use of the variable, a single ampersand will cause a reprompt for the value
unless an ACCEPT statement is used to get the value from the user.
2. You want to include a carriage return/linefeed in your output from a SQL script, how can you
do this
The best method is to use the CHR() function (CHR(10) is a return/linefeed) and the
concatenation operator "||". Another method, although it is hard to document and isn't always
portable, is to use the return/linefeed as a part of a quoted string.
4. How do you execute a host operating system command from within SQL
By use of the exclamation point "!" (in UNIX and some other OS) or the HOST (HO) command.
5. You want to use SQL to build SQL, what is this called and give an example
This is called dynamic SQL. An example would be:
set lines 90 pages 0 termout off feedback off verify off
spool drop_all.sql
select 'drop user '||username||' cascade;' from dba_users
where username not in ('SYS','SYSTEM');
spool off
Essentially you are looking to see that they know to include a command (in this case DROP
USER...CASCADE;) and that you need to concatenate using '||' the values selected from the
database.
6. What SQLPlus command is used to format output from a select
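The expected answer is presumably the COLUMN command; a sketch (the column names are hypothetical examples):

```sql
COLUMN ename FORMAT A20 HEADING 'Employee'
COLUMN sal   FORMAT 99990.99
```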
7. You want to group the following set of select returns, what can you group on
8. What special Oracle feature allows you to specify how the cost based system treats a SQL
statement
The COST based system allows the use of HINTs to control the optimizer path selection. If they
can give some example hints such as FIRST_ROWS, ALL_ROWS, INDEX, STAR, even better.
9. You want to determine the location of identical rows in a table before attempting to place a
unique index on the table, how can this be done
Oracle tables always have one guaranteed unique column, the rowid column. If you use a
min/max function against your rowid and then select against the proposed primary key you can
squeeze out the rowids of the duplicate rows pretty quick. For example: select rowid from emp
e where e.rowid > (select min(x.rowid) from emp x where x.emp_no = e.emp_no); In the
situation where multiple columns make up the proposed key, they must all be used in the
where clause.
10. What is a Cartesian product
A Cartesian product is the result of an unrestricted join of two or more tables. The result set of
a three table Cartesian product will have x * y * z number of rows where x, y, z correspond to
the number of rows in each table involved in the join.
11. You are joining a local and a remote table, the network manager complains about the traffic
involved, how can you reduce the network traffic? Push the processing of the remote data to the
remote instance by using a view to pre-select the information for the join. This will result in only
the data required for the join being sent across.
Ascending
13. What is explain plan and how is it used
The EXPLAIN PLAN command is a tool to tune SQL statements. To use it you must have a plan
table generated for the user you are running the explain plan for. This is created using the
utlxplan.sql script. Once the plan table exists you run the explain plan command giving as its
argument the SQL statement to be explained. The plan table is then queried to see the
execution plan of the statement. Explain plans can also be run using tkprof.
14. How do you set the number of lines on a page of output? The width
The SET command in SQLPLUS is used to control the number of lines generated per page and
the width of those lines, for example SET PAGESIZE 60 LINESIZE 80 will generate reports that are
60 lines long with a line width of 80 characters. The PAGESIZE and LINESIZE options can be
shortened to PAGES and LINES.
The SET option TERMOUT controls output to the screen. Setting TERMOUT OFF turns off screen
output. This option can be shortened to TERM.
16. How do you prevent Oracle from giving you informational messages during and after a SQL
statement execution
The SET options FEEDBACK and VERIFY can be set to OFF.
A correlated subquery is one that has a correlation name as a table or view designator in the
FROM clause of the outer query and the same correlation name as a qualifier of a search
condition in the WHERE clause of the subquery, e.g. (schematically, with placeholder names):
select * from table1 X
where field2 = (select field2 from table2
where field1 = X.field1);
(The subquery in a correlated subquery is re-evaluated for every row of the table or view named
in the outer query.)
Self join: a join in which a foreign key of a table references the same table.
Outer join: a join condition with which you can query all the rows of one of the tables in the
join even when they don't satisfy the join condition.
Equi-join: a join condition that retrieves rows from one or more tables in which one or more
columns in one table are equal to one or more columns in the second table.
Rename is a permanent name given to a table or column, whereas an alias is a temporary name
given to a table or column which no longer exists once the SQL statement is executed.
6. What is a view
A view is a stored query based on one or more tables; it is a virtual table.
7. What are various privileges that a user can grant to another user
A table can have only one PRIMARY KEY whereas there can be any number of UNIQUE keys. The
columns that compose a PRIMARY KEY are automatically defined NOT NULL, whereas a column
in a UNIQUE constraint is not; you must explicitly specify NOT NULL if the column must also be
mandatory.
Yes
By using DISTINCT
SQL*Plus is a command-line tool that serves as the interface and reporting tool for SQL and
PL/SQL; it allows a user to type SQL commands to be executed directly against an Oracle
database. SQL is the language used to query the relational database (DML, DCL, DDL).
SQL*Plus commands are used to format query results, set options, and edit SQL commands and
PL/SQL.
12. Which datatype is used for storing graphics and images
LONG RAW data type is used for storing BLOB's (binary large objects).
13. How will you delete duplicate rows from a base table?
DELETE FROM table_name a WHERE rowid NOT IN (SELECT MAX(rowid) FROM table_name
GROUP BY column_name); Alternatively, create a new table with the distinct rows, drop the old
table, and rename the new table to the old name.
15. There is a string '120000 12 0 .125'; how will you find the position of the decimal point?
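A typical answer, as a sketch; INSTR returns the position of the first '.':

```sql
SELECT INSTR('120000 12 0 .125', '.') AS pos FROM dual;
-- pos = 13
```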
16. There is a '%' sign in one field of a column. What will be the query to find it?
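One way, using an escape character (the table and column names are hypothetical examples):

```sql
-- the escaped \% matches a literal percent sign
SELECT * FROM t WHERE col LIKE '%\%%' ESCAPE '\';
```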
17. When you use WHERE clause and when you use HAVING clause
The HAVING clause is used when you want to specify a condition on a group function; it is
written after the GROUP BY clause. The WHERE clause is used when you want to specify a
condition on columns or single-row functions (not group functions); it is written before the
GROUP BY clause if one is used.
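To illustrate the difference, a sketch against a hypothetical payments table:

```sql
-- WHERE filters individual rows before grouping;
-- HAVING filters whole groups after aggregation
SELECT customer_no, SUM(payment) AS total_paid
FROM payments
WHERE payment > 0              -- row-level condition
GROUP BY customer_no
HAVING SUM(payment) > 1000;    -- group-level condition
```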
EXISTS is generally faster than IN because EXISTS returns a Boolean value, whereas IN compares
against each returned value.
When the result of the subquery is small, IN is typically more appropriate; when the result of
the subquery is large, EXISTS is more appropriate.
Outer join--a join condition used where you can query all the rows of one of the tables in the
join condition even though they don't satisfy the join condition.
20. How will you prevent your query from using indexes
Apply an expression to the indexed column in the WHERE condition, e.g. concatenate the column
with an empty string, so the index on that column cannot be used.
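For example (hypothetical emp table with indexes on ename and empno; this applies to versions without function-based indexes):

```sql
-- Concatenating '' to a character column suppresses its index
SELECT * FROM emp
WHERE ename || '' = 'SMITH';

-- Adding zero to a numeric column has the same effect
SELECT * FROM emp
WHERE empno + 0 = 7369;
```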
Suppose there is a customer table with different columns such as customer no and payments. What
will be the query to select the top three maximum payments?
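One classic answer, as a sketch (assumes a customer table with customer_no and payments columns):

```sql
-- Order the payments descending in an inline view, then keep the first three rows
SELECT *
FROM (SELECT customer_no, payments
      FROM customer
      ORDER BY payments DESC)
WHERE ROWNUM <= 3;
```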
Oracle does not allow a user to specifically locate tables, since that is a part of the function of
the RDBMS. However, for the purpose of increasing performance, Oracle allows a developer to
create a CLUSTER. A CLUSTER provides a means for storing data from different tables together
for faster retrieval than if the table placement were left to the RDBMS.
3. What is a cursor.
Oracle uses a work area to execute SQL statements and store processing information. The PL/SQL
construct called a cursor lets you name a work area and access its stored information. A cursor is
a mechanism used to fetch more than one row in a PL/SQL block.
PL/SQL declares a cursor implicitly for all SQL data manipulation statements, including queries
that return only one row. However, for queries that return more than one row you must declare an
explicit cursor or use a cursor FOR loop.
An explicit cursor is a cursor in which the cursor name is explicitly assigned to a SELECT statement
via the CURSOR...IS statement. An implicit cursor is used for all other SQL statements (Declare, Open,
Fetch, Close are not needed). Explicit cursors are used to process multi-row SELECT statements; an
implicit cursor is used to process INSERT, UPDATE, DELETE and single-row SELECT...INTO statements.
A cursor FOR loop is a loop where Oracle implicitly declares a loop variable, the loop index, of
the same record type as the cursor's record.
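A minimal cursor FOR loop sketch (the emp table and its columns are hypothetical):

```sql
BEGIN
  -- c_emp is implicitly declared as a record matching the cursor's select list;
  -- the cursor is opened, fetched from, and closed automatically
  FOR c_emp IN (SELECT empno, ename FROM emp) LOOP
    DBMS_OUTPUT.PUT_LINE(c_emp.empno || ' ' || c_emp.ename);
  END LOOP;
END;
/
```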
NO_DATA_FOUND is an exception raised only for SELECT...INTO statements, when the WHERE
clause of the query does not match any rows. When the WHERE clause of an explicit cursor does
not match any rows, the %NOTFOUND attribute is set to TRUE instead.
LOOP
FETCH x INTO v_numcredits;  -- fetch restored; assumes cursor x selects the credits value FOR UPDATE
EXIT WHEN x%NOTFOUND;
UPDATE students
SET current_credits = current_credits + v_numcredits
WHERE CURRENT OF x;
END LOOP;
COMMIT;
END;
A cursor variable can be associated with different statements at run time, and so can hold different
result sets at run time. A static cursor can only be associated with one query. A cursor
variable is a reference type (like a pointer in C). Declaring a cursor variable: TYPE type_name IS
REF CURSOR RETURN return_type; where type_name is the name of the reference type and return_type is
a record type indicating the types of the select list that will eventually be returned by the cursor
variable.
11. What should be the return type for a cursor variable.Can we use a scalar data type as return
type.
The return type for a cursor variable must be a record type. It can be declared explicitly as a
user-defined record, or %ROWTYPE can be used, e.g. TYPE t_studentsref IS REF CURSOR RETURN
students%ROWTYPE. OPEN cursor_variable FOR SELECT... associates a cursor variable with a
particular SELECT statement; CLOSE cursor_variable frees the resources used for the query.
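Putting the pieces together, a sketch of declaring, opening, fetching and closing a cursor variable (the students table is hypothetical):

```sql
DECLARE
  TYPE t_studentsref IS REF CURSOR RETURN students%ROWTYPE;
  v_cursor  t_studentsref;
  v_student students%ROWTYPE;
BEGIN
  OPEN v_cursor FOR SELECT * FROM students;  -- associate the variable with a query
  LOOP
    FETCH v_cursor INTO v_student;
    EXIT WHEN v_cursor%NOTFOUND;
  END LOOP;
  CLOSE v_cursor;                            -- free the query's resources
END;
/
```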
In PL/SQL 2.2, cursor variables cannot be declared in a package. This is because the storage for a
cursor variable has to be allocated using Pro*C or OCI; with version 2.2, the only means of
passing a cursor variable to a PL/SQL block is via a bind variable or a procedure parameter.
14. Can cursor variables be stored in PL/SQL tables.If yes how.If not why.
No; a cursor variable points to a query's current row, and such a reference cannot be stored in a
(two-dimensional) PL/SQL table.
Functions are named PL/SQL blocks that return a value and can be called with arguments; a
procedure is a named block that can be called with parameters. A procedure call is a PL/SQL
statement by itself, while a function call is made as part of an expression.
16. What are different modes of parameters used in functions and procedures.
IN, OUT, IN OUT
The variables declared in the calling code and passed as arguments are called actual parameters;
the parameters in the procedure declaration are formal parameters. Actual parameters contain
the values that are passed to a procedure and receive results. Formal parameters are the
placeholders for the values of actual parameters.
Yes
19. Can a function take OUT parameters.If not why.
Yes. A function returns a value, but can also have one or more OUT parameters. It is best
practice, however, to use a procedure rather than a function if you have multiple values to
return.
20. What is the syntax for dropping a procedure and a function? Are these operations possible?
Yes: DROP PROCEDURE procedure_name; and DROP FUNCTION function_name;
Using Oracle precompilers, SQL statements and PL/SQL blocks can be contained inside 3GL
programs written in C, C++, COBOL, PASCAL, FORTRAN, PL/1 and Ada. The precompilers are
known as Pro*C, Pro*COBOL, etc. This form of PL/SQL is known as embedded PL/SQL; the language in
which PL/SQL is embedded is known as the host language. The precompiler translates the
embedded SQL and PL/SQL statements into calls to the precompiler runtime library. The output
must be compiled and linked with this library to create an executable.
The Oracle Call Interface (OCI) is a method of accessing a database from a 3GL program. Uses: no
precompiler is required, and PL/SQL blocks are executed like other DML statements.
The OCI library provides functions to, among other things:
- execute statements
a) A database trigger (DBT) fires when a DML operation is performed on a database table; a form
trigger (FT) fires when the user presses a key or navigates between fields on the screen.
b) A database trigger can be row level or statement level; a form trigger makes no distinction
between row level and statement level.
c) A database trigger can manipulate data stored in Oracle tables via SQL; a form trigger can
manipulate data in Oracle tables as well as variables in forms.
d) A database trigger can be fired from any session executing the triggering DML statement; a
form trigger can be fired only from the form that defines the trigger.
e) A database trigger can cause other database triggers to fire; a form trigger can cause other
database triggers to fire, but not other form triggers.
UTL_FILE is a package that adds the ability to read and write operating system files.
Procedures associated with it are FCLOSE, FCLOSE_ALL, and five procedures to output data to a
file: PUT, PUT_LINE, NEW_LINE, PUTF, FFLUSH. Functions associated with it are FOPEN and
IS_OPEN.
No
26. What is the maximum buffer size that can be specified using the DBMS_OUTPUT.ENABLE
function?
1,000,000
1. When looking at the estat events report you see that you are getting busy buffer waits. Is this
bad? How can you find what is causing it
Buffer busy waits could indicate contention in redo, rollback or data blocks. You need to check
the v$waitstat view to see what areas are causing the problem. The value of the "count"
column tells you where the problem is, and the "class" column tells you with what: UNDO is rollback
segments, DATA is database buffers.
2. If you see contention for library caches how can you fix it
3. If you see statistics that deal with "undo" what are they really talking about
4. If a tablespace has a default pctincrease of zero what will this cause (in relationship to the
smon process)
The SMON process won't automatically coalesce its free space fragments.
5. If a tablespace shows excessive fragmentation what are some methods to defragment the
tablespace? (7.1,7.2 and 7.3 only)
In Oracle 7.0 to 7.2, the alter session set events 'immediate trace name coalesce level ts#';
command is the easiest way to defragment contiguous free space fragmentation. The ts#
parameter corresponds to the ts# value found in the ts$ SYS table. In version 7.3 the
alter tablespace ... coalesce; command is best. If the free space isn't contiguous then export,
drop and import of the tablespace contents may be the only way to reclaim non-contiguous free space.
If a select against the dba_free_space view shows that the count of a tablespace's extents is
greater than the count of its data files, then it is fragmented.
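A sketch of such a check, comparing free-extent and datafile counts per tablespace:

```sql
-- More free-space extents than datafiles suggests fragmentation
SELECT fs.tablespace_name,
       COUNT(*)      AS free_extents,
       MIN(df.files) AS datafiles
FROM dba_free_space fs,
     (SELECT tablespace_name, COUNT(*) AS files
        FROM dba_data_files
       GROUP BY tablespace_name) df
WHERE fs.tablespace_name = df.tablespace_name
GROUP BY fs.tablespace_name;
```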
7. You see the following on a status report: redo log space requests 23, redo log space wait time
0. Is this something to worry about? What if redo log space wait time is high? How can you fix
this?
Since the wait time is zero, no. If the wait time was high it might indicate a need for more
or larger redo logs.
8. What can cause a high value for recursive calls? How can this be fixed
A high value for recursive calls is caused by improper cursor usage, excessive dynamic space
management actions, and/or excessive statement re-parses. You need to determine the cause
and correct it, by either relinking applications to hold cursors, using proper space management
techniques (proper storage and sizing), or ensuring repeated queries are placed in packages for
proper reuse.
9. If you see a pin hit ratio of less than 0.8 in the estat library cache report is this a problem? If
so, how do you fix it
This indicates that the shared pool may be too small. Increase the shared pool size.
10. If you see the value for reloads is high in the estat library cache report is this a matter for
concern
Yes, you should strive for zero reloads if possible. If you see excessive reloads then increase the
size of the shared pool.
11. You look at the dba_rollback_segs view and see that there is a large number of shrinks and
they are of relatively small size, is this a problem? How can it be fixed if it is a problem
A large number of small shrinks indicates a need to increase the size of the rollback segment
extents. Ideally you should have no shrinks or a small number of large shrinks. To fix this just
increase the size of the extents and adjust optimal accordingly.
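The fix can be sketched as follows (segment name, tablespace and sizes are illustrative; pre-automatic-undo rollback segments must be recreated to change their extent size):

```sql
-- Recreate the segment with larger extents and adjust OPTIMAL accordingly
DROP ROLLBACK SEGMENT r01;
CREATE ROLLBACK SEGMENT r01
  TABLESPACE rbs
  STORAGE (INITIAL 1M NEXT 1M OPTIMAL 10M MINEXTENTS 10);
ALTER ROLLBACK SEGMENT r01 ONLINE;
```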
12. You look at the dba_rollback_segs view and see that you have a large number of wraps is
this a problem
A large number of wraps indicates that the extent size for your rollback segments is probably
too small. Increase the size of your extents to reduce the number of wraps. You can look at the
average transaction size in the same view to get information on transaction size.
1. You have just started a new instance with a large SGA on a busy existing server. Performance
is terrible, what should you check for
The first thing to check with a large SGA is that it isn't being swapped out.
2. What OS user should be used for the first part of an Oracle installation (on UNIX)
3. When should the default values for Oracle initialization parameters be used as is
Never
4. How many control files should you have? Where should they be located
At least 2, on separate disk spindles. Be sure they say "on separate disks", not just file systems.
5. How many redo logs should you have and how should they be configured for maximum
recoverability
You should have at least three groups of two redo logs with the two logs each on a separate
disk spindle (mirrored by Oracle). The redo logs should not be on raw devices on UNIX if it can
be avoided.
6. You have a simple application with no "hot" tables (i.e. uniform IO and access requirements).
How many disks should you have assuming standard layout for SYSTEM, USER, TEMP and
ROLLBACK tablespaces
Something like:
In third normal form all attributes in an entity are related to the primary key
and only to the primary key.
8. Is the following statement true or false:
"All relational databases must be in third normal form"
False. While 3NF is good for logical design, most databases, if they have more than just a few
tables, will not perform well using full 3NF. Usually some entities will be denormalized in the
logical-to-physical transfer process.
9. What is an ERD
10. Why are recursive relationships bad? How do you resolve them
A recursive relationship (one where a table relates to itself) is bad when it is a hard relationship
(i.e. neither side is a "may"; both are "must") as this can result in it not being possible to put in a
top or perhaps a bottom of the table (for example, in the EMPLOYEE table you couldn't put in
the PRESIDENT of the company because he has no boss, or the junior janitor because he has no
subordinates). These types of relationships are usually resolved by adding a small intersection
entity.
11. What does a hard one-to-one relationship mean (one where the relationship on both ends is
"must")
Expected answer: This means the two entities should probably be made into one entity.
12. How should a many-to-many relationship be handled
13. What is an artificial (derived) primary key? When should an artificial (or derived) primary
key be used
A derived key comes from a sequence. Usually it is used when a concatenated key becomes too
cumbersome to use as a foreign key.
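A sketch of an artificial key fed from a sequence (the sequence, table and values are hypothetical):

```sql
CREATE SEQUENCE order_seq START WITH 1 INCREMENT BY 1;

-- NEXTVAL supplies the artificial primary key value on each insert
INSERT INTO orders (order_id, customer_no)
VALUES (order_seq.NEXTVAL, 1042);
```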
2. How can you determine if an Oracle instance is up from the operating system level
There are several base Oracle processes that will be running on multi-user operating systems;
these will be smon, pmon, dbwr and lgwr. Any answer that has them using their operating
system's process-listing feature to check for these is acceptable. For example, on UNIX a
ps -ef | grep dbwr will show what instances are up.
3. Users from the PC clients are getting messages indicating: ORA-06114: (Cnct err, can't get err
txt. See Servr Msgs & Codes Manual)
What could the problem be? The instance name is probably incorrect in their connection string.
4. Users from the PC clients are getting the following error stack: ERROR: ORA-01034: ORACLE
not available ORA-07318: smsget: open error when opening sgadef.dbf file. HP-UX Error: 2: No
such file or directory
What is the probable cause? The Oracle instance they are trying to access is shut down;
restart the instance.
5. How can you determine if the SQLNET process is running for SQLNET V1? How about V2
For SQLNET V1 check for the existence of the orasrv process. You can use the command "tcpctl
status" to get a full status of the V1 TCPIP server, other protocols have similar command
formats. For SQLNET V2 check for the presence of the LISTENER process(s) or you can issue the
command "lsnrctl status".
6. What file will give you Oracle instance status information? Where is it located
The alert.ora log. It is located in the directory specified by the background_dump_dest
parameter in the v$parameter table.
7. Users aren't being allowed on the system. The following message is received: ORA-00257:
archiver is stuck. Connect internal only, until freed. What is the problem? The archive destination
is probably full; back up the archive logs and remove them, and the archiver will restart.
8. Where would you look to find out if a redo log was corrupted assuming you are using Oracle
mirrored redo logs
There is no message that comes to the SQLDBA or SRVMGR programs during startup in this
situation, you must check the alert.log file for this information.
9. You attempt to add a datafile and get: ORA-01118: cannot add any more datafiles: limit of 40
exceeded. What is the problem and how can you fix it? When the database was created the
db_files parameter in the initialization file was set to 40. You can shut down and reset this to a
higher value, up to the value of MAXDATAFILES as specified at database creation. If
MAXDATAFILES is set too low, you will have to rebuild the control file to increase it before
proceeding.
10. You look at your fragmentation report and see that smon hasn't coalesced any of your
tablespaces, even though you know several have large chunks of contiguous free extents. What
is the problem
Check the dba_tablespaces view for the value of pct_increase for the tablespaces. If
pct_increase is zero, smon will not coalesce their free space.
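The usual workaround is to give the tablespace a small non-zero pctincrease so SMON resumes coalescing (the tablespace name and value here are illustrative):

```sql
-- A default pctincrease of 1 is enough to make SMON coalesce free space
ALTER TABLESPACE users
  DEFAULT STORAGE (PCTINCREASE 1);
```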
11. Your users get the following error: ORA-00055 maximum number of DML locks exceeded
What is the problem and how do you fix it? The number of DML locks is set by the initialization
parameter DML_LOCKS. If this value is set too low (which it is by default) you will get this error.
Increase the value of DML_LOCKS. If you are sure that this is just a temporary problem, you can
have them wait and then try again later; the error should clear.
12. You get a call from your backup DBA while you are on vacation. He has corrupted all of the
control files while playing with the ALTER DATABASE BACKUP CONTROLFILE command. What do
you do
As long as all datafiles are safe and he was successful with the BACKUP CONTROLFILE command,
you can do the following:
CONNECT INTERNAL
STARTUP MOUNT
(Take any read-only tablespaces offline before the next step: ALTER DATABASE DATAFILE .... OFFLINE;)
RECOVER DATABASE USING BACKUP CONTROLFILE
ALTER DATABASE OPEN RESETLOGS;
(Bring read-only tablespaces back online.)
Shut down and back up the system, then restart.
If they have a recent output file from the ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
command, they can use that to recover as well. If no backup of the control file is available then
the following will be required:
CONNECT INTERNAL
STARTUP NOMOUNT
CREATE CONTROLFILE .....;
However, they will need to know all of the datafiles, logfiles, and settings for MAXLOGFILES,
MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES for the database to use the command.
1. How would you determine the time zone under which a database was operating?
2. Explain the use of setting GLOBAL_NAMES equal to TRUE.
3. What command would you use to encrypt a PL/SQL application?
4. Explain the difference between a FUNCTION, PROCEDURE and PACKAGE.
5. Explain the use of table functions.
6. Name three advisory statistics you can collect.
7. Where in the Oracle directory tree structure are audit traces placed?
8. Explain materialized views and how they are used.
9. When a user process fails, what background process cleans up after it?
10. What background process refreshes materialized views?
11. How would you determine what sessions are connected and what resources they are waiting for?
12. Describe what redo logs are.
13. How would you force a log switch?
14. Give two methods you could use to determine what DDL changes have been made.
15. What does coalescing a tablespace do?
16. What is the difference between a TEMPORARY tablespace and a PERMANENT tablespace?
17. Name a tablespace automatically created when you create a database.
18. When creating a user, what permissions must you grant to allow them to connect to the database?
19. How do you add a data file to a tablespace?
20. How do you resize a data file?
21. What view would you use to look at the size of a data file?
22. What view would you use to determine free space in a tablespace?
23. How would you determine who has added a row to a table?
24. How can you rebuild an index?
25. Explain what partitioning is and what its benefit is.
26. You have just compiled a PL/SQL package but got errors, how would you view the errors?
27. How can you gather statistics on a table?
28. How can you enable a trace for a session?
29. What is the difference between the SQL*Loader and IMPORT utilities?
30. Name two files used for network connection to a database.
1. In a system with an average of 40 concurrent users you get the following from a query on
rollback extents:
--------------------------
R01 11
R02 8
R03 12
R04 9
SYSTEM 4
2. You have room for each to grow by 20 more extents each. Is there a problem? Should you
take any action?
No, there is not a problem. You have 40 extents showing and an average of 40 concurrent users.
Since there is plenty of room to grow, no action is needed.
3. You see multiple extents in the temporary tablespace. Is this a problem
As long as they are all the same size this isn't a problem. In fact, it can even improve
performance since Oracle won't have to create a new extent when a user needs one.
4. Define OFA.
OFA stands for Optimal Flexible Architecture. It is a method of placing directories and files in an
Oracle system so that you get the maximum flexibility for future tuning and file placement.
The answer here should show an understanding of the separation of redo and rollback, data and
indexes, and isolation of SYSTEM tables from other tables. An example would be to specify that
at least 7 disks should be used for an Oracle installation so that you can place the SYSTEM
tablespace on one, redo logs on two (mirrored redo logs), the TEMPORARY tablespace on
another, the ROLLBACK tablespace on another, and still have two for DATA and INDEXES. They
should indicate how they will handle archive logs and exports as well. As long as they have a
logical plan for combining or further separation, more or fewer disks can be specified.
6. What should be done prior to installing Oracle (for the OS and the disks)
Adjust kernel parameters or OS tuning parameters in accordance with the installation guide. Be
sure enough contiguous disk space is available.
7. You have installed Oracle and you are now setting up the actual instance. You have been
waiting an hour for the initialization script to finish, what should you check first to determine if
there is a problem
Check to make sure that the archiver isn't stuck. If archive logging is turned on during install, a
large number of logs will be created. This can fill up your archive log destination, causing Oracle
to stop and wait for more space.
SQLNET.ORA, TNSNAMES.ORA
10. What must be installed with ODBC on the client in order for it to work with Oracle
SQLNET and PROTOCOL (for example: TCPIP adapter) layers of the transport programs.
* What Oracle products have you worked with?
* Compare Oracle to any other database that you know. Why would you prefer to work on
one and not on the other?
******************************************************************************
******************************************************************************
******
Interview Questions
******************************************************************************
******************************************************************************
******
• Cloning is the process of creating an identical copy of the Oracle application system.
• Moving an existing system to a different machine.
Ans : Rapid Clone is the new cloning utility introduced in Release 11.5.8. Rapid Clone leverages
the new installation and configuration technology utilized by Rapid Install.
Ans : First, verify system is AutoConfig enabled. Then, verify that you have applied the latest
Rapid Clone patch.
Ans :
1. Run adpreclone on the source as both users: perl adpreclone.pl dbTier as the oracle user, and
perl adpreclone.pl appsTier as the applmgr user.
6. Run perl adcfgclone.pl dbTier as the oracle user if the backup type is cold.
7. If the backup type is hot backup, then run perl adcfgclone.pl dbTechStack; create the control
file on the target from the control-file trace script taken from the source; recover the
database; ALTER DATABASE OPEN RESETLOGS.
10. Run autoconfig with the ports changed as per requirement in xml.
• You must login as the owner of file system once the database cloning is done.
• Except for a target system having more than one application tier server node.
• Collect the details for processing node, admin node, forms node, and web node.
• Now you get a prompt for the various mount point details and it creates the context file for
you.
6. What are the files you need to copy from APPL_TOP for creating a clone application system?
• APPL_TOP
• OA_HTML
• OA_JAVA
• OA_JRE_TOP
• $COMMON_TOP/util
• $COMMON_TOP/clone
• 806 ORACLE_HOME
• iAS ORACLE_HOME
• 806 ORACLE_HOME: preserves the patch level and Oracle inventory
• APPL_TOP and Database: preserves the patch level and history tables.
• adpreclone.pl prepares the source system and adcfgclone.pl configures the target system.
• It also creates generic templates of files containing source-specific hardcoded values.
9. What are the pre-upgrade steps that need to be taken for the upgrade of a non-11i instance
to 11.5.10?
Ans: A clone usually takes around 48 hours to copy and configure; upgrade time depends on the
database size and the modules involved. An upgrade from 11.5.9 to 11.5.10.2 will take around
3-4 days, and an 11i to R12 upgrade will take around 4-5 days.
Ans:
The FND_NODES table contains node information. If you have cloned a test instance from
production, the node information of production will still be present in the test instance after
the clone.
USER is "APPS"
SQL> commit;
Commit complete.
This will delete all the entries in the fnd_nodes table; to populate it with target system node
information, run AutoConfig on the DB node and the Applications node.
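The cleanup usually done here is the standard FND_CONC_CLONE procedure, shown as a sketch (run as the APPS user):

```sql
-- Clears FND_NODES and related topology tables copied over from the source
EXEC FND_CONC_CLONE.SETUP_CLEAN;
COMMIT;
-- Then run AutoConfig on the DB and Applications nodes to repopulate them
```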
Ans : $RDBMS_ORACLE_HOME/appsutil/scripts/
Ans : $RDBMS_ORACLE_HOME/appsutil/clone/bin
15. What is the location of adpreclone.pl for applmgr user?
Ans : $COMMON_TOP/admin/scripts/
Ans : $COMMON_TOP/clone/bin
Ans : If the clone directory exists under $RDBMS_ORACLE_HOME/appsutil for the oracle user and
under $COMMON_TOP for the applmgr user.
******************************************************************************
******************************************************************************
******
This article is about cloning an ASM database to another ASM database without using RMAN.
This is purely experimental and not recommended by Oracle. The Oracle version I tried is
11.2.0.1.0 with no patch installed.
I am using the old way of backing up the data files by putting the tablespaces in backup mode:
put each tablespace into backup mode and copy its files to the local file system using the
asmcmd cp command.
Log in as sys to the database and issue BEGIN BACKUP for each tablespace.
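For example (shown for the users tablespace; repeat per tablespace, with END BACKUP issued after its files have been copied):

```sql
-- Hot-backup mode: datafile headers are frozen while the files are copied
ALTER TABLESPACE users BEGIN BACKUP;
-- ... copy the tablespace's datafiles with asmcmd cp ...
ALTER TABLESPACE users END BACKUP;
```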
exit;
asmcmd
ASMCMD>cp +DATA/ORCL/DATAFILE/MY_TS.266.727878895
/local/mnt/bk/datafile/MY_TS.dbf
ASMCMD>cp +DATA/ORCL/DATAFILE/SYSAUX.257.727526219
/local/mnt/bk/datafile/SYSAUX.dbf
ASMCMD>cp +DATA/ORCL/DATAFILE/SYSTEM.256.727526219
/local/mnt/bk/datafile/SYSTEM.dbf
ASMCMD>cp +DATA/ORCL/DATAFILE/USERS.259.727526219 /local/mnt/bk/datafile/USERS.dbf
ASMCMD>cp +DATA/ORCL/DATAFILE/UNDOTBS1.258.727526219
/local/mnt/bk/datafile/UNDOTBS1.dbf
cd /local/mnt/rman/
STARTUP NOMOUNT;
CREATE CONTROLFILE SET DATABASE "TRDB" RESETLOGS
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 (
'+DATA1/trdb/onlinelog/log1a.dbf',
'+FRA1/trdb/onlinelog/log1b.dbf'
) SIZE 50M,  -- group SIZE values are assumed; match the source online log size
GROUP 2 (
'+DATA1/trdb/onlinelog/log2a.dbf',
'+FRA1/trdb/onlinelog/log2b.dbf'
) SIZE 50M,
GROUP 3 (
'+DATA1/trdb/onlinelog/log3a.dbf',
'+FRA1/trdb/onlinelog/log3b.dbf'
) SIZE 50M
DATAFILE
'+DATA1/trdb/datafile/system.dbf',
'+DATA1/trdb/datafile/sysaux.dbf',
'+DATA1/trdb/datafile/undotbs1.dbf',
'+DATA1/trdb/datafile/users.dbf',
'+DATA1/trdb/datafile/my_ts.dbf';
Once all datafiles are copied, we need to get the online logs or archive logs to recover our
database at the target site.
Go to the alert log of the source database, find out which one is the current online log, and
then copy the online log to the local file system using the above method.
Once all files are on the local file system we can scp the files to the destination/target machine.
Now we need to copy the local files to the ASM disk using the asmcmd cp command.
asmcmd
ASMCMD> cp /opt/mis/rman/datafile/MY_TS.dbf
+DATA/TRDB/DATAFILE/MY_TS.266.727878895
Perform this for each datafile. The next step is to copy the current logfile to the ASM disk. This
can be a temporary directory on the ASM disk.
asmcmd
ASMCMD>mkdir +DATA/tmp
Copy the init.ora file from the source database to the target Oracle home dbs directory and edit
it to look similar to the following one.
*.audit_file_dest='/local/mnt/oracle/admin/TRDB/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.control_files='+DATA1/trdb/controlfile/control01.dbf','+FRA1/trdb/controlfile/control02.dbf'
*.db_block_size=8192
*.db_create_file_dest='+DATA1'
*.db_domain=''
*.db_name='TRDB'
*.db_recovery_file_dest='+FRA1'
*.db_recovery_file_dest_size=4070572032
*.diagnostic_dest='/local/mnt/admin/TRDB'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=ORCLXDB)'
*.open_cursors=300
*.pga_aggregate_target=169869312
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=509607936
*.undo_tablespace='UNDOTBS1'
We are ready to recover the database using the create controlfile script generated at the source
site. The following ASM directories need to exist on the target (they can be created with
asmcmd mkdir):
+DATA1/trdb/onlinelog/
+FRA1/trdb/datafile/
+DATA1/trdb/controlfile/
+FRA1/trdb/controlfile/
+DATA1/tmp/
Log in as sys and run the create controlfile script:
TRDB:linux:oracle$ si
SQL> @/tmp/cr_control_file.sql
SQL>
ORA-00289: suggestion : +FRA1
+DATA1/trdb/bklog/logfile2.dbf
Log applied.
Database altered.
SQL> @add_tempfile.sql
We are done! So ASM files can be copied to a local file system and moved between machines.
******************************************************************************
******************************************************************************
******
Backup and recovery is one of the most important aspects of a DBA's job. If you lose your
company's data, you could very well lose your job. Hardware and software can always be
replaced, but your data may be irreplaceable!
Normally one would schedule a hierarchy of daily, weekly and monthly backups; however,
consult with your users before deciding on a backup schedule. Backup frequency normally
depends on the following factors:
* A read-only tablespace needs backing up just once, right after you make it read-only
* If you are running in ARCHIVELOG mode you can back up parts of a database over an extended
cycle of days
* If archive logging is enabled, one needs to back up archived log files promptly to prevent
database freezes
* Etc.
Carefully plan backup retention periods. Ensure enough backup media (tapes) are available and
that old backups are expired in time to make media available for new backups. Off-site vaulting
is also highly recommended.
Frequently test your ability to recover and document all possible scenarios. Remember, it's the
little things that will get you. Most failed recoveries are a result of organizational errors and
miscommunication.
* Export/Import - Exports are "logical" database backups in that they extract logical
definitions and data from the database to a file. See the Import/ Export FAQ for more details.
* Cold or Off-line Backups - shut the database down and back up ALL data, log, and control
files.
* Hot or On-line Backups - If the database is available and in ARCHIVELOG mode, set the
tablespaces into backup mode and backup their files. Also remember to backup the control files
and archived redo log files.
* RMAN Backups - while the database is off-line or on-line, use the "rman" utility to backup
the database.
It is advisable to use more than one of these methods to back up your database. For example, if
you choose to do on-line database backups, also cover yourself by doing database exports. Also
test ALL backup and recovery scenarios carefully. It is better to be safe than sorry.
Regardless of your strategy, also remember to back up all required software libraries, parameter
files, password files, etc. If your database is in ARCHIVELOG mode, you also need to back up
archived log files.
A hot (or on-line) backup is a backup performed while the database is open and available for
use (read and write activity). Except for Oracle exports, one can only do on-line backups when
the database is in ARCHIVELOG mode.
A cold (or off-line) backup is a backup performed while the database is off-line and unavailable
to its users. Cold backups can be taken regardless of whether the database is in ARCHIVELOG or
NOARCHIVELOG mode.
It is easier to restore from off-line backups as no recovery (from archived logs) would be
required to make the database consistent. Nevertheless, on-line backups are less disruptive and
don't require database downtime.
Point-in-time recovery (regardless of whether you do on-line or off-line backups) is only
available when the database is in ARCHIVELOG mode.
Restoring involves copying backup files from secondary storage (backup media) to disk. This can
be done to replace damaged files or to copy/move a database to a new location.
Recovery is the process of applying redo logs to the database to roll it forward. One can roll-
forward until a specific point-in-time (before the disaster occurred), or roll-forward until the last
transaction recorded in the log files.
RMAN> run {
restore database;
recover database;
}
This is probably not the appropriate time to be sarcastic, but recovery without backups is not
supported. You know that you should have tested your recovery strategy, and that you should
always back up a corrupted database before attempting to restore/recover it.
Nevertheless, Oracle Consulting can sometimes extract data from an offline database using a
utility called DUL (Disk UnLoad - Life is DUL without it!). This utility reads data in the data files
and unloads it into SQL*Loader or export dump files. Hopefully you'll then be able to load the
data into a working database.
Note that DUL does not care about rollback segments, corrupted blocks, etc, and can thus not
guarantee that the data is not logically corrupt. It is intended as an absolute last resort and will
most likely cost your company a lot of money!
DUDE (Database Unloading by Data Extraction) is another non-Oracle utility that can be used to
extract data from a dead database. More info about DUDE is available at
http://www.ora600.nl/.
Oracle exports are "logical" database backups (not physical) as they extract data and logical
definitions from the database into a file. Other backup strategies normally back-up the physical
data files.
One of the advantages of exports is that one can selectively re-import tables; however, one
cannot roll forward from a restored export. To completely restore a database from an export
file one practically needs to recreate the entire database.
Always do full system level exports (FULL=YES). Full exports include more information about the
database in the export file than user level exports. For more information about the Oracle
export and import utilities, see the Import/Export FAQ.
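As an illustration of a full system-level export (the credentials and file paths below are placeholders, not values from this FAQ), the classic exp invocation looks like this:

```
exp system/manager FULL=YES FILE=/backup/full_export.dmp LOG=/backup/full_export.log
```

The matching restore would use the imp utility with the same FILE parameter.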
The main reason for running in archivelog mode is that one can provide 24-hour availability and
guarantee complete data recoverability. It is also necessary to enable ARCHIVELOG mode
before one can start to use on-line database backups.
SQL> ARCHIVE LOG START;
Alternatively, add the above commands into your database's startup command script, and
bounce the database.
log_archive_start = TRUE
log_archive_dest_1 = 'LOCATION=/arch_dir_name'
log_archive_dest_state_1 = ENABLE
log_archive_format = %d_%t_%s.arc
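With those parameters in place, the usual sequence for switching the database into ARCHIVELOG mode is the following (a sketch of the standard commands; verify against your Oracle version's documentation):

```sql
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ARCHIVE LOG LIST;   -- confirm "Database log mode: Archive Mode"
```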
NOTE 1: Remember to take a baseline database backup right after enabling ARCHIVELOG mode.
Without it one would not be able to recover. Also, implement an archivelog backup to prevent
the archive log directory from filling up.
NOTE 2: ARCHIVELOG mode was introduced with Oracle 6, and is essential for database
point-in-time recovery. Archiving can be used in combination with on-line and off-line database
backups.
NOTE 3: You may want to set the following INIT.ORA parameters when enabling ARCHIVELOG
mode: log_archive_start=TRUE, log_archive_dest=..., and log_archive_format=...
NOTE 4: You can change the archive log destination of a database on-line with the ARCHIVE LOG
START TO 'directory'; statement. This statement is often used to switch archiving between a set
of directories.
NOTE 5: When running Oracle Real Application Clusters (RAC), you need to shut down all nodes
before changing the database to ARCHIVELOG mode. See the RAC FAQ for more details.
The following INIT.ORA/SPFILE parameter can be used if your current redo logs are corrupted or
blown away. It may also be handy if you do database recovery and one of the archived log files
is missing and cannot be restored.
NOTE: Caution is advised when enabling this parameter as you might end-up losing your entire
database. Please contact Oracle Support before using it.
_allow_resetlogs_corruption = true
This should allow you to open the database. However, after using this parameter your database
will be inconsistent (some committed transactions may be lost or partially applied).
Steps:
* If the database asks for recovery, use an UNTIL CANCEL type recovery and apply all available
archive and on-line redo logs, then issue CANCEL and reissue the "ALTER DATABASE OPEN
RESETLOGS;" command.
* Do a "SHUTDOWN NORMAL"
Shut down the database from SQL*Plus or Server Manager. Back up all files to secondary storage
(e.g. tapes). Ensure that you back up all data files, all control files and all log files. When
completed, restart your database.
Do the following queries to get a list of all files that need to be backed up:
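A typical set of such queries (a sketch using the standard V$ views) is:

```sql
SELECT name FROM sys.v_$datafile;     -- all datafiles
SELECT member FROM sys.v_$logfile;    -- all online redo log members
SELECT name FROM sys.v_$controlfile;  -- all control files
```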
Sometimes Oracle takes forever to shut down with the "immediate" option. As a workaround to
this problem, shut down using these commands:
shutdown abort
startup restrict
shutdown immediate
Note that if your database is in ARCHIVELOG mode, one can still use archived log files to roll
forward from an off-line backup. If you cannot take your database down for a cold (off-line)
backup at a convenient time, switch your database into ARCHIVELOG mode and perform hot
(on-line) backups.
Each tablespace that needs to be backed up must be switched into backup mode before
copying the files out to secondary storage (tapes). Look at this simple example:
ALTER TABLESPACE xyz BEGIN BACKUP;
! cp xyzFile1 /backupDir/
ALTER TABLESPACE xyz END BACKUP;
It is better to back up tablespace by tablespace than to put all tablespaces in backup mode at
once. Backing them up separately incurs less overhead. When done, remember to back up your
control files. Look at this example:
ALTER SYSTEM SWITCH LOGFILE; -- Force log switch to update control file headers
ALTER DATABASE BACKUP CONTROLFILE TO '/backupDir/control.bkp'; -- example path
NOTE: Do not run on-line backups during peak processing periods. Oracle will write complete
database blocks instead of the normal deltas to redo log files while in backup mode. This will
lead to excessive database archiving and even database freezes.
If a database was terminated while one of its tablespaces was in BACKUP MODE (ALTER
TABLESPACE xyz BEGIN BACKUP;), it will tell you that media recovery is required when you try
to restart the database. The DBA is then required to recover the database and apply all archived
logs to the database. However, from Oracle 7.2, one can simply take the individual datafiles out
of backup mode and restart the database.
One can select from V$BACKUP to see which datafiles are in backup mode. This normally saves
a significant amount of database down time. See script end_backup2.sql in the Scripts section of
this site.
From Oracle9i onwards, the following command can be used to take all of the datafiles out of
hot backup mode:
ALTER DATABASE END BACKUP;
This command must be issued when the database is mounted, but not yet opened.
When a tablespace is in backup mode, Oracle will stop updating its file headers, but will
continue to write to the data files.
When in backup mode, Oracle will write complete changed blocks to the redo log files. Normally
only deltas (change vectors) are logged to the redo logs. This is done to enable reconstruction
of a block if only half of it was backed up (split blocks). Because of this, one should notice
increased log activity and archiving during on-line backups.
Recovery Manager (or RMAN) is an Oracle-provided utility for backing up, restoring and
recovering Oracle databases. RMAN ships with the database server and doesn't require a
separate installation. The RMAN executable is located in your ORACLE_HOME/bin directory.
In fact, RMAN is just a Pro*C application that translates commands to a PL/SQL interface. The
PL/SQL calls are statically linked into the Oracle kernel, and do not require the database to be
opened (they are mapped from the ?/rdbms/admin/recover.bsq file).
RMAN can do off-line and on-line database backups. It cannot, however, write directly to tape,
but various 3rd-party tools (like Veritas, OmniBack, etc.) can integrate with RMAN to handle
tape library management.
RMAN can be operated from Oracle Enterprise Manager, or from command line. Here are the
command line arguments:
-----------------------------------------------------------------------------
target quoted-string connect-string for target database
catalog quoted-string connect-string for recovery catalog
nocatalog if specified, then no recovery catalog
cmdfile quoted-string name of input command file
log quoted-string name of output message log file
trace quoted-string name of output debugging message log file
-----------------------------------------------------------------------------
Here is an example:
...
The biggest advantage of RMAN is that it only backs up used space in the database. RMAN
doesn't put tablespaces in backup mode, saving on redo generation overhead. RMAN will
re-read database blocks until it gets a consistent image of them. Look at this simple backup
example:
run {
allocate channel dev1 type disk;
backup
format '/app/oracle/backup/%d_t%t_s%s_p%p'
(database);
release channel dev1;
}
The examples above are extremely simplistic and only useful for illustrating basic concepts. By
default Oracle uses the database controlfiles to store information about backups. Normally one
would rather set up an RMAN catalog database to store RMAN metadata in. Read the Oracle
Backup and Recovery Guide before implementing any RMAN backups.
Note: RMAN cannot write image copies directly to tape. One needs to use a third-party media
manager that integrates with RMAN to backup directly to tape. Alternatively one can backup to
disk and then manually copy the backups to tape.
One can back up archived log files using RMAN or any operating system backup utility.
Remember to delete files after backing them up to prevent the archive log directory from filling
up. If the archive log directory becomes full, your database will hang! Look at this simple RMAN
backup script:
RMAN> run {
2> allocate channel dev1 type disk;
3> backup
4> (archivelog all delete input);
5> release channel dev1;
6> }
The "delete input" clause will delete the archived logs as they are backed-up.
RMAN> run {
2> allocate channel dev1 type disk;
3> restore (archivelog low logseq 78311 high logseq 78340 thread 1 all);
4> release channel dev1;
5> }
Start by creating a database schema (usually called rman). Assign an appropriate tablespace to
it and grant it the recovery_catalog_owner role. Look at this example:
sqlplus sys
SQL> create user rman identified by rman;
SQL> alter user rman default tablespace tools temporary tablespace temp;
SQL> grant connect, resource, recovery_catalog_owner to rman;
SQL> exit;
Next, log in to rman and create the catalog schema. Prior to Oracle 8i this was done by running
the catrman.sql script.
rman catalog rman/rman
RMAN> create catalog tablespace tools;
RMAN> exit;
You can now continue by registering your databases in the catalog. Look at this example:
rman catalog rman/rman target backdba/backdba
RMAN> register database;
One can also use the "upgrade catalog;" command to upgrade to a new RMAN release, or the
"drop catalog;" command to remove an RMAN catalog. These commands need to be entered
twice to confirm the operation.
The following Media Management Software Vendors have integrated their media management
software with RMAN (Oracle Recovery Manager):
* etc...
The above Media Management Vendors will provide first line technical support (and installation
guides) for their respective products.
When allocating channels one can specify Media Management specific parameters. Here are
some examples:
Netbackup on Solaris:
Netbackup on Windows:
or:
The first step to clone or duplicate a database with RMAN is to create a new INIT.ORA and
password file (use the orapwd utility) on the machine you need to clone the database to.
Review all parameters and make the required changes. For example, set the DB_NAME
parameter to the new database's name.
Secondly, you need to change your environment variables, and do a STARTUP NOMOUNT from
SQL*Plus. This database is referred to as the AUXILIARY in the script below.
Lastly, write an RMAN script like this to do the cloning, and call it with "rman cmdfile
dupdb.rcv" (connect strings and file names below are examples only):
connect target sys/secure@origdb
connect catalog rman/rman@catdb
connect auxiliary /
run {
set until time 'SYSDATE-1';
duplicate target database to dupdb
logfile
group 1 ('/oradata/dupdb/redo01.log') size 200k reuse,
group 2 ('/oradata/dupdb/redo02.log') size 200k reuse;
}
The above script will connect to the "target" (database that will be cloned), the recovery catalog
(to get backup info), and the auxiliary database (new duplicate DB). Previous backups will be
restored and the database recovered to the "set until time" specified in the script.
Notes: the "set newname" commands are only required if your datafile names will be different
from the target database.
Can one restore RMAN backups without a CONTROLFILE and RECOVERY CATALOG?
Details of RMAN backups are stored in the database control files and optionally a Recovery
Catalog. If both these are gone, RMAN cannot restore the database. In such a situation one
must extract a control file (or other files) from the backup pieces written out when the last
backup was taken. Let's look at an example:
channel ORA_DISK_1: finished piece 1 at 20-AUG-04
piece handle=/flash_recovery_area/ORCL/backupset/2004_08_20/o1_mf_nnndf_TAG20040820T153256_0lczd9tf_.bkp comment=NONE
piece handle=/flash_recovery_area/ORCL/backupset/2004_08_20/o1_mf_ncsnf_TAG20040820T153256_0lczfrx8_.bkp comment=NONE
/oradata/orcl/control02.ctl,
/oradata/orcl/control03.ctl
SQL> shutdown abort;
Now, let's see if we can restore it. First we need to start the database in NOMOUNT mode:
Now, from SQL*Plus, run the following PL/SQL block to restore the file:
DECLARE
v_devtype VARCHAR2(100);
v_done BOOLEAN;
v_maxPieces NUMBER;
TYPE t_pieceName IS TABLE OF VARCHAR2(255) INDEX BY BINARY_INTEGER;
v_pieceName t_pieceName;
BEGIN
-- Define the backup pieces... (names from the RMAN Log file)
v_pieceName(1) :=
'/flash_recovery_area/ORCL/backupset/2004_08_20/o1_mf_ncsnf_TAG20040820T153256_0lczfrx8_.bkp';
v_pieceName(2) :=
'/flash_recovery_area/ORCL/backupset/2004_08_20/o1_mf_nnndf_TAG20040820T153256_0lczd9tf_.bkp';
v_maxPieces := 2;
-- Allocate a channel (type=>NULL means DISK)...
v_devtype := DBMS_BACKUP_RESTORE.deviceAllocate(type=>NULL, ident=>'d1');
-- Restore the Control File...
DBMS_BACKUP_RESTORE.restoreSetDataFile;
-- CFNAME must be the exact path and filename of a controlfile that was backed-up
DBMS_BACKUP_RESTORE.restoreControlFileTo
(cfname=>'/app/oracle/oradata/orcl/control01.ctl');
FOR i IN 1..v_maxPieces LOOP
DBMS_BACKUP_RESTORE.restoreBackupPiece(handle=>v_pieceName(i), done=>v_done,
params=>null);
EXIT WHEN v_done;
END LOOP;
-- Deallocate the channel...
DBMS_BACKUP_RESTORE.deviceDeAllocate('d1');
EXCEPTION
WHEN OTHERS THEN
DBMS_BACKUP_RESTORE.deviceDeAllocate;
RAISE;
END;
/
******************************************************************************
**
AUTOMATIC STORAGE MANAGEMENT (ASM)
******************************************************************************
1) What is ASM?
In Oracle Database 10g/11g there are two types of instances: database and ASM instances. The
ASM instance, which is generally named +ASM, is started with the INSTANCE_TYPE=ASM init.ora
parameter. This parameter, when set, signals the Oracle initialization routine to start an ASM
instance and not a standard database instance. Unlike the standard database instance, the ASM
instance contains no physical files, such as logfiles, controlfiles or datafiles, and only requires a
few init.ora parameters for startup.
Upon startup, an ASM instance will spawn all the basic background processes, plus some new
ones that are specific to the operation of ASM. The STARTUP clauses for ASM instances are
similar to those for database instances. For example, RESTRICT prevents database instances
from connecting to this ASM instance. NOMOUNT starts up an ASM instance without mounting
any disk group. The MOUNT option simply mounts all defined disk groups.
For RAC configurations, the ASM SID is +ASMx instance, where x represents the instance
number.
ASM provides filesystem and volume manager capabilities built into the Oracle database kernel.
With this capability, ASM simplifies storage management tasks, such as creating/laying out
databases and disk space management. Since ASM allows disk management to be done using
familiar create/alter/drop SQL statements, DBAs do not need to learn a new skill set or make
crucial decisions on provisioning.
* ASM spreads I/O evenly across all available disk drives to prevent hot spots and maximize
performance.
* ASM eliminates the need for over-provisioning and maximizes storage resource utilization,
facilitating database consolidation.
* Maintains redundant copies of data to provide high availability, or leverages 3rd party RAID
functionality.
* For simplicity and easier migration to ASM, an Oracle database can contain ASM and non-
ASM files.
* Any new files can be created as ASM files whilst existing files can also be migrated to ASM.
* RMAN commands enable non-ASM managed files to be relocated to an ASM disk group.
* Enterprise Manager Database Control or Grid Control can be used to manage ASM disk and
file activities.
Disk Groups: the fundamental objects that ASM manages; each disk group consists of multiple
ASM disks.
ASM Disks: the storage devices, typically LUNs presented to ASM by the operating system.
ASM Files: files that are stored in ASM disk groups are called ASM files; this includes database
files.
Notes:
The database communicates with the ASM instance using the ASMB (umbilicus) process.
Once the database obtains the necessary extents from the extent map, all database I/O going
forward is performed by the database processes, bypassing ASM. Thus we say ASM is not really
in the I/O path. So, to the question "how do we make ASM go faster?": you don't have to.
4) What init.ora parameters does a user need to configure for ASM instances?
The default parameter settings work perfectly for ASM. The only parameters needed for 11g
ASM:
• PROCESSES*
• ASM_DISKSTRING*
• ASM_DISKGROUPS
• INSTANCE_TYPE
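A minimal ASM init.ora might therefore look like this (the disk string and diskgroup name are illustrative assumptions, not values from this FAQ):

```
INSTANCE_TYPE = ASM
ASM_DISKSTRING = '/dev/oracleasm/disks/*'   # illustrative discovery path
ASM_DISKGROUPS = 'DATA'                     # illustrative diskgroup name
PROCESSES = 100
```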
5) How does the database interact with the ASM instance and how do I make ASM go faster?
ASM is not in the I/O path so ASM does not impede the database file access. Since the RDBMS
instance is performing raw I/O, the I/O is as fast as possible.
No. The RDBMS does I/O directly to the raw disk devices; the FILESYSTEMIO_OPTIONS
parameter is only for filesystems.
8) We have a 16 TB database. I’m curious about the number of disk groups we should use; e.g. 1
large disk group, a couple of disk groups, or otherwise?
For VLDBs you will probably end up with different storage tiers; e.g. with some of our large
customers they have Tier1 (RAID10 FC), Tier2 (RAID5 FC), Tier3 (SATA), etc. Each one of these is
mapped to a diskgroup.
10) Would it be better to use BIGFILE tablespaces, or standard tablespaces for ASM?
The use of Bigfile tablespaces has no bearing on ASM (or vice versa). In fact most database
object related decisions are transparent to ASM.
There is no best size! In most cases the storage team will dictate to you based on their
standardized LUN size. The ASM administrator merely has to communicate the ASM best
practices and application characteristics to the storage folks:
• Minimum of 4 LUNs
• Equally sized and similarly performing LUNs
• The workload characteristic (random r/w, sequential r/w) & any response time SLA
Using this info, and their standards, the storage folks should build a nice LUN group set for you.
12) In 11g RAC we want to separate ASM admins from DBAs and create different users and
groups. How do we set this up?
For clarification:
• The RDBMS instance connects to ASM using the OSDBA group of the ASM instance.
Thus, the software owner for each RDBMS instance connecting to ASM must be a member of
ASM's OSDBA group.
• Choose a different OSDBA group for the ASM instance (asmdba) than for the RDBMS
instance (dba) if you want to separate the ASM admins and DBAs.
Yes. ASM can be at a higher version or at a lower version than its client databases. There are
two components of compatibility:
• Software compatibility
• Diskgroup compatibility attributes:
compatible.asm
compatible.rdbms
14) Where do I run my database listener from; i.e., ASM HOME or DB HOME?
It is recommended to run the listener from the ASM HOME. This is particularly important for
RAC environments, since the listener is a node-level resource. In this configuration, you can
create additional [user] listeners from the database homes as needed.
Not applicable! ASM has no files to back up, as it does not contain a controlfile, redo logs, or
datafiles of its own.
16) When should I use RMAN and when should I use ASMCMD copy?
* RMAN is the recommended and most complete and flexible method to backup and
transport database files in ASM.
* ASMCMD copy is good for copying single files.
17) I’m going to add disks to my ASM diskgroup; how long will this rebalance take?
It depends on the amount of data to be moved and the rebalance power; query
V$ASM_OPERATION to monitor progress and see the estimated time to completion.
18) We are migrating to a new storage array. How do I move my ASM database from storage A
to storage B?
Given that the new and old storage are both visible to ASM, simply add the new disks to the
ASM disk group and drop the old disks. ASM rebalance will migrate data online.
Note 428681.1 covers how to move OCR/Voting disks to the new storage array
19) Is it possible to unplug an ASM disk group from one platform and plug into a server on
another platform (for example, from Solaris to Linux)?
No. Cross-platform disk group migration is not supported. To move datafiles between platforms
of different endianness, you need to use XTTS, Datapump or Streams.
20) How does ASM work with multipathing software?
It works great! Multipathing software is at a layer lower than ASM, and thus is transparent.
You may need to adjust ASM_DISKSTRING to specify only the path to the multipathing pseudo
devices.
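For example, with EMC PowerPath pseudo devices (the same path discussed later in this FAQ; adjust for your multipathing product, and note this assumes the ASM instance uses an SPFILE):

```sql
SQL> ALTER SYSTEM SET asm_diskstring = '/dev/rdsk/emcpower*' SCOPE=BOTH;
```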
No…No…Nope!! ASM provides even distribution of extents across all disks in a disk group. Since
each disk will have an equal number of extents, no single disk will be hotter than another. Thus
the answer is NO: ASM does not dynamically move hot spots, because hot spots simply do not
occur in ASM configurations. Rebalance only occurs on storage configuration changes (e.g. add,
drop, or resize disks).
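After such a configuration change, the rebalance can also be driven manually and its progress watched; a sketch (the diskgroup name "data" is an illustrative assumption):

```sql
SQL> ALTER DISKGROUP data REBALANCE POWER 8;
SQL> SELECT operation, state, power, sofar, est_work, est_minutes
     FROM v$asm_operation;
```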
22) What are the file types that ASM support and keep in disk groups?
Control files
Flashback logs
Data files
DB SPFILE
RMAN backup sets
OCR files
Archive logs
ASM SPFILE
* Significantly reduces the time to resynchronize after a transient failure by tracking changes
while the disk is offline
* Is cluster-aware
* Supports reading from the mirrored copy instead of the primary copy for extended clusters
ASM can use variable size data extents to support larger files, reduce memory requirements,
and improve performance.
ASM stripes files using extents with a coarse method for load balancing or a fine method to
reduce latency.
26. How many ASM Diskgroups can be created under one ASM Instance?
ASM imposes the following limits:
* 63 disk groups in a storage system
* 10,000 ASM disks in a storage system
* Two-terabyte maximum storage for each ASM disk (non-Exadata)
A disk group consists of multiple disks and is the fundamental object that ASM manages. Each
disk group contains the metadata that is required for the management of space in the disk
group. The ASM instance manages the metadata about the files in a Disk Group in the same way
that a file system manages metadata about its files. However, the vast majority of I/O
operations do not pass through the ASM instance. File creation, for example, proceeds roughly
as follows:
1A. The database foreground opens or creates the file.
1B. ASM sends the extent map for the file to the database instance. Starting with 11g, the
RDBMS only receives the first 60 extents; the remaining extents in the extent map are paged in
on demand, providing a faster open.
3A. The RDBMS foreground initiates a file creation (a create tablespace, for example).
3B. ASM does the allocation, essentially reserving the allocation units for the file.
3C. Once the allocation phase is done, the extent map is sent to the RDBMS.
3D. The RDBMS initialization phase kicks in; in this phase the RDBMS initializes all the file
blocks.
3E. If file creation is successful, the RDBMS commits the file creation.
30) Can the disks in a diskgroup be of varied sizes? For example, one disk of 100GB and
another disk of 50GB. If so, how does ASM manage the extents?
Yes, disk sizes can vary. Oracle ASM manages data efficiently and intelligently by placing
extents proportionally to the size of each disk in the disk group, so bigger disks hold more
extents than smaller ones.
• ASM is a very passive instance in that it doesn't have a lot of concurrent transactions.
• Even if you have 20 databases connected to ASM, the ASM SGA does not need to change,
because the ASM metadata is not directly tied to the number of clients.
• The PROCESSES parameter may need to be modified. Use the documented formula to
determine an appropriate value based on the number of databases connecting to ASM.
This parameter controls which I/O options are used. The value may be any of the following:
* asynch - enables asynchronous I/O where supported by the OS.
* directio - enables direct I/O where supported by the OS.
* setall - enables both ASYNC and DIRECT I/O.
* none - disables both ASYNC I/O and DIRECT I/O, so that Oracle uses normal synchronous
writes without any direct I/O options.
ASM is also supported with NFS files as ASM disks. In such cases, set FILESYSTEMIO_OPTIONS
appropriately for the NFS filesystem to get the best performance.
To reduce the complexity of managing ASM and its diskgroups, Oracle recommends that
generally no more than two diskgroups be maintained and managed per RAC cluster or single
ASM instance:
o Database work area: this is where active database files such as datafiles, control files,
online redo logs, and change tracking files used in incremental backups are stored.
o Flash recovery area: where recovery-related files are created, such as multiplexed copies
of the current control file and online redo logs, archived redo logs, backup sets, and
flashback log files.
• Having one DATA container means only one place to store all your database files, and it
obviates the need to juggle datafiles around or to decide where to place a new tablespace.
• Having one container for all your files also means better storage utilization, making the IT
director very happy. If more storage capacity or I/O capacity is needed, just add an ASM disk
to the diskgroup.
• You do have to ensure that this storage pool container houses enough spindles to satisfy
the I/O requirements. Bottom line: one container == one pool to manage, monitor, and track.
Note, however, that additional diskgroups may be added to support tiered storage classes in
some deployments; e.g., performance- and latency-sensitive tablespaces in Tier1, with less
I/O-critical data on Tier2, etc.
For 10g VLDBs it is best to set an AU size of 16MB; this is more for metadata space
efficiency than for performance. The 16MB recommendation is only necessary if the
diskgroup is going to be used by 10g databases. In 11g we introduced variable size
extents, which requires compatible.asm to be set to 11.1.0.0. With 11g you should set your AU
size to the largest I/O that you wish to issue for sequential access (other parameters need to
be set to increase the I/O size issued by Oracle). For random small I/Os the AU size does not
matter very much, as long as every file is broken into many more extents than there are
disks.
For all 11g ASM/DB users, it is best to create a disk group using a 4 MB ASM AU size; see
Metalink for details.
Nevertheless, Bigfile tablespaces do have benefits, such as fewer datafiles to manage and
open.
In most cases the storage team will dictate the standardized LUN size to you, including the
RAID LUN set builds (concatenated, striped, hypers, etc.). Having too many LUNs elongates
boot time and is very hard to manage (zoning, etc.); on the flip side, having too few LUNs
makes array cache management difficult to control and creates unmanageably large LUNs
(which are difficult to expand).
The ASM administrator merely has to communicate to the SA/storage folks that you need
equally sized/equally performing LUNs and what the capacity requirement is, say 10TB.
Using this info, the workload characteristic (random r/w, sequential r/w), and their
standards, the storage folks should build a nice LUN group set for you.
A typical deployment could be as follows:
ASM administrator:
User: asm
Database administrator:
User: oracle
The compatible.asm and compatible.rdbms attributes can be advanced per disk group. For
example (the diskgroup name "data" is used for illustration):
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '11.1.0.7.0';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.1.0.7.0';
The attributes can also be set at disk group creation time:
CREATE DISKGROUP data
DISK '/dev/sdd[bcd]1'
ATTRIBUTE
'compatible.asm' = '11.1.0.7.0',
'compatible.rdbms' = '11.1.0.7.0',
'au_size' = '4M';
Features such as preferred read failure groups require the compatibility attributes to be
advanced to 11.1. Note that advancing a compatibility attribute cannot
be reversed.
Unlike the database, ASM does not require a controlfile type structure or
any other external metadata to bootstrap itself. All the data ASM needs to
start up is stored in the disk headers themselves (self-describing
metadata). In this regard a disk group is
like a standard file system: all the metadata about the usage of the
space in the disk group is completely contained within the disk group.
If ASM can find all the disks in a disk group, it can provide access to
the disk group without any additional metadata.
RMAN remains the only way to back up database files stored in ASM; many sites combine BCV
mirrors with RMAN backups of the mirrors, since RMAN verifies the blocks it reads.
In 11g, the asmcmd copy command was introduced. The key point is that it is intended for
copying files that do not need to be registered with the database, such as:
1. archive logs
2. Controlfiles
3. Datapump dumpsets, e.g. for transportable tablespaces (TTS)
ASMCMD Copy example:
ASMCMD> ls +fra/dumpsets
expdp_5_5.dat
ASMCMD> cp +fra/dumpsets/expdp_5_5.dat +DATA/dumpsets/expdp_5_5.dat
source +fra/dumpsets/expdp_5_5.dat
target +DATA/dumpsets/expdp_5_5.dat
copying file(s)...
file, +DATA/dumpsets/expdp_5_5.dat, copy committed.
Migration
To monitor the progress of a migration or rebalance, query V$ASM_OPERATION.
Given that the new and old storage are both visible to ASM, simply add the new disks to the
ASM disk group and drop the old disks; the ASM rebalance will migrate the data online.
For example (the diskgroup name "data" is used for illustration):
ALTER DISKGROUP data
DROP DISK data_legacy1, data_legacy2, data_legacy3
ADD DISK '/dev/sddb1', '/dev/sddc1', '/dev/sddd1';
ASM rebalancing runs automatically on storage configuration changes such as adding,
dropping, or resizing disks.
The first problem that you run into here is that Solaris and Linux format their disks
differently, and they do not recognize each other's partition tables.
ASM does track the endian-ness of its data; however, currently the ASM code does not handle
disk groups whose endian-ness does not match that of the server. Experiments have been done
to show that ASM disk groups can be moved between platforms of the same endian-ness.
See also:
http://downloadwest.oracle.com/docs/cd/B19306_01/server.102/b25159/outage.htm#CACFFIDD
http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_PlatformMigrationTTS.pdf
284
3rd Party Software
A multipathing driver should:
o Detect any component failures in the I/O path; e.g., fabric port, channel adapter, or HBA.
o When a loss of path occurs, ensure that I/Os are re-routed to the available paths, with no disruption to the application.
o Ensure that failed paths get revalidated as soon as possible and provide auto-failback capabilities.
o Offer configurable load-balancing methods; e.g., round robin, least I/Os queued, or least service time.
When a given disk has several paths defined, each one will be presented as a unique path
to same disk device. ASM, however, can only tolerate the discovery of one unique device
path per disk. For example, if the asm_diskstring is ‘/dev/rdsk/*’, then several paths to the
same device will be discovered, and ASM will produce an error message stating this.
When using a multipath driver, which sits above this SCSI-block layer, the driver will
generally produce a pseudo device that virtualizes the sub-paths. For example, in the
case of EMC’s PowerPath, you can use an asm_diskstring setting of
‘/dev/rdsk/emcpower*’. When I/O is issued to this disk device, the multipath driver will
intercept it and provide the necessary load balancing to the underlying subpaths.
Most multipathing (MP) products are known to work with ASM, but remember that ASM
does not certify MP products; although there is a list of products known to work with
ASM, this is more of a compatibility list than a certification. Examples of multi-pathing
software include EMC PowerPath, Veritas DMP, Sun Traffic Manager, Hitachi HDLM,
and IBM SDDPCM. Linux 2.6 has a kernel-based multipathing driver (Device Mapper).
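The diskstring change described above can be applied from the ASM instance; a minimal sketch (the PowerPath path is illustrative and depends on your environment):

```sql
-- Point ASM discovery at the multipath pseudo devices only, so each
-- disk is discovered through exactly one path.
ALTER SYSTEM SET asm_diskstring = '/dev/rdsk/emcpower*' SCOPE=BOTH;
```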
[Diagram: the ASM/RDBMS stack issues I/O against a multipath pseudo device under /dev/; the multipath driver routes it down subpaths /dev/sda1 and /dev/sdb1 through two controllers and cache to the disk.]
Q. Does ASM dynamically move “hot spots”?
A. No…No…Nope!!
Bad rumor. ASM provides even distribution of extents across all disks in a disk group. Since each disk holds an equal share of every file, no single disk will be hotter than another. Thus the answer is no: ASM does not dynamically move hot spots, because hot spots simply do not occur in ASM configurations.
I/O Distribution
• ASM spreads file extents evenly across all disks in disk group
[Chart: Total Disk IOPS (scale 50-300) per disk cciss/c0d2 through cciss/c0d6 across failure groups FG1-FG4, showing an even I/O distribution over all disks.]
As indicated, ASM implements the policy of S.A.M.E. (Stripe And Mirror Everything), which stripes and mirrors files across all the disks in a Disk Group. If the disks are highly reliable, as may be the case with a high-end array, mirroring can optionally be disabled for a particular Disk Group. This policy of striping and mirroring across all disks yields a configuration of balanced performance.
• Manageability
o Simple provisioning
o VM/FS co-existence
o Consolidation
o Self-tuning
• Performance
o Even I/O distribution across all available storage
• Availability
o Rolling upgrades
o Online patches
• Cost Savings
o Just-in-Time provisioning
o No license fees
o No support fees
Summary: ASM provides performance and simplified storage management for Oracle databases.
ASM provides filesystem and volume manager capabilities built into the Oracle database kernel. With this capability, ASM simplifies storage management tasks, such as creating/laying out databases and disk space management. Since ASM allows disk management to be done using familiar create/alter/drop SQL statements, DBAs do not need to learn a new skill set or make crucial decisions on provisioning.
o ASM spreads I/O evenly across all available disk drives to prevent hot spots and maximize performance.
o ASM eliminates the need for over-provisioning and maximizes storage resource utilization, facilitating database consolidation.
o Performs automatic online redistribution after the incremental addition or removal of storage capacity.
o Maintains redundant copies of data to provide high availability, or leverages 3rd party RAID functionality.
o For simplicity and easier migration to ASM, an Oracle database can contain ASM and non-ASM files. Any new files can be created as ASM files whilst existing files can also be migrated to ASM.
o RMAN commands enable non-ASM managed files to be relocated to an ASM disk group.
o Enterprise Manager Database Control or Grid Control can be used to manage ASM disk and file activities.
ASM reduces Oracle Database cost and complexity without compromising performance or availability.
http://www.oracle.com/technology/asm
Top 10 ASM Questions
ASMLIB is recommended for the following reasons:
o Reduced Overhead
ASMLIB provides the capability for a process (RBAL) to perform a global open/close of the disks. This reduces the number of open file descriptors on the system, and thus the probability of running out of global file descriptors; the reduced open and close operations also ensure orderly cleanup of file descriptors when the database is shut down.
With ASMLib the ASM disk name is automatically taken from the name given to it by the administrative tool. This simplifies adding disks and correlating OS names with ASM names, and eliminates erroneous disk management activities, since disks are already pre-named.
The default discovery string for ASM is NULL; however, if ASMLIB is used, the ASMLIB default string replaces the NULL string, making disk discovery much more straightforward. (Disk discovery has historically been one of the bigger challenges for administrators.)
The ASMLib disk permissions are persistent across reboots and across major/minor device number changes.
Once the disks are labeled on one node, the other clustered nodes simply use the same labels; no per-node relabeling is needed.
With ASMLib, there is no requirement to modify the initialization scripts (e.g. “/etc/init.d”) in a RAC configuration.
ASMLIB can be upgraded one node at a time: only the Oracle stack on that node needs to be shut down, and once ASMLIB is upgraded the stack can be restarted.
******************************************************************************
******************************************************************************
1) What is a database?
• A database offers a single mechanism for storing and retrieving information with the help of tables.
• A table is made up of columns and rows, where each column stores a specific attribute and each row holds a value for the corresponding attribute.
• It is a structure that stores information about the attributes of entities and the relationships among them.
• Well-known DBMSs include Oracle, IBM DB2, Microsoft SQL Server, Microsoft Access, MySQL and SQLite.
2) What are the different types of storage systems available and which one is used by Oracle?
• Most databases use RDBMS model, Oracle also uses RDBMS model.
• Information Management System (IMS) from IBM is an example of a hierarchical storage system.
3) What join methods does Oracle use?
• Merge join – Sorts both tables on the join key, then merges the sorted rows.
• Nested loop join – Builds a result set by applying the filter conditions to the outer table, then joins the inner table against that result set.
• Hash join – Builds a hash table on the smaller table first, then probes it with the other table to produce the joined rows; matching rows are returned.
4) What are the components of logical data model and list some differences between logical
and physical data model?
• Entity – Entity refers to an object that we use to store information. It has its own table.
• Attribute – It represents the information of the entity that we are interested in. It is stored as
a column of the table and has specific datatype associated with it.
• Record – A collection of all the attribute values for one specific instance of an entity, represented as a row in the table.
• Domain – It is the set of all the possible values for a particular attribute.
• Logical data model represents database in terms of logical objects, such as entities and
relationships.
• Physical data model represents database in terms of physical objects, such as tables and
constraints.
5) What is normalization? What are the different forms of normalization?
• Normalization is the process of organizing data to minimize redundancy; a relation is decomposed step by step into successively stricter normal forms.
• First Normal Form – All underlying domains contain atomic values only.
• Second Normal Form – The relation is in first normal form and every non-key attribute is fully functionally dependent on the primary key.
• Third Normal Form – The relation is in second normal form and every non-key attribute is non-transitively dependent on the primary key.
• Boyce-Codd Normal Form – A relation R is in BCNF if and only if every determinant is a candidate key.
6) Differentiate between a database and Instance and explain relation between them?
• A database is a collection of three important types of files (data files, control files and redo log files) which physically exist on disk.
• An instance is the combination of memory structures (the SGA) and background processes that are started to access a database.
• A database may be mounted and opened by one or more instances (using RAC).
7) What are the components of SGA?
• The SGA mainly includes the Library cache, Data Dictionary cache, Database buffer cache, Redo log buffer and the Shared pool.
• Library cache – Stores parsed SQL statements, execution plans and PL/SQL code for reuse.
• Data Dictionary cache – Contains the definitions of database objects and the privileges granted to users.
• Database buffer cache – Holds copies of data blocks which are frequently accessed, so that they can be retrieved faster for any future requests.
• Redo log buffer – Records all changes made to the data files.
8) What is the difference between SGA and PGA?
• SGA (System Global Area) is a shared memory area allocated during instance startup.
• PGA (Program or Process Global Area) is a memory area that stores user-session-specific information.
9) What are the physical components of an Oracle database?
These are the physical components which get stored on disk:
• Data files
• Control files
• Redo log files
• Password files
• Parameter files
10) What is SCN?
• SCN (System Change Number) is Oracle's logical clock for ordering changes. You can get the current SCN by querying select current_scn from v$database; in SQL*Plus.
11) What is Database Writer (DBWR) and when does DBWR write to the data file?
• DBWR is a background process that writes modified (dirty) data blocks from the database buffer cache to the data files. It writes, among other occasions:
• Every 3 seconds
• When a server process needs free space in the database buffer cache to read new blocks
• At a checkpoint
12) What is Log Writer and when does LGWR writes to log file?
• LGWR writes redo (change) information from the redo log buffer cache to the redo log files of the database.
• It moves redo buffer entries to the online redo log files when you commit and when a log switch occurs.
• LGWR also writes when the redo log buffer is one-third full.
• Before DBWR writes modified blocks to the datafiles, LGWR writes the corresponding redo to the log file (write-ahead logging).
13) Which tablespaces are created automatically when you create a database?
• SYSTEM tablespace
• SYSAUX tablespace
• UNDO tablespace
• TEMP tablespace
• The UNDO and TEMP tablespaces are optional when you create a database.
14) Which file is accessed first when an Oracle database is started, and what is the difference between SPFILE and PFILE?
• The parameter file (SPFILE by default, otherwise PFILE) is read first; the settings required for starting the database are stored as parameters in this file.
• An SPFILE is created by default during database creation, whereas a PFILE can be created from the SPFILE.
• PFILE is a client-side text file whereas SPFILE is a server-side binary file.
• SPFILE is a binary file (it can't be edited directly) whereas PFILE is a text file we can edit to set parameter values.
• Changes made through the SPFILE can take effect dynamically on a running database, whereas PFILE changes take effect only after bouncing the database.
• You can't make changes to the PFILE take effect while the database is up.
• SPFILE parameter changes are validated before they are accepted, as the file is maintained by the Oracle server, thereby reducing human typing errors.
16) How can you find out if the database is using PFILE or SPFILE?
• You can query the dynamic performance view v$parameter to find out whether your database is using a PFILE or an SPFILE.
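One common form of that query, sketched below (the spfile parameter has a NULL value when the instance was started from a PFILE):

```sql
SELECT DECODE(value, NULL, 'PFILE', 'SPFILE') AS startup_file
FROM   v$parameter
WHERE  name = 'spfile';
```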
17) Where are parameter files stored, and how can you start a database using a specific parameter file?
• On UNIX they are stored in $ORACLE_HOME/dbs; on Windows, in the ORACLE_HOME/database directory.
• If you want to start the database with a specific file, name it in the startup command:
• SQL> startup PFILE = ‘full path of PFILE location’;
• You can create a PFILE from the SPFILE with create pfile from spfile;
• Similarly, the create spfile from pfile; command creates an SPFILE from the PFILE.
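A slightly fuller sketch of the round trip (the paths and SID are illustrative for your environment):

```sql
-- Snapshot the current SPFILE into an editable text PFILE.
CREATE PFILE = '/tmp/initORCL.ora' FROM SPFILE;
-- After editing, rebuild the server-side SPFILE from it.
CREATE SPFILE = '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfileORCL.ora'
  FROM PFILE = '/tmp/initORCL.ora';
```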
18) What is PGA_AGGREGATE_TARGET?
• The PGA_AGGREGATE_TARGET parameter specifies the target aggregate PGA memory available to all server processes attached to an instance.
19) What is the purpose of configuring more than one Database Writer Processes? How many
should be used? (On UNIX)
• The DBWn process writes modified buffers from the database buffer cache to the data files, so that user processes can always find free buffers.
• To efficiently free the buffer cache to make it available to user processes, you can use
multiple DBWn processes.
• We can configure additional processes (DBW1 through DBW9 and DBWa through DBWj) to
improve write performance if our system modifies data heavily.
• If the UNIX system being used is capable of asynchronous I/O, then a single DBWn process is usually enough; if not, a common rule of thumb is to configure roughly twice as many DBWn processes as disks used by Oracle. The count is set with the DB_WRITER_PROCESSES initialization parameter.
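Setting the writer count might look like this (a sketch; 4 is an arbitrary example value, and the parameter is static, so a restart is required):

```sql
ALTER SYSTEM SET db_writer_processes = 4 SCOPE=SPFILE;
-- Takes effect at the next instance startup.
```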
20) List out the major installation steps of oracle software on UNIX in brief?
• Set up disk space and make sure the installation media (runInstaller) is staged on the server, then set the following environment variables:
1) ORACLE_BASE
2) ORACLE_HOME
3) PATH
4) LD_LIBRARY_PATH
5) TNS_ADMIN
• Source the Environment file to the respective bash profile and now run Oracle Universal
Installer.
21) Can we check number of instances running on Oracle server and how to set kernel
parameters in Linux?
• The /etc/oratab file lists the Oracle instances configured on your server.
• Opening /etc/sysctl.conf with the vi editor shows the kernel-level parameters.
• We can change kernel parameters as required for our environment, but only as the root user.
• To make the changes take effect, run the command /sbin/sysctl -p.
• We must also raise the maximum number of file descriptors during Oracle installation, which can be done by editing /etc/security/limits.conf as the root user.
• ORACLE_BASE=/u01/app/<installation-directory>
• ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1 (for 11g)
• ORACLE_SID=<instance-name>
• PATH=$ORACLE_HOME/bin:$PATH
• LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
• TNS_ADMIN=$ORACLE_HOME/network/admin
• Control file is a binary file which records the physical structure of a database.
• It includes number of log files and their respective location, Database name and timestamp
when database is created, checkpoint information.
• We can multiplex control files, store in different locations to make control files available even
if one is corrupted.
25) At what stage of the instance is control file information read, can we recover a control file, and how can we see the information in a control file?
• Control file information is read at the MOUNT stage of instance startup.
• We can't recover or restore a lost control file directly, but we can still start the database using the control file copies created through multiplexing in different locations.
• After ALTER DATABASE BACKUP CONTROLFILE TO TRACE, we find a trace file (.trc) in the udump location; editing it shows the complete database structure.
• The query select name from v$controlfile; gives the names and locations of the control files on disk.
• We can also open the PFILE with a vi editor: the control_files parameter gives the number and locations of the control files.
• Before Oracle 7.2, the solution was to drop the tablespace and recreate it with different-sized datafiles.
• Since 7.2 you can resize a datafile with ALTER DATABASE DATAFILE '<file_name>' RESIZE <size>;
• Resizing a tablespace means either adding a new data file or resizing an existing data file.
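Concretely, the two options above might look like this (file names and sizes are illustrative):

```sql
-- Resize an existing datafile.
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf' RESIZE 2G;
-- Or grow the tablespace by adding another datafile.
ALTER TABLESPACE users ADD DATAFILE '/u01/oradata/ORCL/users02.dbf' SIZE 1G;
```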
28) Name the views used to look at the size of a datafile, controlfiles, block size, determine free
space in a tablespace ?
• V$CONTROLFILE shows the control files; the limits recorded at creation time (MAXLOGFILES, MAXLOGMEMBERS, MAXINSTANCES) are visible through V$CONTROLFILE_RECORD_SECTION.
• DBA_DATA_FILES (or V$DATAFILE) shows datafile sizes, DB_BLOCK_SIZE can be read from V$PARAMETER, and DBA_FREE_SPACE shows the free space in a tablespace.
29) What is ARCHIVELOG mode?
• In ARCHIVELOG mode, the database makes archives of all redo log files as they fill, called archived redo logs (archive log files).
• By default a database runs in NOARCHIVELOG mode, so we can't perform online backups (HOT backups).
• In that case you must shut down the database to perform a clean backup (COLD backup), and recovery is only possible to the state of the previous backup.
• Archive log files are stored in a default location called the FRA (Flash Recovery Area).
• We can also define our own archive location by setting the log_archive_dest parameter.
30) Assume you work at company xyz as a senior DBA, and in your absence your backup DBA has corrupted all the control files while working with the ALTER DATABASE BACKUP CONTROLFILE command. What do you do?
• As long as all the data files are safe and your backup DBA's BACKUP CONTROLFILE command completed successfully, you are in the safe zone.
• Restore the backed-up control file, then recover and open the database:
RECOVER DATABASE USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;
• Afterwards, give the command ALTER DATABASE BACKUP CONTROLFILE TO TRACE so a fresh re-creation script is available.
• If a control file backup is not available, a CREATE CONTROLFILE statement will be required; but we need to know all of the datafiles, logfiles, and the settings of MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY and MAXDATAFILES for the database to use that command.
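A minimal sketch of the CREATE CONTROLFILE path (the database name, file paths, limits and character set are all illustrative):

```sql
CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '/u01/oradata/ORCL/redo01.log' SIZE 50M,
  GROUP 2 '/u01/oradata/ORCL/redo02.log' SIZE 50M
DATAFILE
  '/u01/oradata/ORCL/system01.dbf',
  '/u01/oradata/ORCL/sysaux01.dbf'
CHARACTER SET AL32UTF8;
```

In practice the trace script produced by ALTER DATABASE BACKUP CONTROLFILE TO TRACE already contains a statement of this shape with the correct file lists.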
31) How do you reduce the size of the TEMP tablespace?
• In Oracle 11g you can reduce the space of a TEMP datafile by shrinking the TEMP tablespace; this is a feature new in 11g.
• A dynamic performance view can be very useful in determining which temporary tablespace to shrink.
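The 11g shrink mentioned above might look like this (tablespace name, file path and KEEP size are illustrative):

```sql
-- Shrink the whole TEMP tablespace, retaining 100M.
ALTER TABLESPACE temp SHRINK SPACE KEEP 100M;
-- Or shrink a single tempfile.
ALTER TABLESPACE temp SHRINK TEMPFILE '/u01/oradata/ORCL/temp01.dbf';
```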
32) What do you mean by database backup and which files must be backed up?
• A database stores the most crucial data of the business, so it is important to keep the data safe, and this is achieved by backups.
• Data files
• Control files
• Archived redo log files
• Parameter file (SPFILE or PFILE) and password file
33) What is a full backup and name some tools you use for full backup?
• A full backup is a backup of all the control files, data files, and parameter file (SPFILE or PFILE).
• You must also backup your ORACLE_HOME binaries which are used for cloning.
• A full backup can be performed while the database runs in NOARCHIVELOG mode.
• As a rule of thumb, you must shut down your database before you perform a cold full backup.
34) What are the different types of backups available, and what is the difference between them?
1) COLD backup (user managed & RMAN)
2) HOT backup (user managed & RMAN)
• A cold backup is taken with the database shut down cleanly.
• A hot backup is taken while the database is still online, and requires the database to be in ARCHIVELOG mode.
• A user-managed hot backup is started with ALTER DATABASE BEGIN BACKUP and ended with ALTER DATABASE END BACKUP.
35) How to recover database if we lost the control file and we do not have a backup and what is
RMAN?
• We can recover our database to any point in time when we have multiplexed copies of the control files on different mount points.
• Also check whether a control file re-creation script is available in a trace file (in USER_DUMP_DEST) or via the alert log, and use it to recover the database.
RMAN
• RMAN (Recovery Manager) is a tool supplied by Oracle that manages backup and recovery activities.
• You can perform both offline (cold) and online (hot) backups using RMAN.
6) Name the architectural components of RMAN
• RMAN executable
• Server processes
• Channels
• Target database
• Recovery catalog database (optional)
• The size of the recovery catalog schema depends upon the number of databases monitored by the catalog.
• A restore is used to bring back a physical backup, reconstruct it, and make it available to the Oracle server.
• With RMAN, tablespaces are not put in backup mode, so there is no extra redo generated during online backups.
• RMAN supports incremental backups that copy only the data blocks that have changed since the last backup.
39) How do you move a database between ARCHIVELOG and NOARCHIVELOG mode?
• You should set the archiving parameters in your init<SID>.ora file, for example:
• log_archive_format=’%t_%s.dbf’
• sql> shutdown immediate;
• Then start the instance in MOUNT state, issue ALTER DATABASE ARCHIVELOG (or NOARCHIVELOG), and open the database.
• Make sure you back up your database before switching to ARCHIVELOG mode.
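The steps above, sketched as a SQL*Plus session for the switch to ARCHIVELOG mode:

```sql
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;   -- or NOARCHIVELOG for the reverse switch
ALTER DATABASE OPEN;
ARCHIVE LOG LIST;            -- confirm the new log mode
```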
40) What stages does a database go through during startup?
• A database goes through different stages before becoming available to end users:
• NoMount
• Mount
• Open
• NoMount – The Oracle instance is started based on the parameters defined in the SPFILE.
• Mount – Using the control_files parameter from the SPFILE, Oracle opens and reads the control files, making the database ready for the next stage.
• Open – The datafiles and redo log files are opened and available to the end users.
41) Name some of the important dynamic performance views used in Oracle.
• V$PARAMETER
• V$DATABASE
• V$INSTANCE
• V$DATAFILE
• V$CONTROLFILE
• V$LOGFILE
42) What are the different methods of shutting down the database?
• SHUTDOWN NORMAL
No new connections are accepted; Oracle waits for all users to close their sessions.
• SHUTDOWN TRANSACTIONAL
No new connections are accepted; Oracle waits for the existing transactions to commit and then logs the sessions out without waiting for the users to disconnect.
• SHUTDOWN IMMEDIATE
No new connections are accepted; all committed transactions are reflected in the database, and transactions that have not yet committed are rolled back.
• SHUTDOWN ABORT
It is just like an immediate power-off for the database: it stops all activity regardless of the transactions running (even committed changes may not yet be written to the datafiles) and makes the database unavailable. The SMON process takes responsibility for instance recovery during the next startup of the database.
• Reverse key index – Most useful for Oracle Real Application Clusters applications.
• Hash cluster index – An index defined specifically for a hash cluster.
44) What is the use of ALERT log file? Where can you find the ALERT log file?
• The alert log is a file that records database-wide events (startups and shutdowns, log switches, modifications to the control file, internal errors) and is used for troubleshooting. Its location is given by the BACKGROUND_DUMP_DEST parameter.
45) What is a SQL trace file?
• A SQL trace file is generated only if the value of the SQL_TRACE parameter is set to TRUE for a session or the instance.
• If it is set at instance level, trace files are created for all connected sessions.
• If it is set at session level, a trace file is generated only for the specified session.
• The location of user process trace files is specified by the USER_DUMP_DEST parameter.
• System locks – Controlled by Oracle and held for a very brief period of time.
• TX lock – Acquired once for every transaction; it is a row transaction lock.
• TM lock – Acquired once for each object being changed; it is a DML lock. The ID1 column identifies the object being modified.
• The db_file_sequential_read wait event generally indicates single-block (index) reads, while the db_file_scattered_read event indicates full table scans.
• A latch is an on/off switch in Oracle that a process must acquire in order to perform certain types of activities.
• Latches enforce serial access to resources and limit the amount of time for which a single process can use a resource.
• A latch is acquired for a very short amount of time to ensure that the resource is allocated.
• We may face performance issues which may be due to either of the two following reasons
• Standby database – A copy of the primary (production) database; a primary may have more than one standby database.
• Log transport services – Manage the transfer of archived redo from the primary to the standby database.
• Network configuration – The network connection between the primary and standby databases.
• Role management services – Manage role changes between the primary and standby databases.
• Data Guard broker – Manages the Data Guard creation process and monitors the Data Guard configuration.
[Diagram: redo flowing from the Primary database to the Standby database.]
• Role transition is the change of role between primary and standby databases.
• Data Guard enables you to change these roles dynamically by issuing SQL statements.
• Transitions between primary and standby databases happen in the following ways:
• Switchover – a planned transition in which the primary database becomes a standby and a standby becomes the primary.
• Failover – a standby database is activated as a disaster recovery solution in case of a failure of the primary database.
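A switchover issued from SQL*Plus might look like this (a sketch of the classic 10g/11g statement form; the broker offers a one-command alternative):

```sql
-- On the primary: convert it to a physical standby.
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;
-- On the old standby: promote it to primary.
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
```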
• DRM (Database Resource Manager) allows you to create resource plans, which specify resource allocation to various consumer groups.
• DRM offers an easy-to-use and flexible system by defining distinct independent components.
• It enables you to limit the length of time a user session can stay idle, and can automatically terminate long-running SQL statements and user sessions.
• When a limit on active sessions is reached, DRM automatically queues all subsequent requests until the currently running sessions complete.
******************************************************************************
******************************************************************************
View Description
DBA_EXTENTS, USER_EXTENTS – Information about data extents within all (or user-accessible) tablespaces (segment_name, tablespace_name, bytes).
DBA_FREE_SPACE, USER_FREE_SPACE – Information about free extents within all (or user-accessible) tablespaces (tablespace_name, bytes).
each temporary tablespace.
V$LOG – Displays the redo log file information from the control file (members = number of members, status).
The following data dictionary views provide useful information about the datafiles of a
database:
View Description
DBA_EXTENTS, USER_EXTENTS – The DBA view describes the data extents in all tablespaces; the USER view describes the extents accessible to the current user.
DBA_FREE_SPACE, USER_FREE_SPACE – The DBA view lists the free extents in all tablespaces and includes the file ID of the datafile containing the extent; the USER view lists the free extents in the tablespaces accessible to the current user.
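A common query built on these views, sketched here, summarizes free space per tablespace:

```sql
-- Free space per tablespace, in MB.
SELECT tablespace_name,
       ROUND(SUM(bytes)/1024/1024) AS free_mb
FROM   dba_free_space
GROUP  BY tablespace_name
ORDER  BY tablespace_name;
```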
ASM VIEWS:-
V$ASM_ALIAS – Displays every alias for all disk groups mounted by the ASM instance.
V$ASM_DISKGROUP – Displays every disk group discovered by the ASM instance (at discovery time).
V$ASM_FILE – Displays all files within each disk group mounted by the ASM instance.
V$ASM_TEMPLATE – Displays all templates within each disk group mounted by the ASM instance.
V$ASM_DISK – Contains disk details and usage statistics; querying it performs ASM disk discovery.
V$ASM_DISK_STAT – Contains the same disk details and usage statistics, without performing fresh disk discovery.
V$ASM_CLIENT – Lists all database instances that are connected to the ASM instance.
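A typical query against these views, run on the ASM instance, gives a capacity overview per disk group:

```sql
-- Disk group state and space overview.
SELECT name, state, type, total_mb, free_mb
FROM   v$asm_diskgroup;
```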
DATAPUMP VIEWS:-
DBA_DATAPUMP_JOBS – Displays information on running Data Pump jobs; also comes in the USER_DATAPUMP_JOBS variety (owner_name, job_name, operation).
DATAPUMP_PATHS – Provides a list of valid object types that you can associate with the INCLUDE or EXCLUDE parameters of expdp or impdp.
DBLINK VIEWS:-
View Purpose
V$DBLINK Lists all open database links in your session, that is, all database links with the IN_TRANSACTION column set to YES.
PROFILE VIEWS:-
V$SESSION Lists session information for each current session, includes user
name
V$STATNAME Displays decoded statistic names for the statistics shown in the
V$SESSTAT view
PROXY_USERS Describes users who can assume the identity of other users
http://docs.oracle.com/cd/B19306_01/network.102/b14266/admusers.htm