Oracle DBA Real Time Interview Questions

The daily activities of an Oracle database administrator include: 1. Checking if the Oracle database instance and listeners are running 2. Monitoring for sessions blocking other sessions and checking the alert log for errors 3. Verifying DBMS jobs are running and checking job statuses 4. Identifying sessions using high physical I/O and monitoring log switch frequency. Weekly activities include checking for object fragmentation and table sizes and performing diagnostic checks. Monthly activities involve validating database configuration and size, tablespace usage, and performing trend analysis. Nightly activities include routine maintenance tasks like analyzing objects and rebuilding indexes. One-time setup includes creating database users and configuring backups.


What are your day-to-day activities, and what is your role in the team?

1- Check whether the Oracle instance(s) are running.


Check the SMON or PMON background processes:
ps -ef | grep smon
ps -ef | grep pmon
If RAC is used, check all instances of the database:
[oracle@msddbadm01 ~]$ srvctl status database -d <DB_NAME>

2- Check whether the local (and SCAN) listeners are running.


[oracle@msddbadm01 ~]$ lsnrctl status
[oracle@msddbadm01 ~]$ srvctl status scan_listener

3- Check the server storage and disks of the Oracle database.


 
Check whether the ASM diskgroups and file system disks have enough free space.
Use the following commands to check the file system disks.

Check Filesystem disks.


[oracle@msddbadm01 ~]$ df -h
On other Unix platforms (e.g. AIX):
[oracle@msddbadm01 ~]$ df -g

Check Oracle ASM diskgroups.


Use the following script to check Diskgroup size.
 [grid@msdidb01 ~]$ asmcmd lsdg
 [grid@msdidb01 ~]$ asmcmd
ASMCMD> lsdg
State    Type  Rebal  Sector  Logical_Sector  Block  AU       Total_MB    Free_MB    Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  Y      512     512             4096   4194304  1056964608  214081832  14680064         66467256        2              Y             DATA/
MOUNTED  HIGH  N      512     512             4096   4194304  134507520   101014692  1868160          33048844        0              N             RECO/
ASMCMD>

4- Check the tablespaces and extend objects if required.


Use the following script to check Tablespace usage
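A typical form of such a script (a generic sketch based on DBA_DATA_FILES and DBA_FREE_SPACE; temporary tablespaces need a separate check against DBA_TEMP_FILES) is:

SELECT df.tablespace_name,
       ROUND(df.bytes / 1024 / 1024) total_mb,
       ROUND(NVL(fs.bytes, 0) / 1024 / 1024) free_mb,
       ROUND((df.bytes - NVL(fs.bytes, 0)) * 100 / df.bytes, 2) pct_used
  FROM (SELECT tablespace_name, SUM(bytes) bytes
          FROM dba_data_files GROUP BY tablespace_name) df,
       (SELECT tablespace_name, SUM(bytes) bytes
          FROM dba_free_space GROUP BY tablespace_name) fs
 WHERE df.tablespace_name = fs.tablespace_name(+)
 ORDER BY pct_used DESC;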

5- Check the Fast Recovery Area size.


Use the following script to check Fast Recovery Area usage.
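One common check (a sketch; both views are standard in recent releases) queries V$RECOVERY_FILE_DEST for the overall limit and V$RECOVERY_AREA_USAGE for the per-file-type breakdown:

SELECT name,
       ROUND(space_limit / 1024 / 1024 / 1024, 2) limit_gb,
       ROUND(space_used / 1024 / 1024 / 1024, 2) used_gb,
       number_of_files
  FROM v$recovery_file_dest;

SELECT file_type, percent_space_used, percent_space_reclaimable
  FROM v$recovery_area_usage;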

6- Check the alert log for critical errors (e.g. corruption).


Use the following commands to check the alert log.
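For example (a sketch; the diag path placeholders below depend on your environment), you can tail the alert log through ADRCI or directly at the OS level:

[oracle@msddbadm01 ~]$ adrci
adrci> show alert -tail 100

[oracle@msddbadm01 ~]$ tail -200 $ORACLE_BASE/diag/rdbms/<db_name>/<SID>/trace/alert_<SID>.log | grep ORA-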

7- Check whether the latest archivelog and full backups completed successfully.
Use the following script to check Backups.
 
SELECT TO_CHAR(start_time, 'DD-MM-YYYY HH24:MI:SS') start_time,
       input_type,
       status,
       ROUND(elapsed_seconds / 3600, 1) time_hr,
       input_bytes / 1024 / 1024 / 1024 in_gb,
       output_bytes / 1024 / 1024 / 1024 out_gb,
       output_device_type
  FROM v$rman_backup_job_details
 WHERE start_time > SYSDATE - 3
 ORDER BY start_time DESC;

8- Check whether any session is blocking another session (blocking sessions and lock control).
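A quick way to do this (a generic sketch based on GV$SESSION, which works on both single-instance and RAC databases) is:

SELECT blocking_instance, blocking_session,
       inst_id, sid, serial#, event, seconds_in_wait
  FROM gv$session
 WHERE blocking_session IS NOT NULL
 ORDER BY blocking_session;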

9- Check whether the DBMS jobs are running and check their status.
Use the following script to check Scheduler jobs state.
-- Failed Scheduled Jobs 
SELECT owner, job_name, status, log_date, error#,
       (EXTRACT(SECOND FROM run_duration) / 60
        + EXTRACT(MINUTE FROM run_duration)
        + EXTRACT(HOUR FROM run_duration) * 60
        + EXTRACT(DAY FROM run_duration) * 60 * 24) minutes,
       additional_info
  FROM dba_scheduler_job_run_details
 WHERE log_date > SYSDATE - 1
   AND status != 'SUCCEEDED'
 ORDER BY 1 ASC, 4 DESC;

-- Running and Succeeded Scheduled Jobs


SELECT OWNER, JOB_NAME, LAST_START_DATE, STATE
FROM DBA_SCHEDULER_JOBS
WHERE LAST_START_DATE > SYSDATE - 1 AND STATE <> 'SCHEDULED';

10- Check whether Data Guard is synchronized.
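For example (a sketch; run this on the standby database), V$DATAGUARD_STATS reports the transport and apply lag:

SELECT name, value, time_computed
  FROM v$dataguard_stats
 WHERE name IN ('transport lag', 'apply lag');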

11- Check the Performance page of Enterprise Manager or Enterprise Manager Cloud Control.
Open the Performance page of Enterprise Manager Cloud Control to check performance.
12- Check the top sessions and top activity of the database.
Open the Top Activity page of Enterprise Manager Cloud Control to check top activity.

 
13- Detect locked objects.
Use the following script to check locked objects and tables.
 
SELECT B.Owner, B.Object_Name, A.Oracle_Username, A.OS_User_Name
FROM gv$Locked_Object A, All_Objects B
WHERE A.Object_ID = B.Object_ID;

14- Check for SQL queries consuming a lot of resources (CPU and disk).

Use the following script to check the top CPU and disk SQL statements.
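A common sketch of such a script ranks statements in V$SQLSTATS by CPU time (swap cpu_time for disk_reads in the ORDER BY to rank by disk I/O instead):

SELECT *
  FROM (SELECT sql_id,
               executions,
               ROUND(cpu_time / 1000000) cpu_sec,
               ROUND(elapsed_time / 1000000) elapsed_sec,
               disk_reads,
               buffer_gets
          FROM v$sqlstats
         ORDER BY cpu_time DESC)
 WHERE ROWNUM <= 10;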

15- Check the usage of physical RAM and the SGA, and whether paging or swapping is occurring.
[oracle@msddbadm01 ~]$ free -m

SQL> select * from v$sgainfo;


16- Check the log switch and archivelog generation frequency.

You can query the log switch (archivelog) frequency per hour and per day as follows.
select to_char(first_time,'YYYY-MON-DD') day,
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'9999') "00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'9999') "01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'9999') "02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'9999') "03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'9999') "04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'9999') "05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'9999') "06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'9999') "07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'9999') "08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'9999') "09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'9999') "10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'9999') "11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'9999') "12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'9999') "13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'9999') "14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'9999') "15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'9999') "16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'9999') "17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'9999') "18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'9999') "19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'9999') "20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'9999') "21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'9999') "22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'9999') "23"
from v$log_history group by to_char(first_time,'YYYY-MON-DD');
 

Daily Activity

1. Check whether the Oracle database instance is running.


2. Check whether the database listener is running.
3. Check whether any session is blocking another session.
4. Check the alert log for errors.
5. Check whether any DBMS jobs are running and check their status.
6. Check the top sessions using the most physical I/O.
7. Check the number of log switches per hour.
8. Check how much redo is generated per hour (how_much_redo_generated_per_hour.sql).
9. Run the Statspack report.
10. Detect locked objects.
11. Check for SQL queries consuming a lot of resources.
12. Check the usage of the SGA.
13. Display database sessions using rollback segments.
14. Check the state of the DB block buffers.

Weekly Activity

1. Check for fragmented objects.


2. Check for chained and migrated rows.
3. Check the size of tables and whether they need partitioning.
4. Check for block corruption.
5. Check for tables without a primary key.
6. Check for tables with no indexes.
7. Check for tables with too many indexes.
8. Check for tables with foreign keys but no supporting index.
9. Check for objects with many extents.
10. Check the frequently pinned objects and place them in a separate tablespace and in the cache; check for objects reloaded into memory many times.
11. Check the free space at the OS level.
12. Check the CPU and memory usage at the OS level and define thresholds for them.
13. Check the used and free blocks at the object level as well as for tablespaces.
14. Check for objects reaching their max extents.
15. Check the free space in the tablespaces.
16. Check for invalid objects in the database.
17. Check that open cursors are not reaching the maximum limit.
18. Check that locks are not reaching the maximum lock count.
19. Check the free quota available for each user.
20. Check the I/O of each data file.

Monthly Activity

1. Check the database size and compare it with the previous size to find the exact growth of the database.
2. Find the tablespace status, segment management, initial and max extents, and extent management.
3. Check the location of the data files and whether they are autoextensible.
4. Check the default and temporary tablespace of each user.
5. Check for indexes that have never been used.
6. Check the extents of each object and whether any object's extent settings override those defined at the tablespace level.
7. Check whether any tablespace needs coalescing.
8. Check the overall database statistics.
9. Perform trend analysis of objects: tablespace, last analyzed, number of rows, growth in days, and growth in KB.

Nightly Activity

1. Analyze the objects routinely.


2. Check which indexes need to be rebuilt.
3. Check the tablespaces of the respective tables and indexes.

One Time Activity


1. Create database users with the required privileges.
2. Build a reference of common Oracle errors with possible solutions.
3. Check the database startup time (if not 24x7).
4. Check the location of the control files.
5. Check the location of the log files.
6. Prepare the backup strategy and test all recovery scenarios.

2) Explain the cluster startup sequence.

Brief explanation of the startup sequence.

Once the operating system starts and finishes the bootstrap process, the initialization
daemon (init) reads /etc/inittab. The inittab entry is what triggers the Oracle High
Availability Services daemon (OHASD).

1. When a node of an Oracle Clusterware cluster starts, OHASD is started by platform-specific


means like init.d in Linux. OHASD is the root for bringing up Oracle Clusterware. OHASD
has access to the OLR (Oracle Local Registry) stored on the local file system. OLR provides
needed data to complete OHASD initialization.
2. OHASD brings up GPNPD and CSSD (Cluster Synchronization Services Daemon). CSSD
has access to the GPNP Profile stored on the local file system. This profile contains the
following vital bootstrap data:
a. ASM Diskgroup Discovery String
b. ASM SPFILE location (Diskgroup name)
c. Name of the ASM Diskgroup containing the Voting Files
3. The Voting Files locations on ASM Disks are accessed by CSSD with well-known pointers in
the ASM Disk headers and CSSD is able to complete initialization and start or join an
existing cluster.
4. OHASD starts an ASM instance and ASM can now operate with CSSD initialized and
operating. The ASM instance uses special code to locate the contents of the ASM SPFILE,
assuming it is stored in a Diskgroup.
5. With an ASM instance operating and its Diskgroups mounted, access to Clusterware’s OCR
is available to CRSD.
6. OHASD starts CRSD with access to the OCR in an ASM Diskgroup.
7. Clusterware completes initialization and brings up other services under its control.

When Clusterware starts three files are involved.

1. OLR – This is the first file to be read and opened. It is local and contains
information about where the voting disk is stored
and the information needed to start ASM (e.g. the ASM discovery string).

2. VOTING DISK – This is the second file to be opened and read; it depends
only on the OLR being accessible.

ASM starts after CSSD; ASM cannot start if CSSD is offline (i.e. the voting files are inaccessible).

How are Voting Disks stored in ASM?

Voting disks are placed directly on ASM disks. Oracle Clusterware stores the voting
files on the disks within the disk group designated to hold them.
Oracle Clusterware does not rely on a running ASM instance to access the voting files,
which means it does not need a mounted diskgroup to read and write them on the ASM disks.
You can check for the existence of voting files on an ASM disk using the VOTING_FILE
column of V$ASM_DISK.
So although the voting files do not depend on a diskgroup to be accessed, that does not
mean the diskgroup is not needed: the diskgroup and the voting files are linked by their settings.

3. OCR – Finally, the ASM instance starts and mounts all diskgroups; then the Clusterware
daemon (CRSD) opens and reads the OCR, which is stored in a diskgroup.

So once ASM has started, it does not depend on the OCR or OLR being online; ASM depends
on CSSD (and thus the voting disk) being online.

There is an exclusive mode to start ASM without CSSD, but it is intended only for
restoring the OCR or voting files.

As per the Oracle documentation, below are the high-level steps of Clusterware initialization.
init spawns init.ohasd (with respawn), which in turn starts the OHASD process (Oracle High
Availability Services Daemon). This daemon spawns four agent processes.

Level 1: OHASD Spawns:


• cssdagent – Agent responsible for spawning CSSD.
• orarootagent – Agent responsible for managing all root owned ohasd resources.
• oraagent – Agent responsible for managing all oracle owned ohasd resources.
• cssdmonitor – Monitors CSSD and node health (along with the cssdagent).

Level 2: OHASD rootagent spawns:


• CRSD – Primary daemon responsible for managing cluster resources.
• CTSSD – Cluster Time Synchronization Services Daemon
• Diskmon
• ACFS (ASM Cluster File System) Drivers

Level 3: OHASD oraagent spawns:


• MDNSD – Used for DNS lookup
• GIPCD – Used for inter-process and inter-node communication
• GPNPD – Grid Plug & Play Profile Daemon
• EVMD – Event Monitor Daemon
• ASM – Resource for monitoring ASM instances

Level 4: CRSD spawns:


• orarootagent – Agent responsible for managing all root owned crsd resources.
• oraagent – Agent responsible for managing all oracle owned crsd resources.
Level 4: CRSD rootagent spawns:
• Network resource – To monitor the public network
• SCAN VIP(s) – Single Client Access Name Virtual IPs
• Node VIPs – One per node
• ACFS Registry – For mounting ASM Cluster File System
• GNS VIP (optional) – VIP for GNS

Level 5: CRSD oraagent spawns:


• ASM Resource – ASM Instance(s) resource
• Diskgroup – Used for managing/monitoring ASM diskgroups.
• DB Resource – Used for monitoring and managing the DB and instances
• SCAN Listener – Listener for single client access name, listening on SCAN VIP
• Listener – Node listener listening on the Node VIP
• Services – Used for monitoring and managing services
• ONS – Oracle Notification Service
• eONS – Enhanced Oracle Notification Service
• GSD – For 9i backward compatibility
• GNS (optional) – Grid Naming Service – Performs name resolution

What is GPNP Profile?


The GPnP profile is an XML file located at CRS_HOME/profiles/peer as profile.xml. Each
node of the cluster keeps a local copy of this profile, which is maintained
by ora.gpnpd (the GPnP daemon) together with ora.mdnsd (the mDNS daemon).

This GPnP profile (profile.xml) contains information such as:

Network interfaces for public and private interconnect


ASM server parameter file
CSS voting disks
Cluster name
Cluster ID
Copy the GPnP profile to /tmp and view it to see its complete contents.
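For example (prompt and hostname as used elsewhere in this document), you can also dump the profile XML directly with the gpnptool utility from the Grid home:

[grid@msdidb01 ~]$ gpnptool get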

What is OLR?

In Oracle Clusterware 11g Release 2 an additional component related to the OCR called the
Oracle Local Registry (OLR) is installed on each node in the cluster.

The OLR is a local registry for node-specific resources. The OLR is located at
CRS_HOME/cdata/<hostname>.olr, and its location is stored in /etc/oracle/olr.loc.

Some important information which the OLR contains:

Active crs version


ORA_CRS_HOME
GPnP details
OCR latest backup time and location etc.

This is the first file read to obtain the information needed to start the CRS stack.
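For example (a sketch; run as root from the Grid Infrastructure home), you can inspect the OLR location and verify its integrity as follows:

[root@msddbadm01 ~]# cat /etc/oracle/olr.loc
[root@msddbadm01 ~]# ocrcheck -local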

Suppose users are connected to node 1 and the OS team needs to do emergency
maintenance and reboot it. What will happen to the transactions
on node 1, and can we move them to node 2?

The Oracle RAC solution delivers 24/7 database availability, performance, and
scalability.
Cache Fusion is the key memory feature that enables Oracle RAC performance,
and Transparent Application Failover (TAF) is what applications use to
stay in sync with Oracle RAC availability.

Oracle RAC and Hardware Failover

To detect a node failure, the Cluster Manager uses a background process, the Global
Enqueue Service Monitor (LMON), to monitor the health of the cluster. When a
node fails, the Cluster Manager reports the change in the cluster's membership to
Global Cache Services (GCS) and Global Enqueue Services (GES). These services
are then re-mastered based on the current membership of the cluster.

To successfully re-master the cluster services, Oracle RAC keeps track of all
resources and resource states on each node and then uses this information to restart
these resources on a backup node.

These processes also manage the state of in-flight transactions and work with TAF
to either restart or resume the transactions on the new node.

Using Transparent Application Failover

After an Oracle RAC node crashes, usually from a hardware failure, all new
application transactions are automatically rerouted to a specified backup node. The
challenge in rerouting is to not lose transactions that were "in flight" at the exact
moment of the crash. One of the requirements of continuous availability is the
ability to restart in-flight application transactions, allowing a failed node to resume
processing on another server without interruption. Oracle's answer to application
failover is a new Oracle Net mechanism dubbed Transparent Application Failover.
TAF allows the DBA to configure the type and method of failover for each Oracle
Net client.

Transparent Application Failover (TAF) is what applications use to stay in sync with
Oracle RAC availability. TAF reroutes application clients to an available database node in
the cluster when the connected node fails. Application clients do not see error messages
describing the loss of service.

In the above configuration, if a user's connection to node 1 dies, their transaction is rolled
back, but they can continue working without having to manually reconnect.

For an application to use TAF, it must use failover-aware API calls from the
Oracle Call Interface (OCI). Inside OCI are TAF callback routines that can be used
to make any application failover-aware.

While the concept of failover is simple, providing an apparent instant failover can
be extremely complex, because there are many ways to restart in-flight
transactions. The TAF architecture offers the ability to restart transactions at either
the transaction (SELECT) or session level:

 SELECT failover. With SELECT failover, Oracle Net keeps track of


all SELECT statements issued during the transaction, tracking how many rows
have been fetched back to the client for each cursor associated with
a SELECT statement. If the connection to the instance is lost, Oracle Net
establishes a connection to another Oracle RAC node and re-executes
the SELECT statements, repositioning the cursors so the client can continue
fetching rows as if nothing has happened. The SELECT failover approach is
best for data warehouse systems that perform complex and time-consuming
transactions.
 SESSION failover. When the connection to an instance is lost, SESSION
failover results only in the establishment of a new connection to another
Oracle RAC node; any work in progress is lost. SESSION failover is ideal
for online transaction processing (OLTP) systems, where transactions are
small.

Oracle TAF also offers choices on how to restart a failed transaction. The Oracle
DBA may choose one of the following failover methods:

 BASIC failover. In this approach, the application connects to a backup node


only after the primary connection fails. This approach has low overhead, but
the end user experiences a delay while the new connection is created.
 PRECONNECT failover. In this approach, the application simultaneously
connects to both a primary and a backup node. This offers faster failover,
because a pre-spawned connection is ready to use. But the extra connection
adds everyday overhead by duplicating connections.

Currently, TAF will fail over standard SQL SELECT statements that have been
caught during a node crash in an in-flight transaction failure. In the current release
of TAF, however, TAF must restart some types of transactions from the beginning
of the transaction.

The following types of transactions do not automatically fail over and must be
restarted by TAF:

 Transactional statements. Transactions involving INSERT,


UPDATE, or DELETE statements are not supported by TAF.
 ALTER SESSION statements. ALTER SESSION and
SQL*Plus SET statements do not fail over.

The following do not fail over and cannot be restarted:

 Temporary objects. Transactions using temporary segments in the TEMP


tablespace and global temporary tables do not fail over.
 PL/SQL package states. PL/SQL package states are lost during failover.

bubba.world =
  (DESCRIPTION_LIST =
    (FAILOVER = true)
    (LOAD_BALANCE = true)
    (DESCRIPTION =
      (ADDRESS =
        (PROTOCOL = TCP)
        (HOST = redneck)(PORT = 1521))
      (CONNECT_DATA =
        (SERVICE_NAME = bubba)
        (SERVER = dedicated)
        (FAILOVER_MODE =
          (BACKUP = cletus)
          (TYPE = select)
          (METHOD = preconnect)
          (RETRIES = 20)
          (DELAY = 3)
        )
      )
    )
  )

The FAILOVER_MODE section of the tnsnames.ora file lists the parameters and their
values:
BACKUP=cletus. This names the backup node that will take over failed connections when a node crashes. In this
example, the primary service is bubba, and TAF will reconnect failed transactions to the cletus instance in case of
server failure.

TYPE=select. This directs TAF to track the state of open SELECT cursors so that, after failover, queries can resume
fetching where they left off.

METHOD=preconnect. This directs TAF to create two connections at session startup time: one to the primary
bubba database and a backup connection to the cletus database. In case of instance failure, the cletus connection will
be ready to resume the failed transaction.

RETRIES=20. This directs TAF to retry a failover connection up to 20 times.

DELAY=3. This tells TAF to wait three seconds between connection retries.

Remember, you must set these TAF parameters in every tnsnames.ora file on every


Oracle Net client that needs transparent failover.

Watching TAF in Action

The transparency of TAF operation is a tremendous advantage to application users,


but DBAs need to quickly see what has happened and where failover traffic is
going, and they need to be able to get the status of failover transactions. To provide
this capability, the Oracle data dictionary has several new columns in the
V$SESSION view that give the current status of failover transactions.

The following query selects the FAILOVER_TYPE, FAILOVER_METHOD,
and FAILED_OVER columns of the V$SESSION view. Note that the
query is restricted to non-system sessions, because Oracle data definition language
(DDL) and data manipulation language (DML) are not recoverable with TAF.
select username,
       sid,
       serial#,
       failover_type,
       failover_method,
       failed_over
  from v$session
 where username not in ('SYS', 'SYSTEM', 'PERFSTAT')
   and failed_over = 'YES';

What are the advantages of partitioning?

1) Performance, 2) Maintenance, and 3) Availability

Performance Advantages
The main advantage, and the purpose, of partitioning is to provide
performance benefits. It also enables better manageability for various
applications. The objective of partitioning is to divide database objects, such as
tables and indexes, into smaller, more manageable pieces.

Can we convert non-partitioned table to partitioned table online( in


production)?

In previous releases you could partition a non-partitioned table using EXCHANGE


PARTITION or DBMS_REDEFINITION in an "almost online" manner, but both methods
required multiple steps. Oracle Database 12c Release 2 makes it easier than ever to convert a
non-partitioned table to a partitioned table, requiring only a single command and no
downtime.

This is one of the new features of the Oracle 12.2 release.


Non-partitioned tables can be converted to partitioned tables online, without any
downtime to the application, i.e. with no impact to DML activity.
Until now we used DBMS_REDEFINITION for this activity, but the Oracle 12.2 release
simplifies it considerably.

1. Identify the non-partitioned table.


SQL> desc BSSTDBA.ORDER_TAB
Name Null? Type
----------------------------------------- -------- ----------------------------
ROW_ID NOT NULL VARCHAR2(15 CHAR)
CREATED NOT NULL DATE
CREATED_BY NOT NULL VARCHAR2(15 CHAR)
LAST_UPD NOT NULL DATE
MODIFICATION_NUM NOT NULL NUMBER(10)
CONFLICT_ID NOT NULL VARCHAR2(15 CHAR)
ALW_PART_SHIP_FLG NOT NULL CHAR(1 CHAR)

SQL> col owner for a13


SQL> col table_name for a14
SQL> set lines 299
SQL> select owner,table_name,partitioned from dba_tables where
table_name='ORDER_TAB';

OWNER TABLE_NAME PAR


------------- -------------- ---
BSSTDBA ORDER_TAB NO

SQL> select count(*) from BSSTDBA.ORDER_TAB;

COUNT(*)
----------
954598

SQL> create index BSSTDBA.ORDER_TAB_IND1 on BSSTDBA.ORDER_TAB(row_id);

Index created.

SQL> create index BSSTDBA.ORDER_TAB_IND2 on BSSTDBA.ORDER_TAB(created);

2. Alter the table with MODIFY to partition it (the partition key is the CREATED column).
alter table BSSTDBA.ORDER_TAB modify
PARTITION BY RANGE (CREATED)
(partition created_2105_p8 VALUES LESS THAN (TO_DATE('01/09/2015', 'DD/MM/YYYY')),
partition created_2105_p9 VALUES LESS THAN (TO_DATE('01/10/2015', 'DD/MM/YYYY')),
partition created_2105_p10 VALUES LESS THAN (TO_DATE('01/11/2015', 'DD/MM/YYYY')),
partition created_2105_p11 VALUES LESS THAN (TO_DATE('01/12/2015', 'DD/MM/YYYY')),
partition created_2105_p12 VALUES LESS THAN (TO_DATE('01/01/2016', 'DD/MM/YYYY')),
PARTITION Created_MX VALUES LESS THAN (MAXVALUE)) ONLINE;

This activity will take some time depending on the amount of data the table has.
While this ALTER statement was running, I started DML activity against the same
table to check whether the conversion impacts DML.
SESSION 2:
insert into BSSTDBA.ORDER_TAB select * from BSSTDBA.ORDER_TAB;

Lets check for blocking session:


SID  USERNAME  MODULE                          STATUS  EVENT                          BLOCKING_SESSION
---- --------- ------------------------------- ------- ------------------------------ ----------------
490  SYS       sqlplus@bttstdev64 (TNS V1-V3)  ACTIVE  enq: TX - row lock contention  7

SID > 490


SQL_TEXT > alter table BSSTDBA.ORDER_TAB modify PARTITION BY RANGE (CREATE
D) (partition created_2105_p8 VALUES LESS THAN (TO_DATE('01/09/2
015', 'DD/MM/YYYY')), partition created_2105_p9 VALUES LESS THAN
(TO_DATE('01/10/2015', 'DD/MM/YYYY')), partition created_2105_p
10 VALUES LESS THAN (TO_DATE('01/11/2015', 'DD/MM/YYYY')), parti
tion created_2105_p11 VALUES LESS THAN (TO_DATE('01/12/2015', 'D
D/MM/YYYY')), partition created_2105_p12 VALUES LESS THAN (TO_DA
TE('01/01/2016', 'DD/MM/YYYY')), partition created_2016_p1 VALUE
THAN (MAXVALUE)) ONLINE

SID > 7
SQL_TEXT> insert into BSSTDBA.ORDER_TAB select * from BSSTDBA.ORDER_TAB;
We can see that the insert statement (SID 7) is blocking the ALTER TABLE
command (SID 490), not the other way around. It means that during this partition conversion,
any incoming DML requests are allowed to complete first.
This may slow down the partition conversion, but it will not impact the
application. Once the ALTER TABLE ... MODIFY completes, check whether the table was
partitioned properly.
SQL> select partition_name,high_value from dba_tab_partitions where
table_name='ORDER_TAB';

PARTITION_NAME HIGH_VALUE
-----------------------
--------------------------------------------------------------------------------
CREATED_2105_P10 TO_DATE(' 2015-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS',
'NLS_CALENDAR=GREGORIA
CREATED_2105_P11 TO_DATE(' 2015-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS',
'NLS_CALENDAR=GREGORIA
CREATED_2105_P12 TO_DATE(' 2016-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS',
'NLS_CALENDAR=GREGORIA
CREATED_2105_P8 TO_DATE(' 2015-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS',
'NLS_CALENDAR=GREGORIA
CREATED_2105_P9 TO_DATE(' 2015-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS',
'NLS_CALENDAR=GREGORIA
CREATED_MX MAXVALUE

20 rows selected.

But what happened to the INDEXES:


select index_name,PARTITIONED from dba_indexes where table_name='ORDER_TAB';

INDEX_NAME PARTITIONED
------------------- ------------
ORDER_TAB_IND1 NO
ORDER_TAB_IND2 YES

We can see that ORDER_TAB_IND1 is non-partitioned, while ORDER_TAB_IND2 is
partitioned.
The Oracle documentation says:
If no index clause is mentioned in the ALTER TABLE statement, then
nonprefixed indexes (i.e. the index column is not a partition key) become global
non-partitioned indexes, and
prefixed indexes (i.e. the index column is a partition key) become local partitioned indexes.
ORDER_TAB_IND1
--------------

INDEX_SQL -> create index BSSTDBA.ORDER_TAB_IND1 on BSSTDBA.ORDER_TAB(row_id);

It is a nonprefixed index, i.e. the index column is not a partition key, so it became a
global non-partitioned index.

ORDER_TAB_IND2
--------------

create index BSSTDBA.ORDER_TAB_IND2 on BSSTDBA.ORDER_TAB(created);

It is a prefixed index, i.e. the index column is a partition key,
so this index became a local partitioned index.

SQL> select index_name,PARTITION_NAME,HIGH_VALUE from dba_ind_partitions where


index_name='ORDER_TAB_IND2';

INDEX_NAME PARTITION_NAME HIGH_VALUE


------------------- -----------------------
--------------------------------------------------------------------------------
ORDER_TAB_IND2 CREATED_2016_P9 TO_DATE(' 2016-10-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
ORDER_TAB_IND2 CREATED_2105_P10 TO_DATE(' 2015-11-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
ORDER_TAB_IND2 CREATED_2105_P11 TO_DATE(' 2015-12-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
ORDER_TAB_IND2 CREATED_2105_P12 TO_DATE(' 2016-01-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
ORDER_TAB_IND2 CREATED_2105_P8 TO_DATE(' 2015-09-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
ORDER_TAB_IND2 CREATED_2105_P9 TO_DATE(' 2015-10-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
ORDER_TAB_IND2 CREATED_MX MAXVALUE

20 rows selected.
Online Conversion of a Non-Partitioned Table to a Partitioned Table in Oracle Database 12c Release 2 (12.2)
In previous releases you could partition a non-partitioned table using EXCHANGE
PARTITION or DBMS_REDEFINITION in an "almost online" manner, but both methods
required multiple steps. Oracle Database 12c Release 2 makes it easier than ever to
convert a non-partitioned table to a partitioned table, requiring only a single
command and no downtime.
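For contrast, the pre-12.2 EXCHANGE PARTITION approach needed several separate steps: create a partitioned interim table, swap the segments, drop the original, and rename. A simplified sketch (table, partition, and column names here are illustrative, not from the article):

```sql
-- 1. Create an empty partitioned copy of the source table.
CREATE TABLE t1_interim (
  id            NUMBER,
  description   VARCHAR2(50),
  created_date  DATE
)
PARTITION BY RANGE (created_date) (
  PARTITION t1_part_all VALUES LESS THAN (MAXVALUE)
);

-- 2. Swap the non-partitioned table's segment into the single partition.
ALTER TABLE t1_interim
  EXCHANGE PARTITION t1_part_all
  WITH TABLE t1
  WITHOUT VALIDATION;

-- 3. Drop the old table and rename the interim one.
--    Indexes, constraints, and grants must be recreated manually,
--    and SPLIT PARTITION steps would then carve out the real partitions.
DROP TABLE t1;
RENAME t1_interim TO t1;
```

Each of these steps takes its own locks, which is why the method was only "almost online"; the 12.2 single-command conversion avoids all of this.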

 Setup
 Partition a Table
 Composite Partition (Sub-Partition) a Table
 Restrictions

Related articles.

 Online Conversion of a Non-Partitioned Table to a Partitioned Table in Oracle 12.2 Onward
 Partitioning an Existing Table using EXCHANGE PARTITION
 Partitioning an Existing Table using DBMS_REDEFINITION
 All Partitioning Articles
 Partitioning Enhancements in Oracle Database 12c Release 2 (12.2)

Setup
Create and populate a test table. You will need to repeat this between each test.

DROP TABLE t1 PURGE;

CREATE TABLE t1 (
  id            NUMBER,
  description   VARCHAR2(50),
  created_date  DATE,
  CONSTRAINT t1_pk PRIMARY KEY (id)
);

CREATE INDEX t1_created_date_idx ON t1(created_date);

INSERT INTO t1
SELECT level,
       'Description for ' || level,
       ADD_MONTHS(TO_DATE('01-JAN-2017', 'DD-MON-YYYY'), -TRUNC(DBMS_RANDOM.value(1,4)-1)*12)
FROM   dual
CONNECT BY level <= 10000;
COMMIT;

We can see the data is spread across three years.

SELECT created_date, COUNT(*)
FROM   t1
GROUP BY created_date
ORDER BY 1;

CREATED_D COUNT(*)
--------- ----------
01-JAN-15 3340
01-JAN-16 3290
01-JAN-17 3370

SQL>

Partition a Table
We convert the table to a partitioned table using the ALTER TABLE ...
MODIFY command. Here are some basic examples of this operation. Adding
the ONLINE keyword allows the operation to be completed online, meaning most
DML operations can continue during the conversion.

-- Online operation.
ALTER TABLE t1 MODIFY
  PARTITION BY RANGE (created_date) (
    PARTITION t1_part_2015 VALUES LESS THAN (TO_DATE('01-JAN-2016','DD-MON-YYYY')),
    PARTITION t1_part_2016 VALUES LESS THAN (TO_DATE('01-JAN-2017','DD-MON-YYYY')),
    PARTITION t1_part_2017 VALUES LESS THAN (TO_DATE('01-JAN-2018','DD-MON-YYYY'))
  ) ONLINE;

We gather statistics and check the table partitions. We can see the data has been
split between the partitions as expected.

-- Gather statistics.
EXEC DBMS_STATS.gather_table_stats(NULL, 'T1');

-- Check table partitions.
SELECT table_name, partition_name, num_rows
FROM   user_tab_partitions
ORDER BY 1,2;

TABLE_NAME           PARTITION_NAME       NUM_ROWS
-------------------- -------------------- ----------
T1                   T1_PART_2015         3340
T1                   T1_PART_2016         3290
T1                   T1_PART_2017         3370

SQL>

When we check the indexes we see the index on the CREATED_DATE column has been
converted to a locally partitioned index. By default, all prefixed indexes (those with
the partition key in the column list) are converted to locally partitioned indexes.
All indexes are left in a valid state at the end of the operation.

-- Check indexes.
SELECT index_name, partitioned, status
FROM user_indexes
ORDER BY 1;

INDEX_NAME           PARTITIONED STATUS
-------------------- ----------- --------
T1_CREATED_DATE_IDX  YES         N/A
T1_PK                NO          VALID

SQL>

-- Check index partitions.
SELECT index_name, partition_name, status
FROM   user_ind_partitions
ORDER BY 1,2;

INDEX_NAME           PARTITION_NAME       STATUS
-------------------- -------------------- --------
T1_CREATED_DATE_IDX  T1_PART_2015         USABLE
T1_CREATED_DATE_IDX  T1_PART_2016         USABLE
T1_CREATED_DATE_IDX  T1_PART_2017         USABLE

SQL>

The offline operation is similar, but we omit the ONLINE keyword. The outcome will be
the same as the online operation, but DML will not be available during the operation.

-- Offline operation.
ALTER TABLE t1 MODIFY
  PARTITION BY RANGE (created_date) (
    PARTITION t1_part_2015 VALUES LESS THAN (TO_DATE('01-JAN-2016','DD-MON-YYYY')),
    PARTITION t1_part_2016 VALUES LESS THAN (TO_DATE('01-JAN-2017','DD-MON-YYYY')),
    PARTITION t1_part_2017 VALUES LESS THAN (TO_DATE('01-JAN-2018','DD-MON-YYYY'))
  );

We can influence the index conversion using the UPDATE INDEXES clause. This clause
specifies how each index should be partitioned after the operation, and can also set
some storage parameters. In the following example we want the index on
the CREATED_DATE column to remain a global, non-partitioned index.

-- Online operation with modification of index partitioning.
ALTER TABLE t1 MODIFY
  PARTITION BY RANGE (created_date) (
    PARTITION t1_part_2015 VALUES LESS THAN (TO_DATE('01-JAN-2016','DD-MON-YYYY')),
    PARTITION t1_part_2016 VALUES LESS THAN (TO_DATE('01-JAN-2017','DD-MON-YYYY')),
    PARTITION t1_part_2017 VALUES LESS THAN (TO_DATE('01-JAN-2018','DD-MON-YYYY'))
  ) ONLINE
  UPDATE INDEXES
  (
    t1_pk GLOBAL,
    t1_created_date_idx GLOBAL
  );

After running the last example we can see that both indexes are non-partitioned, and
as a result there are no index partitions.

-- Check indexes.
SELECT index_name, partitioned, status
FROM user_indexes
ORDER BY 1;

INDEX_NAME           PARTITIONED STATUS
-------------------- ----------- --------
T1_CREATED_DATE_IDX  NO          VALID
T1_PK                NO          VALID

SQL>

-- Check index partitions.
SELECT index_name, partition_name, status
FROM   user_ind_partitions
ORDER BY 1,2;

no rows selected

SQL>

Composite Partition (Sub-Partition) a Table


The original table can also be composite-partitioned using the ALTER TABLE ...
MODIFY command. In this example we convert the original table to a range-hash
partitioned table.

ALTER TABLE t1 MODIFY
  PARTITION BY RANGE (created_date) SUBPARTITION BY HASH (id) (
    PARTITION t1_part_2015 VALUES LESS THAN (TO_DATE('01-JAN-2016','DD-MON-YYYY')) (
      SUBPARTITION t1_sub_part_2015_1,
      SUBPARTITION t1_sub_part_2015_2,
      SUBPARTITION t1_sub_part_2015_3,
      SUBPARTITION t1_sub_part_2015_4
    ),
    PARTITION t1_part_2016 VALUES LESS THAN (TO_DATE('01-JAN-2017','DD-MON-YYYY')) (
      SUBPARTITION t1_sub_part_2016_1,
      SUBPARTITION t1_sub_part_2016_2,
      SUBPARTITION t1_sub_part_2016_3,
      SUBPARTITION t1_sub_part_2016_4
    ),
    PARTITION t1_part_2017 VALUES LESS THAN (TO_DATE('01-JAN-2018','DD-MON-YYYY')) (
      SUBPARTITION t1_sub_part_2017_1,
      SUBPARTITION t1_sub_part_2017_2,
      SUBPARTITION t1_sub_part_2017_3,
      SUBPARTITION t1_sub_part_2017_4
    )
  ) ONLINE
  UPDATE INDEXES
  (
    t1_pk GLOBAL,
    t1_created_date_idx LOCAL
  );

The sub-partitions of the table and partitioned index can be displayed using the
following queries.

COLUMN table_name FORMAT A20
COLUMN partition_name FORMAT A20
COLUMN subpartition_name FORMAT A20

SELECT table_name, partition_name, subpartition_name
FROM   user_tab_subpartitions
ORDER BY 1,2,3;

TABLE_NAME           PARTITION_NAME       SUBPARTITION_NAME
-------------------- -------------------- --------------------
T1                   T1_PART_2015         T1_SUB_PART_2015_1
T1                   T1_PART_2015         T1_SUB_PART_2015_2
T1                   T1_PART_2015         T1_SUB_PART_2015_3
T1                   T1_PART_2015         T1_SUB_PART_2015_4
T1                   T1_PART_2016         T1_SUB_PART_2016_1
T1                   T1_PART_2016         T1_SUB_PART_2016_2
T1                   T1_PART_2016         T1_SUB_PART_2016_3
T1                   T1_PART_2016         T1_SUB_PART_2016_4
T1                   T1_PART_2017         T1_SUB_PART_2017_1
T1                   T1_PART_2017         T1_SUB_PART_2017_2
T1                   T1_PART_2017         T1_SUB_PART_2017_3
T1                   T1_PART_2017         T1_SUB_PART_2017_4

SQL>

COLUMN index_name FORMAT A20
COLUMN partition_name FORMAT A20
COLUMN subpartition_name FORMAT A20

SELECT index_name, partition_name, subpartition_name, status
FROM   user_ind_subpartitions
ORDER BY 1,2;

INDEX_NAME           PARTITION_NAME       SUBPARTITION_NAME    STATUS
-------------------- -------------------- -------------------- --------
T1_CREATED_DATE_IDX  T1_PART_2015         T1_SUB_PART_2015_1   USABLE
T1_CREATED_DATE_IDX  T1_PART_2015         T1_SUB_PART_2015_2   USABLE
T1_CREATED_DATE_IDX  T1_PART_2015         T1_SUB_PART_2015_3   USABLE
T1_CREATED_DATE_IDX  T1_PART_2015         T1_SUB_PART_2015_4   USABLE
T1_CREATED_DATE_IDX  T1_PART_2016         T1_SUB_PART_2016_1   USABLE
T1_CREATED_DATE_IDX  T1_PART_2016         T1_SUB_PART_2016_2   USABLE
T1_CREATED_DATE_IDX  T1_PART_2016         T1_SUB_PART_2016_4   USABLE
T1_CREATED_DATE_IDX  T1_PART_2016         T1_SUB_PART_2016_3   USABLE
T1_CREATED_DATE_IDX  T1_PART_2017         T1_SUB_PART_2017_1   USABLE
T1_CREATED_DATE_IDX  T1_PART_2017         T1_SUB_PART_2017_3   USABLE
T1_CREATED_DATE_IDX  T1_PART_2017         T1_SUB_PART_2017_2   USABLE
T1_CREATED_DATE_IDX  T1_PART_2017         T1_SUB_PART_2017_4   USABLE

SQL>
