Adding Far Sync Instances to Existing Data Guard Configuration

This post lists the steps for adding far sync instances to an existing data guard configuration (for both the primary and standby databases, so a far sync is available when the standby becomes primary). Far sync is a new feature introduced with 12c that allows redo to be transported synchronously from the primary database to a "nearby" far sync instance, which then transports the redo asynchronously over a longer distance. The idea is that synchronous transport to a nearby far sync instance imposes far less overhead on the primary than synchronous transport over a long distance, while still achieving zero data loss and off-loading the redo transport from the primary.
Oracle documentation provides a complete description of the far sync concept. Given below are a few important excerpts.
Many configurations have a primary database shipping redo to a standby database using asynchronous transport at the risk of some data loss at failover time. Using synchronous redo transport to achieve zero data loss may not be a viable option because of the impact on the commit response times at the primary due to network latency between the two databases. Creating a far sync instance close to the primary has the benefit of minimizing impact on commit response times to an acceptable threshold (due to the smaller network latency between primary and far sync instance) while allowing for higher data protection guarantees -- if the primary were to fail, and assuming the far sync instance was synchronized at the time of the failure, the far sync instance and the terminal standby would coordinate a final redo shipment from the far sync instance to the standby to ship any redo not yet available to the Standby and then perform a zero-data-loss failover.
A far sync instance manages a control file, receives redo into standby redo logs (SRLs), and archives those SRLs to local archived redo logs. A far sync instance does not have user data files, cannot be opened for access, cannot run redo apply, and can never function in the primary role or be converted to any type of standby database.
Far sync instances are part of the Oracle Active Data Guard Far Sync feature, which requires an Oracle Active Data Guard license.
In a configuration that contains a far sync instance, there must still be a direct network connection between the primary database and the remote standby database. The direct connection between the primary and the remote standby is used to perform health checks and switchover processing tasks. It is not used for redo transport unless the standby has been configured as an alternate destination in case the far sync instance fails and there is no alternate far sync configured to maintain the protection level.
The existing data guard configuration's primary database parameter settings and the active data guard standby creation script are given below (other prerequisites for setting up data guard are omitted).
Primary database parameter changes
alter system set log_archive_config='dg_config=(ent12c1,ent12c1s)' scope=both ;
alter system set log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=ent12c1' scope=both;
alter system set log_archive_dest_2='service=ENT12C1STNS ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=ent12c1s' scope=both;
alter system set log_archive_dest_state_2='defer' scope=both;
alter system set log_archive_dest_state_1='enable' scope=both;
alter system set fal_server='ENT12C1STNS' scope=both;
alter system set log_archive_max_processes=10 scope=both;
alter system set db_file_name_convert='/opt/app/oracle/oradata/ENT12C1S','/data/oradata/ENT12C1' scope=spfile;
alter system set log_file_name_convert='/opt/app/oracle/oradata/ENT12C1S','/data/oradata/ENT12C1' ,'/opt/app/oracle/fast_recovery_area/ENT12C1S','/data/flash_recovery/ENT12C1' scope=spfile;
alter system set standby_file_management='AUTO' scope=both;
Standby creation script

mkdir -p /opt/app/oracle/oradata/ENT12C1S/controlfile
mkdir -p /opt/app/oracle/fast_recovery_area/ENT12C1S/controlfile

duplicate target database for standby from active database spfile
parameter_value_convert 'ent12c1','ent12c1s','ENT12C1','ENT12C1S','data','opt/app/oracle','flash_recovery','fast_recovery_area'
set db_unique_name='ent12c1s'
set db_create_file_dest='/opt/app/oracle/oradata'
set db_recovery_file_dest='/opt/app/oracle/fast_recovery_area'
set db_file_name_convert='/data/oradata/ENT12C1','/opt/app/oracle/oradata/ENT12C1S'
set log_file_name_convert='/data/oradata/ENT12C1','/opt/app/oracle/oradata/ENT12C1S','/data/flash_recovery/ENT12C1','/opt/app/oracle/fast_recovery_area/ENT12C1S'
set log_archive_max_processes='10'
set fal_server='ENT12C1TNS'
set log_archive_dest_2='service=ENT12C1TNS ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=ent12c1'
set log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=ent12c1s';
This single instance data guard configuration, which transports redo asynchronously, is transformed as shown below.
In situation 1, ENT12C1 is the primary database, ENT12C1S is the standby and FS12C1 is the far sync instance to which the primary ships redo synchronously.
After a role switch, when ENT12C1S becomes the new primary, it uses FS12C1S as the far sync instance for synchronous redo transport.
In both situations there is a direct redo transport path between primary and standby, which is used to transport redo asynchronously in case of far sync instance failure. Once the far sync instance is back up, the data guard configuration reverts to using it for redo transport. If standby redo logs were created on the primary, standby redo logs are created automatically on the far sync instances when they are used for redo transport.
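If standby redo logs do not exist yet, they can be added on the primary (and standby) before introducing the far sync instances. A minimal sketch, assuming OMF is in use (db_create_file_dest set) and following the usual guideline of one more standby redo log group than online redo log groups, each the same size as the online logs; group counts and sizes here are illustrative only:
SQL> alter database add standby logfile thread 1 size 50M;
SQL> alter database add standby logfile thread 1 size 50M;
SQL> alter database add standby logfile thread 1 size 50M;
SQL> alter database add standby logfile thread 1 size 50M;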
1. On the servers used for creating the far sync instances, install the Oracle database software and create a listener. There's no requirement for a static listener configuration as the far sync instance automatically registers with the listener.
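A minimal listener.ora on a far sync host could look like the following (a sketch only; the host name and port match the TNS entries used later and are not an exact copy of the file used in this setup):

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = fs12c1-host)(PORT = 1521))
    )
  )

Start it with lsnrctl start before mounting the far sync instance.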
2. Create TNS entries for the far sync instances on the existing databases (primary and standby) and copy the existing TNS entries into the far sync instances' tnsnames.ora files.
cat tnsnames.ora

ENT12C1TNS =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ent12c1-host)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ent12c1)
)
)

ENT12C1STNS =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ent12c1s-host)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ent12c1)
)
)

FS12C1TNS =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = fs12c1-host)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = fs12c1)
)
)

FS12C1STNS =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = fs12c1s-host)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = fs12c1s)
)
)
3. Create a control file for the far sync instances by connecting to the primary database. The same control file is used for both far sync instances in this case (fs12c1 and fs12c1s).
SQL> ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/home/oracle/controlfs.ctl';
4. Copy the control file to the far sync instances. In this case the control file is multiplexed and renamed on the far sync instances (as shown in the pfiles in the subsequent steps).

scp controlfs.ctl fs12c1-host:/opt/app/oracle/oradata/FS12C1/controlfile/controlfs01.ctl
scp controlfs.ctl fs12c1s-host:/opt/app/oracle/oradata/FS12C1S/controlfile/control01.ctl
Similarly, copy the password file from the primary to $ORACLE_HOME/dbs on the servers where the far sync instances will be created. For the far sync instances fs12c1 and fs12c1s the password file needs to be renamed orapwfs12c1 and orapwfs12c1s respectively.
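A sketch of copying and renaming the password file (assuming the Oracle home on the far sync hosts is /opt/app/oracle/product/12.1.0/dbhome_1 and the primary's password file is orapwent12c1; adjust the paths to the actual environment):

scp $ORACLE_HOME/dbs/orapwent12c1 fs12c1-host:/opt/app/oracle/product/12.1.0/dbhome_1/dbs/orapwfs12c1
scp $ORACLE_HOME/dbs/orapwent12c1 fs12c1s-host:/opt/app/oracle/product/12.1.0/dbhome_1/dbs/orapwfs12c1s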

5. Create a pfile from the primary's spfile. This will be modified to reflect the far sync instance settings.
SQL> create pfile='/home/oracle/pfilefs.ora' from spfile;
6. Copy the pfile to the far sync instances ($ORACLE_HOME/dbs) and rename them to reflect the instance names (eg. initfs12c1.ora and initfs12c1s.ora). Modify the init file used for the primary's far sync instance (fs12c1) as shown below. Not all the parameters are needed for a far sync instance and those can be removed from the pfile.
cat initfs12c1.ora

*.audit_file_dest='/opt/app/oracle/admin/fs12c1/adump'
*.audit_trail='OS'
*.compatible='12.1.0.2'
*.control_files='/opt/app/oracle/oradata/FS12C1/controlfile/controlfs01.ctl','/opt/app/oracle/oradata/FS12C1/controlfile/controlfs02.ctl'
*.db_block_size=8192
*.db_create_file_dest='/opt/app/oracle/oradata'
*.db_name='ent12c1'
*.db_recovery_file_dest_size=21474836480
*.db_recovery_file_dest='/opt/app/oracle/fast_recovery_area'
*.db_unique_name='fs12c1'
*.diagnostic_dest='/opt/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=fs12c1XDB)'
*.fal_server='ENT12C1TNS'
*.log_archive_config='dg_config=(ent12c1,ent12c1s,fs12c1,fs12c1s)'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=fs12c1'
*.log_archive_dest_2='service=ENT12C1STNS ASYNC NOAFFIRM valid_for=(STANDBY_LOGFILES,standby_role) db_unique_name=ent12c1s max_failure=10 max_connections=5 reopen=180'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='enable'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=10
*.log_file_name_convert='/data/oradata/ENT12C1','/opt/app/oracle/oradata/FS12C1','/data/flash_recovery/ENT12C1','/opt/app/oracle/fast_recovery_area/FS12C1'
*.remote_login_passwordfile='EXCLUSIVE'
*.standby_file_management='AUTO'
7. Mount the far sync instance using the pfile and then create an spfile from the pfile. Without an spfile a warning is shown when the far sync instance is added to a data guard broker configuration. Besides that, having an spfile also helps with any subsequent parameter changes without the need to restart the far sync instance. Restart (mount) the far sync instance using the spfile.
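A minimal sketch of this step on the fs12c1 host (the SID and Oracle home path are assumptions based on the pfile above):

export ORACLE_SID=fs12c1
sqlplus / as sysdba

SQL> startup mount pfile='/opt/app/oracle/product/12.1.0/dbhome_1/dbs/initfs12c1.ora';
SQL> create spfile from pfile='/opt/app/oracle/product/12.1.0/dbhome_1/dbs/initfs12c1.ora';
SQL> shutdown immediate;
SQL> startup mount;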



8. Similarly, create the pfile for the far sync instance (fs12c1s) that will be used by the current standby when it becomes the primary.
cat initfs12c1s.ora

*.audit_file_dest='/opt/app/oracle/admin/fs12c1s/adump'
*.audit_trail='OS'
*.compatible='12.1.0.2'
*.control_files='/opt/app/oracle/oradata/FS12C1S/controlfile/control01.ctl','/opt/app/oracle/fast_recovery_area/FS12C1S/controlfile/control02.ctl'
*.db_create_file_dest='/opt/app/oracle/oradata'
*.db_name='ent12c1'
*.db_recovery_file_dest_size=21474836480
*.db_recovery_file_dest='/opt/app/oracle/fast_recovery_area'
*.db_unique_name='fs12c1s'
*.diagnostic_dest='/opt/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=fs12c1sXDB)'
*.fal_server='ENT12C1STNS'
*.log_archive_config='dg_config=(ent12c1,ent12c1s,fs12c1,fs12c1s)'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=fs12c1s'
*.log_archive_dest_2='service=ENT12C1TNS ASYNC NOAFFIRM valid_for=(standby_logfiles,standby_role) db_unique_name=ent12c1 max_failure=10 max_connections=5 reopen=180'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='enable'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=10
*.log_file_name_convert='/data/oradata/ENT12C1','/opt/app/oracle/oradata/FS12C1S','/data/flash_recovery/ENT12C1','/opt/app/oracle/fast_recovery_area/FS12C1S'
*.remote_login_passwordfile='EXCLUSIVE'
*.standby_file_management='AUTO'
9. Similar to step 7, create an spfile and restart (mount) the far sync instance using the spfile.

10. Update the log archive config parameter on both primary and standby to include the far sync instance information as well.
alter system set log_archive_config='dg_config=(ent12c1,ent12c1s,fs12c1,fs12c1s)' scope=both ;
11. Update the fal server parameter on the primary (ent12c1) as below, which allows ent12c1 (when it becomes a standby) to fetch archive logs either from the new primary (ent12c1s) or from the far sync instance (fs12c1s).
alter system set fal_server='ENT12C1STNS','FS12C1STNS' scope=both;
12. Update the fal server parameter on the standby (ent12c1s) so that it can fetch archive logs either from ent12c1 (the primary) or the far sync instance (fs12c1).
alter system set fal_server='ENT12C1TNS','FS12C1TNS' scope=both;
13. Update the log archive destination and log archive destination state on the primary such that redo transport is synchronous between the primary and the far sync instance and asynchronous between the primary and the standby (direct). Furthermore, the asynchronous log archive destination is set to the alternate state so that when the synchronous log archive destination fails the data guard configuration starts shipping redo via this alternate destination.
alter system set log_archive_dest_state_2='enable' scope=both;
alter system set log_archive_dest_2='service=FS12C1TNS SYNC AFFIRM db_unique_name=fs12c1 max_failure=1 valid_for=(online_logfiles,primary_role) alternate=log_archive_dest_3 max_connections=5' scope=both;

alter system set log_archive_dest_state_3='alternate' scope=both;
alter system set log_archive_dest_3='service=ENT12C1STNS ASYNC NOAFFIRM db_unique_name=ent12c1s valid_for=(online_logfiles,primary_role) alternate=log_archive_dest_2 max_failure=10 max_connections=5 reopen=180' scope=both;
14. Change and add log archive destination settings on the standby database so that when it becomes the primary (ent12c1s) it too can use a far sync instance for synchronous redo transport and, failing that, use asynchronous redo transport directly to the standby at the time (ent12c1).
alter system set log_archive_dest_state_2='enable' scope=both;
alter system set log_archive_dest_2='service=FS12C1STNS SYNC AFFIRM db_unique_name=fs12c1s valid_for=(online_logfiles,primary_role) alternate=log_archive_dest_3 max_connections=5 max_failure=1' scope=both;

alter system set log_archive_dest_state_3='alternate'scope=both;
alter system set log_archive_dest_3='service=ENT12C1TNS ASYNC NOAFFIRM db_unique_name=ent12c1 valid_for=(online_logfiles,primary_role) alternate=log_archive_dest_2 max_failure=10 max_connections=5 reopen=180' scope=both;
15. Increase the protection mode of the primary database to maximum availability.
ALTER DATABASE SET STANDBY TO MAXIMIZE AVAILABILITY;

SQL> SELECT PROTECTION_MODE FROM V$DATABASE;

PROTECTION_MODE
--------------------
MAXIMUM AVAILABILITY
16. Do a few log switches and verify that redo transport is happening via the far sync instance. The easiest way to monitor this is through the alert logs: if the redo was transported via the far sync instance, the log switch is logged in both the far sync instance alert log and the standby instance alert log. Any issues with a log archive destination can be observed on the primary
SQL> SELECT DEST_NAME,STATUS,DESTINATION FROM V$ARCHIVE_DEST WHERE DESTINATION IS NOT NULL;

DEST_NAME STATUS DESTINATION
------------------------------ --------- ------------------------------
LOG_ARCHIVE_DEST_1 VALID USE_DB_RECOVERY_FILE_DEST
LOG_ARCHIVE_DEST_2 VALID FS12C1TNS
LOG_ARCHIVE_DEST_3 ALTERNATE ENT12C1STNS
and on the far sync instance
SQL> SELECT DEST_NAME,STATUS,DESTINATION FROM V$ARCHIVE_DEST WHERE DESTINATION IS NOT NULL;

DEST_NAME STATUS DESTINATION
------------------------------ --------- ------------------------------
LOG_ARCHIVE_DEST_1 VALID USE_DB_RECOVERY_FILE_DEST
LOG_ARCHIVE_DEST_2 VALID ENT12C1STNS
STANDBY_ARCHIVE_DEST VALID USE_DB_RECOVERY_FILE_DEST
The above output shows that the primary is transporting to log archive dest 2 (status VALID) while dest 3 is still in the alternate state. The far sync output shows that the far sync instance is shipping redo as per its log archive dest 2 setting and the current status is valid.
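To drive redo through the far sync path for this verification, a few manual log switches can be forced on the primary, for example:

SQL> alter system switch logfile;
SQL> alter system archive log current;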

17. Shutdown abort the far sync instance and view the archive dest output. If the configuration works properly then log archive dest 3 should become valid and redo transport should be happening directly between primary and standby in asynchronous mode.
SQL> SELECT DEST_NAME,STATUS,DESTINATION FROM V$ARCHIVE_DEST WHERE DESTINATION IS NOT NULL;

DEST_NAME STATUS DESTINATION
------------------------------ --------- ------------------------------
LOG_ARCHIVE_DEST_1 VALID USE_DB_RECOVERY_FILE_DEST
LOG_ARCHIVE_DEST_2 ALTERNATE FS12C1TNS
LOG_ARCHIVE_DEST_3 VALID ENT12C1STNS
From the above output it can be seen that after the far sync instance is terminated, log archive dest 3 becomes the valid destination and log archive dest 2 is kept as an alternate destination. The following can be observed in the primary instance alert log when the far sync instance is terminated
Destination LOG_ARCHIVE_DEST_2 is UNSYNCHRONIZED
LGWR: Failed to archive log 1 thread 1 sequence 1503 (3113)
Destination LOG_ARCHIVE_DEST_2 no longer supports SYNCHRONIZATION
and on the standby instance alert log the following
Primary database is in MAXIMUM PERFORMANCE mode
Changing standby controlfile to MAXIMUM PERFORMANCE mode
It must also be mentioned that on a few occasions where the far sync instance was abruptly terminated (shutdown abort), the recovery process on the standby stopped due to lost writes
MRP0: Background Media Recovery terminated with error 742  <-- far sync instance terminated
Fri Sep 12 17:32:28 2014
Errors in file /opt/app/oracle/diag/rdbms/ent12c1s/ent12c1s/trace/ent12c1s_pr00_10098.trc:
ORA-00742: Log read detects lost write in thread 1 sequence 1503 block 868
ORA-00312: online log 4 thread 1: '/opt/app/oracle/fast_recovery_area/ENT12C1S/onlinelog/o1_mf_4_b0y1dn8v_.log'
ORA-00312: online log 4 thread 1: '/opt/app/oracle/oradata/ENT12C1S/onlinelog/o1_mf_4_b0y1dml4_.log'
Managed Standby Recovery not using Real Time Apply
RFS[16]: Assigned to RFS process (PID:10165)
RFS[16]: Selected log 5 for thread 1 sequence 1504 dbid 209099011 branch 833730501
Fri Sep 12 17:32:28 2014
Recovery interrupted!
Recovered data files to a consistent state at change 19573793

Fri Sep 12 17:32:28 2014
Errors in file /opt/app/oracle/diag/rdbms/ent12c1s/ent12c1s/trace/ent12c1s_pr00_10098.trc:
ORA-00742: Log read detects lost write in thread 1 sequence 1503 block 868
ORA-00312: online log 4 thread 1: '/opt/app/oracle/fast_recovery_area/ENT12C1S/onlinelog/o1_mf_4_b0y1dn8v_.log'
ORA-00312: online log 4 thread 1: '/opt/app/oracle/oradata/ENT12C1S/onlinelog/o1_mf_4_b0y1dml4_.log'
Fri Sep 12 17:32:28 2014
MRP0: Background Media Recovery process shutdown (ent12c1s)
Fri Sep 12 17:32:28 2014
Archived Log entry 107 added for thread 1 sequence 1503 rlc 833730501 ID 0xde697d0 dest 3:
Fri Sep 12 17:38:51 2014
alter database recover managed standby database disconnect<-- manual start at 17:38 after it was stopped 17:32
So it may be a good idea to keep an eye on the recovery when the far sync instance terminates. According to 1302539.1, when active data guard is in place there's automatic block repair transparent to the user. Once the far sync instance is back up again the redo transport will go back to its original setting
SQL>  SELECT DEST_NAME,STATUS,DESTINATION FROM V$ARCHIVE_DEST WHERE DESTINATION IS NOT NULL;

DEST_NAME STATUS DESTINATION
------------------------------ --------- ------------------------------
LOG_ARCHIVE_DEST_1 VALID USE_DB_RECOVERY_FILE_DEST
LOG_ARCHIVE_DEST_2 VALID FS12C1TNS
LOG_ARCHIVE_DEST_3 ALTERNATE ENT12C1STNS
And following could be observed on the alert log of primary
Destination LOG_ARCHIVE_DEST_2 is UNSYNCHRONIZED
LGWR: Standby redo logfile selected to archive thread 1 sequence 1506
LGWR: Standby redo logfile selected for thread 1 sequence 1506 for destination LOG_ARCHIVE_DEST_2
Fri Sep 12 16:21:08 2014
Thread 1 advanced to log sequence 1506 (LGWR switch)
Current log# 1 seq# 1506 mem# 0: /data/oradata/ENT12C1/onlinelog/o1_mf_1_9bcsl3ds_.log
Current log# 1 seq# 1506 mem# 1: /data/flash_recovery/ENT12C1/onlinelog/o1_mf_1_9bcsl3hm_.log
Fri Sep 12 16:21:08 2014
Archived Log entry 1408 added for thread 1 sequence 1505 ID 0xde697d0 dest 1:
Fri Sep 12 16:21:11 2014
Thread 1 cannot allocate new log, sequence 1507
Checkpoint not complete
Current log# 1 seq# 1506 mem# 0: /data/oradata/ENT12C1/onlinelog/o1_mf_1_9bcsl3ds_.log
Current log# 1 seq# 1506 mem# 1: /data/flash_recovery/ENT12C1/onlinelog/o1_mf_1_9bcsl3hm_.log
Fri Sep 12 16:21:14 2014
Destination LOG_ARCHIVE_DEST_2 is SYNCHRONIZED
On the alert log of the far sync instance
Primary database is in MAXIMUM AVAILABILITY mode
Changing standby controlfile to RESYNCHRONIZATION level
Standby controlfile consistent with primary
RFS[1]: Assigned to RFS process (PID:3557)
RFS[1]: Selected log 5 for thread 1 sequence 1506 dbid 209099011 branch 833730501
Fri Sep 12 17:04:06 2014
******************************************************************
TT00: Setting 'active' archival for destination LOG_ARCHIVE_DEST_2
******************************************************************
TT00: Standby redo logfile selected for thread 1 sequence 1506 for destination LOG_ARCHIVE_DEST_2
Fri Sep 12 17:04:07 2014
Changing standby controlfile to MAXIMUM PERFORMANCE mode
RFS[2]: Assigned to RFS process (PID:3561)
RFS[2]: Opened log for thread 1 sequence 1505 dbid 209099011 branch 833730501
Fri Sep 12 17:04:08 2014
Archived Log entry 203 added for thread 1 sequence 1505 rlc 833730501 ID 0xde697d0 dest 2:
Fri Sep 12 17:04:13 2014
Changing standby controlfile to MAXIMUM AVAILABILITY mode
On the standby alert log
Primary database is in MAXIMUM PERFORMANCE mode
Changing standby controlfile to MAXIMUM AVAILABILITY mode
Changing standby controlfile to RESYNCHRONIZATION level
Similarly, it is possible to shut down the standby instance and see if the primary is able to ship redo to the far sync instance and if the redo is fetched by the standby once it is started again. With these tests, situation 1 in the figure shown at the beginning of the post is complete.

18. To test situation 2 in the figure above, do a switchover and check that redo is transported via the far sync instance (fs12c1s).
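A rough sketch of a SQL-based switchover in 12c (this assumes the broker is not managing the configuration; with the broker a DGMGRL switchover would be used instead):

-- on the current primary (ent12c1)
SQL> alter database switchover to ent12c1s verify;
SQL> alter database switchover to ent12c1s;

-- on the new primary (ent12c1s)
SQL> alter database open;

-- on the new standby (ent12c1)
SQL> startup mount;
SQL> alter database recover managed standby database disconnect from session;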

This concludes adding far sync instances to an existing data guard configuration on 12c.

Useful metalink notes
Cascaded Standby Databases in Oracle 12c [ID 1542969.1]
Data Guard 12c New Feature: Far Sync Standby [ID 1565071.1]
Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration [ID 1302539.1]

APPEND_VALUES Hint and JDBC

The APPEND_VALUES hint was introduced in 11gR2 for direct path inserts with a values clause (the APPEND hint is only useful when doing direct path loading with a select sub-query). The Oracle documentation mentions that the APPEND_VALUES hint is useful for enhancing performance and lists OCI programs and PL/SQL as examples; there's no mention of JDBC. Is there a performance gain when using the APPEND_VALUES hint when the inserts are issued through JDBC?
This is a simple test case that tries to answer this question. The test involves inserting 50,000 rows into a single column table. The insert statement is issued with and without the hint (conventional insert). The table is also created with and without logging enabled; APPEND* hints generate less redo when the table has nologging enabled, although this depends not only on the table but on other factors as well. The test measures the redo size, the CPU used by the session and the total elapsed time to insert the rows (the latter measured in the java test code). The test is repeated for the above combinations employing JDBC batching as well. The java code used for the test case is given at the end of the post. The tests were carried out on an 11.2.0.3 EE database. (Update 2014/10/24: the same test was done on a 12.1.0.2 EE DB and the results were more or less the same. In 12.1.0.2 even nologging + batching with the append hint didn't outperform the same test without the hint.)
First comparison is the size of the redo generated during the inserts.
The first graph shows the redo generated without batching the inserts. There's no surprise that when the table is in nologging mode the amount of redo generated is less than when the table is in logging mode. But not having the append hint (in this post "append hint" means the APPEND_VALUES hint) seems to generate less redo than having it, and this is true in both logging and nologging modes. On the other hand, when the inserts are batched and the table is in nologging mode, having the append hint results in the minimum redo being generated, and this amount is less than the redo generated without the append hint (in both logging and nologging modes). If the table is in logging mode then batching without the append hint results in less redo generated compared to using the append hint.
The next statistic is the CPU used to complete the inserts. The "CPU used by this session" statistic is used to calculate this, by capturing the statistic's value before and after the inserts.
Batching the inserts results in the minimum CPU being consumed compared to not batching. There's no great difference in CPU consumption when the append hint is used compared to when it is not. However, when inserts are not batched, the amount of CPU consumed doubles or triples when the append hint is used compared to without the hint. So in terms of CPU consumption, having the append hint and not batching the inserts results in performance degradation.
The final statistic is the total elapsed time to insert the rows. This is roughly the total execution time for the test code. The time is measured in milliseconds.
Similar to CPU usage, batching results in the lowest elapsed time. This is no surprise as CPU is a component of the overall elapsed time. However, when inserts are not batched, having the append hint results in a higher elapsed time compared to inserting without it.
From this limited test case it seems that the only time it's beneficial to use the append hint with JDBC is when the inserts are batched and the table is in nologging mode. In the other cases, both non-batched and batched inserts without the hint outperform inserts with the append hint. But a nologging table may not be an option in a production system, and even where a nologging table is possible, highly concurrent inserts into it could result in a high number of control file related wait events.
Furthermore, there are a few other points to consider before deciding to use the append hint in an application. When APPEND_VALUES is used to insert into a table, another session cannot insert into that table until the first session commits. The second session will hang, waiting on an enq: TM - contention wait event, which is usually associated with unindexed foreign key issues. So the concurrent nature of the inserts must be considered; if the inserts are highly concurrent then having the append hint may not be a good idea.
Within the same session after one insert another cannot be made without first committing the previous insert.
SQL> set auto off
SQL> insert /*+ append_values */ into append_hint_test values ('abc');

1 row created.

SQL> insert /*+ append_values */ into append_hint_test values ('def');
insert /*+ append_values */ into append_hint_test values ('def')
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel
Therefore java code that's been written to reuse cursors and has auto commit set to false will encounter the following error
Exception in thread "main" java.sql.SQLException: ORA-12838: cannot read/modify an object after modifying it in parallel
Also, the append hint results in data being loaded directly at the end of the table (above the high water mark). This results in continuous growth of the table even if there's free space available (which may or may not be a problem in some cases). Therefore it may be better to use traditional batching with JDBC than the append hint, as the negative consequences of using it seem to outweigh the gains.



On the other hand, in PL/SQL with batch inserts (using FORALL) the append hint seems to outperform conventional inserts. The PL/SQL code used is also given at the end of the post. The graph below shows the redo size generated for inserting 50,000 rows with and without the append hint.


Create the tables for the test cases.
create table append_hint_test(value varchar2(50));
create table append_hint_plsql_test(value varchar2(50));
Java Code Used.
import java.sql.Connection;
import java.sql.PreparedStatement;

import oracle.jdbc.pool.OracleDataSource;

public class AppendHintTest {

public static void main(String[] args) throws Exception {

OracleDataSource dataSource = new OracleDataSource();
dataSource.setURL("jdbc:oracle:thin:@192.168.0.66:1521:ent11g2");
dataSource.setUser("asanga");
dataSource.setPassword("asa");

Connection con = dataSource.getConnection();
// con.setAutoCommit(false);
DBStats stats = new DBStats();
stats.initStats(con);

String SQL = "insert /*+ APPEND_VALUES */ into append_hint_test values (?)";
// String SQL = "insert into append_hint_test values (?)";

PreparedStatement pr = con.prepareStatement(SQL);


long t1 = System.currentTimeMillis();

for (int i = 0; i < 50000; i++) {
pr.setString(1, "akhgaipghapga " + i);
pr.execute();

// pr.addBatch();
// if(i%1000==0){
// pr.executeBatch();

// }
}

// pr.executeBatch();
con.commit();
pr.close();

long t2 = System.currentTimeMillis();
String[][] statsValues = stats.getStatsDiff(con);
con.close();
System.out.println("time taken " + (t2 - t1));

for (String[] x : statsValues) {

System.out.println(x[0] + " : " + x[1]);
}

}
}

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;

public class DBStats {

private HashMap<String, Long> stats = new HashMap<>(); // statistic name -> value captured at initStats
private String SQL = "select name,value " + "from v$mystat,v$statname " + "where v$mystat.statistic#=v$statname.statistic# "
+ "and v$statname.name in ('CPU used when call started','CPU used by this session','db block gets',"
+ "'db block gets from cache','db block gets from cache (fastpath)','db block gets direct',"
+ "'consistent gets','consistent gets from cache','consistent gets from cache (fastpath)',"
+ "'consistent gets - examination','consistent gets direct','physical reads',"
+ "'physical reads direct','physical read IO requests','physical read bytes',"
+ "'consistent changes','physical writes','physical writes direct',"
+ "'physical write IO requests','physical writes from cache','redo size')";

public void initStats(Connection con) {
try {
PreparedStatement pr = con.prepareStatement(SQL);

ResultSet rs = pr.executeQuery();


while (rs.next()) {

stats.put(rs.getString(1), rs.getLong(2));
}

rs.close();
pr.close();
} catch (SQLException ex) {
ex.printStackTrace();
}

}

public String[][] getStatsDiff(Connection con) {

String[][] statDif = new String[stats.size()][2];

try {
PreparedStatement pr = con.prepareStatement(SQL);

ResultSet rs = pr.executeQuery();

int i = 0;
while (rs.next()) {
Long val = rs.getLong(2) - stats.get(rs.getString(1));
statDif[i][0] = rs.getString(1);
statDif[i][1] = val.toString();
i++;
}

rs.close();
pr.close();
} catch (SQLException ex) {
ex.printStackTrace();
}


return statDif;


}
}
PL/SQL code used. Before running the PL/SQL test, populate the Append_Hint_Test table with rows using the java code above.
SET serveroutput ON
DECLARE
Type Arry_Type
IS
TABLE OF Append_Hint_Test%Rowtype INDEX BY PLS_INTEGER;
Loadtable Arry_Type;
redosize1 NUMBER;
redosize2 NUMBER;
t1 NUMBER;
t2 NUMBER;
Begin
EXECUTE immediate 'truncate table append_hint_plsql_test';
Select * Bulk Collect Into Loadtable From Append_Hint_Test ;

Dbms_Output.Put_Line(Loadtable.Count);

SELECT Value
INTO Redosize1
FROM V$mystat,
V$statname
Where V$mystat.Statistic#=V$statname.Statistic#
AND V$statname.Name = 'redo size';

Dbms_Output.Put_Line('redo size 1 '||Redosize1);
T1 := Dbms_Utility.Get_Time;

Forall Indx IN 1 .. Loadtable.Count
-- INSERT /*+ APPEND_VALUES */
-- INTO append_hint_plsql_test VALUES
-- (Loadtable(Indx).Value
-- );
Insert Into Append_Hint_Plsql_Test Values (Loadtable(Indx).Value);

Commit;

T2 := Dbms_Utility.Get_Time;

SELECT Value
INTO Redosize2
FROM V$mystat,
V$statname
WHERE V$mystat.Statistic#=V$statname.Statistic#
AND V$statname.Name = 'redo size';
Dbms_Output.Put_Line('redo size 2 '||Redosize2);
Dbms_Output.Put_Line('redo generated : '||(Redosize2-Redosize1)|| ' Time taken : '||(t2-t1));
END;
/

Databases With Different Timezones in Same Server

There may be occasions where two databases that reside on the same server are required to have different timezones. Changing the timezone of the database does not help in this case, as that setting applies only to columns of type "timestamp with local time zone". Changing the timezone at the OS level may also not be useful as there are two databases to contend with.
The solution is to use the TZ environment variable. This is applicable for both single instance and RAC databases. This post gives an example of having two databases with different timezones on the same server.
First up is the single instance case. The two databases are std11g2 and ent11g2 (both 11.2.0.3 databases). The timezone of std11g2 will be changed to Etc/GMT+5 while the timezone of ent11g2 will remain unaffected. As it is now, both databases have the same timezone
SQL> select dbtimezone from dual;

DBTIME
------
+00:00
Set TZ to the desired timezone and restart the database whose timezone needs to be changed
export TZ=Etc/GMT+5
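A minimal sketch of the restart from the same shell session, so that the instance inherits the new TZ value (the SID and commands are assumptions about the environment):

export ORACLE_SID=std11g2
sqlplus / as sysdba

SQL> shutdown immediate;
SQL> startup;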
There was no need to restart the listener. In fact, in this case three databases were running on the same server, the listener was running out of a 12.1.0.2 Oracle home, and the two 11.2 databases used for this post registered with this listener. Even after the database is restarted, dbtimezone will still show the same value as before the restart, but querying systimestamp will show the time according to the timezone used. On std11g2
SQL> select dbtimezone from dual;

DBTIME
------
+00:00

SQL> SELECT systimestamp from dual;

SYSTIMESTAMP
-----------------------------------
19-NOV-14 12.09.57.837228 PM -05:00
On ent11g2
SQL> select dbtimezone from dual;

DBTIME
------
+00:00

SQL> SELECT systimestamp from dual;

SYSTIMESTAMP
-----------------------------------
19-NOV-14 05.09.57.694861 PM +00:00
All remote connections to the databases will use the respective timezones
unset TZ

sqlplus sys@std11g2 as sysdba

SQL> select systimestamp from dual;

SYSTIMESTAMP
-----------------------------------
19-NOV-14 07.26.24.918270 AM-05:00

sqlplus sys@ent11g2 as sysdba

SQL> select systimestamp from dual;

SYSTIMESTAMP
-----------------------------------
19-NOV-14 12.26.45.653530 PM +00:00
For RAC databases, where the start and stop of the database is managed by the clusterware, the timezone information is specified using srvctl setenv. In this case two databases (std12c1 and tzdb, both 12.1) reside on the same cluster nodes and tzdb is expected to have a different timezone. Both databases were using the same listeners (SCAN, listener). Query the current environment settings for any timezone information using getenv
srvctl getenv database -d std12c1 
std12c1:

srvctl getenv database -d tzdb
tzdb:
Set the timezone information using setenv for the tzdb database
srvctl setenv database -d tzdb -T 'TZ=Etc/GMT+5'
Verify the setting
srvctl getenv database -d tzdb
tzdb:
TZ=GMT+5
Stop and restart the database
srvctl stop database -d tzdb
srvctl start database -d tzdb
Query the databases for timestamp
sqlplus asanga@tzdb

SQL> select systimestamp from dual;

SYSTIMESTAMP
-----------------------------------
19-NOV-14 11.22.50.685116 AM -05:00

sqlplus asanga@std12c1

SQL> select systimestamp from dual;

SYSTIMESTAMP
-----------------------------------
19-NOV-14 04.23.08.054139 PM +00:00



A test case was used to simulate how an application server that connects to the database via JDBC would see the time values. The java code is given at the end of the post. The output resulting from running this code against the RAC databases is given below.
2014-11-19 16:44:10.542481 xxxx 2014-11-19 16:44:10.542481 +0:00 xxxx 16:44:10
2014-11-19 16:44:10.0 xxxx 2014-11-19 16:44:10 xxxx 16:44:10

xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

2014-11-19 16:44:10.606114 xxxx 2014-11-19 11:44:10.606114 -5:00 xxxx 16:44:10
2014-11-19 11:44:10.0 xxxx 2014-11-19 11:44:10 xxxx 11:44:10
The program queries the database for systimestamp and sysdate. The top half before the divider (xxxxx) represents the output from the std12c1 database (which didn't have its timezone changed). The bottom half shows the output from the tzdb database, which had its timezone changed.
In each half there are two lines. The first line represents getting systimestamp from the JDBC result set using the getTimestamp, getString and getTime methods. The second line represents getting sysdate from the result set using the same set of methods. The machine that ran the java program had the same timezone as the std12c1 database.
From the output it can be seen that querying systimestamp and getting the result using either the getTimestamp or getTime method loses the timezone information and shows the incorrect time. On the other hand, getting the result using the getString method preserves the timezone information.
However, querying sysdate and obtaining the result with any of the aforementioned methods doesn't have this problem, and the time with respect to the timezone used is returned. Therefore, applications could run into problems if the client side timezone is different to that of the database, depending on how the systimestamp results are obtained.
To overcome this problem, change the timezone on the application servers to match the database timezone. If multiple applications are running out of the same server, use the "user.timezone" JVM property to set the timezone for each application based on the database it is connecting to.
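For example, the JVM default timezone can be set per process with the user.timezone system property (a hypothetical invocation; the actual flag placement depends on the application server's startup script):

java -Duser.timezone=Etc/GMT+5 -jar app-using-tzdb.jar
java -Duser.timezone=Etc/GMT -jar app-using-std12c1.jar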

Java code used for the test case

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import oracle.jdbc.pool.OracleDataSource;

public class Test {

public static void main(String[] args) throws SQLException {


tz1();
System.out.println("\nxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n");
tz2();

}

public static void tz1() throws SQLException {

OracleDataSource ds = new OracleDataSource();
ds.setUser("asanga");
ds.setPassword("asa");
ds.setURL("jdbc:oracle:thin:@192.168.0.66:1521:ent11g2");

Connection con = ds.getConnection();
PreparedStatement pr = con.prepareStatement("select systimestamp,sysdate from dual");
ResultSet rs = pr.executeQuery();
while (rs.next()) {

System.out.println(rs.getTimestamp(1) + " xxxx " + rs.getString(1) + " xxxx " + rs.getTime(1));
System.out.println(rs.getTimestamp(2) + " xxxx " + rs.getString(2) + " xxxx " + rs.getTime(2));


}

rs.close();
pr.close();
con.close();

}

public static void tz2() throws SQLException {

OracleDataSource ds = new OracleDataSource();
ds.setUser("asanga");
ds.setPassword("asa");
ds.setURL("jdbc:oracle:thin:@192.168.0.66:1521:std11g2");

Connection con = ds.getConnection();
PreparedStatement pr = con.prepareStatement("select systimestamp,sysdate from dual");
ResultSet rs = pr.executeQuery();
while (rs.next()) {

System.out.println(rs.getTimestamp(1) + " xxxx " + rs.getString(1) + " xxxx " + rs.getTime(1));
System.out.println(rs.getTimestamp(2) + " xxxx " + rs.getString(2) + " xxxx " + rs.getTime(2));

}

rs.close();
pr.close();
con.close();
}
}

Tunneling VNC Over SSH Using PuTTY

By default VNC runs on port 5901. This port may not always be open for access. VNC access may be needed for GUI based work such as runInstaller, DBCA, DBUA etc. (there are other ways to get a GUI to the desktop, such as Xming). In situations where the VNC port is not open it can be tunneled over SSH. This post shows how to use PuTTY to this effect.
1. Set the tunneling information before opening the ssh connection. Source port is the local listening port; in this case port 5999 has been chosen. Destination is a host:port combination; here localhost is used as the destination host (the destination is resolved from the SSH server's side, so localhost refers to the remote server) and the port is set to 5901, which is the remote VNC listening port.
2. Click Add to make the tunneling take effect when the ssh connection is established.



3. Establish the ssh connection.
4. Connect a vncviewer specifying the source port used earlier.
If the source port had been 5901 (the default VNC port) then the vncviewer connection could simply use display 1 on the local host.
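Hypothetical viewer invocations for both cases (an explicit port for 5999, and display 1, which maps to port 5901):

vncviewer localhost::5999
vncviewer localhost:1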

Creating Local Yum Repository Using an ISO or DVD for RHEL5, 6 and 7

Creating a local yum repository provides a convenient way of installing the rpms required as part of an Oracle installation. Using a yum repository is convenient as opposed to installing the required rpms via rpm -i, since yum fetches the necessary dependencies from the repository. This post gives the steps for setting up a local yum repository for RHEL5, RHEL6 and RHEL7 using either an ISO or a DVD containing the installation media.
Mount the ISO or the DVD. In all cases the ISO or DVD is mounted on the /media mount point.
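A sketch of the mount commands (the ISO path and the DVD device name are examples):

mount -o loop /tmp/rhel-server-dvd.iso /media
mount /dev/sr0 /media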

Setting up local yum repository on RHEL 5.
Currently there are no repositories.
 yum repolist
Loaded plugins: rhnplugin, security
This system is not registered with RHN.
RHN support will be disabled.
repolist: 0
Create the repository file (extension must be .repo).
cat rhel5.repo
[RHEL5ISO]
name=RHEL 5 ISO
baseurl=file:///media/Server
enabled=1
gpgcheck=1
gpgkey=file:///media/RPM-GPG-KEY-redhat-release
The repository data is taken from "repodata/repomd.xml". In RHEL5 this is located inside the Server directory for a Linux server installation; similar repomd.xml files exist for Cluster, ClusterStorage and VT. Setting the baseurl to just /media will result in the following error
file:///media/repodata/repomd.xml: [Errno 5] OSError: [Errno 2] No such file or directory: '/media/repodata/repomd.xml'
If gpgcheck is set to 1 (enabled) then, prior to installing a package, its authenticity is checked using GPG signatures. For this to work the gpgkey must point to the RPM-GPG-KEY-redhat-release file, which is available under /media.
Once the file is set up, run the following commands to clean and list the repository data.
yum clean all
Loaded plugins: rhnplugin, security
Cleaning up Everything

yum repolist
Loaded plugins: rhnplugin, security
This system is not registered with RHN.
RHN support will be disabled.
RHEL5ISO | 1.5 kB 00:00
RHEL5ISO/primary | 920 kB 00:00
RHEL5ISO 3285/3285
repo id repo name status
RHEL5ISO RHEL 5 ISO enabled: 3,285 repolist: 3,285
Run yum list or yum grouplist to test the repository is working.
yum grouplist
Loaded plugins: rhnplugin, security
This system is not registered with RHN.
RHN support will be disabled.
Setting up Group Process
RHEL5ISO | 1.5 kB 00:00
RHEL5ISO/primary | 920 kB 00:00
RHEL5ISO/group | 1.0 MB 00:00
Installed Groups:
Administration Tools
Authoring and Publishing
Development Libraries
Development Tools
Editors
FTP Server
GNOME Desktop Environment
GNOME Software Development
Graphical Internet
Legacy Network Server
Legacy Software Development
Legacy Software Support
Mail Server
Network Servers
Office/Productivity
Printing Support
Server Configuration Tools
System Tools
Text-based Internet
X Software Development
X Window System
Available Groups:
DNS Name Server
Engineering and Scientific
Games and Entertainment
Graphics
Java Development
KDE (K Desktop Environment)
KDE Software Development
MySQL Database
News Server
OpenFabrics Enterprise Distribution
PostgreSQL Database
Sound and Video
Web Server
Windows File Server
Done



Setting up local yum repository on RHEL 6 and RHEL 7.
Setting up a local yum repository is similar for RHEL6 and RHEL7 and the same repo file can be used for both versions. The only difference between these versions and RHEL5 is the baseurl.
cat rhel6.repo
[RHEL6ISO]
name=RHEL 6 ISO
baseurl=file:///media
enabled=1
gpgcheck=1
gpgkey=file:///media/RPM-GPG-KEY-redhat-release
The "repodata/repomd.xml" is available from the base media directory. As such baseurl is set to the mount point of the ISO or the DVD.
Run yum clean all and yum repolist as before and use grouplist to validate the repository.
From RHEL 6
 yum grouplist
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Setting up Group Process
RHEL6ISO/group_gz | 204 kB 00:00 ...
Installed Groups:
Additional Development
Base
Compatibility libraries
Console internet tools
Debugging Tools
Desktop Platform
Dial-up Networking Support
Directory Client
E-mail server
FTP server
Fonts
General Purpose Desktop
Graphical Administration Tools
Hardware monitoring utilities
Java Platform
KDE Desktop
....
From RHEL7
 yum grouplist
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Available environment groups:
Minimal Install
Infrastructure Server
File and Print Server
Basic Web Server
Virtualization Host
Server with GUI
Available Groups:
Compatibility Libraries
Console Internet Tools
Development Tools
Graphical Administration Tools
Legacy UNIX Compatibility
Scientific Support
Security Tools
Smart Card Support
System Administration Tools
System Management
Done

RHEL 7 VNC Setup - "Oh no! Something has gone wrong"

If RHEL 7 was installed as a minimal install
 yum grouplist
Available environment groups:
Minimal Install
then setting up VNC by installing tigervnc-server will result in the following screen when logging in with a vncviewer.
Installing X windows did not resolve the issue
yum groupinstall "X Window System"
The minimal installation does not have any GUI related components, so VNC has no desktop to connect to. Installing the KDE desktop
yum groupinstall "kde-desktop"
or GNOME, the default desktop for RHEL7,
yum groupinstall GNOME
resolved the issue.

Related Post
Blank VNC Screen


Installing Oracle Database 12.1.0.2 on RHEL 7

Oracle has certified 12.1.0.2 on RHEL 7 kernels 3.10.0-54.0.1.el7 or later (1304727.1). The current RHEL 7 available from the RedHat site is
cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.0 (Maipo)
and comes with following kernel
uname -r
3.10.0-123.el7.x86_64
which is valid for installing 12.1.0.2. It must be noted that only 12.1.0.2 is certified on RHEL 7.

RHEL 7 comes with XFS as the default file system. Although Oracle has its own material praising XFS, it is still not listed as a supported file system in 236826.1 - Supported and Recommended File Systems on Linux, so it's not yet known whether XFS is supported and recommended for Oracle RDBMS data files; any explicit support statement is with regard to Oracle Linux support, not Oracle RDBMS support. Note 1529864.1 (which is relevant for RHEL 6) explicitly states that the database is supported on ext2, ext3 and ext4 file systems, however there's no similar document (Requirements for Installing Oracle Database 12.1 on RHEL7 or OL7 64-bit (x86-64)) available yet. On the other hand, documents published for RHEL 6 (1601759.1) state that Oracle does not certify local file systems and that support and certification are up to the OS vendor. Nevertheless it was possible to install a 12.1.0.2 database on an XFS local file system without any problem.



In the updated document which lists RHEL 7 as an option for 12.1, compat-libstdc++-33 is missing from the required RPM list. However, the installer checks for this package and raises a warning.
compat-libstdc++-33 is not available on the RHEL 7 install media. However, it can be downloaded as a separate package from the RedHat site.

RHEL 7 uses GRUB version 2, and editing kernel boot entries is different to RHEL6. This comes into play when setting the elevator and transparent_hugepages boot parameters.
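A hedged sketch of how such kernel parameters would be set with GRUB 2 (the parameter values shown are examples, not recommendations from this post):

vi /etc/default/grub
# append the parameters to GRUB_CMDLINE_LINUX, e.g.
# GRUB_CMDLINE_LINUX="... elevator=deadline transparent_hugepage=never"
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot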

In RHEL 7 the concept of runlevels has been replaced with systemd targets. The runlevel command still exists, but only for compatibility reasons, and it is recommended to avoid using it where possible. In a minimal installation the runlevel command returned "unknown" instead of the expected "N 3" (RHEL 7 installed with Server with GUI didn't have this problem). As a result the runlevel pre-req check could fail on grid infrastructure and database installs.
It was possible to ignore this pre-req failure and continue the installation.
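The systemd equivalent of the runlevel can be checked and set with systemctl, for example:

systemctl get-default
multi-user.target

systemctl set-default multi-user.target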

All other preinstallation tasks are similar to earlier versions of RHEL.

Useful metalink notes
Certification Information for Oracle Database on Linux x86-64 [ID 1304727.1]
Supported and Recommended File Systems on Linux [ID 236826.1]

Related Posts
Installing 11.2.0.3 on RHEL 6
Installing 11gR2 (11.2.0.3) GI with Role Separation on RHEL 6
Installing 11gR2 (11.2.0.3) GI with Role Separation on OEL 6
Installing 11gR2 Standalone Server with ASM and Role Separation on RHEL 6
Installing 12c (12.1.0.1) RAC on RHEL 6 with Role Seperation - Clusterware

RMAN Backups on NFS

There are many options for mounting NFS to be used with various Oracle files (binaries, datafiles, OCR). This post shows the results of three sets of mount options and the RMAN backup time for a single tablespace (16GB in size). The database is a single instance 11.2.0.3 database. Both the NFS server and the client were on the same network segment, connected via a single switch.
NFS server export file content
 more /etc/exports
/opt/backup 192.168.0.66(rw,sync,all_squash,insecure,anonuid=500,anongid=500)
anonuid and anongid represent the user id and group id of the oracle user on the NFS client (the server running the DB). no_root_squash has not been used.
Options used for mounting the NFS are
1. mount -t nfs -o rw,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp,nolock 192.168.0.76:/opt/backup /opt/backup (backup time : 06:45)

2. mount -t nfs 192.168.0.76:/opt/backup /opt/backup (backup time : 07:05)

3. mount -t nfs -o rw,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp,nolock,actimeo=0 192.168.0.76:/opt/backup /opt/backup (backup time : 06:35)

In test 1 the mount is without actimeo. Test 2 doesn't specify any mount options and relies on the defaults. In test 3 the options are similar to test 1 with the exception of actimeo being included.
There's some confusion as to when to use the actimeo option. Some documents explicitly state not to use the noac option with RMAN, say nothing explicit about actimeo, and at the same time state that it could be used with single instance databases (359515.1), while others explicitly state not to use actimeo with single instance databases and that it is only required for RAC (1164673.1, 762374.1). Nevertheless, option 3 was able to complete the backup in roughly the same time as option 1. However, use of noac made the backup time unacceptably long (stretching into hours; not included in the post).
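For reference, a backup of this kind can be run against the NFS mount with a simple RMAN block (a hypothetical sketch; the tablespace name and channel settings used in the actual test are not shown in the post):

rman target /

RMAN> run {
  allocate channel d1 device type disk format '/opt/backup/%U';
  backup tablespace test_data;
  release channel d1;
}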




Below are the IO rate and throughput observed from the EM console during each test.


Useful metalink notes
Howto Optimize NFS Performance with NFS options. [ID 397194.1]
How To Setup DNFS (Direct NFS) On Oracle Release 11.2 [ID 1452614.1]
Step by Step - Configure Direct NFS Client (DNFS) on Linux (11g) [ID 762374.1]
Mount Options for Oracle files when used with NFS on NAS devices [ID 359515.1]
NFS Performance Decline Introduced by Mount Option "actimeo=0"[ID 1164673.1]

Upgrading 12c CDB and PDB from 12.1.0.1 to 12.1.0.2

An earlier post listed the steps for upgrading a non-CDB in a data guard configuration from 12.1.0.1 to 12.1.0.2. This post shows the steps for upgrading a CDB with a PDB from 12.1.0.1 to 12.1.0.2. This is not a complete how-to guide; refer to 1933011.1 and the Oracle documentation for a complete upgrade checklist. The software upgrade from 12.1.0.1 to 12.1.0.2 is similar to 11gR2 upgrades, where the new database software is installed in a new Oracle home location (out of place upgrade). It is assumed that the software upgrade is done and the only remaining task is the database upgrade.
Open all PDBs before running the preupgrade script. The CDB used here is called cdb12c and the PDB is called pdbone.
SQL> show pdbs;

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDBONE MOUNTED

SQL> alter pluggable database PDBONE open;

SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDBONE READ WRITE NO
Run the preupgrade script from the 12.1.0.2 home or download it from 884522.1. The preupgrade script consists of two *.sql files. Copy them to a temporary location.
mkdir -p /home/oracle/tmp
cp preupgrd.sql utluppkg.sql /home/oracle/tmp
Execute the catcon.pl perl script (refer to 1932340.1), which will run the preupgrade script in all of the containers (CDB and PDBs).
/opt/app/oracle/product/12.1.0/dbhome_1/perl/bin/perl catcon.pl -d /home/oracle/tmp -l /home/oracle/tmp -b pdblogs preupgrd.sql
-d specifies where the preupgrade scripts have been copied to and -b specifies the base name for the generated log files.
The generated preupgrade log file will have sections for each container, as shown below (this is not the entire log output).
more  /opt/app/oracle/cfgtoollogs/cdb12c/preupgrade/preupgrade.log

Script Version: 12.1.0.2.0 Build: 009
**********************************************************************
Database Name: CDB12C
Container Name: CDB$ROOT
Container ID: 1
Version: 12.1.0.1.0
Compatible: 12.1.0.0.0
Blocksize: 8192
Platform: Linux x86 64-bit
Timezone file: V18
**********************************************************************


**********************************************************************
Database Name: CDB12C
Container Name: PDB$SEED
Container ID: 2
Version: 12.1.0.1.0
Compatible: 12.1.0.0.0
Blocksize: 8192
Platform: Linux x86 64-bit
Timezone file: V18
**********************************************************************


**********************************************************************
Database Name: CDB12C
Container Name: PDBONE
Container ID: 3
Version: 12.1.0.1.0
Compatible: 12.1.0.0.0
Blocksize: 8192
Platform: Linux x86 64-bit
Timezone file: V18
**********************************************************************
**********************************************************************
--> Oracle Catalog Views [upgrade] VALID
--> Oracle Packages and Types [upgrade] VALID
--> JServer JAVA Virtual Machine [upgrade] VALID
--> Oracle XDK for Java [upgrade] VALID
--> Real Application Clusters [upgrade] OPTION OFF
--> Oracle Workspace Manager [upgrade] VALID
--> OLAP Analytic Workspace [upgrade] VALID
--> Oracle Label Security [upgrade] VALID
--> Oracle Database Vault [upgrade] VALID
--> Oracle Text [upgrade] VALID
--> Oracle XML Database [upgrade] VALID
--> Oracle Java Packages [upgrade] VALID
--> Oracle Multimedia [upgrade] VALID
--> Oracle Spatial [upgrade] VALID
--> Oracle OLAP API [upgrade] VALID
**********************************************************************
Rectify any outstanding errors and warnings before the upgrade.
@/opt/app/oracle/cfgtoollogs/cdb12c/preupgrade/preupgrade_fixups.sql
Run DBUA from the new 12.1.0.2 home and select the upgrade database option.
Select the CDB to upgrade. There's only one CDB on this server.
The PDBs will also be upgraded along with the CDB. Any PDB created in the CDB will be auto detected.
Select upgrade related options as desired.




Upgrade summary
During the upgrade it is possible to view the progress of each of the containers (CDB and PDBs).
Once the upgrade has completed, click the upgrade results button to view the upgrade results. DBUA is exited by viewing the upgrade results and then clicking the close button; the finish button is not active during this time.

Once the upgrade is complete run the post upgrade script
@/opt/app/oracle/cfgtoollogs/cdb12c/preupgrade/postupgrade_fixups.sql
The preupgrade.log also states the following
After your database is upgraded and open in normal mode you must run
rdbms/admin/catuppst.sql which executes several required tasks and completes
the upgrade process.

You should follow that with the execution of rdbms/admin/utlrp.sql, and a
comparison of invalid objects before and after the upgrade using
rdbms/admin/utluiobj.sql
Run these scripts to complete the upgrade. The remaining step is to update the compatible parameter (compatible=12.1.0.2.0). As this is irreversible, make sure the database is performing as expected before updating the compatible parameter. If ASM is used then the compatible.asm and compatible.rdbms settings may need updating as well.
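A minimal sketch of raising the compatible parameter once the decision has been made (the disk group name in the ASM commands is an example, and those commands are run against the ASM instance):

SQL> alter system set compatible='12.1.0.2.0' scope=spfile;
SQL> shutdown immediate;
SQL> startup;

SQL> alter diskgroup data set attribute 'compatible.asm'='12.1.0.2.0';
SQL> alter diskgroup data set attribute 'compatible.rdbms'='12.1.0.2.0';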
Because of a bug in 12.1.0.2 the upgrade step is not registered in the database registry history.
SQL>  select * from dba_registry_history;

ACTION_TIME ACTION NAMESPACE VERSION ID COMMENTS
----------------------------- --------------- ----------- ---------- ---------- --------------------
20-JUN-14 01.26.28.005739 PM APPLY SERVER 12.1.0.1 0 Patchset 12.1.0.0.0
25-NOV-14 04.19.50.469828 PM VIEW INVALIDATE 8289601 view invalidation
To resolve this, apply patch 19518079. However, this patch itself has an issue as it fails with
Error: prereq checks failed!
patch 19518079: apply script /opt/app/oracle/product/12.1.0/dbhome_2/sqlpatch/19518079/18024100/19518079_apply.sql does not exist
Prereq check failed, exiting without installing any patches.
The reason is that the directory path 19518079/18024100 is missing. Create the directory path inside sqlpatch, move 19518079_apply.sql into it and run datapatch; a rough sketch of this is shown below.
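The following assumes the apply script was delivered directly under sqlpatch/19518079 in the new home (adjust the paths as needed):

cd /opt/app/oracle/product/12.1.0/dbhome_2/sqlpatch
mkdir -p 19518079/18024100
mv 19518079/19518079_apply.sql 19518079/18024100/
/opt/app/oracle/product/12.1.0/dbhome_2/OPatch/datapatch -verbose

After this, the upgrade step is listed in the dba_registry_history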
SQL>  select * from dba_registry_history;

ACTION_TIME ACTION NAMESPACE VERSION ID COMMENTS
----------------------------- --------------- ----------- ---------- ---------- --------------------
20-JUN-14 01.26.28.005739 PM APPLY SERVER 12.1.0.1 0 Patchset 12.1.0.0.0
25-NOV-14 04.19.50.469828 PM VIEW INVALIDATE 8289601 view invalidation
25-NOV-14 05.40.16.396667 PM UPGRADE SERVER 12.1.0.2.0 Upgraded from 12.1.0.1.0

Related Post
Upgrade Oracle Database 12c1 from 12.1.0.1 to 12.1.0.2
Upgrade from 11.1.0.7 to 11.2.0.4 (Clusterware, ASM & RAC)
Upgrading from 10.2.0.4 to 10.2.0.5 (Clusterware, RAC, ASM)
Upgrade from 10.2.0.5 to 11.2.0.3 (Clusterware, RAC, ASM)
Upgrade from 11.1.0.7 to 11.2.0.3 (Clusterware, ASM & RAC)
Upgrading from 11.1.0.7 to 11.2.0.3 with Transient Logical Standby
Upgrading from 11.2.0.1 to 11.2.0.3 with in-place upgrade for RAC
In-place upgrade from 11.2.0.2 to 11.2.0.3
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 1
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 2
Upgrading from 11gR2 (11.2.0.3) to 12c (12.1.0.1) Grid Infrastructure

Useful metalink notes
Master Note For Oracle Database 12c Release 1 (12.1) Database/Client Installation/Upgrade/Migration Standalone Environment (Non-RAC) [ID 1520299.1]
Complete Checklist for DBUA Upgrade from 12.1.0.1 to 12.1.0.N [ID 1933011.1]
How to Download and Run Oracle's Database Pre-Upgrade Utility [ID 884522.1]
How to execute sql scripts in Multitenant environment (catcon.pl) [ID 1932340.1]
Complete checklist for 12c R1 PDB upgrade (Upgrading single/multiple PDB) [ID 1933391.1]
Complete checklist for manual upgrade from 12.1.0.1 to 12.1.0.N (Full CDB Upgrade) [ID 1932762.1]
Oracle Database 12c Release 1 (12.1) DBUA : Understanding New Changes with All New 12.1 DBUA [ID 1493645.1]
Complete Checklist for Upgrading to Oracle Database 12c Release 1 using DBUA [ID 1516557.1]
Updating the RDBMS DST version in 12c Release 1 (12.1.0.1 and up) using DBMS_DST [ID 1509653.1]
dba_registry_history Table On Newly Created 11.2.0.2 Database Shows PSU Entry[ID 1367065.1]
"VIEW INVALIDATE" In Action Column Dba_Registry_History During Upgarde To 11.2.0.2 [ID 1390402.1]

ORA-00942 for WMSYS.OWM_MIG_PKG When Upgrading from 11.2.0.3 to 11.2.0.4

"ORA-06512: at "WMSYS.OWM_MIG_PKG", line 1579" was observed while upgrading 11.2.0.3 enterprise edition to 11.2.0.4. Upgrade was continued selecting the ignore option. Show below is the output from the upgrade result summary.

If after the upgrade the database registry shows a VALID status for Oracle Workspace Manager, this error can be safely ignored.
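The component status shown below can be checked with a query along the following lines (the formatting commands are only for readability):
set linesize 120
col comp_name format a40
select comp_id, comp_name, status, version from dba_registry order by comp_id;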
COMP_ID                        COMP_NAME                                STATUS     VERSION
------------------------------ ---------------------------------------- ---------- -----------
EM Oracle Enterprise Manager VALID 11.2.0.4.0
OWB OWB VALID 11.2.0.3.0
APEX Oracle Application Express VALID 3.2.1.00.12
AMD OLAP Catalog VALID 11.2.0.4.0
SDO Spatial VALID 11.2.0.4.0
ORDIM Oracle Multimedia VALID 11.2.0.4.0
XDB Oracle XML Database VALID 11.2.0.4.0
CONTEXT Oracle Text VALID 11.2.0.4.0
EXF Oracle Expression Filter VALID 11.2.0.4.0
RUL Oracle Rules Manager VALID 11.2.0.4.0
OWM Oracle Workspace Manager VALID 11.2.0.4.0
CATALOG Oracle Database Catalog Views VALID 11.2.0.4.0
CATPROC Oracle Database Packages and Types VALID 11.2.0.4.0
JAVAVM JServer JAVA Virtual Machine VALID 11.2.0.4.0
XML Oracle XDK VALID 11.2.0.4.0
CATJAVA Oracle Database Java Packages VALID 11.2.0.4.0
APS OLAP Analytic Workspace VALID 11.2.0.4.0
XOQ Oracle OLAP API VALID 11.2.0.4.0
RAC Oracle Real Application Clusters VALID 11.2.0.4.0




Useful metalink notes
ORA-00942 during database upgrade of Oracle Workspace Manager [ID 1399508.1]
Post-Upgrade Status Script utlu112s.sql Fails with ORA-942 [ID 1051991.1]
ORA-20194 And ORA-01720 During Upgrade of Oracle Workspace Manager When Upgrading from 11.2.0.3 to 11.2.0.4 [ID 1666657.1]

Related Posts
Upgrading RAC from 11.2.0.3 to 11.2.0.4 - Grid Infrastructure
Upgrading RAC from 11.2.0.3 to 11.2.0.4 - Database

Upgrading RAC from 12.1.0.1 to 12.1.0.2 - Grid Infrastructure

This post gives the highlights of a grid infrastructure upgrade from 12.1.0.1 to 12.1.0.2. It is not a step-by-step upgrade guide but shows things to look out for during the upgrade. For comprehensive information refer to the Oracle documentation. The system to be upgraded is a two-node cluster running on RHEL6 (x86-64) with role separation and GNS.
Running the cluster verification utility (cluvfy) shows a few failed prerequisites which are either new to 12.1.0.2 or known issues that can be ignored. First is the panic_on_oops kernel parameter.
Check: Kernel parameter for "panic_on_oops"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel12c1 1 unknown 1 failed (ignorable) Configured value incorrect.
rhel12c2 1 unknown 1 failed (ignorable) Configured value incorrect.

PRVG-1206 : Check cannot be performed for configured value of kernel parameter "panic_on_oops" on node "rhel12c1"
PRVG-1206 : Check cannot be performed for configured value of kernel parameter "panic_on_oops" on node "rhel12c2"
By default this is set to 1, so the failure can be safely ignored, or the panic_on_oops kernel parameter can be added to sysctl.conf, as sketched below. For more information refer to PRVG-1206 : Check cannot be performed for configured value of kernel parameter "panic_on_oops" on node "racnode1" (Doc ID 1921982.1)
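If adding the parameter explicitly is preferred, a minimal sketch for each node (1 is the default value mentioned above):
# as root on each node, add the parameter to /etc/sysctl.conf and reload
echo "kernel.panic_on_oops = 1" >> /etc/sysctl.conf
sysctl -p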
Second is with regard to DNS response times.
Checking DNS response time for an unreachable node
Node Name Status
------------------------------------ ------------------------
rhel12c1 failed
rhel12c2 failed
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rhel12c1,rhel12c2
Refer to an earlier post regarding PRVF-5636/PRVF-5637.
Also related to DNS, make sure that the public host names are resolved through DNS. This can be done by adding entries to the forward and reverse name lookup zone files, as shown below.
add host resolving to DNS
cat /var/named/rev.domain.net.zone
93 IN PTR rhel12c1.domain.net.
94 IN PTR rhel12c2.domain.net.

cat /var/named/domain.net.zone
rhel12c1 A 192.168.0.93
rhel12c2 A 192.168.0.94
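Resolution can then be verified from each cluster node, for example with nslookup using the host names and addresses from the zone files above:
# forward lookups
nslookup rhel12c1.domain.net
nslookup rhel12c2.domain.net
# reverse lookups
nslookup 192.168.0.93
nslookup 192.168.0.94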
The grid infrastructure management repository is no longer optional in 12.1.0.2. By default it is installed in the same disk group as the OCR and vote disks, so this disk group may need to be expanded.
ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 1098 801 Y CLUSTERDG/
MOUNTED EXTERN N 512 4096 1048576 10236 8073 N DATA/
MOUNTED EXTERN N 512 4096 1048576 10236 8687 N FRA/
The size of the disk group depends on the number of nodes and the repository data retention time, neither of which can be controlled at upgrade time. If there is not enough space in the disk group the upgrade will not proceed, and the required amount of space will be given in the error details.
Add more disks to the disk group containing the clusterware files (a sketch is given below), or move the clusterware files and ASM password file to a sufficiently large disk group. Later, if needed, this CHM repository database can be moved to a different disk group; refer to note 1589394.1.
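A minimal sketch of expanding the disk group, assuming a newly provisioned device /dev/sde1 (a hypothetical path) and the disk group name from the lsdg output above:
-- run on the ASM instance as SYSASM; /dev/sde1 is hypothetical
ALTER DISKGROUP CLUSTERDG ADD DISK '/dev/sde1';
-- monitor the rebalance, if one is triggered
SELECT group_number, operation, state, est_minutes FROM v$asm_operation;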
Since CHM creates a database, the dbca folder inside $ORACLE_HOME/cfgtoollogs needs write permission for the oinstall group, as this CHM-related database is created as the grid user. Without this write permission the CHM repository database creation will fail.
[root@rhel12c1 cfgtoollogs]# ls -l
drwxr-x---. 3 oracle oinstall 4096 Jun 17 2014 dbca
[root@rhel12c1 cfgtoollogs]# chmod 770 dbca
[root@rhel12c1 cfgtoollogs]# ls -l
drwxrwx---. 3 oracle oinstall 4096 Jun 17 2014 dbca
One strange thing that was observed was a complaint about the permissions of the block devices used for the ASM disk groups.
Node Name Raw device      Block device    Permission      Owner           Group             Comment
------ --------------- --------------- --------------- --------------- ----------------- ------
rhel12c1 /dev/sdb1 /dev/sdb1 0660 grid asmadmin failed
rhel12c1 /dev/sdc1 /dev/sdc1 0660 grid asmadmin failed
rhel12c1 /dev/sdd1 /dev/sdd1 0660 grid asmadmin failed
rhel12c2 /dev/sdb1 /dev/sdb1 0660 grid asmadmin failed
rhel12c2 /dev/sdc1 /dev/sdc1 0660 grid asmadmin failed
rhel12c2 /dev/sdd1 /dev/sdd1 0660 grid asmadmin failed
PRVG-4666 : The group for block devices "/dev/sdb1" are incorrect on node "rhel12c1" [Expected = oinstall, Actual = asmadmin]
PRVG-4666 : The group for block devices "/dev/sdc1" are incorrect on node "rhel12c1" [Expected = oinstall, Actual = asmadmin]
PRVG-4666 : The group for block devices "/dev/sdd1" are incorrect on node "rhel12c1" [Expected = oinstall, Actual = asmadmin]
PRVG-4666 : The group for block devices "/dev/sdb1" are incorrect on node "rhel12c2" [Expected = oinstall, Actual = asmadmin]
PRVG-4666 : The group for block devices "/dev/sdc1" are incorrect on node "rhel12c2" [Expected = oinstall, Actual = asmadmin]
PRVG-4666 : The group for block devices "/dev/sdd1" are incorrect on node "rhel12c2" [Expected = oinstall, Actual = asmadmin]
As this is an upgrade, the ASM instance was created long ago and the permissions had already been validated. This failure message appeared during several runs of cluvfy but went away on its own. The correct group is asmadmin if it was designated as the ASM admin group when the cluster was created.
RACcheck has been replaced with orachk (refer to ORAchk - Health Checks for the Oracle Stack (Doc ID 1268927.2)), which can also be used for pre-upgrade checks with the command
./orachk -u -o pre
The output from cluvfy with the upgrade option is given below.
./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /opt/app/12.1.0/grid -dest_crshome /opt/app/12.1.0/grid2 -dest_version 12.1.0.2.0 -fixup -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "rhel12c1"
Destination Node Reachable?
------------------------------------ ------------------------
rhel12c1 yes
rhel12c2 yes
Result: Node reachability check passed from node "rhel12c1"


Checking user equivalence...

Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
rhel12c2 passed
rhel12c1 passed
Result: User equivalence check passed for user "grid"

Check: Package existence for "cvuqdisk"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 cvuqdisk-1.0.9-1 cvuqdisk-1.0.9-1 passed
rhel12c1 cvuqdisk-1.0.9-1 cvuqdisk-1.0.9-1 passed
Result: Package existence check passed for "cvuqdisk"

Check: Grid Infrastructure home writeability of path /opt/app/12.1.0/grid2
Grid Infrastructure home check passed

Checking CRS user consistency
Result: CRS user consistency check successful
Checking network configuration consistency.
Result: Check for network configuration consistency passed.
Checking ASM disk size consistency
All ASM disks are correctly sized
Checking if default discovery string is being used by ASM
ASM discovery string "/dev/sd*" is not the default discovery string
Checking if ASM parameter file is in use by an ASM instance on the local node
Result: ASM instance is using parameter file "+NEWCLUSTERDG/rhel12c/ASMPARAMETERFILE/REGISTRY.253.868893307" on node "rhel12c1" on which upgrade is requested.

Checking OLR integrity...
Check of existence of OLR configuration file "/etc/oracle/olr.loc" passed
Check of attributes of OLR configuration file "/etc/oracle/olr.loc" passed

WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.

OLR integrity check passed

Checking node connectivity...

Checking hosts config file...
Node Name Status
------------------------------------ ------------------------
rhel12c1 passed
rhel12c2 passed

Verification of the hosts config file successful


Interface information for node "rhel12c1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.0.93 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:CB:D8:AE 1500
eth0 192.168.0.89 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:CB:D8:AE 1500
eth0 192.168.0.90 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:CB:D8:AE 1500
eth1 192.168.1.87 192.168.1.0 0.0.0.0 192.168.0.100 08:00:27:92:0B:69 1500
eth1 169.254.145.58 169.254.0.0 0.0.0.0 192.168.0.100 08:00:27:92:0B:69 1500


Interface information for node "rhel12c2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.0.94 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:9C:66:81 1500
eth0 192.168.0.87 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:9C:66:81 1500
eth0 192.168.0.86 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:9C:66:81 1500
eth0 192.168.0.91 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:9C:66:81 1500
eth0 192.168.0.92 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:9C:66:81 1500
eth1 192.168.1.88 192.168.1.0 0.0.0.0 192.168.0.100 08:00:27:1E:0E:A8 1500
eth1 169.254.87.189 169.254.0.0 0.0.0.0 192.168.0.100 08:00:27:1E:0E:A8 1500


Check: Node connectivity using interfaces on subnet "192.168.0.0"

Check: Node connectivity of subnet "192.168.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rhel12c1[192.168.0.90] rhel12c1[192.168.0.93] yes
rhel12c1[192.168.0.90] rhel12c1[192.168.0.89] yes
rhel12c1[192.168.0.90] rhel12c2[192.168.0.94] yes
rhel12c1[192.168.0.90] rhel12c2[192.168.0.87] yes
...
Result: Node connectivity passed for subnet "192.168.0.0" with node(s) rhel12c1,rhel12c2


Check: TCP connectivity of subnet "192.168.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rhel12c1 : 192.168.0.90 rhel12c1 : 192.168.0.90 passed
rhel12c1 : 192.168.0.93 rhel12c1 : 192.168.0.90 passed
rhel12c1 : 192.168.0.89 rhel12c1 : 192.168.0.90 passed
rhel12c2 : 192.168.0.94 rhel12c1 : 192.168.0.90 passed
...
Result: TCP connectivity check passed for subnet "192.168.0.0"


Check: Node connectivity using interfaces on subnet "192.168.1.0"

Check: Node connectivity of subnet "192.168.1.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rhel12c1[192.168.1.87] rhel12c2[192.168.1.88] yes
Result: Node connectivity passed for subnet "192.168.1.0" with node(s) rhel12c1,rhel12c2


Check: TCP connectivity of subnet "192.168.1.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rhel12c1 : 192.168.1.87 rhel12c1 : 192.168.1.87 passed
rhel12c2 : 192.168.1.88 rhel12c1 : 192.168.1.87 passed
rhel12c1 : 192.168.1.87 rhel12c2 : 192.168.1.88 passed
rhel12c2 : 192.168.1.88 rhel12c2 : 192.168.1.88 passed
Result: TCP connectivity check passed for subnet "192.168.1.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.0.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.
Task ASM Integrity check started...


Starting check to see if ASM is running on all cluster nodes...

ASM Running check passed. ASM is running on all specified nodes

Confirming that at least one ASM disk group is configured...
Disk Group Check passed. At least one Disk Group configured

Task ASM Integrity check passed...

Checking OCR integrity...
Disks "+NEWCLUSTERDG" are managed by ASM.

OCR integrity check passed

Check: Total memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 4.4199GB (4634568.0KB) 4GB (4194304.0KB) passed
rhel12c1 4.4199GB (4634568.0KB) 4GB (4194304.0KB) passed
Result: Total memory check passed

Check: Available memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 3.2136GB (3369676.0KB) 50MB (51200.0KB) passed
rhel12c1 2.979GB (3123740.0KB) 50MB (51200.0KB) passed
Result: Available memory check passed

Check: Swap space
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 5GB (5242872.0KB) 4.4199GB (4634568.0KB) passed
rhel12c1 5GB (5242872.0KB) 4.4199GB (4634568.0KB) passed
Result: Swap space check passed

Check: Free disk space for "rhel12c2:/usr,rhel12c2:/var,rhel12c2:/etc,rhel12c2:/opt/app/12.1.0/grid,rhel12c2:/sbin,rhel12c2:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/usr rhel12c2 / 9.1055GB 7.9635GB passed
/var rhel12c2 / 9.1055GB 7.9635GB passed
/etc rhel12c2 / 9.1055GB 7.9635GB passed
/opt/app/12.1.0/grid rhel12c2 / 9.1055GB 7.9635GB passed
/sbin rhel12c2 / 9.1055GB 7.9635GB passed
/tmp rhel12c2 / 9.1055GB 7.9635GB passed
Result: Free disk space check passed for "rhel12c2:/usr,rhel12c2:/var,rhel12c2:/etc,rhel12c2:/opt/app/12.1.0/grid,rhel12c2:/sbin,rhel12c2:/tmp"

Check: Free disk space for "rhel12c1:/usr,rhel12c1:/var,rhel12c1:/etc,rhel12c1:/opt/app/12.1.0/grid,rhel12c1:/sbin,rhel12c1:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/usr rhel12c1 / 8.3122GB 7.9635GB passed
/var rhel12c1 / 8.3122GB 7.9635GB passed
/etc rhel12c1 / 8.3122GB 7.9635GB passed
/opt/app/12.1.0/grid rhel12c1 / 8.3122GB 7.9635GB passed
/sbin rhel12c1 / 8.3122GB 7.9635GB passed
/tmp rhel12c1 / 8.3122GB 7.9635GB passed
Result: Free disk space check passed for "rhel12c1:/usr,rhel12c1:/var,rhel12c1:/etc,rhel12c1:/opt/app/12.1.0/grid,rhel12c1:/sbin,rhel12c1:/tmp"

Check: User existence for "grid"
Node Name Status Comment
------------ ------------------------ ------------------------
rhel12c2 passed exists(501)
rhel12c1 passed exists(501)

Checking for multiple users with UID value 501
Result: Check for multiple users with UID value 501 passed
Result: User existence check passed for "grid"

Check: Group existence for "oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
rhel12c2 passed exists
rhel12c1 passed exists
Result: Group existence check passed for "oinstall"

Check: Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
rhel12c2 passed exists
rhel12c1 passed exists
Result: Group existence check passed for "dba"

Check: Membership of user "grid" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Status
---------------- ------------ ------------ ------------ ------------ ------------
rhel12c2 yes yes yes yes passed
rhel12c1 yes yes yes yes passed
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed

Check: Membership of user "grid" in group "dba"
Node Name User Exists Group Exists User in Group Status
---------------- ------------ ------------ ------------ ----------------
rhel12c2 yes yes yes passed
rhel12c1 yes yes yes passed
Result: Membership check for user "grid" in group "dba" passed

Check: Run level
Node Name run level Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 3 3,5 passed
rhel12c1 3 3,5 passed
Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rhel12c2 hard 65536 65536 passed
rhel12c1 hard 65536 65536 passed
Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rhel12c2 soft 1024 1024 passed
rhel12c1 soft 1024 1024 passed
Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rhel12c2 hard 16384 16384 passed
rhel12c1 hard 16384 16384 passed
Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rhel12c2 soft 2047 2047 passed
rhel12c1 soft 2047 2047 passed
Result: Soft limits check passed for "maximum user processes"

There are no oracle patches required for home "/opt/app/12.1.0/grid".

There are no oracle patches required for home "/opt/app/12.1.0/grid".

Checking for suitability of source home "/opt/app/12.1.0/grid" for upgrading to version "12.1.0.2.0".
Result: Source home "/opt/app/12.1.0/grid" is suitable for upgrading to version "12.1.0.2.0".

Check: System architecture
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 x86_64 x86_64 passed
rhel12c1 x86_64 x86_64 passed
Result: System architecture check passed

Check: Kernel version
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 2.6.32-358.el6.x86_64 2.6.32 passed
rhel12c1 2.6.32-358.el6.x86_64 2.6.32 passed
Result: Kernel version check passed

Check: Kernel parameter for "semmsl"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel12c1 3010 3010 250 passed
rhel12c2 3010 3010 250 passed
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel12c1 385280 385280 32000 passed
rhel12c2 385280 385280 32000 passed
Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel12c1 3010 3010 100 passed
rhel12c2 3010 3010 100 passed
Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel12c1 128 128 128 passed
rhel12c2 128 128 128 passed
Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel12c1 68719476736 68719476736 2372898816 passed
rhel12c2 68719476736 68719476736 2372898816 passed
Result: Kernel parameter check passed for "shmmax"

Check: Kernel parameter for "shmmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel12c1 4096 4096 4096 passed
rhel12c2 4096 4096 4096 passed
Result: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "shmall"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel12c1 4294967296 4294967296 463456 passed
rhel12c2 4294967296 4294967296 463456 passed
Result: Kernel parameter check passed for "shmall"

Check: Kernel parameter for "file-max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel12c1 6815744 6815744 6815744 passed
rhel12c2 6815744 6815744 6815744 passed
Result: Kernel parameter check passed for "file-max"

Check: Kernel parameter for "ip_local_port_range"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel12c1 between 9000 & 65500 between 9000 & 65500 between 9000 & 65535 passed
rhel12c2 between 9000 & 65500 between 9000 & 65500 between 9000 & 65535 passed
Result: Kernel parameter check passed for "ip_local_port_range"

Check: Kernel parameter for "rmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel12c1 4194304 4194304 262144 passed
rhel12c2 4194304 4194304 262144 passed
Result: Kernel parameter check passed for "rmem_default"

Check: Kernel parameter for "rmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel12c1 4194304 4194304 4194304 passed
rhel12c2 4194304 4194304 4194304 passed
Result: Kernel parameter check passed for "rmem_max"

Check: Kernel parameter for "wmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel12c1 1048576 1048576 262144 passed
rhel12c2 1048576 1048576 262144 passed
Result: Kernel parameter check passed for "wmem_default"

Check: Kernel parameter for "wmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel12c1 2097152 2097152 1048576 passed
rhel12c2 2097152 2097152 1048576 passed
Result: Kernel parameter check passed for "wmem_max"

Check: Kernel parameter for "aio-max-nr"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel12c1 3145728 3145728 1048576 passed
rhel12c2 3145728 3145728 1048576 passed
Result: Kernel parameter check passed for "aio-max-nr"

Check: Kernel parameter for "panic_on_oops"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel12c1 1 1 1 passed
rhel12c2 1 1 1 passed
Result: Kernel parameter check passed for "panic_on_oops"

Check: Package existence for "binutils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 binutils-2.20.51.0.2-5.36.el6 binutils-2.20.51.0.2 passed
rhel12c1 binutils-2.20.51.0.2-5.36.el6 binutils-2.20.51.0.2 passed
Result: Package existence check passed for "binutils"

Check: Package existence for "compat-libcap1"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 compat-libcap1-1.10-1 compat-libcap1-1.10 passed
rhel12c1 compat-libcap1-1.10-1 compat-libcap1-1.10 passed
Result: Package existence check passed for "compat-libcap1"

Check: Package existence for "compat-libstdc++-33(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed
rhel12c1 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"

Check: Package existence for "libgcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 libgcc(x86_64)-4.4.7-3.el6 libgcc(x86_64)-4.4.4 passed
rhel12c1 libgcc(x86_64)-4.4.7-3.el6 libgcc(x86_64)-4.4.4 passed
Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 libstdc++(x86_64)-4.4.7-3.el6 libstdc++(x86_64)-4.4.4 passed
rhel12c1 libstdc++(x86_64)-4.4.7-3.el6 libstdc++(x86_64)-4.4.4 passed
Result: Package existence check passed for "libstdc++(x86_64)"

Check: Package existence for "libstdc++-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 libstdc++-devel(x86_64)-4.4.7-3.el6 libstdc++-devel(x86_64)-4.4.4 passed
rhel12c1 libstdc++-devel(x86_64)-4.4.7-3.el6 libstdc++-devel(x86_64)-4.4.4 passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 sysstat-9.0.4-20.el6 sysstat-9.0.4 passed
rhel12c1 sysstat-9.0.4-20.el6 sysstat-9.0.4 passed
Result: Package existence check passed for "sysstat"

Check: Package existence for "gcc"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 gcc-4.4.7-3.el6 gcc-4.4.4 passed
rhel12c1 gcc-4.4.7-3.el6 gcc-4.4.4 passed
Result: Package existence check passed for "gcc"

Check: Package existence for "gcc-c++"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 gcc-c++-4.4.7-3.el6 gcc-c++-4.4.4 passed
rhel12c1 gcc-c++-4.4.7-3.el6 gcc-c++-4.4.4 passed
Result: Package existence check passed for "gcc-c++"

Check: Package existence for "ksh"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 ksh ksh passed
rhel12c1 ksh ksh passed
Result: Package existence check passed for "ksh"

Check: Package existence for "make"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 make-3.81-20.el6 make-3.81 passed
rhel12c1 make-3.81-20.el6 make-3.81 passed
Result: Package existence check passed for "make"

Check: Package existence for "glibc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 glibc(x86_64)-2.12-1.107.el6 glibc(x86_64)-2.12 passed
rhel12c1 glibc(x86_64)-2.12-1.107.el6 glibc(x86_64)-2.12 passed
Result: Package existence check passed for "glibc(x86_64)"

Check: Package existence for "glibc-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 glibc-devel(x86_64)-2.12-1.107.el6 glibc-devel(x86_64)-2.12 passed
rhel12c1 glibc-devel(x86_64)-2.12-1.107.el6 glibc-devel(x86_64)-2.12 passed
Result: Package existence check passed for "glibc-devel(x86_64)"

Check: Package existence for "libaio(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.107 passed
rhel12c1 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.107 passed
Result: Package existence check passed for "libaio(x86_64)"

Check: Package existence for "libaio-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.107 passed
rhel12c1 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.107 passed
Result: Package existence check passed for "libaio-devel(x86_64)"

Check: Package existence for "nfs-utils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel12c2 nfs-utils-1.2.3-36.el6 nfs-utils-1.2.3-15 passed
rhel12c1 nfs-utils-1.2.3-36.el6 nfs-utils-1.2.3-15 passed
Result: Package existence check passed for "nfs-utils"

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed

Check: Current group ID
Result: Current group ID check passed

Starting check for consistency of primary group of root user
Node Name Status
------------------------------------ ------------------------
rhel12c2 passed
rhel12c1 passed

Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

Checking existence of NTP configuration file "/etc/ntp.conf" across nodes
Node Name File exists?
------------------------------------ ------------------------
rhel12c2 no
rhel12c1 no
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running

Result: Clock synchronization check using Network Time Protocol(NTP) passed

Checking Core file name pattern consistency...
Core file name pattern consistency check passed.

Checking to make sure user "grid" is not in "root" group
Node Name Status Comment
------------ ------------------------ ------------------------
rhel12c2 passed does not exist
rhel12c1 passed does not exist
Result: User "grid" is not part of "root" group. Check passed

Check default user file creation mask
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rhel12c2 0022 0022 passed
rhel12c1 0022 0022 passed
Result: Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of 'domain' and 'search' entries is defined
"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
Checking if 'domain' entry in file "/etc/resolv.conf" is consistent across the nodes...
"domain" entry does not exist in any "/etc/resolv.conf" file
Checking if 'search' entry in file "/etc/resolv.conf" is consistent across the nodes...
Checking file "/etc/resolv.conf" to make sure that only one 'search' entry is defined
More than one "search" entry does not exist in any "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
Checking DNS response time for an unreachable node
Node Name Status
------------------------------------ ------------------------
rhel12c1 failed
rhel12c2 failed
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rhel12c1,rhel12c2
checking DNS response from all servers in "/etc/resolv.conf"
checking response for name "rhel12c2" from each of the name servers specified in "/etc/resolv.conf"
Node Name Source Comment Status
------------ ------------------------ ------------------------ ----------
rhel12c2 192.168.0.85 IPv4 passed
checking response for name "rhel12c1" from each of the name servers specified in "/etc/resolv.conf"
Node Name Source Comment Status
------------ ------------------------ ------------------------ ----------
rhel12c1 192.168.0.85 IPv4 passed

Check for integrity of file "/etc/resolv.conf" failed


UDev attributes check for OCR locations started...
Result: UDev attributes check passed for OCR locations


UDev attributes check for Voting Disk locations started...
Result: UDev attributes check passed for Voting Disk locations

Check: Time zone consistency
Result: Time zone consistency check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed

Clusterware version consistency passed.

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Checking daemon "avahi-daemon" is not configured and running

Check: Daemon "avahi-daemon" not configured
Node Name Configured Status
------------ ------------------------ ------------------------
rhel12c2 no passed
rhel12c1 no passed
Daemon not configured check passed for process "avahi-daemon"

Check: Daemon "avahi-daemon" not running
Node Name Running? Status
------------ ------------------------ ------------------------
rhel12c2 no passed
rhel12c1 no passed
Daemon not running check passed for process "avahi-daemon"

Starting check for Network interface bonding status of private interconnect network interfaces ...
Check for Network interface bonding status of private interconnect network interfaces passed
Starting check for /dev/shm mounted as temporary file system ...
Check for /dev/shm mounted as temporary file system passed
Starting check for /boot mount ...
Check for /boot mount passed
Starting check for zeroconf check ...
Check for zeroconf check passed
Pre-check for cluster services setup was unsuccessful on all the nodes.


Run the installer for 12.1.0.2 (not all steps are shown).
All instances are selected by default.
The ASM admin group has asmadmin as the OS group.
Upgrade summary

The active, release and software versions are as given below.
[grid@rhel12c1 grid]$  crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.1.0]
[grid@rhel12c1 grid]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.1.0]
[grid@rhel12c1 grid]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rhel12c1] is [12.1.0.1.0]
When the root upgrade script is run, the software version changes to 12.1.0.2 for that node. When all nodes have been upgraded, the active version changes to 12.1.0.2.

Rootupgrade.sh output from first node
# /opt/app/12.1.0/grid2/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /opt/app/12.1.0/grid2

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/12.1.0/grid2/crs/install/crsconfig_params
2015/01/13 16:32:00 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2015/01/13 16:32:37 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2015/01/13 16:32:52 CLSRSC-464: Starting retrieval of the cluster configuration data

2015/01/13 16:33:13 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2015/01/13 16:33:14 CLSRSC-363: User ignored prerequisites during installation

2015/01/13 16:33:48 CLSRSC-515: Starting OCR manual backup.

2015/01/13 16:33:54 CLSRSC-516: OCR manual backup successful.

2015/01/13 16:34:08 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode

2015/01/13 16:34:08 CLSRSC-482: Running command: '/opt/app/12.1.0/grid/bin/crsctl start rollingupgrade 12.1.0.2.0'

CRS-1131: The cluster was successfully set to rolling upgrade mode.
2015/01/13 16:34:15 CLSRSC-482: Running command: '/opt/app/12.1.0/grid2/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /opt/app/12.1.0/grid -oldCRSVersion 12.1.0.1.0 -nodeNumber 1 -firstNode true -startRolling false'


ASM configuration upgraded in local node successfully.

2015/01/13 16:34:25 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode

2015/01/13 16:34:25 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2015/01/13 16:35:52 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2015/01/13 16:39:11 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2015/01/13 16:44:27 CLSRSC-472: Attempting to export the OCR

2015/01/13 16:44:27 CLSRSC-482: Running command: 'ocrconfig -upgrade grid oinstall'

2015/01/13 16:44:42 CLSRSC-473: Successfully exported the OCR

2015/01/13 16:44:50 CLSRSC-486:
At this stage of upgrade, the OCR has changed.
Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.

2015/01/13 16:44:50 CLSRSC-541:
To downgrade the cluster:
1. All nodes that have been upgraded must be downgraded.

2015/01/13 16:44:50 CLSRSC-542:
2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.

2015/01/13 16:44:50 CLSRSC-543:
3. The downgrade command must be run on the node rhel12c2 with the '-lastnode' option to restore global configuration data.

2015/01/13 16:45:16 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2015/01/13 16:45:42 CLSRSC-474: Initiating upgrade of resource types

2015/01/13 16:45:54 CLSRSC-482: Running command: 'upgrade model -s 12.1.0.1.0 -d 12.1.0.2.0 -p first'

2015/01/13 16:45:54 CLSRSC-475: Upgrade of resource types successfully initiated.

2015/01/13 16:46:06 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Rootupgrade.sh run on the second (last) node
#  /opt/app/12.1.0/grid2/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /opt/app/12.1.0/grid2

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/12.1.0/grid2/crs/install/crsconfig_params
2015/01/13 16:46:57 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2015/01/13 16:47:36 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2015/01/13 16:47:42 CLSRSC-464: Starting retrieval of the cluster configuration data

2015/01/13 16:48:07 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2015/01/13 16:48:07 CLSRSC-363: User ignored prerequisites during installation


ASM configuration upgraded in local node successfully.

2015/01/13 16:48:39 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2015/01/13 16:50:03 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2015/01/13 16:50:42 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2015/01/13 16:54:48 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2015/01/13 16:55:06 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded

2015/01/13 16:55:06 CLSRSC-482: Running command: '/opt/app/12.1.0/grid2/bin/crsctl set crs activeversion'

Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2015/01/13 16:56:15 CLSRSC-479: Successfully set Oracle Clusterware active version

2015/01/13 16:56:24 CLSRSC-476: Finishing upgrade of resource types

2015/01/13 16:56:35 CLSRSC-482: Running command: 'upgrade model -s 12.1.0.1.0 -d 12.1.0.2.0 -p last'

2015/01/13 16:56:35 CLSRSC-477: Successfully completed upgrade of resource types

2015/01/13 16:57:15 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
When all nodes are upgraded the active version reflects the upgraded version (12.1.0.2)
[grid@rhel12c1 grid]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[grid@rhel12c1 grid]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.1.0]
[grid@rhel12c1 grid]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rhel12c1] is [12.1.0.2.0]
Use
cluvfy stage -post crsinst
or
orachk -u -o post
to check for any post-upgrade issues. If satisfied with the upgrade, uninstall the previous version of the grid infrastructure software.
This concludes the upgrade of grid infrastructure from 12.1.0.1 to 12.1.0.2.

Related Posts
Upgrading from 11gR2 (11.2.0.3) to 12c (12.1.0.1) Grid Infrastructure
Upgrade Oracle Database 12c1 from 12.1.0.1 to 12.1.0.2
Upgrading from 10.2.0.4 to 10.2.0.5 (Clusterware, RAC, ASM)
Upgrade from 10.2.0.5 to 11.2.0.3 (Clusterware, RAC, ASM)
Upgrade from 11.1.0.7 to 11.2.0.3 (Clusterware, ASM & RAC)
Upgrade from 11.1.0.7 to 11.2.0.4 (Clusterware, ASM & RAC)
Upgrading from 11.1.0.7 to 11.2.0.3 with Transient Logical Standby
Upgrading from 11.2.0.1 to 11.2.0.3 with in-place upgrade for RAC
In-place upgrade from 11.2.0.2 to 11.2.0.3
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 1
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 2

Upgrading RAC from 12.1.0.1 to 12.1.0.2 - Database

After upgrading the grid infrastructure from 12.1.0.1 to 12.1.0.2, the next step is the upgrade of the RAC software and the database itself. There are earlier posts on upgrading from 12.1.0.1 to 12.1.0.2 in a data guard configuration and with PDBs. Those posts have useful metalink notes that can be beneficial in a RAC environment as well.
The database software upgrade is an out-of-place upgrade with role separation. Use orachk or cluvfy to check the pre-upgrade status. Output from cluvfy is shown below.
cluvfy stage  -pre dbinst -upgrade -src_dbhome /opt/app/oracle/product/12.1.0/dbhome_1 -dest_dbhome /opt/app/oracle/product/12.1.0/dbhome_2 -dest_version 12.1.0.2.0

Performing pre-checks for database installation

Checking node reachability...
Node reachability check passed from node "rhel12c1"


Checking user equivalence...
User equivalence check passed for user "oracle"
Specify user name for database "rac12c1" [default "DBSNMP"] : system
Specify password for user "system" in database "rac12c1" :

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) rhel12c2,rhel12c1
TCP connectivity check passed for subnet "192.168.1.0"


Check: Node connectivity using interfaces on subnet "192.168.0.0"
Node connectivity passed for subnet "192.168.0.0" with node(s) rhel12c2,rhel12c1
TCP connectivity check passed for subnet "192.168.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.0.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rhel12c2:/opt/app/oracle/product/12.1.0/dbhome_2,rhel12c2:/tmp"
Free disk space check passed for "rhel12c1:/opt/app/oracle/product/12.1.0/dbhome_2,rhel12c1:/tmp"
Check for multiple users with UID value 500 passed
User existence check passed for "oracle"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Group existence check passed for "asmdba"
Membership check for user "oracle" in group "oinstall" [as Primary] passed
Membership check for user "oracle" in group "dba" passed
Membership check for user "oracle" in group "asmdba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
There are no oracle patches required for home "/opt/app/oracle/product/12.1.0/dbhome_1".
There are no oracle patches required for home "/opt/app/oracle/product/12.1.0/dbhome_2".
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed
Default user file creation mask check passed

Checking CRS integrity...

Clusterware version consistency passed.

CRS integrity check passed

Checking Cluster manager integrity...


Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.

Cluster manager integrity check passed


Checking node application existence...

Checking existence of VIP node application (required)
VIP node application check passed

Checking existence of NETWORK node application (required)
NETWORK node application check passed

Checking existence of ONS node application (optional)
ONS node application check passed

Oracle Clusterware is installed on all nodes.
CTSS resource check passed
Query of CTSS for time offset passed

CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed


Oracle Cluster Time Synchronization Services check passed
Checking integrity of file "/etc/resolv.conf" across nodes

"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rhel12c1,rhel12c2
checking DNS response from all servers in "/etc/resolv.conf"

Check for integrity of file "/etc/resolv.conf" failed

Time zone consistency check passed

Checking Single Client Access Name (SCAN)...

Checking TCP connectivity to SCAN listeners...
TCP connectivity to SCAN listeners exists on all cluster nodes

Checking name resolution setup for "rhel12c-scan.rac.domain.net"...

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Checking SCAN IP addresses...
Check of SCAN IP addresses passed

Verification of SCAN VIP and listener setup passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking stale database schema statistics...
Database stale schema statistics check is passed


ASM and CRS versions are compatible
Database Clusterware version compatibility passed.
OS user consistency check for upgrade successful
Group existence check passed for "asmadmin"
Membership check for user "grid" in group "asmadmin" passed
Group existence check passed for "asmoper"
Membership check for user "grid" in group "asmoper" passed
Group existence check passed for "asmdba"
Membership check for user "grid" in group "asmdba" passed

Pre-check for database installation was unsuccessful on all the nodes.
The failure is due to PRVF-5636 which is ignored in this case.



Once the prerequisite checks are validated, run the installer from the 12.1.0.2 installation media. Give a new home for the Oracle database software (part of the out-of-place upgrade).
Upgrade summary
Once the database software is installed, the next step is the upgrade of the database itself. The database in this case is a non-CDB and will be upgraded using DBUA. As this is a role-separated environment, check that the ownership of the oracle binary is correctly set to oracle:asmadmin. If not, set the correct permissions using setasmgidwrap, as sketched below.
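A sketch of the ownership check and fix, assuming the database and grid home paths used in this post (run setasmgidwrap as the grid user):
# the oracle binary in the new database home should be owned by oracle:asmadmin
ls -l /opt/app/oracle/product/12.1.0/dbhome_2/bin/oracle
# if the group is not asmadmin, fix it from the grid home
/opt/app/12.1.0/grid2/bin/setasmgidwrap o=/opt/app/oracle/product/12.1.0/dbhome_2/bin/oracle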
Refer to the earlier posts (non-CDB upgrade, CDB+PDB upgrade) for additional information on the prerequisites for upgrades using DBUA.
After running DBUA from the new 12.1.0.2 Oracle home, select the upgrade option.
Select the database to be upgraded. DBUA detects the database type as RAC.
DBUA summary
Upgrade results
Verify the post-upgrade status using orachk (orachk -u -o post). When satisfied with the upgrade, update the compatible parameter. In RAC this parameter is expected to be the same on all instances and cannot be updated in a rolling fashion (1636681.1); a sketch is given below.
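A minimal sketch of this non-rolling change, assuming the database name rac12c1 from the cluvfy prompt above and an spfile shared by all instances:
-- set compatible for all instances in the shared spfile
ALTER SYSTEM SET compatible='12.1.0.2.0' SCOPE=SPFILE SID='*';
Then restart the whole database (non-rolling), for example with srvctl stop database -d rac12c1 followed by srvctl start database -d rac12c1, so that the new value takes effect on every instance.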
At the end of the database upgrade the release version will also reflect the new version
crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
Apply patch 19518079 so that the database upgrade status is reflected in registry$history.
This concludes the upgrade of the RAC database from 12.1.0.1 to 12.1.0.2.

Related Posts
Upgrading from 11gR2 (11.2.0.3) to 12c (12.1.0.1) Grid Infrastructure
Upgrade Oracle Database 12c1 from 12.1.0.1 to 12.1.0.2
Upgrading from 10.2.0.4 to 10.2.0.5 (Clusterware, RAC, ASM)
Upgrade from 10.2.0.5 to 11.2.0.3 (Clusterware, RAC, ASM)
Upgrade from 11.1.0.7 to 11.2.0.3 (Clusterware, ASM & RAC)
Upgrade from 11.1.0.7 to 11.2.0.4 (Clusterware, ASM & RAC)
Upgrading from 11.1.0.7 to 11.2.0.3 with Transient Logical Standby
Upgrading from 11.2.0.1 to 11.2.0.3 with in-place upgrade for RAC
In-place upgrade from 11.2.0.2 to 11.2.0.3
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 1
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 2

Useful metalink notes
Master Note For Oracle Database 12c Release 1 (12.1) Database/Client Installation/Upgrade/Migration Standalone Environment (Non-RAC) [ID 1520299.1]
Complete Checklist for DBUA Upgrade from 12.1.0.1 to 12.1.0.N [ID 1933011.1]
How to Download and Run Oracle's Database Pre-Upgrade Utility [ID 884522.1]
How to execute sql scripts in Multitenant environment (catcon.pl) [ID 1932340.1]
Complete checklist for 12c R1 PDB upgrade (Upgrading single/multiple PDB) [ID 1933391.1]
Complete checklist for manual upgrade from 12.1.0.1 to 12.1.0.N (Full CDB Upgrade) [ID 1932762.1]
Oracle Database 12c Release 1 (12.1) DBUA : Understanding New Changes with All New 12.1 DBUA [ID 1493645.1]
Complete Checklist for Upgrading to Oracle Database 12c Release 1 using DBUA [ID 1516557.1]
Updating the RDBMS DST version in 12c Release 1 (12.1.0.1 and up) using DBMS_DST [ID 1509653.1]
dba_registry_history Table On Newly Created 11.2.0.2 Database Shows PSU Entry[ID 1367065.1]
"VIEW INVALIDATE" In Action Column Dba_Registry_History During Upgarde To 11.2.0.2 [ID 1390402.1]

Transportable Tablespaces With Segment Dependencies

Transportable tablespaces allow moving tablespaces between databases. The transportable tablespace set must be self-contained, so if there are segment dependencies between tablespaces all the dependent tablespaces must be moved together. This post shows the steps for transporting three tablespaces which have segment dependencies between them.
1. Create three tablespaces in this case one for storing table data, one for indexes and one for lob segments.
create tablespace tabletbs datafile '+data(datafile)' size 10m;
create tablespace indextbs datafile '+data(datafile)' size 10m;
create tablespace lobstbs datafile '+data(datafile)' size 10m;
2. Grant the user quotas on the tablespaces
alter user asanga quota unlimited on tabletbs quota unlimited on indextbs quota unlimited on lobstbs;
3. Create the segments such that all tablespaces are used.
create table abc (a number, b varchar2(100)) tablespace tabletbs;

create table def (b number, c blob) tablespace tabletbs lob(c)
STORE AS c_lob_seg (
TABLESPACE lobstbs
CHUNK 32K
CACHE
STORAGE (MAXEXTENTS UNLIMITED)
INDEX c_lob_idx (
TABLESPACE lobstbs
STORAGE (MAXEXTENTS UNLIMITED)
)
);

create index aidx on abc(a) tablespace indextbs;
create index caidx on def(b) tablespace indextbs;
4. Populate the tables
SQL> begin
for i in 1 .. 1000
loop
insert into abc values(i,'abc'||i);
end loop;
commit;
end;
/

SQL> begin
for i in 1000 .. 2001
loop
insert into def values(i,'def'||i);
end loop;
commit;
end;
/
5. If a self-contained transportable tablespace set is not specified, the problem will be reported in the transport set violations view. The output below shows the violation when only tabletbs is selected for transport.
SQL>  EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('tabletbs',true);
PL/SQL procedure successfully completed.

SQL> SELECT * FROM SYS.TRANSPORT_SET_VIOLATIONS;

VIOLATIONS
----------------------------------------------------------------------------------------------------------------
ORA-39905: Table ASANGA.DEF in tablespace tabletbs points to LOB segment ASANGA.C_LOB_SEG in tablespace lobstbs.
When all the dependent tablespaces are selected no violations are shown.
SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('tabletbs,indextbs,lobstbs',TRUE);
PL/SQL procedure successfully completed.

SQL> SELECT * FROM SYS.TRANSPORT_SET_VIOLATIONS;
no rows selected
6. Change the tablespace to read only mode.
SQL> ALTER TABLESPACE tabletbs READ ONLY;
Tablespace altered.

SQL> ALTER TABLESPACE indextbs READ ONLY;
Tablespace altered.

SQL> ALTER TABLESPACE lobstbs READ ONLY;
Tablespace altered.
7. Export the tablespace metadata.
expdp  system/rac11g2db directory=exec_dir transport_tablespace=y tablespaces=tabletbs,indextbs,lobstbs dumpfile=tbs.dmp logfile=tbs.log

Export: Release 11.2.0.3.0 - Production on Wed Feb 18 11:39:22 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "transport_tablespace=TRUE" Location: Command Line, Replaced with: "transport_tablespaces=tabletbs,indextbs,lobstbs"
Legacy Mode has set reuse_dumpfiles=true parameter.
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** directory=exec_dir tablespaces=tabletbs,indextbs,lobstbs dumpfile=tbs.dmp logfile=tbs.log reuse_dumpfiles=true
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
/home/oracle/df/tbs.dmp
******************************************************************************
Datafiles required for transportable tablespace indextbs:
+DATA/rac11g2/datafile/indextbs.269.871989879
Datafiles required for transportable tablespace lobstbs:
+DATA/rac11g2/datafile/lobstbs.268.871989887
Datafiles required for transportable tablespace tabletbs:
+DATA/rac11g2/datafile/tabletbs.270.871989869
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 11:40:38
8. Copy the data files belonging to the tablespaces being transported to the remote host. Data files in ASM could be copied to the local file system with asmcmd's cp command and then copied to the remote host (for example with scp, as sketched after the asmcmd output below), or transferred using the FTP service provided with XML DB.
 asmcmd cp +DATA/rac11g2/datafile/tabletbs.270.871989869 /home/oracle/tabletbs.270.871989869
copying +DATA/rac11g2/datafile/tabletbs.270.871989869 -> /home/oracle/tabletbs.270.871989869
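The copied file can then be pushed to the remote host, for example with scp (the remote host name and target directory are illustrative):
scp /home/oracle/tabletbs.270.871989869 oracle@remotehost:/home/oracle/trntbs/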


9. Import the tablespace metadata in the remote host. Reflect the new location of the data files copied over in the transport_datafiles parameter.
impdp system/ent11g2db directory=exec_dir dumpfile=tbs.dmp logfile=imp.log 
transport_datafiles='/home/oracle/trntbs/indextbs.269.871989879','/home/oracle/trntbs/lobstbs.268.871989887','/home/oracle/trntbs/tabletbs.270.871989869'

Import: Release 11.2.0.3.0 - Production on Wed Feb 18 10:55:10 2015
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** directory=exec_dir dumpfile=tbs.dmp logfile=imp.log transport_datafiles=/home/oracle/trntbs/indextbs.269.871989879,/home/oracle/trntbs/lobstbs.268.871989887,/home/oracle/trntbs/tabletbs.270.871989869
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 10:55:13
10. After the import the tablespaces will be plugged in but in read only mode.
SQL> SELECT tablespace_name, plugged_in, status FROM   dba_tablespaces;

TABLESPACE_NAME PLU STATUS
------------------------------ --- ---------
tabletbs YES READ ONLY
indextbs YES READ ONLY
lobstbs YES READ ONLY
Change tablespace mode to read write.
SQL> alter tablespace lobstbs read write;
Tablespace altered.

SQL> alter tablespace indextbs read write;
Tablespace altered.

SQL> alter tablespace tabletbs read write;
Tablespace altered.

SQL> SELECT tablespace_name, plugged_in, status FROM dba_tablespaces;

TABLESPACE_NAME PLU STATUS
------------------------------ --- ---------
tabletbs YES ONLINE
indextbs YES ONLINE
lobstbs YES ONLINE
11. The transported tablespaces are now ready for use. A quick sanity check is sketched below.
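A minimal sanity check of the plugged-in data, assuming the owning schema ASANGA (as seen in the violation output) exists on the target database:
SQL> select count(*) from asanga.abc;
SQL> select count(*) from asanga.def;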

SP2-0310: unable to open file catmmig.sql

Patch 19518079 was needed so that an upgrade from 12.1.0.1 to 12.1.0.2 is reflected in registry$history. However, this patch is now included in the PSU 12.1.0.2.2, which results in an error when the post patch installation steps are carried out.
Patch 19518079 rollback (pdb CDB$ROOT): WITH ERRORS
logfile: /opt/app/oracle/cfgtoollogs/sqlpatch/19518079/18024100/19518079_rollback_ENT12C_CDBROOT_2015Feb06_17_47_56.log (errors)
Error at line 27: SP2-0310: unable to open file "/opt/app/oracle/product/12.1.0/dbhome_2/sqlpatch/19518079/18024100/rollback_files/rdbms/admin/catmmig.sql"

Patch 19877336 apply (pdb CDB$ROOT): SUCCESS
logfile: /opt/app/oracle/cfgtoollogs/sqlpatch/19877336/18313828/19877336_apply_ENT12C_CDBROOT_2015Feb06_17_47_56.log (no errors)
Patch 19769480 apply (pdb CDB$ROOT): SUCCESS
logfile: /opt/app/oracle/cfgtoollogs/sqlpatch/19769480/18350083/19769480_apply_ENT12C_CDBROOT_2015Feb06_17_49_33.log (no errors)
Patch 19518079 rollback (pdb PDB$SEED): WITH ERRORS
logfile: /opt/app/oracle/cfgtoollogs/sqlpatch/19518079/18024100/19518079_rollback_ENT12C_PDBSEED_2015Feb06_17_49_40.log (errors)
Error at line 57: SP2-0310: unable to open file "/opt/app/oracle/product/12.1.0/dbhome_2/sqlpatch/19518079/18024100/rollback_files/rdbms/admin/catmmig.sql"

Patch 19877336 apply (pdb PDB$SEED): SUCCESS
logfile: /opt/app/oracle/cfgtoollogs/sqlpatch/19877336/18313828/19877336_apply_ENT12C_PDBSEED_2015Feb06_17_49_40.log (no errors)
Patch 19769480 apply (pdb PDB$SEED): SUCCESS
logfile: /opt/app/oracle/cfgtoollogs/sqlpatch/19769480/18350083/19769480_apply_ENT12C_PDBSEED_2015Feb06_17_51_28.log (no errors)
Patch 19877336 apply (pdb PDB12C): SUCCESS
logfile: /opt/app/oracle/cfgtoollogs/sqlpatch/19877336/18313828/19877336_apply_ENT12C_PDB12C_2015Feb06_17_51_33.log (no errors)
Patch 19769480 apply (pdb PDB12C): SUCCESS
logfile: /opt/app/oracle/cfgtoollogs/sqlpatch/19769480/18350083/19769480_apply_ENT12C_PDB12C_2015Feb06_17_53_00.log (no errors)
Patch 19877336 apply (pdb PDB12CDI): SUCCESS
logfile: /opt/app/oracle/cfgtoollogs/sqlpatch/19877336/18313828/19877336_apply_ENT12C_PDB12CDI_2015Feb06_17_51_33.log (no errors)
Patch 19769480 apply (pdb PDB12CDI): SUCCESS
logfile: /opt/app/oracle/cfgtoollogs/sqlpatch/19769480/18350083/19769480_apply_ENT12C_PDB12CDI_2015Feb06_17_53_00.log (no errors)
The rollback of patch 19518079 fails with SP2-0310: unable to open file catmmig.sql.



The log files show
IGNORABLE ERRORS: NONE

INSTALL_FILE
--------------------------------------------------------------------------------
?/sqlpatch/19518079/18024100/rollback_files/rdbms/admin/catmmig.sql

SP2-0310: unable to open file "/opt/app/oracle/product/12.1.0/dbhome_2/sqlpatch/19518079/18024100/rollback_files/rdbms/admin/catmmig.sql"

PL/SQL procedure successfully completed.
The metalink notes listed at the end of the post give a workaround for this, which involves manually adding an entry to registry$history to record the upgrade; a sketch is shown below.
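A minimal sketch of such an entry, assuming the registry$history column layout (action_time, action, namespace, version, id, comments); use the exact statement given in the MOS notes.
SQL> insert into registry$history (action_time, action, namespace, version, id, comments)
     values (systimestamp, 'UPGRADE', 'SERVER', '12.1.0.2.0', 0, 'Upgraded from 12.1.0.1.0');
SQL> commit;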

Useful metalink notes
Oracle Quarterly Database Patch 12.1.0.2.4 Known Issues [ID 1942931.1]
Oracle Quarterly Database Patch 12.1.0.2.1 Known Issues [ID 1930503.1]
Bug 20421900 - catmmig.sql is missing from the 12.1.0.2.2/3/4 Engineered Systems / DB In-Memory Bundle Patch (DBBP) [ID 20421900.8]

DBMS_AW_EXP: SYS.AW$EXPRESS: OLAP not enabled After Upgrading to 11.2.0.4 Standard Edition

"DBMS_AW_EXP: SYS.AW$EXPRESS: OLAP not enabled" message could be seen while using export data dump and import data pump after upgrading a 11.2.0.3 standard edition database to 11.2.0.4 standard edition.
expdp system/testupg directory=DATA_PUMP_DIR dumpfile=sct.dmp logfile=sct.log schemas=scott

Connected to: Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** directory=DATA_PUMP_DIR dumpfile=sct.dmp logfile=sct.log schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
>>> DBMS_AW_EXP: SYS.AW$EXPRESS: OLAP not enabled
>>> DBMS_AW_EXP: SYS.AW$AWMD: OLAP not enabled
>>> DBMS_AW_EXP: SYS.AW$AWCREATE: OLAP not enabled
>>> DBMS_AW_EXP: SYS.AW$AWCREATE10G: OLAP not enabled
>>> DBMS_AW_EXP: SYS.AW$AWXML: OLAP not enabled
>>> DBMS_AW_EXP: SYS.AW$AWREPORT: OLAP not enabled
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
>>> DBMS_AW_EXP: SYS.AW$EXPRESS: OLAP not enabled
>>> DBMS_AW_EXP: SYS.AW$AWMD: OLAP not enabled
>>> DBMS_AW_EXP: SYS.AW$AWCREATE: OLAP not enabled
>>> DBMS_AW_EXP: SYS.AW$AWCREATE10G: OLAP not enabled
>>> DBMS_AW_EXP: SYS.AW$AWXML: OLAP not enabled
>>> DBMS_AW_EXP: SYS.AW$AWREPORT: OLAP not enabled
. . exported "SCOTT"."DEPT" 5.929 KB 4 rows
. . exported "SCOTT"."EMP" 8.562 KB 14 rows
. . exported "SCOTT"."SALGRADE" 5.859 KB 5 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/opt/app/oracle/admin/testupg/dpdump/sct.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Wed Jan 28 13:00:10 2015 elapsed 0 00:00:09
The main reason for this is that the OLAP components are not enabled for SE but could get installed if a database template is used (e.g. the online transaction processing template) and become invalid after the upgrade. A similar problem was observed earlier with regard to the dbms_cube_exp.schema_info_imp_beg error.
According to MOS note 1638799.1 these messages are intentionally output for 11.2.0.4 SE1 and SE and can be safely ignored. But if it's desired not to have the messages output, the data pump callout entry for the package could be deleted as follows (take a full database backup before deleting):
delete from sys.exppkgact$ where package = 'DBMS_AW_EXP' and schema= 'SYS';
commit;
@?/rdbms/admin/utlrp
This is mentioned in other MOS notes (1921158.1, 726637.1, 1675617.1), though they are not directly related to this warning message. After the delete, expdp and impdp no longer produce any warning messages.
expdp system/testupg directory=DATA_PUMP_DIR dumpfile=sct.dmp logfile=sct.log schemas=scott

Export: Release 11.2.0.4.0 - Production on Wed Jan 28 13:02:55 2015
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** directory=DATA_PUMP_DIR dumpfile=sct.dmp logfile=sct.log schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
. . exported "SCOTT"."DEPT" 5.929 KB 4 rows
. . exported "SCOTT"."EMP" 8.562 KB 14 rows
. . exported "SCOTT"."SALGRADE" 5.859 KB 5 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/opt/app/oracle/admin/testupg/dpdump/sct.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Wed Jan 28 13:03:01 2015 elapsed 0 00:00:06

impdp system/testupg directory=DATA_PUMP_DIR dumpfile=sct.dmp logfile=sct.log schemas=scott

Import: Release 11.2.0.4.0 - Production on Wed Jan 28 13:05:31 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01": system/******** directory=DATA_PUMP_DIR dumpfile=sct.dmp logfile=sct.log schemas=scott
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"SCOTT" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT"."DEPT" 5.929 KB 4 rows
. . imported "SCOTT"."EMP" 8.562 KB 14 rows
. . imported "SCOTT"."SALGRADE" 5.859 KB 5 rows
. . imported "SCOTT"."BONUS" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed with 1 error(s) at Wed Jan 28 13:05:33 2015 elapsed 0 00:00:02
At the same time it would also be useful to clear all OLAP related invalid objects. Invalid objects could be identified with
select owner, object_name, object_type from dba_objects where status <>'VALID' order by owner, object_type;
Since the OLAP component scripts are not shipped in a SE home, running the component removal scripts fails
SQL> @?/olap/admin/catnoamd.sql
SP2-0310: unable to open file "/opt/app/oracle/product/11.2.0/dbhome_4/olap/admin/catnoamd.sql"
SQL> @?/olap/admin/olapidrp.plb
SP2-0310: unable to open file "/opt/app/oracle/product/11.2.0/dbhome_4/olap/admin/olapidrp.plb"
SQL> @?/olap/admin/catnoxoq.sql
SP2-0310: unable to open file "/opt/app/oracle/product/11.2.0/dbhome_4/olap/admin/catnoxoq.sql"
SQL> @?/olap/admin/catnoaps.sql
SP2-0310: unable to open file "/opt/app/oracle/product/11.2.0/dbhome_4/olap/admin/catnoaps.sql"
SQL> @?/olap/admin/cwm2drop.sql
SP2-0310: unable to open file "/opt/app/oracle/product/11.2.0/dbhome_4/olap/admin/cwm2drop.sql"
It is possible to copy these files from an EE home and run them to remove the OLAP components, or manually drop the packages/synonyms and finally drop the OLAP user (refer to 1593666.1 and 1900113.1 before dropping the OLAP user). A sketch of copying the scripts is shown below.
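A minimal sketch of copying the removal scripts, assuming an EE home of the same version is available on a host called ee-host (both the host name and paths are illustrative):
scp ee-host:/opt/app/oracle/product/11.2.0/dbhome_ee/olap/admin/*.sql $ORACLE_HOME/olap/admin/
scp ee-host:/opt/app/oracle/product/11.2.0/dbhome_ee/olap/admin/*.plb $ORACLE_HOME/olap/admin/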

Useful metalink notes
Datapump Export (expdp) Raises Warnings Like "DBMS_AW_EXP: SYS.AW$EXPRESS: OLAP not enabled"[ID 1638799.1]
12.1 Export Gives EXP-8 ORA-29280 EXP-85 ORA-06512 "SYS.UTL_FILE""SYS.DBMS_AW_EXP"[ID 1921158.1]
Traditional Export (EXP) and DataPump (expdp) Fail With ORA-4063: Package Body SYS.DBMS_AW_EXP Has Errors [ID 726637.1]
DBMS_AW_EXP: Ignoring APPS.ODPCODE During Schema Data Pump Export [ID 1675617.1]
How To Find Out If OLAP Is Being Used [ID 739032.1]
How To Remove or De-activate OLAP After Migrating From 9i To 10g or 11g [ID 467643.1]
How To Remove The OLAP Option In 10g And 11g [ID 332351.1]
CATNOAMD.SQL DOES NOT DROP OLAPSYS USER IN 11.2.0.4 [ID 1900113.1]
Invalid OLAPSYS Objects After Upgrading TO 12C [ID 1593666.1]
Removing Oracle OLAP from the Database does not Remove All OLAP Objects [ID 1060023.1]
Remove Invalid OLAP Objects From SYS And OLAPSYS Schemas [ID 565773.1]
OWB Repository Upgrade To 11.2.0.4 Does Not Upgrade The Locations [ID 1637271.1]
Upgrading Oracle Warehouse Builder 11.2 - How To Upgrade From OWB 11.2.0.x To OWB 11.2.0.y [ID 1225254.1]

Restore RAC DB Backup as a Single Instance DB

At times a DBA may be required to restore a backup of a RAC DB as a single instance DB, for example when the RAC DB is a production system and a copy of it is needed for development. This could be achieved with RAC to single instance duplication as well. However, this post shows the steps to restore a RAC DB on ASM as a single instance DB which uses a local file system.
1. Create a backup of the RAC DB including the control files. In this case the backups are created in the local file system.
RMAN> backup database format '/home/oracle/backup/bakp%U' plus archivelog format '/home/oracle/backup/arch%U' delete all input;
RMAN> backup current controlfile format '/home/oracle/backup/ctl%U';
2. Create a pfile from the RAC DB spfile, for example as sketched immediately below. The pfile content that follows shows the RAC specific and instance specific parameters.
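For example (the pfile location is illustrative):
SQL> create pfile='/home/oracle/rac11g2pfile.ora' from spfile;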
*.audit_file_dest='/opt/app/oracle/admin/rac11g2/adump'
*.audit_trail='NONE'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='+DATA/rac11g2/controlfile/current.260.732796395','+FLASH/rac11g2/controlfile/current.256.732796395'#Restore Controlfile
*.db_32k_cache_size=67108864
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain='domain.net'
*.db_name='rac11g2'
*.db_recovery_file_dest='+FLASH'
*.db_recovery_file_dest_size=9437184000
*.diagnostic_dest='/opt/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=rac11g2XDB)'
rac11g21.instance_number=1
rac11g22.instance_number=2

*.java_jit_enabled=TRUE
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
*.pga_aggregate_target=209715200
*.processes=150
*.remote_listener='(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCPS)(HOST=192.168.0.91)(PORT=1523))) '
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=633339904
rac11g21.thread=1
rac11g22.thread=2
rac11g21.undo_tablespace='UNDOTBS1'
rac11g22.undo_tablespace='UNDOTBS2'
3. Edit the pfile by removing the RAC and instance specific parameters; in particular cluster_database=true must be set to false. The output below gives the edited pfile. The new single instance will use OMF, and db_create_file_dest and db_recovery_file_dest have been replaced with file system directories in place of the ASM disk groups used by the RAC DB. Also all instance specific parameters have been removed.
more rac11g2pfile.ora
*.audit_file_dest='/opt/app/oracle/admin/rac11g2/adump'
*.audit_trail='NONE'
*.cluster_database=false
*.compatible='11.2.0.0.0'
*.db_32k_cache_size=67108864
*.db_block_size=8192
*.db_create_file_dest='/data/oradata'
*.db_domain='domain.net'
*.db_name='rac11g2'
*.db_recovery_file_dest='/data/flash_recovery'
*.db_recovery_file_dest_size=9437184000
*.diagnostic_dest='/opt/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=rac11g2XDB)'
*.java_jit_enabled=TRUE
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
*.pga_aggregate_target=209715200
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=633339904
*.undo_tablespace='UNDOTBS1'
4. Copy the backup and the pfile to the new host where the single instance DB will be created.

5. Create the audit dump directory
 mkdir -p /opt/app/oracle/admin/rac11g2/adump
Set the ORACLE_SID to the RAC DB name (not the instance SID; in this case the RAC DB is called rac11g2) and start the DB in nomount mode.
export ORACLE_SID=rac11g2
SQL> startup nomount pfile='rac11g2pfile.ora';
6. Use rman to restore the control file from the location in the new host.
 rman target /

connected to target database: RAC11G2 (not mounted)

RMAN> restore controlfile from '/home/oracle/backups/ctl5upvj3oh_1_1';

Starting restore at 18-FEB-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=63 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/data/oradata/RAC11G2/controlfile/o1_mf_bg8ykm51_.ctl
output file name=/data/flash_recovery/RAC11G2/controlfile/o1_mf_bg8ykmdt_.ctl
Finished restore at 18-FEB-15
7. Mount the database and catalog the backups copied over earlier.
RMAN> alter database mount;

RMAN> catalog start with '/home/oracle/backups';

Starting implicit crosscheck backup at 18-FEB-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=63 device type=DISK
Crosschecked 20 objects
Finished implicit crosscheck backup at 18-FEB-15

Starting implicit crosscheck copy at 18-FEB-15
using channel ORA_DISK_1
Crosschecked 2 objects
Finished implicit crosscheck copy at 18-FEB-15

searching for all files in the recovery area
cataloging files...
no files cataloged

searching for all files that match the pattern /home/oracle/backups

List of Files Unknown to the Database
=====================================
File Name: /home/oracle/backups/arch5spvj3m6_1_1
File Name: /home/oracle/backups/rac11g2pfile.ora
File Name: /home/oracle/backups/ctl5upvj3oh_1_1
File Name: /home/oracle/backups/arch5npvj3f0_1_1
File Name: /home/oracle/backups/arch5opvj3h1_1_1
File Name: /home/oracle/backups/bakp5qpvj3k0_1_1
File Name: /home/oracle/backups/bakp5rpvj3m2_1_1
File Name: /home/oracle/backups/arch5ppvj3ij_1_1

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /home/oracle/backups/arch5spvj3m6_1_1
File Name: /home/oracle/backups/ctl5upvj3oh_1_1
File Name: /home/oracle/backups/arch5npvj3f0_1_1
File Name: /home/oracle/backups/arch5opvj3h1_1_1
File Name: /home/oracle/backups/bakp5qpvj3k0_1_1
File Name: /home/oracle/backups/bakp5rpvj3m2_1_1
File Name: /home/oracle/backups/arch5ppvj3ij_1_1


8. Restore the database from the backups, switch the datafiles to the new file location and recover the database to the last archivelog available in the backups. Since OMF is used the newname for the database is set as "to new".
run {
set newname for database to new;
restore database;
switch datafile all;
recover database;
}


RMAN> run {
2> set newname for database to new;
3> restore database;
4> switch datafile all;
5> recover database;
6> }

executing command: SET NEWNAME

Starting restore at 18-FEB-15
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /data/oradata/RAC11G2/datafile/o1_mf_system_%u_.dbf
channel ORA_DISK_1: restoring datafile 00002 to /data/oradata/RAC11G2/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_DISK_1: restoring datafile 00003 to /data/oradata/RAC11G2/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_DISK_1: restoring datafile 00004 to /data/oradata/RAC11G2/datafile/o1_mf_users_%u_.dbf
channel ORA_DISK_1: restoring datafile 00005 to /data/oradata/RAC11G2/datafile/o1_mf_undotbs2_%u_.dbf
channel ORA_DISK_1: restoring datafile 00006 to /data/oradata/RAC11G2/datafile/o1_mf_test_%u_.dbf
channel ORA_DISK_1: restoring datafile 00007 to /data/oradata/RAC11G2/datafile/o1_mf_test_%u_.dbf
channel ORA_DISK_1: restoring datafile 00009 to /data/oradata/RAC11G2/datafile/o1_mf_travelbo_%u_.dbf
channel ORA_DISK_1: restoring datafile 00010 to /data/oradata/RAC11G2/datafile/o1_mf_tbxindex_%u_.dbf
channel ORA_DISK_1: restoring datafile 00011 to /data/oradata/RAC11G2/datafile/o1_mf_tbxlobs_%u_.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/backups/bakp5qpvj3k0_1_1
channel ORA_DISK_1: piece handle=/home/oracle/backups/bakp5qpvj3k0_1_1 tag=TAG20150218T121600
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00008 to /data/oradata/RAC11G2/datafile/o1_mf_tbx32ktb_%u_.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/backups/bakp5rpvj3m2_1_1
channel ORA_DISK_1: piece handle=/home/oracle/backups/bakp5rpvj3m2_1_1 tag=TAG20150218T121600
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 18-FEB-15

datafile 1 switched to datafile copy
input datafile copy RECID=49 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_system_bg8z9rm9_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=50 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_sysaux_bg8z9rlo_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=51 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_undotbs1_bg8z9rnh_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=52 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_users_bg8z9rp1_.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=53 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_undotbs2_bg8z9ro9_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=54 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_test_bg8z9rpp_.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=55 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_test_bg8z9rq9_.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=56 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_tbx32ktb_bg8zbvny_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=57 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_travelbo_bg8z9rr1_.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=58 STAMP=871991658 file name=/data/oradata/RAC11G2/datafile/o1_mf_tbxindex_bg8z9s7h_.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=59 STAMP=871991659 file name=/data/oradata/RAC11G2/datafile/o1_mf_tbxlobs_bg8z9s9k_.dbf

Starting recover at 18-FEB-15
using channel ORA_DISK_1

starting media recovery

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=216
channel ORA_DISK_1: restoring archived log
archived log thread=2 sequence=180
channel ORA_DISK_1: reading from backup piece /home/oracle/backups/arch5spvj3m6_1_1
channel ORA_DISK_1: piece handle=/home/oracle/backups/arch5spvj3m6_1_1 tag=TAG20150218T121709
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/data/flash_recovery/RAC11G2/archivelog/2015_02_18/o1_mf_1_216_bg8zccx3_.arc thread=1 sequence=216
archived log file name=/data/flash_recovery/RAC11G2/archivelog/2015_02_18/o1_mf_2_180_bg8zccy6_.arc thread=2 sequence=180
channel default: deleting archived log(s)
archived log file name=/data/flash_recovery/RAC11G2/archivelog/2015_02_18/o1_mf_2_180_bg8zccy6_.arc RECID=1606 STAMP=871991660
unable to find archived log
archived log thread=2 sequence=181
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 02/18/2015 11:54:21
RMAN-06054: media recovery requesting unknown archived log for thread 2 with sequence 181 and starting SCN of 50748909
9. The RMAN-06054 at the end of the recovery is expected, since all archived logs available in the backups have been applied. Open the database with resetlogs.
RMAN> alter database open resetlogs;
It is normal at this stage to observe the following messages in the alert log
Starting background process ASMB
Wed Feb 18 15:56:57 2015
ASMB started with pid=58, OS id=26760
WARNING: failed to start ASMB (connection failed) state=0x1 sid=''
WARNING: ASMB exiting with error
This is due to some old references to files in the ASM but should not affect the functioning of the database.

10. Create a spfile from memory and restart the database using the spfile, as sketched below. At this stage the ASMB messages observed earlier should no longer occur.
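A minimal sketch of this step (by default the spfile is created in $ORACLE_HOME/dbs):
SQL> create spfile from memory;
SQL> shutdown immediate;
SQL> startup;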

11. Clean up the additional redo threads that came as part of the RAC. Since the RAC DB had two instances, the new single instance DB will have information on both threads.
SQL> select THREAD#, STATUS, ENABLED from v$thread;

THREAD# STATUS ENABLED
---------- ------ --------
1 OPEN PUBLIC
2 CLOSED PUBLIC

SQL> select group# from v$log where THREAD#=2;

GROUP#
----------
3
4

SQL> alter database disable thread 2;
Database altered.

SQL> alter database clear unarchived logfile group 3;
Database altered.

SQL> alter database clear unarchived logfile group 4;
Database altered.

SQL> alter database drop logfile group 3;
Database altered.

SQL> alter database drop logfile group 4;
Database altered.

SQL> select group# from v$log where THREAD#=2;
no rows selected
12. Drop undo tablespaces that are not used as part of the single instance. In this case the RAC DB had two undo tablespaces and one was chosen as the undo tablespace for the single instance. The other undo tablespace is dropped.
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1

SQL>select tablespace_name from dba_tablespaces where contents='UNDO';

TABLESPACE_NAME
------------------------------
UNDOTBS1
UNDOTBS2

SQL>drop tablespace UNDOTBS2 including contents and datafiles;
Tablespace dropped.

SQL>select tablespace_name from dba_tablespaces where contents='UNDO';

TABLESPACE_NAME
------------------------------
UNDOTBS1
13. Since OMF was used, the temp file for the temporary tablespace was automatically created when the database was restored. If not, create a temp file and assign it to the temporary tablespace, or create a new default temporary tablespace (a sketch follows the query output below).
SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';

PROPERTY_VALUE
------------------------------
TEMP

SQL> select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
/data/oradata/RAC11G2/datafile/o1_mf_temp_bg8zfgxv_.tmp
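A minimal sketch of adding a temp file manually, only needed if one was not created automatically (the path and size are illustrative):
SQL> alter tablespace temp add tempfile '/data/oradata/RAC11G2/temp01.dbf' size 1g autoextend on;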
14. Since the restored DB is non-RAC the registry shows RAC option as invalid. Run dbms_registry to remove the RAC option from the registry.
Select comp_name,status,version from dba_registry;

Oracle Real Application Clusters INVALID 11.2.0.3.0

SQL> exec dbms_registry.removed('RAC');

Oracle Real Application Clusters REMOVED 11.2.0.3.0
This concludes restoring a RAC DB backup as a single instance DB.

Useful metalink notes
RAC Option Invalid After Migration [ID 312071.1]
HowTo Restore RMAN Disk backups of RAC Database to Single Instance On Another Node [ID 415579.1]

Downgrade Grid Infrastructure from 11.2.0.4 to 11.1.0.7

This post shows the steps for downgrading clusterware from 11.2.0.4 to 11.1.0.7. In this case the upgrade from 11.1.0.7 to 11.2.0.4 was successful, only the clusterware was upgraded and the database remains on 11.1.0.7. If there were nodes that failed the upgrade, follow the MOS notes for the additional steps needed in such a situation. There is a similar post for downgrading from 11.2.0.4 to 11.2.0.3. However, this downgrade goes from 11.2 to a pre-11.2 version and has some additional steps and pitfalls to look out for. The cluster is a two node cluster and had a separate home for the ASM instance.
The upgraded system had the following resource attributes changed
crsctl modify resource "ora.FLASH.dg" -attr "AUTO_START=always"
crsctl modify resource "ora.DATA.dg" -attr "AUTO_START=always"
crsctl modify resource ora.racse11g1.db -attr "ACTION_SCRIPT=/opt/app/11.2.0/grid/bin/racgwrap"
crsctl modify resource ora.racse11g1.db -attr "AUTO_START=always"

1. Verify clusterware upgrade is clean and active version is 11.2.0.4
[oracle@rac2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rac2] is [11.2.0.4.0]

[oracle@rac2 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.4.0]

[oracle@rac2 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [11.2.0.4.0]. The cluster upgrade state is [NORMAL].
2. Identify the "OCR-node". As mentioned in the earlier post the OCR-node is the node where the backup of the lower version OCR was taken during the upgrade. In this setup this was done on node "rac1".
[oracle@rac1 cdata]$ cd /opt/app/11.2.0/grid/cdata/
[oracle@rac1 cdata]$ ls -l
total 3132
drwxrwxr-x 2 oracle oinstall 4096 Mar 5 09:29 cg_11g_cluster
drwxr-xr-x 2 oracle oinstall 4096 Mar 4 16:47 localhost
-rw------- 1 root root 88875 Mar 4 17:11 ocr11.1.0.7.0
drwxr-xr-x 2 oracle oinstall 4096 Mar 4 17:11 rac1
-rw------- 1 root oinstall 272756736 Mar 5 09:31 rac1.olr
3. MOS note 1364946.1 says to run rootcrs.pl with the downgrade option on all but the OCR-node (i.e. don't run this on the OCR-node). But this results in an error
[root@rac2 oracle]# /opt/app/11.2.0/grid/crs/install/rootcrs.pl -downgrade -force
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
One or more options required but missing: -oldcrshome -version
It could be that the MOS note is not updated to reflect the changes in 11.2.0.4. In order to run the command, specify the old crs home and the old crs version in the 5 number format. Since rac1 is the OCR-node, this command is run on the other remaining node, rac2.
[root@rac2 oracle]# /opt/app/11.2.0/grid/crs/install/rootcrs.pl -downgrade -force -oldcrshome /opt/crs/oracle/product/11.1.0/crs -version 11.1.0.7.0
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac2'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac2' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.racse11g1.racse11g2.inst' on 'rac2'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac2'
CRS-2677: Stop of 'ora.cvu' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.rac2.vip' on 'rac2'
CRS-2676: Start of 'ora.cvu' on 'rac1' succeeded
CRS-2677: Stop of 'ora.rac2.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.rac2.vip' on 'rac1'
CRS-2676: Start of 'ora.rac2.vip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.racse11g1.racse11g2.inst' on 'rac2' succeeded
CRS-2677: Stop of 'ora.FLASH.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'rac1'
CRS-2676: Start of 'ora.oc4j' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac2'
CRS-2677: Stop of 'ora.ons' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac2'
CRS-2677: Stop of 'ora.net1.network' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully downgraded Oracle clusterware stack on this node
4. On the OCR-node run the same command with the addition of the lastnode option.
[root@rac1 oracle]# /opt/app/11.2.0/grid/crs/install/rootcrs.pl -downgrade -force -lastnode -oldcrshome /opt/crs/oracle/product/11.1.0/crs -version 11.1.0.7.0
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac1'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac1' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.rac2.vip' on 'rac1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac1'
CRS-2673: Attempting to stop 'ora.racse11g1.db' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.racse11g1.racse11g1.inst' on 'rac1'
CRS-2677: Stop of 'ora.cvu' on 'rac1' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2677: Stop of 'ora.rac2.vip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
CRS-2677: Stop of 'ora.racse11g1.db' on 'rac1' succeeded
CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.racse11g1.racse11g1.inst' on 'rac1' succeeded
CRS-2677: Stop of 'ora.FLASH.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.oc4j' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac1'
CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'
CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully downgraded OCR to 11.1.0.7.0
Run root.sh from the old crshome on all the cluster nodes one at a time to start the Clusterware

Successful deletion of voting disk /dev/sdh1.
Now formatting voting disk: /dev/sdh1.
Successful addition of voting disk /dev/sdh1.
Successful deletion of voting disk /dev/sdf1.
Now formatting voting disk: /dev/sdf1.
Successful addition of voting disk /dev/sdf1.
Successful deletion of voting disk /dev/sdg1.
Now formatting voting disk: /dev/sdg1.
Successful addition of voting disk /dev/sdg1.
5. Do not run the root.sh just yet. Edit the oratab to reflect the Oracle home the ASM instance would run out of. Since before the upgrade ASM ran out of a separate home (not the same Oracle home the DB ran out of), this entry is added to the oratab on both nodes.
On rac1
+ASM1:/opt/app/oracle/product/11.1.0/asm_1:N
On rac2
+ASM2:/opt/app/oracle/product/11.1.0/asm_1:N
6. Clear the gpnp profile directories
rm -rf /opt/app/11.2.0/grid/gpnp/*
7. Make sure any changes subsequently made to the cluster are reflected in the root* scripts. In this cluster an OCR mirror was added after the cluster was created (i.e. after the original execution of root.sh). As such this OCR mirror location is missing from the root script but present in the ocr.loc file.
cat /etc/oracle/ocr.loc
ocrconfig_loc=/dev/sdb1
ocrmirrorconfig_loc=/dev/sde1 <-- added later
Running the root.sh without correcting this resulted in the following error
[root@rac1 crs]# /opt/crs/oracle/product/11.1.0/crs/root.sh
Checking to see if Oracle CRS stack is already configured

Current Oracle Cluster Registry mirror location '/dev/sde1' in '/etc/oracle/ocr.loc' and '' does not match
Update either '/etc/oracle/ocr.loc' to use '' or variable CRS_OCR_LOCATIONS in rootconfig.sh with '/dev/sde1' then rerun rootconfig.sh
To fix this edit the (11.1) $CRS_HOME/install/rootconfig and add the ocr mirror location
CRS_OCR_LOCATIONS=/dev/sdb1,/dev/sde1
After the change the root.sh runs without any issue.
[root@rac1 crs]# /opt/crs/oracle/product/11.1.0/crs/root.sh
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: rac1 rac1-pvt rac1
node 2: rac2 rac2-pvt rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
rac1
Cluster Synchronization Services is inactive on these nodes.
rac2
Local node checking complete. Run root.sh on remaining nodes to start CRS daemons.
At the end of this execution the ASM and database instance will be up and running on this node. Run the root.sh on the second node
[root@rac2 crs]# /opt/crs/oracle/product/11.1.0/crs/root.sh
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: rac1 rac1-pvt rac1
node 2: rac2 rac2-pvt rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
rac1
rac2

Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
At the end of this script execution it was found that the ASM instance was not up and running. Trying to start it manually resulted in the following error
[oracle@rac2 bin]$ srvctl start asm -n rac2
PRKS-1009 : Failed to start ASM instance "+ASM2" on node "rac2", [PRKS-1009 : Failed to start ASM instance "+ASM2" on node "rac2", [rac2:ora.rac2.ASM2.asm:
rac2:ora.rac2.ASM2.asm:SQL*Plus: Release 11.1.0.7.0 - Production on Thu Mar 5 12:16:33 2015
...
rac2:ora.rac2.ASM2.asm:SQL> ORA-00304: requested INSTANCE_NUMBER is busy

...
CRS-0215: Could not start resource 'ora.rac2.ASM2.asm'.]]
[PRKS-1009 : Failed to start ASM instance "+ASM2" on node "rac2", [rac2:ora.rac2.ASM2.asm:
rac2:ora.rac2.ASM2.asm:SQL*Plus: Release 11.1.0.7.0 - Production on Thu Mar 5 12:16:33 2015
...
rac2:ora.rac2.ASM2.asm:SQL> ORA-00304: requested INSTANCE_NUMBER is busy
rac2:ora.rac2.ASM2.asm:SQL> Disconnected
rac2:ora.rac2.ASM2.asm:
CRS-0215: Could not start resource 'ora.rac2.ASM2.asm'.]]
This is because the ASM spfile was changed during the upgrade and retained those changes even after the downgrade. The following is a pfile created from the spfile before the upgrade (11.1.0.7):
+ASM1.__oracle_base='/opt/app/oracle'#ORACLE_BASE set from environment
+ASM2.__oracle_base='/opt/app/oracle'#ORACLE_BASE set from environment
+ASM1.asm_diskgroups='DATA','FLASH'
+ASM2.asm_diskgroups='DATA','FLASH'
*.cluster_database=true
*.diagnostic_dest='/opt/app/oracle'
+ASM2.instance_number=2
+ASM1.instance_number=1

*.instance_type='asm'
*.large_pool_size=12M
*.asm_diskstring='ORCL:*'
Below is the pfile created after the upgrade (the pfile created after the downgrade had the same content):
+ASM1.__oracle_base='/opt/app/oracle'#ORACLE_BASE set from environment
+ASM2.__oracle_base='/opt/app/oracle'#ORACLE_BASE set from environment
*.asm_diskgroups='DATA','FLASH'
*.asm_diskstring='ORCL:*'
*.asm_power_limit=1
*.diagnostic_dest='/opt/app/oracle'
*.instance_type='asm'
*.large_pool_size=16777216
*.memory_target=1627389952
*.remote_login_passwordfile='EXCLUSIVE'
Comparing the pfile entries it can be seen that after the upgrade of ASM the instance number and cluster_database entries were lost. As a result, after the downgrade only one ASM instance could be started. To fix this, shut down the database and ASM instances on the node where ASM is running, start the ASM instance in nomount mode with the 11.1.0.7 pfile created before the upgrade, and recreate the spfile (a sketch of the full sequence follows the create spfile command below).
SQL> create spfile='/dev/sdb3' from pfile='/home/oracle/asmpfile.ora';
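A minimal sketch of the full sequence on the node running ASM (the pfile path is illustrative; node, database and instance names are those of this cluster):
SQL> startup nomount pfile='/home/oracle/asmpfile.ora';
SQL> create spfile='/dev/sdb3' from pfile='/home/oracle/asmpfile.ora';
SQL> shutdown immediate;
srvctl start asm -n rac1
srvctl start asm -n rac2
srvctl start database -d racse11g1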


8. After this the ASM instances and DB instances could be started on all nodes. However, the listener fails to start on both nodes (rac1 and rac2) and trying to start it manually results in the following error
[oracle@rac1 admin]$ srvctl start listener -n rac1
rac1:ora.rac1.LISTENER_RAC1.lsnr:TNSLSNR for Linux: Version 11.1.0.7.0 - Production
rac1:ora.rac1.LISTENER_RAC1.lsnr:System parameter file is /opt/app/oracle/product/11.1.0/asm_1/network/admin/listener.ora
rac1:ora.rac1.LISTENER_RAC1.lsnr:Log messages written to /opt/app/oracle/diag/tnslsnr/rac1/listener_rac1/alert/log.xml
rac1:ora.rac1.LISTENER_RAC1.lsnr:TNS-01151: Missing listener name, LISTENER_RAC1, in LISTENER.ORA
rac1:ora.rac1.LISTENER_RAC1.lsnr:Listener failed to start. See the error message(s) above...
rac1:ora.rac1.LISTENER_RAC1.lsnr:Connecting to (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.85)(PORT=1521)))
rac1:ora.rac1.LISTENER_RAC1.lsnr:TNS-12535: TNS:operation timed out
rac1:ora.rac1.LISTENER_RAC1.lsnr: TNS-12560: TNS:protocol adapter error
rac1:ora.rac1.LISTENER_RAC1.lsnr: TNS-00505: Operation timed out
rac1:ora.rac1.LISTENER_RAC1.lsnr: Linux Error: 110: Connection timed out
CRS-1006: No more members to consider
CRS-0215: Could not start resource 'ora.rac1.LISTENER_RAC1.lsnr'
The reason for this is that changes made to listener.ora during the upgrade are not rolled back during the downgrade. When upgraded to 11.2.0.4 the listener resource is named "ora.LISTENER.lsnr". However, on 11.1 the listeners have node specific naming, "ora.rac1.LISTENER_RAC1.lsnr" and "ora.rac2.LISTENER_RAC2.lsnr". The listener.ora file created during the downgrade is missing these node specific listeners. Add the node specific listener entry to the listener.ora file in the ASM_HOME (only the rac1 entry is shown below; a similar entry with the correct VIP and IP names must be added on rac2 as well).
LISTENER_RAC1 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521)(IP = FIRST))
(ADDRESS = (PROTOCOL = TCPS)(HOST = rac1-vip)(PORT = 1523)(IP = FIRST))
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.85)(PORT = 1521)(IP = FIRST))
)
)
After this it is possible to start the listener and all resources will be online
[oracle@rac1 admin]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
ora....11g1.db application ONLINE ONLINE rac2
ora....g1.inst application ONLINE ONLINE rac1
ora....g2.inst application ONLINE ONLINE rac2
9. There is no need to change the action script entry as this would have already been changed to refer to the 11.1 crs home.
crs_stat -p
NAME=ora.racse11g1.db
TYPE=application
ACTION_SCRIPT=/opt/crs/oracle/product/11.1.0/crs/bin/racgwrap
10. Update the inventory information, setting crs=true for the 11.1 crs home and crs=false for the 11.2 GI home.
$CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/crs/oracle/product/11.1.0/crs CRS=true
$CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/app/11.2.0/grid CRS=false
After these commands are run check the inventory.xml to see 11.1 has crs=true
<HOME NAME="clusterware_11g" LOC="/opt/crs/oracle/product/11.1.0/crs" TYPE="O" IDX="1"CRS="true">
<NODE_LIST>
<NODE NAME="rac1"/>
<NODE NAME="rac2"/>
</NODE_LIST>
</HOME>

<HOME NAME="Ora11g_gridinfrahome1" LOC="/opt/app/11.2.0/grid" TYPE="O" IDX="4"> <-- 11.2 home
<NODE_LIST>
<NODE NAME="rac1"/>
<NODE NAME="rac2"/>
</NODE_LIST>
</HOME>
11. Check the crs version information
[oracle@rac1 crs]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.1.0.7.0]

[oracle@rac1 crs]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [11.1.0.7.0]

[oracle@rac1 admin]$ crsctl query crs releaseversion
11.1.0.7.0
12. Check ocr integrity and manually backup the ocr
[root@rac1 admin]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 296940
Used space (kbytes) : 3916
Available space (kbytes) : 293024
ID : 1749862721
Device/File Name : /dev/sdb1
Device/File integrity check succeeded
Device/File Name : /dev/sde1
Device/File integrity check succeeded

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@rac1 admin]# ocrconfig -manualbackup
13. Backup the vote disks using dd
crsctl query css votedisk
0. 0 /dev/sdh1
1. 0 /dev/sdf1
2. 0 /dev/sdg1
Located 3 voting disk(s).

dd if=/dev/sdh1 of=/home/oracle/votediskbackup bs=8192
34134+1 records in
34134+1 records out
279627264 bytes (280 MB) copied, 0.69875 seconds, 400 MB/s
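The remaining voting disks listed above can be backed up the same way (the output file names are illustrative):
dd if=/dev/sdf1 of=/home/oracle/votediskbackup_sdf1 bs=8192
dd if=/dev/sdg1 of=/home/oracle/votediskbackup_sdg1 bs=8192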
14. As the last step detach the 11.2.0.4 GI Home from the inventory and remove it manually
./runInstaller -detachHome ORACLE_HOME=/opt/app/11.2.0/grid -silent
rm -rf /opt/app/11.2.0/grid # run on all nodes
Related post
Downgrade Grid Infrastructure from 11.2.0.4 to 11.2.0.3

Useful metalink notes
How to Downgrade 11.2.0.2 Grid Infrastructure Cluster to 11.2.0.1 [ 1364230.1]
How to Downgrade 11.2.0.3 Grid Infrastructure Cluster to Lower 11.2 GI or Pre-11.2 CRS [ 1364946.1]
How to Update Inventory to Set/Unset "CRS=true" Flag for Oracle Clusterware Home [ 1053393.1]
Oracle Clusterware (GI or CRS) Related Abbreviations, Acronyms and Procedures [ 1374275.1]

Upgrading Single Instance on ASM from 11.2.0.3 to 11.2.0.4

This post lists points to look out for when upgrading an 11.2.0.3 single instance DB on ASM (with role separation) to 11.2.0.4. It's not a comprehensive upgrade guide; refer to the Oracle documentation for more information.
1. Before installing the new 11.2.0.4 GI, check that the inventory.xml has crs=true against the existing GI home. In some cases crs=true is missing, usually if the GI was installed as software only and later converted to HAS.
<HOME NAME="Ora11g_gridinfrahome1" LOC="/opt/app/oracle/product/11.2.0/grid_1" TYPE="O" IDX="1"/>
If there's no crs=true against the GI home the new version fails to detect the existing clusterware.
If crs=true is missing then update the inventory information by running the following command specifying the existing GI home and crs=true
./runInstaller -updateNodeList ORACLE_HOME=/opt/app/oracle/product/11.2.0/grid_1 CRS=true

<HOME NAME="Ora11g_gridinfrahome1" LOC="/opt/app/oracle/product/11.2.0/grid_1" TYPE="O" IDX="1" CRS="true"/>
Afterwards run the 11.2.0.4 installer and the existing GI will be detected. For more information on this issue refer to MOS notes 1953932.1 and 1117063.1

2. Before the upgrade, HAS shows 11.2.0.3 for both the release version and the software version.
$ crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]
$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [11.2.0.3.0]
GI upgrades are out of place, i.e. the new GI is installed in a location different from the existing GI home (grid_4 in this case).
3. Run the rootupgrade.sh when prompted. This will upgrade the ASM instance.
# /opt/app/oracle/product/11.2.0/grid_4/rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /opt/app/oracle/product/11.2.0/grid_4

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/oracle/product/11.2.0/grid_4/crs/install/crsconfig_params
Creating trace directory

ASM Configuration upgraded successfully.

Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node rhel6m1 successfully pinned.
Replacing Clusterware entries in upstart
Replacing Clusterware entries in upstart

rhel6m1 2015/03/12 12:41:21 /opt/app/oracle/product/11.2.0/grid_4/cdata/rhel6m1/backup_20150312_124121.olr

rhel6m1 2015/03/11 17:57:18 /opt/app/oracle/product/11.2.0/grid_1/cdata/rhel6m1/backup_20150311_175718.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
4. Afterwards the software version will reflect the new GI version but the release version will remain the same.
$  crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]
$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [11.2.0.4.0]
5. The ASM instance and listener will now run out of the new GI home.
$ srvctl config asm
ASM home: /opt/app/oracle/product/11.2.0/grid_4
ASM listener: LISTENER
Spfile: +DATA/asm/asmparameterfile/registry.253.874087179
ASM diskgroup discovery string: /dev/sd*

$ srvctl config listener
Name: LISTENER
Home: /opt/app/oracle/product/11.2.0/grid_4
End points: TCP:1521


6. Before installing the database software, check that $ORACLE_BASE/cfgtoollogs has write permission for the oracle user. As this is a setup with role separation, this location must have write permission for the oinstall group so that the oracle user can create directories/files inside cfgtoollogs.
chmod 770 $ORACLE_BASE/cfgtoollogs
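To confirm the result (expect group ownership oinstall and group read/write/execute on the directory):
ls -ld $ORACLE_BASE/cfgtoollogs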
7. Install the database software as an out-of-place upgrade. After the database software is installed, and before running DBUA, make sure the oracle binary in the new Oracle home has the permissions appropriate for a role separated setup. The group ownership should be asmadmin (or the corresponding ASM admin group in use) but after the software install it remains oinstall.
[oracle@rhel6m1 bin]$ ls -l oracle*
-rwsr-s--x. 1 oracle oinstall 226889744 Mar 12 13:29 oracle
-rwxr-x---. 1 oracle oinstall 0 Aug 24 2013 oracleO
Change this to the correct ownership with setasmgidwrap.
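A minimal sketch of the call, assuming the new 11.2.0.4 database home is /opt/app/oracle/product/11.2.0/dbhome_4 (run as the grid user from the GI home, before the database is started):
cd /opt/app/oracle/product/11.2.0/grid_4/bin
./setasmgidwrap o=/opt/app/oracle/product/11.2.0/dbhome_4/bin/oracle
Afterwards the oracle binary shows the ASM admin group ownership: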
[oracle@rhel6m1 bin]$ ls -l oracle*
-rwsr-s--x. 1 oracle asmadmin 226889744 Mar 12 13:29 oracle
-rwxr-x---. 1 oracle oinstall 0 Aug 24 2013 oracleO
8. Once the oracle binary permissions are set, run DBUA to upgrade the database. Verify the post-upgrade status with cluvfy and orachk
cluvfy stage -post hacfg

orachk -u -o post
9. Finally, if the upgrade is satisfactory, increase the compatible parameter on the database and the ASM disk group attributes to the new 11.2.0.4 version. These changes cannot be rolled back. On the database
alter system set compatible='11.2.0.4.0' scope=spfile sid='*';
On the ASM instance
alter diskgroup data set attribute 'compatible.asm'='11.2.0.4';
alter diskgroup flash set attribute 'compatible.asm'='11.2.0.4';
alter diskgroup data set attribute 'compatible.rdbms'='11.2.0.4';
alter diskgroup flash set attribute 'compatible.rdbms'='11.2.0.4';
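The new values can be verified from the ASM instance, for example:
select g.name dg_name, a.name attr_name, a.value
from v$asm_diskgroup g, v$asm_attribute a
where g.group_number = a.group_number and a.name like 'compatible.%';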
The HAS shows 11.2.0.4 as the release version.
$  crsctl query has softwareversion
Oracle High Availability Services version on the local node is [11.2.0.4.0]
$ crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.4.0]
This concludes the upgrade of single instance on ASM (with role separation) from 11.2.0.3 to 11.2.0.4.

There was an incident where, after the GI was upgraded, the ASM instance didn't start and the following error was shown
srvctl start asm
PRCR-1079 : Failed to start resource ora.asm
CRS-5017: The resource action "ora.asm start" encountered the following error:
ORA-00119: invalid specification for system parameter LOCAL_LISTENER
ORA-00132: syntax error or unresolved network name 'LISTENER_+ASM'
. For details refer to "(:CLSN00107:)" in "/opt/app/oracle/product/11.2.0/grid_4/log/rhel6m1/agent/ohasd/oraagent_grid/oraagent_grid.log".
This standalone system was created from a GI home first installed as software only and also had the spfile on the local file system (GI_HOME/dbs). It is not certain whether these contributed in any way to this error being thrown. Looking at the ASM spfile it could be seen that a local_listener entry had been added during the upgrade.
  large_pool_size          = 12M
instance_type = "asm"
remote_login_passwordfile= "EXCLUSIVE"
local_listener = "LISTENER_+ASM"
asm_diskstring = "/dev/sd*"
asm_diskgroups = "FLASH"
asm_diskgroups = "DATA"
asm_power_limit = 1
diagnostic_dest = "/opt/app/oracle"
The listener.ora doesn't have such an entry and, moreover, on 11.2 the local listener entries are updated automatically. Once an ASM pfile was created without the local_listener entry the ASM instance started. A new spfile then had to be created and registered in the ASM configuration.
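A sketch of that workaround, assuming a temporary pfile at /tmp/init+ASM.ora and the DATA disk group for the new spfile (run as sysasm on the ASM instance; paths are examples):
create pfile='/tmp/init+ASM.ora' from spfile;
-- edit /tmp/init+ASM.ora to remove the local_listener line, restart ASM with the edited pfile,
-- then recreate the spfile in a disk group
create spfile='+DATA' from pfile='/tmp/init+ASM.ora';
-- finally register the new spfile location in the ASM configuration (srvctl modify asm)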

Useful metalink notes
Oracle Restart: GI Upgrade From 12.1.0.1 to 12.1.0.2 Fails With INS-40406 [ID 1953932.1]
Oracle Restart ASM 11gR2: INS-40406 Upgrading ASM Instance To Release 11.2.0.1.0 [ID 1117063.1]

Related Posts
Upgrading RAC from 11.2.0.3 to 11.2.0.4 - Grid Infrastructure
Upgrading RAC from 11.2.0.3 to 11.2.0.4 - Database

Upgrading Grid Infrastructure Used for Single Instance from 11.2.0.4 to 12.1.0.2

The system used for this upgrade is the same system featured in an earlier post (upgrading single instance with ASM from 11.2.0.3 to 11.2.0.4). This post lists the main points to look out for when upgrading the grid infrastructure to 12.1.0.2. Only the grid infrastructure is upgraded, as the database software is on standard edition (SE) and at the time of this post there's no SE release of 12.1.0.2.
1. If the new GI installer is unable to detect the existing GI then it could be the same issue mentioned in the previous post. Fix it by updating the inventory information with CRS=true. Also check that $ORACLE_BASE/cfgtoollogs has write permission for the grid user.

2. Run the 12c GI installer and select the upgrade GI and ASM option. The new GI install is done as an out-of-place upgrade.

3. When prompted, run the rootupgrade.sh script.
 /opt/app/oracle/product/12.1.0/grid_1/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /opt/app/oracle/product/12.1.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/oracle/product/12.1.0/grid_1/crs/install/crsconfig_params

ASM Configuration upgraded successfully.

Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node rhel6m1 successfully pinned.
2015/03/12 15:30:10 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

2015/03/12 15:31:14 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

2015/03/12 15:33:01 CLSRSC-482: Running command: 'upgrade model -s 11.2.0.4.0 -d 12.1.0.2.0 -p first'

2015/03/12 15:33:11 CLSRSC-482: Running command: 'upgrade model -s 11.2.0.4.0 -d 12.1.0.2.0 -p last'



rhel6m1 2015/03/12 15:33:12 /opt/app/oracle/product/12.1.0/grid_1/cdata/rhel6m1/backup_20150312_153312.olr 0

rhel6m1 2015/03/12 12:41:21 /opt/app/oracle/product/11.2.0/grid_4/cdata/rhel6m1/backup_20150312_124121.olr -

rhel6m1 2015/03/11 17:57:18 /opt/app/oracle/product/11.2.0/grid_1/cdata/rhel6m1/backup_20150311_175718.olr -
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.std11g2.db' on 'rhel6m1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.std11g2.db' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'rhel6m1'
CRS-2677: Stop of 'ora.FLASH.dg' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rhel6m1'
CRS-2677: Stop of 'ora.DATA.dg' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rhel6m1'
CRS-2677: Stop of 'ora.asm' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'rhel6m1'
CRS-2677: Stop of 'ora.evmd' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rhel6m1'
CRS-2677: Stop of 'ora.cssd' on 'rhel6m1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel6m1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2015/03/12 15:38:33 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
4. Since the database remains at version 11.2.0.4, only the compatible.asm attribute is updated to the 12c version.
alter diskgroup data set attribute 'compatible.asm'='12.1.0.2.0';
alter diskgroup flash set attribute 'compatible.asm'='12.1.0.2.0';
5. This concludes the upgrade of GI in a single instance environment with ASM. The software version will show the 12c version while the release version remains on 11.2.0.4
$  crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.4.0]
$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [12.1.0.2.0]


One thing that was noticed after the upgrade is that querying the ASM configuration did not return the expected output but an error.
$ srvctl config asm
ASM home:
PRCA-1057 : Failed to retrieve the password file location used by ASM asm
PRCR-1097 : Resource attribute not found: PWFILE
Trying to modify the password file location didn't work either
$ srvctl modify asm -pwfile /opt/app/oracle/product/12.1.0/grid_1/dbs/orapw+ASM
PRCR-1097 : Resource attribute not found: CARDINALITY
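One way to see what is missing is to dump the ASM resource profile with standard crsctl syntax (a sketch; run from the 12.1 GI home):
crsctl stat res ora.asm -p | egrep 'PWFILE|SPFILE|CARDINALITY'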
The only way to fix this was to stop all the resources and recreate the ASM configuration.
crsctl stop resource -all
srvctl remove asm -force
srvctl add asm -listener listener -spfile "+DATA/ASM/ASMPARAMETERFILE/REGISTRY.253.874087179" -pwfile /opt/app/oracle/product/12.1.0/grid_1/dbs/orapw+ASM -diskstring "/dev/sd*"

crsctl start resource -all

srvctl config asm
ASM home:
Password file: /opt/app/oracle/product/12.1.0/grid_1/dbs/orapw+ASM
ASM listener: LISTENER
Spfile: +DATA/ASM/ASMPARAMETERFILE/REGISTRY.253.874087179
ASM diskgroup discovery string: /dev/sd*
This is a known issue on 12c. For more information refer to the MOS notes below.

Useful metalink notes
Oracle Restart: WARNING: unknown state for ASM password file location resource, Return Value: 3 [ID 1935891.1]
Alternative Way To Upgrade An ASM Standalone Configuration From Release 11.2.0.<#> to release 12.1.0.<#>. [ID 1964405.1]
Reconfiguring & Recreating The 11gR2/12cR1 Restart/OHAS/SIHA Stack Configuration (Standalone). [ID 1422517.1]

Related Posts
Upgrading RAC from 12.1.0.1 to 12.1.0.2 - Grid Infrastructure
Upgrading Single Instance on ASM from 11.2.0.3 to 11.2.0.4
Upgrading RAC from 11.2.0.3 to 11.2.0.4 - Grid Infrastructure
Upgrading RAC from 11.2.0.3 to 11.2.0.4 - Database

Downgrade Grid Infrastructure from 12.1.0.2 to 11.2.0.4

This post lists the steps for downgrading grid infrastructure from 12.1.0.2 to 11.2.0.4. In this case the grid infrastructure upgrade had been successful on all nodes; if the upgrade to 12.1.0.2 had failed on any of the nodes, follow the Oracle documentation for the additional steps/options required in such cases. Only the GI was upgraded to 12.1.0.2 and the database remained on 11.2.0.4.
[grid@rhel6m1 ~]$  crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[grid@rhel6m1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rhel6m1] is [12.1.0.2.0]

[root@rhel6m1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 9068
Available space (kbytes) : 400500
ID : 2072206343
Device/File Name : +CLUSTER_DG
The environment used here is the same environment earlier upgraded from 11.2.0.3 to 11.2.0.4. As such the OCR-node (the node containing the backup of the OCR created during the upgrade to 12.1.0.2) contains both 11.2.0.3 and 11.2.0.4 OCR backups, but with different permissions.
[grid@rhel6m1 cdata]$ ls -lrt /opt/app/12.1.0/grid2/cdata/
total 266644
drwxr-xr-x. 2 grid oinstall 4096 Mar 26 17:22 localhost
-rw-r--r--. 1 grid oinstall 132453 Mar 26 17:34 ocr11.2.0.3.0
-rw-------. 1 root root 130412 Mar 26 17:44 ocr11.2.0.4.0
drwxr-xr-x. 2 grid oinstall 4096 Mar 26 17:46 rhel6m1
-rw-------. 1 root oinstall 503484416 Mar 26 17:54 rhel6m1.olr
drwxrwxr-x. 2 grid oinstall 4096 Mar 26 17:58 rhel6m-cluster
During the downgrade the higher version backup is detected and restored (no harm even if ocr11.2.0.3.0 is removed before the downgrade).
Also, before the downgrade commands are run, make sure only the 12.1 related binaries (crsctl, srvctl) are in the PATH variable and that ORACLE_HOME points to 12.1. At times the downgrade failed with
Died at /opt/app/12.1.0/grid2/crs/install/crsdeconfig.pm line 2336.
The command '/opt/app/12.1.0/grid2/perl/bin/perl -I/opt/app/12.1.0/grid2/perl/lib -I/opt/app/12.1.0/grid2/crs/install /opt/app/12.1.0/grid2/crs/install/rootcrs.pl -downgrade -lastnode' execution failed
for which the solution, according to MOS note 1917922.1, is to run the downgrade command from a location the grid user has write access to. In this case it was run from the grid user's home directory, so the MOS note solution may not be applicable.
Another time the downgrade on the OCR-node failed with
Using configuration parameter file: /opt/app/12.1.0/grid2/crs/install/crsconfig_params
2015-04-01 10:43:00.047
CLSD: An error occurred while attempting to generate a full name. Logging may not be active for this process
Additional diagnostics: CLSU-00100: operating system function: sclsdgcwd failed with error data: -1
CLSU-00103: error location: sclsdgcwd5
(:CLSD00183:)
MOS notes 1961344.1 and 1911258.1 give causes and solutions for this, but neither was applicable here as the permissions of the directories mentioned in those notes were already set to the correct values. These issues went away after removing any references to 11.2 binaries from the PATH.
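A quick environment sanity check before running the downgrade commands, using the paths from this post (the exact PATH value is an assumption; the point is that no 11.2 GI directories appear in it):
export ORACLE_HOME=/opt/app/12.1.0/grid2
export PATH=$ORACLE_HOME/bin:/usr/local/bin:/usr/bin:/bin
which crsctl srvctl
echo $ORACLE_HOME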
To begin the downgrade, run rootcrs.sh with the downgrade option on all but the OCR-node. In this case rhel6m1 is the OCR-node, so the downgrade is first run on rhel6m2.
[root@rhel6m2 grid]# /opt/app/12.1.0/grid2/crs/install/rootcrs.sh -downgrade
Using configuration parameter file: /opt/app/12.1.0/grid2/crs/install/crsconfig_params
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel6m2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rhel6m2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rhel6m2'
CRS-2673: Attempting to stop 'ora.CLUSTER_DG.dg' on 'rhel6m2'
CRS-2673: Attempting to stop 'ora.MYSCANLISTENER_SCAN1.lsnr' on 'rhel6m2'
CRS-2673: Attempting to stop 'ora.std11g2.asa.domain.net.svc' on 'rhel6m2'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rhel6m2'
CRS-2677: Stop of 'ora.CLUSTER_DG.dg' on 'rhel6m2' succeeded
CRS-2677: Stop of 'ora.std11g2.asa.domain.net.svc' on 'rhel6m2' succeeded
CRS-2673: Attempting to stop 'ora.std11g2.db' on 'rhel6m2'
CRS-2673: Attempting to stop 'ora.MYLISTENER.lsnr' on 'rhel6m2'
CRS-2677: Stop of 'ora.MYSCANLISTENER_SCAN1.lsnr' on 'rhel6m2' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rhel6m2'
CRS-2677: Stop of 'ora.MYLISTENER.lsnr' on 'rhel6m2' succeeded
CRS-2673: Attempting to stop 'ora.rhel6m2.vip' on 'rhel6m2'
CRS-2677: Stop of 'ora.std11g2.db' on 'rhel6m2' succeeded
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'rhel6m2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rhel6m2'
CRS-2677: Stop of 'ora.scan1.vip' on 'rhel6m2' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rhel6m1'
CRS-2677: Stop of 'ora.DATA.dg' on 'rhel6m2' succeeded
CRS-2677: Stop of 'ora.rhel6m2.vip' on 'rhel6m2' succeeded
CRS-2672: Attempting to start 'ora.rhel6m2.vip' on 'rhel6m1'
CRS-2676: Start of 'ora.scan1.vip' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.MYSCANLISTENER_SCAN1.lsnr' on 'rhel6m1'
CRS-2677: Stop of 'ora.FLASH.dg' on 'rhel6m2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rhel6m2'
CRS-2677: Stop of 'ora.asm' on 'rhel6m2' succeeded
CRS-2676: Start of 'ora.rhel6m2.vip' on 'rhel6m1' succeeded
CRS-2676: Start of 'ora.MYSCANLISTENER_SCAN1.lsnr' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rhel6m2' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'rhel6m1'
CRS-2676: Start of 'ora.oc4j' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rhel6m2'
CRS-2677: Stop of 'ora.ons' on 'rhel6m2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rhel6m2'
CRS-2677: Stop of 'ora.net1.network' on 'rhel6m2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rhel6m2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rhel6m2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rhel6m2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rhel6m2'
CRS-2673: Attempting to stop 'ora.storage' on 'rhel6m2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rhel6m2'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rhel6m2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rhel6m2'
CRS-2677: Stop of 'ora.storage' on 'rhel6m2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rhel6m2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rhel6m2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rhel6m2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rhel6m2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rhel6m2' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rhel6m2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rhel6m2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rhel6m2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rhel6m2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rhel6m2'
CRS-2677: Stop of 'ora.cssd' on 'rhel6m2' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rhel6m2'
CRS-2677: Stop of 'ora.crf' on 'rhel6m2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rhel6m2'
CRS-2677: Stop of 'ora.gipcd' on 'rhel6m2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel6m2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2015/04/01 12:07:02 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2015/04/01 12:07:02 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

Successfully downgraded Oracle Clusterware stack on this node


Since this is a two node RAC, the downgrade command is run with the lastnode option on rhel6m1 (the OCR-node). This will remove the GI management repository and downgrade the OCR.
[root@rhel6m1 grid]# /opt/app/12.1.0/grid2/crs/install/rootcrs.sh -downgrade -lastnode
Using configuration parameter file: /opt/app/12.1.0/grid2/crs/install/crsconfig_params
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rhel6m1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.MYLISTENER.lsnr' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.std11g2.asa.domain.net.svc' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.MYSCANLISTENER_SCAN1.lsnr' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.rhel6m2.vip' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.cvu' on 'rhel6m1'
CRS-2677: Stop of 'ora.cvu' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.MYSCANLISTENER_SCAN1.lsnr' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rhel6m1'
CRS-2677: Stop of 'ora.MYLISTENER.lsnr' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.std11g2.asa.domain.net.svc' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.std11g2.db' on 'rhel6m1'
CRS-2677: Stop of 'ora.std11g2.db' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'rhel6m1'
CRS-2677: Stop of 'ora.rhel6m2.vip' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.rhel6m1.vip' on 'rhel6m1'
CRS-2677: Stop of 'ora.DATA.dg' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.rhel6m1.vip' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.mgmtdb' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.CLUSTER_DG.dg' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'rhel6m1'
CRS-2677: Stop of 'ora.FLASH.dg' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.MGMTLSNR' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.CLUSTER_DG.dg' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rhel6m1'
CRS-2677: Stop of 'ora.asm' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rhel6m1'
CRS-2677: Stop of 'ora.ons' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rhel6m1'
CRS-2677: Stop of 'ora.net1.network' on 'rhel6m1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rhel6m1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rhel6m1'
CRS-2677: Stop of 'ora.storage' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.asm' on 'rhel6m1'
CRS-2677: Stop of 'ora.mdnsd' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.crf' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rhel6m1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rhel6m1'
CRS-2677: Stop of 'ora.cssd' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rhel6m1'
CRS-2677: Stop of 'ora.gipcd' on 'rhel6m1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel6m1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

ASM downgrade operation succeeded

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rhel6m1'
CRS-2677: Stop of 'ora.crsd' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.storage' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.crf' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rhel6m1'
CRS-2677: Stop of 'ora.storage' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rhel6m1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.crf' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rhel6m1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rhel6m1'
CRS-2677: Stop of 'ora.cssd' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rhel6m1'
CRS-2677: Stop of 'ora.gipcd' on 'rhel6m1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel6m1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rhel6m1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rhel6m1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel6m1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel6m1'
CRS-2672: Attempting to start 'ora.evmd' on 'rhel6m1'
CRS-2676: Start of 'ora.mdnsd' on 'rhel6m1' succeeded
CRS-2676: Start of 'ora.evmd' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel6m1'
CRS-2676: Start of 'ora.gpnpd' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rhel6m1'
CRS-2676: Start of 'ora.gipcd' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel6m1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rhel6m1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rhel6m1'
CRS-2676: Start of 'ora.diskmon' on 'rhel6m1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rhel6m1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rhel6m1'
CRS-2676: Start of 'ora.ctssd' on 'rhel6m1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rhel6m1'
CRS-2676: Start of 'ora.asm' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rhel6m1'
CRS-2676: Start of 'ora.storage' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rhel6m1'
CRS-2676: Start of 'ora.crf' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rhel6m1'
CRS-2676: Start of 'ora.crsd' on 'rhel6m1' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: rhel6m1
CRS-6016: Resource auto-start has completed for server rhel6m1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
Starting the Grid Infrastructure Management Repository database
Using PFILE='/opt/app/12.1.0/grid2/dbs/init-dropmgmtdb.ora'
Command output:
SQL*Plus: Release 12.1.0.2.0 Production on Wed Apr 1 12:12:12 2015
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to an idle instance.
SQL> ORACLE instance started.
Total System Global Area  788529152 bytes
Fixed Size                  2929352 bytes
Variable Size             314576184 bytes
Database Buffers          465567744 bytes
Redo Buffers                5455872 bytes
Database mounted.
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management and Advanced Analytics options
Starting the Grid Infrastructure Management Repository database succeeded
Creating new pfile for the Management Repository database

SQL*Plus: Release 12.1.0.2.0 Production on Wed Apr 1 12:12:25 2015

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management and Advanced Analytics options

SQL>
File created.

SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management and Advanced Analytics options
Succeeded in creating PFILE '/opt/app/12.1.0/grid2/dbs/init-dropmgmtdbSAVED.ora'
Dropping the Grid Infrastructure Management Repository database
Dropping the Grid Infrastructure Management Repository database succeeded

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rhel6m1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.CLUSTER_DG.dg' on 'rhel6m1'
CRS-2677: Stop of 'ora.CLUSTER_DG.dg' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'rhel6m1'
CRS-2677: Stop of 'ora.DATA.dg' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.FLASH.dg' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rhel6m1'
CRS-2677: Stop of 'ora.asm' on 'rhel6m1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rhel6m1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.storage' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rhel6m1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.storage' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rhel6m1'
CRS-2677: Stop of 'ora.evmd' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rhel6m1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rhel6m1'
CRS-2677: Stop of 'ora.cssd' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rhel6m1'
CRS-2677: Stop of 'ora.crf' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rhel6m1'
CRS-2677: Stop of 'ora.gipcd' on 'rhel6m1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel6m1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'rhel6m1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel6m1'
CRS-2676: Start of 'ora.mdnsd' on 'rhel6m1' succeeded
CRS-2676: Start of 'ora.evmd' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel6m1'
CRS-2676: Start of 'ora.gpnpd' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel6m1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rhel6m1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rhel6m1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rhel6m1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rhel6m1'
CRS-2676: Start of 'ora.diskmon' on 'rhel6m1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rhel6m1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rhel6m1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rhel6m1'
CRS-2676: Start of 'ora.crf' on 'rhel6m1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'rhel6m1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rhel6m1'
CRS-2676: Start of 'ora.asm' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rhel6m1'
CRS-2676: Start of 'ora.storage' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rhel6m1'
CRS-2676: Start of 'ora.crsd' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.crsd' on 'rhel6m1'
CRS-2677: Stop of 'ora.crsd' on 'rhel6m1' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.storage' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rhel6m1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rhel6m1'
CRS-2677: Stop of 'ora.storage' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rhel6m1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rhel6m1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rhel6m1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rhel6m1'
CRS-2677: Stop of 'ora.cssd' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rhel6m1'
CRS-2677: Stop of 'ora.crf' on 'rhel6m1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rhel6m1'
CRS-2677: Stop of 'ora.gipcd' on 'rhel6m1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel6m1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel6m1'
CRS-2676: Start of 'ora.mdnsd' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel6m1'
CRS-2676: Start of 'ora.gpnpd' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel6m1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rhel6m1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rhel6m1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rhel6m1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rhel6m1'
CRS-2676: Start of 'ora.diskmon' on 'rhel6m1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rhel6m1'
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'rhel6m1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rhel6m1'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rhel6m1'
CRS-2674: Start of 'ora.drivers.acfs' on 'rhel6m1' failed
CRS-2676: Start of 'ora.ctssd' on 'rhel6m1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rhel6m1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rhel6m1'
CRS-2676: Start of 'ora.asm' on 'rhel6m1' succeeded
2015/04/01 12:15:49 CLSRSC-338: Successfully downgraded OCR to version 11.2.0.4.0

CRS-2672: Attempting to start 'ora.crsd' on 'rhel6m1'
CRS-2676: Start of 'ora.crsd' on 'rhel6m1' succeeded
2015/04/01 12:20:01 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2015/04/01 12:20:01 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

Successfully downgraded Oracle Clusterware stack on this node
Run '/opt/app/11.2.0/grid4/bin/crsctl start crs' on all nodes to complete downgrade
Before starting the cluster with 11.2, update the inventory by setting CRS=true for the 11.2 GI home. At this point the 12.1 GI home will have CRS="true".
<HOME NAME="Ora11g_gridinfrahome2" LOC="/opt/app/11.2.0/grid4" TYPE="O" IDX="4">
<NODE_LIST>
<NODE NAME="rhel6m1"/>
<NODE NAME="rhel6m2"/>
</NODE_LIST>
</HOME>
<HOME NAME="OraGI12Home1" LOC="/opt/app/12.1.0/grid2" TYPE="O" IDX="5" CRS="true">
<NODE_LIST>
<NODE NAME="rhel6m1"/>
<NODE NAME="rhel6m2"/>
</NODE_LIST>
</HOME>
Run the following to update the inventory
cd /opt/app/12.1.0/grid2/oui/bin/
[grid@rhel6m1 bin]$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/opt/app/12.1.0/grid2
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 4095 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

[grid@rhel6m1 bin]$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/opt/app/11.2.0/grid4
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 4095 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
Verify the inventory update
<HOME NAME="Ora11g_gridinfrahome2" LOC="/opt/app/11.2.0/grid4" TYPE="O" IDX="4" CRS="true">
<NODE_LIST>
<NODE NAME="rhel6m1"/>
<NODE NAME="rhel6m2"/>
</NODE_LIST>
</HOME>
<HOME NAME="OraGI12Home1" LOC="/opt/app/12.1.0/grid2" TYPE="O" IDX="5">
<NODE_LIST>
<NODE NAME="rhel6m1"/>
<NODE NAME="rhel6m2"/>
</NODE_LIST>
</HOME>
Also make sure the contents of /etc/init.d/ohasd and /etc/init.d/init.ohasd refer to the 11.2 home as ORA_CRS_HOME and that there are no references to 12.1. A few times these files on the last downgraded node (rhel6m1) contained references to 12.1 even though the downgrade command had completed successfully, while the same files on the other node had the correct references to 11.2.
[root@rhel6m1 grid]#  cat /etc/init.d/ohasd | grep ORA_CRS_HOME
ORA_CRS_HOME=/opt/app/11.2.0/grid4
export ORA_CRS_HOME

[root@rhel6m1 grid]# cat /etc/init.d/init.ohasd | grep ORA_CRS_HOME
ORA_CRS_HOME=/opt/app/11.2.0/grid4
export ORA_CRS_HOME
PERL="/opt/app/11.2.0/grid4/perl/bin/perl -I${ORA_CRS_HOME}/perl/lib"
Also, as part of the downgrade, /etc/oratab is modified: all references to MGMTDB are removed and the ASM instance home is set to 11.2. But this only happened on the last node (rhel6m1); on the other nodes /etc/oratab did not have any ASM related entry. Manually add the ASM entry to /etc/oratab
+ASM2:/opt/app/11.2.0/grid4:N
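One way to append it on a node missing the entry (run as root; the ASM SID differs per node, +ASM1, +ASM2 and so on):
echo "+ASM2:/opt/app/11.2.0/grid4:N" >> /etc/oratab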
Before starting the CRS, unset any references to 12.1 binaries from the environment (PATH and ORACLE_HOME variables). Make sure the crsctl used to start the CRS comes from the 11.2 GI home.
[grid@rhel6m1 bin]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base remains unchanged with value /opt/app/oracle
[grid@rhel6m1 bin]$ which crsctl
/opt/app/11.2.0/grid4/bin/crsctl
Start the crs on all nodes.
# crsctl start crs

crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [11.2.0.4.0]. The cluster upgrade state is [NORMAL].

crsctl query crs softwareversion
Oracle Clusterware version on node [rhel6m1] is [11.2.0.4.0]

[root@rhel6m1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3356
Available space (kbytes) : 258764
ID : 2072206343
Device/File Name : +CLUSTER_DG
This concludes the steps for successfully downgrading GI from 12.1.0.2 to 11.2.0.4.

There could be occasions where the downgrade command completes successfully but the start of CRS fails. Symptoms in such cases include being unable to discover any voting disks (crsctl query css votedisk doesn't return any vote disk information), with ocssd.log having entries similar to
2015-03-31 14:00:09.618: [    CSSD][898090752]clssnmvDiskVerify: Successful discovery of 0 disks
2015-03-31 14:00:09.618: [ CSSD][898090752]clssnmCompleteInitVFDiscovery: Completing initial voting file discovery
2015-03-31 14:00:09.618: [ CSSD][898090752]clssnmvFindInitialConfigs: No voting files found
Other times the symptoms included a corrupted OCR, with ocssd.log having entries similar to
2015-03-26 11:33:05.633: [ CRSMAIN][3817551648] Initialing cluclu context...
[ OCRMAS][3776734976]th_calc_av:8': Failed in vsnupr. Incorrect SV stored in OCR. Key [SYSTEM.version.hostnames.] Value []
2015-03-26 11:33:06.618: [ OCRSRV][3776734976]th_upgrade:9 Shutdown CacheMaster. prev AV [186647552] new calc av [186647552] my sv [186647552]
No root cause was found for these cases. It could only be assumed that this may be due to some of the earlier mentioned reasons, such as having OCR backups from previous upgrades, i.e. ocr11.2.0.3.0 (though it must be said a successful downgrade was achieved while ocr11.2.0.3.0 was in the cdata directory), or the wrong binaries being referenced during the downgrade due to environment variable settings. The only option to recover from such a situation is to restore an OCR backup taken while the cluster was on 11.2.
crsctl stop crs -f # run on all nodes
crsctl start crs -excl -nocrs # run only on one node
ocrconfig -restore /opt/app/11.2.0/grid4/cdata/rhel6m-cluster/backup_20150331_170634.ocr
crsctl replace votedisk +cluster_dg
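The backup used above can be located by listing the available automatic and manual OCR backups from the 11.2 GI home (run as root):
/opt/app/11.2.0/grid4/bin/ocrconfig -showbackup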
Useful metalink notes
"rootcrs.pl -downgrade -lastnode" Fails if Current Directory Can not be Accessed by Grid User [ID 1917922.1]
SRVCTL commands fails with error CLSU-00100: Operating System function: sclsdgcwd failed with error data: -1 [ID 1961344.1]
Srvctl Status Commands Report CLSU-00100, CLSU-00101 and CLSU-00103 Errors [ID 1911258.1]

Related Posts
Downgrade Grid Infrastructure from 11.2.0.4 to 11.2.0.3
Downgrade Grid Infrastructure from 11.2.0.4 to 11.1.0.7