Upgrading RAC from 11.2.0.4 to 12.1.0.2 - Grid Infrastructure

This post gives the highlights of upgrading the grid infrastructure of a two-node RAC from 11.2.0.4 to 12.1.0.2. An earlier post covered upgrading GI for a single instance from 11.2.0.4 to 12.1.0.2. The system upgraded here is the same system that was upgraded from 11.2.0.3 to 11.2.0.4 earlier.
As mentioned in the post on upgrading from 12.1.0.1 to 12.1.0.2, the prerequisites for upgrading to 12.1.0.2 have changed; most notably, the GI management repository is no longer optional. Prior to the upgrade, the OCR and voting files were moved to a diskgroup with sufficient space for the GI management repository. The rest of the steps are listed assuming these prerequisites are complete.
1. Before the GI upgrade, back up the OCR manually. This backup could be used later to downgrade GI from 12.1.0.2 to 11.2.0.4.
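A minimal sketch of taking the manual backup with ocrconfig, run as root, using the 11.2.0.4 GI home path from this post's environment (adjust to yours):

```shell
# GRID_HOME is the current 11.2.0.4 home used throughout this post.
GRID_HOME=${GRID_HOME:-/opt/app/11.2.0/grid4}
OCRCONFIG="$GRID_HOME/bin/ocrconfig"

# Take a manual OCR backup, then list manual backups to confirm it was written.
# (Guarded so the sketch is a no-op where the GI binaries are not present.)
if [ -x "$OCRCONFIG" ]; then
  "$OCRCONFIG" -manualbackup
  "$OCRCONFIG" -showbackup manual
fi
```

The backup location shown by `-showbackup manual` is worth noting down before proceeding, since it is what a downgrade would restore from.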
2. The RAC has been set up with role separation. This requires certain directories to have permissions such that both the grid user and the oracle user can read and write to them. Make cfgtoollogs writable by both grid and oracle; usually this directory is writable only by one user, with group permission limited to read and execute. Without write permission on this directory, the following errors could be seen when upgrading GI.
PRCR-1158 : Directory /opt/app/oracle/cfgtoollogs/asmca in file path does not exist
PRCR-1154 : Failed to create file output stream with file name: /opt/app/oracle/cfgtoollogs/asmca/asmca-150325PM054138.logCouldn't get lock for /opt/app/oracle/cfgtoollogs/asmca/asmca-150325PM054138.log

chmod 775 $ORACLE_BASE/cfgtoollogs
Another directory that requires write permission for the grid user is dbca, inside cfgtoollogs.
chmod 770 $ORACLE_BASE/cfgtoollogs/dbca
As the GI management repository is a database, the admin directory also requires write permission for the grid user. Without this permission, creation of the GI management repository will fail with the following:
Cannot create directory "/opt/app/oracle/admin/_mgmtdb/dpdump".

chmod 770 $ORACLE_BASE/admin
These permissions must be set on all the nodes.
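Before launching the installer it is worth confirming the resulting modes on both nodes. A small sketch (the helper name check_perm is mine) that prints the octal mode of each directory so it can be compared against the expected values (775 for cfgtoollogs, 770 for dbca and admin):

```shell
# Print "<octal mode> <path>" for each directory given, so the modes can be
# compared against the expected 775/770 values. Run this on every node.
check_perm() {
  stat -c '%a %n' "$@"
}

# Usage against this post's ORACLE_BASE (/opt/app/oracle):
# check_perm /opt/app/oracle/cfgtoollogs \
#            /opt/app/oracle/cfgtoollogs/dbca \
#            /opt/app/oracle/admin
```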
3. The GI upgrade is an out-of-place upgrade. Create a new GI home on all nodes.
mkdir -p /opt/app/12.1.0/grid2
chown -R grid:oinstall /opt/app/12.1.0
chmod -R 775 /opt/app/12.1.0
4. Before the upgrade, check that the GI active version is 11.2.0.4 and that the cluster upgrade state is NORMAL.
[grid@rhel6m1 grid]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.4.0]
[grid@rhel6m1 grid]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rhel6m1] is [11.2.0.4.0]
[grid@rhel6m1 grid]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [11.2.0.4.0]. The cluster upgrade state is [NORMAL].
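These checks can also be scripted. A minimal sketch (the function name parse_activeversion is mine) that extracts the version and upgrade state from the `crsctl query crs activeversion -f` output shown above:

```shell
# Parse the output of "crsctl query crs activeversion -f" and print
# "<version> <state>", e.g. "11.2.0.4.0 NORMAL". Expects a line like:
#   Oracle Clusterware active version on the cluster is [11.2.0.4.0].
#   The cluster upgrade state is [NORMAL].
parse_activeversion() {
  sed -n 's/.*active version on the cluster is \[\([^]]*\)\].*upgrade state is \[\([^]]*\)\].*/\1 \2/p' <<< "$1"
}

# Usage with live output:
# parse_activeversion "$(crsctl query crs activeversion -f)"
```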
5. Verify the prerequisites with orachk (refer to MOS note 1268927.2) and cluvfy.
orachk -u -o pre

[grid@rhel6m1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /opt/app/11.2.0/grid4 -dest_crshome /opt/app/12.1.0/grid2 -dest_version 12.1.0.2.0 -fixup -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "rhel6m1"
Destination Node Reachable?
------------------------------------ ------------------------
rhel6m1 yes
rhel6m2 yes
Result: Node reachability check passed from node "rhel6m1"


Checking user equivalence...

Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
rhel6m2 passed
rhel6m1 passed
Result: User equivalence check passed for user "grid"

Check: Package existence for "cvuqdisk"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 cvuqdisk-1.0.9-1 cvuqdisk-1.0.9-1 passed
rhel6m1 cvuqdisk-1.0.9-1 cvuqdisk-1.0.9-1 passed
Result: Package existence check passed for "cvuqdisk"

Check: Grid Infrastructure home writeability of path /opt/app/12.1.0/grid2
Grid Infrastructure home check passed

Checking CRS user consistency
Result: CRS user consistency check successful
Checking network configuration consistency.
Result: Check for network configuration consistency passed.
Checking ASM disk size consistency
All ASM disks are correctly sized
Checking if default discovery string is being used by ASM
ASM discovery string "/dev/sd*" is not the default discovery string
Checking if ASM parameter file is in use by an ASM instance on the local node
Result: ASM instance is using parameter file "+CLUSTER_DG/rhel6m-cluster/asmparameterfile/registry.253.785350531" on node "rhel6m1" on which upgrade is requested.

Checking OLR integrity...
Check of existence of OLR configuration file "/etc/oracle/olr.loc" passed
Check of attributes of OLR configuration file "/etc/oracle/olr.loc" passed

WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.

OLR integrity check passed

Checking node connectivity...

Checking hosts config file...
Node Name Status
------------------------------------ ------------------------
rhel6m1 passed
rhel6m2 passed

Verification of the hosts config file successful


Interface information for node "rhel6m1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.0.85 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:29:3C:7A 1500
eth0 192.168.0.91 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:29:3C:7A 1500
eth0 192.168.0.89 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:29:3C:7A 1500
eth1 192.168.1.87 192.168.1.0 0.0.0.0 192.168.0.100 08:00:27:AE:A8:83 1500
eth1 169.254.31.63 169.254.0.0 0.0.0.0 192.168.0.100 08:00:27:AE:A8:83 1500


Interface information for node "rhel6m2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.0.86 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:6A:D4:18 1500
eth0 192.168.0.90 192.168.0.0 0.0.0.0 192.168.0.100 08:00:27:6A:D4:18 1500
eth1 192.168.1.88 192.168.1.0 0.0.0.0 192.168.0.100 08:00:27:D4:AC:BE 1500
eth1 169.254.160.199 169.254.0.0 0.0.0.0 192.168.0.100 08:00:27:D4:AC:BE 1500


Check: Node connectivity using interfaces on subnet "192.168.0.0"

Check: Node connectivity of subnet "192.168.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rhel6m1[192.168.0.89] rhel6m2[192.168.0.90] yes
rhel6m1[192.168.0.89] rhel6m1[192.168.0.85] yes
rhel6m1[192.168.0.89] rhel6m1[192.168.0.91] yes
rhel6m1[192.168.0.89] rhel6m2[192.168.0.86] yes
rhel6m2[192.168.0.90] rhel6m1[192.168.0.85] yes
rhel6m2[192.168.0.90] rhel6m1[192.168.0.91] yes
rhel6m2[192.168.0.90] rhel6m2[192.168.0.86] yes
rhel6m1[192.168.0.85] rhel6m1[192.168.0.91] yes
rhel6m1[192.168.0.85] rhel6m2[192.168.0.86] yes
rhel6m1[192.168.0.91] rhel6m2[192.168.0.86] yes
Result: Node connectivity passed for subnet "192.168.0.0" with node(s) rhel6m1,rhel6m2


Check: TCP connectivity of subnet "192.168.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rhel6m1 : 192.168.0.89 rhel6m1 : 192.168.0.89 passed
rhel6m2 : 192.168.0.90 rhel6m1 : 192.168.0.89 passed
rhel6m1 : 192.168.0.85 rhel6m1 : 192.168.0.89 passed
rhel6m1 : 192.168.0.91 rhel6m1 : 192.168.0.89 passed
rhel6m2 : 192.168.0.86 rhel6m1 : 192.168.0.89 passed
rhel6m1 : 192.168.0.89 rhel6m2 : 192.168.0.90 passed
rhel6m2 : 192.168.0.90 rhel6m2 : 192.168.0.90 passed
rhel6m1 : 192.168.0.85 rhel6m2 : 192.168.0.90 passed
rhel6m1 : 192.168.0.91 rhel6m2 : 192.168.0.90 passed
rhel6m2 : 192.168.0.86 rhel6m2 : 192.168.0.90 passed
rhel6m1 : 192.168.0.89 rhel6m1 : 192.168.0.85 passed
rhel6m2 : 192.168.0.90 rhel6m1 : 192.168.0.85 passed
rhel6m1 : 192.168.0.85 rhel6m1 : 192.168.0.85 passed
rhel6m1 : 192.168.0.91 rhel6m1 : 192.168.0.85 passed
rhel6m2 : 192.168.0.86 rhel6m1 : 192.168.0.85 passed
rhel6m1 : 192.168.0.89 rhel6m1 : 192.168.0.91 passed
rhel6m2 : 192.168.0.90 rhel6m1 : 192.168.0.91 passed
rhel6m1 : 192.168.0.85 rhel6m1 : 192.168.0.91 passed
rhel6m1 : 192.168.0.91 rhel6m1 : 192.168.0.91 passed
rhel6m2 : 192.168.0.86 rhel6m1 : 192.168.0.91 passed
rhel6m1 : 192.168.0.89 rhel6m2 : 192.168.0.86 passed
rhel6m2 : 192.168.0.90 rhel6m2 : 192.168.0.86 passed
rhel6m1 : 192.168.0.85 rhel6m2 : 192.168.0.86 passed
rhel6m1 : 192.168.0.91 rhel6m2 : 192.168.0.86 passed
rhel6m2 : 192.168.0.86 rhel6m2 : 192.168.0.86 passed
Result: TCP connectivity check passed for subnet "192.168.0.0"


Check: Node connectivity using interfaces on subnet "192.168.1.0"

Check: Node connectivity of subnet "192.168.1.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rhel6m1[192.168.1.87] rhel6m2[192.168.1.88] yes
Result: Node connectivity passed for subnet "192.168.1.0" with node(s) rhel6m1,rhel6m2


Check: TCP connectivity of subnet "192.168.1.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rhel6m1 : 192.168.1.87 rhel6m1 : 192.168.1.87 passed
rhel6m2 : 192.168.1.88 rhel6m1 : 192.168.1.87 passed
rhel6m1 : 192.168.1.87 rhel6m2 : 192.168.1.88 passed
rhel6m2 : 192.168.1.88 rhel6m2 : 192.168.1.88 passed
Result: TCP connectivity check passed for subnet "192.168.1.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.0.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.
Task ASM Integrity check started...


Starting check to see if ASM is running on all cluster nodes...

ASM Running check passed. ASM is running on all specified nodes

Confirming that at least one ASM disk group is configured...
Disk Group Check passed. At least one Disk Group configured

Task ASM Integrity check passed...

Checking OCR integrity...
Disks "+CLUSTER_DG" are managed by ASM.

OCR integrity check passed

Check: Total memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 3.9992GB (4193496.0KB) 4GB (4194304.0KB) failed
rhel6m1 3.9992GB (4193496.0KB) 4GB (4194304.0KB) failed
Result: Total memory check failed

Check: Available memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 2.6916GB (2822336.0KB) 50MB (51200.0KB) passed
rhel6m1 2.4122GB (2529368.0KB) 50MB (51200.0KB) passed
Result: Available memory check passed

Check: Swap space
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 4GB (4194296.0KB) 3.9992GB (4193496.0KB) passed
rhel6m1 4GB (4194296.0KB) 3.9992GB (4193496.0KB) passed
Result: Swap space check passed

Check: Free disk space for "rhel6m2:/usr,rhel6m2:/var,rhel6m2:/etc,rhel6m2:/opt/app/11.2.0/grid4,rhel6m2:/sbin,rhel6m2:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/usr rhel6m2 / 11.7715GB 7.9635GB passed
/var rhel6m2 / 11.7715GB 7.9635GB passed
/etc rhel6m2 / 11.7715GB 7.9635GB passed
/opt/app/11.2.0/grid4 rhel6m2 / 11.7715GB 7.9635GB passed
/sbin rhel6m2 / 11.7715GB 7.9635GB passed
/tmp rhel6m2 / 11.7715GB 7.9635GB passed
Result: Free disk space check passed for "rhel6m2:/usr,rhel6m2:/var,rhel6m2:/etc,rhel6m2:/opt/app/11.2.0/grid4,rhel6m2:/sbin,rhel6m2:/tmp"

Check: Free disk space for "rhel6m1:/usr,rhel6m1:/var,rhel6m1:/etc,rhel6m1:/opt/app/11.2.0/grid4,rhel6m1:/sbin,rhel6m1:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/usr rhel6m1 / 10.072GB 7.9635GB passed
/var rhel6m1 / 10.072GB 7.9635GB passed
/etc rhel6m1 / 10.072GB 7.9635GB passed
/opt/app/11.2.0/grid4 rhel6m1 / 10.072GB 7.9635GB passed
/sbin rhel6m1 / 10.072GB 7.9635GB passed
/tmp rhel6m1 / 10.072GB 7.9635GB passed
Result: Free disk space check passed for "rhel6m1:/usr,rhel6m1:/var,rhel6m1:/etc,rhel6m1:/opt/app/11.2.0/grid4,rhel6m1:/sbin,rhel6m1:/tmp"

Check: User existence for "grid"
Node Name Status Comment
------------ ------------------------ ------------------------
rhel6m2 passed exists(502)
rhel6m1 passed exists(502)

Checking for multiple users with UID value 502
Result: Check for multiple users with UID value 502 passed
Result: User existence check passed for "grid"

Check: Group existence for "oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
rhel6m2 passed exists
rhel6m1 passed exists
Result: Group existence check passed for "oinstall"

Check: Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
rhel6m2 passed exists
rhel6m1 passed exists
Result: Group existence check passed for "dba"

Check: Membership of user "grid" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Status
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m2 yes yes yes yes passed
rhel6m1 yes yes yes yes passed
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed

Check: Membership of user "grid" in group "dba"
Node Name User Exists Group Exists User in Group Status
---------------- ------------ ------------ ------------ ----------------
rhel6m2 yes yes yes passed
rhel6m1 yes yes yes passed
Result: Membership check for user "grid" in group "dba" passed

Check: Run level
Node Name run level Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 3 3,5 passed
rhel6m1 3 3,5 passed
Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rhel6m2 hard 65536 65536 passed
rhel6m1 hard 65536 65536 passed
Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rhel6m2 soft 1024 1024 passed
rhel6m1 soft 1024 1024 passed
Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rhel6m2 hard 16384 16384 passed
rhel6m1 hard 16384 16384 passed
Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rhel6m2 soft 2047 2047 passed
rhel6m1 soft 2047 2047 passed
Result: Soft limits check passed for "maximum user processes"

There are no oracle patches required for home "/opt/app/11.2.0/grid4".

There are no oracle patches required for home "/opt/app/11.2.0/grid4".

Checking for suitability of source home "/opt/app/11.2.0/grid4" for upgrading to version "12.1.0.2.0".
Result: Source home "/opt/app/11.2.0/grid4" is suitable for upgrading to version "12.1.0.2.0".

Check: System architecture
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 x86_64 x86_64 passed
rhel6m1 x86_64 x86_64 passed
Result: System architecture check passed

Check: Kernel version
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 2.6.32-220.el6.x86_64 2.6.32 passed
rhel6m1 2.6.32-220.el6.x86_64 2.6.32 passed
Result: Kernel version check passed

Check: Kernel parameter for "semmsl"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 3010 3010 250 passed
rhel6m2 3010 3010 250 passed
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 385280 385280 32000 passed
rhel6m2 385280 385280 32000 passed
Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 3010 3010 100 passed
rhel6m2 3010 3010 100 passed
Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 128 128 128 passed
rhel6m2 128 128 128 passed
Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 68719476736 68719476736 2008657920 passed
rhel6m2 68719476736 68719476736 2008657920 passed
Result: Kernel parameter check passed for "shmmax"

Check: Kernel parameter for "shmmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 4096 4096 4096 passed
rhel6m2 4096 4096 4096 passed
Result: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "shmall"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 4294967296 4294967296 392316 passed
rhel6m2 4294967296 4294967296 392316 passed
Result: Kernel parameter check passed for "shmall"

Check: Kernel parameter for "file-max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 6815744 6815744 6815744 passed
rhel6m2 6815744 6815744 6815744 passed
Result: Kernel parameter check passed for "file-max"

Check: Kernel parameter for "ip_local_port_range"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 between 9000 & 65500 between 9000 & 65500 between 9000 & 65535 passed
rhel6m2 between 9000 & 65500 between 9000 & 65500 between 9000 & 65535 passed
Result: Kernel parameter check passed for "ip_local_port_range"

Check: Kernel parameter for "rmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 4194304 4194304 262144 passed
rhel6m2 4194304 4194304 262144 passed
Result: Kernel parameter check passed for "rmem_default"

Check: Kernel parameter for "rmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 4194304 4194304 4194304 passed
rhel6m2 4194304 4194304 4194304 passed
Result: Kernel parameter check passed for "rmem_max"

Check: Kernel parameter for "wmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 1048576 1048576 262144 passed
rhel6m2 1048576 1048576 262144 passed
Result: Kernel parameter check passed for "wmem_default"

Check: Kernel parameter for "wmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 2097152 2097152 1048576 passed
rhel6m2 2097152 2097152 1048576 passed
Result: Kernel parameter check passed for "wmem_max"

Check: Kernel parameter for "aio-max-nr"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 3145728 3145728 1048576 passed
rhel6m2 3145728 3145728 1048576 passed
Result: Kernel parameter check passed for "aio-max-nr"

Check: Kernel parameter for "panic_on_oops"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rhel6m1 1 1 1 passed
rhel6m2 1 1 1 passed
Result: Kernel parameter check passed for "panic_on_oops"

Check: Package existence for "binutils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 binutils-2.20.51.0.2-5.28.el6 binutils-2.20.51.0.2 passed
rhel6m1 binutils-2.20.51.0.2-5.28.el6 binutils-2.20.51.0.2 passed
Result: Package existence check passed for "binutils"

Check: Package existence for "compat-libcap1"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 compat-libcap1-1.10-1 compat-libcap1-1.10 passed
rhel6m1 compat-libcap1-1.10-1 compat-libcap1-1.10 passed
Result: Package existence check passed for "compat-libcap1"

Check: Package existence for "compat-libstdc++-33(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed
rhel6m1 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"

Check: Package existence for "libgcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 libgcc(x86_64)-4.4.6-3.el6 libgcc(x86_64)-4.4.4 passed
rhel6m1 libgcc(x86_64)-4.4.6-3.el6 libgcc(x86_64)-4.4.4 passed
Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 libstdc++(x86_64)-4.4.6-3.el6 libstdc++(x86_64)-4.4.4 passed
rhel6m1 libstdc++(x86_64)-4.4.6-3.el6 libstdc++(x86_64)-4.4.4 passed
Result: Package existence check passed for "libstdc++(x86_64)"

Check: Package existence for "libstdc++-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 libstdc++-devel(x86_64)-4.4.6-3.el6 libstdc++-devel(x86_64)-4.4.4 passed
rhel6m1 libstdc++-devel(x86_64)-4.4.6-3.el6 libstdc++-devel(x86_64)-4.4.4 passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 sysstat-9.0.4-18.el6 sysstat-9.0.4 passed
rhel6m1 sysstat-9.0.4-18.el6 sysstat-9.0.4 passed
Result: Package existence check passed for "sysstat"

Check: Package existence for "gcc"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 gcc-4.4.6-3.el6 gcc-4.4.4 passed
rhel6m1 gcc-4.4.6-3.el6 gcc-4.4.4 passed
Result: Package existence check passed for "gcc"

Check: Package existence for "gcc-c++"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 gcc-c++-4.4.6-3.el6 gcc-c++-4.4.4 passed
rhel6m1 gcc-c++-4.4.6-3.el6 gcc-c++-4.4.4 passed
Result: Package existence check passed for "gcc-c++"

Check: Package existence for "ksh"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 ksh ksh passed
rhel6m1 ksh ksh passed
Result: Package existence check passed for "ksh"

Check: Package existence for "make"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 make-3.81-19.el6 make-3.81 passed
rhel6m1 make-3.81-19.el6 make-3.81 passed
Result: Package existence check passed for "make"

Check: Package existence for "glibc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 glibc(x86_64)-2.12-1.47.el6 glibc(x86_64)-2.12 passed
rhel6m1 glibc(x86_64)-2.12-1.47.el6 glibc(x86_64)-2.12 passed
Result: Package existence check passed for "glibc(x86_64)"

Check: Package existence for "glibc-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 glibc-devel(x86_64)-2.12-1.47.el6 glibc-devel(x86_64)-2.12 passed
rhel6m1 glibc-devel(x86_64)-2.12-1.47.el6 glibc-devel(x86_64)-2.12 passed
Result: Package existence check passed for "glibc-devel(x86_64)"

Check: Package existence for "libaio(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.107 passed
rhel6m1 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.107 passed
Result: Package existence check passed for "libaio(x86_64)"

Check: Package existence for "libaio-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.107 passed
rhel6m1 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.107 passed
Result: Package existence check passed for "libaio-devel(x86_64)"

Check: Package existence for "nfs-utils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rhel6m2 nfs-utils-1.2.3-15.el6 nfs-utils-1.2.3-15 passed
rhel6m1 nfs-utils-1.2.3-15.el6 nfs-utils-1.2.3-15 passed
Result: Package existence check passed for "nfs-utils"

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed

Check: Current group ID
Result: Current group ID check passed

Starting check for consistency of primary group of root user
Node Name Status
------------------------------------ ------------------------
rhel6m2 passed
rhel6m1 passed

Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

Checking existence of NTP configuration file "/etc/ntp.conf" across nodes
Node Name File exists?
------------------------------------ ------------------------
rhel6m2 no
rhel6m1 no
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running

Result: Clock synchronization check using Network Time Protocol(NTP) passed

Checking Core file name pattern consistency...
Core file name pattern consistency check passed.

Checking to make sure user "grid" is not in "root" group
Node Name Status Comment
------------ ------------------------ ------------------------
rhel6m2 passed does not exist
rhel6m1 passed does not exist
Result: User "grid" is not part of "root" group. Check passed

Check default user file creation mask
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rhel6m2 0022 0022 passed
rhel6m1 0022 0022 passed
Result: Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of 'domain' and 'search' entries is defined
"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
Checking if 'domain' entry in file "/etc/resolv.conf" is consistent across the nodes...
"domain" entry does not exist in any "/etc/resolv.conf" file
Checking if 'search' entry in file "/etc/resolv.conf" is consistent across the nodes...
Checking file "/etc/resolv.conf" to make sure that only one 'search' entry is defined
More than one "search" entry does not exist in any "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
Checking DNS response time for an unreachable node
Node Name Status
------------------------------------ ------------------------
rhel6m1 passed
rhel6m2 passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
checking DNS response from all servers in "/etc/resolv.conf"
checking response for name "rhel6m2" from each of the name servers specified in "/etc/resolv.conf"
Node Name Source Comment Status
------------ ------------------------ ------------------------ ----------
rhel6m2 10.10.10.10 IPv4 passed
rhel6m2 10.10.10.11 IPv4 passed
checking response for name "rhel6m1" from each of the name servers specified in "/etc/resolv.conf"
Node Name Source Comment Status
------------ ------------------------ ------------------------ ----------
rhel6m1 10.10.10.10 IPv4 passed
rhel6m1 10.10.10.11 IPv4 passed

Check for integrity of file "/etc/resolv.conf" passed


UDev attributes check for OCR locations started...
Result: UDev attributes check passed for OCR locations


UDev attributes check for Voting Disk locations started...
Result: UDev attributes check passed for Voting Disk locations

Check: Time zone consistency
Result: Time zone consistency check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed

Clusterware version consistency passed.

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Checking daemon "avahi-daemon" is not configured and running

Check: Daemon "avahi-daemon" not configured
Node Name Configured Status
------------ ------------------------ ------------------------
rhel6m2 no passed
rhel6m1 no passed
Daemon not configured check passed for process "avahi-daemon"

Check: Daemon "avahi-daemon" not running
Node Name Running? Status
------------ ------------------------ ------------------------
rhel6m2 no passed
rhel6m1 no passed
Daemon not running check passed for process "avahi-daemon"

Starting check for Network interface bonding status of private interconnect network interfaces ...

Check for Network interface bonding status of private interconnect network interfaces passed

Starting check for /dev/shm mounted as temporary file system ...

Check for /dev/shm mounted as temporary file system passed

Starting check for /boot mount ...

Check for /boot mount passed

Starting check for zeroconf check ...

Check for zeroconf check passed

Pre-check for cluster services setup was unsuccessful on all the nodes.
The only failure is the total physical memory check: the nodes report slightly less than the required 4GB. This is ignorable, and the upgrade can continue.
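The shortfall can be quantified from the cluvfy figures above; the kernel reserves a little RAM, so MemTotal (which is what the check reads) comes out just under the full 4GB:

```shell
# Figures taken from the cluvfy "Total memory" check above.
required_kb=4194304    # cluvfy's 4GB requirement
available_kb=4193496   # MemTotal reported by both nodes
echo "short by $(( required_kb - available_kb )) KB"   # prints "short by 808 KB"
```

An 808KB gap on a 4GB requirement is clearly a rounding artifact of kernel-reserved memory, not a genuine resource problem.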



6. Run the installer and select the GI upgrade option to continue the upgrade in the OUI.

7. When prompted, run the rootupgrade.sh script on each node. The output below is from running rootupgrade.sh on the first node.
[root@rhel6m1 ~]# /opt/app/12.1.0/grid2/rootupgrade.sh
Performing root user operation.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/12.1.0/grid2/crs/install/crsconfig_params
2015/03/26 17:32:22 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2015/03/26 17:32:25 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2015/03/26 17:32:34 CLSRSC-464: Starting retrieval of the cluster configuration data

2015/03/26 17:32:58 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2015/03/26 17:32:58 CLSRSC-363: User ignored prerequisites during installation

2015/03/26 17:33:28 CLSRSC-515: Starting OCR manual backup.

2015/03/26 17:33:34 CLSRSC-516: OCR manual backup successful.

2015/03/26 17:33:46 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode

2015/03/26 17:33:46 CLSRSC-482: Running command: '/opt/app/12.1.0/grid2/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /opt/app/11.2.0/grid4 -oldCRSVersion 11.2.0.4.0 -nodeNumber 1 -firstNode true -startRolling true'

ASM configuration upgraded in local node successfully.

2015/03/26 17:33:59 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode

2015/03/26 17:34:00 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2015/03/26 17:34:28 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2015/03/26 17:38:13 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2015/03/26 17:44:21 CLSRSC-472: Attempting to export the OCR

2015/03/26 17:44:21 CLSRSC-482: Running command: 'ocrconfig -upgrade grid oinstall'

2015/03/26 17:44:48 CLSRSC-473: Successfully exported the OCR

2015/03/26 17:44:55 CLSRSC-486:
At this stage of upgrade, the OCR has changed.
Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.


2015/03/26 17:44:55 CLSRSC-541:
To downgrade the cluster:
1. All nodes that have been upgraded must be downgraded.

2015/03/26 17:44:55 CLSRSC-542:
2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.

2015/03/26 17:44:55 CLSRSC-543:
3. The downgrade command must be run on the node rhel6m2 with the '-lastnode' option to restore global configuration data.

2015/03/26 17:45:21 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2015/03/26 17:45:51 CLSRSC-474: Initiating upgrade of resource types

2015/03/26 17:46:30 CLSRSC-482: Running command: 'upgrade model -s 11.2.0.4.0 -d 12.1.0.2.0 -p first'

2015/03/26 17:46:31 CLSRSC-475: Upgrade of resource types successfully initiated.

2015/03/26 17:46:46 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Output from running rootupgrade.sh on the second node (the last node):
[root@rhel6m2 ~]# /opt/app/12.1.0/grid2/rootupgrade.sh
Performing root user operation.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/12.1.0/grid2/crs/install/crsconfig_params
2015/03/26 17:48:12 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2015/03/26 17:48:15 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2015/03/26 17:48:18 CLSRSC-464: Starting retrieval of the cluster configuration data

2015/03/26 17:48:34 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2015/03/26 17:48:34 CLSRSC-363: User ignored prerequisites during installation

ASM configuration upgraded in local node successfully.

2015/03/26 17:48:59 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2015/03/26 17:49:31 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2015/03/26 17:50:07 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2015/03/26 17:55:00 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Start upgrade invoked..
2015/03/26 17:55:29 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded

2015/03/26 17:55:29 CLSRSC-482: Running command: '/opt/app/12.1.0/grid2/bin/crsctl set crs activeversion'

Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2015/03/26 17:58:19 CLSRSC-479: Successfully set Oracle Clusterware active version

2015/03/26 17:58:28 CLSRSC-476: Finishing upgrade of resource types

2015/03/26 17:58:42 CLSRSC-482: Running command: 'upgrade model -s 11.2.0.4.0 -d 12.1.0.2.0 -p last'

2015/03/26 17:58:42 CLSRSC-477: Successfully completed upgrade of resource types

2015/03/26 17:59:32 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
8. Once the rootupgrade.sh scripts have been run on all nodes, the GI management repository will be created.
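At this point the management repository database (MGMTDB) can be checked. A hedged sketch, assuming the grid user's environment and this post's 12.1.0.2 grid home path:

```shell
# Check that the GI management repository (MGMTDB) is registered and
# running after the upgrade. GRID_HOME is this post's path - adjust.
GRID_HOME=/opt/app/12.1.0/grid2
if [ -x "$GRID_HOME/bin/srvctl" ]; then
  "$GRID_HOME/bin/srvctl" status mgmtdb          # repository database status
  "$GRID_HOME/bin/crsctl" status res ora.mgmtdb  # clusterware resource state
else
  echo "srvctl not found under $GRID_HOME - check the grid home path"
fi
```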

9. Once OUI completes, verify that the active version has been upgraded to 12.1.0.2
[grid@rhel6m1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rhel6m1] is [12.1.0.2.0]
[grid@rhel6m1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL].
10. Use
cluvfy stage -post crsinst
or
orachk -u -o post
to check for any post-upgrade issues. If satisfied with the upgrade, uninstall the 11.2.0.4 version of the grid infrastructure software. This concludes the upgrade of grid infrastructure from 11.2.0.4 to 12.1.0.2.
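The post-upgrade checks in step 10 can be sketched as follows; the node names and grid home are the ones from this post, and orachk is assumed to have been downloaded separately from My Oracle Support:

```shell
# Post-upgrade verification sketch. Node names and grid home match
# this post's cluster - substitute your own.
GRID_HOME=/opt/app/12.1.0/grid2
NODES=rhel6m1,rhel6m2
if [ -x "$GRID_HOME/bin/cluvfy" ]; then
  "$GRID_HOME/bin/cluvfy" stage -post crsinst -n "$NODES" -verbose
else
  echo "cluvfy not found under $GRID_HOME"
fi
# orachk ships separately; from its install directory run:
#   ./orachk -u -o post
```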

Related Posts
Upgrading Grid Infrastructure Used for Single Instance from 11.2.0.4 to 12.1.0.2
Upgrading RAC from 12.1.0.1 to 12.1.0.2 - Grid Infrastructure
Upgrading 12c CDB and PDB from 12.1.0.1 to 12.1.0.2
Upgrading from 11gR2 (11.2.0.3) to 12c (12.1.0.1) Grid Infrastructure
Upgrade Oracle Database 12c1 from 12.1.0.1 to 12.1.0.2
Upgrading from 10.2.0.4 to 10.2.0.5 (Clusterware, RAC, ASM)
Upgrade from 10.2.0.5 to 11.2.0.3 (Clusterware, RAC, ASM)
Upgrade from 11.1.0.7 to 11.2.0.3 (Clusterware, ASM & RAC)
Upgrade from 11.1.0.7 to 11.2.0.4 (Clusterware, ASM & RAC)
Upgrading from 11.1.0.7 to 11.2.0.3 with Transient Logical Standby
Upgrading from 11.2.0.1 to 11.2.0.3 with in-place upgrade for RAC
In-place upgrade from 11.2.0.2 to 11.2.0.3
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 1
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 2
Upgrading from 11gR2 (11.2.0.3) to 12c (12.1.0.1) Grid Infrastructure

