The flex cluster setup used here consists of two hub nodes, named rhel12c1 and rhel12c2. One of the nodes will be changed from a hub node to a leaf node and then back to a hub node.
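As a quick sanity check before making any changes, the cluster mode can be confirmed with crsctl. The command below is standard syntax; the output shown is what a flex cluster typically reports, not output captured from this setup.
[grid@rhel12c1 ~]$ crsctl get cluster mode status
Cluster is running in "flex" mode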
Changing Hub Node to a Leaf Node
Node rhel12c1 will be changed from a hub node to a leaf node. The current role can be listed with
[root@rhel12c1 oracle]# crsctl get node role config
Node 'rhel12c1' configured role is 'hub'
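In addition to the configured role, crsctl can report the active role of the node, which only changes after the stack is restarted. The command below is standard syntax and the output line is an illustration of what the node would report at this point, not captured output.
[grid@rhel12c1 ~]$ crsctl get node role status
Node 'rhel12c1' active role is 'hub'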
To change the role of the node, run the following command. This must be run as root on the node undergoing the role change.
[root@rhel12c1 grid]# crsctl set node role leaf
CRS-4408: Node 'rhel12c1' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.
Restart the clusterware stack on the node after the role change command has been executed.
# crsctl stop crs
# crsctl start crs -wait
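Before checking the role, it can help to confirm the stack came back up cleanly. crsctl check crs is a standard way to do this; the output shown is an illustrative example from a healthy node rather than output captured from this run, and the services reported on a leaf node may differ slightly.
[root@rhel12c1 grid]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online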
Verify the node started as a leaf node.
[root@rhel12c1 grid]# crsctl get node role config
Node 'rhel12c1' configured role is 'leaf'
The cluster now consists of a hub node and a leaf node.
[root@rhel12c2 grid]# crsctl get node role config -all
Node 'rhel12c1' configured role is 'leaf'
Node 'rhel12c2' configured role is 'hub'
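The node roles can also be listed with olsnodes. The command is standard; the output lines below are a sketch of what this two-node setup would show at this stage, not captured output.
[grid@rhel12c2 ~]$ olsnodes -a
rhel12c1 Leaf
rhel12c2 Hub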
The database instance that was running while rhel12c1 was a hub node is no longer active and is in a shutdown state.
ora.rac12c1.db
1 ONLINE OFFLINE Instance Shutdown,STABLE
2 ONLINE ONLINE rhel12c2 Open,STABLE
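The same can be confirmed with srvctl. Assuming the database unique name is rac12c1 (inferred from the resource name ora.rac12c1.db) and default instance naming, the check and an illustrative output would look like the following; the instance names are assumptions.
[oracle@rhel12c2 ~]$ srvctl status database -db rac12c1
Instance rac12c11 is not running on node rhel12c1
Instance rac12c12 is running on node rhel12c2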
There are no resources running on the leaf node.
[root@rhel12c1 grid]# crsctl stat res -t -c rhel12c1
Finally, update the inventory as follows. Run the updateNodeList command on the hub nodes, listing all remaining hub nodes in the CLUSTER_NODES option. In this case rhel12c2 is the only hub node.
[grid@rhel12c2 ~]$ $GI_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.1.0/grid2 "CLUSTER_NODES={rhel12c2}" -silent -local CRS=TRUE
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 4506 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
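To double-check what the central inventory now records for the grid home, the node list in inventory.xml can be inspected. This is a hedged sketch that assumes the usual inventory layout, with inventory_loc recorded in /etc/oraInst.loc and a NODE_LIST element under the home entry; after this step it should list only rhel12c2 against the grid home.
[grid@rhel12c2 ~]$ INV_LOC=$(awk -F= '/inventory_loc/ {print $2}' /etc/oraInst.loc)
[grid@rhel12c2 ~]$ grep -A 3 'grid2' "$INV_LOC/ContentsXML/inventory.xml"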
On the leaf node, run the updateNodeList command with only the leaf node in the CLUSTER_NODES option.
[grid@rhel12c1 ~]$ $GI_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.1.0/grid2 "CLUSTER_NODES={rhel12c1}" -silent -local CRS=TRUE
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 5112 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
This concludes converting the hub node to a leaf node.
Changing Leaf Node to a Hub Node
The leaf node created in the previous step will now be changed back to a hub node. The current node role is
[grid@rhel12c1 ~]$ crsctl get node role config
Node 'rhel12c1' configured role is 'leaf'
As root, change the node role to hub node with
[root@rhel12c1 grid]# crsctl set node role hub
CRS-4408: Node 'rhel12c1' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.
Stop the cluster stack on the node undergoing the role change.
# crsctl stop crs
If the node had started out as a leaf node, it might also require a VIP to be configured before changing it to a hub node. As this node was originally a hub node, the VIP already exists and that step is not needed.
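If a VIP had been needed, a minimal sketch using srvctl (run as root) might look like the following; the network number, VIP name and netmask are illustrative assumptions, not values taken from this cluster.
# network number, VIP address and netmask below are made-up example values
srvctl add vip -node rhel12c1 -netnum 1 -address rhel12c1-vip/255.255.255.0
srvctl start vip -node rhel12c1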
Configure the Oracle ASM Filter Driver by running the following as root. Once the ASM Filter Driver is configured, start the cluster stack on the node and verify the node role.
[root@rhel12c1 grid]# /opt/app/12.1.0/grid2/bin/asmcmd afd_configure
Connected to an idle instance.
AFD-627: AFD distribution files found.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.
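The state of the ASM Filter Driver can be checked with asmcmd at this point. The output line below is an illustrative example of what a successfully configured driver reports, not output captured from this run.
[grid@rhel12c1 ~]$ /opt/app/12.1.0/grid2/bin/asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'rhel12c1'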
[root@rhel12c1 grid]# crsctl start crs -wait
[grid@rhel12c1 ~]$ crsctl get node role config
Node 'rhel12c1' configured role is 'hub'
Resources that were down while the node was in the leaf node role are now up and running again.
[root@rhel12c1 grid]# crsctl stat res -t -c rhel12c1
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE rhel12c1 STABLE
ora.DATA.dg
ONLINE ONLINE rhel12c1 STABLE
ora.FRA.dg
ONLINE ONLINE rhel12c1 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE rhel12c1 STABLE
ora.NEWCLUSTERDG.dg
ONLINE ONLINE rhel12c1 STABLE
ora.net1.network
ONLINE ONLINE rhel12c1 STABLE
ora.ons
ONLINE ONLINE rhel12c1 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rhel12c1 STABLE
ora.asm
2 ONLINE ONLINE rhel12c1 Started,STABLE
ora.rac12c1.db
1 ONLINE ONLINE rhel12c1 Open,STABLE
ora.rhel12c1.vip
1 ONLINE ONLINE rhel12c1 STABLE
ora.scan1.vip
1 ONLINE ONLINE rhel12c1 STABLE
--------------------------------------------------------------------------------
Run the inventory update specifying all hub nodes in the CLUSTER_NODES option.
$GI_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.1.0/grid2 "CLUSTER_NODES={rhel12c1,rhel12c2}" -silent -local CRS=TRUE
This concludes changing the leaf node back to a hub node.