This article is a simple guide to configuring Red Hat Cluster Suite (RHCS) in vSphere 6 using shared disks and VMware fencing with the fence_vmware_soap fence device.
The cluster node VMs can use RDMs (physical or virtual), or they can use shared VMDKs with the multi-writer option enabled on the SCSI devices.
Below is my cluster node configuration:
- RHEL 6
- 4 vCPU and 12 GB RAM
- 80 GB thick-provisioned lazy-zeroed disk for the OS
- 5 GB shared quorum disk on a second SCSI controller with physical bus sharing (scsi1:0 and scsi1:1 set to "multi-writer")
- 100 GB shared data drive for the application
- Single network card
After creating node 1, cln1.mylab.local, I cloned it to create cln2.mylab.local, assigned a new IP, and added the shared resources.
I added the quorum and data drives on node 1, then attached the same drives to node 2 using the "Add existing hard disk" option in vCenter.
As I wanted to keep my nodes on separate physical servers (ESXi hosts) for hardware failure resiliency, I had to use physical bus sharing. And because my quorum and data drives are VMDKs rather than shared SAN LUNs, I had to enable the SCSI multi-writer flag in the VMX advanced configuration for those SCSI devices: although VMFS is a clustered file system, a VMDK can generally be accessed by only one powered-on VM at a time, and the multi-writer flag lifts that restriction, as sketched below.
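For my layout, the relevant VMX advanced configuration entries would look roughly like this (a sketch; scsi1 is the second controller from the list above, so adjust the SCSI IDs to your own layout):

scsi1.sharedBus = "physical"
scsi1:0.sharing = "multi-writer"
scsi1:1.sharing = "multi-writer"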
Also make sure Changed Block Tracking (ctk) is disabled for the VM; check the linked KB.
http://kb.vmware.com/kb/2110452
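Per the linked KB, Changed Block Tracking should be off when multi-writer disks are in use. The corresponding VMX entries would look like this (a sketch matching my disk layout above):

ctkEnabled = "FALSE"
scsi1:0.ctkEnabled = "FALSE"
scsi1:1.ctkEnabled = "FALSE"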
From the Conga GUI in RHEL, follow the instructions in the KBs linked below to create a cluster, add the cluster nodes, and add the VMware fence device. This article will make a lot more sense once you have gone through the Red Hat KBs.
For someone who is new to fencing, the explanation below from Red Hat is excellent:
A key aspect of Red Hat cluster design is that a system must be configured with at least one fencing device to ensure that the services that the cluster provides remain available when a node in the cluster encounters a problem. Fencing is the mechanism that the cluster uses to resolve issues and failures that occur. When you design your cluster services to take advantage of fencing, you can ensure that a problematic cluster node will be cut off quickly and the remaining nodes in the cluster can take over those services, making for a more resilient and stable cluster.
After the cluster creation, a GFS file system has to be created to use the shared storage for the cluster. A running RHEL cluster is a mandatory prerequisite for creating the clustered file system, GFS (GFS2 on RHEL 6); see the sketch after the KB links below.
https://access.redhat.com/solutions/63671
https://access.redhat.com/node/68064
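As a minimal sketch of the GFS2 creation step, assuming a cluster named "mycluster" in cluster.conf, two journals for my two nodes, and the shared data disk visible as /dev/sdb with a mount point of /data (all illustrative names; the lock table name must be clustername:fsname):

mkfs.gfs2 -p lock_dlm -t mycluster:gfs_data -j 2 /dev/sdb
mount -t gfs2 /dev/sdb /data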
Run the clustat command to verify the cluster creation.
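On a healthy two-node cluster, the output should look roughly like this (illustrative; the cluster and node names are from my lab):

Member Status: Quorate

 Member Name                      ID   Status
 ------ ----                      ---- ------
 cln1.mylab.local                 1    Online, Local
 cln2.mylab.local                 2    Online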
Configure the nodes by adding the VM details for each: the guest name and UUID (retrieved with the command further below).
In the shared fence device option under the cluster tab, provide the vCenter server details and the account used for fencing.
Hostname: DNS name of your vCenter
Login: the fencing account. I preferred to create a domain account (fence@mylab.local), as my vCenter is Windows-based, and to grant specific permissions to the domain account.
A vCenter role dedicated to the fencing task was created and assigned to the fence@mylab.local user. The role requires permission to perform VM power operations.
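For reference, the resulting entries in /etc/cluster/cluster.conf would look roughly like this (a sketch using my lab names; the device name "vcfence" is illustrative, and the UUID comes from the list command in the next step):

<clusternode name="cln1.mylab.local" nodeid="1">
  <fence>
    <method name="vmware">
      <device name="vcfence" uuid="5453d1874-b34f-711d-4167-3d9ty3f24647" ssl="on"/>
    </method>
  </fence>
</clusternode>
...
<fencedevices>
  <fencedevice agent="fence_vmware_soap" name="vcfence" ipaddr="vcenter.mylab.local" login="fence@mylab.local" passwd="mypasswd" ssl="on"/>
</fencedevices>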
Run the command below to list the guest names and UUIDs.
fence_vmware_soap -z -l "fence@mylab.local" -p mypasswd -a vcenter.mylab.local -o list
cln1.mylab.local, 5453d1874-b34f-711d-4167-3d9ty3f24647
cln2.mylab.local, 5643b341-39fc-1383-5e6d-3a71re4c540d
The cluster is now ready to be tested. If you encounter issues with a particular node, you can expect the fence device to shut it down to prevent further problems.
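Before relying on it, you can also test fencing by hand. Checking a node's power status via the fence agent, and then fencing it through the cluster stack (run from the other node, cln1 in my lab), would look like this:

fence_vmware_soap -z -l "fence@mylab.local" -p mypasswd -a vcenter.mylab.local -o status -n cln2.mylab.local
fence_node cln2.mylab.local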
Leave a comment if you have queries….
Q: Is there any problem with the cluster if vCenter goes down or the fencing user account is blocked?
A: Sorry for the late reply. Yes, fencing would stop working, since fencing relies on the vCenter user having permission to perform power operations on the nodes.