
Options to copy files from VM Guest to Client PC or vice versa

Not many of us use the clipboard option available in the vSphere Client to copy and paste between the client PC and VM consoles.

I just wanted to share a couple of options that can be used other than the usual RDP or SMB transfers.

  1. Enable clipboard copy in vSphere Client
  • Log in to a vCenter Server system using the vSphere Client and power off the virtual machine.
  • Select the virtual machine and click the Summary tab.
  • Click Edit Settings.
  • Navigate to Options > Advanced > General and click Configuration Parameters.
  • Click Add Row.
  • Type these values in the Name and Value columns:

 

isolation.tools.copy.disable        FALSE
isolation.tools.paste.disable       FALSE
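
If you prefer not to click through the GUI, the same two parameters can also be set from PowerCLI. This is only a sketch (the VM name is a placeholder of mine, not part of the KB steps), and the VM still needs to be powered off or restarted for the change to apply, as above:

# Add or overwrite the two isolation settings on a single VM ("myVM" is a placeholder)
$vm = Get-VM -Name "myVM"
New-AdvancedSetting -Entity $vm -Name "isolation.tools.copy.disable" -Value "FALSE" -Confirm:$false -Force
New-AdvancedSetting -Entity $vm -Name "isolation.tools.paste.disable" -Value "FALSE" -Confirm:$false -Force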

To enable the same across all VMs on a host:

  • Log in to the ESX/ESXi host as a root user.
  • Take a backup of the /etc/vmware/config file.
  • Open the /etc/vmware/config file using a text editor.
  • Add these entries to the file:

vmx.fullpath = "/bin/vmx"
isolation.tools.copy.disable = "FALSE"
isolation.tools.paste.disable = "FALSE"

  • Save and Close the file.
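
For example, from an SSH session on the host, the backup and the two isolation entries could be added like this (a sketch; I have left out the vmx.fullpath line shown above, add it as well if it is not already in the file):

cp /etc/vmware/config /etc/vmware/config.bak
echo 'isolation.tools.copy.disable="FALSE"' >> /etc/vmware/config
echo 'isolation.tools.paste.disable="FALSE"' >> /etc/vmware/config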

The VMs have to be restarted for the changes to take effect.

  2. Copy-VMGuestFile

This can be used with PowerCLI (32-bit) and it's very helpful if you want to move files to or from VMs.

Especially for the ones in a DMZ where clipboard or RDP access is disabled for security reasons.

Copy-VMGuestFile -Source c:\test.txt -Destination c:\temp\ -VM myVM -GuestToLocal -HostUser root -HostPassword pass1 -GuestUser user -GuestPassword pass2

Make sure you are connected to vCenter using the FQDN.
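
For example (the vCenter name and credentials below are placeholders of mine):

# Connect to vCenter by FQDN before running Copy-VMGuestFile
Connect-VIServer -Server vcenter.mylab.local -User "admin@mylab.local" -Password "pass"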

Copy-VMGuestFile uses the vSphere API and follows the procedure below (a PowerCLI sketch of the same flow is shown after the list).


  1. A vSphere Web services client program calls a function in the vSphere API.
  2. The client sends a SOAP command over https (port 443) to vCenter Server.
  3. The vCenter Server passes the command to the host agent process hostd, which sends it to VMX.
  4. VMX relays the command to VMware Tools in the guest.
  5. VMware Tools has the Guest OS execute the guest operation.
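
To make the flow above a bit more concrete, here is a rough PowerCLI Get-View sketch of the same guest operations API calls for the guest-to-local direction. Treat it as illustrative only; the object and property names are from my reading of the GuestOperationsManager API and should be verified against your PowerCLI and vSphere version.

# Locate the guest operations file manager through the vSphere API
$vm   = Get-VM -Name "myVM"
$gom  = Get-View -Id (Get-View ServiceInstance).Content.GuestOperationsManager
$fmgr = Get-View -Id $gom.FileManager

# Guest OS credentials (placeholders)
$auth = New-Object VMware.Vim.NamePasswordAuthentication
$auth.Username = "user"
$auth.Password = "pass2"
$auth.InteractiveSession = $false

# VMware Tools in the guest returns a one-time HTTPS URL for the file,
# which is then downloaded over port 443
# (self-signed certificates may need to be trusted first)
$xfer = $fmgr.InitiateFileTransferFromGuest($vm.ExtensionData.MoRef, $auth, "c:\test.txt")
Invoke-WebRequest -Uri $xfer.Url -OutFile "c:\temp\test.txt"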

The same cmdlet can be used to copy files from the client to a VM.

Copy-VMGuestFile -VM myVM -LocalToGuest -Source "C:\temp\test.txt" -Destination "c:\test" -GuestUser "GuestAccountName" -GuestPassword "password"

RHEL Clustering in vSphere with fence_vmware_soap as fencing device.

This article is a simple guide to configuring Red Hat Cluster Suite (RHCS) in vSphere 6 using shared disks, with VMware fencing provided by the fence_vmware_soap fence device.

The cluster node VMs can use RDMs (physical or virtual compatibility), or they can use shared VMDKs with the multi-writer option enabled on the SCSI devices.

Below is my Cluster node configuration:

  • RHEL 6
  • 4 vCPU and 12 GB RAM
  • 80 GB Thick Provisioned Lazy Zeroed disk for the OS
  • 5 GB shared quorum disk on a second SCSI controller with physical bus sharing (scsi1:0.sharing = "multi-writer", scsi1:1.sharing = "multi-writer")
  • 100 GB shared data drive for the application
  • Single network card

After creating node 1, cln1.mylab.local, I cloned it to create cln2.mylab.local, assigned a new IP and added the shared resources.

I added the quorum drive and the data drive on node 1 and then attached the same drives on node 2 using the Add Existing Hard Disk option in vCenter.

As I wanted to keep my nodes on separate physical servers (ESXi hosts) to provide hardware failure resiliency, I had to use physical bus sharing. And because my quorum and data drives are not shared SAN LUNs but VMDKs, I had to enable the SCSI multi-writer flag in the VMX advanced configuration for those SCSI devices. Although VMFS is a clustered file system, a VMDK can normally be opened by only one powered-on VM at a time.

Also make sure Changed Block Tracking (ctk) is disabled for the VM. Check the linked KB.

http://kb.vmware.com/kb/2110452
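
For illustration, the relevant advanced/VMX entries on both node VMs end up looking roughly like this (controller and disk positions taken from my configuration above; double-check the exact keys against the VMware KBs):

scsi1:0.sharing = "multi-writer"
scsi1:1.sharing = "multi-writer"
ctkEnabled = "FALSE"
scsi1:0.ctkEnabled = "FALSE"
scsi1:1.ctkEnabled = "FALSE"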

From the Conga GUI in RHEL, follow the instructions in the Red Hat KBs below to create a cluster, add the cluster nodes and add the VMware fence device. This article will make a lot more sense once you have gone through the Red Hat KBs.

For someone who is new to fencing, the explanation below from Red Hat is excellent.

A key aspect of Red Hat cluster design is that a system must be configured with at least one fencing device to ensure that the services that the cluster provides remain available when a node in the cluster encounters a problem. Fencing is the mechanism that the cluster uses to resolve issues and failures that occur. When you design your cluster services to take advantage of fencing, you can ensure that a problematic cluster node will be cut off quickly and the remaining nodes in the cluster can take over those services, making for a more resilient and stable cluster.

After the cluster creation, GFS has to be set up to use the shared storage for the cluster. A running RHEL cluster is a mandatory requirement for creating the clustered file system, GFS.

https://access.redhat.com/solutions/63671

https://access.redhat.com/node/68064
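
As a rough example, once the cluster is up, a two-journal GFS2 file system can be created on the shared data device and mounted on both nodes (the cluster name, file system name and LV path below are assumptions from my lab, not fixed values):

mkfs.gfs2 -p lock_dlm -t mycluster:gfs_data -j 2 /dev/vg_cluster/lv_data
mount -t gfs2 /dev/vg_cluster/lv_data /data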


Run the clustat command to verify the cluster creation.

Configure the nodes by adding the VM node details: the guest name and UUID.



In the shared fence device option under the cluster tab, provide the vCenter Server details and the account used for fencing.

Hostname: DNS name of your vCenter

Login: the fencing account. I preferred to create a domain account (fence@mylab.local), as my vCenter is Windows-based, and to grant specific permissions to that account.

A vCenter role dedicated to the fencing task was created and assigned to the fence@mylab.local user. The role requires permission to perform VM power operations.
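
Roughly, the resulting cluster.conf entries look like the snippet below. I am reproducing the attribute names from memory of the Red Hat examples (port is the guest name, uuid is the value returned by fence_vmware_soap further down), so verify them against the KBs above before relying on this:

<clusternode name="cln1.mylab.local" nodeid="1">
  <fence>
    <method name="vmware">
      <device name="vmwarefence" port="cln1.mylab.local" uuid="node1-uuid-here" ssl="on"/>
    </method>
  </fence>
</clusternode>

<fencedevices>
  <fencedevice agent="fence_vmware_soap" name="vmwarefence" ipaddr="vcenter.mylab.local" login="fence@mylab.local" passwd="mypasswd" ssl="on"/>
</fencedevices>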


Run the command below to list the guest names and UUIDs.

fence_vmware_soap -z -l “fence@mylab.local” -p mypasswd -a vcenter.mylab.local -o list

cln1.mylab.local, 5453d1874-b34f-711d-4167-3d9ty3f24647

cln2.mylab.local, 5643b341-39fc-1383-5e6d-3a71re4c540d

The cluster is now ready to be tested. If a particular node encounters a problem, you can expect the fencing device to shut it down to protect the rest of the cluster.
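
A simple way to test fencing manually is to check the status through the fence agent and then fence one node from the other, for example (same arguments as the list command above):

fence_vmware_soap -z -l "fence@mylab.local" -p mypasswd -a vcenter.mylab.local -o status -n cln2.mylab.local
fence_node cln2.mylab.local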

Leave a comment if you have queries….