Q:10 What is fencing and how is it configured in Red Hat Cluster / Pacemaker?
Ans: Fencing is a technique or method to power off or isolate a faulty node in the cluster. Fencing is a very important component of a cluster; Red Hat Cluster will not start resource and service recovery for a non-responsive node until that node has been fenced.
In Red Hat Clustering, fencing is configured via the “pcs stonith” command, where STONITH stands for “Shoot The Other Node In The Head”. The general syntax is,
~]# pcs stonith create <fence_device_name> <fencing_agent> [parameters]
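For example, a minimal sketch of creating an IPMI-based fence device (the device name ipmi_fence, the node name, the BMC IP address and the credentials below are illustrative placeholders, not values from any real setup):
~]# pcs stonith create ipmi_fence fence_ipmilan pcmk_host_list="nodea.example.com" ipaddr="192.168.1.100" login="admin" passwd="secret" lanplus=1 op monitor interval=60s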
Q:11 How to view fencing configuration and how to fence a cluster node?
Ans: To view the complete fencing configuration, execute the following command from any of the cluster nodes,
~]# pcs stonith show --full
To fence a cluster node manually, use the following command,
~]# pcs stonith fence nodeb.example.com
Q:12 What is Storage based fencing device and how to create storage based fencing device?
Ans: As the name suggests, a storage-based fence device cuts off the faulty cluster node’s access to shared storage; it does not power off or terminate the node.
Let’s assume a shared disk such as “/dev/sda” is assigned to all the cluster nodes; you can then create the storage-based fencing device using the below command,
~]# pcs stonith create {Name_Of_Fence_Device} fence_scsi devices=/dev/sda meta provides=unfencing
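For example, with a hypothetical fence device name of scsi_fence and the shared disk /dev/sda from above:
~]# pcs stonith create scsi_fence fence_scsi devices=/dev/sda meta provides=unfencing
Since fence_scsi works by removing a node’s SCSI-3 persistent reservation key from the device, the registered keys can be inspected (assuming the sg3_utils package is installed) with:
~]# sg_persist --in --read-keys /dev/sda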
Use the following command to fence any cluster node for fence testing,
~]# pcs stonith fence {Cluster_Node_Name}
Q:13 How to display the useful information about the cluster resource?
Ans: To display information about a cluster resource, use the following command from any of the cluster nodes,
~]#  pcs resource describe {resource_name}
Example:
~]# pcs resource describe Filesystem
To display the list of all the resource agents available to the cluster, use the below command,
~]# pcs resource list
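The output of “pcs resource list” can be narrowed with a filter string; for example, to list only the OCF heartbeat resource agents:
~]# pcs resource list ocf:heartbeat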
Q:14 Tell me the syntax to create a resource in Red Hat Cluster?
Ans: Use the below syntax to create a resource in Red Hat Cluster / Pacemaker,
~]# pcs resource create {resource_name} {resource_provider} {resource_parameters} --group {group_name}
Let’s assume we want to create a Filesystem resource,
~]# pcs resource create my_fs Filesystem device=/dev/sdb1 directory=/var/www/html fstype=xfs --group my_group
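To confirm the configuration of the newly created resource (using the hypothetical name my_fs from above):
~]# pcs resource show my_fs
On newer pcs versions, “pcs resource config my_fs” serves the same purpose.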
Q:15 How to list and clear the fail count of a cluster resource?
Ans: The fail count of a cluster resource can be displayed using the following command,
~]# pcs resource failcount show
To clear or reset the failcount of a cluster resource, use the below pcs command,
~]# pcs resource failcount reset {resource_name} {cluster_node_name}
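For example, using the hypothetical resource and node names from earlier questions:
~]# pcs resource failcount show my_fs
~]# pcs resource failcount reset my_fs nodeb.example.com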
Q:16 How to move a cluster resource from one node to another?
Ans: Cluster resources and resource groups can be moved from one cluster node to another using the below command,
~]# pcs resource move {resource_or_resource_group} {cluster_node_name}
When a cluster resource or resource group is moved away from a cluster node, a temporary location constraint is created for that node, meaning the resource or resource group cannot run on that node again until the constraint is removed. To remove the constraint, use the following command,
~]# pcs resource clear {resource_or_resource_group} {cluster_node_name}
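For example, moving a hypothetical resource group my_group to nodeb.example.com, inspecting the temporary constraint, and then clearing it:
~]# pcs resource move my_group nodeb.example.com
~]# pcs constraint show
~]# pcs resource clear my_group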
Q:17 What are the default log files for Pacemaker and Corosync?
Ans: The default log file for Pacemaker is “/var/log/pacemaker.log”, while Corosync logs to “/var/log/messages” via syslog by default.
Q:18 What are constraints and what are their types?
Ans: Constraints are restrictions or rules that determine where cluster resources may run and the order in which they are started and stopped. Constraints are classified into three types (see the example commands after the list),
  • Order constraints – Decide the order in which resources or resource groups are started and stopped.
  • Location constraints – Decide on which nodes resources or resource groups may run.
  • Colocation constraints – Decide whether two resources or resource groups may run on the same node.
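A minimal sketch of creating one constraint of each type (the resource names my_fs and my_web and the node name are hypothetical):
~]# pcs constraint order start my_fs then start my_web
~]# pcs constraint location my_web prefers nodea.example.com
~]# pcs constraint colocation add my_web with my_fs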
Q:19 How to use LVM (Logical Volume Manager) on shared storage in Red Hat Clustering / Pacemaker?
Ans: There are two different ways to use LVM on shared storage in a cluster,
  • HA-LVM (a volume group and its logical volumes can be accessed by only one node at a time; can be used with traditional file systems such as ext4 and xfs)
  • Clustered LVM (commonly used while working with shared file systems like GFS2)
Q:20 What are logical steps to configure HA-LVM in Red Hat Cluster?
Ans: Below are the logical steps to configure HA-LVM,
Let’s assume shared storage is provisioned on all the cluster nodes,
a) On any one of the cluster nodes, run pvcreate, vgcreate and lvcreate on the shared storage disk, as shown in the sketch below.
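For instance, assuming the shared disk is “/dev/sdb” and using the volume group name cluster_vg that appears in step f (the logical volume name cluster_lv is also a placeholder):
~]# pvcreate /dev/sdb
~]# vgcreate cluster_vg /dev/sdb
~]# lvcreate -n cluster_lv -l 100%FREE cluster_vg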
b) Format the logical volume on the storage disk with a file system; a sketch follows.
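Continuing with the hypothetical volume group and logical volume names from step a):
~]# mkfs.xfs /dev/cluster_vg/cluster_lv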
c) On each cluster node, enable HA-LVM tagging in the file “/etc/lvm/lvm.conf”,
locking_type = 1
Also define the volume groups that are not shared in the cluster,
volume_list = [ "rootvg", "logvg" ]
Here rootvg & logvg are OS volume groups and are not shared among the cluster nodes.
d) On each cluster node, rebuild initramfs using the following command,
~]# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r) ; reboot
e) Once all the cluster nodes are rebooted, verify the cluster status using the “pcs status” command.
f)  On any of cluster node , create LVM resource using below command,
~]# pcs resource create ha_lvm LVM volumegroup=cluster_vg exclusive=true --group halvm_fs
g) Now create the Filesystem resource from any of the cluster nodes,
~]# pcs resource create xfs_fs Filesystem device="/dev/{volume-grp}/{logical_volume}" directory="/mnt" fstype="xfs" --group halvm_fs
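With the hypothetical names from steps a) and b), the command would look like,
~]# pcs resource create xfs_fs Filesystem device="/dev/cluster_vg/cluster_lv" directory="/mnt" fstype="xfs" --group halvm_fs
Finally, verify that both resources have started with the “pcs status” command.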