What is fencing and how is it configured in Red Hat Cluster / Pacemaker?
Ans: Fencing is a technique or method to power off or terminate a faulty node in the cluster. Fencing is a very important component of a cluster; Red Hat Cluster will not start resource and service recovery for a non-responsive node until that node has been fenced.
In Red Hat Clustering, fencing is configured via “pcs stonith”, where STONITH stands for “Shoot The Other Node In The Head”.
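As a sketch, a fence device can be created with `pcs stonith create`. The device name, node name, IP address, and credentials below are placeholders, and fence_ipmilan is only one of many available fence agents:

```shell
# List the fence agents installed on the system
pcs stonith list

# Create an IPMI-based fence device (all names and credentials are examples)
pcs stonith create fence_node1 fence_ipmilan \
    pcmk_host_list="node1.example.com" ipaddr="192.168.1.10" \
    login="admin" passwd="password" lanplus="1"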
Q:11 How to view fencing configuration and how to fence a cluster node?
Ans: To view the complete fencing configuration, execute the following command from any node,
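For example (on newer pcs releases the equivalent command is `pcs stonith config`):

```shell
pcs stonith show --full
```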
To fence a node manually, use the following command,
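A sketch, where the node name is a placeholder:

```shell
pcs stonith fence node2.example.com
```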
Q:12 What is a storage-based fencing device and how to create one?
Ans: As the name suggests, a storage-based fence device cuts off the faulty cluster node from storage access; it does not power off or terminate the cluster node.
Let’s assume shared storage like “/dev/sda” is assigned to all the cluster nodes; then create the storage-based fencing device using the below command,
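A sketch using the fence_scsi agent (the device name comes from the example above; node names are placeholders, and fence_scsi requires the unfencing meta attribute):

```shell
pcs stonith create scsi_fence fence_scsi \
    pcmk_host_list="node1.example.com node2.example.com" \
    devices="/dev/sda" meta provides="unfencing"
```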
Use the following command to fence any cluster node for fence testing,
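For example (node name is a placeholder):

```shell
pcs stonith fence node2.example.com
```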
Q:13 How to display useful information about a cluster resource?
Ans: To display information about any cluster resource, use the following command from any of the cluster nodes,
Example:
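A sketch, where the resource name `webserver_fs` is a placeholder (newer pcs releases use `pcs resource config` instead):

```shell
pcs resource show webserver_fs
```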
To display the list of all the resources of a cluster, use the following command,
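For example:

```shell
pcs resource show
```

On newer pcs releases, `pcs resource status` or `pcs status resources` gives the same listing.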
Q:14 Tell me the syntax to create a resource in Red Hat cluster?
Ans: Use the below syntax to create a resource in Red Hat Cluster / Pacemaker,
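The general form (angle brackets mark the values you fill in):

```shell
pcs resource create <resource_name> <resource_agent> [resource_options] [--group <group_name>]
```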
Let’s assume we want to create a Filesystem resource,
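A sketch, assuming a logical volume, mount point, and group name of our choosing (all placeholders):

```shell
pcs resource create web_fs Filesystem \
    device="/dev/sharedvg/sharedlv" directory="/var/www" \
    fstype="xfs" --group webgroup
```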
Q:15 How to list and clear the fail count of a cluster resource?
Ans: The fail count of a cluster resource can be displayed using the following command,
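For example (resource name is a placeholder; run without a resource name to list all fail counts):

```shell
pcs resource failcount show web_fs
```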
To clear or reset the failcount of a cluster resource, use the below pcs command,
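A sketch (resource name is a placeholder):

```shell
# Reset the fail count for the resource
pcs resource failcount reset web_fs

# Alternatively, clean up the resource state, which also clears fail counts
pcs resource cleanup web_fs
```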
Q:16 How to move a cluster resource from one node to another?
Ans: Cluster resources and resource groups can be moved away from a cluster node using the below command,
When a cluster resource or resource group is moved away from a cluster node, a temporary location constraint is created for that node, which means the resource or resource group can no longer run on that cluster node. To remove that constraint, use the following command,
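For example (resource and node names are placeholders):

```shell
pcs resource move web_fs node2.example.com
```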
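A sketch (resource name is a placeholder):

```shell
pcs resource clear web_fs
```

On older pcs releases, list the constraint IDs with `pcs constraint list --full` and remove the temporary one with `pcs constraint remove <constraint_id>`.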
Q:17 What is the default log file for pacemaker and corosync?
Ans: The default log file for pacemaker is “/var/log/pacemaker.log”, and for corosync it is “/var/log/messages”.
Q:18 What are constraints and what are their types?
Ans: Constraints are restrictions or rules that determine the order in which cluster resources are started and stopped, and where they may run. Constraints are classified into three types,
- Order constraints – decide the order in which resources or resource groups are started and stopped.
- Location constraints – decide on which nodes resources or resource groups may run.
- Colocation constraints – decide whether two resources or resource groups may run on the same node.
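The three types above can be sketched with pcs (resource and node names are placeholders):

```shell
# Order constraint: start the virtual IP before the web server
pcs constraint order start vip then webserver

# Location constraint: prefer node1 for the virtual IP
pcs constraint location vip prefers node1.example.com

# Colocation constraint: keep the web server on the same node as the virtual IP
pcs constraint colocation add webserver with vip
```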
Q:19 How to use LVM (Logical Volume Manager) on shared storage in Red Hat clustering / Pacemaker?
Ans: There are two different ways to use LVM on shared storage in a cluster,
- HA-LVM (a volume group and its logical volumes can be accessed by only one node at a time; can be used with traditional file systems such as ext4 and xfs)
- Clustered LVM (commonly used while working with a shared file system like GFS2)
Q:20 What are logical steps to configure HA-LVM in Red Hat Cluster?
Ans: Below are the logical steps to configure HA-LVM,
Let’s assume shared storage is provisioned on all the cluster nodes,
a) On any one cluster node, run pvcreate, vgcreate, and lvcreate on the shared storage disk
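A sketch, using the shared disk “/dev/sda” from the earlier example; the volume group and logical volume names are placeholders:

```shell
pvcreate /dev/sda
vgcreate sharedvg /dev/sda
lvcreate -n sharedlv -l 100%FREE sharedvg
```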
b) Format the logical volume on the shared storage disk
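For example, with xfs (names carried over from the sketch above):

```shell
mkfs.xfs /dev/sharedvg/sharedlv
```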
c) On each cluster node, enable HA-LVM tagging in the file “/etc/lvm/lvm.conf”
Also list the local volume groups that are not shared in the cluster,
rootvg & logvg are OS volume groups and are not shared among the cluster nodes.
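In “/etc/lvm/lvm.conf” this corresponds to a volume_list entry naming only the local volume groups, so the shared volume group is never auto-activated outside the cluster's control (a sketch using the volume group names from above):

```
# /etc/lvm/lvm.conf - only local (non-shared) volume groups auto-activate
volume_list = [ "rootvg", "logvg" ]
```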
d) On each cluster node, rebuild the initramfs using the following command, then reboot the node,
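A sketch of rebuilding the initramfs for the running kernel and rebooting:

```shell
dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
reboot
```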
e) Once all the cluster nodes have been rebooted, verify the cluster status using the “pcs status” command,
f) On any cluster node, create the LVM resource using the below command,
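A sketch with the ocf:heartbeat:LVM resource agent; resource and group names are placeholders, and the volume group name comes from the earlier steps (newer RHEL releases use the LVM-activate agent instead):

```shell
pcs resource create halvm LVM volgrpname="sharedvg" exclusive=true --group ha_group
```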
g) Now create the Filesystem resource from any cluster node,
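A sketch, placing the file system in the same resource group as the LVM resource so they run together; the mount point and names are placeholders:

```shell
pcs resource create ha_fs Filesystem \
    device="/dev/sharedvg/sharedlv" directory="/mnt/data" \
    fstype="xfs" --group ha_group
```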