Cluster Basics
A cluster is two or more computers
(called nodes or members) that
work together to perform a task.
Types of clusters
• Storage
• High availability
• Load balancing
• High performance
Storage
Storage clusters provide a consistent file system
image across servers in a cluster, allowing the
servers to simultaneously read and write to a
single shared file system. A storage cluster
simplifies storage administration by limiting the
installation and patching of applications to one file
system. Also, with a cluster-wide file system, a
storage cluster eliminates the need for redundant
copies of application data and simplifies backup
and disaster recovery. Red Hat Cluster Suite
provides storage clustering through Red Hat GFS.
High availability
High-availability clusters provide continuous
availability of services by eliminating single points
of failure and by failing over services from one
cluster node to another in case a node becomes
inoperative. Typically, services in a high-availability
cluster read and write data (via read-write
mounted file systems). Therefore, a high-
availability cluster must maintain data integrity as
one cluster node takes over control of a service
from another cluster node. Node failures in a
high-availability cluster are not visible to clients
outside the cluster.
Load balancing
Load-balancing clusters dispatch network service
requests to multiple cluster nodes to balance the
request load among the cluster nodes. Load
balancing provides cost-effective scalability
because you can match the number of nodes
according to load requirements. If a node in a
load-balancing cluster becomes inoperative, the
load-balancing software detects the failure and
redirects requests to other cluster nodes. Red Hat
Cluster Suite provides load-balancing through LVS
(Linux Virtual Server).
High performance
High-performance clusters use cluster nodes to
perform concurrent calculations.
A high-performance cluster allows applications to work in parallel, thereby enhancing their performance.
High-performance clusters are also referred to as computational clusters or grid computing.
Red Hat Cluster Suite
Red Hat Cluster Suite (RHCS) is an integrated
set of software components that can be
deployed in a variety of configurations to
suit your needs for performance, high-
availability, load balancing, scalability, file
sharing, and economy.
RHCS major components
• Cluster infrastructure — Provides fundamental
functions for nodes to work together as a
cluster: configuration-file management,
membership management, lock management,
and fencing.
• High-availability Service Management —
Provides failover of services from one cluster
node to another in case a node becomes
inoperative.
RHCS major components
• Red Hat GFS (Global File System) — Provides a
cluster file system for use with Red Hat Cluster
Suite. GFS allows multiple nodes to share
storage at a block level as if the storage were
connected locally to each cluster node.
• Cluster Logical Volume Manager (CLVM) —
Provides volume management of cluster
storage.
RHCS major components
• Global Network Block Device (GNBD) — An
ancillary component of GFS that exports block-
level storage to Ethernet. This is an economical
way to make block-level storage available to Red
Hat GFS.
• Linux Virtual Server (LVS) — Routing software that provides IP load balancing. LVS runs in a pair of redundant servers that distribute client requests evenly to real servers behind the LVS servers.
RHCS major components
• Cluster administration tools — Configuration
and management tools for setting up,
configuring, and managing a Red Hat cluster.
The tools are for use with the Cluster
Infrastructure components, the High-availability
and Service Management components, and
storage.
• You can configure and manage other Red Hat
Cluster Suite components through tools for
those components.
Cluster Infrastructure
The Red Hat Cluster Suite cluster infrastructure provides
the basic functions for a group of computers (called nodes
or members) to work together as a cluster. Once a cluster
is formed using the cluster infrastructure, you can use
other Red Hat Cluster Suite components to suit your
clustering needs.
The cluster infrastructure performs the following
functions:
• Cluster management
• Lock management
• Fencing
• Cluster configuration management
Cluster Management
• Cluster management manages cluster quorum
and cluster membership. CMAN (an
abbreviation for cluster manager) performs
cluster management in Red Hat Cluster Suite for
Red Hat Enterprise Linux.
• CMAN is a distributed cluster manager and runs
in each cluster node; cluster management is
distributed across all nodes in the cluster.
Cluster Management
CMAN keeps track of cluster quorum by
monitoring the count of cluster nodes. If more
than half the nodes are active, the cluster has
quorum. If half the nodes (or fewer) are active,
the cluster does not have quorum, and all cluster
activity is stopped. Cluster quorum prevents the
occurrence of a "split-brain" condition — a
condition where two instances of the same cluster
are running. A split-brain condition would allow
each cluster instance to access cluster resources
without knowledge of the other cluster instance,
resulting in corrupted cluster integrity.
Cluster Management - Quorum
Quorum is determined by communication of
messages among cluster nodes via Ethernet.
Optionally, quorum can be determined by a
combination of communicating messages via
Ethernet and through a quorum disk.
For quorum via Ethernet, quorum consists of 50
percent of the node votes plus 1. For quorum via
quorum disk, quorum consists of user-specified
conditions.
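As an illustration (node names and vote counts here are hypothetical), a five-node cluster in which each node contributes one vote has quorum only while at least three votes are present; losing quorum stops all cluster activity. A cluster.conf fragment along these lines might declare the expected votes and the per-node votes:

    <cman expected_votes="5"/>
    <clusternodes>
      <clusternode name="node1.example.com" nodeid="1" votes="1"/>
      <clusternode name="node2.example.com" nodeid="2" votes="1"/>
      <!-- nodes 3 through 5 are declared the same way -->
    </clusternodes>

A two-node cluster cannot lose a node and still hold a strict majority, so CMAN treats it as a special case that is declared explicitly, for example <cman two_node="1" expected_votes="1"/>.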
Lock Management
Lock management is a common cluster-
infrastructure service that provides a mechanism
for other cluster infrastructure components to
synchronize their access to shared resources.
In a Red Hat cluster, DLM (Distributed Lock
Manager) is the lock manager.
DLM is a distributed lock manager and runs in
each cluster node; lock management is distributed
across all nodes in the cluster.
Lock Management
GFS and CLVM use locks from the lock manager.
GFS uses locks from the lock manager to
synchronize access to file system metadata (on
shared storage).
CLVM uses locks from the lock manager to
synchronize updates to LVM volumes and volume
groups (also on shared storage).
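As a minimal sketch of how CLVM ties into the lock manager (the exact steps can vary by release), clustered locking is enabled in LVM's configuration on every node so that clvmd coordinates volume changes through DLM:

    # /etc/lvm/lvm.conf
    # locking_type = 3 selects built-in clustered locking via clvmd
    locking_type = 3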
Fencing
Fencing is the disconnection of a node from the
cluster's shared storage.
Fencing cuts off I/O from shared storage, thus
ensuring data integrity.
The cluster infrastructure performs fencing
through the fence daemon, fenced.
Fencing
When CMAN determines that a node has failed, it
communicates to other cluster-infrastructure
components that the node has failed.
The fence daemon (fenced), when notified of the failure, fences the failed node.
Other cluster-infrastructure components determine what actions to take — that is, they perform any recovery that needs to be done.
Fencing
For example, DLM and GFS, when notified of a
node failure, suspend activity until they detect
that fenced has completed fencing the failed
node.
Upon confirmation that the failed node is fenced,
DLM and GFS perform recovery.
DLM releases locks of the failed node; GFS
recovers the journal of the failed node.
Fencing
The fencing program determines from the cluster
configuration file which fencing method to use.
Two key elements in the cluster configuration file
define a fencing method: fencing agent and fencing
device.
The fencing program makes a call to a fencing agent
specified in the cluster configuration file.
The fencing agent, in turn, fences the node via a
fencing device. When fencing is complete, the
fencing program notifies the cluster manager.
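For example, a hypothetical cluster.conf fragment (addresses and credentials invented for illustration) might pair a power-switch fence device with a per-node fencing method; when fencing is required, fenced calls the fence_apc agent against that device:

    <clusternode name="node1.example.com" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <!-- power-fence node1 via outlet 1 of the shared power switch -->
          <device name="apc-switch" port="1"/>
        </method>
      </fence>
    </clusternode>

    <fencedevices>
      <fencedevice name="apc-switch" agent="fence_apc"
                   ipaddr="192.168.0.100" login="admin" passwd="password"/>
    </fencedevices>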
Fencing Methods
• Power fencing — A fencing method that uses a
power controller to power off an inoperable node.
• Fibre Channel switch fencing — A fencing method
that disables the Fibre Channel port that connects
storage to an inoperable node.
• GNBD fencing — A fencing method that disables an
inoperable node's access to a GNBD server.
• Other fencing — Several other fencing methods that disable I/O or power of an inoperable node, including IBM BladeCenter, PAP, DRAC/MC, HP iLO, IPMI, IBM RSA II, and others.
Diagrams (not reproduced here): power fencing; Fibre Channel switch fencing; fencing a node with dual power supplies; fencing a node with dual Fibre Channel connections.
Cluster Configuration System
The Cluster Configuration System (CCS) manages
the cluster configuration and provides
configuration information to other cluster
components in a Red Hat cluster. CCS runs in each
cluster node and makes sure that the cluster
configuration file in each cluster node is up to
date. For example, if a cluster system
administrator updates the configuration file in
Node A, CCS propagates the update from Node A
to the other nodes in the cluster.
Cluster Configuration System
Other cluster components (for example, CMAN) access configuration information
from the configuration file through CCS.
Cluster Configuration File
The cluster configuration file (/etc/cluster/cluster.conf) is
an XML file that describes the following cluster
characteristics:
• Cluster name — Displays the cluster name, cluster
configuration file revision level, and basic fence timing
properties used when a node joins a cluster or is fenced
from the cluster.
• Cluster — Displays each node of the cluster, specifying
node name, node ID, number of quorum votes, and
fencing method for that node.
• Fence Device — Displays fence devices in the cluster.
Parameters vary according to the type of fence device.
• Managed Resources — Displays resources required to create cluster services. Managed resources include the definition of failover domains, resources (for example, an IP address), and services.
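Putting those four parts together, a skeletal cluster.conf looks roughly like the following (the element names are standard; all values are placeholders):

    <?xml version="1.0"?>
    <cluster name="example-cluster" config_version="1">
      <clusternodes>
        <clusternode name="node1.example.com" nodeid="1" votes="1">
          <fence> ... </fence>
        </clusternode>
      </clusternodes>
      <fencedevices>
        <fencedevice name="apc-switch" agent="fence_apc" ipaddr="192.168.0.100"
                     login="admin" passwd="password"/>
      </fencedevices>
      <rm>
        <failoverdomains> ... </failoverdomains>
        <resources> ... </resources>
        <service name="example-service"> ... </service>
      </rm>
    </cluster>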
High-availability Service Management
• High-availability service management provides
the ability to create and manage high-availability
cluster services in a Red Hat cluster.
• The key component for high-availability service
management in a Red Hat cluster, rgmanager,
implements cold failover for off-the-shelf
applications.
• A high-availability cluster service can fail over
from one cluster node to another with no
apparent interruption to cluster clients.
Failover Domains
• A failover domain is a subset of cluster nodes
that are eligible to run a particular cluster
service.
• Cluster-service failover can occur if a cluster
node fails or if a cluster system administrator
moves the service from one cluster node to
another.
Failover Priority
• A cluster service can run on only one cluster
node at a time to maintain data integrity.
• Specifying failover priority consists of assigning
a priority level to each node in a failover
domain. The priority level determines the
failover order.
• If you do not specify failover priority, a cluster
service can fail over to any node in its failover
domain.
Failover Domains Example
Failover Domain 1 is configured to restrict failover within that
domain; therefore, Cluster Service X can only fail over between
Node A and Node B.
Failover Domain 2 is also configured to restrict failover within its
domain; additionally, it is configured for failover priority. Failover
Domain 2 priority is configured with Node C as priority 1, Node B as
priority 2, and Node D as priority 3. If Node C fails, Cluster Service Y
fails over to Node B next. If it cannot fail over to Node B, it tries
failing over to Node D.
Failover Domain 3 is configured with no priority and no restrictions.
If the node that Cluster Service Z is running on fails, Cluster Service Z
tries failing over to one of the nodes in Failover Domain 3. However,
if none of those nodes is available, Cluster Service Z can fail over to
any node in the cluster.
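Expressed in cluster.conf, Failover Domain 2 from this example might be defined as follows (a sketch; the node names are assumed):

    <failoverdomains>
      <!-- ordered="1" honors the priorities; restricted="1" keeps the
           service on these nodes only -->
      <failoverdomain name="failover-domain-2" ordered="1" restricted="1">
        <failoverdomainnode name="node-c" priority="1"/>
        <failoverdomainnode name="node-b" priority="2"/>
        <failoverdomainnode name="node-d" priority="3"/>
      </failoverdomain>
    </failoverdomains>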
Web Server Cluster Service Example
• In the example, the high-availability cluster service is a web server named "content-webserver".
• It runs on cluster node B and is in a failover domain that consists of nodes A, B, and D.
• In addition, the failover domain is configured with
a failover priority to fail over to node D before
node A and to restrict failover to nodes only in that
failover domain.
Web Server Cluster Service Example
• Clients access the cluster service through the IP
address 10.10.10.201, enabling interaction with the
web server application, httpd-content.
• The httpd-content application uses the gfs-content-
webserver file system.
• If node B were to fail, the content-webserver cluster
service would fail over to node D. If node D were not
available or also failed, the service would fail over to
node A.
• Failover would occur with no apparent interruption to
the cluster clients.
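A sketch of how the content-webserver service might be expressed in cluster.conf; ip, clusterfs, and script are standard rgmanager resource types, while the mount point, device path, and init script shown here are assumptions for illustration:

    <service name="content-webserver" domain="content-webserver-domain" autostart="1">
      <!-- floating IP address that clients use to reach the service -->
      <ip address="10.10.10.201" monitor_link="1"/>
      <!-- GFS file system shared by the cluster (device path assumed) -->
      <clusterfs name="gfs-content-webserver" mountpoint="/var/www"
                 device="/dev/vg_cluster/lv_content" fstype="gfs"/>
      <!-- the web server application, started via its init script -->
      <script name="httpd-content" file="/etc/init.d/httpd"/>
    </service>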
