RHEL Cluster Basics 2
Red Hat GFS
Global File System
• Red Hat GFS is a cluster file system that allows a
cluster of nodes to simultaneously access a
block device that is shared among the nodes.
• GFS employs distributed metadata and multiple
journals for optimal operation in a cluster.
• To maintain file system integrity, GFS uses a lock
manager to coordinate I/O.
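To make the lock manager and journals concrete, the sketch below shows one way a shared GFS file system is typically created and mounted on a clustered logical volume (GFS2 syntax shown; the cluster name mycluster, the volume path, and the journal count are hypothetical and vary by deployment):

    # Create the file system with the DLM lock protocol; -t is ClusterName:FSName
    # and -j allocates one journal per node that will mount it.
    mkfs.gfs2 -p lock_dlm -t mycluster:gfsdata -j 3 /dev/cluster_vg/shared_lv

    # Mount the shared file system on each cluster node.
    mount -t gfs2 /dev/cluster_vg/shared_lv /mnt/gfsdata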
Benefits of Red Hat GFS
• Simplify your data infrastructure
–Install and patch applications once for the entire cluster.
–Eliminate the need for redundant copies of application data (duplication).
–Enable concurrent read/write access to data by many clients.
–Simplify backup and disaster recovery (only one file system to back up or recover).
Benefits of Red Hat GFS
• Maximize the use of storage resources;
minimize storage administration costs.
– Manage storage as a whole instead of by partition.
– Decrease overall storage needs by eliminating the need for data replication.
• Scale the cluster seamlessly by adding servers
or storage on the fly.
– No more partitioning storage through complicated
techniques.
– Add servers to the cluster on the fly by mounting the common file system on them.
Red Hat GFS
• Nodes that run Red Hat GFS are configured and
managed with Red Hat Cluster Suite
configuration and management tools.
• Volume management is handled through CLVM (Cluster Logical Volume Manager).
Red Hat GFS
• Red Hat GFS provides data sharing among GFS
nodes in a Red Hat cluster.
• GFS provides a single, consistent view of the file system namespace across the GFS nodes in a Red Hat cluster.
• GFS allows applications to install and run without
much knowledge of the underlying storage
infrastructure.
• GFS provides features that are typically required in
enterprise environments, such as quotas, multiple
journals, and multipath support.
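As a rough illustration of the quota support mentioned above, the GFS quota tool can cap a user's usage on a mounted file system; the user name and limit below are hypothetical, and the exact command name and units depend on the GFS release (check the gfs_quota man page):

    # Set a hard limit for user alice on the GFS mount point.
    gfs_quota limit -u alice -l 1024 -f /mnt/gfsdata

    # Show current quota usage for the file system.
    gfs_quota list -f /mnt/gfsdata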
Red Hat GFS
• You can deploy GFS in a variety of
configurations to suit your needs for
performance, scalability, and economy.
• For superior performance and scalability, you
can deploy GFS in a cluster that is connected
directly to a SAN.
• For more economical needs, you can deploy
GFS in a cluster that is connected to a LAN with
servers that use GNBD (Global Network Block
Device) or to iSCSI (Internet Small Computer
System Interface) devices.
Red Hat GFS - Superior Performance and
Scalability
• You can obtain the highest shared-file performance
when applications access storage directly.
• The GFS SAN configuration provides superior file
performance for shared files and file systems.
• Linux applications run directly on cluster nodes
using GFS. Without file protocols or storage servers
to slow data access, performance is similar to
individual Linux servers with directly connected
storage.
• GFS supports over 300 GFS nodes.
Red Hat GFS - Superior Performance and
Scalability
GFS with a SAN
Red Hat GFS - Performance, Scalability,
Moderate Price
• Multiple Linux client applications on a LAN can
share the same SAN-based data.
• SAN block storage is presented to network clients
as block storage devices by GNBD servers.
• From the perspective of a client application,
storage is accessed as if it were directly attached to
the server in which the application is running.
Stored data is actually on the SAN.
• Storage devices and data can be equally shared by
network client applications.
• File locking and sharing functions are handled by
GFS for each network client.
Red Hat GFS - Performance, Scalability,
Moderate Price
GFS and GNBD with a SAN
Red Hat GFS - Economy and Performance
• Linux client applications can also take
advantage of an existing Ethernet topology to
gain shared access to all block storage devices.
• Client data files and file systems can be shared
with GFS on each client.
• Application failover can be fully automated with
Red Hat Cluster Suite.
Red Hat GFS - Economy and Performance
GFS and GNBD with Directly Connected Storage
Cluster Logical Volume Manager
Cluster Logical Volume Manager
• The Cluster Logical Volume Manager (CLVM)
provides a cluster-wide version of LVM2.
• CLVM provides the same capabilities as LVM2
on a single node, but makes the volumes
available to all nodes in a Red Hat cluster.
• The key component in CLVM is clvmd. clvmd is a
daemon that provides clustering extensions to
the standard LVM2 tool set and allows LVM2
commands to manage shared storage.
• clvmd runs in each cluster node and distributes
LVM metadata updates in a cluster.
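On a typical Red Hat Cluster Suite node, clustered locking is switched on before clvmd is started; a minimal sketch (service names and configuration paths vary by release):

    # Set locking_type = 3 in /etc/lvm/lvm.conf so LVM2 uses cluster-wide locking.
    lvmconf --enable-cluster

    # Start the clvmd daemon on every node in the cluster.
    service clvmd start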
CLVM Overview
CLVM Configuration
You can configure CLVM using the same commands as LVM2, using the LVM graphical user interface, or using the storage configuration function of the Conga cluster configuration graphical user interface.
CLVM Configuration
LVM Graphical User Interface
CLVM Configuration
Conga LVM Graphical User Interface
CLVM Configuration
Basic concept of creating logical volumes
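Alongside the diagram, a minimal command-line sketch of the same concept, assuming a shared disk /dev/sdb and hypothetical volume names:

    # Initialize the shared disk as an LVM physical volume.
    pvcreate /dev/sdb

    # Create a clustered volume group (-cy marks it cluster-aware so clvmd
    # coordinates access from all nodes).
    vgcreate -cy cluster_vg /dev/sdb

    # Carve a logical volume out of the volume group; GFS is then created on it.
    lvcreate -L 100G -n shared_lv cluster_vg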
Global Network Block Device
Global Network Block Device
• Global Network Block Device (GNBD) provides
block-device access to Red Hat GFS over TCP/IP.
• GNBD is similar in concept to NBD; however,
GNBD is GFS-specific and tuned solely for use
with GFS.
• GNBD is useful when more robust technologies, such as Fibre Channel or single-initiator SCSI, are not necessary or are cost-prohibitive.
Global Network Block Device
• GNBD consists of two major components: a GNBD
client and a GNBD server.
• A GNBD client runs in a node with GFS and imports
a block device exported by a GNBD server.
• A GNBD server runs in another node and exports
block-level storage from its local storage (either
directly attached storage or SAN storage).
• Multiple GNBD clients can access a device exported by a GNBD server, thus making GNBD suitable for use by a group of nodes running GFS.
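A hedged sketch of how the two components are wired together; the export name, device, and host name are placeholders, and command details vary by release:

    # On the GNBD server node: start the server daemon and export a local block device.
    gnbd_serv
    gnbd_export -d /dev/sdb1 -e shared_disk

    # On each GFS node (GNBD client): load the module and import the exports
    # from the server; the device then appears under /dev/gnbd/shared_disk.
    modprobe gnbd
    gnbd_import -i gnbd-server.example.com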
Global Network Block Device
Linux Virtual Server
Linux Virtual Server
• Linux Virtual Server (LVS) is a set of integrated
software components for balancing the IP load
across a set of real servers. LVS runs on a pair of
equally configured computers: one that is an active
LVS router and one that is a backup LVS router. The
active LVS router serves two roles:
– To balance the load across the real servers.
– To check the integrity of the services on each real
server.
• The backup LVS router monitors the active LVS
router and takes over from it in case the active LVS
router fails.
LVS components and their interrelationship
LVS components and their interrelationship
The pulse daemon runs on both the active and passive LVS routers.
On the backup LVS router, pulse sends a heartbeat to the public
interface of the active router to make sure the active LVS router is
properly functioning.
On the active LVS router, pulse starts the lvs daemon and responds
to heartbeat queries from the backup LVS router.
Once started, the lvs daemon calls the ipvsadm utility to configure
and maintain the IPVS (IP Virtual Server) routing table in the kernel
and starts a nanny process for each configured virtual server on
each real server.
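The IPVS table that the lvs daemon maintains through ipvsadm can also be inspected or built by hand; a minimal sketch with a hypothetical virtual IP and real servers:

    # Define a virtual HTTP service on the VIP with weighted least-connection scheduling.
    ipvsadm -A -t 192.168.0.100:80 -s wlc

    # Add two real servers behind it, forwarded with NAT (masquerading).
    ipvsadm -a -t 192.168.0.100:80 -r 10.11.12.1:80 -m
    ipvsadm -a -t 192.168.0.100:80 -r 10.11.12.2:80 -m

    # List the resulting IPVS routing table.
    ipvsadm -L -n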
LVS components and their interrelationship
Each nanny process checks the state of one configured service
on one real server, and tells the lvs daemon if the service on
that real server is malfunctioning.
If a malfunction is detected, the lvs daemon instructs ipvsadm
to remove that real server from the IPVS routing table.
If the backup LVS router does not receive a response from the active LVS router, it initiates failover: it calls send_arp to reassign all virtual IP addresses to the NIC hardware addresses (MAC addresses) of the backup LVS router and sends a command to the failed active LVS router via both the public and private network interfaces to shut down its lvs daemon.
LVS components and their interrelationship
Because there is no built-in component in LVS to share the
data among real servers, you have two basic options:
• Synchronize the data across the real servers.
• Add a third layer to the topology for shared data access.
The first option is preferred for servers that do not allow large
numbers of users to upload or change data on the real
servers.
If the real servers allow large numbers of users to modify
data, such as an e-commerce website, adding a third layer is
preferable.
Two-Tier LVS Topology
Three-Tier LVS Topology
Routing Methods
Routing Methods - NAT Routing
Routing Methods - Direct Routing
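In ipvsadm terms, the routing method is chosen per real server when it is added to a virtual service; a brief sketch with hypothetical addresses:

    # NAT routing: the LVS router rewrites traffic in both directions (-m).
    ipvsadm -a -t 192.168.0.100:80 -r 10.11.12.1:80 -m

    # Direct routing: the router only rewrites the destination MAC address and
    # the real server replies to the client directly (-g).
    ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.21:80 -g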
Persistence and Firewall Marks
Overview
In certain situations, it may be desirable for a client
to reconnect repeatedly to the same real server,
rather than have an LVS load-balancing algorithm
send that request to the best available server.
Examples of such situations include multi-screen
web forms, cookies, SSL, and FTP connections. In
those cases, a client may not work properly unless
the transactions are being handled by the same
server to retain context.
Persistence
When a client connects to a service, LVS remembers the last
connection for a specified period of time. If that same client
IP address connects again within that period, it is sent to the
same server it connected to previously — bypassing the
load-balancing mechanisms.
Persistence also allows you to specify a subnet mask to apply
to the client IP address test as a tool for controlling what
addresses have a higher level of persistence, thereby
grouping connections to that subnet.
However, persistence is not the most efficient way to deal with the problem of grouping together connections destined for different ports.
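A hedged example of how persistence is expressed with ipvsadm (the VIP, timeout, and netmask are illustrative):

    # Persistent HTTPS virtual service: a client returns to the same real server
    # for 300 seconds (-p), with persistence grouped by a /24 netmask (-M).
    ipvsadm -A -t 192.168.0.100:443 -s wlc -p 300 -M 255.255.255.0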
Firewall Marks
Firewall marks are an easy and efficient way to group ports used for a protocol or group of related protocols.
For example, if LVS is deployed to run an e-commerce
site, firewall marks can be used to bundle HTTP
connections on port 80 and secure, HTTPS connections
on port 443. By assigning the same firewall mark to
the virtual server for each protocol, state information
for the transaction can be preserved because the LVS
router forwards all requests to the same real server
after a connection is opened.
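A sketch of the e-commerce example above, assuming a hypothetical VIP of 192.168.0.100 and mark value 80:

    # Mark HTTP and HTTPS traffic for the VIP with the same firewall mark so
    # IPVS treats both ports as one service.
    iptables -t mangle -A PREROUTING -p tcp -d 192.168.0.100/32 --dport 80 -j MARK --set-mark 80
    iptables -t mangle -A PREROUTING -p tcp -d 192.168.0.100/32 --dport 443 -j MARK --set-mark 80

    # Build the virtual service on the firewall mark instead of an IP:port pair,
    # with persistence so a session stays on one real server.
    ipvsadm -A -f 80 -s wlc -p 300
    ipvsadm -a -f 80 -r 10.11.12.1 -m
    ipvsadm -a -f 80 -r 10.11.12.2 -m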
