Resource Replication
in Cloud Computing
Dr. Hitesh Mohapatra
School of Computer Engineering
KIIT Deemed to be University
Definition
Resource replication in cloud
computing is the process of
making multiple copies of the
same IT resource.
This is done to improve the
availability and performance of
the resource.
Why is resource replication important?
• Reliability
• Resource replication helps ensure that users can access their
resources consistently, even if there are hardware failures or
network issues.
• Disaster recovery
• Resource replication can help with disaster recovery by creating
redundant copies of data in multiple locations.
• Application performance
• Resource replication can help applications run faster, especially
mobile applications.
How is resource replication done?
• Virtualization technology
• Virtualization technology is used to create multiple instances of the same
resource. For example, a hypervisor can use a virtual server image to create
multiple virtual server instances.
• Synchronous replication
• Data is saved to both the primary and secondary storage platforms at the
same time. This keeps the secondary copy fully up to date, but it can impact network
performance.
• Asynchronous replication
• Data is saved to the primary storage first, then to the secondary storage. This
method puts less strain on systems, but there is a lag between storage
operations (both write paths are sketched in code after this list).
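The two write paths can be sketched in a few lines of Python. This is a minimal illustration only, not any vendor's implementation: the `Store` class, the delays, and the key names are assumptions made up for the example.

```python
import queue
import threading
import time

class Store:
    """Hypothetical storage backend; write() simulates a durable, acknowledged write."""
    def __init__(self, name, delay):
        self.name = name
        self.delay = delay          # seconds; stands in for disk plus network latency
        self.data = {}

    def write(self, key, value):
        time.sleep(self.delay)      # simulated latency
        self.data[key] = value
        return True                 # acknowledgement

primary = Store("primary", delay=0.001)
replica = Store("replica", delay=0.050)     # the remote site acknowledges more slowly

def synchronous_write(key, value):
    """Acknowledged only after BOTH copies confirm: zero lag, but higher write latency."""
    return primary.write(key, value) and replica.write(key, value)

replication_queue = queue.Queue()

def asynchronous_write(key, value):
    """Acknowledged after the local copy confirms; the replica catches up later."""
    ok_local = primary.write(key, value)
    replication_queue.put((key, value))     # shipped later by a background worker
    return ok_local

def replication_worker():
    while True:
        key, value = replication_queue.get()
        replica.write(key, value)
        replication_queue.task_done()

threading.Thread(target=replication_worker, daemon=True).start()

synchronous_write("order:1", "paid")        # slow, but the replica is never behind
asynchronous_write("order:2", "paid")       # fast, but the replica may briefly lag
replication_queue.join()                    # wait for the replication backlog to drain
```

The same trade-off reappears in the synchronous/asynchronous comparison table later in the deck.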
Synchronous and Asynchronous
What Is Remote Replication?
• Introduction
• Essential part of data protection and recovery.
• Historical Context
• Initially used for copying and storing application data in off-site locations.
• Technological Advancements
• Expanded capabilities over time.
• Now allows creating a synchronized copy of a VM on a remote target host.
• Functionality
• Replica: Synchronized copy of the VM.
• Functions like a regular VM on the source host.
• Flexibility
• VM replicas can be transferred to and run on any capable hardware.
• Disaster Recovery
• Powered on within seconds if the original VM fails.
• Significantly decreases downtime.
• Risk Mitigation
• Mitigates potential business risks and losses associated with disaster.
Factors to be considered!
• Distance — the greater the distance between the sites, the more
latency will be experienced.
• Bandwidth — the internet speed and network connectivity should be
sufficient to allow rapid and secure data transfer.
• Data rate — the data rate should be kept lower than the available
bandwidth so as not to overload the network.
• Replication technology — replication jobs should be run in parallel
(simultaneously) for efficient network use (see the sketch after this list).
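To make the last two factors concrete, here is a minimal Python sketch of running replication jobs in parallel while keeping the combined data rate below the available bandwidth. The job names, sizes, link capacity, and the `ReplicationJob`/`run_job` helpers are illustrative assumptions, not part of any specific product.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

AVAILABLE_BANDWIDTH_MBPS = 1000   # measured link capacity (assumed value)
SAFETY_FACTOR = 0.8               # keep replication traffic below 80% of the link

@dataclass
class ReplicationJob:
    name: str
    size_gb: float

def run_job(job: ReplicationJob, rate_mbps: float) -> str:
    # Placeholder for the actual transfer; a real job would stream data to the
    # remote site while honouring rate_mbps as a throttle.
    hours = (job.size_gb * 8 * 1024) / (rate_mbps * 3600)
    return f"{job.name}: ~{hours:.2f} h at {rate_mbps:.0f} Mbps"

jobs = [ReplicationJob("vm-app01", 200), ReplicationJob("vm-db01", 800),
        ReplicationJob("file-share", 400)]

# Split the usable bandwidth across the jobs so the total data rate never exceeds the link.
per_job_rate = (AVAILABLE_BANDWIDTH_MBPS * SAFETY_FACTOR) / len(jobs)

with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
    for result in pool.map(lambda j: run_job(j, per_job_rate), jobs):
        print(result)
```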
Synchronous replication
• Data is replicated to the secondary remote location at the same time as new data is
created or updated in the primary datacenter.
• Near-instant replication: the replica stays in lockstep with the source, so host and target
remain synchronized, which is crucial for successful disaster recovery (DR).
• Impact on Network Performance
• Atomic operations: the local and remote writes complete as one uninterrupted sequence.
• A write is considered finished only when both the local and the remote storage acknowledge
its completion (illustrated in the sketch below).
• Guarantees zero data loss, but can slow down overall performance.
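Because every write waits for the remote acknowledgement, the latency added by synchronous replication is bounded below by the round trip to the secondary site, which is why performance drops with distance. A back-of-the-envelope sketch (the fibre propagation figure and the distances are assumptions for illustration):

```python
# Rough lower bound on the write latency added by synchronous replication.
# Light in optical fibre travels at roughly 200,000 km/s, i.e. about 200 km per millisecond.
FIBRE_KM_PER_MS = 200  # one-way propagation (assumed figure)

def min_added_latency_ms(distance_km: float) -> float:
    """Minimum extra latency per synchronous write: one round trip to the remote site."""
    return 2 * distance_km / FIBRE_KM_PER_MS

for distance in (10, 100, 1000, 5000):   # km between primary and secondary site
    print(f"{distance:>5} km  ->  >= {min_added_latency_ms(distance):.1f} ms per write")

# 10 km adds ~0.1 ms, while 5,000 km adds at least 50 ms to every single write,
# which is why synchronous replication is usually limited to nearby sites.
```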
Asynchronous replication
• Replication not performed at the same time as changes are
made in the primary storage.
• Data replicated in predetermined time periods (hourly, daily, or
weekly).
• Replica stored in a remote DR location, not synchronized in real
time with the primary location.
• Write considered complete once local storage acknowledges it.
• Improves application performance and availability, because writes do not have to wait
on the replication network (see the sketch after this list).
• In a disaster scenario, DR site might not contain the most
recent changes, posing a risk of critical data loss.
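A minimal sketch of interval-based asynchronous replication, using plain Python dictionaries to stand in for the primary and DR storage. The interval and function names are assumptions; the point is that the worst-case data loss (the RPO) is roughly the time since the last replication cycle.

```python
import time

REPLICATION_INTERVAL_S = 3600            # e.g. hourly replication cycles (assumed)

primary = {}                             # stands in for the primary storage
replica = {}                             # stands in for the remote DR copy
last_replicated_at = time.time()

def write(key, value):
    """Acknowledged as soon as the local copy is updated; the replica lags behind."""
    primary[key] = value

def replication_cycle():
    """Runs on a schedule; ships everything written since the previous cycle."""
    global last_replicated_at
    replica.update(primary)              # in practice only changed blocks are shipped
    last_replicated_at = time.time()

def worst_case_data_loss_seconds():
    """Data written after the last cycle would be lost if the primary failed right now."""
    return time.time() - last_replicated_at

write("order:3", "shipped")
print(f"potential loss window: {worst_case_data_loss_seconds():.0f} s "
      f"(up to {REPLICATION_INTERVAL_S} s between cycles)")
replication_cycle()                      # after the cycle, the replica is caught up again
```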
Synchronous vs. Asynchronous replication
• Distance
• Synchronous: works better when locations are in close proximity (performance drops in
proportion to distance).
• Asynchronous: works over longer distances (as long as a network connection between
datacenters is available).
• Cost
• Synchronous: more expensive.
• Asynchronous: more cost-effective.
• Recovery Point Objective (RPO)
• Synchronous: zero.
• Asynchronous: from 15 minutes to a few hours.
• Recovery Time Objective (RTO)
• Synchronous: short.
• Asynchronous: short.
• Network
• Synchronous: requires more bandwidth and is affected by latency; can be affected by WAN
interruptions (as the transfer of replicated data cannot be postponed until later).
• Asynchronous: requires less bandwidth and is not affected by latency; is not affected by WAN
interruptions (as the copy of data can be held at the local site until WAN service is restored).
• Data loss
• Synchronous: zero.
• Asynchronous: possible loss of the most recent updates to data.
• Resilience
• Synchronous: a single failure could cause loss of service; viruses or other malicious
components that lead to data corruption might be replicated to the second copy of the data.
• Asynchronous: loss of service can occur after two failures.
• Performance
• Synchronous: low (waits for network acknowledgement from the secondary location).
• Asynchronous: high (does not wait for network acknowledgement from the secondary location).
• Management
• Synchronous: may require specialized hardware; supported by high-end block-based storage
arrays and network-based replication products.
• Asynchronous: more compatible with other products; supported by array-, network- and
host-based replication products.
• Use cases
• Synchronous: best solution for immediate disaster recovery and projects that require
absolutely no data loss.
• Asynchronous: best solution for storage of less sensitive data and immediate disaster recovery
of projects that can tolerate partial data loss.
What is data replication in Cloud Computing?
Data replication is the process of maintaining redundant copies of primary data.
This is important for several reasons, including fault tolerance, high availability,
read-intensive applications, reduced network latency, or supporting data
sovereignty requirements.
Fault Tolerance: Data replication is necessary when applications must preserve data in the
case of hardware or network failure due to causes ranging from someone tripping over a power
cable to a regional disaster such as an earthquake. Thus, every application needs data
replication for resilience and consistency.
High Availability: Data frequently accessed by many users or concurrent sessions needs
data replication. In this case, replicated data must remain consistent with its leader and other
replicas.
Reduced Latency: Data replication also lets modern cloud applications run off distributed
data in different networks or geographic regions, serving end users from locations closer to them.
In short, it’s not only about backup and disaster management but also about
application performance. Let’s dive into how replication works and understand
these needs a little deeper.
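To make the latency point concrete, here is a minimal read-routing sketch in Python; the region names, endpoints, and latency figures are invented for illustration, not measured values.

```python
# Hypothetical replica placement: the same dataset is replicated to several regions,
# and each read is served from the replica closest to the user.
REPLICAS = {
    "us-west": {"endpoint": "db.us-west.example.internal"},
    "us-east": {"endpoint": "db.us-east.example.internal"},
    "eu-west": {"endpoint": "db.eu-west.example.internal"},
}

# Approximate round-trip latency (ms) from each user region to each replica (assumed values).
LATENCY_MS = {
    ("california", "us-west"): 15, ("california", "us-east"): 70, ("california", "eu-west"): 140,
    ("new-york", "us-west"): 70,   ("new-york", "us-east"): 10,   ("new-york", "eu-west"): 80,
    ("london", "us-west"): 140,    ("london", "us-east"): 80,     ("london", "eu-west"): 12,
}

def nearest_replica(user_region):
    """Pick the replica with the lowest round-trip latency for this user."""
    return min(REPLICAS, key=lambda r: LATENCY_MS[(user_region, r)])

print(nearest_replica("london"))   # -> 'eu-west': reads stay close to the user
```

Writes would still flow to the leader and be replicated out; only reads are served from the nearest copy.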
Resource replication in cloud computing.
Cloud data replication vs. traditional data replication
• Scope
• Traditional: local (mobile device to PC, PC to networked database).
• Cloud: global (applications to multiple cloud-based data/services, replicating to other
cloud resources).
• Primary Use
• Traditional: preserve data in case of failure.
• Cloud: advanced data protection and high availability.
• Accessibility
• Traditional: replicas not directly accessible until primary nodes fail.
• Cloud: near-instant access to replicas.
• Manual Work
• Traditional: requires manual work to reassemble data while offline.
• Cloud: automates replication and management.
• Replication Levels
• Traditional: from local to external network for backup.
• Cloud: multiple cloud-based machines in the same data center, rack-level distribution,
cross-data center replication.
• Real-Time Sync
• Traditional: not real-time; only becomes “active” when the primary fails.
• Cloud: real-time or near-real-time replication.
• Disaster Recovery (DR)
• Traditional: relies on manual intervention to activate replicas.
• Cloud: automatic failover and faster recovery times.
• Geographic Distribution
• Traditional: limited, typically within local or external networks.
• Cloud: wide geographic distribution, storing master data and replicas in different regions
(e.g., San Francisco, New York, London).
What is cloud-to-cloud data replication?
A modern hybrid cloud option uses your local network as a
master copy and multiple cloud services or varying regions
within one cloud as part of the replication. Ideally, all nodes in
this design are accessible to applications (for reading and
writing) even when no disaster is at play.
Resource replication in cloud computing.
Some tools
• AWS Migration Service
• Hevo Data, Carbonite
• Veeam Backup and Replication
• Microsoft Azure
• Google Cloud Storage Snapshots
• Informatica
Traditional Data Replication vs. Cloud Data Replication vs. Data Backup
• Scope
• Traditional: local (mobile device to PC, PC to networked database).
• Cloud: global (applications to multiple cloud-based data/services, replicating to other
cloud resources).
• Backup: restores data to a specific point in time.
• Primary Use
• Traditional: preserve data in case of failure.
• Cloud: advanced data protection and high availability.
• Backup: protects data from corruption, system failure, outages, and other data loss events.
• Accessibility
• Traditional: replicas not directly accessible until primary nodes fail.
• Cloud: near-instant access to replicas.
• Backup: data can be restored from save points.
• Manual Work
• Traditional: requires manual work to reassemble data while offline.
• Cloud: automates replication and management.
• Backup: typically scheduled during off-hours to reduce impact on production systems.
• Replication Levels
• Traditional: from local to external network for backup.
• Cloud: multiple cloud-based machines in the same data center, rack-level distribution,
cross-data center replication.
• Backup: save points created at periodic intervals.
• Real-Time Sync
• Traditional: not real-time; only becomes “active” when the primary fails.
• Cloud: real-time or near-real-time replication.
• Backup: not real-time; periodic backups can take up to several hours.
• Disaster Recovery (DR)
• Traditional: relies on manual intervention to activate replicas.
• Cloud: automatic failover and faster recovery times.
• Backup: provides a recovery point for restoring data in the event of a disaster.
• Geographic Distribution
• Traditional: limited, typically within local or external networks.
• Cloud: wide geographic distribution, storing master data and replicas in different regions.
• Backup: data can be backed up on a variety of media and locations, both on-premises and
in the cloud.
• Performance Impact
• Traditional: may slow down overall performance due to atomic operations.
• Cloud: improves network performance and availability without affecting bandwidth.
• Backup: backups can be time-consuming but are typically scheduled during off-hours to
minimize impact on production systems.
• Risk of Data Loss
• Traditional: guarantees zero data loss with synchronous replication, but can slow down
performance.
• Cloud: lower risk of data loss due to near-real-time replication.
• Backup: risk of losing data between backups, but suitable for long-term storage of large
sets of static data and compliance.