Seamless PostgreSQL RDS Migration Across AWS Accounts and Regions Using Logical Replication
Migrating a PostgreSQL database across AWS accounts and regions is a complex task that requires careful planning and execution. Recently, I had to migrate an RDS PostgreSQL database while adhering to client-imposed constraints, such as avoiding AWS DMS due to restrictions on VPC peering and PrivateLink. After researching various approaches, we chose PostgreSQL Logical Replication, a method that AWS DMS also leverages internally.
In this blog, I'll walk through the problem statement, the troubleshooting process, and the step-by-step solution to successfully migrate the database.
Problem Statement
Migration Requirements:
- Migrate an RDS PostgreSQL database to a different AWS account and a different region
- Keep downtime to a minimum while preserving data integrity
Challenges Faced:
- AWS DMS could not be used due to client-imposed restrictions
- VPC peering and PrivateLink were not permitted, ruling out the usual private-connectivity options
Approaches Considered
- AWS DMS: ruled out by the networking restrictions above
- A one-time pg_dump/pg_restore: would have required extended downtime for a large dataset
- PostgreSQL Logical Replication: continuous, near-real-time replication over a standard PostgreSQL connection (the same mechanism AWS DMS leverages internally)
Given our constraints, we opted for PostgreSQL Logical Replication as the best solution.
Limitations
We recommend that you consider the following limitations of logical replication before starting your migration:
- DDL (schema changes) is not replicated; schema changes must be applied to both databases manually
- Sequence values are not replicated; sequences must be resynchronized before cutover
- Large objects (pg_largeobject) are not replicated
- Replicated tables need a primary key (or an explicit REPLICA IDENTITY) for UPDATE and DELETE operations to replicate
Steps to Migrate PostgreSQL Using Logical Replication
1. Creating a VPC Endpoint for Source Database Access
Since VPC peering was not an option, we created a VPC endpoint to securely access the source database. This allowed the destination AWS account to establish a private connection to the source RDS instance without requiring VPC peering.
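The consumer side of this setup can be sketched with boto3. This assumes the usual pattern for cross-account database access: the source RDS instance sits behind a Network Load Balancer exposed as a VPC endpoint service (RDS cannot be an endpoint service directly), and the destination account creates an interface endpoint against it. All identifiers below are placeholders; the helper itself only builds the request parameters.

```python
def build_endpoint_request(service_name, vpc_id, subnet_ids, sg_ids):
    """Build kwargs for ec2.create_vpc_endpoint (an interface endpoint)."""
    return {
        "VpcEndpointType": "Interface",
        "ServiceName": service_name,  # endpoint service shared by the source account
        "VpcId": vpc_id,
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": sg_ids,   # must allow traffic to the source on port 5432
    }

# To actually create the endpoint (requires boto3 and live AWS credentials):
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# ec2.create_vpc_endpoint(**build_endpoint_request(
#     "com.amazonaws.vpce.us-east-1.vpce-svc-placeholder",  # hypothetical service name
#     "vpc-0abc", ["subnet-0abc"], ["sg-0abc"],
# ))
```

The endpoint's DNS name then serves as SOURCE_HOST in all the commands that follow.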
2. Preparing the Destination Database
Create the target database:
CREATE DATABASE database_name;
3. Export and Import Schema
Enable logical replication on the source RDS instance by setting the rds.logical_replication parameter to 1 in the instance's parameter group (this change requires a reboot), then verify:
SHOW rds.logical_replication; -- should return 'on'
SHOW wal_level; -- should return 'logical'
Export the schema from the source database:
pg_dump --host=SOURCE_HOST --port=5432 --username=username --schema-only --dbname=database_name --file=schema_dump.sql
Import the schema into the destination database:
psql --host=DESTINATION_HOST --port=5432 --username=username --dbname=database_name --file=schema_dump.sql
Validate schema consistency using system tables or pgAdmin.
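The schema check can also be done programmatically: fetch (table, column, data type) rows from information_schema.columns on both sides and diff them. The helper below is a minimal sketch that operates on already-fetched rows; how you fetch them (psql, a driver, pgAdmin export) is up to you.

```python
def diff_schemas(source_cols, dest_cols):
    """Compare two iterables of (table, column, data_type) tuples.

    Returns (missing_on_dest, extra_on_dest) so drift is visible in both directions.
    """
    src, dst = set(source_cols), set(dest_cols)
    return sorted(src - dst), sorted(dst - src)

# Example with in-memory rows; in practice these come from
# information_schema.columns on each database.
source = [("users", "id", "integer"), ("users", "email", "text")]
dest = [("users", "id", "integer")]
missing, extra = diff_schemas(source, dest)
print(missing)  # [('users', 'email', 'text')]
```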
4. Setting Up Logical Replication
On the Source Database:
Create a publication to define what to replicate:
CREATE PUBLICATION test_replica FOR ALL TABLES;
Create a logical replication slot:
SELECT * FROM pg_create_logical_replication_slot('test_replica', 'pgoutput');
Verify the setup:
SELECT * FROM pg_publication;
SELECT * FROM pg_replication_slots;
Monitor replication lag:
SELECT slot_name, confirmed_flush_lsn, pg_current_wal_lsn(),
(pg_current_wal_lsn() - confirmed_flush_lsn) AS lsn_distance
FROM pg_catalog.pg_replication_slots WHERE slot_type = 'logical';
On the Destination Database:
Create a subscription to pull data from the source:
CREATE SUBSCRIPTION aws_pg_subscription
CONNECTION 'host=SOURCE_HOST port=5432 dbname=database_name user=username password=your_password'
PUBLICATION test_replica
WITH (
copy_data = true,
create_slot = false,
enabled = true,
synchronous_commit = false,
connect = true,
slot_name = 'test_replica'
);
Explanation of Subscription Parameters:
- copy_data = true: copy the existing table data from the source when replication starts
- create_slot = false: do not create a new slot; reuse the slot we already created on the source
- enabled = true: start replicating as soon as the subscription is created
- synchronous_commit = false: sets synchronous_commit to off for the apply worker, trading a small durability window for better throughput
- connect = true: connect to the publisher immediately when creating the subscription
- slot_name = 'test_replica': the replication slot on the source to attach to
5. Validating Replication
Check Subscription Status
On the destination database:
SELECT * FROM pg_stat_subscription;
Verify Data Synchronization
Compare row counts between source and destination:
SELECT COUNT(*) FROM your_table;
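For a sanity check across many tables at once, per-table counts from each side can be compared in bulk. A minimal sketch, assuming the counts have already been fetched into dictionaries mapping table name to row count:

```python
def compare_counts(source_counts, dest_counts):
    """Return tables whose row counts differ (or that exist on one side only)."""
    mismatches = {}
    for table in set(source_counts) | set(dest_counts):
        src = source_counts.get(table)
        dst = dest_counts.get(table)
        if src != dst:
            mismatches[table] = (src, dst)
    return mismatches

source = {"users": 1000, "orders": 5000}
dest = {"users": 1000, "orders": 4990}
print(compare_counts(source, dest))  # {'orders': (5000, 4990)}
```

Note that counts only converge once the subscriber has caught up, so check replication lag before treating a mismatch as an error.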
Monitor WAL Lag
On the source database:
SELECT slot_name, confirmed_flush_lsn, pg_current_wal_lsn(),
(pg_current_wal_lsn() - confirmed_flush_lsn) AS lsn_distance
FROM pg_replication_slots WHERE slot_type = 'logical';
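The lsn_distance returned above is a byte count. A PostgreSQL LSN such as 16/B374D848 is two hexadecimal numbers: the high and low 32 bits of a 64-bit WAL position. If you collect LSNs in a monitoring script, they can be interpreted client-side with a small helper (a sketch, not part of the migration itself):

```python
def lsn_to_bytes(lsn):
    """Convert an LSN string like '16/B374D848' to an absolute byte position."""
    high, low = lsn.split("/")
    return (int(high, 16) << 32) | int(low, 16)

def lsn_distance(current, confirmed):
    """Replication lag in bytes between two LSNs."""
    return lsn_to_bytes(current) - lsn_to_bytes(confirmed)

print(lsn_distance("0/2000000", "0/1000000"))  # 16777216
```

A distance at or near zero means the subscriber has consumed all WAL the source has generated, which is the signal to proceed with cutover.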
Final Steps and Monitoring
Once the subscriber has caught up (lsn_distance at or near zero), cut over:
- Stop writes to the source database
- Wait for the remaining WAL to be applied on the destination
- Resynchronize sequences, since sequence values are not replicated
- Drop the subscription on the destination and clean up the replication slot on the source
- Point the application at the new database
Key Takeaways
- Logical replication is a viable alternative to AWS DMS when VPC peering and PrivateLink are unavailable
- A VPC endpoint provides private, cross-account connectivity to the source database
- The schema must be copied separately with pg_dump --schema-only; logical replication moves data only
- Monitor lag via pg_replication_slots and cut over only once the subscriber has caught up
Conclusion
Migrating a PostgreSQL database across AWS accounts and regions can be challenging, especially under network constraints. By leveraging Logical Replication, we achieved a seamless migration without relying on AWS DMS, keeping downtime minimal and data intact even with a large dataset.
If you are facing a similar challenge, I hope this guide helps streamline your migration process!
💬 Have you worked on PostgreSQL migrations before? Share your experiences and challenges in the comments!
#AWS #cloud #technology #CloudComputing #devops #PostgreSQL #DatabaseMigration #LogicalReplication