Dump & Push: Automating MySQL Backups with Git and Docker (Tears-Free)

🌐 Introduction (For Non-Technical Readers)

“A Safer Way to Keep Things in Sync”

Imagine you're working on a website or app that has two versions:

  • One that's live and public (used by real people)

  • One that's private and experimental (used for testing new things)

Sometimes, you need to make sure that both versions are looking at the same set of data — especially if you want to test something new without risking what’s already working. But doing that manually can be slow, messy, or even dangerous.

That’s where automation comes in.

We created a simple tool (called a script) that helps us:

✅ Make a safe copy of the live data (called a “database dump”)

✅ Send that copy to the private testing version

✅ Replace the old testing data with the latest one

✅ Store a versioned copy, just like a backup, in case something goes wrong later

It’s like taking a snapshot of the real system, saving it carefully, and using it in a test environment — all with a single click.

This tool doesn’t touch or change the live system. It only reads the data and copies it securely into a place where experiments can happen safely.

Why is this important? Because good testing leads to better apps, fewer bugs, and happier users — without compromising anything that’s already working.

1. Introduction: A Tale of Two Environments

At some point in every developer's journey, the need for seamless synchronization between development and production environments becomes essential. This is the story of how I solved that need in one of my personal projects.

The issue was simple but critical: the production frontend was based on a build whose exact version was unknown—a temporary "patch" made under pressure. The latest development build conflicted with the backend. As a result, updating the site became risky: breaking the only working version was not an option.

The solution? Build a solid process to extract, transfer, and version database backups—reliably, reversibly, and automatically. With that in place, testing real data became easy, and I could safely move forward.


2. The Goal: Reliable, Versioned Synchronization

The main goal was to create a repeatable, safe flow that would:

  • Create a MySQL dump from the production server.

  • Transfer it securely to the development/test server.

  • Import the dump into a dedicated test database.

  • Automate the entire workflow with a single shell command.

  • Version each backup using Git in a dedicated branch (dumps).

This made it possible to test real-world data in an isolated environment—without risking the live system—and maintain a full history of backups.


3. The Tech Stack

The solution uses a well-known, robust stack:

  • Docker: for managing the MySQL and phpMyAdmin containers (production and test).

  • SSH: for secure remote access and file transfer.

  • Git: for backup versioning and traceability.

  • MySQL 8: as the database engine.

  • Bash: for scripting and automation.

  • Two servers: one for production, one for testing/development.


4. The Magic Behind dump_and_push.sh

The heart of the system is a Bash script that automates the entire process. Here's a step-by-step breakdown:

✅ 1. Dump the Production Database

The script connects to the production server via SSH and runs mysqldump from inside the Docker MySQL container using docker exec. The dump is streamed and saved locally on the machine running the script.

Note: This step is read-only and does not affect the production database in any way.
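
A minimal sketch of this step, assuming hypothetical names (a prod-server host, a mysql_prod container, an app_db database) and that the root password is available in the container's environment, could look like this:

    # Stream a dump of the production database over SSH into a local, timestamped file.
    # prod-server, mysql_prod and app_db are placeholders for illustration only.
    ssh user@prod-server \
      "docker exec mysql_prod sh -c 'exec mysqldump -uroot -p\"\$MYSQL_ROOT_PASSWORD\" --single-transaction app_db'" \
      > "dump_$(date +%F_%H%M%S).sql"

The --single-transaction flag gives a consistent snapshot of InnoDB tables without locking them, which is what keeps this step harmless for the live database.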

📦 2. Transfer the Dump to the Dev Server

Using scp, the .sql dump file is copied from local storage to a predefined path on the test/dev server, such as /opt/project/dumps/. If the path doesn't exist, it is created first.
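
A rough sketch, where dev-server, the remote user, and the dump filename are placeholders:

    # Copy the freshly created dump to the dev server, creating the target path if needed.
    DUMP_FILE="dump_2024-01-01_000000.sql"              # hypothetical filename from the previous step
    ssh user@dev-server "mkdir -p /opt/project/dumps"   # ensure the destination exists
    scp "$DUMP_FILE" user@dev-server:/opt/project/dumps/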

🔁 3. Import into the Test Database

Once copied, the script connects via SSH to the dev server and uses docker exec again—this time on the test MySQL container—to import the dump into a clean test DB. This DB is isolated from the main one and can be reset or overwritten without risk.
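
A possible sketch, again with placeholder names (a mysql_test container and a test_db database):

    # Feed the dump, now sitting on the dev server, into the isolated test database.
    DUMP_FILE="dump_2024-01-01_000000.sql"   # same hypothetical filename as before
    ssh user@dev-server \
      "docker exec -i mysql_test sh -c 'exec mysql -uroot -p\"\$MYSQL_ROOT_PASSWORD\" test_db' < /opt/project/dumps/$DUMP_FILE"

The -i flag keeps stdin open so that the file redirected on the dev server is piped straight into the mysql client inside the container.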

📌 4. Git Commit & Push

Finally, the script stages the new .sql file, commits it with a timestamped message, and pushes it to a dedicated Git branch (dumps). This builds a complete version history of every backup.
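
As a sketch, assuming the dump file was saved inside a local clone of the repository (the branch name comes from the article, everything else is hypothetical):

    # Version the new dump on the dedicated "dumps" branch with a timestamped message.
    DUMP_FILE="dump_2024-01-01_000000.sql"   # hypothetical filename from step 1
    git checkout dumps
    git add "$DUMP_FILE"
    git commit -m "DB dump $(date '+%Y-%m-%d %H:%M:%S')"
    git push origin dumps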


5. Advantages & Best Practices

🔐 Security & Isolation

All database operations on production are read-only, and the import only ever touches an isolated test database, so nothing the script does can affect the live system.

🧾 Traceability

Every backup is committed with a clear timestamp. You can compare versions, roll back to a previous one, or track when changes happened.

🧪 Safer Testing

You can test migrations, queries, or schema changes with real data, without touching production.

🔁 Repeatability & Automation

The script is idempotent and can be run manually or scheduled as a cron job (e.g. for nightly backups).
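
For instance, a nightly run could be scheduled with a crontab entry along these lines (both paths are assumptions):

    # Run the backup script every night at 02:30 and append its output to a log file.
    30 2 * * * /opt/project/dump_and_push.sh >> /var/log/dump_and_push.log 2>&1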

💼 Professionalism, Even in Personal Projects

Building this system showed me how personal projects can become testbeds for robust, real-world practices. Treating data seriously—even in solo work—is a sign of maturity and discipline.

⚠️ Credential Management

Database credentials should never be hard-coded in the script. Safer options include:

  • .my.cnf files with restricted permissions

  • secret management systems (Vault, AWS Secrets Manager, etc.)
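
A minimal example of the first option, with placeholder values:

    # Store MySQL credentials in ~/.my.cnf so they never appear on the command line
    # or in the process list; the mysql and mysqldump clients read this file automatically.
    printf '[client]\nuser=backup_user\npassword=change-me\n' > ~/.my.cnf
    chmod 600 ~/.my.cnf   # readable only by the owner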


6. Future Improvements

Some features I plan to add:

  • Compression of .sql files with gzip or xz (a rough sketch of this and the retention step follows the list)

  • Retention policy to auto-delete old dumps (e.g. find . -mtime +30 -delete)

  • Notifications via Slack, Telegram, or email when a backup is completed

  • CI/CD integration for automated test deployments after backup import
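
A rough sketch of the first two ideas, reusing the dump directory from earlier:

    # Compress existing dumps, then delete compressed dumps older than 30 days.
    gzip -9 /opt/project/dumps/*.sql
    find /opt/project/dumps -name '*.sql.gz' -mtime +30 -delete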


7. Final Thoughts

This workflow turned a messy problem into a clean, automated system. It gave me back control over the data lifecycle and allowed confident development and testing.

Sometimes, all it takes is one well-crafted script to bridge the gap between chaos and clarity.
