Channel: StackExchange Replication Questions

Minimizing downtime during MariaDB Replication Setup


I have the task of setting up MariaDB (10.0) replication from a master running in a datacenter to a slave running in an AWS VPC. The instructions at Setting Up Replication - MariaDB Knowledge Base mention taking a lock on all tables on the master (FLUSH TABLES WITH READ LOCK) while using mysqldump to transfer the databases across to the slave. This step concerns me for the following reasons:

  1. We have a number of large databases (~20 DBs at ~40G each) and some huge ones (~3 at ~100G).
  2. It takes roughly 40 minutes to transfer each DB across to the slave, so the entire process could take up to 20 hours (assuming good conditions).
  3. We can't keep locks on the master for long, as business is disrupted while the tables are locked.
  4. Should anything fail while copying the dumps across to the slave, we'll have taken the master down in vain and may need to start over.
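As an aside on the locking concern: for InnoDB tables, mysqldump's `--single-transaction` option takes a consistent snapshot instead of holding the global read lock for the whole dump, and `--master-data=2` records the binlog coordinates in the dump header. A sketch of what I understand that invocation to look like (host, user, and database names are placeholders, and this assumes the tables are InnoDB):

```shell
# On the master: dump one database with a consistent snapshot.
# --single-transaction avoids holding FLUSH TABLES WITH READ LOCK
# for the duration of the dump (InnoDB only).
# --master-data=2 writes the binlog file/position into the dump
# as a comment, for use in CHANGE MASTER TO on the slave later.
mysqldump -h master.example.com -u repl_admin -p \
  --single-transaction \
  --master-data=2 \
  --routines --triggers \
  big_db | gzip > big_db.sql.gz
```

One caveat I'm aware of: if each database is dumped separately while writes continue, every dump records different binlog coordinates, so a single consistent point would need either one `--all-databases` dump or a brief pause of writes.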

What would be a good way to set up replication in this scenario with minimal downtime to production? I'm open to some out-of-the-box thinking: is there a way to break this down into stages so that we can verify each one?

EDIT: I should add that we already take nightly dumps of the DBs as part of our backup strategy. Perhaps there's a way of using these to aid the process?
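If those nightly dumps are taken with `--master-data=2`, each one already carries the binlog coordinates to resume from, so the slave could be seeded from last night's backup and then catch up via replication. A minimal sketch (hypothetical helper, assuming that dump format) for pulling the coordinates out of a dump header:

```python
import re

def binlog_coords(dump_path):
    """Extract (logfile, position) from the CHANGE MASTER TO comment
    that mysqldump --master-data=2 writes near the top of a dump."""
    pattern = re.compile(
        r"CHANGE MASTER TO MASTER_LOG_FILE='([^']+)', MASTER_LOG_POS=(\d+)")
    with open(dump_path) as f:
        # The comment appears within the first few header lines,
        # so only scan the start of the (potentially huge) file.
        for _ in range(100):
            line = f.readline()
            if not line:
                break
            m = pattern.search(line)
            if m:
                return m.group(1), int(m.group(2))
    return None
```

The extracted file name and position would then feed a `CHANGE MASTER TO MASTER_LOG_FILE=..., MASTER_LOG_POS=...` on the slave before `START SLAVE`.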

