
Performing frequent backups of a large MySQL slave


We replicate our production MySQL databases (roughly 120 GB) to a dedicated local desktop machine. The local slave is a 2014-era machine with 8 GB of RAM.

A full mysqldump of the databases on the slave takes just under 48 hours, and the resulting dumps compress to 5.4 GB. The dumps themselves are perfectly usable, but at nearly two days per run the process is too slow to produce a dump every day.
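For reference, the dump is essentially mysqldump piped through gzip; a simplified sketch of it (the exact flags and paths here are illustrative, not our real script):

    # Full dump of all databases, streamed straight into gzip.
    # --single-transaction takes a consistent InnoDB snapshot without
    # locking tables; --quick streams rows instead of buffering them.
    mysqldump --all-databases --single-transaction --quick \
        -u backup -p"$BACKUP_PASSWORD" \
        | gzip > "/backups/full-$(date +%F).sql.gz"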

With our current setup, the best we can expect is a full dump every other day. Something in our approach has to change before daily dumps are feasible again.

One possibility is simply adding a second slave and having the two alternate dump days.
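If we went that route, each slave could check the day's parity before dumping; a rough cron sketch (dump-all.sh is a hypothetical wrapper around the mysqldump above, and % must be escaped inside a crontab):

    # crontab on slave A: dump at 02:00 on even epoch-days
    0 2 * * * [ $(( $(date +\%s) / 86400 % 2 )) -eq 0 ] && /usr/local/bin/dump-all.sh
    # crontab on slave B: same schedule, odd epoch-days
    0 2 * * * [ $(( $(date +\%s) / 86400 % 2 )) -eq 1 ] && /usr/local/bin/dump-all.sh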

A different approach (which I loathe, but could technically work) would be to take a full dump only weekly, say, and keep all of the binary logs in between in case more fine-grained data is needed: we could restore the weekly dump and then replay the binary logs up to a given timestamp.
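If we did go down that road, the point-in-time restore would look roughly like this (file names and the timestamp are made up, and it assumes the weekly dump was taken with --master-data so we know which binlog position it corresponds to):

    # 1. Restore the most recent weekly full dump
    gunzip < /backups/full-weekly.sql.gz | mysql -u root -p

    # 2. Replay the binlogs from the dump's position up to the desired moment
    mysqlbinlog --stop-datetime="2015-06-01 13:00:00" \
        mysql-bin.000042 mysql-bin.000043 \
        | mysql -u root -p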

I suppose a totally different approach would be to use a RAID array and hot-swap drives out, effectively getting a "snapshot" of the slave at the moment the drive was pulled. (This might work, but it feels quite risky and like a gross abuse of what RAID is for.)
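At a minimum the server would have to be quiesced first, since pulling a drive out from under a running mysqld would capture inconsistent data files; something like:

    # Stop replication, then shut mysqld down cleanly so the data
    # files on disk are consistent before the drive is pulled.
    mysql -u root -p -e "STOP SLAVE"
    mysqladmin -u root -p shutdown

    # ... pull the drive, insert a fresh one, let the array rebuild ...

    # Bring the server back up; replication catches up on its own.
    sudo systemctl start mysql    # service name varies by distro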

Yet another approach would be to "prioritize": dump certain critical tables/databases every day, and dump the "less important" ones only weekly.
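As a sketch, the daily job would loop over a short list of critical schemas while the weekly job keeps doing the full run (the schema names below are placeholders):

    # Daily: dump only the high-priority schemas
    for db in orders customers billing; do
        mysqldump --single-transaction --quick \
            -u backup -p"$BACKUP_PASSWORD" \
            "$db" | gzip > "/backups/daily-$db-$(date +%F).sql.gz"
    done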

What am I missing? Is there a "canonical" solution for archiving daily snapshots of "large" MySQL slaves? This seems like something that would be encountered quite frequently by those maintaining larger databases.

Is the best option really to just continue throwing more hardware at the problem?

