I have a database running on our production web server, set up as a master with a single slave in another datacenter. When reading through the MySQL replication setup instructions, they advise taking the server offline, placing a read lock, doing the dump, and then creating the slave from it. I did that, got replication set up, and we were happy.
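For reference, the setup I followed was roughly this (the dump path, IP, and replication user are just examples):

    -- on the master, in an interactive mysql session:
    FLUSH TABLES WITH READ LOCK;
    SHOW MASTER STATUS;          -- note the File and Position values

    -- in a second shell, while the lock is still held:
    --   mysqldump --all-databases --routines > /backups/full_dump.sql
    UNLOCK TABLES;

    -- on the slave, after loading the dump:
    CHANGE MASTER TO
        MASTER_HOST='192.0.2.10',            -- master's real IP (example)
        MASTER_USER='repl',                  -- example replication user
        MASTER_PASSWORD='...',
        MASTER_LOG_FILE='mysql-bin.000042',  -- from SHOW MASTER STATUS
        MASTER_LOG_POS=107;
    START SLAVE;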
Today I decided that having that replication run in the clear over the WAN probably isn't good practice, so I looked into setting it up through an SSH tunnel (temporarily). This required changing master_host to 127.0.0.1 instead of the master's IP. In doing this, I borked my replication, and now I have to start over.
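The tunnel change was along these lines (the port numbers and SSH user are illustrative):

    # on the slave host: forward local port 3307 to mysqld on the master
    ssh -f -N -L 3307:127.0.0.1:3306 tunnel@master.example.com

    -- then on the slave, in mysql:
    STOP SLAVE;
    CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3307;
    START SLAVE;
    -- note: specifying MASTER_HOST makes the slave reset its recorded
    -- binlog file and position, which I suspect is what broke things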
The problem is that the master had expire_logs_days=1, so I can't pick up where the original setup left off; that was last week and those binlogs have expired on the master. I do have backups of the older binlogs, but replaying all of them with mysqlbinlog keeps failing with temporary table problems.
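The replay attempts were of this form (file names are examples):

    # replay the archived binlogs in order, in a single connection,
    # since temporary tables don't survive across connections
    mysqlbinlog mysql-bin.000101 mysql-bin.000102 mysql-bin.000103 | mysql -u root -p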
So now I'm trying to get the slave back up and running without taking the master down. Every 3 hours on the master we do a database dump for backups, using mysqldump -v --flush-logs --single-transaction --routines ....
That means the current binary log is closed and a new one is started with every backup. However, if I restore a dump and then try to start replication back up from the new binary log created by the last --flush-logs, I still run into key collisions, just as they warned.
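Concretely, the restore-and-restart attempt looks roughly like this (the dump path and log file name are illustrative):

    # on the slave: load the most recent 3-hourly dump
    mysql -u root -p < /backups/latest_dump.sql

    -- then in mysql on the slave, point at the binlog --flush-logs opened:
    STOP SLAVE;
    CHANGE MASTER TO
        MASTER_LOG_FILE='mysql-bin.000321',  -- newest log on the master
        MASTER_LOG_POS=4;                    -- start of that file
    START SLAVE;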
Given this information, is there a way I can successfully start the slave back up with the dumps we have, without taking the server down again? I'm not going to be in a good place if I have to ask for more downtime.