When we need to [re]build a MySQL-5.5 slave, our usual approach is this (CHANGE MASTER TO / START SLAVE omitted for clarity):
master$ mysqldump -uroot -p --all-databases --master-data > dump.sql
master$ scp dump.sql slave:
slave$ mysql -uroot -p < dump.sql
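For completeness, the slave-side statements we omitted look roughly like this (the replication user is a placeholder; with --master-data the dump itself already carries the log file and position, so only the connection settings are given here):

slave$ mysql -uroot -p
mysql> CHANGE MASTER TO
    ->   MASTER_HOST='masterhost.mydomain',
    ->   MASTER_USER='repl',
    ->   MASTER_PASSWORD='...';
mysql> START SLAVE;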
This has always worked flawlessly, and we are used to doing it on busy masters without any problems. Now we're trying to automate it a little more, and I ended up trying to run this from the slave system:
ssh masterhost.mydomain "mysqldump -uroot -p --all-databases --master-data" |
mysql -uroot -p
We're basically doing the same thing we do with a dump file, just "on the fly" over SSH. At first it seems to work: after asking for the passwords it runs mysqldump on the master system, whose output travels through ssh and gets piped into mysql on the slave system. But then, when we START SLAVE, we almost always get duplicate primary key conflicts on the slave system (note that the IO thread is running without problems).
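To illustrate, this is the kind of state the slave ends up in (the duplicated key value below is made up, but it is always a 1062 duplicate-key error on the SQL thread):

slave$ mysql -uroot -p -e "SHOW SLAVE STATUS\G" | grep -E "Running|Last_SQL"
             Slave_IO_Running: Yes
            Slave_SQL_Running: No
               Last_SQL_Errno: 1062
               Last_SQL_Error: Error 'Duplicate entry '12345' for key 'PRIMARY'' on query. [...]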
As far as I know, --master-data should acquire enough locking (--lock-all-tables is implied, and we also tried adding it explicitly) to avoid this kind of problem, and it takes care of writing the correct binary log coordinates from the master into the dump, so we cannot get those wrong.
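In the file-based variant we can actually see this: --master-data writes a CHANGE MASTER TO statement with the coordinates near the top of the dump (file name and position here are hypothetical):

master$ head -n 30 dump.sql | grep "CHANGE MASTER"
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=107;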
I should mention that we have a mix of InnoDB and MyISAM tables.
Why doesn't this dump-and-restore over SSH work just like the dump-to-file / restore-from-file approach, as I would expect?