We currently have a MySQL database living on a remote server, used by our main application, a Rails app. We also run events in locations with unreliable internet connections. Because our connection to the remote DB is unreliable and our app is mission critical, we've been syncing data between the master and onsite instances at the application level, using JSON APIs. Long story short, we've outgrown this and need another approach.
What we're thinking is MySQL multi-master replication between the onsite and main (cloud) DBs, and moving our primary keys to UUIDs to prevent collisions. My question is: what sort of gotchas should we expect with an unreliable internet connection? Specifically:
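For context, the UUID plan is roughly this: generate primary keys in the application rather than relying on AUTO_INCREMENT, so rows created onsite and in the cloud can't collide. A minimal sketch (the `new_record_id` helper name is just illustrative, not our real code):

```ruby
require "securerandom"

# Assign a UUID primary key in the app before insert, so onsite and
# cloud nodes can both create rows without coordinating a sequence.
def new_record_id
  SecureRandom.uuid # random (version 4) UUID as a 36-char string
end

id = new_record_id
puts id
```

In Rails itself this would be done with string primary key columns and a `before_create` callback (or equivalent), but the idea is the same.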
If we are using UUIDs, what sort of write collisions could happen? How are these conflicts resolved?
Will we need to kick (manually restart) processes when the connection goes down, or will replication resume on its own?
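On that last point, from what we've read the replica's I/O thread is supposed to retry the connection on its own, and the relevant knobs look something like the following (a sketch; host name and values are placeholders). Is tuning these enough, or does a long outage still require manual intervention?

```sql
-- Assumed reconnection settings on the replica (placeholder values)
CHANGE MASTER TO
  MASTER_HOST = 'cloud-db.example.com',
  MASTER_CONNECT_RETRY = 60;        -- seconds between reconnect attempts

SET GLOBAL slave_net_timeout = 60;  -- treat the link as dead after 60s of silence
```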
Is it possible to set up this kind of multi-master replication on an existing database, or will we need to make a new DB and transfer the data?
If the connection goes down, will the onsite DB be affected in any way that impairs the app's ability to use it?
Is there any risk of taking down the cloud DB if the network goes down?
Is it possible to limit which data gets replicated (by query, transaction log date, table, etc.)? We do not need all data on our onsite server, just the data pertaining to the particular event.
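For that last question, we came across replica-side replication filters; something like this is what we had in mind (a sketch for the onsite server's my.cnf; the schema and table names are placeholders for our event tables):

```ini
[mysqld]
# Only replicate our app schema, and within it only event-related tables
replicate-do-db         = app_production
replicate-wild-do-table = app_production.event\_%
```

But as far as we can tell these filters work per database or per table. Is there any way to replicate only the rows for a particular event, or is that back to application-level sync?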