If a data block is replicated, which data node will it be replicated to? Is there any tool that shows where the replicated blocks are located?
How can I track which data block is on which data node in Hadoop?
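A standard way to answer both questions is HDFS's fsck tool, which can list every block of a file together with the datanodes holding each replica. A sketch (the path is a placeholder):

```shell
# Print each block of the file and the datanodes holding its replicas
hdfs fsck /user/me/data.txt -files -blocks -locations
```

The NameNode web UI also exposes per-file block locations in its file browser.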
Step-by-step replication from a local MySQL master to a Google Cloud SQL slave
I need to replicate data from a local database master (MySQL 5.6) to a Google Cloud SQL slave. I read the Google instructions and have done the following so far:
- Installed MySQL 5.6 on the local machine and created the local instance, database, and tables;
- Enabled binary logging (required for replication);
- Created a user "replica" with access to the master;
- Created a backup.sql file using mysqldump, following the Google instructions;
- Uploaded this file to a Google Storage bucket;
- Created a first-generation instance in Google Cloud SQL;
- Created a database in the first-generation instance;
- Restored backup.sql into the database.
And now I'm stuck. It isn't possible to use the "CHANGE MASTER TO..." SQL statement in Cloud SQL; Google doesn't grant the SUPER privilege needed for it. The next step in the Google instructions is to enter the code "ACCESS_TOKEN...", but I don't know where to input this code, what it is, or even whether it is necessary. Does anyone have step-by-step instructions for configuring a local MySQL database to replicate data to Google Cloud SQL?
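For reference, the dump that Google's external-master instructions expect is typically taken with the binlog coordinates embedded, roughly like this (host, user, and database names are placeholders):

```shell
# --master-data=1 records the master's binlog file/position in the dump,
# so the replica knows where to start reading
mysqldump --host=localhost --user=root -p \
  --databases mydb \
  --master-data=1 --single-transaction --hex-blob > backup.sql
```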
MySQL replication, error 13 (permission denied) with slave_load_tmpdir
I've set up a virtual machine on Azure with Ubuntu 16.10 and MySQL 5.7 for master/slave database replication from another server.
I followed this article: https://www.opsdash.com/blog/mysql-replication-howto.html
Everything seems to work correctly and replication runs, but I have serious problems with LOAD DATA INFILE. If I use this command, replication stops working.
I tried the related variable slave_load_tmpdir, creating a new directory '/var/slavetmp' and updating my.cnf with this value, but SHOW SLAVE STATUS reports this error in MySQL:
Unable to use slave's temporary directory /var/slavetmp - Can't read dir of '/var/slavetmp/' (Errcode: 13 - Permission denied)
Any ideas?
Thanks in advance.
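Errcode 13 on that directory usually means the mysql OS user cannot access it. A sketch of the usual setup, assuming the '/var/slavetmp' path from above:

```shell
# Create the slave temp dir and hand it to the mysql user
sudo mkdir -p /var/slavetmp
sudo chown mysql:mysql /var/slavetmp
sudo chmod 770 /var/slavetmp
sudo systemctl restart mysql   # slave_load_tmpdir is read-only, not dynamic
```

On Ubuntu, AppArmor may also need an allow rule for /var/slavetmp/ in /etc/apparmor.d/usr.sbin.mysqld.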
MariaDB replication - remove gtid in relay log
I'm doing migration work from MySQL to MariaDB where replication is involved. Everything is working fine, and a MySQL master (5.5.59) is fully compatible with a MariaDB slave (10.1.26).
The problem occurs when I enable replication from a MariaDB master to a MariaDB slave (same version: 10.1.26). In some situations, identified as massive updates, the slave starts to lag. If I restore the master to MySQL (5.5.59) and replicate to the same MariaDB slave, the lag never occurs on the same set of updates.
I checked the relay logs on the lagging MariaDB slave, comparing the ones received when MySQL is the master with the ones received when MariaDB is the master; the only difference is that when the master is MariaDB I can see statements related to GTID.
I would like to suppress the GTID statements in the relay log when the master is MariaDB, making replication similar to "old style" MySQL replication without GTID, but I haven't found out whether that is possible.
Thanks for your help
SQL Server Merge Replication Computed Column Conflict Resolution
I'm setting up a merge replication web publication for a table that needs to use the DateTime (Later Wins) conflict resolver so that the most recently modified row wins in any conflict. This table (and hundreds of other tables) uses a datetimeoffset for its modified date, since subscribers can be in many time zones. To get the required datetime value as a column, I've added a ModifiedDateUTC computed column that does a simple CONVERT to datetime. The issue I'm seeing is that even though the article setup accepts this as a valid column, whenever I generate a conflict I get the error 'The specified conflict resolution column 'ModifiedDateUTC' could not be found.' I've also tried doing this as a stored procedure.
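For reference, the computed column described above would be defined something like this (the table and source column names are made up for illustration):

```sql
-- Hypothetical table; marking the column PERSISTED may help, since some
-- replication features cannot resolve non-persisted computed columns
ALTER TABLE dbo.MyTable
    ADD ModifiedDateUTC AS CONVERT(datetime, ModifiedDate) PERSISTED;
```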
SQL Server Msmerge_content error
Hope everyone is fine. I truncated msmerge_content on the publisher due to a huge number of records. For one table, named something like "ABCxyz", I had disabled its replication triggers (update, insert, delete) on both the publisher and subscriber ends, as I don't need it replicated. After truncating msmerge_content, I re-inserted the last 2 days of data, which I had kept as a backup in a temp table. Now I'm facing an issue: the table "ABCxyz" mentioned above has started uploading data even though its triggers are disabled on both the publisher and subscriber ends. For context, I'm using SQL Server merge replication.
Can you help me understand why replication started uploading data from that table? Because of this activity the download process is stopped, as SQL Server only starts downloading after the upload finishes.
Problem when using gtid with MySQL replication
I am faced with this problem when using gtid in MySQL replication:
When @@SESSION.GTID_NEXT is set to a GTID, you must explicitly set it to a different value after a COMMIT or ROLLBACK. Please check GTID_NEXT variable manual page for detailed explanation. Current @@SESSION.GTID_NEXT is 'xxx'
It makes it impossible for my slave SQL thread to run.
I am stuck with it. What is the cause of the problem?
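For context, the reset the error message asks for is an explicit reassignment of GTID_NEXT in the session that set it, e.g.:

```sql
-- After the COMMIT/ROLLBACK, GTID_NEXT must be changed before the next statement
SET @@SESSION.GTID_NEXT = 'AUTOMATIC';
```

On a replica this state is normally managed by the applier thread itself, so hitting the error there usually points at an injected transaction or a tool that set GTID_NEXT and failed midway; treat the statement above as a sketch of the manual recovery, not a root-cause fix.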
Configuring slave from EC2 master to Amazon RDS slave
I am configuring replication from an EC2 master to an Amazon RDS instance.
After starting the slave, I don't see any errors, but the slave I/O thread stays in the connecting state.
Master version: 5.6.23
Slave version: 5.6.19
show slave status \G
mysql> show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Connecting to master
Master_Host: XXXXXXXXXXX
Master_User: XXXXXX
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: cli-bin.000032
Read_Master_Log_Pos: 713
Relay_Log_File: relaylog.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: cli-bin.000032
Slave_IO_Running: Connecting
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table: mysql.plugin,innodb_memcache.cache_policies,mysql.rds_sysinfo,mysql.rds_replication_status,mysql.rds_history,innodb_memcache.config_options
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 713
Relay_Log_Space: 618
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 0
Master_UUID:
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
1 row in set (0.00 sec)
show global variables
mysql> show global variables like '%old_passwords%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| old_passwords | 0 |
+---------------+-------+
1 row in set (0.01 sec)
mysql> show global variables like '%secure_auth%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| secure_auth | OFF |
+---------------+-------+
1 row in set (0.00 sec)
====================
The problem is that Slave_IO_State shows "Connecting" and the slave I/O thread stays in the connecting state, but replication is not happening.
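Slave_IO_Running: Connecting with Last_IO_Errno: 0 usually narrows down to network reachability or credentials. A quick sanity check from any host that should be able to reach the master (host and user names are placeholders):

```shell
# Is the master's MySQL port reachable, and do the replication credentials work?
nc -vz master-ec2-host 3306
mysql -h master-ec2-host -u repl_user -p -e "SELECT 1"
```

Given the old_passwords/secure_auth values shown above, it may also be worth confirming that the replication user's password hash on the master is in the new 41-character format, since pre-4.1 16-character hashes can keep the I/O thread stuck in Connecting.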
MySQL replication slave hangs after encountering SET @@SESSION.GTID_NEXT= 'ANONYMOUS';
I recently installed two identical default installations of MySQL 5.7 on Ubuntu Server 16.04 and configured binary log replication between them. Until now this had been working fine, but suddenly the replication stopped and the slave query thread now runs at 100% CPU without doing any work.
After some searching I found that the slave status says it is way behind the master. Using mysqlbinlog on the binlog file indicated by Relay_Master_Log_File at position Exec_Master_Log_Pos, I found that the statement executed at this position is:
SET @@SESSION.GTID_NEXT= 'ANONYMOUS';
Somehow the slave hangs when trying to execute this statement, sending the CPU load to 100% (which is how I discovered the situation in the first place).
Other than having the slave skip the statement using SET GLOBAL sql_slave_skip_counter=1,
it is unclear to me what the actual cause of this issue is and how I should solve it.
Any help would be really appreciated!
Restore a server instance using a logical backup and WAL files
Is it possible to restore a database instance using a logical backup and WAL files?
A senior SQL Server DBA asked me to implement the following scenario in PostgreSQL:
take a logical backup of the master using pg_dumpall, then fail over after some time. Now restore the database instance using the logical backup of the primary plus the WAL files of the primary plus the WAL files of the secondary.
Why does the MongoDB primary stop accepting writes sometimes?
I have 3 MongoDB servers in our production environment configured as a replica set. All three run on different EC2 instances. The application server runs on another instance. The application connects to the primary MongoDB for writes, and the read preference is set to Primary, so it should contact the primary for reads too.
Everything was working normally until one day the write operations on the primary suddenly stopped working. I was not able to update or insert anything through the command line. Even our application stopped working (users were able to view pages but not perform any actions).
The application was down for 15 minutes, and I had to restart the mongod instance for it to resume working properly. Any idea why this happens and how to prevent it from happening in the future?
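If it happens again, it is worth capturing the replica set's own view of the situation before restarting anything, e.g. from the mongo shell (the host name is a placeholder):

```shell
# Print each member's state (PRIMARY/SECONDARY/RECOVERING...) and optimes
mongo --host prod-mongo-1 --eval 'printjson(rs.status())'
```

rs.status() shows whether the primary had stepped down, whether an election was stuck without a majority, or whether members had fallen into a non-writable state, which distinguishes a replica-set problem from a storage or lock problem.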
Filling the data frame with duplicates based on another with unique values
I have a dataframe with unique IDs:
Example:
set.seed(1)
df_Unique <- matrix(rnorm(20),4,5)
colnames(df_Unique) <- paste0("k_",1:5)
rownames(df_Unique) <- paste0("Project",1:4)
k_1 k_2 k_3 k_4 k_5
Project1 -0.6264538 0.3295078 0.5757814 -0.62124058 -0.01619026
Project2 0.1836433 -0.8204684 -0.3053884 -2.21469989 0.94383621
Project3 -0.8356286 0.4874291 1.5117812 1.12493092 0.82122120
Project4 1.5952808 0.7383247 0.3898432 -0.04493361 0.59390132
And I have another data frame with duplicates of the project names:
df_Dublicates <- matrix(NA,12,5)
colnames(df_Dublicates) <- paste0("k_",1:5)
rownames(df_Dublicates) <- sample(paste0("Project",1:4),size = 12,replace = T)
k_1 k_2 k_3 k_4 k_5
Project2 NA NA NA NA NA
Project1 NA NA NA NA NA
Project1 NA NA NA NA NA
Project1 NA NA NA NA NA
Project2 NA NA NA NA NA
Project3 NA NA NA NA NA
Project3 NA NA NA NA NA
Project2 NA NA NA NA NA
Project4 NA NA NA NA NA
Project2 NA NA NA NA NA
Project2 NA NA NA NA NA
Project2 NA NA NA NA NA
I want the values in df_Unique to be replicated accordingly into the df_Dublicates data frame. I am not sure which function to use in this case.
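Since matrix rows can be indexed by row name, and the index vector may contain repeats, one minimal sketch is:

```r
# Pull each project's row from df_Unique once per occurrence in df_Dublicates
df_Dublicates <- df_Unique[rownames(df_Dublicates), ]
```

Indexing by a character vector with repeated names returns one copy of the matching row per repeat, so the duplicated project names are filled in automatically and the original row order of df_Dublicates is preserved.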
MySQL replication missed quite a lot of SQL statements
I have set up a master/slave with mysql-5.1.73. The master's binlog format is "statement". The master and slave seemed to be running very well, with slave status:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
And when I modified the content on the master manually, whether with SELECT, UPDATE, INSERT, or ALTER TABLE, the modification was synchronized to the slave instantly.
However, after running for several days, I found the slave had missed a lot of INSERT statements; these insertions didn't violate any PRIMARY KEY rule. Moreover, when I tried to replay the binlog on another slave with:
mysqlbinlog mysql-binlog.00000X | mysql
those missed statements were missed again, with no warning or error.
Have you ever encountered such a situation? What should I do to restore all the changes to the slave? (There are quite a lot of missed changes, so I cannot restore them one by one.)
I dug into this and found that the relay log on the slave contains all the insertion statements, which means the binlog is transmitted to the slave correctly. However, the binlog on the slave is missing some of the insertion statements, so the issue appears to occur during the redo process on the slave.
Any suggestions to diagnose this issue or work around it?
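One way to pin down where statements disappear is to diff what arrived in the relay log against what the slave re-logged in its own binlog (the file and table names below are placeholders):

```shell
# Count a known-missing statement in the relay log vs. the slave's binlog
mysqlbinlog relay-bin.000042    | grep -c "INSERT INTO mytable"
mysqlbinlog mysql-binlog.000042 | grep -c "INSERT INTO mytable"
```

If the relay log has the statements but the slave's binlog doesn't, check the slave's replicate-do-db / replicate-ignore-db options: with statement-based replication these filters match on the session's default database, so inserts issued while a different database was the default can be skipped silently.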
Cleanup Error Messages in SQL Server ERRORLOG
The SQL Server ERRORLOG file on the publisher (the AG primary) has a lot of messages like the following. The distributor is outside the AG, and the subscriber is in a separate AG setup.
Logon Login failed for user 'domain\svc-prod'. Reason: Failed to open the explicitly specified database 'DB'. [CLIENT: xx.xxx.xxx.xxx]
Logon Error: 18456, Severity: 14, State: 38.
This error occurs because the replication setup thinks there are still publications on the production SQL Server; I need to clean up these publisher/subscriber items and the resulting error messages.
Please suggest how to clean up these error messages in the error logs.
MySQL Replication : Duplicate entries - SQL_SLAVE_SKIP_COUNTER or delete row?
I have a system which replicates one database to four slave servers. Every now and then, when traffic is high, one or more of the slave servers hits a duplicate insert error and the slave stops running.
When this happens, I have two choices: I can either SET GLOBAL SQL_SLAVE_SKIP_COUNTER,
or I can delete the offending row on the slave. Both seem to work, and my reasoning is that, given something happened to cause this problem, there is a possibility that the data on the slave is corrupted. Since this can only happen on INSERTs, by deleting the row I guarantee the slave data will match the master once replication resumes. By skipping, if the data for that row is corrupted on the slave, it will remain corrupted.
Am I missing anything?
Further, given that this happens once every couple of months on two specific tables, is there any reason I shouldn't automate a process that, when this error is encountered, deletes the row on the slave and restarts the slave?
EDIT: MySQL 5.5.29 and statement replication I believe.
MySQL 5.6: explicit_defaults_for_timestamp
I have the following replication topology:
DB1 (MySQL 5.5) -> DB2 (MySQL 5.6, explicit_defaults_for_timestamp = 1) -> DB3 (MySQL 5.6, explicit_defaults_for_timestamp = 1)
- "date" field:
`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
- DB3 replication error:
[ERROR] Slave SQL: Error 'Column 'date' cannot be null' on query. Default database: 'test'. Query: 'INSERT INTO test_log VALUES (null,'12345',12345,'test','saved')', Error_code: 1048
The reason why DB3 is failing is explained here:
No TIMESTAMP column is assigned the DEFAULT CURRENT_TIMESTAMP or ON UPDATE CURRENT_TIMESTAMP attributes automatically. Those attributes must be explicitly specified.
I would like to understand why DB2 works fine. I guess it's because it's replicating from MySQL 5.5, but which settings are responsible for this?
Update Wed 1 Oct 09:34:03 BST 2014:
Table definition match on all three servers:
mysql> SHOW CREATE TABLE test_log\G
*************************** 1. row ***************************
Table: feedback_log
Create Table: CREATE TABLE `feedback_log` (
`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`order_ref` varchar(32) NOT NULL,
`id` int(11) NOT NULL,
`version` varchar(12) NOT NULL,
`event` varchar(60) NOT NULL,
KEY `order_ref` (`order_ref`),
KEY `id` (`id`),
KEY `version` (`version`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
SQL_MODE shouldn't be the issue here:
- DB1: None
- DB2, DB3: NO_ENGINE_SUBSTITUTION
Summary:
I can't run this query manually on either slave (DB2, DB3), but it replicates successfully on DB2:
mysql [5.6.20-68.0-log]> INSERT INTO test_log VALUES (null,'12345',12345,'test','saved');
ERROR 1048 (23000): Column 'date' cannot be null
Another quick test showing this behaviour:
DB1
mysql [5.5.39-log]> CREATE TABLE t1 (date TIMESTAMP);
Query OK, 0 rows affected (0.20 sec)
mysql [5.5.39-log]> SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
Table: t1
Create Table: CREATE TABLE `t1` (
`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
DB2
mysql [5.6.20-68.0-log]> SELECT @@explicit_defaults_for_timestamp;
+-----------------------------------+
| @@explicit_defaults_for_timestamp |
+-----------------------------------+
| 1 |
+-----------------------------------+
1 row in set (0.00 sec)
mysql [5.6.20-68.0-log]> SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
Table: t1
Create Table: CREATE TABLE `t1` (
`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
DB3
mysql [5.6.20-68.0-log]> SELECT @@explicit_defaults_for_timestamp;
+-----------------------------------+
| @@explicit_defaults_for_timestamp |
+-----------------------------------+
| 1 |
+-----------------------------------+
1 row in set (0.04 sec)
mysql [5.6.20-68.0-log]> SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
Table: t1
Create Table: CREATE TABLE `t1` (
`date` timestamp NULL DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
Increasing Replication Factor in Kafka gives error - "There is an existing assignment running"
I am trying to increase the replication factor of a topic in Apache Kafka. In order to do so I am using the command
kafka-reassign-partitions --zookeeper ${zookeeperid} --reassignment-json-file ${aFile} --execute
Initially my topic has a replication factor of 1 and 5 partitions, and I am trying to increase its replication factor to 3. There are quite a few messages in the topic. When I run the above command, the error is "There is an existing assignment running". My JSON file looks like this:
{
"version": 1,
"partitions": [
{
"topic": "IncreaseReplicationTopic",
"partition": 0,
"replicas": [2,4,0]
},{
"topic": "IncreaseReplicationTopic",
"partition": 1,
"replicas": [3,2,1]
}, {
"topic": "IncreaseReplicationTopic",
"partition": 2,
"replicas": [4,1,0]
}, {
"topic": "IncreaseReplicationTopic",
"partition": 3,
"replicas": [0,1,3]
}, {
"topic": "IncreaseReplicationTopic",
"partition": 4,
"replicas": [1,4,2]
}
]
}
I am not able to figure out where I am going wrong. Any pointers will be greatly appreciated.
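For context, this error means a previous partition reassignment is still registered in ZooKeeper (the /admin/reassign_partitions znode, which the controller only removes once the reassignment completes). The usual check is the same tool's --verify mode against the JSON of the earlier reassignment (the file name is a placeholder):

```shell
# Prints per-partition completion status for the in-flight reassignment
kafka-reassign-partitions --zookeeper ${zookeeperid} \
  --reassignment-json-file previous-reassignment.json --verify
```

A new --execute is only accepted after the earlier assignment has finished; with many messages and a jump from 1 to 3 replicas, copying the data can simply take a while.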
ChronicleMap replication: 2 write processes, 1 read-only process?
How do I TCP replicate between two ChronicleMap instances with 3 processes operating on the maps across 2 servers?
According to https://groups.google.com/forum/#!topic/java-chronicle/CMxwwdPO5j0, it is possible to setup replication between 3 processes on 2 servers, where only 2 processes are engaged in TCP replication and the third process is merely reading the updated values from the shared off-heap map.
When I start the 2 processes that do TCP replication, replication works fine in both directions: a put on process 1 is reflected in process 2's map, and vice versa. Server 1's identifier is set to 1, and server 2's identifier is set to 2.
When I start the third process, which has no TcpTransportAndNetwork config and only references identifier 1 or identifier 2 depending on which server I start it on, the replication becomes unidirectional.
For example, if I start it on server 1 and set its identifier to 1, then replication only works from server 2 to server 1 (both processes can read the new value), but stops working from server 1 to server 2 (server 2 never sees the updated value).
I have configured the three maps exactly as below. What am I doing wrong? I am using ChronicleMap version 2.4.17.
Quote from the linked thread above:
SERVER 1 PROCESS 1
TcpTransportAndNetworkConfig tcpConfig1 = ...
TcpTransportAndNetworkConfig tcpConfig2 = ...
byte server1Identifier = 1;
byte server2Identifier = 2;
map1Server1 = ChronicleMapBuilder
.of(Integer.class, CharSequence.class)
.replication(server1Identifier, tcpConfig1)
.createPersistedTo(file);
SERVER 1 PROCESS 2
// Notice no TCP config since this is a process #2 on same server, read-only
map2Server1 = ChronicleMapBuilder
.of(Integer.class, CharSequence.class)
.replication(server1Identifier)
.createPersistedTo(file);
SERVER 2 PROCESS 1:
map3Server2 = ChronicleMapBuilder
.of(Integer.class, CharSequence.class)
.replication(server2Identifier, tcpConfig2)
.createPersistedTo(file);
Replicate from InnoDB cluster
I am trying to build two InnoDB clusters (A and B), each with 3 machines. This setup works fine now, but I want replication between my two clusters. I understand the replication can only work one way, and my B cluster has to be read-only.
So the master of B has to be a slave of the master of A. I have already set this up with Galera, but I want an InnoDB cluster. As soon as I append binlog_do_db = testdb
to the my.cnf file for the master@B and restart the mysql instance, the instance quits and gives no information besides:
info: Bootstrapping Group Replication cluster using --group_replication_group_name=f2295307-731c-4136-b228-a2ad5f584656
You will need to specify GROUP_NAME=f2295307-731c-4136-b228-a2ad5f584656 if you want to add another node to this cluster
Initializing database
Appending binlog_do_db = testdb
is the only thing I changed. I cannot configure the replication because the service does not keep running.
Does anyone have more detailed information about how to do this setup, or knows what I'm doing wrong?
As a side note, this is all running in Docker containers on two physical machines (A and B): docker run --network=grnet -e MYSQL_ROOT_PASSWORD=slavepassword -e SERVER_ID=100 --name=mysqlgr1 --hostname=mysqlgr1 --network-alias=myinnodbcluster --ip=192.168.3.2 -e BOOTSTRAP=1 -e GROUP_SEEDS="mysqlgr1:6606,mysqlgr2:6606,mysqlgr3:6606" -itd innodb-slave
Database replication issues - Master is attempting to 'replicate' updates to the mysql users (password updates)
I have mysql database replication setup with the slave on a cloud provider and the master on a dedicated server with cPanel / WHM installed.
My replication stalled with a series of error messages. I managed to clear each one individually; however, I'm concerned that these errors appeared at all.
I have about 40 individual databases being replicated, all listed in the my.cnf file as binlog_do_db = dbname,
and all working as expected so far. For reference, there are also several other databases on that server that are not being replicated.
Here's a list of the errors I've received. All of these appeared after I updated the MySQL root password on the master server.
Error 'Can't find any matching row in the user table' on query. Default database: 'dbname'. Query: 'SET PASSWORD FOR 'root'@'mywebsite.com'='xxx''
Error 'Can't find any matching row in the user table' on query. Default database: 'dbname'. Query: 'SET PASSWORD FOR 'root'@'127.0.0.1'='xxx''
Error 'Unknown column 'Password' in 'field list'' on query. Default database: ''. Query: 'UPDATE mysql.user SET Password=PASSWORD('xxx') WHERE User='root''
Prior to these errors, I had another error:
Error 'Can't find any matching row in the user table' on query. Default database: ''. Query: 'SET PASSWORD FOR 'username'@'localhost'='xxx''
This first error prompted me to reset passwords as an extra precaution, since the MySQL user username
hadn't been updated by me on the master server. Updating the other passwords (including root) triggered the remaining errors.
To clear the first error, I had to create a MySQL user for username
on my slave MySQL server. This means that, effectively, any time I update a user in cPanel on the master server, it tries to update that user with the same credentials on the slave server.
Why is this happening and how do I disable it?
For reference, I used this tutorial to set up the master-slave relationship.
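If the goal is to keep accounts and passwords local to each server, the usual approach is to exclude the mysql schema on the slave side, e.g. in the slave's my.cnf:

```
# Skip replication of the privilege tables (GRANT/SET PASSWORD etc.) on the slave
replicate-wild-ignore-table = mysql.%
```

This happens in the first place because, with statement-based replication, binlog_do_db on the master filters by the session's default database at the time a statement runs, so account-management statements issued while one of the replicated databases was in use are written to the binlog and shipped to the slave. (Treat the filter above as a sketch; verify it against your MySQL version's replication-filter documentation before relying on it.)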