Channel: StackExchange Replication Questions

Replication error from MariaDB 10.1 to MySQL 5.1/5.0/5.5 when the master's logging format is set to row-based


While replicating from MariaDB 10.1 to lower versions of MySQL (5.0, 5.1, 5.5) or MariaDB (5.2, 5.5), if the master's binlog_format is set to ROW, replication fails with the following message on the slave (SHOW SLAVE STATUS \G):

Last_Error: Table definition on master and slave does not match: Column 18 type mismatch - received type 19, rtmariadb10.empdetails has type 11

Here:

Master: MariaDB 10.1, binlog_format = ROW
Slave:  MySQL 5.1, binlog_format = STATEMENT/ROW/MIXED (any one of these)

Can someone please help me solve this issue?
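
For reference, a minimal way to compare the column definitions on both servers (a diagnostic sketch; rtmariadb10.empdetails is the table named in the error above, and information_schema is available on both versions):

-- Run on both the master and the slave and diff the output:
SELECT ORDINAL_POSITION, COLUMN_NAME, COLUMN_TYPE
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'rtmariadb10' AND TABLE_NAME = 'empdetails'
ORDER BY ORDINAL_POSITION;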


invalid integer value "CONNECTION" for connection option "port" Previous connection kept


I'm new to Postgres and I'm trying to configure table-level logical replication (PUBLICATION/SUBSCRIPTION) on Postgres 12.

I have updated the postgresql.conf file and set wal_level = replica. I then created two databases (test1 and test2). Here is what I'm doing:

test1=# create table t1(a int primary key, b int);
CREATE TABLE
test1=# insert into t1 values(1, 1);
INSERT 0 1
test1=# create publication my_pub for table t1;
CREATE PUBLICATION

test2=# CREATE TABLE t1(a int primary key, b int);
CREATE TABLE
test2=# CREATE SUBSCRIPTION my_sub CONNECTION 'host=localhost port=5432 dbname=test2 user=postgres 
password=password' PUBLICATION my_pub;
invalid integer value "CONNECTION" for connection option "port"
Previous connection kept

When creating the subscription I get this message: invalid integer value "CONNECTION" for connection option "port" Previous connection kept, and I'm unable to add the table to replication. I have specified port 5432 in my postgresql.conf file as well, so I can't understand why it complains about an invalid integer value.
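
For comparison, here is a sketch of the same statement with the connection string kept as one uninterrupted quoted literal; note that the CONNECTION string should point at the database that holds the publication (test1 here), and that logical replication requires wal_level = logical on the publisher rather than replica (the credentials below are the same placeholder values used above):

-- On test2, with the connection string as a single quoted literal:
CREATE SUBSCRIPTION my_sub
    CONNECTION 'host=localhost port=5432 dbname=test1 user=postgres password=password'
    PUBLICATION my_pub;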

Replication throws an error after failover when the mirror partner is down in SQL Server


I received the error below after a failover, while the mirror partner is down.

Error:

to fail over to a database which is not configured for database mirroring. code: 22037, text: 'Invalid connection string attributeCannot open database "DBName" requested by the login. The login failed.Login failed for user 'domain\user'.The connection attempted to fail over to a database which is not configured for database mirroring.'.

Scenario:

  1. X is the principal server and Y is the mirror partner of X
  2. X is also the publisher; Z is the distributor server
  3. Z is added as the distributor on both X and Y. X and Y are added as publishers on Z as well
  4. Configured Y as the "PublisherFailoverPartner" in the replication Log Reader Agent profile settings
  5. Failed over X
  6. Y then became the principal and X became the mirror partner
  7. Broke the mirror from Y to X, or the X server is down and unavailable

After performing the above steps, replication started throwing the error. When mirroring is up, replication works; otherwise it raises the error mentioned above.

Can you please suggest steps to resolve it?
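
For what it's worth, a quick diagnostic sketch to confirm the mirroring role and state of the published database at the moment the error appears (DBName is the database named in the error text):

-- Run on the current principal (Y after the failover):
SELECT DB_NAME(database_id) AS database_name,
       mirroring_role_desc,
       mirroring_state_desc
FROM sys.database_mirroring
WHERE DB_NAME(database_id) = 'DBName';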

MySQL group-replication error


We have a MySQL 5.7 cluster with 4 nodes. We often have a problem with one node in this cluster.

Our applications write data to node1, and node1 replicates the data to the other 3 nodes. On node1, I keep getting this in /etc/log/mysqld.log:

2020-08-17T04:03:07.709440Z 21269681 [Note] Aborted connection 21269681 to db: 'pro_sample' user: 'user_pro' host: 'node2.example.com' (Got an error reading communication packets)
2020-08-17T04:06:19.152707Z 21271865 [Note] Aborted connection 21271865 to db: 'pro_sample' user: 'user_pro' host: 'node2.example.com' (Got an error reading communication packets)

In my node1 MySQL database, I see many sleeping connections:

select * from INFORMATION_SCHEMA.PROCESSLIST where db ='pro_sample';

| 21270100 | user_pro | node2.example.com:45134 | pro_sample | Sleep   |  651 |              | NULL                                                                                           |
| 21270089 | user_pro | node2.example.com:45102 | pro_sample | Sleep   |  668 |              | NULL

node2 is the server that often goes offline, causing the replication issue.

What are we missing here?
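
For anyone digging into this, a couple of quick diagnostic queries on node1 (a sketch, assuming stock MySQL 5.7 group replication):

-- Membership state as seen from node1:
SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE
FROM performance_schema.replication_group_members;

-- How long idle (Sleep) connections may sit before the server drops them:
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';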

MariaDB thread count suddenly goes up to around 10K


We have a big database, about 750 GB. It is also our replication master, and we have 4 slaves: 3 of them are in sync without delay, and one of them has a 2-hour delay.

OS: Ubuntu 16.04, DB: MariaDB 10.2.9, app: PHP 7

Sometimes the MySQL thread count increases suddenly (to about 5k) without any apparent reason. By chance I found that stopping the slave on one of the first three servers with STOP SLAVE brings the thread count back to normal (around 500); I then run START SLAVE again after a minute or more.

Every time the MySQL thread count jumps above 4000 within a second I have to run STOP SLAVE; START SLAVE; on every server, and this is not a good solution!

In even worse cases, STOP SLAVE; START SLAVE; doesn't work, the thread count climbs above 10K, and I have to stop and start the master MariaDB server (I mean systemctl stop mysql && systemctl start mysql), during which time my application is obviously unreachable by users!

My questions are:

  1. What is the reason for these incidents?
  2. What is a temporary solution?
  3. Would something like MaxScale help me in this scenario or not? If yes, how would MaxScale help me?
  4. What is the best solution for my problem, and how can I trace the issue?
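
For anyone investigating similar spikes, a quick way to see where the connections pile up the next time the thread count jumps, before resorting to STOP SLAVE (a diagnostic sketch):

-- Group current connections by user, client host and command:
SELECT USER, SUBSTRING_INDEX(HOST, ':', 1) AS client, COMMAND, COUNT(*) AS cnt
FROM information_schema.PROCESSLIST
GROUP BY USER, SUBSTRING_INDEX(HOST, ':', 1), COMMAND
ORDER BY cnt DESC;

-- Current and peak connection counts since startup:
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';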

Database replication from local Postgres to live


I have software whose data, held in its local Postgres server, has to be synced to a live database, so that I can query data from that live Postgres server via an API. I am new to Postgres. After a little research I came across database replication concepts such as hot standby, streaming replication, WAL files, background processes, etc.

My question is whether it is possible to replicate my local Postgres database to the live Postgres database while also accessing or querying the live Postgres database for CRUD operations, with both databases staying in sync regardless of which database the CRUD operations are performed on.
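
For what it's worth, a minimal sketch of the built-in option in this direction is native logical replication (available from PostgreSQL 10), with the local server publishing and the live server subscribing. Note that a publication/subscription pair only replicates one way, so writes made on the live side are not sent back; all names and connection details below are placeholders:

-- On the local (publisher) database; requires wal_level = logical:
CREATE PUBLICATION app_pub FOR ALL TABLES;

-- On the live (subscriber) database, with the same table definitions already created:
CREATE SUBSCRIPTION app_sub
    CONNECTION 'host=local.example.com port=5432 dbname=appdb user=repl password=secret'
    PUBLICATION app_pub;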

I found similar questions that have already been answered:

  1. How to PUSH data between a Local DB Server to Cloud DB Server?
  2. Streaming replication Postgresql 9.3 using two different servers (can we write on the copy database?)

Adding a replica in MongoDB throws an error


I'm trying to add a node to a replica set using rs.add("developer-ViratualBox:30103") and I'm getting the following error message:

{
"ok" : 0,
"errmsg" : "Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: developer-VirtualBox:30101; the following nodes did not respond affirmatively: developer-ViratualBox:30103 failed with Failed attempt to connect to developer-ViratualBox:30103; couldn't initialize connection to host developer-ViratualBox, address is invalid",
"code" : 74
}

The node is already running and I'm connected to it using mongo shell. What could possibly be the problem?

MySQL 5.6 replication causes 'waiting for table lock'


All of a sudden, queries on the slave server stopped with the status "Waiting for table level lock".

I restarted the MySQL service and stopped replication, and the locking no longer shows up. Once I turn replication back on, I see a huge increase in queries with "Waiting for table level lock" status (SHOW FULL PROCESSLIST).

Replication is crucial for our situation and we can't keep it turned off.

What might cause this problem? Replication was running fine for the last 5 months or so.

MySQL 5.6
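
A couple of diagnostic statements that may help narrow down who is holding the table locks while replication is on (a sketch for MySQL 5.6):

-- Tables currently locked or in use, and by how many handlers:
SHOW OPEN TABLES WHERE In_use > 0;

-- Full statement text for every session, including the replication SQL thread:
SHOW FULL PROCESSLIST;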


AWS DMS Source Database Transaction Boundaries


When performing ongoing CDC replication, does AWS Database Migration Service have any way of recognizing database transaction boundaries on a source database? If so, is it available both for migration of existing data and for replication of ongoing changes, or only for replication of ongoing changes?

For example, when using Apache Kafka as a target for AWS Database Migration Service (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kafka.html), there is an option:

IncludeTransactionDetails– Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous_transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.

but I don't understand how to derive exact transaction boundaries from this information. For example, how do I get the exact number of records in the scope of a particular transaction? Based on the documentation, it looks like I can only get transaction_record_id, which is not enough.

Using Galera Cluster (Percona XtraDB Cluster, MariaDB Cluster) for setting up multi-master


Before I use one, I want to know whether it works on Windows, because I've mostly seen it used on Linux. Does Galera Cluster (Percona XtraDB Cluster, MariaDB Cluster) work on Windows?

How to Achieve Concurrent Write Transactions to MySQL Group Replication in Multi-Primary Mode


I've set up a MySQL "cluster" (four CentOS 7 servers running MySQL 5.7.19) using Group Replication in multi-primary mode, but I can't get it to accept writes from more than one thread at a time. This mode is recommended only for "advanced users", according to Oracle, so I guess that's the source of my troubles.

The group that I've set up works: I can write and read from it, it stays in sync, all good. However, we have a load test (in Java, running on Tomcat) that I'm running on the cluster, that consistently fails when launched with more than one thread. This load test runs a variety of transactions in as many threads as wanted as fast as it can towards a single node. What happens is that the transactions result in java.sql.SQLException: Plugin instructed the server to rollback the current transaction.. (This is, as far as I can gather, what is printed any time the group replication plugin has determined that some transaction must be rolled back for whatever reason). This eventually kills all but one thread, which continues happily until completion. The odd thing is that this load test is made to never create contention on any row; each thread gets its own set of rows to manipulate. Stopping the group replication plugin or running in single-primary mode fixes the issue, allowing me to run concurrent threads with write transactions.
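
One place that may show why the plugin keeps rolling transactions back is the per-member certification statistics; a diagnostic sketch (column names as in the MySQL 5.7 performance schema):

-- Conflicts detected by certification and the size of the pending queue, per member:
SELECT MEMBER_ID,
       COUNT_TRANSACTIONS_CHECKED,
       COUNT_CONFLICTS_DETECTED,
       COUNT_TRANSACTIONS_IN_QUEUE
FROM performance_schema.replication_group_member_stats;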

Only having one writer at a time would be unacceptable in production, so this is a showstopper.

I've tried all the isolation levels (including read-uncommitted). I've tried running the appliers in parallel. I've read the requirements and limitations in particular and the entire group replication dev documentation from Oracle in general. I've tried reading bad translations of Chinese open source forums... No luck.

Has anyone gotten this to work, or knows how to?

EDIT: It is possible to run more than one thread against the same server, if the transactions are timed so that they interleave. That is, more than one connection can execute transactions, but only one can execute a transaction at any given point in time, otherwise one of the transactions will fail.




EDIT: Clarifying based on kind input from Matt Lord:

"Perhaps the writes being executed by your benchmark/load test are against a table with cascading FKs?" No, the output from grep --perl-regexp "ON DELETE CASCADE|ON UPDATE CASCADE|ON DELETE SET NULL|ON UPDATE SET NULL|ON DELETE SET DEFAULT|ON UPDATE SET DEFAULT" mysqldump_gr.sql -ni (where mysqldump_gr.sql is the result of mysqldump -u root -pvisa --triggers --routines --events --all-databases > mysqldump_gr.sql) results in one huge text insert into mysql.help_topic.

"[Can you give me a] MySQL error log snippet covering the relevant time period from the node(s) you're executing writes against[?]" As weird as it sounds, this varies. Either there is no output to the error log during the test or there are lines like this one: [Note] Aborted connection 1077 to db: 'mydb' user: 'user' host: 'whereISendTransactionsFrom' (Got an error reading communication packets). I didn't write about this error message because I thought it was just a one-off the first time we tested and none of the google results had anything to do with GR, but now I did another test and here it is again...

"[Can you give me] A basic definition of the load test: schema, queries, write pattern[?] (e.g. is each benchmark/client thread being executed against a different mysqld server?)" Unfortunately that's proprietary, but I can reiterate some info from above: The test is executed against a single node (i.e. a single server). Each thread gets its own rows to manipulate.

"[Can you give me] The my.cnf being used on the mysql instances[?]" I've tried with two different ones, though with many similarities due to requirements. This is the latest one, anonymized a bit:


[mysql]
port                           = 3306
socket                         = /var/lib/mysql/mysql.sock
[mysqld]
port                           = 3306
socket                         = /var/lib/mysql/mysql.sock
transaction_isolation          = READ-UNCOMMITTED
explicit_defaults_for_timestamp= ON
user                           = mysql
default-storage-engine         = InnoDB
socket                         = /var/lib/mysql/mysql.sock
pid-file                       = /var/lib/mysql/mysql.pid
bind-address                   = 0.0.0.0
skip-host-cache
secure-file-priv               = ""
report_host                    = "realIpAddressHere"
datadir                        = /var/lib/mysql/
log-bin                        = /var/lib/mysql/mysql-bin
relay-log                      = /var/lib/mysql/relay-bin
server-id                      = 59331200
server_id                      = 59331200
auto_increment_increment       = 10
auto_increment_offset          = 1
replicate-ignore-db            = mysql
slave-skip-errors              = 1032,1062
master-info-repository         = TABLE
relay-log-info-repository      = TABLE
binlog_checksum                = NONE
gtid_mode                      = ON
enforce_gtid_consistency       = ON
log_slave_updates              = ON
log_bin                        = binlog
binlog_format                  = ROW
transaction_write_set_extraction         = XXHASH64
loose-group_replication_group_name       = "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_start_on_boot    = off
loose-group_replication_local_address    = "localAddressHere"
loose-group_replication_group_seeds      = "groupSeedsHere"
loose-group_replication_bootstrap_group  = off
loose-group_replication_single_primary_mode = OFF
loose-group_replication_enforce_update_everywhere_checks = ON
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
loose-group_replication_ip_whitelist="ipRangeHere"
slave_parallel_workers         = 1024
slave_transaction_retries      = 18446744073709551615
slave_skip_errors              = ddl_exist_errors
loose-group_replication_gtid_assignment_block_size = 1024
log-error                      = /var/lib/mysql/mysql-error.log
log-queries-not-using-indexes  = 0
slow-query-log                 = 1
slow-query-log-file            = /var/lib/mysql/mysql-slow.log
event_scheduler=ON
loose-group_replication_single_primary_mode = OFF
loose-group_replication_enforce_update_everywhere_checks = ON

We do not have a MySQL Enterprise subscription.
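
Since several settings appear twice in the file above, it may also be worth confirming which values are actually in effect at runtime; a sanity-check sketch:

-- Effective group replication mode and related settings on each node:
SHOW GLOBAL VARIABLES LIKE 'group_replication_single_primary_mode';
SHOW GLOBAL VARIABLES LIKE 'group_replication_enforce_update_everywhere_checks';
SHOW GLOBAL VARIABLES LIKE 'slave_parallel_workers';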

MySQL failover - Master to Master Replication


My company is trying to implement a MySQL failover mechanism, to achieve higher availability in our webservices tier - we commercialize a SaaS solution. To that end we have some low-end VMs scattered through different geographical locations, each containing a MySQL 5.5 server with several DBs, that for the time being are merely slave-replicating from the production server - the objective up until now was just checking the latency and general resilience of MySQL replication.

The plan, however, is to add a master-master replication environment between two servers in two separate locations, and these two instances would handle all the DB writes. The idea wouldn't necessarily imply concurrency; rather, the intention is to have a single one of the instances handling the writes and, in a downtime situation, use a DNS failover service to direct the requests to the secondary server. After the primary comes back online, the binlog generated in the meantime on the secondary would be replicated back, and the DNS failover would restore the requests to the first one.

I am not an experienced administrator, so I'm asking for your own thoughts and experiences. How wrong is this train of thought? What can obviously go wrong? Are there any much better alternatives? Bash away!

Thanks!
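
As one concrete safeguard in an active/passive master-master setup like the one described, it is common to keep auto-increment ranges disjoint on the two masters so rows inserted during a failover window cannot collide; a minimal sketch (values are illustrative):

-- On master A:
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 1;

-- On master B:
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 2;

These also need to be set in my.cnf so they survive a restart, since SET GLOBAL alone is not persistent.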

MongoDB can't add new replica set member [rsSync] SEVERE: Got signal: 6


I have a replica set and I added a new member to it. The initial sync begins for the new member and rs.status() (on the primary) shows STARTUP2 as its status. However, after long enough, a cryptic fassert error appears on the new instance.

The log dump is as follows:

2014-11-02T22:53:23.995+0000 [clientcursormon] mem (MB) res:330 virt:45842
2014-11-02T22:53:23.995+0000 [clientcursormon]  mapped (incl journal view):45038
2014-11-02T22:53:23.995+0000 [clientcursormon]  connections:27
2014-11-02T22:53:23.995+0000 [clientcursormon]  replication threads:32
2014-11-02T22:53:25.427+0000 [conn2012] end connection xx.xx.xx.xx:1201 (26 connections now open)
2014-11-02T22:53:25.433+0000 [initandlisten] connection accepted from xx.xx.xx.xx:1200 #2014 (27 connections now open)
2014-11-02T22:53:25.436+0000 [conn2014]  authenticate db: local { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
2014-11-02T22:53:26.775+0000 [initandlisten] connection accepted from xx.xx.xx.xx:1058 #2015 (28 connections now open)
2014-11-02T22:53:26.864+0000 [conn1993] end connection xx.xx.xx.xx:1059 (27 connections now open)
2014-11-02T22:53:29.090+0000 [rsSync] Socket recv() errno:110 Connection timed out xx.xx.xx.xx:27017
2014-11-02T22:53:29.096+0000 [rsSync] SocketException: remote: xx.xx.xx.xx:27017 error: 9001 socket exception [RECV_ERROR] server [168.63.252.61:27017] 
2014-11-02T22:53:29.099+0000 [rsSync] DBClientCursor::init call() failed
2014-11-02T22:53:29.307+0000 [rsSync] replSet initial sync exception: 13386 socket error for mapping query 0 attempts remaining
2014-11-02T22:53:36.113+0000 [conn2013] end connection xx.xx.xx.xx:1056 (26 connections now open)
2014-11-02T22:53:36.153+0000 [initandlisten] connection accepted from xx.xx.xx.xx:1137 #2016 (27 connections now open)
2014-11-02T22:53:36.154+0000 [conn2016]  authenticate db: local { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
2014-11-02T22:53:55.541+0000 [conn2014] end connection xx.xx.xx.xx:1200 (26 connections now open)
2014-11-02T22:53:55.578+0000 [initandlisten] connection accepted from xx.xx.xx.xx:1201 #2017 (27 connections now open)
2014-11-02T22:53:55.580+0000 [conn2017]  authenticate db: local { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
2014-11-02T22:53:56.861+0000 [conn2015]  authenticate db: admin { authenticate: 1, user: "root", nonce: "xxx", key: "xxx" }
2014-11-02T22:53:59.310+0000 [rsSync] Fatal Assertion 16233
2014-11-02T22:53:59.723+0000 [rsSync] 0x11c0e91 0x1163109 0x114576d 0xe84c1f 0xea770e 0xea7800 0xea7af8 0x1205829 0x7ff728cf8e9a 0x7ff72800b3fd 
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x11c0e91]
 /usr/bin/mongod(_ZN5mongo10logContextEPKc+0x159) [0x1163109]
 /usr/bin/mongod(_ZN5mongo13fassertFailedEi+0xcd) [0x114576d]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl17syncDoInitialSyncEv+0x6f) [0xe84c1f]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl11_syncThreadEv+0x18e) [0xea770e]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl10syncThreadEv+0x30) [0xea7800]
 /usr/bin/mongod(_ZN5mongo15startSyncThreadEv+0xa8) [0xea7af8]
 /usr/bin/mongod() [0x1205829]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7ff728cf8e9a]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7ff72800b3fd]
2014-11-02T22:53:59.723+0000 [rsSync] 

***aborting after fassert() failure


2014-11-02T22:53:59.728+0000 [rsSync] SEVERE: Got signal: 6 (Aborted)


The worst part is that when I try to restart the mongod service, the replication begins afresh, trying to resync all the files that are already there. This seems bizarre and useless.

Can someone please help me understand what is going on?

MySQL replication: slave is not getting data from master


My problem is that all the setup for MySQL replication is done, but the slave is not syncing the master's data. To help understand the situation, I am sharing the slave status output and a link below.

mysql>  SHOW SLAVE STATUS \G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.10.110
                  Master_User: slaveuser
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000033
          Read_Master_Log_Pos: 402
               Relay_Log_File: VoltyLinux-relay-bin.000046
                Relay_Log_Pos: 317
        Relay_Master_Log_File: mysql-bin.000033
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: replica
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 402
              Relay_Log_Space: 692
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 1
                  Master_UUID: f1739fcc-0d2d-11e6-a8cc-c03fd56585b5
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 60
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 
            Executed_Gtid_Set: 
                Auto_Position: 0
         Replicate_Rewrite_DB: 
                 Channel_Name: 
           Master_TLS_Version: 
1 row in set (0.00 sec)

https://stackoverflow.com/q/36929641/2644613
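
The output above shows both threads running with no errors, so a useful next step is to confirm on the master that writes are actually reaching the binary log and that the positions are advancing; a diagnostic sketch (run on the master):

-- Compare File/Position with Master_Log_File / Read_Master_Log_Pos above:
SHOW MASTER STATUS;

-- Inspect the events being written to the current binlog file:
SHOW BINLOG EVENTS IN 'mysql-bin.000033' LIMIT 20;

-- Note: with Replicate_Do_DB set (replica above), statement-format events are filtered
-- by the default database, i.e. the one selected with USE on the master.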

Keepalived vIP as Galera wsrep_cluster_address


I have a MariaDB Galera cluster. If some nodes fail, I cannot blindly restart them; I have to determine a good wsrep_cluster_address first.

If I keep a keepalived virtual IP on one of the healthy nodes, can I use this IP as the wsrep_cluster_address on the other nodes, so that in case of a node failure the joining node would always have a correct wsrep_cluster_address? Or are there other solutions that enable automatic rejoin?

I feel it should be somehow possible to keep the cluster up and automatically rejoin nodes as long as there is at least 1 healthy node (or Primary Component?) up.

(Note: I am aware of the answer in Galera cluster without having to specify all hosts on wsrep_cluster_address, but multicast is unfortunately not an option.)
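
Whatever address scheme ends up being used, these are the values a joining or healthy node reports, which can help verify the behaviour during failure tests (a diagnostic sketch):

-- On a running node: the configured cluster address and the current cluster view
SHOW GLOBAL VARIABLES LIKE 'wsrep_cluster_address';
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';        -- Primary / non-Primary
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';   -- Synced, Donor, Joining, ...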


S3 Cross Region Replication - Reverse Replication


I currently have replication configured such that my S3 documents on us-east-1 are replicated to a bucket on us-west-2. In light of today's (gasp) AWS outage, I considered failing over to us-west-2 (which appears to be online at the moment). So, I have several questions about this:

  1. Would documents uploaded to us-west-2 be replicated back to us-east-1 once services are restored? I suspect the answer is no since I have not found any documentation regarding bi-directional replication.
  2. If bi-directional replication does not happen and I decide to failover to us-west-2, what's the process for recovering once us-east-1 comes back online? I assume this would require writing a script to copy all missing documents back to us-east-1. Any other ideas or suggestions?

Setting up replication with MariaDB 10.3.4 docker images


I'm attempting to set up replication between two docker containers, both running the stock MariaDB 10.3.4 images (which are the latest versions as of right now). When I start up the containers, I get error code 1062 (Duplicate key) on table mysql.user for key localhost-root. The slave is clearly trying to replicate the mysql.user table from the master and failing because they both have root@localhost users. This does not seem to be Docker-related - I would imagine the same issue will arise when setting up any master/slave pair from scratch.

How can I set up a slave to replicate everything? I'm starting from scratch, so I want the slave to be a (more-or-less) perfect copy of the master.

Here is the set up:

I'm running the containers from a docker-compose.yml file:

version: '2'

volumes:
    dbdata:
        external: false

services:

    # the MariaDB database MASTER container
    #
    database:
        image: mariadb:10.3.4
        env_file:
            - ./env/.env.database
        volumes:
            - dbdata:/data/db
            - /etc/localtime:/etc/localtime:ro
            # mount the configuration files in the appropriate place
            #
            - ./database/master/etc/mysql/conf.d:/etc/mysql/conf.d:ro
            # mount the SQL files for initialization in a place where the
            # database container will look for it on initialization; see
            # "Initializing a fresh instance" at
            # https://hub.docker.com/_/mariadb/ for details
            #
            - ./database/master/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d:ro
        ports:
            - "3306:3306"

    # the MariaDB database SLAVE container
    #
    slave:
        image: mariadb:10.3.4
        # env_file:
        #     - ./env/.env.database
        environment:
            - MYSQL_ALLOW_EMPTY_PASSWORD=yes
        volumes:
            - /etc/localtime:/etc/localtime:ro
            # mount the configuration files in the appropriate place
            #
            - ./database/slave/etc/mysql/conf.d:/etc/mysql/conf.d:ro
            # mount the SQL files for initialization in a place where the
            # database container will look for it on initialization; see
            # "Initializing a fresh instance" at
            # https://hub.docker.com/_/mariadb/ for details
            #
            - ./database/slave/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d:ro
        depends_on:
            - database

The ./env/.env.database file simply exposes the environment variables that the MariaDB docker image requires:

# the root user password
#
MYSQL_ROOT_PASSWORD=password

# the database to use
#
MYSQL_DATABASE=mydatabase

Note that this is my development environment, so I'm using a dumb password.

The master & slave configuration files are mounted from my local host.

000-replication-master.sql:

GRANT REPLICATION SLAVE ON *.* TO 'replicant'@'%' IDENTIFIED BY 'password';

replication.cfg for the master:

[mariadb]
    log-bin
    server_id=1
    log-basename=master1

    # force binlog format to ROW to avoid issues with
    # replicate_do_db
    binlog_format=ROW

000-replication-slave.sql:

-- configure the connection to the master
--
CHANGE MASTER TO
    MASTER_HOST='database',
    MASTER_USER='replicant',
    MASTER_PASSWORD='password',
    MASTER_PORT=3306,
    MASTER_USE_GTID=slave_pos,
    MASTER_CONNECT_RETRY=10;

-- start the slave
--
START SLAVE;

replication.cnf for the slave:

[mariadb]
    server_id=1000
    relay-log = /var/log/mysql/mysql-relay-bin.log
    log_bin = /var/log/mysql/mysql-bin.log

The error I'm seeing on the slave is this:

Could not execute Write_rows_v1 event on table mysql.user; Duplicate entry 'localhost-root' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; 

The issue is similar to this question, but I'm attempting to use the stock MariaDB images (instead of a custom Docker image).

I've tried a number of different things:

  1. I set it up with replicate_do_db = mydatabase on the slave and it did work, but given the concerns with slave filtering, I'd prefer not to use it. I think it's set up correctly but I'd rather not take the chance.

  2. I've tried deleting the offending row from the mysql.user table (both with a DELETE statement and, when that didn't work, with TRUNCATE) on the slave before the CHANGE MASTER statement, but this does not work.

I should mention that I've searched for an answer to this problem, but all the tutorials online suggest getting the binary log position on the master and manually updating the slave position before starting replication. I'm looking for a solution that will allow me to set up the slave immediately after the master is created and start syncing from scratch.

So, in short, the question is how do I set up a master & slave to replicate everything, starting from a brand-new installation of MariaDB on both master and slave?
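
For completeness, one variation I have not fully tested: keep everything replicating but exclude only the mysql system schema with a wildcard ignore filter, set in 000-replication-slave.sql before starting the slave (this relies on MariaDB's dynamic replicate_wild_ignore_table variable, and the usual caveats about replication filtering still apply):

-- In 000-replication-slave.sql, after CHANGE MASTER TO ... and before START SLAVE:
SET GLOBAL replicate_wild_ignore_table = 'mysql.%';

START SLAVE;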

MongoDB restore “local” db


I need to restore a replica set. Unfortunately, my lack of experience as an administrator (I'm not one) left me with the only option of restoring the replica set using the data of the primary (the only member still alive) as a "seed". At the moment I'm copying the files from the primary to the secondary PC using rsync (there are 600 GB to copy). I have three questions regarding the restore:

1. Once the files are copied, should I use this data as the dbpath of the secondary?

2. Is the "local" db of the primary copied? This may be a silly question, but in https://docs.mongodb.com/manual/tutorial/resync-replica-set-member/#replica-set-resync-by-copying they say to be sure that this db is copied along with the data files, and I'm not sure about it.

3. How long will it take to "synchronize" with the primary once I start the mongod instance on the secondary PC?

Why has deadlock behaviour changed between MariaDB 10.1.22 and 10.2.14?


Since upgrading MariaDB 10.1.22 to 10.2.14, our MariaDB slaves have been encountering deadlocks that are not resolved within 600 seconds, hence the classic semaphore decision to crash the server. The server has crashed 3 times. The extremely high volumes have not changed; only MariaDB performance has improved with the upgrade.

Note that we have INSERT ... ON DUPLICATE KEY UPDATE statements that process very high volumes on our master. The deadlocks on the same queries occur on the slaves, so it has to be related to slave parallel replication locking. Reducing slave_parallel_workers has mitigated some of this.

In summary, I'm looking to understand what has changed in MariaDB 10.2.x regarding threads, timeouts, etc., to zoom in on this issue, and why MariaDB is unable to detect the deadlock and roll back one of the offending transactions.

I ACKNOWLEDGE all deadlocks should be addressed, but as stated above they are not occurring on the master, only on the slaves, for the same statements.

We had the deadlocks prior to the upgrade, but MariaDB always handled them with no problems.

2018-06-11 10:32:02 139519224362752 [Note] InnoDB: A semaphore wait: --Thread 139518736328448 has waited at read0read.cc line 579 for 910.00 seconds the semaphore: Mutex at 0x7f2b63dc13a0, Mutex TRX_SYS created trx0sys.cc:554, lock var 2

2018-06-11 10:32:02 139519224362752 [Note] InnoDB: A semaphore wait: --Thread 139518749968128 has waited at dict0dict.cc line 1160 for 910.00 seconds the semaphore: Mutex at 0x7f2b63dcb500, Mutex DICT_SYS created dict0dict.cc:1096, lock var 2

2018-06-11 10:32:02 139519224362752 [Note] InnoDB: A semaphore wait: --Thread 139518750574336 has waited at dict0dict.cc line 1160 for 890.00 seconds the semaphore: Mutex at 0x7f2b63dcb500, Mutex DICT_SYS created dict0dict.cc:1096, lock var 2

InnoDB: ###### Starts InnoDB Monitor for 30 secs to print diagnostic info:
InnoDB: Pending reads 2, writes 0
InnoDB: ###### Diagnostic info printed to the standard error stream
2018-06-11 10:32:32 139519224362752 [ERROR] [FATAL] InnoDB: Semaphore wait has lasted > 600 seconds. We intentionally crash the server because it appears to be hung.
180611 10:32:32 [ERROR] mysqld got signal 6 ;
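
A few diagnostic statements that expose the parallel-replication and lock-wait settings involved here (a sketch; run on an affected slave):

-- Parallel replication configuration:
SHOW GLOBAL VARIABLES LIKE 'slave_parallel_%';

-- InnoDB lock-wait and deadlock-detection settings:
SHOW GLOBAL VARIABLES LIKE 'innodb_lock_wait_timeout';
SHOW GLOBAL VARIABLES LIKE 'innodb_deadlock_detect';

-- Latest detected deadlock and current semaphore waits:
SHOW ENGINE INNODB STATUS\G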

Replication fails with "Transaction log for database is full due to LOG_BACKUP"


Replication is failing with the subject error.

The log_reuse_wait_desc states LOG_BACKUP.

I have performed both database and log backups.

After changing the recovery model to SIMPLE, performing a backup, and then changing the recovery model back to FULL, log_reuse_wait_desc goes to NOTHING.

As soon as I start the replication jobs, they fail with the subject error.

I shrank the log file, to no avail. Any suggestions to resolve this?
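
A small diagnostic sketch to confirm what is blocking log truncation at the moment the jobs fail (YourDatabase is a placeholder for the published database name):

-- What the engine is waiting on before it can truncate the log:
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourDatabase';

-- If replication itself is holding the log, log_reuse_wait_desc shows REPLICATION
-- rather than LOG_BACKUP.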
